[jira] [Comment Edited] (YARN-7863) Modify placement constraints to support node attributes

2018-01-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346391#comment-16346391
 ] 

Arun Suresh edited comment on YARN-7863 at 1/31/18 7:58 AM:


[~sunilg], There are actually two aspects to this:
# Supporting node attributes as a Target Expression: e.g., place containers 
with allocation tags on nodes with the attribute java_version = 1.8.
# Supporting node attributes as a Scope: e.g., place 5 containers, no more 
than one per failure domain, assuming all nodes in the cluster have a node 
attribute called "failure_domain" with values 1 - 10.

1) can be tackled here, I guess, but 2) needs some modifications to the 
AllocationTagsManager and should be handled in YARN-7858. A rough sketch of 
both shapes follows below.

Thoughts [~kkaranasos] / [~leftnoteasy] ? 
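To make the two shapes concrete, here is a rough sketch against the 
PlacementConstraints builder API. The attribute-target call is hypothetical 
(an attribute target is exactly what this ticket would add), and using an 
attribute as the cardinality scope is what YARN-7858 would have to support:

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

// 1) Node attribute as a Target Expression (hypothetical call): place
//    containers only on nodes where java_version = 1.8.
PlacementConstraint target = PlacementConstraints.build(
    PlacementConstraints.targetIn(PlacementConstraints.NODE,
        PlacementConstraints.PlacementTargets
            .nodeAttribute("java_version", "1.8")));

// 2) Node attribute as a Scope (hypothetical use of the scope argument):
//    at most one container tagged "my-app" per failure_domain value.
PlacementConstraint scope = PlacementConstraints.build(
    PlacementConstraints.maxCardinality("failure_domain", 1, "my-app"));
{code}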


was (Author: asuresh):
[~sunilg], There are actually two aspects to this:
# Supporting node attributes as a Target Expression: e.g., place containers 
with allocation tags on nodes with the attribute java_version = 1.8.
# Supporting a node attribute as a Scope: e.g., place 5 containers, no more 
than one per failure domain, assuming all nodes in the cluster have a node 
attribute called "failure_domain" with values 1 - 10.

1) can be tackled here, I guess, but 2) needs some modifications to the 
AllocationTagsManager and should be handled in YARN-7858.

Thoughts [~kkaranasos] / [~leftnoteasy] ? 

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira tracks the work to *modify existing placement constraints to 
> support node attributes.*






[jira] [Commented] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346390#comment-16346390
 ] 

genericqa commented on YARN-7861:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7861 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908514/YARN-7861.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux bb19fd87edd5 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5206b2c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 440 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19541/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Logs page shows duplicated containers with ATS
> 
>
> Key: YARN-7861
> URL: https://issues.apache.org/jira/browse/YARN-7861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7861.001.patch
>
>
> There were a couple of issues:
>  # duplicated containers listed from the RM and ATS in the log container list
>  # the log page has to be cleared every time the same page is accessed






[jira] [Commented] (YARN-7863) Modify placement constraints to support node attributes

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346382#comment-16346382
 ] 

Sunil G commented on YARN-7863:
---

[~cheersyang] I thought of creating this ticket under YARN-3409 itself, as the 
placement constraint work is close to merge. Thoughts?

> Modify placement constraints to support node attributes
> ---
>
> Key: YARN-7863
> URL: https://issues.apache.org/jira/browse/YARN-7863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
>
> This Jira tracks the work to *modify existing placement constraints to 
> support node attributes.*






[jira] [Created] (YARN-7863) Modify placement constraints to support node attributes

2018-01-30 Thread Sunil G (JIRA)
Sunil G created YARN-7863:
-

 Summary: Modify placement constraints to support node attributes
 Key: YARN-7863
 URL: https://issues.apache.org/jira/browse/YARN-7863
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil G
Assignee: Sunil G


This Jira tracks the work to *modify existing placement constraints to support 
node attributes.*






[jira] [Created] (YARN-7862) YARN native service REST endpoint needs user.name as query param

2018-01-30 Thread Sunil G (JIRA)
Sunil G created YARN-7862:
-

 Summary: YARN native service REST endpoint needs user.name as 
query param
 Key: YARN-7862
 URL: https://issues.apache.org/jira/browse/YARN-7862
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Reporter: Sunil G


While accessing the below YARN REST endpoint with the POST method,
{code:java}
http://rm_ip:8088/app/v1/services{code}
the following error comes back in a non-secure cluster.
{noformat}
{
"diagnostics": "Null user"
}{noformat}
When *user.name* is provided as a query param with the value *dr.who*, we can 
see that YARN started the service with the proxy user, not as dr.who.
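For reference, this is the request shape under discussion (host and port are 
illustrative, as in the endpoint above):
{noformat}
POST http://rm_ip:8088/app/v1/services?user.name=dr.who
{noformat}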

In a non-secure cluster, the native service should ideally take the user from 
the remote UGI.

In a secure cluster, it is better to derive the user from the Kerberized shell.

 

cc/  [~jianhe] [~eyang]

 






[jira] [Commented] (YARN-7827) Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346370#comment-16346370
 ] 

Sunil G commented on YARN-7827:
---

The patch is cancelled, as it is not correct to pass user.name from the UI. 
However, the native service needs user.name in all types of cases 
(secure/non-secure/single user, etc.). Once that is fixed, this Jira will be 
resumed.

> Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404
> -
>
> Key: YARN-7827
> URL: https://issues.apache.org/jira/browse/YARN-7827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7827.001.patch
>
>
> Steps:
> 1) Enable Ats v2
> 2) Start Httpd Yarn service
> 3) Go to UI2 attempts page for yarn service 
> 4) Click on setting icon
> 5) Click on stop service
> 6) This action will pop up a box to confirm stop. click on "Yes"
> Expected behavior:
> Yarn service should be stopped
> Actual behavior:
> Yarn UI is not notifying on whether Yarn service is stopped or not.
> On checking network stack trace, the PUT request failed with HTTP error 404
> {code}
> Sorry, got error 404
> Please consult RFC 2616 for meanings of the error code.
> Error Details
> org.apache.hadoop.yarn.webapp.WebAppException: /v1/services/httpd-hrt-qa-n: 
> controller for v1 not found
>   at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:247)
>   at org.apache.hadoop.yarn.webapp.Router.resolve(Router.java:155)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:143)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>   at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>   at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>   at 
> com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> 

[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346367#comment-16346367
 ] 

Weiwei Yang commented on YARN-7778:
---

The {{TestPlacementProcessor#testRePlacementAfterSchedulerRejection}} failure 
is not related; I have seen it several times in past Jenkins reports, so it 
looks like a flaky one. The other failure is tracked via YARN-7860.

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf, 
> YARN-7778-YARN-7812.001.patch, YARN-7778-YARN-7812.002.patch
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.
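As a rough illustration of that rule against the PlacementConstraints builder 
API (a sketch only; the merge decision itself is what this Jira implements):

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.build;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.maxCardinality;

// Application-level constraint: at most 5 "hbase" containers per rack.
PlacementConstraint appLevel = build(maxCardinality(RACK, 5, "hbase"));

// A scheduling request may tighten this (3 <= 5), which the merge accepts...
PlacementConstraint tighter = build(maxCardinality(RACK, 3, "hbase"));

// ...whereas relaxing it (7 > 5) should make the allocation fail.
PlacementConstraint relaxed = build(maxCardinality(RACK, 7, "hbase"));
{code}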






[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346365#comment-16346365
 ] 

genericqa commented on YARN-7778:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7812 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7812 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908499/YARN-7778-YARN-7812.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d889e313a343 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7812 / e6d2d26 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19540/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19540/testReport/ |
| Max. process+thread count | 865 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346362#comment-16346362
 ] 

Weiwei Yang commented on YARN-7778:
---

[~kkaranasos] could you please help to review the patch? Thanks

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf, 
> YARN-7778-YARN-7812.001.patch, YARN-7778-YARN-7812.002.patch
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.






[jira] [Commented] (YARN-7675) The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API

2018-01-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346363#comment-16346363
 ] 

Gergely Novák commented on YARN-7675:
-

[~sunilg] Thank you for the review. Can you please open the other ticket with 
the exact description of the color coding issue you discovered?

> The new UI won't load for pre 2.8 Hadoop versions because 
> queueCapacitiesByPartition is missing from the scheduler API
> --
>
> Key: YARN-7675
> URL: https://issues.apache.org/jira/browse/YARN-7675
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-7675.001.patch
>
>
> If we connect the new YARN UI to any Hadoop version older than 2.8, it won't 
> load. The console shows this trace:
> {noformat}
> TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined
> at Class.normalizeSingleResponse (yarn-ui.js:13903)
> at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811)
> at Class.handleQueue (yarn-ui.js:13928)
> at Class.normalizeArrayResponse (yarn-ui.js:13952)
> at Class.normalizeQueryResponse (vendor.js:101566)
> at Class.normalizeResponse (vendor.js:101468)
> at 
> ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
> (vendor.js:95345)
> at vendor.js:95672
> at Backburner.run (vendor.js:10426)
> {noformat}






[jira] [Commented] (YARN-7832) Logs page does not work for Running applications

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346358#comment-16346358
 ] 

Sunil G commented on YARN-7832:
---

YARN-7861 has been raised to handle the issues I mentioned. As the changes in 
the patch are not related to the original Jira, it is better to keep them 
separate.

 

> Logs page does not work for Running applications
> 
>
> Key: YARN-7832
> URL: https://issues.apache.org/jira/browse/YARN-7832
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: Screen Shot 2018-01-26 at 3.28.40 PM.png, 
> YARN-7832.001.patch
>
>
> Scenario
>  * Run a YARN service application
>  * When the application is Running, go to the log page
>  * Select an AttemptId and a Container Id
> Logs are not shown in the UI. It complains "No log data available!"
>  
> Here the 
> [http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
>  API fails with a 500 Internal Server Error.
> {"exception":"WebApplicationException","message":"java.io.IOException: 
> ","javaClassName":"javax.ws.rs.WebApplicationException"}
> {code:java}
> GET 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
>  500 (Internal Server Error)
> (anonymous) @ VM779:1
> send @ vendor.js:572
> ajax @ vendor.js:548
> (anonymous) @ vendor.js:5119
> initializePromise @ vendor.js:2941
> Promise @ vendor.js:3005
> ajax @ vendor.js:5117
> ajax @ yarn-ui.js:1
> superWrapper @ vendor.js:1591
> query @ vendor.js:5112
> ember$data$lib$system$store$finders$$_query @ vendor.js:5177
> query @ vendor.js:5334
> fetchLogFilesForContainerId @ yarn-ui.js:132
> showLogFilesForContainerId @ yarn-ui.js:126
> run @ vendor.js:648
> join @ vendor.js:648
> run.join @ vendor.js:1510
> closureAction @ vendor.js:1865
> trigger @ vendor.js:302
> (anonymous) @ vendor.js:339
> each @ vendor.js:61
> each @ vendor.js:51
> trigger @ vendor.js:339
> d.select @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> e.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> dispatch @ vendor.js:306
> elemData.handle @ vendor.js:281{code}






[jira] [Updated] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7861:
--
Description: 
There were a couple of issues:
 # duplicated containers listed from the RM and ATS in the log container list
 # the log page has to be cleared every time the same page is accessed

> [UI2] Logs page shows duplicated containers with ATS
> 
>
> Key: YARN-7861
> URL: https://issues.apache.org/jira/browse/YARN-7861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
> Environment: There were a couple of issues:
>  # duplicated containers listed from the RM and ATS in the log container list
>  # the log page has to be cleared every time the same page is accessed
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7861.001.patch
>
>
> There were a couple of issues:
>  # duplicated containers listed from the RM and ATS in the log container list
>  # the log page has to be cleared every time the same page is accessed






[jira] [Updated] (YARN-7832) Logs page does not work for Running applications

2018-01-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7832:
--
Summary: Logs page does not work for Running applications  (was: Logs page 
is not getting rendered correctly)

> Logs page does not work for Running applications
> 
>
> Key: YARN-7832
> URL: https://issues.apache.org/jira/browse/YARN-7832
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: Screen Shot 2018-01-26 at 3.28.40 PM.png, 
> YARN-7832.001.patch
>
>
> Scenario
>  * Run a YARN service application
>  * When the application is Running, go to the log page
>  * Select an AttemptId and a Container Id
> Logs are not shown in the UI. It complains "No log data available!"
>  
> Here the 
> [http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
>  API fails with a 500 Internal Server Error.
> {"exception":"WebApplicationException","message":"java.io.IOException: 
> ","javaClassName":"javax.ws.rs.WebApplicationException"}
> {code:java}
> GET 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
>  500 (Internal Server Error)
> (anonymous) @ VM779:1
> send @ vendor.js:572
> ajax @ vendor.js:548
> (anonymous) @ vendor.js:5119
> initializePromise @ vendor.js:2941
> Promise @ vendor.js:3005
> ajax @ vendor.js:5117
> ajax @ yarn-ui.js:1
> superWrapper @ vendor.js:1591
> query @ vendor.js:5112
> ember$data$lib$system$store$finders$$_query @ vendor.js:5177
> query @ vendor.js:5334
> fetchLogFilesForContainerId @ yarn-ui.js:132
> showLogFilesForContainerId @ yarn-ui.js:126
> run @ vendor.js:648
> join @ vendor.js:648
> run.join @ vendor.js:1510
> closureAction @ vendor.js:1865
> trigger @ vendor.js:302
> (anonymous) @ vendor.js:339
> each @ vendor.js:61
> each @ vendor.js:51
> trigger @ vendor.js:339
> d.select @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> e.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> dispatch @ vendor.js:306
> elemData.handle @ vendor.js:281{code}






[jira] [Updated] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7861:
--
Attachment: YARN-7861.001.patch

> [UI2] Logs page shows duplicated containers with ATS
> 
>
> Key: YARN-7861
> URL: https://issues.apache.org/jira/browse/YARN-7861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7861.001.patch
>
>
> There were a couple of issues:
>  # duplicated containers listed from the RM and ATS in the log container list
>  # the log page has to be cleared every time the same page is accessed






[jira] [Commented] (YARN-7860) Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346349#comment-16346349
 ] 

Weiwei Yang commented on YARN-7860:
---

Hi [~sunilg]

I haven't started a patch; I just assigned this to you. I'll help review, 
thanks!

> Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning
> 
>
> Key: YARN-7860
> URL: https://issues.apache.org/jira/browse/YARN-7860
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> {{TestRMWebServiceAppsNodelabel#testAppsRunning}} is failing since YARN-7817.






[jira] [Updated] (YARN-7832) Logs page is not getting rendered correctly

2018-01-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7832:
--
Summary: Logs page is not getting rendered correctly  (was: Logs page does 
not work for Running applications)

> Logs page is not getting rendered correctly
> ---
>
> Key: YARN-7832
> URL: https://issues.apache.org/jira/browse/YARN-7832
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: Screen Shot 2018-01-26 at 3.28.40 PM.png, 
> YARN-7832.001.patch
>
>
> Scenario
>  * Run a YARN service application
>  * When the application is Running, go to the log page
>  * Select an AttemptId and a Container Id
> Logs are not shown in the UI. It complains "No log data available!"
>  
> Here the 
> [http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
>  API fails with a 500 Internal Server Error.
> {"exception":"WebApplicationException","message":"java.io.IOException: 
> ","javaClassName":"javax.ws.rs.WebApplicationException"}
> {code:java}
> GET 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
>  500 (Internal Server Error)
> (anonymous) @ VM779:1
> send @ vendor.js:572
> ajax @ vendor.js:548
> (anonymous) @ vendor.js:5119
> initializePromise @ vendor.js:2941
> Promise @ vendor.js:3005
> ajax @ vendor.js:5117
> ajax @ yarn-ui.js:1
> superWrapper @ vendor.js:1591
> query @ vendor.js:5112
> ember$data$lib$system$store$finders$$_query @ vendor.js:5177
> query @ vendor.js:5334
> fetchLogFilesForContainerId @ yarn-ui.js:132
> showLogFilesForContainerId @ yarn-ui.js:126
> run @ vendor.js:648
> join @ vendor.js:648
> run.join @ vendor.js:1510
> closureAction @ vendor.js:1865
> trigger @ vendor.js:302
> (anonymous) @ vendor.js:339
> each @ vendor.js:61
> each @ vendor.js:51
> trigger @ vendor.js:339
> d.select @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> e.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> dispatch @ vendor.js:306
> elemData.handle @ vendor.js:281{code}






[jira] [Assigned] (YARN-7860) Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-7860:
-

Assignee: Sunil G  (was: Weiwei Yang)

> Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning
> 
>
> Key: YARN-7860
> URL: https://issues.apache.org/jira/browse/YARN-7860
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Sunil G
>Priority: Major
>
> {{TestRMWebServiceAppsNodelabel#testAppsRunning}} is failing since YARN-7817.






[jira] [Updated] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7861:
--
Environment: (was: There were a couple of issues:
 # duplicated containers listed from the RM and ATS in the log container list
 # the log page has to be cleared every time the same page is accessed)

> [UI2] Logs page shows duplicated containers with ATS
> 
>
> Key: YARN-7861
> URL: https://issues.apache.org/jira/browse/YARN-7861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7861.001.patch
>
>
> There were a couple of issues:
>  # duplicated containers listed from the RM and ATS in the log container list
>  # the log page has to be cleared every time the same page is accessed






[jira] [Created] (YARN-7861) [UI2] Logs page shows duplicated containers with ATS

2018-01-30 Thread Sunil G (JIRA)
Sunil G created YARN-7861:
-

 Summary: [UI2] Logs page shows duplicated containers with ATS
 Key: YARN-7861
 URL: https://issues.apache.org/jira/browse/YARN-7861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
 Environment: There were a couple of issues:
 # duplicated containers listed from the RM and ATS in the log container list
 # the log page has to be cleared every time the same page is accessed
Reporter: Sunil G
Assignee: Sunil G









[jira] [Commented] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346323#comment-16346323
 ] 

genericqa commented on YARN-7739:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 11 new + 71 unchanged - 1 fixed = 82 total (was 72) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908493/YARN-7339.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 112ce50a5745 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5206b2c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19539/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-01-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7859:
---
Summary: New feature: add queue scheduling deadLine in fairScheduler.  
(was: New further: add queue scheduling deadLine in fairScheduler.)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> As everyone knows, in FairScheduler the phenomenon of queue scheduling 
> starvation often occurs when the number of cluster jobs is large: the apps 
> in one or more queues stay pending. So I have thought of a way to solve this 
> problem: add a queue scheduling deadline to FairScheduler. When a queue has 
> not been scheduled by FairScheduler within a specified time, we schedule it 
> mandatorily (a rough sketch follows below).
> Currently the community's way of solving queue scheduling starvation is to 
> preempt containers, but that may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...
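A minimal sketch of the proposed deadline check (illustrative only; the 
deadline configuration, the per-queue last-scheduled bookkeeping, and the 
method name are all hypothetical, not taken from the attached patch):

{code:java}
// Hypothetical check in FairScheduler's node-update path: any leaf queue
// that has waited longer than the configured deadline gets a mandatory
// assignment attempt, bypassing the normal fair-share ordering.
private void enforceQueueDeadline(FSSchedulerNode node, long deadlineMs,
    Map<String, Long> lastScheduledTime) {
  long now = System.currentTimeMillis();
  for (FSLeafQueue queue : queueMgr.getLeafQueues()) {
    Long last = lastScheduledTime.get(queue.getName());
    if (last != null && now - last > deadlineMs) {
      // Force-schedule the starved queue on this heartbeat.
      queue.assignContainer(node);
      lastScheduledTime.put(queue.getName(), now);
    }
  }
}
{code}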






[jira] [Commented] (YARN-7860) Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346289#comment-16346289
 ] 

Sunil G commented on YARN-7860:
---

Sorry for this. Somehow I missed this failure.

We need to improve the test case to NOT do a string comparison for validating 
two resource objects. We could write a helper for this (one possible shape is 
sketched below). Please let me know if you have started on this; otherwise 
I'll share a patch. [~cheersyang]
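A sketch of such a helper, assuming we compare via 
{{Resource#getResources()}} rather than {{toString()}} (illustrative, not the 
eventual patch):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

// Compare two Resource objects type by type instead of comparing their
// string representations.
static void assertResourceEquals(Resource expected, Resource actual) {
  ResourceInformation[] exp = expected.getResources();
  ResourceInformation[] act = actual.getResources();
  assertEquals("number of resource types", exp.length, act.length);
  for (int i = 0; i < exp.length; i++) {
    assertEquals(exp[i].getName(), act[i].getName());
    assertEquals(exp[i].getUnits(), act[i].getUnits());
    assertEquals(exp[i].getValue(), act[i].getValue());
  }
}
{code}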

> Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning
> 
>
> Key: YARN-7860
> URL: https://issues.apache.org/jira/browse/YARN-7860
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> {{TestRMWebServiceAppsNodelabel#testAppsRunning}} is failing since YARN-7817.






[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346287#comment-16346287
 ] 

Sunil G commented on YARN-7817:
---

Thanks [~cheersyang]. My bad, I'll take care of this. YARN-7860 has been 
created for this, correct; we will discuss over there.

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>







[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-01-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346284#comment-16346284
 ] 

Sunil G commented on YARN-7773:
---

cc/ [~subru]

> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. The 
> reason, I found, was that the field used to create the 
> applicationsHomeSubCluster table was 'subClusterId' while the stored 
> procedure used 'homeSubCluster'. I fixed this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013)
>  at 
> com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104)
>  at 
> com.mysql.jdbc.CallableStatement.executeLargeUpdate(CallableStatement.java:2418)
>  at com.mysql.jdbc.CallableStatement.executeUpdate(CallableStatement.java:887)
>  at 
> com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
>  at 
> com.zaxxer.hikari.pool.HikariProxyCallableStatement.executeUpdate(HikariProxyCallableStatement.java)
>  at 
> org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore.addApplicationHomeSubCluster(SQLFederationStateStore.java:547)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy31.addApplicationHomeSubCluster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:345)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.JDFederationClientInterceptor.submitApplication(JDFederationClientInterceptor.java:334)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:196)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2072)
>  at 

[jira] [Created] (YARN-7860) Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning

2018-01-30 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7860:
-

 Summary: Fix UT failure 
TestRMWebServiceAppsNodelabel#testAppsRunning
 Key: YARN-7860
 URL: https://issues.apache.org/jira/browse/YARN-7860
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.1.0
Reporter: Weiwei Yang
Assignee: Weiwei Yang


{{TestRMWebServiceAppsNodelabel#testAppsRunning}} is failing since YARN-7817.






[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346276#comment-16346276
 ] 

Weiwei Yang commented on YARN-7778:
---

Fixed the UT failure in {{TestSingleConstraintAppPlacementAllocator}}; the 
other one is caused by YARN-7817, and I will open a JIRA to track it.

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf, 
> YARN-7778-YARN-7812.001.patch, YARN-7778-YARN-7812.002.patch
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.






[jira] [Updated] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7778:
--
Attachment: YARN-7778-YARN-7812.002.patch

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf, 
> YARN-7778-YARN-7812.001.patch, YARN-7778-YARN-7812.002.patch
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.






[jira] [Comment Edited] (YARN-4761) NMs reconnecting with changed capabilities can lead to wrong cluster resource calculations on fair scheduler

2018-01-30 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346254#comment-16346254
 ] 

Sangjin Lee edited comment on YARN-4761 at 1/31/18 5:12 AM:


+ [~templedf] [~yufeigu]

I'm not familiar enough with that piece of code. Daniel, Yufei, you guys know 
more about that code than I do. Thoughts?


was (Author: sjlee0):
+ [~templedf] [~yufeigu]

> NMs reconnecting with changed capabilities can lead to wrong cluster resource 
> calculations on fair scheduler
> 
>
> Key: YARN-4761
> URL: https://issues.apache.org/jira/browse/YARN-4761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Major
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: YARN-4761.01.patch, YARN-4761.02.patch
>
>
> YARN-3802 uncovered an issue with the scheduler where the resource 
> calculation can be incorrect due to async event handling. It was subsequently 
> fixed by YARN-4344, but it was never fixed for the fair scheduler.






[jira] [Commented] (YARN-4761) NMs reconnecting with changed capabilities can lead to wrong cluster resource calculations on fair scheduler

2018-01-30 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346254#comment-16346254
 ] 

Sangjin Lee commented on YARN-4761:
---

+ [~templedf] [~yufeigu]

> NMs reconnecting with changed capabilities can lead to wrong cluster resource 
> calculations on fair scheduler
> 
>
> Key: YARN-4761
> URL: https://issues.apache.org/jira/browse/YARN-4761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Major
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: YARN-4761.01.patch, YARN-4761.02.patch
>
>
> YARN-3802 uncovered an issue with the scheduler where the resource 
> calculation can be incorrect due to async event handling. It was subsequently 
> fixed by YARN-4344, but it was never fixed for the fair scheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346252#comment-16346252
 ] 

Weiwei Yang commented on YARN-7817:
---

Seems this caused a UT failure in {{TestRMWebServiceAppsNodelabel}}?

> Add Resource reference to RM's NodeInfo object so REST API can get non 
> memory/vcore resource usages.
> 
>
> Key: YARN-7817
> URL: https://issues.apache.org/jira/browse/YARN-7817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: Screen Shot 2018-01-25 at 11.59.31 PM.png, 
> YARN-7817.001.patch, YARN-7817.002.patch, YARN-7817.003.patch, 
> YARN-7817.004.patch, YARN-7817.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346251#comment-16346251
 ] 

genericqa commented on YARN-7773:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906659/YARN-7773.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux cefb7addf34b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5206b2c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19538/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. I found 
> the cause: the column used to create the applicationsHomeSubCluster table was 
> 'subClusterId', while the stored procedure used 'homeSubCluster'. I fixed 
> this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> 

[jira] [Commented] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-01-30 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346250#comment-16346250
 ] 

Wangda Tan commented on YARN-7739:
--

Thanks [~sunilg]/[~templedf] for the review: 
{quote}It would be nice to have the test not hard-coded to CS
{quote}
I just updated the test to cover both, but not as a parameterized unit test; 
IIRC, a parameterized unit test needs to be a separate class, and in my past 
experience it can be hard to debug in an IDE. Please check if the updated 
patch looks good to you. [~templedf]

> DefaultAMSProcessor should properly check customized resource types against 
> minimum/maximum allocation
> --
>
> Key: YARN-7739
> URL: https://issues.apache.org/jira/browse/YARN-7739
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7339.002.patch, YARN-7739.001.patch
>
>
> Currently, YARN RM rejects a requested resource if memory or vcores are less 
> than 0 or greater than the maximum allocation. We should run the same check 
> for customized resource types as well.
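As a rough illustration of the check being extended here, a sketch that validates every resource type, not just memory and vcores, against the configured maximum. The map-based resource model is a stand-in for the real Resource/ResourceInformation classes, so all names are assumptions:

{code:java}
import java.util.Map;

// Sketch only: map-based resources stand in for Resource/ResourceInformation.
public class AllocationCheckSketch {

  /** Reject any resource type that is negative or above its maximum. */
  static void validate(Map<String, Long> requested, Map<String, Long> max) {
    for (Map.Entry<String, Long> e : requested.entrySet()) {
      long value = e.getValue();
      Long limit = max.get(e.getKey());
      if (value < 0 || (limit != null && value > limit)) {
        throw new IllegalArgumentException("Invalid resource request "
            + e.getKey() + "=" + value + " (max " + limit + ")");
      }
    }
  }

  public static void main(String[] args) {
    Map<String, Long> max =
        Map.of("memory-mb", 8192L, "vcores", 8L, "yarn.io/gpu", 4L);
    validate(Map.of("memory-mb", 4096L, "yarn.io/gpu", 1L), max); // passes
    try {
      validate(Map.of("yarn.io/gpu", 16L), max); // custom type over the max
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
{code}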



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7858) Support special Node Attribute scopes in addition to NODE and RACK

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-7858:
-

Assignee: Weiwei Yang

> Support special Node Attribute scopes in addition to NODE and RACK
> --
>
> Key: YARN-7858
> URL: https://issues.apache.org/jira/browse/YARN-7858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
>
> Currently, we have only two scopes defined, NODE and RACK, against which we 
> check the cardinality of a placement.
> This idea should be extended to support node-attribute scopes, for example 
> placement of containers across *upgrade domains* and *failure domains*.
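To make the idea concrete, a sketch of a cardinality check scoped by a node attribute rather than by NODE or RACK. The data model and the "failure_domain" attribute are illustrative assumptions only:

{code:java}
import java.util.List;
import java.util.Map;

// Sketch only: counts tagged containers per distinct value of a node attribute.
public class AttributeScopeSketch {

  record NodeInfo(String host, Map<String, String> attrs, int taggedContainers) {}

  /** true if one more container on 'candidate' keeps the per-scope maximum. */
  static boolean canPlace(NodeInfo candidate, List<NodeInfo> cluster,
                          String scopeAttr, int maxPerScope) {
    String scopeValue = candidate.attrs().get(scopeAttr);
    int inScope = 0;
    for (NodeInfo n : cluster) {
      if (scopeValue.equals(n.attrs().get(scopeAttr))) {
        inScope += n.taggedContainers();
      }
    }
    return inScope + 1 <= maxPerScope;
  }

  public static void main(String[] args) {
    List<NodeInfo> cluster = List.of(
        new NodeInfo("n1", Map.of("failure_domain", "1"), 1),
        new NodeInfo("n2", Map.of("failure_domain", "1"), 0),
        new NodeInfo("n3", Map.of("failure_domain", "2"), 0));
    // "no more than one per failure domain": domain 1 already holds one
    System.out.println(canPlace(cluster.get(1), cluster, "failure_domain", 1)); // false
    System.out.println(canPlace(cluster.get(2), cluster, "failure_domain", 1)); // true
  }
}
{code}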



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-01-30 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7739:
-
Attachment: YARN-7339.002.patch

> DefaultAMSProcessor should properly check customized resource types against 
> minimum/maximum allocation
> --
>
> Key: YARN-7739
> URL: https://issues.apache.org/jira/browse/YARN-7739
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7339.002.patch, YARN-7739.001.patch
>
>
> Currently, YARN RM rejects a requested resource if memory or vcores are less 
> than 0 or greater than the maximum allocation. We should run the same check 
> for customized resource types as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-01-30 Thread tartarus (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346236#comment-16346236
 ] 

tartarus commented on YARN-7773:


Good job! Very helpful to me!!! Thanks

> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. I found 
> the cause: the column used to create the applicationsHomeSubCluster table was 
> 'subClusterId', while the stored procedure used 'homeSubCluster'. I fixed 
> this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013)
>  at 
> com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104)
>  at 
> com.mysql.jdbc.CallableStatement.executeLargeUpdate(CallableStatement.java:2418)
>  at com.mysql.jdbc.CallableStatement.executeUpdate(CallableStatement.java:887)
>  at 
> com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
>  at 
> com.zaxxer.hikari.pool.HikariProxyCallableStatement.executeUpdate(HikariProxyCallableStatement.java)
>  at 
> org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore.addApplicationHomeSubCluster(SQLFederationStateStore.java:547)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy31.addApplicationHomeSubCluster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:345)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.JDFederationClientInterceptor.submitApplication(JDFederationClientInterceptor.java:334)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:196)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2072)

[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-01-30 Thread Yiran Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346234#comment-16346234
 ] 

Yiran Wu commented on YARN-7773:


Cc [~Naganarasimha], [~sunilg], [~bibinchundatt], [~leftnoteasy] and 
[~LambertYe].

> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. I found 
> the cause: the column used to create the applicationsHomeSubCluster table was 
> 'subClusterId', while the stored procedure used 'homeSubCluster'. I fixed 
> this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013)
>  at 
> com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104)
>  at 
> com.mysql.jdbc.CallableStatement.executeLargeUpdate(CallableStatement.java:2418)
>  at com.mysql.jdbc.CallableStatement.executeUpdate(CallableStatement.java:887)
>  at 
> com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
>  at 
> com.zaxxer.hikari.pool.HikariProxyCallableStatement.executeUpdate(HikariProxyCallableStatement.java)
>  at 
> org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore.addApplicationHomeSubCluster(SQLFederationStateStore.java:547)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy31.addApplicationHomeSubCluster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:345)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.JDFederationClientInterceptor.submitApplication(JDFederationClientInterceptor.java:334)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:196)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
>  at 

[jira] [Commented] (YARN-7773) YARN Federation used Mysql as state store throw exception, Unknown column 'homeSubCluster' in 'field list'

2018-01-30 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346230#comment-16346230
 ] 

maobaolong commented on YARN-7773:
--

Great job! Seems good!

> YARN Federation used Mysql as state store throw exception, Unknown column 
> 'homeSubCluster' in 'field list'
> --
>
> Key: YARN-7773
> URL: https://issues.apache.org/jira/browse/YARN-7773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.9.0, 3.0.0-alpha1, 3.0.0-alpha2, 3.0.0-beta1, 
> 3.0.0-alpha4, 3.0.0-alpha3, 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Yiran Wu
>Priority: Blocker
>  Labels: patch
> Attachments: YARN-7773.001.patch
>
>
> An error occurred when YARN Federation used MySQL as the state store. I found 
> the cause: the column used to create the applicationsHomeSubCluster table was 
> 'subClusterId', while the stored procedure used 'homeSubCluster'. I fixed 
> this problem.
>  
> submitApplication appIdapplication_1516277664083_0014 try #0 on SubCluster 
> cluster1 , queue: root.bdp_federation
>  [2018-01-18T23:25:29.325+08:00] [ERROR] 
> store.impl.SQLFederationStateStore.logAndThrowRetriableException(FederationStateStoreUtils.java
>  158) [IPC Server handler 44 on 8050] : Unable to insert the newly generated 
> application application_1516277664083_0014
>  com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 
> 'homeSubCluster' in 'field list'
>  at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
>  at com.mysql.jdbc.Util.getInstance(Util.java:408)
>  at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
>  at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
>  at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
>  at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
>  at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)
>  at 
> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079)
>  at 
> com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013)
>  at 
> com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104)
>  at 
> com.mysql.jdbc.CallableStatement.executeLargeUpdate(CallableStatement.java:2418)
>  at com.mysql.jdbc.CallableStatement.executeUpdate(CallableStatement.java:887)
>  at 
> com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
>  at 
> com.zaxxer.hikari.pool.HikariProxyCallableStatement.executeUpdate(HikariProxyCallableStatement.java)
>  at 
> org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore.addApplicationHomeSubCluster(SQLFederationStateStore.java:547)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy31.addApplicationHomeSubCluster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:345)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.JDFederationClientInterceptor.submitApplication(JDFederationClientInterceptor.java:334)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:196)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2076)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2072)
>  at 

[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346225#comment-16346225
 ] 

genericqa commented on YARN-7778:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7812 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 4s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7812 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.placement.TestSingleConstraintAppPlacementAllocator
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908472/YARN-7778-YARN-7812.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9079a777e1b7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7812 / e6d2d26 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19537/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19537/testReport/ |
| Max. process+thread count | 894 (vs. ulimit of 5000) |
| 

[jira] [Commented] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346217#comment-16346217
 ] 

genericqa commented on YARN-7816:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
38s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908473/YARN-7816.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 46761284b9a5 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2e7331c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19536/testReport/ |
| Max. process+thread count | 1442 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (YARN-7858) Support special Node Attribute scopes in addition to NODE and RACK

2018-01-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346205#comment-16346205
 ] 

Arun Suresh commented on YARN-7858:
---

Sure, feel free to take over. Yup, we would need YARN-3409 to get to some level 
of completion. But we need to decide if we want to allow ALL attributes to be 
scope-able or just a pre-configured subset.

> Support special Node Attribute scopes in addition to NODE and RACK
> --
>
> Key: YARN-7858
> URL: https://issues.apache.org/jira/browse/YARN-7858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> Currently, we have only two scopes defined, NODE and RACK, against which we 
> check the cardinality of a placement.
> This idea should be extended to support node-attribute scopes, for example 
> placement of containers across *upgrade domains* and *failure domains*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7859) New further: add queue scheduling deadLine in fairScheduler.

2018-01-30 Thread wangwj (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346184#comment-16346184
 ] 

wangwj commented on YARN-7859:
--

[~ka...@cloudera.com]  [~yufeigu]
 Please give me some advice..
 THX...

> New further: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, queue scheduling starvation often occurs in FairScheduler 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> Today the community addresses queue scheduling starvation by preempting 
> containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...
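To illustrate the proposal, a minimal sketch of the deadline rule under simplified assumptions. The names and the queue model are hypothetical, not the actual FairScheduler internals: a queue that has waited past its deadline is picked ahead of the normal fair-share ordering.

{code:java}
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the proposed deadline rule; not FairScheduler code.
public class QueueDeadlineSketch {

  record QueueState(String name, long lastScheduledMs, long deadlineMs) {
    boolean overdue(long nowMs) { return nowMs - lastScheduledMs > deadlineMs; }
  }

  /** Any overdue queue is scheduled first, the most-starved one ahead of the rest. */
  static QueueState pickNext(List<QueueState> queues, long nowMs) {
    return queues.stream()
        .filter(q -> q.overdue(nowMs))
        .min(Comparator.comparingLong(QueueState::lastScheduledMs))
        .orElse(queues.get(0)); // otherwise fall back to fair-share order
  }

  public static void main(String[] args) {
    long now = 100_000;
    List<QueueState> queues = List.of(
        new QueueState("root.adhoc", 95_000, 30_000),  // recently served
        new QueueState("root.etl", 40_000, 30_000));   // starved for 60s
    System.out.println(pickNext(queues, now).name());  // root.etl
  }
}
{code}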



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New further: add queue scheduling deadLine in fairScheduler.

2018-01-30 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: YARN-7859-v1.patch

> New further: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-7859-v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, queue scheduling starvation often occurs in FairScheduler 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> Today the community addresses queue scheduling starvation by preempting 
> containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7859) New further: add queue scheduling deadLine in fairScheduler.

2018-01-30 Thread wangwj (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangwj updated YARN-7859:
-
Attachment: (was: YARN-.patch)

> New further: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
> Environment:     The environment of my company is  
> hadoop2.6.0-cdh5.4.7
>Reporter: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Fix For: 2.9.0, 3.0.0-alpha2
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, queue scheduling starvation often occurs in FairScheduler 
> when the number of cluster jobs is large: the apps in one or more queues stay 
> pending. So I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly!
> Today the community addresses queue scheduling starvation by preempting 
> containers, but that approach may increase the failure rate of jobs.
> On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7859) New further: add queue scheduling deadLine in fairScheduler.

2018-01-30 Thread wangwj (JIRA)
wangwj created YARN-7859:


 Summary: New further: add queue scheduling deadLine in 
fairScheduler.
 Key: YARN-7859
 URL: https://issues.apache.org/jira/browse/YARN-7859
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2
 Environment:     The environment of my company is  hadoop2.6.0-cdh5.4.7
Reporter: wangwj
 Fix For: 3.0.0-alpha2, 2.9.0
 Attachments: YARN-.patch

 As is well known, queue scheduling starvation often occurs in FairScheduler 
when the number of cluster jobs is large: the apps in one or more queues stay 
pending. So I have thought of a way to solve this problem: add a queue 
scheduling deadline to FairScheduler. When a queue has not been scheduled by 
FairScheduler within a specified time, we schedule it forcibly!
Today the community addresses queue scheduling starvation by preempting 
containers, but that approach may increase the failure rate of jobs.
On the basis of the above, I propose this issue...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7843) Container Localizer is failing with NPE

2018-01-30 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-7843.
-
  Resolution: Invalid
Target Version/s:   (was: 3.1.0)

I am closing this issue as invalid. If I hit the issue again with a proper 
deployment, I will reopen it.
Thanks [~jlowe] for the pointer.

> Container Localizer is failing with NPE
> ---
>
> Key: YARN-7843
> URL: https://issues.apache.org/jira/browse/YARN-7843
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> It is seen that container localizers are failing with an NPE; as a result, 
> none of the containers are getting launched!
> {noformat}
> Caused by: java.lang.NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl.getPathForLocalization(LocalResourcesTrackerImpl.java:503)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.getPathForLocalization(ResourceLocalizationService.java:1189)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.processHeartbeat(ResourceLocalizationService.java:1153)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:753)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:371)
> at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:48)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7516) Security check for trusted docker image

2018-01-30 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346144#comment-16346144
 ] 

Shane Kumpf edited comment on YARN-7516 at 1/31/18 2:22 AM:


Thanks for the quick response and feedback, Eric. At this point, I think it's 
really just a question of how we convey the feature to users. IMO, we need to 
try to keep this simple and avoid having too many "if you enable this, this 
gets disabled automatically" settings.
{quote}When using the double list check, how does a user switch the image to 
run in default-mode, if the image also exists in yarn-mode list?
{quote}
The yarn-mode would take precedence if the same image prefix exists in both 
configs. But really the user needs to pick one. It shouldn't exist in both 
places. :) By moving to the image level, they can get as granular as needed to 
avoid that scenario.
{quote}There are use cases to pass launch command in default mode. I don't 
think we can limit launch command for default mode for practical purpose of how 
docker operates, but I like to hear from others as well.
{quote}
With default-mode, all mounts are disabled. This means that the launch script 
simply can't run. Any "launch_command" set by native services gets injected 
into a launch script that is localized and run, AFAIK. I know with enabling 
entry points we fix some of this, but my thought behind default-mode containers 
is that we don't override anything, keep it simple. If you need YARN to 
override the entry point, it's a yarn-mode container.
{quote}For user parameter, I prefer to have it configurable via privileged:true 
flag (i.e. YARN-7446). 
{quote}
We have now restricted the container pretty seriously; I can't see how the user 
in the container could matter, but I'm open to discussion here. I'm in favor of 
removing --user for default-mode; again, no unnecessary overrides. I'm also 
still not sure I agree with disabling user with --privileged. I'm not sure the 
two are mutually exclusive. We can continue the discussion in YARN-7446.

Thanks again.


was (Author: shaneku...@gmail.com):
Thanks for the quick response and feedback, Eric. At this point, I think it's 
really just a question of how we convey the feature to users. IMO, we need to 
try to keep this simple and avoid having too many "if you enable this, this 
gets disabled automatically" settings.
{quote}When using the double list check, how does a user switch the image to 
run in default-mode, if the image also exists in yarn-mode list?
{quote}
The yarn-mode would take precedence if the same image prefix exists in both 
configs. But really the user needs to pick one. It shouldn't exist in both 
places. :) By moving to the image level, they can get as granular as needed to 
avoid that scenario.
{quote}There are use cases to pass launch command in default mode. I don't 
think we can limit launch command for default mode for practical purpose of how 
docker operates, but I like to hear from others as well.
{quote}
With default-mode, all mounts are disabled. This means that the launch script 
simply can't run. Any "launch_command" set by native services gets injected 
into a launch script that is localized and run, AFAIK. I know with enabling 
entry points we fix some of this, but my thought behind default containers is 
that we don't override anything, keep it simple. If you need us to override the 
entry point, it's a yarn-mode container.
{quote}For user parameter, I prefer to have it configurable via privileged:true 
flag (i.e. YARN-7446). 
{quote}
We have now restricted the container pretty seriously; I can't see how the user 
in the container could matter, but I'm open to discussion here. I'm in favor of 
removing --user for default-mode; again, no unnecessary overrides. I'm also 
still not sure I agree with disabling user with --privileged. I'm not sure the 
two are mutually exclusive. We can continue the discussion in YARN-7446.

Thanks again.

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and enforces uid:gid 
> consistency in the docker container and the distributed file 

[jira] [Commented] (YARN-7516) Security check for trusted docker image

2018-01-30 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346144#comment-16346144
 ] 

Shane Kumpf commented on YARN-7516:
---

Thanks for the quick response and feedback, Eric. At this point, I think it's 
really just a question of how we convey the feature to users. IMO, we need to 
try to keep this simple and avoid having too many "if you enable this, this 
gets disabled automatically" settings.
{quote}When using the double list check, how does a user switch the image to 
run in default-mode, if the image also exists in yarn-mode list?
{quote}
The yarn-mode would take precedence if the same image prefix exists in both 
configs. But really the user needs to pick one. It shouldn't exist in both 
places. :) By moving to the image level, they can get as granular as needed to 
avoid that scenario.
{quote}There are use cases to pass launch command in default mode. I don't 
think we can limit launch command for default mode for practical purpose of how 
docker operates, but I like to hear from others as well.
{quote}
With default-mode, all mounts are disabled. This means that the launch script 
simply can't run. Any "launch_command" set by native services gets injected 
into a launch script that is localized and run, AFAIK. I know with enabling 
entry points we fix some of this, but my thought behind default containers is 
that we don't override anything, keep it simple. If you need us to override the 
entry point, it's a yarn-mode container.
{quote}For user parameter, I prefer to have it configurable via privileged:true 
flag (i.e. YARN-7446). 
{quote}
We have now restricted the container pretty seriously; I can't see how the user 
in the container could matter, but I'm open to discussion here. I'm in favor of 
removing --user for default-mode; again, no unnecessary overrides. I'm also 
still not sure I agree with disabling user with --privileged. I'm not sure the 
two are mutually exclusive. We can continue the discussion in YARN-7446.

Thanks again.

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and enforces uid:gid 
> consistency in the docker container and the distributed file system.  There 
> is a cloud use case for having the ability to run untrusted docker images on 
> the same cluster for testing.  
> The basic requirement for an untrusted container is to ensure all kernel and 
> root privileges are dropped, and there is no interaction with the distributed 
> file system, to avoid contamination.  We can probably enforce detection of an 
> untrusted docker image by checking the following (a sketch follows the list):
> # If the docker image is from the public docker hub repository, the container 
> is automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and a white 
> list allows the private repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
> # If the docker image is from a private trusted registry with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
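A minimal sketch of the detection rules above, assuming a prefix-based white list. The registry names and the two-mode split are illustrative assumptions, not the final configuration format:

{code:java}
import java.util.Set;

// Sketch only: classify an image by whether its registry prefix is white-listed.
public class TrustedImageSketch {

  enum Mode { TRUSTED, UNTRUSTED }

  /** Trusted only if the image name starts with a white-listed registry prefix. */
  static Mode classify(String image, Set<String> trustedPrefixes) {
    return trustedPrefixes.stream().anyMatch(image::startsWith)
        ? Mode.TRUSTED : Mode.UNTRUSTED;
  }

  public static void main(String[] args) {
    Set<String> whiteList = Set.of("private.registry.local:5000/");
    System.out.println(classify("private.registry.local:5000/centos", whiteList));
    // Public docker hub image: flagged insecure, so the launcher would disable
    // volume mounts and drop all kernel capabilities.
    System.out.println(classify("library/ubuntu:16.04", whiteList));
  }
}
{code}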



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346138#comment-16346138
 ] 

Gour Saha commented on YARN-7816:
-

Thanks [~eyang] for reviewing the patch. I have made the change and uploaded 
patch 003.

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch, YARN-7816.002.patch, 
> YARN-7816.003.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from the cmd line as the logged-in user. However, after 
> creating an app named "myapp" as user A, when I then log in as a different 
> user B, I am unable to create a service of the exact same name ("myapp" in 
> this case). This feature should be supported in a multi-user setup.
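One plausible direction, sketched under assumptions: key the service registry by (user, serviceName) so that names only need to be unique per user. The map-based registry here is hypothetical and not the actual YARN service-registry layout:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a registry keyed by (user, serviceName) rather than
// by serviceName alone, so two users can each own a "myapp".
public class ServiceNameSketch {

  private final Map<String, String> services = new HashMap<>();

  /** Composite key: names must be unique per user, not cluster-wide. */
  private static String key(String user, String serviceName) {
    return user + "/" + serviceName;
  }

  /** Returns true if the service was created, false if the name is taken. */
  boolean create(String user, String serviceName, String spec) {
    return services.putIfAbsent(key(user, serviceName), spec) == null;
  }

  public static void main(String[] args) {
    ServiceNameSketch registry = new ServiceNameSketch();
    System.out.println(registry.create("userA", "myapp", "specA")); // true
    System.out.println(registry.create("userB", "myapp", "specB")); // true: no clash
    System.out.println(registry.create("userA", "myapp", "again")); // false: duplicate
  }
}
{code}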



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7816:

Attachment: YARN-7816.003.patch

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch, YARN-7816.002.patch, 
> YARN-7816.003.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from the cmd line as the logged-in user. However, after 
> creating an app named "myapp" as user A, when I then log in as a different 
> user B, I am unable to create a service of the exact same name ("myapp" in 
> this case). This feature should be supported in a multi-user setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7778:
--
Attachment: YARN-7778-YARN-7812.001.patch

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf, 
> YARN-7778-YARN-7812.001.patch
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.
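A minimal sketch of this merge rule in plain Java, with assumed names (the 
actual constraints are richer {{PlacementConstraint}} objects, not bare 
integers):
{code:java}
// Minimal sketch, assuming plain integer max-cardinalities.
public class CardinalityMerge {
  /**
   * Returns the effective max-cardinality, failing if the lower level
   * tries to loosen the higher-level constraint.
   */
  public static int merge(int higherLevelMax, int lowerLevelMax) {
    if (lowerLevelMax > higherLevelMax) {
      // e.g. the app level allows 5 HBase containers per rack; a scheduling
      // request asking for 7 must be rejected.
      throw new IllegalArgumentException("Lower-level constraint ("
          + lowerLevelMax + ") is less restrictive than higher-level ("
          + higherLevelMax + ")");
    }
    return lowerLevelMax; // e.g. 3 is a valid tightening of 5
  }
}
{code}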



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7778:
--
Attachment: (was: YARN-7778-YARN-7812.001.patch)

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7778) Merging of constraints defined at different levels

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7778:
--
Attachment: YARN-7778-YARN-7812.001.patch

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Merge Constraints Solution.pdf
>
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7811) Service AM should use configured default docker network

2018-01-30 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7811:

Affects Version/s: 3.1.0
 Target Version/s: 3.1.0
Fix Version/s: 3.1.0
  Component/s: yarn-native-services

> Service AM should use configured default docker network
> ---
>
> Key: YARN-7811
> URL: https://issues.apache.org/jira/browse/YARN-7811
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7811.01.patch
>
>
> Currently the DockerProviderService used by the Service AM hardcodes a 
> default of bridge for the docker network. We already have a YARN 
> configuration property for default network, so the Service AM should honor 
> that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for trusted docker image

2018-01-30 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346116#comment-16346116
 ] 

Eric Yang commented on YARN-7516:
-

[~shaneku...@gmail.com] Thank you for the review.  When using the double-list 
check, how does a user switch an image to run in default-mode if the image 
also exists in the yarn-mode list?

For the user parameter, I prefer to have it configurable via the 
privileged:true flag (i.e. YARN-7446).  When we drop all capabilities, the 
user can still run as the root user in the docker image in default mode.  If 
they do not specify the privileged:true flag, the user is reduced to normal 
user privileges.  This provides a wider range of security support, allowing 
default mode to run on privileged ports for a web server, or a really private 
instance of a container.  We probably need to grant the net_bind_service 
capability for default mode with the privileged flag enabled.
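A minimal sketch of the docker arguments this implies; the helper name and 
flag wiring here are assumptions for illustration, not actual YARN code:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class DefaultModeCapabilities {
  // Illustrative only: default mode drops everything, then optionally
  // re-adds NET_BIND_SERVICE when the privileged flag is enabled.
  public static List<String> capabilityArgs(boolean privilegedFlag) {
    List<String> args = new ArrayList<>();
    args.add("--cap-drop=ALL");
    if (privilegedFlag) {
      args.add("--cap-add=NET_BIND_SERVICE"); // allow ports below 1024
    }
    return args;
  }
}
{code}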

There are use cases for passing a launch command in default mode.  I don't 
think we can limit the launch command for default mode, given how docker 
operates in practice, but I would like to hear from others as well.

[~ebadger] [~billie.rinaldi] What do you think?

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop 
> security is enforced through username and group membership, and enforces 
> uid:gid consistency between the docker container and the distributed file 
> system.  There is a cloud use case for the ability to run untrusted docker 
> images on the same cluster for testing.
> The basic requirement for an untrusted container is to ensure all kernel and 
> root privileges are dropped, and there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from the public docker hub repository, the 
> container is automatically flagged as insecure, disk volume mounts are 
> disabled automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and a 
> white list allows the private repository, disk volume mounts are allowed and 
> kernel capabilities follow the allowed list.
> # If the docker image is from a private trusted registry, with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346087#comment-16346087
 ] 

genericqa commented on YARN-7816:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m  3s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverControllerStress |
|   | hadoop.yarn.service.TestServiceAM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908455/YARN-7816.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8fc17bdac41d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2e7331c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 

[jira] [Updated] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7757:
--
Priority: Blocker  (was: Critical)

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7757:
--
Priority: Critical  (was: Major)

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346083#comment-16346083
 ] 

Weiwei Yang edited comment on YARN-7757 at 1/31/18 1:07 AM:


Hi [~sunilg]

Thanks for your comments.
bq. we cant deny a possibility of multi scripts for different types of 
attributes
Agreed, that makes sense.

bq. Could we have permission checks etc
We already have that in the patch, by reusing the existing check code from 
labels. Please see {{AbstractNodeDescriptorsProvider#verifyConfiguredScript}}; 
this is called in both script-based provider implementations (labels and 
attributes).

bq. If the script is not back within next periodic check, we can interrupt and 
fail the op
Also makes sense.

I suggest we track the enhancements described in #1 and #3 in another, 
lower-priority task; this one is focused on refactoring the existing code, 
which is a blocker for the rest.
Do you have any other comments?

Thanks


was (Author: cheersyang):
Hi [~sunilg]

Thanks for your comments.
bq. we cant deny a possibility of multi scripts for different types of 
attributes
Agreed, that makes sense.

bq. Could we have permission checks etc
We already have that in the patch, by reusing the existing check code from 
labels. Please see {{AbstractNodeDescriptorsProvider#verifyConfiguredScript}}; 
this is called in both script-based provider implementations (labels and 
attributes).

bq. If the script is not back within next periodic check, we can interrupt and 
fail the op
Also makes sense.

I suggest we track the enhancements described in #1 and #3 in another, 
lower-priority task; this one is focused on refactoring the existing code, 
which is a blocker for the rest.
Do you have any other comments?

Thanks

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346083#comment-16346083
 ] 

Weiwei Yang commented on YARN-7757:
---

Hi [~sunilg]

Thanks for your comments.
bq. we cant deny a possibility of multi scripts for different types of 
attributes
Agreed, that makes sense.

bq. Could we have permission checks etc
We already have that in the patch, by reusing the existing check code from 
labels. Please see {{AbstractNodeDescriptorsProvider#verifyConfiguredScript}}; 
this is called in both script-based provider implementations (labels and 
attributes).

bq. If the script is not back within next periodic check, we can interrupt and 
fail the op
Also makes sense.

I suggest we track the enhancements described in #1 and #3 in another, 
lower-priority task; this one is focused on refactoring the existing code, 
which is a blocker for the rest.
Do you have any other comments?

Thanks
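For context, a minimal sketch of the kind of configured-script check described 
above; {{verifyConfiguredScript}} is the real method name, but the body below 
is an assumption for illustration, not the actual patch code:
{code:java}
import java.io.File;
import java.io.IOException;

public class ScriptCheck {
  // Assumed illustration of a configured-script sanity check.
  public static void verifyConfiguredScript(String path) throws IOException {
    File script = new File(path);
    if (!script.exists() || !script.isFile()) {
      throw new IOException("Node descriptors script not found: " + path);
    }
    if (!script.canExecute()) {
      throw new IOException("Node descriptors script is not executable: "
          + path);
    }
  }
}
{code}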

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, YARN-7757-YARN-3409.003.patch, 
> YARN-7757-YARN-3409.004.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and 
> {{AbstractNodeLabelsProvider}} to be more generic, so node attribute 
> providers can reuse these interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346062#comment-16346062
 ] 

genericqa commented on YARN-7819:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7812 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
37s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} YARN-7812 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-7812 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt
 to org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt 
in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptAllocationOnNode(SchedulerApplicationAttempt,
 SchedulingRequest, SchedulerNode)  At 
FairScheduler.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptAllocationOnNode(SchedulerApplicationAttempt,
 SchedulingRequest, SchedulerNode)  At FairScheduler.java:[line 1883] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908446/YARN-7819-YARN-7812.001.patch
 |
| Optional Tests |  

[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346053#comment-16346053
 ] 

genericqa commented on YARN-2185:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 8 
fixed = 151 total (was 159) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-2185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908451/YARN-2185.014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d7923dc632c 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f9dd5b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19534/testReport/ |
| Max. process+thread count 

[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346052#comment-16346052
 ] 

Arun Suresh commented on YARN-7792:
---

The {{TestRMWebServiceAppsNodelabel}} failure is unrelated.
The remaining tests run fine for me locally.


> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Priority: Blocker
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch, YARN-7792.004.patch
>
>
> This Jira is to run the aggregated YARN-6592 branch patch against trunk and 
> check for any jenkins issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for trusted docker image

2018-01-30 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346045#comment-16346045
 ] 

Shane Kumpf commented on YARN-7516:
---

Hey Eric, thanks for the patches!

I'm sorry to jump in here late. I really like the premise, but I'll admit, I'm 
struggling with the naming of this feature and have been thinking about 
alternatives. IMO, "docker.privileged-containers.registries" implies containers 
requesting the --privileged flag, which isn't necessarily the case here.

To keep this short, I think we are looking at two ways that we launch 
containers; I'll refer to them as "default-mode" and "yarn-mode".

In "default-mode", the following are removed:
 * Capabilities
 * Mounts
 * Privileged flag
 * Devices
 * User*
 * Launch command*

*Note that I think this patch also needs to remove overriding the user and 
launch command, regardless of the direction taken. With all the other 
features disabled, these don't make sense.

In "default-mode", the container runs as defined by the image with no features. 
This has the benefits of limiting any host access to improve security and 
making it easier to get started with Docker on YARN. This is the direction the 
patch is already headed.

In "yarn-mode" some combination of those features would be enabled. Using these 
features requires playing by the rules of being a "yarn-mode"  container. We 
already have controls around mounts, devices, capabilities and privileged, but 
there are gaps, and the current defaults need review, which we could tackle in 
another JIRA. This allows for some amount of "opt-in" or "opt-out" of YARN or 
OS features for "yarn-mode" containers, which may be desirable.

With the modes defined, we could have a configuration similar to (example):
{code:java}
docker.default-mode.image.prefix=*
docker.yarn-mode.image.prefix=registry.example.com/qe,registry.example.com/re/jenkins,httpd:2.4{code}
If the image begins with one of the prefix entries, run the container in that 
mode.

One benefit is that these settings don't overlap with existing docker terms 
like privileged and trusted, and IMO a registry is too coarse a unit for 
determining which mode a container should run in. Of course, a prefix can be 
used to specify an entire registry if desired, and it also allows for getting 
as specific as needed.

A couple other thoughts
 * docker.default-mode.image.prefix should default to * for ease of use.
 * An image (or namespace or registry) would be promoted to yarn-mode once it 
has had the proper vetting.
 * Any image not in either list would result in container failure. The image is 
not allowed.
 * The yarn-mode list takes precedence; fall back to the default-mode check 
(see the sketch below).
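
A minimal sketch of that resolution order in plain Java, with assumed names 
(the real check would presumably live in the container runtime, not here):
{code:java}
public class DockerModeResolver {
  public enum Mode { YARN_MODE, DEFAULT_MODE, DENIED }

  // Illustrative only: yarn-mode list takes precedence, then default-mode;
  // an image matching neither list is not allowed.
  public static Mode resolve(String image, String yarnModePrefixes,
      String defaultModePrefixes) {
    if (matchesAny(image, yarnModePrefixes)) {
      return Mode.YARN_MODE;
    }
    if (matchesAny(image, defaultModePrefixes)) {
      return Mode.DEFAULT_MODE;
    }
    return Mode.DENIED;
  }

  private static boolean matchesAny(String image, String prefixList) {
    for (String prefix : prefixList.split(",")) {
      String p = prefix.trim();
      if (p.equals("*") || image.startsWith(p)) {
        return true;
      }
    }
    return false;
  }
}
{code}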

Let me know what you think.

> Security check for trusted docker image
> ---
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch, 
> YARN-7516.012.patch, YARN-7516.013.patch, YARN-7516.014.patch, 
> YARN-7516.015.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from docker hub.  In the current implementation, Hadoop 
> security is enforced through username and group membership, and enforces 
> uid:gid consistency between the docker container and the distributed file 
> system.  There is a cloud use case for the ability to run untrusted docker 
> images on the same cluster for testing.
> The basic requirement for an untrusted container is to ensure all kernel and 
> root privileges are dropped, and there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from the public docker hub repository, the 
> container is automatically flagged as insecure, disk volume mounts are 
> disabled automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in docker hub, and a 
> white list allows the private repository, disk volume mounts are allowed and 
> kernel capabilities follow the allowed list.
> # If the docker image is from a private trusted registry, with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To 

[jira] [Commented] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346031#comment-16346031
 ] 

Eric Yang commented on YARN-7816:
-

ServiceClient operations are performed in a doAs call, so using 
System.getProperty("user.name") might return the YARN service user instead of 
the end user's username.  It would be good to change the retrieval of the 
username to call UserGroupInformation.getCurrentUser() instead.
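A minimal sketch of the suggested retrieval (assumed usage, not the final 
patch):
{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class CallerName {
  // Inside a doAs block this reflects the remote/proxy user, whereas
  // System.getProperty("user.name") returns the user owning the JVM.
  public static String endUser() throws IOException {
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}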

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch, YARN-7816.002.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from the cmd line as the logged-in user. However, after 
> creating an app named "myapp" as user A, when I log in as a different user, 
> user B, I am unable to create a service of the exact same name ("myapp" in 
> this case). This feature should be supported in a multi-user setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345989#comment-16345989
 ] 

genericqa commented on YARN-7792:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 41 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 141 new + 2420 
unchanged - 46 fixed = 2561 total (was 2466) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-yarn-project_hadoop-yarn generated 7 new + 4197 
unchanged - 0 fixed = 4204 total (was 4197) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 6 new + 4183 unchanged - 0 fixed = 4189 total (was 4183) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Commented] (YARN-7840) Update PB for prefix support of node attributes

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16346002#comment-16346002
 ] 

Weiwei Yang commented on YARN-7840:
---

Hi [~Naganarasimha]

Can you please take a look at [~sunilg]'s and [~bibinchundatt]'s comments?
I think [~sunilg] raises a good point: since the type is only valid when 
there is a value, it would be good to wrap the type into the value proto. 
That could also help handle the case where a node attribute is added without 
a value (like a label). I also agree with [~bibinchundatt] on adding 
[default=""] to the attributePrefix since it is optional.

Thanks
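
To make the suggestion concrete, a sketch in Java terms of the proposed shape 
(assumed for illustration, not the actual proto): the type travels with the 
value, and the prefix is optional with an empty default.
{code:java}
import java.util.Optional;

public class NodeAttribute {
  public enum AttributeType { STRING } // assumed type set

  public static class AttributeValue {
    public final AttributeType type; // type is only meaningful with a value
    public final String value;
    public AttributeValue(AttributeType type, String value) {
      this.type = type;
      this.value = value;
    }
  }

  public final String prefix;                  // optional, defaults to ""
  public final String name;
  public final Optional<AttributeValue> value; // absent => acts like a label

  public NodeAttribute(String prefix, String name,
      Optional<AttributeValue> value) {
    this.prefix = (prefix == null) ? "" : prefix;
    this.name = name;
    this.value = value;
  }
}
{code}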

> Update PB for prefix support of node attributes
> ---
>
> Key: YARN-7840
> URL: https://issues.apache.org/jira/browse/YARN-7840
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Blocker
> Attachments: YARN-7840-YARN-3409.001.patch, 
> YARN-7840-YARN-3409.002.patch
>
>
> We need to support prefix (namespace) for node attributes, this will add the 
> flexibility to provide ability to do proper ACL, avoid naming conflicts etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7816:

Attachment: YARN-7816.002.patch

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch, YARN-7816.002.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from the cmd line as the logged-in user. However, after 
> creating an app named "myapp" as user A, when I log in as a different user, 
> user B, I am unable to create a service of the exact same name ("myapp" in 
> this case). This feature should be supported in a multi-user setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7265) Hadoop Server Log Correlation

2018-01-30 Thread Tanping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanping Wang reopened YARN-7265:


[~eyang]  We are not looking into Chukwa at the moment.  There is some 
conversation about exploring other options.

> Hadoop Server Log Correlation  
> ---
>
> Key: YARN-7265
> URL: https://issues.apache.org/jira/browse/YARN-7265
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: log-aggregation
>Reporter: Tanping Wang
>Priority: Major
>
> Hadoop has many server logs: yarn task logs, node manager logs, HDFS logs, 
> etc.  There are also many different ways to expose the logs, correlate them 
> horizontally, or search them by keyword.  There is a need for a default yet 
> convenient log analytics mechanism in Hadoop itself that at least covers all 
> of Hadoop's server logs.  This log analytics system could correlate the 
> Hadoop server logs by grouping them along various dimensions, including 
> application ID, task ID, job ID, or node ID.  The raw, correlated logs could 
> then be easily accessed by application developers or cluster administrators 
> via a web page for managing and debugging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7827) Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404

2018-01-30 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345973#comment-16345973
 ] 

Eric Yang commented on YARN-7827:
-

user.name shouldn't be hard-coded to dr.who.  The end username should be 
passed in the request for simple security, and the user.name parameter should 
be omitted for a Kerberos-enabled browser.  See the section "Accessing the 
server using curl" in the hadoop-auth 
[example|https://hadoop.apache.org/docs/current/hadoop-auth/Examples.html].
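For illustration, a hypothetical request URL under simple security (host, 
port, and service name are placeholders taken from the stack trace below):
{code:java}
import java.net.URL;

public class SimpleAuthRequest {
  public static URL stopServiceUrl(String endUser) throws Exception {
    // Hypothetical example: the end user rides along as user.name;
    // with Kerberos the parameter is omitted and SPNEGO is used instead.
    return new URL("http://rm-host:8088/app/v1/services/httpd-hrt-qa-n"
        + "?user.name=" + endUser);
  }
}
{code}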


> Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404
> -
>
> Key: YARN-7827
> URL: https://issues.apache.org/jira/browse/YARN-7827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7827.001.patch
>
>
> Steps:
> 1) Enable Ats v2
> 2) Start Httpd Yarn service
> 3) Go to UI2 attempts page for yarn service 
> 4) Click on setting icon
> 5) Click on stop service
> 6) This action will pop up a box to confirm stop. click on "Yes"
> Expected behavior:
> Yarn service should be stopped
> Actual behavior:
> Yarn UI is not notifying on whether Yarn service is stopped or not.
> On checking network stack trace, the PUT request failed with HTTP error 404
> {code}
> Sorry, got error 404
> Please consult RFC 2616 for meanings of the error code.
> Error Details
> org.apache.hadoop.yarn.webapp.WebAppException: /v1/services/httpd-hrt-qa-n: 
> controller for v1 not found
>   at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:247)
>   at org.apache.hadoop.yarn.webapp.Router.resolve(Router.java:155)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:143)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>   at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>   at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>   at 
> com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> 

[jira] [Commented] (YARN-7858) Support special Node Attribute scopes in addition to NODE and RACK

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345967#comment-16345967
 ] 

Weiwei Yang commented on YARN-7858:
---

Hi [~asuresh], thanks for filing this. I've been working on YARN-3409, trying 
to bring node attributes alive, and would also like to work on this one since 
it is our ultimate goal. I am not sure if there is anything that can be done 
before YARN-3409 is ready; any suggestions?

> Support special Node Attribute scopes in addition to NODE and RACK
> --
>
> Key: YARN-7858
> URL: https://issues.apache.org/jira/browse/YARN-7858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Priority: Major
>
> Currently, we have only two scopes defined, NODE and RACK, against which we 
> check the cardinality of the placement.
> This idea should be extended to support node-attribute scopes, for example 
> the placement of containers across *upgrade domains* and *failure domains*. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-30 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345961#comment-16345961
 ] 

Miklos Szegedi commented on YARN-2185:
--

[~rohithsharma] I am sorry about the inconvenience.

[~jlowe], thank you for reverting the patch. I was able to reproduce this in a 
secure cluster and fixed it. Please review patch 14 with the fix.

 

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch, 
> YARN-2185.013.patch, YARN-2185.014.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
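
A minimal sketch of the streaming idea (an assumed helper, not the actual 
NodeManager implementation):
{code:java}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class PipedUntar {
  // Pipe the archive bytes straight into "tar -x" so unpacking overlaps
  // the download and the archive never lands on disk.
  public static void untarStream(InputStream archive, File destDir)
      throws IOException, InterruptedException {
    Process tar = new ProcessBuilder("tar", "-x", "-C",
        destDir.getAbsolutePath()).redirectErrorStream(true).start();
    try (OutputStream tarIn = tar.getOutputStream()) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = archive.read(buf)) != -1) {
        tarIn.write(buf, 0, n);
      }
    }
    if (tar.waitFor() != 0) {
      throw new IOException("tar exited with " + tar.exitValue());
    }
  }
}
{code}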



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM

2018-01-30 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7765:
--
Reporter: Sumana Sathish  (was: Rohith Sharma K S)

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM
> 
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sumana Sathish
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7765.01.patch, YARN-7765.02.patch
>
>
> A secure cluster is deployed and all YARN services are started successfully. 
> When an application is submitted, the app collectors, which are started as 
> an aux-service, throw the exception below. But this exception is *NOT* 
> observed from the RM TimelineCollector. 
> The cluster is deployed with Hadoop-3.0 and HBase-1.2.6 in secure mode. All 
> the YARN and HBase services are started and working perfectly fine. After 24 
> hours, i.e. when the token lifetime has expired, the HBaseClient in the NM 
> and the HDFSClient in the HMaster and HRegionServer start getting this 
> error. After some time, the HBase daemons shut down. In the NM, the JVM 
> didn't shut down, but none of the events got published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2018-01-30 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-2185:
-
Attachment: YARN-2185.014.patch

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch, 
> YARN-2185.013.patch, YARN-2185.014.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7858) Support special Node Attribute scopes in addition to NODE and RACK

2018-01-30 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7858:
-

 Summary: Support special Node Attribute scopes in addition to NODE 
and RACK
 Key: YARN-7858
 URL: https://issues.apache.org/jira/browse/YARN-7858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh


Currently, we have only two scopes defined: NODE and RACK against which we 
check the cardinality of the placement.

This idea should be extended to support node-attribute scopes. For eg: 
Placement of containers across *upgrade domains* and *failure domains*. 
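
To illustrate the proposal, here is a hedged sketch against the placement 
constraints API; the "failure_domain" scope is hypothetical (it is exactly what 
this JIRA proposes and is not supported by the current API):
{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.cardinality;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class ScopeExamples {
  public static void main(String[] args) {
    // Supported today: at most one "hbase-rs" container per node / per rack.
    PlacementConstraint perNode =
        PlacementConstraints.build(cardinality(NODE, 0, 1, "hbase-rs"));
    PlacementConstraint perRack =
        PlacementConstraints.build(cardinality(RACK, 0, 1, "hbase-rs"));

    // Proposed here (hypothetical): use a node attribute such as
    // "failure_domain" as the scope, so cardinality is checked per distinct
    // attribute value rather than per node or per rack.
    PlacementConstraint perFailureDomain =
        PlacementConstraints.build(cardinality("failure_domain", 0, 1, "hbase-rs"));
  }
}
{code}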




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7856) Validation and error handling when handling NM-RM with regard to node-attributes

2018-01-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7856:
--
Description: When the NM reports its distributed attributes to the RM, the RM 
needs to do proper validation of the received attributes; if attributes are not 
valid or fail to update, the RM needs to notify the NM about such failures. The 
same validation also needs to be done at NM registration.  (was: When NM 
reports its distributed attributes to RM, RM needs to do proper validation of 
the received attributes, if attributes were not valid or failed to update, RM 
needs to notify NM about such failures.)

> Validation and error handling when handling NM-RM with regard to 
> node-attributes
> ---
>
> Key: YARN-7856
> URL: https://issues.apache.org/jira/browse/YARN-7856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> When the NM reports its distributed attributes to the RM, the RM needs to do 
> proper validation of the received attributes; if attributes are not valid or 
> fail to update, the RM needs to notify the NM about such failures. The same 
> validation also needs to be done at NM registration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345944#comment-16345944
 ] 

Weiwei Yang commented on YARN-7822:
---

Thanks [~asuresh], that sounds nice to me.

> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: YARN-7812
>
> Attachments: YARN-7822-YARN-6592.001.patch, 
> YARN-7822-YARN-6592.002.patch, YARN-7822-YARN-6592.003.patch, 
> YARN-7822-YARN-6592.004.patch, YARN-7822-YARN-6592.005.patch, 
> YARN-7822-YARN-6592.006.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints.
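
For readers following along, the evaluation idea is straightforward recursion 
over the constraint tree; below is a hedged, self-contained sketch using 
hypothetical mini-types (not YARN's actual PlacementConstraint classes):
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

interface Constraint {
  boolean satisfiedBy(String node);
}

class Single implements Constraint {
  private final Predicate<String> check;
  Single(Predicate<String> check) { this.check = check; }
  public boolean satisfiedBy(String node) { return check.test(node); }
}

class And implements Constraint {
  private final List<Constraint> children;
  And(Constraint... children) { this.children = Arrays.asList(children); }
  public boolean satisfiedBy(String node) {
    // AND is satisfied only when every child constraint holds on this node.
    return children.stream().allMatch(c -> c.satisfiedBy(node));
  }
}

class Or implements Constraint {
  private final List<Constraint> children;
  Or(Constraint... children) { this.children = Arrays.asList(children); }
  public boolean satisfiedBy(String node) {
    // OR is satisfied when at least one child constraint holds on this node.
    return children.stream().anyMatch(c -> c.satisfiedBy(node));
  }
}
{code}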



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7819:
--
Attachment: YARN-7819-YARN-7812.001.patch

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch, 
> YARN-7819-YARN-7812.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7819:
--
Attachment: (was: YARN-7819-YARN-7812.001.patch)

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7819) Allow PlacementProcessor to be used with the FairScheduler

2018-01-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7819:
--
Attachment: YARN-7819-YARN-7812.001.patch

> Allow PlacementProcessor to be used with the FairScheduler
> --
>
> Key: YARN-7819
> URL: https://issues.apache.org/jira/browse/YARN-7819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-7819-YARN-6592.001.patch, 
> YARN-7819-YARN-7812.001.patch
>
>
> The FairScheduler needs to implement the 
> {{ResourceScheduler#attemptAllocationOnNode}} function for the processor to 
> support the FairScheduler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-01-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345920#comment-16345920
 ] 

Daniel Templeton commented on YARN-7739:


It would be nice to have the test not hard-coded to CS.  Would it be too much 
to ask to parameterize it?  You could even leave CS as the only parameter and 
file a JIRA to add FS.  Otherwise, I think it looks good.

> DefaultAMSProcessor should properly check customized resource types against 
> minimum/maximum allocation
> --
>
> Key: YARN-7739
> URL: https://issues.apache.org/jira/browse/YARN-7739
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7739.001.patch
>
>
> Currently, the YARN RM rejects a requested resource if memory or vcores are 
> less than 0 or greater than the maximum allocation. We should run the same 
> check for customized resource types as well.
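
A hedged sketch of the proposed check (illustrative only, not the actual patch; 
the real code would throw YARN's own request-validation exception rather than a 
plain IllegalArgumentException):
{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public final class ResourceSanityCheck {
  /** Rejects any resource value below zero or above the configured maximum. */
  public static void validate(Resource requested, Resource maximum) {
    // Index the maximum allocation by resource-type name.
    Map<String, Long> maxByName = new HashMap<>();
    for (ResourceInformation info : maximum.getResources()) {
      maxByName.put(info.getName(), info.getValue());
    }
    // Check every resource type in the request, not just memory and vcores.
    for (ResourceInformation info : requested.getResources()) {
      Long max = maxByName.get(info.getName());
      long value = info.getValue();
      if (max == null || value < 0 || value > max) {
        throw new IllegalArgumentException("Invalid value " + value
            + " for resource type " + info.getName()
            + " (maximum allocation: " + max + ")");
      }
    }
  }
}
{code}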



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-01-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345918#comment-16345918
 ] 

Daniel Templeton commented on YARN-7292:


I think these are still relevant questions and something I'd like to sort out 
for 3.1.  My thoughts:

1) I could go either way on client vs. server, but I think it's more useful to 
have it on the client side.  It makes the code significantly simpler on the 
server side and allows users to define their own profiles.

2) No, no, and no.  They're redundant with what's already specified by resource 
types.

3) Same question as #1.  I vote for client side.

4) If we're supporting server-side profiles, overrides are required in order 
for the profiles to be useful.  If we're going with client-side profiles, then 
overrides are moot.

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There're a 
> couple of resource profile related behaviors might need to be updated:
> 1) Configure resource profile in server side or client side: 
> Currently resource profile can be only configured centrally:
> - Advantages:
> A given resource profile has the same meaning in the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set the existing resource 
> fields? 
> 4) Related to above, should we allow resource overrides or client/AM should 
> send final resource to RM?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7852) FlowRunReader constructs min_start_time filter for both createdtimestart and createdtimeend.

2018-01-30 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-7852.
--
Resolution: Invalid

> FlowRunReader constructs min_start_time filter for both createdtimestart and 
> createdtimeend.
> 
>
> Key: YARN-7852
> URL: https://issues.apache.org/jira/browse/YARN-7852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
>
> {code:java}
> protected FilterList constructFilterListBasedOnFilters() throws IOException {
>     FilterList listBasedOnFilters = new FilterList();
>     // Filter based on created time range.
>     Long createdTimeBegin = getFilters().getCreatedTimeBegin();
>     Long createdTimeEnd = getFilters().getCreatedTimeEnd();
>     if (createdTimeBegin != 0 || createdTimeEnd != Long.MAX_VALUE) {
>   listBasedOnFilters.addFilter(TimelineFilterUtils
>   .createSingleColValueFiltersByRange(FlowRunColumn.MIN_START_TIME,
>   createdTimeBegin, createdTimeEnd));
>     }
>     // Filter based on metric filters.
>     TimelineFilterList metricFilters = getFilters().getMetricFilters();
>     if (metricFilters != null && !metricFilters.getFilterList().isEmpty()) {
>   listBasedOnFilters.addFilter(TimelineFilterUtils.createHBaseFilterList(
>   FlowRunColumnPrefix.METRIC, metricFilters));
>     }
>     return listBasedOnFilters;
>   }{code}
>  
> createdTimeEnd is used as an upper bound for MIN_START_TIME.  We should 
> create one filter based on createdTimeBegin and another based on 
> createdTimeEnd.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2018-01-30 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-2185:
-
Attachment: YARN-2185.013.patch

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch, 
> YARN-2185.013.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7844) Expose metrics for scheduler operation (allocate, schedulerEvent) to JMX

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345819#comment-16345819
 ] 

genericqa commented on YARN-7844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 20 new + 185 unchanged - 0 fixed = 205 total (was 185) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m  1s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue |
|   | hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerFairShare |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | 

[jira] [Commented] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345807#comment-16345807
 ] 

genericqa commented on YARN-7816:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 1 new + 78 unchanged - 
0 fixed = 79 total (was 78) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 27s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908400/YARN-7816.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3b08107f5cd3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f9dd5b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19531/artifact/out/diff-checkstyle-root.txt
 |
| unit 

[jira] [Issue Comment Deleted] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-30 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-7857:
--
Comment: was deleted

(was: The SIGSEGV reported in this Jira is a result of compiling with 
{{-fstack-check}}. )

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796]  appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721]
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-30 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-7857:
--
Comment: was deleted

(was: The {{-fcheck-stack}} flag was introduced by this Jira.)

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796]  appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721]
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-30 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345776#comment-16345776
 ] 

Jim Brennan commented on YARN-7857:
---

The SIGSEGV reported in this Jira is a result of compiling with 
{{-fstack-check}}. 

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796]  appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721]
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7857) -fstack-check compilation flag causes binary incompatibility for container-executor between RHEL 6 and RHEL 7

2018-01-30 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345771#comment-16345771
 ] 

Jim Brennan commented on YARN-7857:
---

The {{-fcheck-stack}} flag was introduced by this Jira.

> -fstack-check compilation flag causes binary incompatibility for 
> container-executor between RHEL 6 and RHEL 7
> -
>
> Key: YARN-7857
> URL: https://issues.apache.org/jira/browse/YARN-7857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> The segmentation fault in container-executor reported in [YARN-7796]  appears 
> to be due to a binary compatibility issue with the {{-fstack-check}} flag 
> that was added in [YARN-6721]
> Based on my testing, a container-executor (without the patch from 
> [YARN-7796]) compiled on RHEL 6 with the -fstack-check flag always hits this 
> segmentation fault when run on RHEL 7.  But if you compile without this flag, 
> the container-executor runs on RHEL 7 with no problems.  I also verified this 
> with a simple program that just does the copy_file.
> I think we need to either remove this flag, or find a suitable alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-30 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345671#comment-16345671
 ] 

Zian Chen edited comment on YARN-7626 at 1/30/18 7:44 PM:
--

Hi [~miklos.szeg...@cloudera.com], thank you so much for your comments. For the 
is_regex function, the missing "through" in the iteration, and the isUserMount 
boolean value change, I totally agree with your suggestions. I'll update the 
patch according to your comments.

For those two contradictions, I would like to explain in more detail. Line 853 
and lines 925-930 are inside the function check_mount_permitted. This function 
accepts two params as input: permitted_mounts are the mounts specified by the 
admin, against which the user mounts are checked, and requested are the user 
mounts. In the line
{code:java}
char *normalized_path = normalize_mount(requested, 0);{code}
we call the function normalize_mount, which leads to line 853. In this function 
we use "isUserMount" to distinguish whether the input mount is a user mount or 
a permitted mount; here, in this case, it's a user mount, so this part of the 
code will not be executed for the user mount:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}
{code}
Instead, we follow the original logic, which checks whether the user mount is a 
real path and, if not, verifies that it is a valid volume name, then takes the 
proper actions and returns normalized_path for the user mount.

Lines 925-930 are then used to check whether a permitted mount is a regular 
expression. If it is, we need to use regex matching to see if the current 
permitted_mount matches the normalized_path of the user mount; that's what 
lines 925-930 are trying to do. Let me give you an example.

Suppose one of the permitted_mounts is "^/usr/local/.*$", which is a regex, and 
the user mount is "//usr/local/bin/". When we call check_mount_permitted, we 
call normalize_mount for the user mount "//usr/local/bin/" and check whether it 
is a real path; it is, so the rest of the code normalizes the user mount into 
"/usr/local/bin/" and returns it as normalized_path. Then we go back to 
check_mount_permitted and loop over permitted_mounts; nothing can match 
"/usr/local/bin/" except the regex "^/usr/local/.*$", and that is because lines 
925-930 detect that this permitted mount is a regex and match it against 
"/usr/local/bin/" using regex matching.

However, the function does not always get a user mount as input, for example 
when we call normalize_mounts in these two lines inside add_mounts:
{code:java}
ret = normalize_mounts(permitted_ro_mounts, 1);
ret |= normalize_mounts(permitted_rw_mounts, 1);{code}
normalize_mount then gets permitted_mounts, which lets the code enter this 
part:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}{code}
In this situation, we only accept a permitted_mount as a regex when it's not a 
real path; otherwise, we don't allow it as a valid permitted_mount.

For the second contradiction, "this check should be before if 
(strcmp(normalized_path, permitted_mounts[i]) == 0)": the order of the first 
check and the second check does not matter, because the permitted_mounts array 
contains regex-format permitted_mounts as well as non-regex ones, and we try 
every possible permitted_mount to see whether it matches the current 
normalized_path, whether the permitted_mount is regex or non-regex; if it's 
non-regex, we use strcmp to match, and if it's regex, we use the function 
validate_volume_name_with_argument to match.

Hope my explanation helps with the understanding of this part of the code; 
further comments are highly welcome. [~leftnoteasy], what's your opinion?


was (Author: zian chen):
Hi [~miklos.szeg...@cloudera.com]  , thank you so much for your comments, for 
is_regex function, missing "through" for iteration and isUserMount boolean 
value change, I'm totally agreed with your suggestions. I'll update the patch 
according to your comments.

For those two contradictions, I would like to explain more in details, line 853 
and line 925-930 are inside function check_mount_permitted, this function 
accepts two params as input, permitted_mounts are mounts which are specified by 
admin which can be permitted for checking the user mounts, and requested are 
user mounts. in line
{code:java}
char *normalized_path = normalize_mount(requested, 0);{code}
we call function normalize_mount which will lead to line 853, in this function 
we use "isUsermount" to distinguish the input mount is user mount or permitted 
mount, here, in this case, it's user mount, 

[jira] [Comment Edited] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-30 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345671#comment-16345671
 ] 

Zian Chen edited comment on YARN-7626 at 1/30/18 7:43 PM:
--

Hi [~miklos.szeg...@cloudera.com], thank you so much for your comments. For the 
is_regex function, the missing "through" in the iteration, and the isUserMount 
boolean value change, I totally agree with your suggestions. I'll update the 
patch according to your comments.

For those two contradictions, I would like to explain in more detail. Line 853 
and lines 925-930 are inside the function check_mount_permitted. This function 
accepts two params as input: permitted_mounts are the mounts specified by the 
admin, against which the user mounts are checked, and requested are the user 
mounts. In the line
{code:java}
char *normalized_path = normalize_mount(requested, 0);{code}
we call the function normalize_mount, which leads to line 853. In this function 
we use "isUserMount" to distinguish whether the input mount is a user mount or 
a permitted mount; here, in this case, it's a user mount, so this part of the 
code will not be executed for the user mount:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}
{code}
Instead, we follow the original logic, which checks whether the user mount is a 
real path and, if not, verifies that it is a valid volume name, then takes the 
proper actions and returns normalized_path for the user mount.

Lines 925-930 are then used to check whether a permitted mount is a regular 
expression. If it is, we need to use regex matching to see if the current 
permitted_mount matches the normalized_path of the user mount; that's what 
lines 925-930 are trying to do. Let me give you an example.

Suppose one of the permitted_mounts is "^/usr/local/.*$", which is a regex, and 
the user mount is "//usr/local/bin/". When we call check_mount_permitted, we 
call normalize_mount for the user mount "//usr/local/bin/" and check whether it 
is a real path; it is, so the rest of the code normalizes the user mount into 
"/usr/local/bin/" and returns it as normalized_path. Then we go back to 
check_mount_permitted and loop over permitted_mounts; nothing can match 
"/usr/local/bin/" except the regex "^/usr/local/.*$", and that is because lines 
925-930 detect that this permitted mount is a regex and match it against 
"/usr/local/bin/" using regex matching.

However, the function does not always get a user mount as input, for example 
when we call normalize_mounts in these two lines inside add_mounts:
{code:java}
ret = normalize_mounts(permitted_ro_mounts, 1);
ret |= normalize_mounts(permitted_rw_mounts, 1);{code}
normalize_mount then gets permitted_mounts, which lets the code enter this 
part:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}{code}
In this situation, we only accept a permitted_mount as a regex when it's not a 
real path; otherwise, we don't allow it as a valid permitted_mount.

For the second contradiction, "this check should be before if 
(strcmp(normalized_path, permitted_mounts[i]) == 0)": the order of the first 
check and the second check does not matter, because the permitted_mounts array 
contains regex-format permitted_mounts as well as non-regex ones, and we try 
every possible permitted_mount to see whether it matches the current 
normalized_path, whether the permitted_mount is regex or non-regex; if it's 
non-regex, we use strcmp to match, and if it's regex, we use the function 
validate_volume_name_with_argument to match.

Hope my explanation helps with the understanding of this part of the code; 
further comments are highly welcome. [~leftnoteasy], what's your opinion?


was (Author: zian chen):
Hi [~miklos.szeg...@cloudera.com]  , thank you so much for your comments, for 
is_regex function, missing "through" for iteration and isUserMount boolean 
value change, I'm totally agreed with your suggestions. I'll update the patch 
according to your comments.

For those two contradictions, I would like to explain more in details, line 853 
and line 925-930 are inside function check_mount_permitted, this function 
accepts two params as input, permitted_mounts are mounts which are specified by 
admin which can be permitted for checking the user mounts, and requested are 
user mounts. in line
{code:java}
char *normalized_path = normalize_mount(requested, 0);{code}
we call function normalize_mount which will lead to line 853, in this function 
we use "isUsermount" to distinguish the input mount is user mount or permitted 
mount, here, in this case, it's user 

[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-30 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345671#comment-16345671
 ] 

Zian Chen commented on YARN-7626:
-

Hi [~miklos.szeg...@cloudera.com], thank you so much for your comments. For the 
is_regex function, the missing "through" in the iteration, and the isUserMount 
boolean value change, I totally agree with your suggestions. I'll update the 
patch according to your comments.

For those two contradictions, I would like to explain in more detail. Line 853 
and lines 925-930 are inside the function check_mount_permitted. This function 
accepts two params as input: permitted_mounts are the mounts specified by the 
admin, against which the user mounts are checked, and requested are the user 
mounts. In the line
{code:java}
char *normalized_path = normalize_mount(requested, 0);{code}
we call the function normalize_mount, which leads to line 853. In this function 
we use "isUserMount" to distinguish whether the input mount is a user mount or 
a permitted mount; here, in this case, it's a user mount, so this part of the 
code will not be executed for the user mount:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}
{code}
Instead, we follow the original logic, which checks whether the user mount is a 
real path and, if not, verifies that it is a valid volume name, then takes the 
proper actions and returns normalized_path for the user mount.

Lines 925-930 are then used to check whether a permitted mount is a regular 
expression. If it is, we need to use regex matching to see if the current 
permitted_mount matches the normalized_path of the user mount; that's what 
lines 925-930 are trying to do. Let me give you an example.

Suppose one of the permitted_mounts is "^/usr/local/.*$", which is a regex, and 
the user mount is "//usr/local/bin/". When we call check_mount_permitted, we 
call normalize_mount for the user mount "//usr/local/bin/" and check whether it 
is a real path; it is, so the rest of the code normalizes the user mount into 
"/usr/local/bin/" and returns it as normalized_path. Then we go back to 
check_mount_permitted and loop over permitted_mounts; nothing can match 
"/usr/local/bin/" except the regex "^/usr/local/.*$", and that is because lines 
925-930 detect that this permitted mount is a regex and match it against 
"/usr/local/bin/" using regex matching.

However, the function does not always get a user mount as input, for example 
when we call normalize_mounts in these two lines inside add_mounts:
{code:java}
ret = normalize_mounts(permitted_ro_mounts, 1);
ret |= normalize_mounts(permitted_rw_mounts, 1);{code}
normalize_mount then gets permitted_mounts, which lets the code enter this 
part:
{code:java}
// we only allow permitted mount to be REGEX, for permitted mount, we check
// if it's a valid REGEX return; for user mount, we need to strictly check
if (isUserMount != 0) {
  if (is_regex(mount) == 0) {
    return strdup(mount);
  }
}{code}
In this situation, we only accept a permitted_mount as a regex when it's not a 
real path; otherwise, we don't allow it as a valid permitted_mount.

For the second contradiction, "this check should be before if 
(strcmp(normalized_path, permitted_mounts[i]) == 0)": the order of the first 
check and the second check does not matter, because the permitted_mounts array 
contains regex-format permitted_mounts as well as non-regex ones, and we try 
every possible permitted_mount to see whether it matches the current 
normalized_path, whether the permitted_mount is regex or non-regex; if it's 
non-regex, we use strcmp to match, and if it's regex, we use the function 
validate_volume_name_with_argument to match.

Hope my explanation helps with the understanding of this part of the code; 
further comments are highly welcome. Wangda Tan, what's your opinion?

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU-device-related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



[jira] [Commented] (YARN-7844) Expose metrics for scheduler operation (allocate, schedulerEvent) to JMX

2018-01-30 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345657#comment-16345657
 ] 

Wei Yan commented on YARN-7844:
---

Uploaded a new diff, 001.patch, to fix some checkstyle and findbugs issues. 
[~yufeigu] [~leftnoteasy], any comments?

> Expose metrics for scheduler operation (allocate, schedulerEvent) to JMX
> 
>
> Key: YARN-7844
> URL: https://issues.apache.org/jira/browse/YARN-7844
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-7844.000.patch, YARN-7844.001.patch
>
>
> Currently FairScheduler's FSOpDurations records some scheduler operation 
> metrics: nodeUpdateCall, preemptCall, etc. We may need something similar for 
> CapacityScheduler. Also, we need to add more metrics there. This could help 
> monitor RM scheduler performance and give more insight into whether the 
> scheduler is under pressure.
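
For reference, FSOpDurations-style metrics are built on Hadoop's metrics2 
library, which publishes registered sources to JMX. Below is a hedged sketch of 
what such a scheduler-operation source could look like; the class and metric 
names are illustrative, not the patch's actual code:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(about = "Scheduler operation durations", context = "yarn")
public class SchedulerOpDurations {
  @Metric("Duration of allocate() calls")
  MutableRate allocateCall;

  @Metric("Duration of scheduler event handling")
  MutableRate schedulerEventCall;

  public static SchedulerOpDurations create() {
    // Registering the source makes its metrics visible through JMX.
    return DefaultMetricsSystem.instance().register(
        "SchedulerOpDurations", "Durations of scheduler operations",
        new SchedulerOpDurations());
  }

  public void addAllocateCallDuration(long durationMs) {
    allocateCall.add(durationMs);
  }
}
{code}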



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7844) Expose metrics for scheduler operation (allocate, schedulerEvent) to JMX

2018-01-30 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-7844:
--
Attachment: YARN-7844.001.patch

> Expose metrics for scheduler operation (allocate, schedulerEvent) to JMX
> 
>
> Key: YARN-7844
> URL: https://issues.apache.org/jira/browse/YARN-7844
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-7844.000.patch, YARN-7844.001.patch
>
>
> Currently FairScheduler's FSOpDurations records some scheduler operation 
> metrics: nodeUpdateCall, preemptCall, etc. We may need something similar for 
> CapacityScheduler. Also, we need to add more metrics there. This could help 
> monitor RM scheduler performance and give more insight into whether the 
> scheduler is under pressure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2018-01-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345645#comment-16345645
 ] 

Jason Lowe commented on YARN-7677:
--

Thanks for the patch!

+1 lgtm.  I will commit this tomorrow if there are no objections.

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7677.001.patch, YARN-7677.002.patch
>
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7794) SLSRunner is not loading timeline service jars causing failure

2018-01-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7794:
---
Priority: Blocker  (was: Major)

> SLSRunner is not loading timeline service jars causing failure
> --
>
> Key: YARN-7794
> URL: https://issues.apache.org/jira/browse/YARN-7794
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.1.0
>Reporter: Sunil G
>Assignee: Rohith Sharma K S
>Priority: Blocker
>
> {code:java}
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         ... 13 more
> Exception in thread "pool-2-thread-390" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:443)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:321)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:641){code}
> We are getting this error while running SLS. The new timelineservice jars 
> under share/hadoop/yarn are not loaded into the SLS JVM (verified from the 
> SLSRunner classpath).
> cc/ [~rohithsharma]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7816:

Description: Now that YARN-7605 is committed, I am able to create a service 
in an unsecured cluster from cmd line as the logged-in user. However, after 
creating an app named "myapp", say as user A, and then logging in as a 
different user B, I am unable to create a service of the exact same name 
("myapp" in this case). This feature should be supported in a multi-user 
setup.  (was: Now that YARN-7605 is committed, I am able to create a service 
in an unsecured cluster from cmd line as the logged in user. However when I 
login as a different user, I am unable to create a service of the exact same 
name. This feature should be supported in a multi-user setup.)

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from cmd line as the logged-in user. However, after 
> creating an app named "myapp", say as user A, and then logging in as a 
> different user B, I am unable to create a service of the exact same name 
> ("myapp" in this case). This feature should be supported in a multi-user 
> setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-30 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7816:

Attachment: YARN-7816.001.patch

> YARN Service - Two different users are unable to launch a service of the same 
> name
> --
>
> Key: YARN-7816
> URL: https://issues.apache.org/jira/browse/YARN-7816
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7816.001.patch
>
>
> Now that YARN-7605 is committed, I am able to create a service in an 
> unsecured cluster from cmd line as the logged in user. However when I login 
> as a different user, I am unable to create a service of the exact same name. 
> This feature should be supported in a multi-user setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-30 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345599#comment-16345599
 ] 

Konstantinos Karanasos commented on YARN-7780:
--

Thanks [~asuresh], [~cheersyang], [~sunilg], [~leftnoteasy].

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch, YARN-7780-YARN-6592.003.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7843) Container Localizer is failing with NPE

2018-01-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345571#comment-16345571
 ] 

Jason Lowe commented on YARN-7843:
--

The NoSuchMethodError implies some code is running with different jars than it 
was compiled with.  unJarAndSave is a method that was added by YARN-2185, so 
this looks like a problem where the yarn-common jar has the code for YARN-2185 
but the hadoop-common code does not.
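
One quick way to confirm such a mismatch (a hedged debugging aid, not from the 
JIRA itself) is to print which jar a suspect class was actually loaded from:
{code:java}
import org.apache.hadoop.util.RunJar;

public class WhichJar {
  public static void main(String[] args) {
    // Prints the jar (or directory) on the classpath that supplied RunJar;
    // probing RunJar is an assumption here, chosen as a hadoop-common utility
    // class touched by archive handling.
    System.out.println(
        RunJar.class.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}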


> Container Localizer is failing with NPE
> ---
>
> Key: YARN-7843
> URL: https://issues.apache.org/jira/browse/YARN-7843
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> Container localizers are failing with an NPE; as a result, none of the 
> containers are getting launched!
> {noformat}
> Caused by: java.lang.NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl.getPathForLocalization(LocalResourcesTrackerImpl.java:503)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.getPathForLocalization(ResourceLocalizationService.java:1189)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.processHeartbeat(ResourceLocalizationService.java:1153)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:753)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:371)
> at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:48)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


