[jira] [Reopened] (YARN-7142) Support placement policy in yarn native services

2018-04-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-7142:
--

Thanks [~gsaha], reopening to run Jenkins 

> Support placement policy in yarn native services
> 
>
> Key: YARN-7142
> URL: https://issues.apache.org/jira/browse/YARN-7142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7142-branch-3.1.004.patch, YARN-7142.001.patch, 
> YARN-7142.002.patch, YARN-7142.003.patch, YARN-7142.004.patch
>
>
> Placement policy exists in the API but is not implemented yet.
> I have filed YARN-8074 to move the composite constraints implementation out 
> of this phase-1 implementation of placement policy.






[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427945#comment-16427945
 ] 

Haibo Chen commented on YARN-8107:
--

Yes, I was thinking more along the lines of 1), but we should include more 
complicated cases to be comprehensive. Can we change 2) to

"((key11 ne 234 AND key12 eq val12) AND " + "(key13eq OR key14 eq va14))"? 
Notice the malformed condition is in the middle.
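
For reference, a minimal sketch of the behavior under discussion: a malformed 
condition anywhere inside a compound expression should fail parsing fast, and 
the web layer should then answer 400 Bad Request instead of the NPE shown in 
the quoted description below. Everything here is illustrative; parseFilters is 
a hypothetical stand-in for the real timeline filter parser, and only the 
TimelineParseException name mirrors the actual code.

{code}
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;

public final class FilterExprTo400Sketch {

  static class TimelineParseException extends Exception {
    TimelineParseException(String msg) { super(msg); }
  }

  // Hypothetical stand-in for the real parser: reject any condition token
  // that is not of the form "<key> <op> <value>".
  static void parseFilters(String expr) throws TimelineParseException {
    for (String cond : expr.split("AND|OR")) {
      if (cond.replaceAll("[()\"]", " ").trim().split("\\s+").length != 3) {
        throw new TimelineParseException("Malformed condition: " + cond.trim());
      }
    }
  }

  // Web layer: surface parse failures as 400 Bad Request, never as an NPE.
  static void handle(String expr) {
    try {
      parseFilters(expr);
    } catch (TimelineParseException e) {
      throw new WebApplicationException(Response
          .status(Response.Status.BAD_REQUEST).entity(e.getMessage()).build());
    }
  }

  public static void main(String[] args) {
    handle("key11 ne 234 AND key12 eq val12");  // parses fine
    try {
      handle("((key11 ne 234 AND key12 eq val12) AND (key13eq OR key14 eq va14))");
    } catch (WebApplicationException e) {
      System.out.println("HTTP " + e.getResponse().getStatus());  // HTTP 400
    }
  }
}
{code}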

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> 

[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-05 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427940#comment-16427940
 ] 

Bibin A Chundatt commented on YARN-8100:


[~sunilg]

{quote}
ln no 932 - 949 : we can have a single method getAttributesToNodes which takes 
a Set as argument; if it is null or empty then we can return mappings for all 
the attributes. I understand that it is in parlance with the labels API, but it 
was not clear why 2 APIs are required with hardly any difference. If not, at 
least we can have different method names like getAttributeMapping
{quote}
Could you please share your thoughts on this point? I had planned to keep it in 
parlance with the node label API. In the next update I will be able to handle 
this point too.

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch
>
>
> This JIRA is to add an API to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set attribute)
> getClusterAttributes()
> {code}






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427937#comment-16427937
 ] 

Rohith Sharma K S commented on YARN-8073:
-

[~vrushalic] shall I cherry-pick to the above branches? It applies without any 
conflicts. It seems the branch-2 Jenkins issue is not going to be solved 
immediately.

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_.
>  A couple of issues observed in deployment:
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This 
> causes writes to the v1.5 APIs to be skipped even though 
> _yarn.timeline-service.versions_ has the value _1.5_.
>  # Along the same lines as the 1st point, 
> TimelineUtils#timelineServiceV1_5Enabled doesn't honor 
> timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271, checks the version 
> number rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc: [~agresch]
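
For context, a hedged sketch of the intended lookup order (the property names 
are the real ones named above; the helper itself is illustrative and not the 
actual TimelineUtils code):

{code}
import java.util.Arrays;

public final class TimelineVersionsSketch {
  // Honor the plural yarn.timeline-service.versions list first; fall back to
  // the legacy single-valued yarn.timeline-service.version only when the list
  // is unset.
  static boolean versionEnabled(String versionsValue, float versionValue,
      float wanted) {
    if (versionsValue != null && !versionsValue.trim().isEmpty()) {
      return Arrays.stream(versionsValue.split(","))
          .map(String::trim)
          .anyMatch(v -> Float.parseFloat(v) == wanted);
    }
    return versionValue == wanted;
  }

  public static void main(String[] args) {
    System.out.println(versionEnabled("1.5,2.0", 2.0f, 1.5f)); // true
    System.out.println(versionEnabled(null, 2.0f, 1.5f));      // false: legacy fallback
  }
}
{code}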






[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427934#comment-16427934
 ] 

Rohith Sharma K S commented on YARN-6936:
-

Thanks [~haibochen] and [~vrushalic] for the discussion and for reviewing the 
patch.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if 
> the doAs user and the submitting user are different. This holds good for Tez 
> kind of use cases. But AMs that run as the same user as the submitting user, 
> like MR, also need to store entities in the sub application table so that 
> entities can be read without an application id.
> This would be a point of concern at later stages when ATSv2 is deployed into 
> production. This JIRA is to retrospect the decision of storing entities into 
> the sub application table, making it driven by client-side configuration 
> rather than by user.
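
A hedged sketch of the decision being retrospected (the client-side opt-in 
flag is hypothetical; only the user comparison reflects the current YARN-6734 
behavior):

{code}
public final class SubAppTableDecisionSketch {
  // Current behavior: write when doAs user != submitting user. The proposal
  // adds a client-side opt-in so same-user AMs (e.g. MR) can also use the
  // sub application table.
  static boolean writeToSubAppTable(String doAsUser, String submittedUser,
      boolean clientOptIn /* hypothetical client-side switch */) {
    return clientOptIn || !doAsUser.equals(submittedUser);
  }

  public static void main(String[] args) {
    System.out.println(writeToSubAppTable("tezTask", "alice", false)); // true
    System.out.println(writeToSubAppTable("alice", "alice", true));    // true via opt-in
  }
}
{code}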






[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427903#comment-16427903
 ] 

Suma Shivaprasad commented on YARN-7574:


Attached a patch with the UT fixed.

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.2.patch, YARN-7574.3.patch, YARN-7574.4.patch, YARN-7574.5.patch, 
> YARN-7574.6.patch, YARN-7574.7.patch, YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels.
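
For illustration, a hedged sketch of what such a template could look like as 
Configuration keys (the leaf-queue-template prefix is the existing convention 
for auto created queues; the accessible-node-labels forms are assumptions 
based on this JIRA's goal, not committed property names):

{code}
import org.apache.hadoop.conf.Configuration;

public final class LabelTemplateSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String parent = "yarn.scheduler.capacity.root.parent.";
    conf.set(parent + "auto-create-child-queue.enabled", "true");
    // Default-partition capacity for auto created leaves.
    conf.set(parent + "leaf-queue-template.capacity", "10");
    // Assumed per-label keys: give auto created leaves a different capacity
    // on the "gpu" partition than on the default one.
    conf.set(parent + "leaf-queue-template.accessible-node-labels", "gpu");
    conf.set(parent + "leaf-queue-template.accessible-node-labels.gpu.capacity",
        "50");
  }
}
{code}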






[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.10.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.2.patch, YARN-7574.3.patch, YARN-7574.4.patch, YARN-7574.5.patch, 
> YARN-7574.6.patch, YARN-7574.7.patch, YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels.






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427866#comment-16427866
 ] 

genericqa commented on YARN-7221:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 25 unchanged - 0 fixed = 26 total (was 25) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7221 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917802/YARN-7221.020.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux d57a7187575c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6cf023f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20248/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20248/testReport/ |
| Max. process+thread count | 289 (vs. ulimit of 

[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427851#comment-16427851
 ] 

Rohith Sharma K S commented on YARN-8107:
-

bq. What happens if the filter is multiple conditions and only one condition is 
malformed?
It should be handled by the previous check, i.e. the 1st if condition. Can you 
provide an example expression so that I can add it to the tests? Is it 
something like this?
# expr = "abc gt 234 AND defeq";
# expr = "((key11 ne 234 AND key12 eq val12) AND " + "(key13 ene val13 OR 
key14eq))";
{code}
if (!filterListStack.isEmpty()) {
  filterListStack.clear();
  throw new TimelineParseException("Encountered improper brackets while " +
      "parsing " + exprName + ".");
}
if (currentParseState == ParseState.PARSING_VALUE) {
  setValueToCurrentFilter(
      parseValue(expr.substring(kvStartOffset, offset)));
}

if (filterList == null || filterList.getFilterList().isEmpty()) {
  if (currentFilter == null) {
    throw new TimelineParseException("Invalid expression provided.");
  } else {
    filterList = new TimelineFilterList(currentFilter);
  }
} else if (currentFilter != null) {
  filterList.addFilter(currentFilter);
}
{code}

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> 

[jira] [Updated] (YARN-7875) Node Attribute store for storing and recovering attributes

2018-04-05 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7875:
--
Summary: Node Attribute store for storing and recovering attributes  (was: 
AttributeStore for store and recover attributes)

> Node Attribute store for storing and recovering attributes
> --
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Fix For: YARN-3409
>
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch, 
> YARN-7875-YARN-3409.008.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.
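
A hedged sketch of the store surface implied here (method names are 
assumptions; the real interface is defined on the YARN-3409 branch):

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Sketch only: like NodeLabelStore, persist host -> attributes mappings and
// replay them when the RM restarts.
public interface NodeAttributeStoreSketch extends Closeable {
  void replaceNodeAttributes(Map<String, Set<String>> hostToAttributes)
      throws IOException;
  void recover() throws IOException; // replay persisted mappings on restart
}
{code}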






[jira] [Commented] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427836#comment-16427836
 ] 

Sunil G commented on YARN-8119:
---

Thank you [~rohithsharma] .

> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme. Since the UI already takes care of the URL scheme, this needs to be 
> skipped.
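
Illustrative only (UI2's actual fix lives in the Ember layer, not in Java): 
the normalization amounts to stripping any leading scheme so the UI can apply 
its own:

{code}
public final class StripSchemeSketch {
  static String stripScheme(String address) {
    return address.replaceFirst("^https?://", "");
  }

  public static void main(String[] args) {
    // prints timeline.host:8190
    System.out.println(stripScheme("https://timeline.host:8190"));
  }
}
{code}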






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427832#comment-16427832
 ] 

Eric Yang commented on YARN-7221:
-

Patch 20 fixed test case errors.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch, YARN-7221.020.patch
>
>
> When a Docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges to 
> the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not have access to become the 
> root user. All docker exec commands will drop to the uid:gid user instead of 
> granting privileges. A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous. A non-privileged user can launch a container with 
> special bits to acquire the same level of root power. Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images. As a result, we should check for sudo access 
> and then decide whether to parameterize --privileged=true OR --user=uid:gid. 
> This will avoid leading developers down the wrong path.
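
A hedged sketch of the launch-flag decision described above (hasSudoAccess is 
a hypothetical check; the real implementation would consult the 
container-executor configuration):

{code}
import java.util.ArrayList;
import java.util.List;

public final class PrivilegedFlagSketch {
  // Hypothetical: would consult container-executor.cfg / sudoers in reality.
  static boolean hasSudoAccess(String user) {
    return "admin".equals(user);
  }

  // Emit either --privileged=true (after the sudo check) or --user=uid:gid,
  // never both, per the reasoning above.
  static List<String> dockerRunFlags(String user, boolean privileged,
      String uid, String gid) {
    List<String> flags = new ArrayList<>();
    if (privileged) {
      if (!hasSudoAccess(user)) {
        throw new SecurityException(
            user + " is not allowed to run privileged containers");
      }
      flags.add("--privileged=true");
    } else {
      flags.add("--user=" + uid + ":" + gid);
    }
    return flags;
  }
}
{code}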






[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.020.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch, YARN-7221.020.patch
>
>
> When a Docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges to 
> the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not have access to become the 
> root user. All docker exec commands will drop to the uid:gid user instead of 
> granting privileges. A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous. A non-privileged user can launch a container with 
> special bits to acquire the same level of root power. Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images. As a result, we should check for sudo access 
> and then decide whether to parameterize --privileged=true OR --user=uid:gid. 
> This will avoid leading developers down the wrong path.






[jira] [Updated] (YARN-8122) Component health threshold monitor

2018-04-05 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8122:

Attachment: YARN-8122.draft.patch

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.draft.patch
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.
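
A hedged sketch of the threshold check such a monitor would run periodically 
(names and knobs are assumptions; the draft patch defines the real ones):

{code}
public final class HealthThresholdSketch {
  // A component is flagged when its ready fraction falls below the
  // configured percentage.
  static boolean belowThreshold(int readyInstances, int desiredInstances,
      double thresholdPercent) {
    return desiredInstances > 0
        && (100.0 * readyInstances / desiredInstances) < thresholdPercent;
  }

  public static void main(String[] args) {
    System.out.println(belowThreshold(3, 10, 50.0)); // true -> flag component
  }
}
{code}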






[jira] [Commented] (YARN-8122) Component health threshold monitor

2018-04-05 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427814#comment-16427814
 ] 

Gour Saha commented on YARN-8122:
-

Attaching a draft patch. I am working on testing it out and writing tests. 
Early reviews/feedback are welcome.

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.






[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427809#comment-16427809
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 61 new + 545 unchanged - 23 fixed = 606 total (was 568) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917791/YARN-7574.9.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c66910ca3ec6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6cf023f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20245/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427804#comment-16427804
 ] 

genericqa commented on YARN-7667:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917789/YARN-7667.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 45caf070e77a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427802#comment-16427802
 ] 

genericqa commented on YARN-7221:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 52s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7221 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917794/YARN-7221.019.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 2262e590d88d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6cf023f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20247/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20247/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-05 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427780#comment-16427780
 ] 

Naganarasimha G R commented on YARN-8100:
-

Thanks for the patch [~bibinchundatt],
 Overall the patch looks fine except for these minor issues:

ApplicationClientProtocol, GetAttributesToNodesRequest, 
GetAttributesToNodesResponse, applicationclient_protocol.proto, 
yarn_protos.proto, yarn_service_protos.proto, YarnClient.java, 
ApplicationClientProtocolPBServiceImpl.java etc.: getAttributeToNodes is used 
in some places and getAttributesToNodes in others. I would prefer the former, 
but in any case it is better to be uniform across the codebase.

YarnClient.java
 * ln no 932 - 949 : we can have a single method getAttributesToNodes which 
takes a Set as argument; if it is null or empty then we can return mappings 
for all the attributes (a sketch follows this list). I understand that it is 
in parlance with the labels API, but it was not clear why 2 APIs are required 
with hardly any difference. If not, at least we can have different method 
names like getAttributeMapping
 * ln no 938 : it should be specified attribute(s); also, based on the 
previous comment, please update the documentation.
 * ln no 906 : it can be worded as "to get all unique node attributes 
configured in the cluster".
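
For illustration, a minimal sketch of the single-method shape proposed in the 
first bullet (types are simplified to strings; the real signatures use YARN's 
NodeAttribute classes, and the generic parameters are assumptions since the 
mail archive stripped the angle brackets):

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch only: one query method where a null or empty attribute set means
// "return the node mapping for every attribute in the cluster".
public interface AttributeQuerySketch {
  Map<String, List<String>> getAttributesToNodes(Set<String> attributes)
      throws IOException;
}
{code}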

GetAttributesToNodesResponse
 * ln no 50 : 55 : the javadoc does not map to the API.

GetAttributesToNodesRequestPBImpl
 * ln no 131 : need to modify nodeLabelsList => nodeAttributesList

GetAttributesToNodesResponsePBImpl
 * ln no 60 : need to modify initLabelsToNodes => initAttributesToNodes
 * ln no 85 : addLabelsToNodesToProto => addAttributesToNodesToProto

NodeAttributesManagerImpl
 * ln no 359 : comment not required

There are many applicable checkstyle issues; hope to see them addressed in the 
next patch.

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch
>
>
> This JIRA is to add an API to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set attribute)
> getClusterAttributes()
> {code}






[jira] [Assigned] (YARN-8122) Component health threshold monitor

2018-04-05 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-8122:
---

Assignee: Gour Saha

I am working on a patch for this. Will upload it soon.

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.






[jira] [Created] (YARN-8122) Component health threshold monitor

2018-04-05 Thread Gour Saha (JIRA)
Gour Saha created YARN-8122:
---

 Summary: Component health threshold monitor
 Key: YARN-8122
 URL: https://issues.apache.org/jira/browse/YARN-8122
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha


Slider supported component health threshold monitoring with SLIDER-1246. It 
would be good to have this feature for YARN Service too.






[jira] [Commented] (YARN-7142) Support placement policy in yarn native services

2018-04-05 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427759#comment-16427759
 ] 

Gour Saha commented on YARN-7142:
-

/cc [~leftnoteasy]

> Support placement policy in yarn native services
> 
>
> Key: YARN-7142
> URL: https://issues.apache.org/jira/browse/YARN-7142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7142-branch-3.1.004.patch, YARN-7142.001.patch, 
> YARN-7142.002.patch, YARN-7142.003.patch, YARN-7142.004.patch
>
>
> Placement policy exists in the API but is not implemented yet.
> I have filed YARN-8074 to move the composite constraints implementation out 
> of this phase-1 implementation of placement policy.






[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427753#comment-16427753
 ] 

genericqa commented on YARN-8064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 26 unchanged - 0 fixed = 28 total (was 26) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
23s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Null passed for non-null parameter of 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerCommand,
 Container, Context) in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
  Method invoked at DockerLinuxContainerRuntime.java:[line 891] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8064 |
| JIRA Patch URL | 

[jira] [Comment Edited] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427739#comment-16427739
 ] 

Eric Yang edited comment on YARN-7221 at 4/5/18 11:31 PM:
--

I agree with Billie that we want to check for the submitting user.  I did not 
know that ctx.getExecutionAttribute(RUN_AS_USER) is mapped to 
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user when the 
user remapping feature is enabled.  Patch 19 includes Billie's fix.


was (Author: eyang):
I agree with Billie that we want to check for the submitting user.  I did not 
know that ctx.getExecutionAttribute(RUN_AS_USER) is mapped to 
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user when the 
user remapping feature is enabled.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to 
> another user, e.g. httpd starting privileged so it can bind to port 80, 
> then dropping privileges to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not be able to become the 
> root user. All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges. A user can gain root privileges if the 
> container file system contains files that grant the user extra power, but 
> this type of image is considered dangerous. A non-privileged user can 
> launch a container with special bits to acquire the same level of root 
> power. Hence, we lose control of which images should be run with 
> --privileged, and of who has sudo rights to use privileged container 
> images. As a result, we should check for sudo access and then decide 
> whether to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.






[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.019.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch, YARN-7221.019.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to 
> another user, e.g. httpd starting privileged so it can bind to port 80, 
> then dropping privileges to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not be able to become the 
> root user. All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges. A user can gain root privileges if the 
> container file system contains files that grant the user extra power, but 
> this type of image is considered dangerous. A non-privileged user can 
> launch a container with special bits to acquire the same level of root 
> power. Hence, we lose control of which images should be run with 
> --privileged, and of who has sudo rights to use privileged container 
> images. As a result, we should check for sudo access and then decide 
> whether to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427739#comment-16427739
 ] 

Eric Yang commented on YARN-7221:
-

I agree with Billie that we want to check for the submitting user.  I did not 
know that ctx.getExecutionAttribute(RUN_AS_USER) is mapped to 
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user when the 
user remapping feature is enabled.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to 
> another user, e.g. httpd starting privileged so it can bind to port 80, 
> then dropping privileges to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not be able to become the 
> root user. All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges. A user can gain root privileges if the 
> container file system contains files that grant the user extra power, but 
> this type of image is considered dangerous. A non-privileged user can 
> launch a container with special bits to acquire the same level of root 
> power. Hence, we lose control of which images should be run with 
> --privileged, and of who has sudo rights to use privileged container 
> images. As a result, we should check for sudo access and then decide 
> whether to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7984) Delete registry entries from ZK on ServiceClient stop

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427724#comment-16427724
 ] 

genericqa commented on YARN-7984:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 1 
new + 26 unchanged - 2 fixed = 27 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
50s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7984 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917782/YARN-7984.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 10e3ec497d01 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3121e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Created] (YARN-8121) Container runtime and runtime context information should be available in UI

2018-04-05 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created YARN-8121:
--

 Summary: Container runtime and runtime context information should 
be available in UI
 Key: YARN-8121
 URL: https://issues.apache.org/jira/browse/YARN-8121
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Suma Shivaprasad


Currently we do not publish the container runtime and other runtime context 
information (important in the case of Docker), like the Docker image used, the 
Docker network the container ran on, etc., in ATS. This needs to be added to 
ATS and made available in the UI, to give end users visibility into the 
container runtime chosen and its runtime parameters.
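A rough sketch of what publishing this could look like with the ATSv2 entity 
API. The info keys and the variables (containerId, dockerImage, dockerNetwork) 
are illustrative assumptions, not existing constants:
{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

// Sketch only: attach runtime context to the container's timeline entity.
// Key names below are hypothetical; actual keys would be defined in YARN.
TimelineEntity entity = new TimelineEntity();
entity.setType("YARN_CONTAINER");
entity.setId(containerId.toString());            // containerId assumed in scope
entity.addInfo("YARN_CONTAINER_RUNTIME", "docker");
entity.addInfo("DOCKER_IMAGE", dockerImage);     // dockerImage assumed in scope
entity.addInfo("DOCKER_NETWORK", dockerNetwork); // dockerNetwork assumed in scope
{code}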






[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427716#comment-16427716
 ] 

Suma Shivaprasad commented on YARN-7574:


Attaching patch with errors fixed

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, 
> YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf 
> queue template for configuring different capacities for different node 
> labels.






[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.9.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, 
> YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf 
> queue template for configuring different capacities for different node 
> labels.






[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-05 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427696#comment-16427696
 ] 

Eric Badger commented on YARN-7667:
---

New patch adds in unit test functionality

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch, 
> YARN-7667.003.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period.
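A minimal sketch of making the grace period configurable. The property name is 
hypothetical (not an existing YARN key), and conf/containerIdStr are assumed to 
be in scope:
{code:java}
// Sketch only: read a configurable stop grace period, falling back to
// Docker's 10-second default, and actually wire it into DockerStopCommand.
int gracePeriod = conf.getInt(
    "yarn.nodemanager.runtime.linux.docker.stop.grace-period", 10);
DockerStopCommand stopCommand = new DockerStopCommand(containerIdStr);
stopCommand.setGracePeriod(gracePeriod);
{code}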






[jira] [Updated] (YARN-7667) Docker Stop grace period should be configurable

2018-04-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7667:
--
Attachment: YARN-7667.003.patch

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch, 
> YARN-7667.003.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period.






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427683#comment-16427683
 ] 

Billie Rinaldi commented on YARN-7221:
--

I tried out patch 018. I only have one additional comment, which is that I am 
unsure whether the privileged check should be for the submitting user or the 
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user when user 
remapping is enabled. Currently it is being made for the local-user. We are 
kind of ignoring user remapping for privileged containers since we are dropping 
the user flag from the run command, which is why I initially expected it would 
be the submitting user. But I could see an argument made for either direction.

I tried out the following slight modification to Eric's patch, and this made 
the privileged check run on the submitting user. So this would be an easy 
change to make if that's what we think should happen.
{noformat}
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 b/hadoop-yarn-proje
index 7623990..f539263 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -762,6 +762,8 @@ public void launchContainer(ContainerRuntimeContext ctx)
   }
   if (!allowPrivilegedContainerExecution(container)) {
 dockerRunAsUser = uid + ":" + gid;
+  } else {
+dockerRunAsUser = ctx.getExecutionAttribute(USER);
   }
 }
 
{noformat}

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to 
> another user, e.g. httpd starting privileged so it can bind to port 80, 
> then dropping privileges to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not be able to become the 
> root user. All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges. A user can gain root privileges if the 
> container file system contains files that grant the user extra power, but 
> this type of image is considered dangerous. A non-privileged user can 
> launch a container with special bits to acquire the same level of root 
> power. Hence, we lose control of which images should be run with 
> --privileged, and of who has sudo rights to use privileged container 
> images. As a result, we should check for sudo access and then decide 
> whether to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427675#comment-16427675
 ] 

genericqa commented on YARN-7931:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-common in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7931 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1291/YARN-7391.0003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 81aec522de49 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3121e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20242/testReport/ |
| Max. process+thread 

[jira] [Commented] (YARN-8120) JVM can crash with SIGSEGV when exiting due to custom leveldb logger

2018-04-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427671#comment-16427671
 ] 

Jason Lowe commented on YARN-8120:
--

As a workaround, we can remove the use of the custom user logger for leveldb.
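A minimal sketch of the workaround, assuming leveldbjni's standard factory API 
(the database path is illustrative):
{code:java}
import java.io.File;
import java.io.IOException;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import static org.fusesource.leveldbjni.JniDBFactory.factory;

public class OpenWithoutCustomLogger {
  public static void main(String[] args) throws IOException {
    Options options = new Options().createIfMissing(true);
    // Workaround: do NOT call options.logger(...) with a custom
    // org.iq80.leveldb.Logger, so no native-to-JVM log callback can fire
    // while the JVM is shutting down.
    DB db = factory.open(new File("/tmp/yarn-state-store"), options);
    db.close();
  }
}
{code}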

> JVM can crash with SIGSEGV when exiting due to custom leveldb logger
> 
>
> Key: YARN-8120
> URL: https://issues.apache.org/jira/browse/YARN-8120
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager, timelineserver
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
>
> The JVM can crash upon exit with a SIGSEGV when leveldb is configured with a 
> custom user logger as is done with LeveldbLogger.  See 
> https://github.com/fusesource/leveldbjni/issues/36 for details.






[jira] [Created] (YARN-8120) JVM can crash with SIGSEGV when exiting due to custom leveldb logger

2018-04-05 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-8120:


 Summary: JVM can crash with SIGSEGV when exiting due to custom 
leveldb logger
 Key: YARN-8120
 URL: https://issues.apache.org/jira/browse/YARN-8120
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager, timelineserver
Reporter: Jason Lowe
Assignee: Jason Lowe


The JVM can crash upon exit with a SIGSEGV when leveldb is configured with a 
custom user logger as is done with LeveldbLogger.  See 
https://github.com/fusesource/leveldbjni/issues/36 for details.






[jira] [Commented] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-05 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427660#comment-16427660
 ] 

Eric Badger commented on YARN-8064:
---

The new patch adds the calls to {{writeCommandToTempFile}} for the places that 
I missed and fixes the unit test failures.

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch
>
>
> Currently all of the docker command files are put into {{hadoop.tmp.dir}}, 
> which does not get cleaned up. So eventually the inodes will be exhausted 
> and no more tasks will be able to run.
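A sketch of the general direction (not the actual patch; nmPrivateContainerDir 
is an assumed variable, not an existing name):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch only: write the docker .cmd file under a per-container NM-private
// directory instead of hadoop.tmp.dir, so it is removed by the container's
// normal cleanup rather than accumulating indefinitely.
Path cmdDir = Paths.get(nmPrivateContainerDir); // e.g. .../nmPrivate/<app>/<container>
Files.createDirectories(cmdDir);
Path cmdFile = Files.createTempFile(cmdDir, "docker.", ".cmd");
{code}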






[jira] [Updated] (YARN-8064) Docker ".cmd" files should not be put in hadoop.tmp.dir

2018-04-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-8064:
--
Attachment: YARN-8064.002.patch

> Docker ".cmd" files should not be put in hadoop.tmp.dir
> ---
>
> Key: YARN-8064
> URL: https://issues.apache.org/jira/browse/YARN-8064
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-8064.001.patch, YARN-8064.002.patch
>
>
> Currently all of the docker command files are put into {{hadoop.tmp.dir}}, 
> which does not get cleaned up. So eventually the inodes will be exhausted 
> and no more tasks will be able to run.






[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-04-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427656#comment-16427656
 ] 

Jason Lowe commented on YARN-7905:
--

Thanks for the patch!  +1 lgtm.

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch, 
> YARN-7905-003.patch, YARN-7905-004.patch, YARN-7905-005.patch, 
> YARN-7905-006.patch, YARN-7905-007.patch, YARN-7905-008.patch
>
>
> Similar to YARN-6708, we also have to take care of the parent directory 
> during public localization if the umask is 027 at node manager start-up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.
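A minimal sketch of the usual remedy, assuming a FileSystem handle (fs) and a 
localDir variable in scope; this is illustrative, not the actual patch:
{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch only: after creating the public filecache parent directories,
// set their permission explicitly. mkdirs() is subject to the process
// umask (e.g. 027 yields 750), while setPermission() is not.
Path parent = new Path(localDir, "filecache/0");
fs.mkdirs(parent);
fs.setPermission(parent, new FsPermission((short) 0755));
{code}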






[jira] [Commented] (YARN-7984) Delete registry entries from ZK on ServiceClient stop

2018-04-05 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427654#comment-16427654
 ] 

Billie Rinaldi commented on YARN-7984:
--

[~eyang] I wasn't able to recreate the error you saw, but I did find an 
additional issue: destroying a nonexistent service succeeded through the REST 
API. The new patch also addresses this issue and tests for it, as well as 
expanding TestYarnNativeServices to check that registry ZK entries appear for 
containers and that they are cleaned up on stop.

> Delete registry entries from ZK on ServiceClient stop
> -
>
> Key: YARN-7984
> URL: https://issues.apache.org/jira/browse/YARN-7984
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7984.1.patch, YARN-7984.2.patch
>
>
> The service records written to the registry are removed by ServiceClient on a 
> destroy call, but not on a stop call. The service AM does have some code to 
> clean up the registry entries when component instances are stopped, but if 
> the AM is killed before it has a chance to perform the cleanup, these entries 
> will be left in ZooKeeper. It would be better to clean these up in the stop 
> call, so that RegistryDNS does not provide lookups for containers that don't 
> exist.
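For reference, a minimal sketch of what cleanup on stop could look like with 
the YARN registry client API (not the actual patch; user, serviceName, and 
registryOperations are assumed to be in scope):
{code:java}
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.binding.RegistryUtils;

// Sketch only: recursively delete the service's registry subtree on stop,
// so RegistryDNS stops resolving containers that no longer exist.
String servicePath = RegistryUtils.servicePath(user, "yarn-service", serviceName);
if (registryOperations.exists(servicePath)) {
  registryOperations.delete(servicePath, true); // true = recursive delete
}
{code}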






[jira] [Updated] (YARN-7984) Delete registry entries from ZK on ServiceClient stop

2018-04-05 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7984:
-
Attachment: YARN-7984.2.patch

> Delete registry entries from ZK on ServiceClient stop
> -
>
> Key: YARN-7984
> URL: https://issues.apache.org/jira/browse/YARN-7984
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7984.1.patch, YARN-7984.2.patch
>
>
> The service records written to the registry are removed by ServiceClient on a 
> destroy call, but not on a stop call. The service AM does have some code to 
> clean up the registry entries when component instances are stopped, but if 
> the AM is killed before it has a chance to perform the cleanup, these entries 
> will be left in ZooKeeper. It would be better to clean these up in the stop 
> call, so that RegistryDNS does not provide lookups for containers that don't 
> exist.






[jira] [Updated] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-05 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7931:
-
Attachment: YARN-7391.0003.patch

> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch, 
> YARN-7391.0003.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 






[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-05 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427619#comment-16427619
 ] 

Vrushali C commented on YARN-7931:
--

Uploading 003, which addresses the latest review suggestions.

> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch, 
> YARN-7391.0003.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427611#comment-16427611
 ] 

Jason Lowe commented on YARN-7221:
--

Thanks for updating the patch!  +1 latest patch looks good to me.  Holding off 
on committing this since [~billie.rinaldi] said she would get back with some 
test results.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to 
> another user, e.g. httpd starting privileged so it can bind to port 80, 
> then dropping privileges to the www user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. 
> With this parameter combination, the user will not be able to become the 
> root user. All docker exec commands will be dropped to the uid:gid user 
> instead of being granted privileges. A user can gain root privileges if the 
> container file system contains files that grant the user extra power, but 
> this type of image is considered dangerous. A non-privileged user can 
> launch a container with special bits to acquire the same level of root 
> power. Hence, we lose control of which images should be run with 
> --privileged, and of who has sudo rights to use privileged container 
> images. As a result, we should check for sudo access and then decide 
> whether to parameterize --privileged=true OR --user=uid:gid. This will 
> avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427606#comment-16427606
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 0 unchanged - 590 fixed = 1 total (was 590) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
58s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917769/YARN-7574.8.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 89449c841e0f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3121e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/20241/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 

[jira] [Commented] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427591#comment-16427591
 ] 

Jason Lowe commented on YARN-8118:
--

This is definitely not desirable in all cases.  We often decommission a node in 
order to gracefully remove it from service as soon as possible, and allowing 
new containers to run on it will usually harm more than help that effort given 
a sufficiently large cluster to run those containers elsewhere.  If this is 
added it should not change the default behavior as it exists today.  It would 
need to be either a config option for the whole cluster or a parameter as part 
of the rmadmin command to gracefully decomm a specific node.
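For illustration only, a cluster-wide opt-in could look like the sketch below. 
The property name is hypothetical and does not exist in YARN today; conf is a 
Configuration and node is an RMNode, both assumed in scope:
{code:java}
import org.apache.hadoop.yarn.api.records.NodeState;

// Sketch only: keep today's behavior by default, and only consider
// DECOMMISSIONING nodes for containers of in-progress apps when the
// (hypothetical) flag is explicitly enabled.
boolean scheduleOnDraining = conf.getBoolean(
    "yarn.resourcemanager.decommissioning.schedule-in-progress-apps", false);
if (node.getState() == NodeState.DECOMMISSIONING && !scheduleOnDraining) {
  return; // default: no new containers on gracefully decommissioning nodes
}
{code}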


> Better utilize gracefully decommissioning node managers
> ---
>
> Key: YARN-8118
> URL: https://issues.apache.org/jira/browse/YARN-8118
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.2
> Environment: * Google Compute Engine (Dataproc)
>  * Java 8
>  * Hadoop 2.8.2 using client-mode graceful decommissioning
>Reporter: Karthik Palaniappan
>Priority: Major
> Attachments: YARN-8118-branch-2.001.patch
>
>
> Proposal design doc with background + details (please comment directly on 
> doc): 
> [https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]
> tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
> to complete before shutting down, but they cannot run new containers from 
> those in-progress applications. This is wasteful, particularly in 
> environments where you are billed by resource usage (e.g. EC2).
> Proposal: YARN should schedule containers from in-progress applications on 
> DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
> applications. That will make in-progress applications complete faster and let 
> nodes decommission faster. Overall, this should be cheaper.
> I have a working patch without unit tests that's surprisingly just a few real 
> lines of code (patch 001). If folks are happy with the proposal, I'll write 
> unit tests and also write a patch targeted at trunk.






[jira] [Updated] (YARN-7636) Re-reservation count may overflow when cluster resource exhausted for a long time

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7636:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Re-reservation count may overflow when cluster resource exhausted for a long 
> time 
> --
>
> Key: YARN-7636
> URL: https://issues.apache.org/jira/browse/YARN-7636
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.1.0, 2.9.1, 3.0.3
>
> Attachments: YARN-7636.001.patch, YARN-7636.002.patch, 
> YARN-7636.003.patch
>
>
> This has happened on our production cluster twice: when a request cannot be 
> satisfied for a long time, it continually triggers re-reservation and 
> eventually causes the overflow, which crashes the scheduler.
> Exception stack:
> {noformat}
> java.lang.IllegalArgumentException: Overflow adding 1 occurrences to a count 
> of 2147483647
>         at 
> com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:246)
>         at 
> com.google.common.collect.AbstractMultiset.add(AbstractMultiset.java:80)
>         at 
> com.google.common.collect.ConcurrentHashMultiset.add(ConcurrentHashMultiset.java:51)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.addReReservation(SchedulerApplicationAttempt.java:406)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.reserve(SchedulerApplicationAttempt.java:555)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1076)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
>         at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> Referring to the handling in 
> SchedulerApplicationAttempt#addSchedulingOpportunity, we can ignore this 
> exception to avoid the problem.
> The same problem may happen in 
> SchedulerApplicationAttempt#addMissedNonPartitionedRequestSchedulingOpportunity;
>  fix it in the same way.
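A minimal sketch of the proposed handling (variable names are illustrative; 
schedulerKey is assumed to be in scope):
{code:java}
import com.google.common.collect.ConcurrentHashMultiset;
import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey;

// Sketch only: mirror addSchedulingOpportunity and saturate instead of
// crashing once the multiset count reaches Integer.MAX_VALUE.
ConcurrentHashMultiset<SchedulerRequestKey> reReservations =
    ConcurrentHashMultiset.create();
try {
  reReservations.add(schedulerKey);
} catch (IllegalArgumentException e) {
  // count is already Integer.MAX_VALUE; ignore rather than kill the scheduler
}
{code}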






[jira] [Updated] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7734:

Fix Version/s: (was: 3.0.2)
   3.0.3

> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.2.0, 3.0.3
>
> Attachments: YARN-7734.001.patch
>
>
> It adds a call to LogAggregationFileControllerFactory, but in the unit test 
> the mocked context is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
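A sketch of the likely test-side fix (the accessor name on the mocked NM 
Context is an assumption):
{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.nodemanager.Context;

// Sketch only: give the mocked NM Context a real Configuration so that
// LogAggregationFileControllerFactory's constructor does not NPE.
Context ctx = mock(Context.class);
when(ctx.getConf()).thenReturn(new YarnConfiguration()); // getConf assumed
{code}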






[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427557#comment-16427557
 ] 

Haibo Chen commented on YARN-8107:
--

Thanks [~rohithsharma] for the patch! I have one question. It seems like the 
patch only handles an incorrect format when it is a single filter. What 
happens if the filter has multiple conditions and only one of them is 
malformed?
{code:java}
if (filterList == null || filterList.getFilterList().isEmpty()) {
  if (currentFilter == null) {
    throw new TimelineParseException("Invalid expression provided.");
  } else {
    filterList = new TimelineFilterList(currentFilter);
  }
} else if (currentFilter != null) {
  filterList.addFilter(currentFilter);
}{code}
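
As an illustrative example (not from the patch), a compound infofilters 
expression where only the second clause is malformed, following the syntax seen 
in the reader log below:

{noformat}
/ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=(key1 eq val1 AND key2eq)
{noformat}

The concern is whether parsing rejects the whole expression or silently drops 
the malformed clause.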

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws a NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> 

[jira] [Updated] (YARN-8063) DistributedShellTimelinePlugin wrongly check for entityId instead of entityType

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-8063:

Fix Version/s: (was: 3.0.2)
   3.0.3

> DistributedShellTimelinePlugin wrongly check for entityId instead of 
> entityType
> ---
>
> Key: YARN-8063
> URL: https://issues.apache.org/jira/browse/YARN-8063
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.3
>
> Attachments: YARN-8063.01.patch
>
>
> DistributedShellTimelinePlugin#getTimelineEntityGroupId compares with entityId 
> rather than entityType. This causes getTimelineEntityGroupId to fail.  
> {code}
>  public Set<TimelineEntityGroupId> getTimelineEntityGroupId(String entityId,
>      String entityType) {
>    if (ApplicationMaster.DSEntity.DS_CONTAINER.toString().equals(entityId)) {
>      ContainerId containerId = ContainerId.fromString(entityId);
>      ApplicationId appId = containerId.getApplicationAttemptId()
>          .getApplicationId();
>      return toEntityGroupId(appId.toString());
>    }
>    return null;
>  }
> {code}
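
For reference, a sketch of the corrected comparison; the generic type on the 
return value is assumed to be TimelineEntityGroupId, matching the method name:

{code:java}
public Set<TimelineEntityGroupId> getTimelineEntityGroupId(String entityId,
    String entityType) {
  // Compare against the entity *type*; entityId holds the container id
  // string that is parsed below.
  if (ApplicationMaster.DSEntity.DS_CONTAINER.toString().equals(entityType)) {
    ContainerId containerId = ContainerId.fromString(entityId);
    ApplicationId appId = containerId.getApplicationAttemptId()
        .getApplicationId();
    return toEntityGroupId(appId.toString());
  }
  return null;
}
{code}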



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8054) Improve robustness of the LocalDirsHandlerService MonitoringTimerTask thread

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-8054:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Improve robustness of the LocalDirsHandlerService MonitoringTimerTask thread
> 
>
> Key: YARN-8054
> URL: https://issues.apache.org/jira/browse/YARN-8054
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: YARN-8054.001.patch, YARN-8054.002.patch
>
>
> DeprecatedRawLocalFileStatus#loadPermissionInfo can throw a 
> RuntimeException which can kill the MonitoringTimerTask thread. This can 
> leave the node in a bad state where all NM local directories are marked "bad" 
> and there is no automatic recovery. In the case below the error was "too many 
> open files", but it could be any of a number of other recoverable states.
> {noformat}
> 2018-03-18 02:37:42,960 [DiskHealthMonitor-Timer] ERROR 
> yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[DiskHealthMonitor-Timer,5,main] threw an Exception.
> java.lang.RuntimeException: Error while running command to get file 
> permissions : java.io.IOException: Cannot run program "ls": error=24, Too 
> many open files
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:942)
> at org.apache.hadoop.util.Shell.run(Shell.java:898)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
> at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1078)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkLocalDir(ResourceLocalizationService.java:1556)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkAndInitializeLocalDirs(ResourceLocalizationService.java:1521)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$1.onDirsChanged(ResourceLocalizationService.java:271)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:381)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:449)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$500(LocalDirsHandlerService.java:52)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:166)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> Caused by: java.io.IOException: error=24, Too many open files
> at java.lang.UNIXProcess.forkAndExec(Native Method)
> at java.lang.UNIXProcess.(UNIXProcess.java:247)
> at java.lang.ProcessImpl.start(ProcessImpl.java:134)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> ... 17 more
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:737)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkLocalDir(ResourceLocalizationService.java:1556)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.checkAndInitializeLocalDirs(ResourceLocalizationService.java:1521)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$1.onDirsChanged(ResourceLocalizationService.java:271)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:381)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:449)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$500(LocalDirsHandlerService.java:52)
> at 
> 
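
One way to harden the task, sketched under the assumption that checkDirs() is 
the body the timer invokes (per the stack trace); illustrative, not the 
committed patch:

{code:java}
private final class MonitoringTimerTask extends TimerTask {
  @Override
  public void run() {
    try {
      checkDirs();
    } catch (Throwable t) {
      // Keep the DiskHealthMonitor-Timer thread alive on transient errors
      // such as "too many open files" so disk checking can recover later.
      // LOG is assumed to be the service's logger.
      LOG.error("Disk health monitoring failed; will retry", t);
    }
  }
}
{code}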

[jira] [Updated] (YARN-7623) Fix the CapacityScheduler Queue configuration documentation

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7623:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Fix the CapacityScheduler Queue configuration documentation
> ---
>
> Key: YARN-7623
> URL: https://issues.apache.org/jira/browse/YARN-7623
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: Screen Shot 2018-03-27 at 3.02.45 PM.png, 
> YARN-7623.001.patch, YARN-7623.002.patch
>
>
> It looks like the [Changing Queue 
> Configuration|https://hadoop.apache.org/docs/r2.9.0/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Changing_queue_configuration_via_API]
>  section is mis-formatted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427529#comment-16427529
 ] 

Suma Shivaprasad commented on YARN-7574:


Thanks [~leftnoteasy]. Updated patch with the comment addressed.

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, 
> YARN-7574.8.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for allowing different configured capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.8.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, 
> YARN-7574.8.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for allowing different configured capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7947) Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7947:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps
> --
>
> Key: YARN-7947
> URL: https://issues.apache.org/jira/browse/YARN-7947
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0, 3.1.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: YARN-7947.001.patch
>
>
> Intra-queue preemption policy can cause NPE for pending users with no 
> schedulable apps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7736) Fix itemization in YARN federation document

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7736:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Fix itemization in YARN federation document
> ---
>
> Key: YARN-7736
> URL: https://issues.apache.org/jira/browse/YARN-7736
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: YARN-7736.001.patch
>
>
> https://hadoop.apache.org/docs/r3.0.0/hadoop-yarn/hadoop-yarn-site/Federation.html
> {noformat}
> Assumptions:
> * We assume reasonably good connectivity across sub-clusters (e.g., we are 
> not looking to federate across DC yet, though future investigations of this 
> are not excluded).
> * We rely on HDFS federation (or equivalently scalable DFS solutions) to take 
> care of scalability of the store side.
> {noformat}
> Blank line should be inserted before itemization to render correctly.
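
The corrected markdown would be (abbreviated):

{noformat}
Assumptions:

* We assume reasonably good connectivity across sub-clusters ...
* We rely on HDFS federation (or equivalently scalable DFS solutions) ...
{noformat}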



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-6936:

Fix Version/s: (was: 3.0.2)
   3.0.3

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7511) NPE in ContainerLocalizer when localization failed for running container

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7511:

Fix Version/s: (was: 3.0.2)
   3.0.3

> NPE in ContainerLocalizer when localization failed for running container
> 
>
> Key: YARN-7511
> URL: https://issues.apache.org/jira/browse/YARN-7511
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.3
>
> Attachments: YARN-7511.001.patch
>
>
> Error log:
> {noformat}
> 2017-09-30 20:14:32,839 FATAL [AsyncDispatcher event handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
> java.lang.NullPointerException
>         at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>         at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet.resourceLocalizationFailed(ResourceSet.java:151)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$ResourceLocalizationFailedWhileRunningTransition.transition(ContainerImpl.java:821)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl$ResourceLocalizationFailedWhileRunningTransition.transition(ContainerImpl.java:813)
>         at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
>         at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>         at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
>         at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:1335)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl.handle(ContainerImpl.java:95)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1372)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher.handle(ContainerManagerImpl.java:1365)
>         at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
>         at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
>         at java.lang.Thread.run(Thread.java:834)
> 2017-09-30 20:14:32,842 INFO [AsyncDispatcher ShutDown handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
> {noformat}
> Reproduce this problem:
> 1. Container was running and ContainerManagerImpl#localize was called for 
> this container
> 2. Localization failed in ResourceLocalizationService$LocalizerRunner#run and 
> sent out ContainerResourceFailedEvent with null LocalResourceRequest.
> 3. NPE when ResourceLocalizationFailedWhileRunningTransition#transition --> 
> container.resourceSet.resourceLocalizationFailed(null)
> I think we can fix this problem by ensuring that the request is not null 
> before removing it.
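
A minimal sketch of that guard in ResourceSet#resourceLocalizationFailed; the 
field name is illustrative and the actual patch may differ:

{code:java}
public void resourceLocalizationFailed(LocalResourceRequest request) {
  // A null request reaches here when localization fails for a running
  // container; ConcurrentHashMap.remove(null) throws the NPE above.
  if (request == null) {
    return;
  }
  pendingResources.remove(request);
}
{code}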



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7813:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, 
> YARN-7813.005.branch-2.8.patch, YARN-7813.005.branch-3.0.patch, 
> YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.
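
An illustrative capacity-scheduler.xml entry; the property name below follows 
the existing per-queue disable_preemption pattern and is an assumption, not 
taken from the patch:

{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.queueA.intra-queue-preemption.disable_preemption</name>
  <value>true</value>
</property>
{code}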



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427452#comment-16427452
 ] 

Wangda Tan commented on YARN-7574:
--

Thanks [~suma.shivaprasad], one comment: 
RMAppAttemptImpl: 
- It's better to call getQueueInfo only if amRequest has a null 
node_label_expression. In most cases, we don't need to access getQueueInfo here.
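
A sketch of the suggested guard (variable names are illustrative):

{code:java}
// Only pay the cost of getQueueInfo() when the AM request carries no label.
if (amReq.getNodeLabelExpression() == null) {
  QueueInfo queueInfo = scheduler.getQueueInfo(queueName, false, false);
  amReq.setNodeLabelExpression(queueInfo.getDefaultNodeLabelExpression());
}
{code}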

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for allowing different configured capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427436#comment-16427436
 ] 

Haibo Chen commented on YARN-6936:
--

Done.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 2.10.0, 3.0.2, 3.2.0, 3.1.1
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427419#comment-16427419
 ] 

genericqa commented on YARN-7221:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7221 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917734/YARN-7221.018.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 46b0266e24d0 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f8b8bd5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20240/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20240/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427405#comment-16427405
 ] 

Hudson commented on YARN-8119:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13930 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13930/])
YARN-8119. [UI2] Timeline Server address' url scheme should be removed 
(rohithsharmaks: rev f32d6275ba9e377fb722e2440986033d7ce8b602)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js


> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme.
> Since the UI already takes care of the URL scheme, this needs to be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427348#comment-16427348
 ] 

Rohith Sharma K S commented on YARN-6936:
-

It should be good to go into branch-3.1/branch-3.0/branch-2

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427334#comment-16427334
 ] 

Rohith Sharma K S commented on YARN-8119:
-

+1 lgtm. verified in cluster.

> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme.
> Since the UI already takes care of the URL scheme, this needs to be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427324#comment-16427324
 ] 

Hudson commented on YARN-6936:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13929/])
YARN-6936. [Atsv2] Retrospect storing entities into sub application (haibochen: 
rev f8b8bd53c4797d406bea5b1b0cdb179e209169cc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesHBaseStorage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineV2Client.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/SubApplicationEntity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineV2ClientImpl.java


> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8116) Nodemanager fails with NumberFormatException: For input string: ""

2018-04-05 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8116:
-
Affects Version/s: 3.1.0
 Target Version/s: 3.2.0, 3.1.1
 Priority: Critical  (was: Major)

> Nodemanager fails with NumberFormatException: For input string: ""
> --
>
> Key: YARN-8116
> URL: https://issues.apache.org/jira/browse/YARN-8116
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Priority: Critical
>
> Steps followed.
> 1) Update nodemanager debug delay config
> {code}
> <property>
>   <name>yarn.nodemanager.delete.debug-delay-sec</name>
>   <value>350</value>
> </property>
> {code}
> 2) Launch distributed shell application multiple times
> {code}
> /usr/hdp/current/hadoop-yarn-client/bin/yarn  jar 
> hadoop-yarn-applications-distributedshell-*.jar  -shell_command "sleep 120" 
> -num_containers 1 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=centos/httpd-24-centos7:latest -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL=true -jar 
> hadoop-yarn-applications-distributedshell-*.jar{code}
> 3) restart NM
> Nodemanager fails to start with below error.
> {code:title=NM log}
> 2018-03-23 21:32:14,437 INFO  monitor.ContainersMonitorImpl 
> (ContainersMonitorImpl.java:serviceInit(181)) - ContainersMonitor enabled: 
> true
> 2018-03-23 21:32:14,439 INFO  logaggregation.LogAggregationService 
> (LogAggregationService.java:serviceInit(130)) - rollingMonitorInterval is set 
> as 3600. The logs will be aggregated every 3600 seconds
> 2018-03-23 21:32:14,455 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
>  failed in state INITED
> java.lang.NumberFormatException: For input string: ""
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:601)
>   at java.lang.Long.parseLong(Long.java:631)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(NMLeveldbStateStoreService.java:350)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainersState(NMLeveldbStateStoreService.java:253)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:365)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:464)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
> 2018-03-23 21:32:14,458 INFO  logaggregation.LogAggregationService 
> (LogAggregationService.java:serviceStop(148)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService
>  waiting for pending aggregation during exit
> 2018-03-23 21:32:14,460 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service NodeManager failed in state 
> INITED
> java.lang.NumberFormatException: For input string: ""
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:601)
>   at java.lang.Long.parseLong(Long.java:631)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(NMLeveldbStateStoreService.java:350)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainersState(NMLeveldbStateStoreService.java:253)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:365)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:316)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:464)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> 
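
The failing frame is NMLeveldbStateStoreService#loadContainerState parsing a 
recovered value with Long.parseLong. A defensive sketch of the idea; the helper 
and the default value are illustrative, not from a patch:

{code:java}
// Hedged sketch: treat an empty stored value as "not recorded" instead of
// letting Long.parseLong("") abort NM recovery.
String value = asString(entry.getValue()); // illustrative helper
long startTime =
    (value == null || value.isEmpty()) ? 0L : Long.parseLong(value);
{code}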

[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427294#comment-16427294
 ] 

Haibo Chen commented on YARN-6936:
--

I have committed the patch to trunk. [~rohithsharma] Any other branches that we 
should cherry-pick into?

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6936:
-
Fix Version/s: 3.2.0

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427290#comment-16427290
 ] 

Haibo Chen commented on YARN-6936:
--

There seems to be some problem with the patch command not parsing the patch 
correctly. I was able to apply it cleanly with 'git apply'.

+1 on the latest patch. Checking this in shortly.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitted user are different. This holds good for Tez-like 
> use cases. But AMs that run as the same user as the submitted user, as in MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would become a point of concern at later stages, when ATSv2 is deployed 
> into production. This JIRA is to revisit the decision of storing entities into 
> the sub application table so that it is driven by client side configuration 
> rather than by user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-04-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427285#comment-16427285
 ] 

Eric Yang commented on YARN-7973:
-

[~shaneku...@gmail.com] Yes, it makes sense to fix #2-like problems in another 
JIRA as cleanup.  We probably want to get the YARN-7654 changes in first, then 
work on the cleanup.  This will reduce the amount of overlap that we would 
have to deal with.

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7986) ATSv2 REST API queries do not return results for uppercase application tags

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7986:

Fix Version/s: (was: 3.0.2)
   3.0.3

> ATSv2 REST API queries do not return results for uppercase application tags
> ---
>
> Key: YARN-7986
> URL: https://issues.apache.org/jira/browse/YARN-7986
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Charan Hebri
>Assignee: Charan Hebri
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 3.0.3
>
> Attachments: YARN-7986.001.patch
>
>
> When applications are submitted to YARN with application tags, the tags are 
> converted to lowercase. This can be seen on the old/new UI. But using the 
> original tags in ATSv2 REST API queries does not return results, as the query 
> URL is expected to have the tags in lowercase. 
> This is additional work for the client because each tag needs to be 
> lowercased before running a query.
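
Until the server side normalizes tags, the client-side workaround described 
above looks like this; the URL shape and parameter name are illustrative:

{code:java}
// Lowercase each original tag before building the ATSv2 query URL.
String tag = "MY_App_Tag".toLowerCase(Locale.ENGLISH);
String query = timelineBase + "/ws/v2/timeline/apps?applicationtags=" + tag;
{code}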



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8062) yarn rmadmin -getGroups returns group from which the user has been removed

2018-04-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-8062:

Fix Version/s: (was: 3.0.2)
   3.0.3

> yarn rmadmin -getGroups returns group from which the user has been removed
> --
>
> Key: YARN-8062
> URL: https://issues.apache.org/jira/browse/YARN-8062
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: YARN-8062.001.patch, YARN-8062.002.patch, 
> YARN-8062.003.patch, YARN-8062.004.patch
>
>
> {code:title= adding group hrt_yarn_rmadmin_test}
> sudo su - -c "groupadd hrt_yarn_rmadmin_test" root
> {code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group hrt_yarn_rmadmin_test}
> sudo su - -c "useradd hrt_yarn_rmadmin_test -g hrt_yarn_rmadmin_test" root
> {Code}
> {Code:title= adding group hrt_yarn_rmadmin_test_group2 }
> sudo su - -c "groupadd hrt_yarn_rmadmin_test_group2" root
> {Code}
> {Code:title=adding user hrt_yarn_rmadmin_test to group 
> hrt_yarn_rmadmin_test_group2}
> sudo su - -c "usermod -a -G hrt_yarn_rmadmin_test_group2 
> hrt_yarn_rmadmin_test" root
> {Code}
> Refresh and getGroups
> {code}
> yarn rmadmin -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}
> Delete group hrt_yarn_rmadmin_test_group2 from user hrt_yarn_rmadmin_test  
> and refresh and do getGroups.
> We can still see group hrt_yarn_rmadmin_test_group2
> {code}
> sudo su - -c "gpasswd -d hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2" 
> root
> {code}
> Removing user hrt_yarn_rmadmin_test from group hrt_yarn_rmadmin_test_group2
> {code}
> bash-4.2$  /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin 
> -refreshUserToGroupsMappings
> /usr/hdp/current/hadoop-yarn-client/bin/yarn rmadmin -getGroups 
> hrt_yarn_rmadmin_test
> hrt_yarn_rmadmin_test : hrt_yarn_rmadmin_test hrt_yarn_rmadmin_test_group2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-05 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427240#comment-16427240
 ] 

Vrushali C commented on YARN-8073:
--

Needs to go into branch-2 (pending npm error / jenkins), branch-3.0 and 
branch-3.1 

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> nothing to be written into the v1.5 APIs even though 
> _yarn.timeline-service.versions has the value 1.5._ 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271 checks for the version 
> number rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc: [~agresch]
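
For context, an illustrative yarn-site.xml entry for the new setting; the exact 
value format is an assumption:

{code:xml}
<property>
  <name>yarn.timeline-service.versions</name>
  <value>1.5,2.0</value>
</property>
{code}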



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.018.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve starting a program as root and then dropping privileges to another 
> user, e.g. httpd starts privileged to bind to port 80, then drops privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not be able to become root.  
> All docker exec commands will drop to the uid:gid user instead of being 
> granted privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo 
> access and then decide whether to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.
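For illustration only, the mutually exclusive flag handling described above 
can be sketched as follows. The method and its names are assumptions made for 
the example; the real logic lives in the native container-executor.

{code:java}
/**
 * Hypothetical sketch: pass --privileged=true only when the submitting user
 * passes the sudo check; otherwise pin the container to --user=uid:gid.
 * The two flags are never emitted together.
 */
static String dockerPrivilegeArg(boolean privilegedRequested,
    boolean userHasSudoForDocker, int uid, int gid) {
  if (privilegedRequested) {
    if (!userHasSudoForDocker) {
      throw new SecurityException(
          "user lacks sudo rights to run privileged containers");
    }
    return "--privileged=true";
  }
  return "--user=" + uid + ":" + gid;
}
{code}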



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427229#comment-16427229
 ] 

Eric Yang commented on YARN-7221:
-

[~jlowe] Thanks for the sample code; it works as intended.  I was confused 
about which process received the signal, hence my code didn't make sense.  
Patch 18 integrates your changes.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch, 
> YARN-7221.018.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve starting a program as root and then dropping privileges to another 
> user, e.g. httpd starts privileged to bind to port 80, then drops privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not be able to become root.  
> All docker exec commands will drop to the uid:gid user instead of being 
> granted privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo 
> access and then decide whether to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427134#comment-16427134
 ] 

Haibo Chen commented on YARN-6936:
--

I noticed one issue with the latest patch. It adds SubApplicationEntity under 
a different file structure (the git "b/" diff prefix was not stripped when the 
patch was applied):

b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/SubApplicationEntity.java

 

git status output on my laptop:
{code:java}
On branch trunk
Your branch is up-to-date with 'apache/trunk'.
Changes not staged for commit:
  (use "git add ..." to update what will be committed)
  (use "git checkout -- ..." to discard changes in working directory)

    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineV2Client.java
    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineV2ClientImpl.java
    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServicesHBaseStorage.java
    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
    modified:   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java

Untracked files:
  (use "git add ..." to include in what will be committed)

    b/
    
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/SubApplicationEntity.java

no changes added to commit (use "git add" and/or "git commit -a"){code}

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently YARN-6734 stores entities into sub application table only if doAs 
> user and submitted users are different. This holds good for Tez kind of use 
> cases. But AM runs as same as submitted user like MR also need to store 
> entities in sub application table so that it could read entities without 
> application id. 
> This would be a point of concern later stages when ATSv2 is deployed into 
> production. This JIRA is to retrospect decision of storing entities into sub 
> application table based on client side configuration driven rather than user 
> driven. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427066#comment-16427066
 ] 

genericqa commented on YARN-8119:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917707/YARN-8119.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux cdd548f849a3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20239/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme.
> Since the UI already takes care of the URL scheme, this needs to be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426994#comment-16426994
 ] 

Jason Lowe commented on YARN-7221:
--

{quote}WNOHANG flag was used to track all child processes in case interrupted 
child process has not been reported by the while loop.
{quote}
WNOHANG has nothing to do with what the child processes are doing. It just 
tells wait/waitpid not to block until a child process state change, and that's 
why we don't want to specify it. It will cause the parent process to spin 
quickly in the while loop as long as the child process is still running. That 
wastes CPU resources for no benefit.
{quote}If exec failed WUNTRACED, the unreported child process termination may 
get caught and getting reported.
{quote}
I'm not quite sure what is being said here. WUNTRACED means that wait/waitpid 
will also return if the process is stopped (e.g.: SIGSTOP) outside of ptrace. 
We don't care about those state changes. We only care when the child process 
terminates because only then do we potentially have an exit code we can act 
upon.
{quote}Waitpid doesn't seem to set EINTR flag for errno when SIGINT is sent to 
the child process.
{quote}
The EINTR errno has nothing to do with the child process. This simply means the 
current process received a signal while in the system call. Most system calls 
that block will return EINTR if the process receives an unblocked signal. 
Signal handlers are rather limited in what they can do directly, so they tend 
to simply set a global flag. The main process code then can react to that flag 
after being kicked out of the system call by EINTR. In this case we aren't 
handling any special flags, so we just want to re-enter the system call if we 
happened to get kicked out via EINTR.
{quote}If you have code example of how to make this better, I am happy to 
integrate it into the patch.
{quote}
Here's some sample code that should handle it properly and gives a bit more 
feedback when the privilege check fails because we couldn't launch sudo 
properly or sudo crashed. NOTE: I haven't compiled/tested this, but it should 
be close enough to convey the approach.
{code:java}
int child_pid = fork();
if (child_pid == 0) {
  execl("/bin/sudo", "sudo", "-U", user, "-n", "-l", "docker", NULL);
  fprintf(ERRORFILE, "sudo exec failed: %s\n", strerror(errno));
  exit(INITIALIZE_USER_FAILED);
} else {
  while ((waitid = waitpid(child_pid, &statval, 0)) != child_pid) {
if (waitid == -1 && errno != EINTR) {
  fprintf(ERRORFILE, "waitpid failed: %s\n", strerror(errno));
  break;
}
  }
  if (waitid == child_pid) {
if (WIFEXITED(statval)) {
  if (WEXITSTATUS(statval) == 0) {
ret = 1;
  }
} else if (WIFSIGNALED(statval)) {
  fprintf(ERRORFILE, "sudo terminated by signal %d\n", 
WTERMSIG(statval));
}
  }
}
{code}
 

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch
>
>
> When a Docker container runs with privileges, the majority of use cases 
> involve starting a program as root and then dropping privileges to another 
> user, e.g. httpd starts privileged to bind to port 80, then drops privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not be able to become root.  
> All docker exec commands will drop to the uid:gid user instead of being 
> granted privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo access 

[jira] [Commented] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426971#comment-16426971
 ] 

Sunil G commented on YARN-8119:
---

[~rohithsharma] Could you please help check this patch? Thanks.

> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme.
> Since the UI already takes care of the URL scheme, this needs to be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8119:
--
Attachment: YARN-8119.001.patch

> [UI2] Timeline Server address' url scheme should be removed while accessing 
> via KNOX
> 
>
> Key: YARN-8119
> URL: https://issues.apache.org/jira/browse/YARN-8119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8119.001.patch
>
>
> While accessing UI2 behind Knox, the timeline server address comes with a URL 
> scheme.
> Since the UI already takes care of the URL scheme, this needs to be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8119) [UI2] Timeline Server address' url scheme should be removed while accessing via KNOX

2018-04-05 Thread Sunil G (JIRA)
Sunil G created YARN-8119:
-

 Summary: [UI2] Timeline Server address' url scheme should be 
removed while accessing via KNOX
 Key: YARN-8119
 URL: https://issues.apache.org/jira/browse/YARN-8119
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Sunil G
Assignee: Sunil G


While accessing UI2 behind Knox, the timeline server address comes with a URL 
scheme.

Since the UI already takes care of the URL scheme, this needs to be skipped.
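A minimal sketch of the normalization being described, in illustrative Java 
(the actual fix belongs in the UI2 JavaScript code):

{code:java}
// Illustrative only: strip a leading http:// or https:// scheme from the
// configured timeline server address, since the UI prepends its own scheme.
static String stripScheme(String timelineAddress) {
  return timelineAddress.replaceFirst("^https?://", "");
}
{code}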



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8055) Yarn service: revisit the version in service

2018-04-05 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426951#comment-16426951
 ] 

Billie Rinaldi commented on YARN-8055:
--

Related to this, I just noticed that the version field has not been added to 
the examples in the hadoop-yarn-site documentation yet.

> Yarn service: revisit the version in service
> 
>
> Key: YARN-8055
> URL: https://issues.apache.org/jira/browse/YARN-8055
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> A version field was added to service with YARN-7523. This version is 
> mandatory. 
> There are a couple of suggestions related to version:
> 1. make the version optional.
> 2. have a product_version and config_version.
> We need to revisit the version strategy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426763#comment-16426763
 ] 

genericqa commented on YARN-7875:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3409 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 264 unchanged - 0 fixed = 271 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7875 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917656/YARN-7875-YARN-3409.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname 

[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426756#comment-16426756
 ] 

genericqa commented on YARN-8107:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917665/YARN-8107.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ea6a13ac3740 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e52539b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20238/testReport/ |
| Max. process+thread count | 397 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20238/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE when incorrect format is used in ATSv2 filter 

[jira] [Commented] (YARN-8095) Allow disable non-exclusive allocation

2018-04-05 Thread kyungwan nam (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426751#comment-16426751
 ] 

kyungwan nam commented on YARN-8095:


Hi [~cheersyang]

My configs are as follows.
 * 'label1' is set as a non-exclusive node label
 * capacity-scheduler

{code:java}
yarn.scheduler.capacity.root.ordering-policy=priority-utilization
yarn.scheduler.capacity.ordering-policy.priority-utilization.underutilized-preemption.enabled=true
yarn.scheduler.capacity.reservations-continue-look-all-nodes=true
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
yarn.scheduler.capacity.root.accessible-node-labels.label1.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue= 
yarn.scheduler.capacity.root.acl_submit_applications= 
yarn.scheduler.capacity.root.acl_submit_queue=*
yarn.scheduler.capacity.root.queues=longlived,batch,label1
yarn.scheduler.capacity.root.capacity=100

yarn.scheduler.capacity.root.batch.accessible-node-labels= 
yarn.scheduler.capacity.root.batch.acl_submit_applications=*
yarn.scheduler.capacity.root.batch.capacity=50
yarn.scheduler.capacity.root.batch.maximum-applications=100
yarn.scheduler.capacity.root.batch.maximum-capacity=100
yarn.scheduler.capacity.root.batch.minimum-user-limit-percent=50
yarn.scheduler.capacity.root.batch.priority=0
yarn.scheduler.capacity.root.batch.state=RUNNING
yarn.scheduler.capacity.root.batch.user-limit-factor=2

yarn.scheduler.capacity.root.longlived.accessible-node-labels= 
yarn.scheduler.capacity.root.longlived.acl_submit_applications=*
yarn.scheduler.capacity.root.longlived.capacity=50
yarn.scheduler.capacity.root.longlived.maximum-capacity=50
yarn.scheduler.capacity.root.longlived.minimum-user-limit-percent=50
yarn.scheduler.capacity.root.longlived.priority=1
yarn.scheduler.capacity.root.longlived.state=RUNNING
yarn.scheduler.capacity.root.longlived.user-limit-factor=2

yarn.scheduler.capacity.root.label1.accessible-node-labels=label1
yarn.scheduler.capacity.root.label1.accessible-node-labels.label1.capacity=100
yarn.scheduler.capacity.root.label1.accessible-node-labels.label1.maximum-am-resource-percent=0.7
yarn.scheduler.capacity.root.label1.acl_submit_applications=*
yarn.scheduler.capacity.root.label1.capacity=0
yarn.scheduler.capacity.root.label1.default-node-label-expression=label1
yarn.scheduler.capacity.root.label1.maximum-am-resource-percent=0.7
yarn.scheduler.capacity.root.label1.maximum-applications=100
yarn.scheduler.capacity.root.label1.maximum-capacity=0
yarn.scheduler.capacity.root.label1.minimum-user-limit-percent=50
yarn.scheduler.capacity.root.label1.priority=0
yarn.scheduler.capacity.root.label1.state=RUNNING
yarn.scheduler.capacity.root.label1.user-limit-factor=2
{code}
How to reproduce:
 * In a situation where the 'batch' queue is fully used,
 * app1, which is submitted to the 'longlived' queue, is allocated to the 
label1 partition by non-exclusive allocation.
 * After that, if an app is submitted to the 'label1' queue, the app1 
containers placed in the 'label1' partition are preempted first.

> Allow disable non-exclusive allocation
> --
>
> Key: YARN-8095
> URL: https://issues.apache.org/jira/browse/YARN-8095
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.8.3
>Reporter: kyungwan nam
>Priority: Major
>
> We have a 'longlived' queue, which is used for long-lived apps.
> In situations where default partition resources are not enough, containers 
> for a long-lived app can be allocated to a sharable partition.
> From then on, containers of the long-lived app can easily be preempted.
> We don't want long-lived apps to be killed abruptly.
> Currently, non-exclusive allocation can happen regardless of whether the 
> queue is accessible to the sharable partition.
> It would be good if non-exclusive allocation could be disabled at the queue 
> level.
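A queue-level switch of the kind being requested might look like the following 
capacity-scheduler entry. The property name is purely hypothetical; no such 
setting exists today.

{code:java}
# Hypothetical property (illustration only): opt the 'longlived' queue out of
# non-exclusive allocation so its containers never land on sharable partitions.
yarn.scheduler.capacity.root.longlived.non-exclusive-allocation.enabled=false
{code}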



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426710#comment-16426710
 ] 

genericqa commented on YARN-8073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
56s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917638/YARN-8073-branch-2.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20237/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are:
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This 
> causes it not to write into the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has a 1.5 value. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271 checks for the version 
> number rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc: [~agresch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-05 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8107:

Attachment: YARN-8107.001.patch

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8107.001.patch
>
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter.doFilter(TimelineReaderWhitelistAuthorizationFilter.java:85)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> 

[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426655#comment-16426655
 ] 

genericqa commented on YARN-8073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
51s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917638/YARN-8073-branch-2.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20236/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are:
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This 
> causes it not to write into the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has a 1.5 value. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271 checks for the version 
> number rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc: [~agresch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-05 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426630#comment-16426630
 ] 

Bibin A Chundatt commented on YARN-8117:


Committed to YARN-3409. Thank you [~sunilg] for the review.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Fix For: YARN-3409
>
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7875) AttributeStore for store and recover attributes

2018-04-05 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7875:
---
Attachment: YARN-7875-YARN-3409.008.patch

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch, 
> YARN-7875-YARN-3409.008.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.
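A hedged sketch of the kind of store interface being described; the names, 
including the NodeToAttributes record type, are illustrative assumptions, not 
necessarily what the patch implements.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

/** Hypothetical sketch: persist and replay node-to-attribute mappings,
 *  mirroring how the NodeLabelStore recovers labels on RM restart. */
public interface NodeAttributeStore extends Closeable {
  /** Append an "add" operation for the given node-to-attribute mappings. */
  void addNodeAttributes(List<NodeToAttributes> mappings) throws IOException;
  /** Append a "remove" operation for the given mappings. */
  void removeNodeAttributes(List<NodeToAttributes> mappings) throws IOException;
  /** Append a "replace" operation for the given mappings. */
  void replaceNodeAttributes(List<NodeToAttributes> mappings) throws IOException;
  /** Replay the persisted operations into the attributes manager on recovery. */
  void recover() throws IOException;
}
{code}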



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-04-05 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426625#comment-16426625
 ] 

Bibin A Chundatt commented on YARN-7875:


Thank you [~sunilg]

# Handled the checkstyle issues.
# Regarding the other testcase failures, they look random. In the last 
YARN-8117 run all testcases passed. Locally I also did not find any failures.

{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:3399:
  /**: First sentence should end with a period. [JavadocStyle]
{noformat}
Not part of this code change.
{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java:61:
registerLog(NODE_ATTRIBUTE, AddNodeToAttributeLogOp.OPCODE, 
AddNodeToAttributeLogOp.class);: Line is longer than 80 characters (found 95). 
[LineLength]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java:62:
registerLog(NODE_ATTRIBUTE, RemoveNodeToAttributeLogOp.OPCODE, 
RemoveNodeToAttributeLogOp.class);: Line is longer than 80 characters (found 
101). [LineLength]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/FSStoreOpHandler.java:63:
registerLog(NODE_ATTRIBUTE, ReplaceNodeToAttributeLogOp.OPCODE, 
ReplaceNodeToAttributeLogOp.class);: Line is longer than 80 characters (found 
103). [LineLength]
{noformat}
As discussed offline, it's better to keep these on the same line.
{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/op/AddNodeToAttributeLogOp.java:0::
 Missing package-info.java file. [JavadocPackage]
{noformat}
Added package-info file
{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NodeAttributesManagerImpl.java:73:
  protected Dispatcher dispatcher;:24: Variable 'dispatcher' must be private 
and have accessor methods. [VisibilityModifier]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/NodeAttributesManagerImpl.java:74:
  protected NodeAttributeStore store;:32: Variable 'store' must be private and 
have accessor methods. [VisibilityModifier]
{noformat}
Kept package-private for testcase purposes.
{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestFileSystemNodeAttributeStore.java:22:import
 org.apache.hadoop.net.Node;:8: Unused import - org.apache.hadoop.net.Node. 
[UnusedImports]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestFileSystemNodeAttributeStore.java:40:public
 class TestFileSystemNodeAttributeStore {: Missing a Javadoc comment. 
[JavadocType]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestFileSystemNodeAttributeStore.java:42:
  MockNodeAttrbuteManager mgr = null;:27: Variable 'mgr' must be private and 
have accessor methods. [VisibilityModifier]
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestFileSystemNodeAttributeStore.java:43:
  Configuration conf = null;:17: Variable 'conf' must be private and have 
accessor methods. [VisibilityModifier]
{noformat}
Handled the same.

Thank you

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch, 
> YARN-7875-YARN-3409.008.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-05 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426619#comment-16426619
 ] 

Bibin A Chundatt commented on YARN-8117:


[~sunilg]

{quote}
may be because of the rebase to trunk last day
{quote}
Yes 


> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426594#comment-16426594
 ] 

Sunil G commented on YARN-8117:
---

Yes. This has happened post rebase.

Some additional items were added in trunk, so with the current branch change we 
have to bump up the info count by one more. Thanks [~bibinchundatt]. 

Please feel free to commit.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426538#comment-16426538
 ] 

Sunil G edited comment on YARN-8117 at 4/5/18 7:45 AM:
---

It's still interesting why YARN-8092 did not fail any test case when it was 
committed. It would be really great to know why this started failing now 
(maybe because of the rebase to trunk yesterday?)

Patch seems fine. +1 to fix these test errors.


was (Author: sunilg):
Its still interesting why YARN-8092 is not failing any test case when it was 
committed. It ll be really great why this started failing now (may be because 
of the rebase to trunk last day?)

Patch seems fine. +1 to fix these test error.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426540#comment-16426540
 ] 

Sunil G commented on YARN-7875:
---

Thanks [~bibinchundatt]

I also wanted to check Jenkins, hence was holding off.

A last set of minor issues to clear before getting this in:

1. 
[https://builds.apache.org/job/PreCommit-YARN-Build/20185/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt]
 has reported a bunch of unused imports, javadoc and line-length issues which 
seem fixable. Could you please help with those?

2. For the other test cases, are the failures random? Do we have some JIRAs 
tracking those? If not, could you please raise one to track them as well so 
that we will not lose track of them.

Thank you.

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426539#comment-16426539
 ] 

genericqa commented on YARN-6936:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 43m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
38s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426538#comment-16426538
 ] 

Sunil G commented on YARN-8117:
---

It's still interesting that YARN-8092 did not fail any test case when it was 
committed. It would be really great to know why this started failing now 
(maybe because of the rebase to trunk yesterday?).

Patch seems fine. +1 to fix these test errors.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix the conflict in TestRMWebServices due to YARN-7792 and YARN-8092.
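
To make the nature of the conflict concrete, here is a hypothetical sketch 
(not the actual TestRMWebServices code) of how two independently correct 
commits can break such a test: one commit pins the exact field count of a 
response, and a parallel commit adds a field.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration: a test that asserts on the exact number of
// fields in a web-services response fails as soon as a parallel commit
// adds a new field, even though each change is correct in isolation.
public class FieldCountConflictSketch {
  public static void main(String[] args) {
    Map<String, Object> clusterInfo = new LinkedHashMap<>();
    clusterInfo.put("id", 1L);
    clusterInfo.put("state", "STARTED");
    clusterInfo.put("haState", "ACTIVE");
    // A parallel commit adds a new field to the response...
    clusterInfo.put("someNewField", "value");

    // ...while the existing test still asserts the old field count, so the
    // rebased branch fails even though neither change is wrong on its own.
    int expectedFields = 3;
    if (clusterInfo.size() != expectedFields) {
      System.out.println("incorrect number of elements: expected "
          + expectedFields + " but found " + clusterInfo.size());
    }
  }
}
{code}

The usual fix is to update the test's expectations to match the merged 
response.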



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org