[jira] [Commented] (YARN-4612) Fix rumen and scheduler load simulator to handle killed tasks properly
[ https://issues.apache.org/jira/browse/YARN-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016909#comment-16016909 ] Ye Zhou commented on YARN-4612: --- [~shv] Patch applies to branch 2.7. Build and Unit Tests passed. > Fix rumen and scheduler load simulator to handle killed tasks properly > --- > > Key: YARN-4612 > URL: https://issues.apache.org/jira/browse/YARN-4612 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Ming Ma >Assignee: Ming Ma > Labels: release-blocker > Fix For: 2.9.0, 3.0.0-alpha1 > > Attachments: YARN-4612-2.patch, YARN-4612.patch > > > Killed tasks might not have any attempts. Rumen and SLS throw exceptions when > processing such data. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
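[Editor's illustration] The failure mode described above (a killed task that carries no attempts) can also be worked around by pre-filtering the trace before feeding it to rumen/SLS. The sketch below is hypothetical and not part of the YARN-4612 patch; the field names (`mapTasks`, `reduceTasks`, `attempts`) are assumptions modeled on the rumen job-trace JSON layout and should be checked against actual trace files.

```python
import json

def drop_attemptless_tasks(job):
    """Remove task entries that have no recorded attempts (e.g. killed
    tasks), the case that trips rumen and SLS. Field names are assumed
    from the rumen job-trace JSON layout -- verify against your traces."""
    for key in ("mapTasks", "reduceTasks"):
        tasks = job.get(key) or []
        # keep only tasks that have at least one attempt
        job[key] = [t for t in tasks if t.get("attempts")]
    return job

# Tiny synthetic job: one normal map task, one killed task without attempts.
job = {
    "jobID": "job_0001",
    "mapTasks": [
        {"taskID": "task_m_0", "attempts": [{"attemptID": "attempt_m_0_0"}]},
        {"taskID": "task_m_1", "attempts": []},  # killed before any attempt
    ],
    "reduceTasks": None,
}
print(json.dumps(drop_attemptless_tasks(job)["mapTasks"]))
```

The actual fix in the patch presumably guards inside rumen/SLS itself; this is only a trace-side sanity filter.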
[jira] [Updated] (YARN-4476) Matcher for complex node label expressions
[ https://issues.apache.org/jira/browse/YARN-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated YARN-4476: Attachment: (was: YARN-4476.005.patch) > Matcher for complex node label expressions > - > > Key: YARN-4476 > URL: https://issues.apache.org/jira/browse/YARN-4476 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Chris Douglas >Assignee: Chris Douglas > Labels: oct16-medium > Attachments: YARN-4476.003.patch, YARN-4476.004.patch, > YARN-4476.005.patch, YARN-4476-0.patch, YARN-4476-1.patch, YARN-4476-2.patch > > > Implementation of a matcher for complex node label expressions based on a > [paper|http://dl.acm.org/citation.cfm?id=1807171] from SIGMOD 2010.
[jira] [Updated] (YARN-4476) Matcher for complex node label expressions
[ https://issues.apache.org/jira/browse/YARN-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated YARN-4476: Attachment: YARN-4476.005.patch > Matcher for complex node label expressions > - > > Key: YARN-4476 > URL: https://issues.apache.org/jira/browse/YARN-4476 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Chris Douglas >Assignee: Chris Douglas > Labels: oct16-medium > Attachments: YARN-4476.003.patch, YARN-4476.004.patch, > YARN-4476.005.patch, YARN-4476-0.patch, YARN-4476-1.patch, YARN-4476-2.patch > > > Implementation of a matcher for complex node label expressions based on a > [paper|http://dl.acm.org/citation.cfm?id=1807171] from SIGMOD 2010.
[jira] [Commented] (YARN-1471) The SLS simulator is not running the preemption policy for CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016887#comment-16016887 ] Ye Zhou commented on YARN-1471: --- [~shv] Attached patch for 2.7 branch. Build and Tests passed locally. Trigger Jenkins. > The SLS simulator is not running the preemption policy for CapacityScheduler > > > Key: YARN-1471 > URL: https://issues.apache.org/jira/browse/YARN-1471 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino >Priority: Minor > Labels: release-blocker > Fix For: 3.0.0-alpha1 > > Attachments: SLSCapacityScheduler.java, YARN-1471.2.patch, > YARN-1471-branch-2.7.4.patch, YARN-1471.patch, YARN-1471.patch > > > The simulator does not run the ProportionalCapacityPreemptionPolicy monitor. > This is because the policy needs to interact with a CapacityScheduler, and > the wrapping done by the simulator breaks this.
[jira] [Updated] (YARN-1471) The SLS simulator is not running the preemption policy for CapacityScheduler
[ https://issues.apache.org/jira/browse/YARN-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ye Zhou updated YARN-1471: -- Attachment: YARN-1471-branch-2.7.4.patch > The SLS simulator is not running the preemption policy for CapacityScheduler > > > Key: YARN-1471 > URL: https://issues.apache.org/jira/browse/YARN-1471 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino >Priority: Minor > Labels: release-blocker > Fix For: 3.0.0-alpha1 > > Attachments: SLSCapacityScheduler.java, YARN-1471.2.patch, > YARN-1471-branch-2.7.4.patch, YARN-1471.patch, YARN-1471.patch > > > The simulator does not run the ProportionalCapacityPreemptionPolicy monitor. > This is because the policy needs to interact with a CapacityScheduler, and > the wrapping done by the simulator breaks this.
[jira] [Commented] (YARN-4367) SLS webapp doesn't load
[ https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016849#comment-16016849 ] Ye Zhou commented on YARN-4367: --- [~shv] Patch applies to 2.7.4. Build and Tests passed. > SLS webapp doesn't load > --- > > Key: YARN-4367 > URL: https://issues.apache.org/jira/browse/YARN-4367 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler-load-simulator >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Labels: release-blocker > Fix For: 2.8.0 > > Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, > YARN-4367-branch-2.patch > > > When I run the SLS, the webapp doesn't load and I see the following error: > {noformat} > 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: > org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper > java.lang.NullPointerException > at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87) > at > org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483) > at > org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181) > at > org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76) > at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299) > {noformat}
[jira] [Commented] (YARN-4302) SLS not able to start due to NPE in SchedulerApplicationAttempt#getResourceUsageReport
[ https://issues.apache.org/jira/browse/YARN-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016848#comment-16016848 ] Ye Zhou commented on YARN-4302: --- [~shv] Patch applies to 2.7.4. Build and tests passed. > SLS not able to start due to NPE in > SchedulerApplicationAttempt#getResourceUsageReport > --- > > Key: YARN-4302 > URL: https://issues.apache.org/jira/browse/YARN-4302 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt > Labels: release-blocker > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: 0001-YARN-4302.patch, 0001-YARN-4302.patch > > > Configure the samples from tools/sls > yarn-site.xml > capacityscheduler.xml > sls-runner.xml > to /etc/hadoop > Start sls using > > bin/slsrun.sh --input-rumen=sample-data/2jobs2min-rumen-jh.json > --output-dir=out > {noformat} > 15/10/27 14:43:36 ERROR resourcemanager.ResourceManager: Error in handling > event type ATTEMPT_ADDED for applicationAttempt application_1445937212593_0001 > java.lang.NullPointerException > at org.apache.hadoop.yarn.util.resource.Resources.clone(Resources.java:117) > at org.apache.hadoop.yarn.util.resource.Resources.multiply(Resources.java:151) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceUsageReport(SchedulerApplicationAttempt.java:692) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getAppResourceUsageReport(AbstractYarnScheduler.java:326) > at > org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.getAppResourceUsageReport(ResourceSchedulerWrapper.java:912) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:121) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.storeNewApplicationAttempt(RMStateStore.java:819) > at > 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.storeAttempt(RMAppAttemptImpl.java:2011) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.access$2700(RMAppAttemptImpl.java:109) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$ScheduleTransition.transition(RMAppAttemptImpl.java:1021) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$ScheduleTransition.transition(RMAppAttemptImpl.java:974) > at > org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385) > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:839) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:108) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:820) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:801) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109) > at java.lang.Thread.run(Thread.java:745) > {noformat}
[jira] [Comment Edited] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart
[ https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016841#comment-16016841 ] Rohith Sharma K S edited comment on YARN-6555 at 5/19/17 3:17 AM: -- [~vrushalic] If you have not started progress, would you mind if I take over this? Since this is causing NM recovery failure, I feel this is a blocker for YARN-5355 branch merge. was (Author: rohithsharma): [~vrushalic] If you have not started progress, would you mind if you take over this? Since this is causing NM recovery failure, I feel this is a blocker for YARN-5355 branch merge. > Enable flow context read (& corresponding write) for recovering application > with NM restart > > > Key: YARN-6555 > URL: https://issues.apache.org/jira/browse/YARN-6555 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Vrushali C >Assignee: Vrushali C > > If timeline service v2 is enabled and NM is restarted with recovery enabled, > then NM fails to start and throws an error as "flow context can't be null". > This is happening because the flow context did not exist before but now that > timeline service v2 is enabled, ApplicationImpl expects it to exist. > This would also happen even if flow context existed before but since we are > not persisting it / reading it during > ContainerManagerImpl#recoverApplication, it does not get passed in to > ApplicationImpl. 
> full stack trace > {code} > 2017-05-03 21:51:52,178 FATAL > org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting > NodeManager > java.lang.IllegalArgumentException: flow context cannot be null > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.<init>(ApplicationImpl.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.<init>(ApplicationImpl.java:90) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649) > {code}
[jira] [Commented] (YARN-6555) Enable flow context read (& corresponding write) for recovering application with NM restart
[ https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016841#comment-16016841 ] Rohith Sharma K S commented on YARN-6555: - [~vrushalic] If you have not started progress, would you mind if I take over this? Since this is causing NM recovery failure, I feel this is a blocker for YARN-5355 branch merge. > Enable flow context read (& corresponding write) for recovering application > with NM restart > > > Key: YARN-6555 > URL: https://issues.apache.org/jira/browse/YARN-6555 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Vrushali C >Assignee: Vrushali C > > If timeline service v2 is enabled and NM is restarted with recovery enabled, > then NM fails to start and throws an error as "flow context can't be null". > This is happening because the flow context did not exist before but now that > timeline service v2 is enabled, ApplicationImpl expects it to exist. > This would also happen even if flow context existed before but since we are > not persisting it / reading it during > ContainerManagerImpl#recoverApplication, it does not get passed in to > ApplicationImpl. 
> full stack trace > {code} > 2017-05-03 21:51:52,178 FATAL > org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting > NodeManager > java.lang.IllegalArgumentException: flow context cannot be null > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.<init>(ApplicationImpl.java:104) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.<init>(ApplicationImpl.java:90) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) > at > org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276) > at > org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588) > at > org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649) > {code}
[jira] [Commented] (YARN-6111) Rumen input doesn't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016791#comment-16016791 ] Hadoop QA commented on YARN-6111: -
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 45s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 17s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6111 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868827/YARN-6111.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit |
| uname | Linux b38ff0fd8083 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b4adc83 |
| Default Java | 1.8.0_131 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15967/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15967/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated. 
> Rumen input doesn't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace with web:http://localhost:10001/simulate > Can someone help? > Thanks
[jira] [Commented] (YARN-6111) Rumen input doesn't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016768#comment-16016768 ] Carlo Curino commented on YARN-6111: Thanks [~yufeigu]. I am +1 on this, subject to a clean Yetus run. (I marked the patch as submitted to kick that off.) > Rumen input doesn't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace with web:http://localhost:10001/simulate > Can someone help? > Thanks
[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler
[ https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016765#comment-16016765 ] Carlo Curino commented on YARN-6599: [~leftnoteasy] I am not sure I understand this JIRA. I thought we discussed this offline with [~vinodkv], [~chris.douglas], [~kkaranasos], [~subru] and [~asuresh], and agreed that we were *not* going to add extra stuff inside the scheduler to support constraints, as the scheduler(s) code is becoming untenable to maintain and scale. I think it would be best to keep the constraints management code in the scheduler pre-processor / {{PlacementConstraintManager}} so that the feature is orthogonal to the scheduler(s) and does not add complexity there. Can you explain what has changed since that discussion? I noticed that in the document you also stated there is no agreement on this, while I thought it was (after a lengthy discussion) a settled issue. > Support rich placement constraints in scheduler > --- > > Key: YARN-6599 > URL: https://issues.apache.org/jira/browse/YARN-6599 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >
[jira] [Commented] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016761#comment-16016761 ] Arun Suresh commented on YARN-6614: --- [~leftnoteasy], thanks for chiming in. So, when we designed the DistributedSchedulingProtocol, I agree it was to ensure that it does not affect the existing protocol. But the drawback of the existing design is that the DS protocol adds extra methods. This unfortunately complicates both the AMRMProxy RequestInterceptors on the NM as well as the work on YARN-6355. The root cause of the problem is that the version of protobuf we have does not support inheritance / extensions. That would have allowed us to just extend the Request and Response objects rather than the protocol itself. > Deprecate DistributedSchedulingProtocol and add required fields directly to > ApplicationMasterProtocol > - > > Key: YARN-6614 > URL: https://issues.apache.org/jira/browse/YARN-6614 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-6614.001.patch, YARN-6614.002.patch > > > The {{DistributedSchedulingProtocol}} was initially designed as a wrapper > protocol over the {{ApplicationMasterProtocol}}. > This JIRA proposes to deprecate the protocol itself and move the extra fields > of the {{RegisterDistributedSchedulingAMResponse}} and > {{DistributedSchedulingAllocateResponse}} to the > {{RegisterApplicationMasterResponse}} and {{AllocateResponse}} respectively. > This will simplify the code quite a bit and make it easier to expose it as a > preprocessor.
[jira] [Updated] (YARN-4109) Exception on RM scheduler page loading with labels
[ https://issues.apache.org/jira/browse/YARN-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated YARN-4109: -- Labels: (was: release-blocker) Fix Version/s: 2.7.4 Just committed this to branch-2.7. > Exception on RM scheduler page loading with labels > -- > > Key: YARN-4109 > URL: https://issues.apache.org/jira/browse/YARN-4109 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Mohammad Shahid Khan >Priority: Minor > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: YARN-4109_1.patch > > > Configure node label and load scheduler Page > On each reload of the page the below exception gets thrown in logs > {code} > 2015-09-03 11:27:08,544 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:139) > at > 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795) > at > com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:663) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:291) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615) > at > org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1211) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) > at > org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) > at > org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) > at org.mortbay.jetty.Server.handle(Server.java:326) > at >
[jira] [Created] (YARN-6626) Embed REST API service into RM
Gour Saha created YARN-6626: --- Summary: Embed REST API service into RM Key: YARN-6626 URL: https://issues.apache.org/jira/browse/YARN-6626 Project: Hadoop YARN Issue Type: Sub-task Reporter: Gour Saha Fix For: yarn-native-services As of now, the deployment model of the Native Services REST API service is standalone. There are several cross-cutting solutions that could be inherited for free (Kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API service if it were embedded into the RM process. In fact, we can expose the REST API via the same port as the RM UI (8088 by default). The URI path /services/v1/applications will distinguish the REST API calls from other RM APIs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
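The path-based dispatch the proposal relies on (one port, with /services/v1/applications marking Native Services calls) can be sketched as below. This is an illustrative sketch only, not actual RM code; the helper name is hypothetical, and only the URI prefix comes from the issue text.

```python
# Prefix proposed in YARN-6626 for the embedded Native Services REST API.
SERVICES_API_PREFIX = "/services/v1/applications"

def is_native_services_call(uri_path: str) -> bool:
    """Return True when a request URI targets the embedded Native Services
    REST API rather than one of the existing RM endpoints.
    Hypothetical helper illustrating path-based dispatch on a shared port."""
    return (uri_path == SERVICES_API_PREFIX
            or uri_path.startswith(SERVICES_API_PREFIX + "/"))

print(is_native_services_call("/services/v1/applications/my-app"))  # True
print(is_native_services_call("/ws/v1/cluster/apps"))               # False
```

With such a check, requests on port 8088 could be routed either to the embedded service or to the existing RM web services without a second listener.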
[jira] [Comment Edited] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016512#comment-16016512 ] Yufei Gu edited comment on YARN-6111 at 5/18/17 10:46 PM: -- Most likely {{JobTraceReader}} changed the way it reads the trace files: it reads JSON objects one by one instead of reading them as an array. So my patch v1 removes the array brackets around the job JSON objects, and it works. Besides, the example rumen file is supposed to contain two jobs, not three, as its name indicates, so I removed the last invalid job configuration as well. I changed the rumen trace file from: {code} [{job1}, {job2}, {job3}] {code} to: {code} {job1} {job2} {code} BTW, I tested this trace file and it works well. was (Author: yufeigu): Mostly like {{JobTraceReader}} changed its way to read the trace files. It reads json objects one by one instead of reading them as an array. So my patch v1 removes the array mark of all jobs json object, and it works. Besides, the example rumen file is supposed to container two jobs instead three jobs indicated by its name. I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} To: {code} {job1} {job2} {code} BTW, I tested this trace file. It works very well. > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. 
This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
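The one-object-at-a-time reading Yufei describes (concatenated JSON objects instead of a single top-level array) can be sketched in Python with `json.JSONDecoder.raw_decode`. This is only an illustration of the trace-file layout change; the real {{JobTraceReader}} is Java code and works differently, and the sample job objects are placeholders.

```python
import json

def read_concatenated_json(text):
    """Yield JSON objects from text that concatenates them back to back
    (the post-change rumen trace layout) rather than wrapping them in
    one top-level array (the old layout)."""
    decoder = json.JSONDecoder()
    idx, n = 0, len(text)
    while idx < n:
        # Skip whitespace separating the objects.
        while idx < n and text[idx].isspace():
            idx += 1
        if idx >= n:
            break
        obj, end = decoder.raw_decode(text, idx)
        yield obj
        idx = end

# Old layout: one array holding every job; a single json.loads call works.
old_jobs = json.loads('[{"jobID": "job1"}, {"jobID": "job2"}]')

# New layout: one object after another, no surrounding brackets.
new_trace = '{"jobID": "job1"}\n{"jobID": "job2"}\n'
jobs = list(read_concatenated_json(new_trace))
print([j["jobID"] for j in jobs])  # -> ['job1', 'job2']
```

A reader built for the new layout fails on the old array form (and vice versa), which matches the symptom reported in this issue.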
[jira] [Comment Edited] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016512#comment-16016512 ] Yufei Gu edited comment on YARN-6111 at 5/18/17 10:45 PM: -- Most likely {{JobTraceReader}} changed the way it reads the trace files: it reads JSON objects one by one instead of reading them as an array. So my patch v1 removes the array brackets around the job JSON objects, and it works. Besides, the example rumen file is supposed to contain two jobs, not three, as its name indicates, so I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} to: {code} {job1} {job2} {code} BTW, I tested this trace file and it works well. was (Author: yufeigu): Mostly like {{JobTraceReader}} changed its way to read the trace files. It reads json objects one by one instead of reading them as an array. So my patch v1 removes the array mark of all jobs json object, and it works. Besides, the example rumen file is supposed to container two jobs instead three jobs indicated by its name. I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} {code} {job1} {job2} {code} BTW, I tested this trace file. It works very well. > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. 
This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4109) Exception on RM scheduler page loading with labels
[ https://issues.apache.org/jira/browse/YARN-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016522#comment-16016522 ] Jonathan Hung commented on YARN-4109: - [~shv] this applies cleanly to branch-2.7. Can we backport it there? Thanks! > Exception on RM scheduler page loading with labels > -- > > Key: YARN-4109 > URL: https://issues.apache.org/jira/browse/YARN-4109 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Mohammad Shahid Khan >Priority: Minor > Labels: release-blocker > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: YARN-4109_1.patch > > > Configure node label and load scheduler Page > On each reload of the page the below exception gets thrown in logs > {code} > 2015-09-03 11:27:08,544 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834) > at > 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:139) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795) > at > com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:663) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:291) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615) > at > org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1211) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) > at > org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) > at > org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) > at org.mortbay.jetty.Server.handle(Server.java:326) >
[jira] [Comment Edited] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016512#comment-16016512 ] Yufei Gu edited comment on YARN-6111 at 5/18/17 9:50 PM: - Most likely {{JobTraceReader}} changed the way it reads the trace files: it reads JSON objects one by one instead of reading them as an array. So my patch v1 removes the array brackets around the job JSON objects, and it works. Besides, the example rumen file is supposed to contain two jobs, not three, as its name indicates, so I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} to: {code} {job1} {job2} {code} BTW, I tested this trace file and it works well. was (Author: yufeigu): Mostly like {{JobTraceReader}} changed its way to read the trace files. It reads json objects one by one instead of reading them as an array. So my patch v1 removes the array mark of all jobs json object, and it works. Besides, the example rumen file is supposed to container two jobs instead three jobs indicated by its name. I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} {code} {job1} {job2} {code} > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. 
This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4109) Exception on RM scheduler page loading with labels
[ https://issues.apache.org/jira/browse/YARN-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-4109: Attachment: YARN-4109-branch-2.7.001.patch > Exception on RM scheduler page loading with labels > -- > > Key: YARN-4109 > URL: https://issues.apache.org/jira/browse/YARN-4109 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Mohammad Shahid Khan >Priority: Minor > Labels: release-blocker > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: YARN-4109_1.patch > > > Configure node label and load scheduler Page > On each reload of the page the below exception gets thrown in logs > {code} > 2015-09-03 11:27:08,544 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:139) > at > 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795) > at > com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:663) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:291) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615) > at > org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1211) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) > at > org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) > at > org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) > at org.mortbay.jetty.Server.handle(Server.java:326) > at >
[jira] [Commented] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016512#comment-16016512 ] Yufei Gu commented on YARN-6111: Most likely {{JobTraceReader}} changed the way it reads the trace files: it reads JSON objects one by one instead of reading them as an array. So my patch v1 removes the array brackets around the job JSON objects, and it works. Besides, the example rumen file is supposed to contain two jobs, not three, as its name indicates, so I removed the last invalid job configuration as well. Change from: {code} [{job1}, {job2}, {job3}] {code} to: {code} {job1} {job2} {code} > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-4109) Exception on RM scheduler page loading with labels
[ https://issues.apache.org/jira/browse/YARN-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-4109: Attachment: (was: YARN-4109-branch-2.7.001.patch) > Exception on RM scheduler page loading with labels > -- > > Key: YARN-4109 > URL: https://issues.apache.org/jira/browse/YARN-4109 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt >Assignee: Mohammad Shahid Khan >Priority: Minor > Labels: release-blocker > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: YARN-4109_1.patch > > > Configure node label and load scheduler Page > On each reload of the page the below exception gets thrown in logs > {code} > 2015-09-03 11:27:08,544 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:139) > at > 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795) > at > com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:663) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:291) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615) > at > org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1211) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) > at > org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) > at > org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) > at org.mortbay.jetty.Server.handle(Server.java:326) > at >
[jira] [Assigned] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu reassigned YARN-6111: -- Assignee: Yufei Gu > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6111) Rumen input does't work in SLS
[ https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6111: --- Attachment: YARN-6111.001.patch > Rumen input does't work in SLS > -- > > Key: YARN-6111 > URL: https://issues.apache.org/jira/browse/YARN-6111 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2 > Environment: ubuntu14.0.4 os >Reporter: YuJie Huang >Assignee: Yufei Gu > Labels: test > Attachments: YARN-6111.001.patch > > > Hi guys, > I am trying to learn the use of SLS. > I would like to get the file realtimetrack.json, but it only > contains "[]" at the end of a simulation. This is the command I use to > run the instance: > HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json > --output-dir=sample-data > All other files, including metrics, appear to be properly populated. I can > also trace it on the web at http://localhost:10001/simulate > Can someone help? > Thanks -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016502#comment-16016502 ] Robert Kanter commented on YARN-6602: - In fact, you wouldn't want to re-use the proxy object among multiple users, unless you want userA to unknowingly submit things as userB :) > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch, YARN-6602.002.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. On a hunch, we found a workaround: > explicitly adding a correct RM HA delegation token fixes the problem: > {code:java} > org.apache.hadoop.yarn.api.records.Token token = > yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf)); > org.apache.hadoop.security.token.Token token2 = new > org.apache.hadoop.security.token.Token(token.getIdentifier().array(), > token.getPassword().array(), new Text(token.getKind()), new > Text(token.getService())); > UserGroupInformation.getCurrentUser().addToken(token2); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first
[ https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016493#comment-16016493 ] Karthik Kambatla commented on YARN-6602: +1 on the latest patch. Let us wait for a day or two for [~jianhe] or others to review/comment. Just to call out for anyone looking at the patch: This patch will lead to not re-using the proxy object among multiple users. We don't anticipate that to be a major problem. > Impersonation does not work if standby RM is contacted first > > > Key: YARN-6602 > URL: https://issues.apache.org/jira/browse/YARN-6602 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 3.0.0-alpha3 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Blocker > Attachments: YARN-6602.001.patch, YARN-6602.002.patch > > > When RM HA is enabled, impersonation does not work correctly if the Yarn > Client connects to the standby RM first. When this happens, the > impersonation is "lost" and the client does things on behalf of the > impersonator user. We saw this with the OOZIE-1770 Oozie on Yarn feature. > I need to investigate this some more, but it appears to be related to > delegation tokens. When this issue occurs, the tokens have the owner as > "oozie" instead of the actual user. 
On a hunch, we found a workaround: > explicitly adding a correct RM HA delegation token fixes the problem: > {code:java} > org.apache.hadoop.yarn.api.records.Token token = > yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf)); > org.apache.hadoop.security.token.Token token2 = new > org.apache.hadoop.security.token.Token(token.getIdentifier().array(), > token.getPassword().array(), new Text(token.getKind()), new > Text(token.getService())); > UserGroupInformation.getCurrentUser().addToken(token2); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment
[ https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016302#comment-16016302 ] Yufei Gu edited comment on YARN-6625 at 5/18/17 8:41 PM: - Patch v1 makes the RM admin service support tokens the way the RM scheduler service does. [~daryn], can you take a look? was (Author: yufeigu): Patch v1 provides a solution to makes RM admin service to support token like RM scheduler service does. > yarn application -list returns a tracking URL for AM that doesn't work in > secured and HA environment > > > Key: YARN-6625 > URL: https://issues.apache.org/jira/browse/YARN-6625 > Project: Hadoop YARN > Issue Type: Bug > Components: amrmproxy >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6625.001.patch > > > The tracking URL given at the command line should work whether the cluster is secured or not. The > tracking URLs look like http://node-2.abc.com:47014, and the AM web server is supposed > to redirect them to an RM address like > http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it > fails to do that because the connection is rejected when the AM talks to the RM > admin service to get the HA status. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment
[ https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016302#comment-16016302 ] Yufei Gu commented on YARN-6625: Patch v1 makes the RM admin service support tokens the way the RM scheduler service does. > yarn application -list returns a tracking URL for AM that doesn't work in > secured and HA environment > > > Key: YARN-6625 > URL: https://issues.apache.org/jira/browse/YARN-6625 > Project: Hadoop YARN > Issue Type: Bug > Components: amrmproxy >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6625.001.patch > > > The tracking URL given at the command line should work whether the cluster is secured or not. The > tracking URLs look like http://node-2.abc.com:47014, and the AM web server is supposed > to redirect them to an RM address like > http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it > fails to do that because the connection is rejected when the AM talks to the RM > admin service to get the HA status. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6585) RM fails to start when upgrading from 2.7 to 2.8 for clusters with node labels.
[ https://issues.apache.org/jira/browse/YARN-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-6585: -- Attachment: YARN-6585.0002.patch {{AddToClusterNodeLabelsRequestPBImpl.initLocalNodeLabels()}} was not handling deprecated fields. Thanks [~leftnoteasy] for clarifying the same. Attaching a new patch which handles deprecated labels in string format. However I also had to add a newInstance method in AddToClusterNodeLabelsRequest to accept labels as strings. Added a test case as well. [~leftnoteasy], could you please take a look? > RM fails to start when upgrading from 2.7 to 2.8 for clusters with node > labels. > --- > > Key: YARN-6585 > URL: https://issues.apache.org/jira/browse/YARN-6585 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Eric Payne >Assignee: Sunil G >Priority: Blocker > Attachments: YARN-6585.0001.patch, YARN-6585.0002.patch > > > {noformat} > Caused by: java.io.IOException: Not all labels being replaced contained by > known label collections, please check, new labels=[abc] > at > org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.checkReplaceLabelsOnNode(CommonNodeLabelsManager.java:718) > at > org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.replaceLabelsOnNode(CommonNodeLabelsManager.java:737) > at > org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.replaceLabelsOnNode(RMNodeLabelsManager.java:189) > at > org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.loadFromMirror(FileSystemNodeLabelsStore.java:181) > at > org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore.recover(FileSystemNodeLabelsStore.java:208) > at > org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.initNodeLabelStore(CommonNodeLabelsManager.java:251) > at > org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager.serviceStart(CommonNodeLabelsManager.java:265) > at > org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > ... 
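For context on the compatibility issue above: a 2.7-era node label mirror stores labels as bare strings, while 2.8 expects structured NodeLabel objects. The sketch below is a minimal, self-contained illustration of the fallback idea only; the `Label` class and `fromDeprecatedStrings` method are hypothetical stand-ins, not the actual YARN classes touched by this patch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for YARN's NodeLabel (name + exclusivity flag).
final class Label {
    final String name;
    final boolean exclusive;
    Label(String name, boolean exclusive) { this.name = name; this.exclusive = exclusive; }
}

public class DeprecatedLabelCompat {
    // Sketch of the compatibility path: on recovery, fall back to the
    // deprecated string field and promote each bare name to a label object
    // with the default exclusivity, so old labels stay known to the manager.
    static List<Label> fromDeprecatedStrings(List<String> names) {
        List<Label> labels = new ArrayList<>();
        for (String n : names) {
            labels.add(new Label(n, true)); // default exclusivity
        }
        return labels;
    }

    public static void main(String[] args) {
        List<Label> labels = fromDeprecatedStrings(List.of("abc", "gpu"));
        System.out.println(labels.size() + " labels recovered; first=" + labels.get(0).name);
    }
}
```

The real fix additionally wires this through the PB record's init path; this sketch only shows the string-to-object promotion.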
13 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016252#comment-16016252 ] Hadoop QA commented on YARN-6614: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 1s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 12 new + 123 unchanged - 28 fixed = 135 total (was 151) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 26s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 227 unchanged - 4 fixed = 227 total (was 231) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s{color} | {color:green}
[jira] [Updated] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment
[ https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6625: --- Attachment: YARN-6625.001.patch > yarn application -list returns a tracking URL for AM that doesn't work in > secured and HA environment > > > Key: YARN-6625 > URL: https://issues.apache.org/jira/browse/YARN-6625 > Project: Hadoop YARN > Issue Type: Bug > Components: amrmproxy >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6625.001.patch > > > The tracking URL given at the command line should work secured or not. The > tracking URLs are like http://node-2.abc.com:47014 and AM web server supposed > to redirect it to a RM address like this > http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it > fails to do that because the connection is rejected when AM is talking to RM > admin service to get HA status. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016215#comment-16016215 ] Haibo Chen commented on YARN-5705: -- Thanks [~akhilpb] for the update! I played with the latest patch on an ATSv2-enabled YARN cluster. The UI looks good. One issue I noticed is that the CPU vcores & memory used are always zero for a flow run. > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment
[ https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6625: --- Description: The tracking URL given at the command line should work secured or not. The tracking URLs are like http://node-2.abc.com:47014 and AM web server supposed to redirect it to a RM address like this http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it fails to do that because the connection is rejected when AM is talking to RM admin service to get HA status. was: The tracking URL given at the command line should work secured or not. The tracking URLs are like http://node-2.abc.com:47014 and AM web server supposed to redirect it to a RM address like this http://node-1.abc.com:8088/proxy/application_1494544954891_0002/. AM web server cannot redirect the tracking URL to RM because the connection is rejected when AM is talking to RM admin service to get HA status. > yarn application -list returns a tracking URL for AM that doesn't work in > secured and HA environment > > > Key: YARN-6625 > URL: https://issues.apache.org/jira/browse/YARN-6625 > Project: Hadoop YARN > Issue Type: Bug > Components: amrmproxy >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > > The tracking URL given at the command line should work secured or not. The > tracking URLs are like http://node-2.abc.com:47014 and AM web server supposed > to redirect it to a RM address like this > http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it > fails to do that because the connection is rejected when AM is talking to RM > admin service to get HA status. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6593) [API] Introduce Placement Constraint object
[ https://issues.apache.org/jira/browse/YARN-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016212#comment-16016212 ] Panagiotis Garefalakis commented on YARN-6593: -- [~kkaranasos] thanks for the patch! The discussion totally makes sense to me. Some comments: * Totally agree on using a more object-oriented way of representing both PlacementConstraint -> CompoundPlacementConstraint/SimplePlacementConstraint and SimplePlacementConstraint -> TargetConstraint/CardinalityConstraint. I think the main value of doing so is usability. * Protobuf extensions might also be something we could use. For example: {code:java} message TargetConstraintProto { extend SimplePlacementConstraintProto { required TargetConstraintProto constraint = 10; // Unique extension number } } message CardinalityConstraintProto { extend SimplePlacementConstraintProto { required CardinalityConstraintProto constraint = 11; // Unique extension number } } {code} * We will definitely need a validator implementation - also as a way to ensure users write constraints that make sense * I am also wondering if IN_ANY should be a separate **TargetOperator** - in a case like the C5 design-doc example we would avoid using any TargetValues. Panagiotis > [API] Introduce Placement Constraint object > --- > > Key: YARN-6593 > URL: https://issues.apache.org/jira/browse/YARN-6593 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Konstantinos Karanasos >Assignee: Konstantinos Karanasos > Attachments: YARN-6593.001.patch > > > This JIRA introduces an object for defining placement constraints. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
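On the validator point raised above: a first cut could simply reject constraints that are structurally unsatisfiable before they ever reach the scheduler. The sketch below is a hypothetical, heavily simplified model for illustration only; the real YARN-6593 placement constraint objects are richer, and `CardinalityConstraint` here is a local stand-in, not a YARN class.

```java
// Hypothetical, simplified cardinality constraint: "between min and max
// containers of some tag within a scope (e.g. node or rack)".
final class CardinalityConstraint {
    final String scope;
    final int min, max;
    CardinalityConstraint(String scope, int min, int max) {
        this.scope = scope; this.min = min; this.max = max;
    }
}

public class ConstraintValidator {
    // Reject constraints that can never be satisfied, so users get a clear
    // error at submission time instead of an application that never schedules.
    static boolean isValid(CardinalityConstraint c) {
        if (c.scope == null || c.scope.isEmpty()) return false; // must target a scope
        if (c.min < 0 || c.max < 0) return false;               // cardinalities are non-negative
        return c.min <= c.max;                                  // an inverted range is unsatisfiable
    }

    public static void main(String[] args) {
        System.out.println(isValid(new CardinalityConstraint("node", 0, 2)));
        System.out.println(isValid(new CardinalityConstraint("node", 3, 1)));
    }
}
```

A production validator would also need to walk compound (AND/OR) constraint trees and validate target expressions, which this toy check does not attempt.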
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016208#comment-16016208 ] Sunil G commented on YARN-5705: --- Thanks [~akhilpb]. Screenshots look good to me. I installed ATSv2 based on trunk code and also tested the new UI with timeline support against that. All pages are getting loaded correctly. Few comments: # In Flow Runs and Metrics pages, decrease the font size for all headers and make it consistent with other pages # Ensure that the page shows a correct error message when ATSv2 is not up. If ATSv1 is up, what's the behavior for this UI? # single-metric-table.js uses many HTML tags. Could we make it better? # I think if the app is not found in RM, it's been fetched from RM. Correct? # Remove unnecessary commented code from yarn-flowrun-metric.js # Could you also ensure all jshint errors are handled? > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016185#comment-16016185 ] Wangda Tan commented on YARN-6614: -- [~asuresh], I'm not sure this is the correct approach. IIRC, DistributedSchedulingProtocol was originally introduced so that distributed container allocation wouldn't affect any protocol for AM-RM communication. Adding it to ApplicationMasterProtocol/RegisterAMRequest mixes centralized and distributed requests in one protocol, which is hard for YARN developers to use. Could you explain a little bit more about why you're doing this? > Deprecate DistributedSchedulingProtocol and add required fields directly to > ApplicationMasterProtocol > - > > Key: YARN-6614 > URL: https://issues.apache.org/jira/browse/YARN-6614 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-6614.001.patch, YARN-6614.002.patch > > > The {{DistributedSchedulingProtocol}} was initially designed as a wrapper > protocol over the {{ApplicationMasterProtocol}}. > This JIRA proposes to deprecate the protocol itself and move the extra fields > of the {{RegisterDistributedSchedulingAMResponse}} and > {{DistributedSchedulingAllocateResponse}} to the > {{RegisterApplicationMasterResponse}} and {{AllocateResponse}} respectively. > This will simplify the code quite a bit and make it easier to expose it as a > preprocessor. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5531) UnmanagedAM pool manager for federating application across clusters
[ https://issues.apache.org/jira/browse/YARN-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-5531: --- Hi Karthik, Can you please take another look when you have time? Thanks in advance! Best, Botong > UnmanagedAM pool manager for federating application across clusters > --- > > Key: YARN-5531 > URL: https://issues.apache.org/jira/browse/YARN-5531 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Botong Huang > Attachments: YARN-5531-YARN-2915.v10.patch, > YARN-5531-YARN-2915.v11.patch, YARN-5531-YARN-2915.v1.patch, > YARN-5531-YARN-2915.v2.patch, YARN-5531-YARN-2915.v3.patch, > YARN-5531-YARN-2915.v4.patch, YARN-5531-YARN-2915.v5.patch, > YARN-5531-YARN-2915.v6.patch, YARN-5531-YARN-2915.v7.patch, > YARN-5531-YARN-2915.v8.patch, YARN-5531-YARN-2915.v9.patch > > > One of the main tenets of YARN Federation is to *transparently* scale > applications across multiple clusters. This is achieved by running UAMs on > behalf of the application on other clusters. This JIRA tracks the addition of > an UnmanagedAM pool manager for federating applications across clusters, which > will be used by the FederationInterceptor (YARN-3666), which is part of the > AMRMProxy pipeline introduced in YARN-2884. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment
Yufei Gu created YARN-6625: -- Summary: yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment Key: YARN-6625 URL: https://issues.apache.org/jira/browse/YARN-6625 Project: Hadoop YARN Issue Type: Bug Components: amrmproxy Affects Versions: 3.0.0-alpha2 Reporter: Yufei Gu Assignee: Yufei Gu The tracking URL given at the command line should work secured or not. The tracking URLs are like http://node-2.abc.com:47014 and AM web server supposed to redirect it to a RM address like this http://node-1.abc.com:8088/proxy/application_1494544954891_0002/. AM web server cannot redirect the tracking URL to RM because the connection is rejected when AM is talking to RM admin service to get HA status. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6560) SLS doesn't honor node total resource specified in sls-runner.xml
[ https://issues.apache.org/jira/browse/YARN-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016153#comment-16016153 ] Hudson commented on YARN-6560: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11751 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11751/]) YARN-6560. SLS doesn't honor node total resource specified in (sunilg: rev 40e6a85d25387d4025585c5726b3e4e24c2c1572) * (edit) hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java > SLS doesn't honor node total resource specified in sls-runner.xml > - > > Key: YARN-6560 > URL: https://issues.apache.org/jira/browse/YARN-6560 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6560.1.patch, YARN-6560.2.patch, YARN-6560.3.patch > > > Now SLSRunner extends ToolRunner, so setConf will be called twice: once in > the init() of SLSRunner and once in ToolRunner. The later one will overwrite > the previous one so it won't correctly load sls-runner.xml -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
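The root cause described in YARN-6560, setConf being called once by the tool's own init() and again later by the runner, with the second call replacing the first, can be shown with a small self-contained model. `Conf` and the static hooks below are toy stand-ins for Hadoop's Configuration/Tool wiring, not the real classes.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Hadoop's Configuration: just a property bag.
class Conf {
    final Map<String, String> props = new HashMap<>();
}

public class DoubleSetConfDemo {
    static Conf current;

    // Called both by the tool's own init() and, later, by the runner.
    static void setConf(Conf c) { current = c; }

    // Models SLSRunner.init() pre-loading values from sls-runner.xml.
    static void init() {
        Conf c = new Conf();
        c.props.put("yarn.sls.nm.memory.mb", "10240"); // hypothetical key for illustration
        setConf(c);
    }

    public static void main(String[] args) {
        init();              // tool loads its own configuration first
        setConf(new Conf()); // runner calls setConf again with a fresh Conf
        // The pre-loaded value is now gone; the fix is to merge the runner's
        // configuration into the existing one instead of replacing it.
        System.out.println(current.props.containsKey("yarn.sls.nm.memory.mb"));
    }
}
```

The demo's second setConf call silently discards the first configuration, which is exactly why node total resources from sls-runner.xml were not honored.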
[jira] [Commented] (YARN-6540) Resource Manager is spelled "Resource Manger" in ResourceManagerRestart.md and ResourceManagerHA.md
[ https://issues.apache.org/jira/browse/YARN-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016128#comment-16016128 ] Yufei Gu commented on YARN-6540: +1 > Resource Manager is spelled "Resource Manger" in ResourceManagerRestart.md > and ResourceManagerHA.md > --- > > Key: YARN-6540 > URL: https://issues.apache.org/jira/browse/YARN-6540 > Project: Hadoop YARN > Issue Type: Bug > Components: site >Reporter: Grant Sohn >Assignee: Grant Sohn >Priority: Trivial > Attachments: YARN-6540.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016109#comment-16016109 ] Sunil G commented on YARN-6280: --- I think it's fine. [~rohithsharma], any further comments? > Add a query parameter in ResourceManager Cluster Applications REST API to > control whether or not returns ResourceRequest > > > Key: YARN-6280 > URL: https://issues.apache.org/jira/browse/YARN-6280 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager, restapi >Affects Versions: 2.7.3 >Reporter: Lantao Jin >Assignee: Lantao Jin > Attachments: YARN-6280.001.patch, YARN-6280.002.patch, > YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, > YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, > YARN-6280.009.patch > > > Beginning with v2.7, the ResourceManager Cluster Applications REST API returns > the ResourceRequest list. It's a very large construction in AppInfo. > As a test, we use the below URI to query only 2 results: > http://<address:port>/ws/v1/cluster/apps?states=running,accepted=2 > The results are very different: > ||Hadoop version|Total Character|Total Word|Total Lines|Size|| > |2.4.1|1192| 42| 42| 1.2 KB| > |2.7.1|1222179| 48740| 48735| 1.21 MB| > Most RESTful API requesters don't know about this after upgrading, and their > old queries may cause the ResourceManager more GC overhead and slowness. Even if > they know this, they have no way to reduce the impact on the ResourceManager > except slowing down their query frequency. > The patch adds a query parameter "showResourceRequests" to help requesters > who don't need this information reduce the overhead. In consideration of > interface compatibility, the default value is true if they don't set the > parameter, so the behaviour is the same as now. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
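As a usage sketch of the proposed opt-out: the parameter name `showResourceRequests` is the one used in this discussion, and the parameter that finally ships may differ, but a client that does not need ResourceRequest data would build its query roughly like this (the helper class below is illustrative, not part of YARN).

```java
// Illustrative helper for building a Cluster Applications REST query that
// opts out of the heavy ResourceRequest payload via the proposed parameter.
public class AppsQueryUrl {
    static String buildAppsUrl(String host, int port, String states,
                               boolean showResourceRequests) {
        // states filters the returned apps; showResourceRequests=false asks
        // the RM to omit the per-app ResourceRequest list from AppInfo.
        return "http://" + host + ":" + port + "/ws/v1/cluster/apps?states=" + states
            + "&showResourceRequests=" + showResourceRequests;
    }

    public static void main(String[] args) {
        System.out.println(
            buildAppsUrl("rm.example.com", 8088, "running,accepted", false));
    }
}
```

Since the default is true, existing clients that never set the parameter keep the current behaviour.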
[jira] [Updated] (YARN-6471) Support to add min/max resource configuration for a queue
[ https://issues.apache.org/jira/browse/YARN-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-6471: -- Attachment: YARN-6471.006.patch Optimizing APIs in AbstractCSQueue further to make them simpler. A few additional changes: # Consider either capacity or absolute resource; covering all validation to handle this case. # Updated the REST API to add absolute resources in queue info. [~leftnoteasy], please help to take a look. ToDo: # more test cases > Support to add min/max resource configuration for a queue > - > > Key: YARN-6471 > URL: https://issues.apache.org/jira/browse/YARN-6471 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-6471.001.patch, YARN-6471.002.patch, > YARN-6471.003.patch, YARN-6471.004.patch, YARN-6471.005.patch, > YARN-6471.006.patch > > > This JIRA will track the new configurations which are needed to configure min > resource and max resource of various resource types in a queue. > For eg: > {noformat} > yarn.scheduler.capacity.root.default.memory.min-resource > yarn.scheduler.capacity.root.default.memory.max-resource > yarn.scheduler.capacity.root.default.vcores.min-resource > yarn.scheduler.capacity.root.default.vcores.max-resource > {noformat} > Uploading a patch soon -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6614) Deprecate DistributedSchedulingProtocol and add required fields directly to ApplicationMasterProtocol
[ https://issues.apache.org/jira/browse/YARN-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6614: -- Attachment: YARN-6614.002.patch Uploading a patch to fix the test case, findbugs issues and some of the checkstyle warnings. > Deprecate DistributedSchedulingProtocol and add required fields directly to > ApplicationMasterProtocol > - > > Key: YARN-6614 > URL: https://issues.apache.org/jira/browse/YARN-6614 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-6614.001.patch, YARN-6614.002.patch > > > The {{DistributedSchedulingProtocol}} was initially designed as a wrapper > protocol over the {{ApplicationMasterProtocol}}. > This JIRA proposes to deprecate the protocol itself and move the extra fields > of the {{RegisterDistributedSchedulingAMResponse}} and > {{DistributedSchedulingAllocateResponse}} to the > {{RegisterApplicationMasterResponse}} and {{AllocateResponse}} respectively. > This will simplify the code quite a bit and make it easier to expose it as a > preprocessor. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6613) Update json validation for new native services providers
[ https://issues.apache.org/jira/browse/YARN-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16016003#comment-16016003 ] Hadoop QA commented on YARN-6613: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 25 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 34s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} yarn-native-services passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 9 new + 528 unchanged - 4 fixed = 537 total (was 532) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 56s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 14s{color} | {color:red} hadoop-yarn-services-api in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | slider.core.conf.TestConfTreeLoadExamples | | | hadoop.yarn.services.api.impl.TestApplicationApiService | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6613 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868547/YARN-6613-yarn-native-services.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 969517f7fddc 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / 08c756e | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs |
[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015650#comment-16015650 ] Hadoop QA commented on YARN-6615: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 52s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 49s{color} | {color:red} root in branch-2.6.2 failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} branch-2.6.2 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.2 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} branch-2.6.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} branch-2.6.2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} branch-2.6.2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 23s{color} | {color:green} branch-2.6.2 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s{color} | {color:red} hadoop-yarn-server-web-proxy in branch-2.6.2 failed with JDK v1.8.0_131. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} branch-2.6.2 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 1429 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 30s{color} | {color:red} The patch 73 line(s) with tabs. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK v1.8.0_131. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed with JDK v1.7.0_131. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s{color} | {color:red} The patch generated 98 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy | | | HTTP parameter directly written to HTTP header output in org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(ServletRequest, ServletResponse, FilterChain) At AmIpFilter.java:HTTP header output in
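For readers unfamiliar with this FindBugs pattern: writing a request-derived value straight into the Location header is the classic HTTP response-splitting vector. A minimal, hypothetical mitigation (not the actual AmIpFilter or ProxyUtils code) is to round-trip the redirect target through java.net.URI before emitting it:

```java
import java.net.URI;

public class SafeRedirect {
    // Hypothetical sketch: validate and normalize a redirect target before it
    // is written to the Location header. URI.create throws
    // IllegalArgumentException on illegal characters such as CR/LF, which
    // blocks header-splitting input instead of passing it through verbatim.
    public static String sanitize(String target) {
        return URI.create(target).toASCIIString();
    }

    public static void main(String[] args) {
        System.out.println(sanitize("http://rm:8088/proxy/app?id=0"));
    }
}
```

In the real fix, this kind of normalization is part of what the later release lines do in ProxyUtils, which is why Wilfred notes below that fully resolving the warning on branch-2.6 would require backporting most of that code.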
[jira] [Updated] (YARN-6615) AmIpFilter drops query parameters on redirect
[ https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wilfred Spiegelenburg updated YARN-6615: Attachment: YARN-6615-branch-2.6.2.patch I attached the wrong version of the patch earlier; this one has the fixed JUnit test and passes the redirect URL correctly through the encoding. The FindBugs warning would need more work to fix and would require most of what we do in ProxyUtils in later releases. Let me know whether that needs fixing for branch-2.6 or not. > AmIpFilter drops query parameters on redirect > - > > Key: YARN-6615 > URL: https://issues.apache.org/jira/browse/YARN-6615 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg > Attachments: YARN-6615.1.patch, YARN-6615-branch-2.6.1.patch, > YARN-6615-branch-2.6.2.patch, YARN-6615-branch-2.8.1.patch > > > When an AM web request is redirected to the RM, the query parameters are > dropped from the web request. > This happens for Spark as described in SPARK-20772. > The repro steps are: > - Start up the spark-shell in yarn mode and run a job > - Try to access the job details through http://:4040/jobs/job?id=0 > - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter) > This works fine in local or standalone mode, but does not work on Yarn where > the query parameter is dropped. The request succeeds if the UI filter > org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from > the config, which shows that the problem is in the filter -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
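For context on the bug itself: the filter rebuilds the redirect target from the request path alone, so anything after the "?" is lost. The essence of the fix is to carry the query string over when composing the target; sketched below with illustrative names (this is not the actual AmIpFilter code, which reads these values from HttpServletRequest):

```java
public class RedirectTarget {
    // Rebuild the proxy redirect target from the incoming request, keeping
    // the query string instead of dropping it. Parameter names are
    // illustrative, not the real AmIpFilter fields.
    public static String build(String proxyBase, String requestUri, String queryString) {
        StringBuilder target = new StringBuilder(proxyBase).append(requestUri);
        if (queryString != null && !queryString.isEmpty()) {
            target.append('?').append(queryString); // preserves e.g. "id=0"
        }
        return target.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("http://rm:8088/proxy/application_1", "/jobs/job", "id=0"));
    }
}
```

With this shape, the Spark repro above would reach the AM as /jobs/job?id=0 rather than /jobs/job, so the "missing id parameter" failure no longer occurs.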
[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor
[ https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015619#comment-16015619 ] Daniel Templeton commented on YARN-6623: It's not obvious to me why having two ways to disable privileged containers is necessary. The container-executor.cfg also lives on the NM, so what we're saying is that we want two ways to disable privileged containers on the NM, both controlled by the administrator. Is the point to keep someone from being able to use the container-executor binary as a security exploit outside of the NM? If someone manages to gain the ability to launch the container-executor directly, privileged containers are the least of our worries. > Add support to turn off launching privileged containers in the > container-executor > - > > Key: YARN-6623 > URL: https://issues.apache.org/jira/browse/YARN-6623 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > > Currently, launching privileged containers is controlled by the NM. We should > add a flag to the container-executor.cfg allowing admins to disable launching > privileged containers at the container-executor level.
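To make the proposal concrete, the flag under discussion would sit in container-executor.cfg next to the existing executor settings. A sketch of what it might look like (the [docker] section layout and key name here are assumptions for illustration; the final name was settled in the patch):

```ini
# container-executor.cfg (illustrative sketch, not the committed format)
yarn.nodemanager.linux-container-executor.group=hadoop

[docker]
  # Refuse privileged containers at the container-executor level, even if
  # the NM-side configuration would otherwise permit them.
  docker.privileged-containers.enabled=false
```

The design question Daniel raises is exactly about this redundancy: since both this file and the NM property are administrator-controlled and live on the same host, the extra knob only helps if the setuid container-executor binary can be invoked outside the NM.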
[jira] [Commented] (YARN-6601) Allow service to be started as System Services during serviceapi start up
[ https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015612#comment-16015612 ] Hadoop QA commented on YARN-6601: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 2s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 57s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} yarn-native-services passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering 
for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 13 new + 206 unchanged - 1 fixed = 219 total (was 207) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 34s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6601 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868719/YARN-6601-yarn-native-services.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 31b06daad55a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / 08c756e | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15963/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit |
[jira] [Commented] (YARN-6622) Document Docker work as experimental
[ https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015611#comment-16015611 ] Daniel Templeton commented on YARN-6622: Is this intended as a temporary notice or for the long haul? If it's a permanent change, I would rather say that it "may have" security implications. Either way, it would be nice to enumerate the known security risks in a security section (and reference that section from the notice). There are some we could mention, like use of privileged containers. When YARN-5534 is in, we could add the volume mounts to the list. Etc. > Document Docker work as experimental > > > Key: YARN-6622 > URL: https://issues.apache.org/jira/browse/YARN-6622 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6622.001.patch > > > We should update the Docker support documentation calling out the Docker work > as experimental.
[jira] [Commented] (YARN-6622) Document Docker work as experimental
[ https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015582#comment-16015582 ] Varun Vasudev commented on YARN-6622: - [~chris.douglas], [~templedf], [~sidharta-s], [~shaneku...@gmail.com] - can you take a look and let me know if the modified documentation looks ok? > Document Docker work as experimental > > > Key: YARN-6622 > URL: https://issues.apache.org/jira/browse/YARN-6622 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6622.001.patch > > > We should update the Docker support documentation calling out the Docker work > as experimental.
[jira] [Updated] (YARN-6601) Allow service to be started as System Services during serviceapi start up
[ https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated YARN-6601: -- Attachment: YARN-6601-yarn-native-services.001.patch > Allow service to be started as System Services during serviceapi start up > - > > Key: YARN-6601 > URL: https://issues.apache.org/jira/browse/YARN-6601 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S > Attachments: SystemServices.pdf, > YARN-6601-yarn-native-services.001.patch > > > This is extended from YARN-1593, focusing only on system services. This > particular JIRA focuses on starting the system services during > native-service-api start up.
[jira] [Updated] (YARN-6601) Allow service to be started as System Services during serviceapi start up
[ https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated YARN-6601: -- Attachment: (was: system_services_001.patch) > Allow service to be started as System Services during serviceapi start up > - > > Key: YARN-6601 > URL: https://issues.apache.org/jira/browse/YARN-6601 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S > Attachments: SystemServices.pdf > > > This is extended from YARN-1593, focusing only on system services. This > particular JIRA focuses on starting the system services during > native-service-api start up.
[jira] [Comment Edited] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015477#comment-16015477 ] Akhil PB edited comment on YARN-5705 at 5/18/17 9:48 AM: - Adding new patch {{YARN-5705.016.patch}} with minor bug fixes on top of YARN-5705.015 patch. * Bug fix for page breaking when clicking on barchart in yarn-flowrun/info page was (Author: akhilpb): Minor bug fix on top of YARN-5705.015 patch. * Bug fix for page breaking when clicking on barchart in yarn-flowrun/info page > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Commented] (YARN-6622) Document Docker work as experimental
[ https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015504#comment-16015504 ] Hadoop QA commented on YARN-6622: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6622 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868711/YARN-6622.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux d483969cae16 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b46cd31 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15961/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document Docker work as experimental > > > Key: YARN-6622 > URL: https://issues.apache.org/jira/browse/YARN-6622 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6622.001.patch > > > We should update the Docker support documentation calling out the Docker work > as experimental.
[jira] [Commented] (YARN-6601) Allow service to be started as System Services during serviceapi start up
[ https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015497#comment-16015497 ] Hadoop QA commented on YARN-6601: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} YARN-6601 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6601 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868714/system_services_001.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15962/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Allow service to be started as System Services during serviceapi start up > - > > Key: YARN-6601 > URL: https://issues.apache.org/jira/browse/YARN-6601 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S > Attachments: system_services_001.patch, SystemServices.pdf > > > This is extended from YARN-1593, focusing only on system services. This > particular JIRA focuses on starting the system services during > native-service-api start up.
[jira] [Updated] (YARN-6601) Allow service to be started as System Services during serviceapi start up
[ https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated YARN-6601: -- Attachment: system_services_001.patch > Allow service to be started as System Services during serviceapi start up > - > > Key: YARN-6601 > URL: https://issues.apache.org/jira/browse/YARN-6601 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S > Attachments: system_services_001.patch, SystemServices.pdf > > > This is extended from YARN-1593, focusing only on system services. This > particular JIRA focuses on starting the system services during > native-service-api start up.
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015483#comment-16015483 ] Hadoop QA commented on YARN-5705: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-5705 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868712/YARN-5705.016.patch | | Optional Tests | asflicense | | uname | Linux 77d350040788 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b46cd31 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15960/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015484#comment-16015484 ] Akhil PB commented on YARN-5705: Hi [~haibochen], I have uploaded the latest UI patch for ATSv2 support, rebased on top of the latest trunk UI code. Please use the {{YARN-5705.016.patch}} file and apply it on top of the latest trunk. cc/ [~sunilg] > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-5705: --- Attachment: YARN-5705.016.patch Minor bug fix on top of YARN-5705.015 patch. * Bug fix for page breaking when clicking on barchart in yarn-flowrun/info page > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705.016.patch, > YARN-5705-YARN-3368.001.patch, YARN-5705-YARN-3368.002.patch, > YARN-5705-YARN-3368.003.patch, YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Updated] (YARN-6622) Document Docker work as experimental
[ https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-6622: Attachment: YARN-6622.001.patch > Document Docker work as experimental > > > Key: YARN-6622 > URL: https://issues.apache.org/jira/browse/YARN-6622 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6622.001.patch > > > We should update the Docker support documentation calling out the Docker work > as experimental.
[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-5705: --- Attachment: Screenshots.zip Uploading screenshots of ATSv2 UI pages. > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: Screenshots.zip, YARN-5705.001.patch, > YARN-5705.002.patch, YARN-5705.003.patch, YARN-5705.004.patch, > YARN-5705.005.patch, YARN-5705.006.patch, YARN-5705.007.patch, > YARN-5705.008.patch, YARN-5705.009.patch, YARN-5705.010.patch, > YARN-5705.011.patch, YARN-5705.012.patch, YARN-5705.013.patch, > YARN-5705.014.patch, YARN-5705.015.patch, YARN-5705-YARN-3368.001.patch, > YARN-5705-YARN-3368.002.patch, YARN-5705-YARN-3368.003.patch, > YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Commented] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015465#comment-16015465 ] Hadoop QA commented on YARN-5705: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-5705 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868708/YARN-5705.015.patch | | Optional Tests | asflicense | | uname | Linux 193bd67b7c69 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b46cd31 | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15958/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: YARN-5705.001.patch, YARN-5705.002.patch, > YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, > YARN-5705.006.patch, YARN-5705.007.patch, YARN-5705.008.patch, > YARN-5705.009.patch, YARN-5705.010.patch, YARN-5705.011.patch, > YARN-5705.012.patch, YARN-5705.013.patch, YARN-5705.014.patch, > YARN-5705.015.patch, YARN-5705-YARN-3368.001.patch, > YARN-5705-YARN-3368.002.patch, YARN-5705-YARN-3368.003.patch, > YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI
[ https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-5705: --- Attachment: YARN-5705.015.patch Attaching latest UI patch for ATSv2 support rebased on latest trunk code. > [YARN-3368] Add support for Timeline V2 to new web UI > - > > Key: YARN-5705 > URL: https://issues.apache.org/jira/browse/YARN-5705 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Akhil PB > Labels: oct16-hard > Attachments: YARN-5705.001.patch, YARN-5705.002.patch, > YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, > YARN-5705.006.patch, YARN-5705.007.patch, YARN-5705.008.patch, > YARN-5705.009.patch, YARN-5705.010.patch, YARN-5705.011.patch, > YARN-5705.012.patch, YARN-5705.013.patch, YARN-5705.014.patch, > YARN-5705.015.patch, YARN-5705-YARN-3368.001.patch, > YARN-5705-YARN-3368.002.patch, YARN-5705-YARN-3368.003.patch, > YARN-5705-YARN-3368.004.patch > > > Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097
[jira] [Created] (YARN-6624) The implementation of getLocalizationStatus
Bingxue Qiu created YARN-6624: - Summary: The implementation of getLocalizationStatus Key: YARN-6624 URL: https://issues.apache.org/jira/browse/YARN-6624 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Affects Versions: 2.9.0 Reporter: Bingxue Qiu Fix For: 2.9.0 We have a use case where the client needs to know the state of localized resources. Following the design of [Continuous-resource-localization | https://issues.apache.org/jira/secure/attachment/12825041/Continuous-resource-localization.pdf], we choose to include it as part of ContainerStatus. Proposal: when calling getContainerStatus, we can derive the state from pendingResources and resourcesFailedToBeLocalized in ResourceSet.
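The proposal above can be sketched roughly as follows. This is an illustrative, hypothetical sketch, not the actual YARN API: the field names pendingResources and resourcesFailedToBeLocalized come from the issue description, while the enum, class shape, and method name are assumptions.

```java
// Hypothetical sketch of deriving a coarse localization state from the
// pending and failed resource lists the proposal says live in ResourceSet.
// Class, enum, and method names are illustrative, not the real YARN types.
public class LocalizationStatusSketch {
    enum LocalizationState { PENDING, COMPLETED, FAILED }

    static class ResourceSet {
        final java.util.List<String> pendingResources =
            new java.util.ArrayList<>();
        final java.util.List<String> resourcesFailedToBeLocalized =
            new java.util.ArrayList<>();

        LocalizationState getLocalizationState() {
            // Any failed resource makes the whole set FAILED; otherwise any
            // pending resource keeps it PENDING; empty lists mean done.
            if (!resourcesFailedToBeLocalized.isEmpty()) {
                return LocalizationState.FAILED;
            }
            if (!pendingResources.isEmpty()) {
                return LocalizationState.PENDING;
            }
            return LocalizationState.COMPLETED;
        }
    }

    public static void main(String[] args) {
        ResourceSet rs = new ResourceSet();
        System.out.println(rs.getLocalizationState()); // prints COMPLETED
        rs.pendingResources.add("hdfs://nn/app/archive.tgz");
        System.out.println(rs.getLocalizationState()); // prints PENDING
        rs.resourcesFailedToBeLocalized.add("hdfs://nn/app/missing.jar");
        System.out.println(rs.getLocalizationState()); // prints FAILED
    }
}
```

In this reading, a caller of getContainerStatus would see the container as still localizing until both lists drain, which matches the continuous-localization design where resources can be added after container start.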
[jira] [Commented] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath
[ https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015317#comment-16015317 ] Ayappan commented on YARN-6141: --- Hi guys, any update on this? > ppc64le on Linux doesn't trigger __linux get_executable codepath > > > Key: YARN-6141 > URL: https://issues.apache.org/jira/browse/YARN-6141 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha3 > Environment: $ uname -a > Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 > 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux >Reporter: Sonia Garudi >Assignee: Ayappan > Labels: ppc64le > Attachments: YARN-6141.patch > > > On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' > project with the below error: > Cannot safely determine executable path with a relative HADOOP_CONF_DIR on > this operating system. > [WARNING] #error Cannot safely determine executable path with a relative > HADOOP_CONF_DIR on this operating system. > [WARNING] ^ > [WARNING] make[2]: *** > [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o] > Error 1 > [WARNING] make[2]: *** Waiting for unfinished jobs > [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 > [WARNING] make: *** [all] Error 2 > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > Cmake version used : > $ /usr/bin/cmake --version > cmake version 2.8.12.2
[jira] [Assigned] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath
[ https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayappan reassigned YARN-6141: - Assignee: Ayappan > ppc64le on Linux doesn't trigger __linux get_executable codepath > > > Key: YARN-6141 > URL: https://issues.apache.org/jira/browse/YARN-6141 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 3.0.0-alpha3 > Environment: $ uname -a > Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 > 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux >Reporter: Sonia Garudi >Assignee: Ayappan > Labels: ppc64le > Attachments: YARN-6141.patch > > > On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' > project with the below error: > Cannot safely determine executable path with a relative HADOOP_CONF_DIR on > this operating system. > [WARNING] #error Cannot safely determine executable path with a relative > HADOOP_CONF_DIR on this operating system. > [WARNING] ^ > [WARNING] make[2]: *** > [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o] > Error 1 > [WARNING] make[2]: *** Waiting for unfinished jobs > [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2 > [WARNING] make: *** [all] Error 2 > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > Cmake version used : > $ /usr/bin/cmake --version > cmake version 2.8.12.2
[jira] [Updated] (YARN-6622) Document Docker work as experimental
[ https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-6622: Component/s: documentation > Document Docker work as experimental > > > Key: YARN-6622 > URL: https://issues.apache.org/jira/browse/YARN-6622 > Project: Hadoop YARN > Issue Type: Task > Components: documentation >Reporter: Varun Vasudev >Assignee: Varun Vasudev > > We should update the Docker support documentation calling out the Docker work > as experimental.
[jira] [Created] (YARN-6623) Add support to turn off launching privileged containers in the container-executor
Varun Vasudev created YARN-6623: --- Summary: Add support to turn off launching privileged containers in the container-executor Key: YARN-6623 URL: https://issues.apache.org/jira/browse/YARN-6623 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Reporter: Varun Vasudev Assignee: Varun Vasudev Currently, launching privileged containers is controlled by the NM. We should add a flag to the container-executor.cfg allowing admins to disable launching privileged containers at the container-executor level.
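A sketch of what such a container-executor.cfg switch might look like. The flag name below is hypothetical, chosen only for illustration (the JIRA does not fix a name); the other keys are standard entries from a stock container-executor.cfg:

```
# Standard container-executor.cfg entries
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred
min.user.id=1000

# Hypothetical flag (name illustrative, not final): refuse to launch
# privileged Docker containers at the container-executor level,
# regardless of what the NodeManager requests
docker.privileged-containers.enabled=false
```

Enforcing this in the setuid container-executor binary, rather than only in the NM, means a compromised or misconfigured NM cannot escalate by requesting privileged containers.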
[jira] [Created] (YARN-6622) Document Docker work as experimental
Varun Vasudev created YARN-6622: --- Summary: Document Docker work as experimental Key: YARN-6622 URL: https://issues.apache.org/jira/browse/YARN-6622 Project: Hadoop YARN Issue Type: Task Reporter: Varun Vasudev Assignee: Varun Vasudev We should update the Docker support documentation calling out the Docker work as experimental.