[jira] [Updated] (YARN-7672) hadoop-sls can not simulate huge scale of YARN
[ https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhangshilong updated YARN-7672:
---
Attachment: YARN-7672.patch

> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: zhangshilong
> Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10,000 nodes, and we need to run scheduler stress tests.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load climbs above 100, which I believe will distort the performance evaluation of the scheduler.
> So I propose separating the scheduler from the simulator:
> start a real RM, then have SLS register the simulated nodes with the RM and submit apps to it over the RM RPC protocol.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
[ https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296390#comment-16296390 ]
Jeff Zhang commented on YARN-7673:
--
\cc [~djp]

> ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Jeff Zhang
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but I hit the following exception when starting the mini cluster. I checked hadoop-client-minicluster, and it indeed does not contain this class. Was this class missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
> at org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}
[jira] [Created] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
Jeff Zhang created YARN-7673:
Summary: ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster
Key: YARN-7673
URL: https://issues.apache.org/jira/browse/YARN-7673
Project: Hadoop YARN
Issue Type: Bug
Reporter: Jeff Zhang

I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but I hit the following exception when starting the mini cluster. I checked hadoop-client-minicluster, and it indeed does not contain this class. Was this class missed when packaging the published jar?
{code}
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
at org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
{code}
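The NoClassDefFoundError above only surfaces deep inside MiniYARNCluster.serviceInit. When a shaded artifact may be missing classes, a downstream project can fail fast with a clearer message by probing the classpath first. The sketch below is a hypothetical pre-flight check, not part of any Hadoop API; only the probed class name is taken from the stack trace above.

```java
// Hypothetical pre-flight check (not a Hadoop API): ask the class loader
// whether a class is resolvable before starting the mini cluster, so a
// missing shaded dependency produces an actionable error up front instead
// of a NoClassDefFoundError deep inside serviceInit.
public class ClasspathProbe {
  public static boolean isPresent(String className) {
    try {
      // Load without initializing: we only care whether the bytes exist.
      Class.forName(className, false, ClasspathProbe.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException | NoClassDefFoundError e) {
      return false;
    }
  }

  public static void main(String[] args) {
    String cls =
        "org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol";
    if (!isPresent(cls)) {
      System.err.println("Missing from classpath: " + cls
          + " (is hadoop-client-minicluster complete?)");
    }
  }
}
```

A check like this turns a class-loading failure mid-initialization into a single readable diagnostic at startup.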
[jira] [Updated] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rohith Sharma K S updated YARN-7032:
Summary: [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled. (was: [ATSv2] NPE while starting hbase co-processor)

> [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> The HBase co-processor randomly fails to start with an NPE; restarting the RegionServer, however, succeeds.
> {noformat}
> 2017-08-17 05:53:13,535 ERROR [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}
[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296344#comment-16296344 ]
Rohith Sharma K S commented on YARN-7032:
-
I found that enabling HBase authorization, i.e. configuring _hbase.coprocessor.region.classes_ with the value _org.apache.hadoop.hbase.security.access.AccessController_, makes the RegionServer go down along with the FlowRunCoprocessor. This is because, when authorization is enabled, the AccessController adds additional attributes to the Put#attributes map. So the FlowRunCoprocessor should handle null values before adding them into tags.

> [ATSv2] NPE while starting hbase co-processor
> -
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> The HBase co-processor randomly fails to start with an NPE; restarting the RegionServer, however, succeeds.
> {noformat}
> 2017-08-17 05:53:13,535 ERROR [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}
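The fix direction described in the comment can be sketched in plain Java. This is an illustration only, not the actual FlowRunCoprocessor or HBase Tag API; the `toTagValues` method and the map layout are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified model of the suggested null guard in FlowRunCoprocessor#prePut:
// when HBase authorization is enabled, the AccessController adds extra
// entries to the Put's attribute map, and their values can be null. Passing
// a null into tag construction is what triggers the NPE in Tag.fromList,
// so null values are skipped before the tag list is built. All names here
// are illustrative only.
public class TagFilterSketch {
  public static List<byte[]> toTagValues(Map<String, byte[]> attributes) {
    List<byte[]> tagValues = new ArrayList<>();
    for (Map.Entry<String, byte[]> e : attributes.entrySet()) {
      if (e.getValue() != null) { // guard against AccessController entries
        tagValues.add(e.getValue());
      }
    }
    return tagValues;
  }
}
```

The design point is simply that the coprocessor cannot assume it owns every attribute on the Put once other coprocessors are in the chain.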
[jira] [Assigned] (YARN-7032) [ATSv2] NPE while starting hbase co-processor
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rohith Sharma K S reassigned YARN-7032:
---
Assignee: Rohith Sharma K S

> [ATSv2] NPE while starting hbase co-processor
> -
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Rohith Sharma K S
> Assignee: Rohith Sharma K S
> Priority: Critical
> Attachments: hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> The HBase co-processor randomly fails to start with an NPE; restarting the RegionServer, however, succeeds.
> {noformat}
> 2017-08-17 05:53:13,535 ERROR [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] coprocessor.CoprocessorHost: The coprocessor org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}
[jira] [Updated] (YARN-7672) hadoop-sls can not simulate huge scale of YARN
[ https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhangshilong updated YARN-7672:
---
Description:
Our YARN cluster has scaled to nearly 10,000 nodes, and we need to run scheduler stress tests.
Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load climbs above 100, which I believe will distort the performance evaluation of the scheduler.
So I propose separating the scheduler from the simulator:
start a real RM, then have SLS register the simulated nodes with the RM and submit apps to it over the RM RPC protocol.

was:
Our YARN cluster scale to nearly 10 thousands nodes. We need to do scheduler pressure test. we start 2000+ threads to simulate NM and AM. So cpu.load very high to 100+. I thought that will affect performance evaluation of scheduler. So I thought to separate the scheduler from the simulator. I start a real RM. Then SLS will register nodes to RM,And submit apps to RM using RM RPC.

> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: zhangshilong
> Assignee: zhangshilong
>
> Our YARN cluster has scaled to nearly 10,000 nodes, and we need to run scheduler stress tests.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load climbs above 100, which I believe will distort the performance evaluation of the scheduler.
> So I propose separating the scheduler from the simulator:
> start a real RM, then have SLS register the simulated nodes with the RM and submit apps to it over the RM RPC protocol.
[jira] [Created] (YARN-7672) hadoop-sls can not simulate huge scale of YARN
zhangshilong created YARN-7672:
--
Summary: hadoop-sls can not simulate huge scale of YARN
Key: YARN-7672
URL: https://issues.apache.org/jira/browse/YARN-7672
Project: Hadoop YARN
Issue Type: Improvement
Reporter: zhangshilong
Assignee: zhangshilong

Our YARN cluster has scaled to nearly 10,000 nodes, and we need to run scheduler stress tests. We start 2000+ threads to simulate NMs and AMs, so the CPU load climbs above 100, which I believe will distort the performance evaluation of the scheduler. So I propose separating the scheduler from the simulator: start a real RM, then have SLS register the simulated nodes with the RM and submit apps to it over the RM RPC protocol.
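One runnable thread per simulated NM/AM is what drives the load average past 100 at this scale; the same heartbeat traffic can instead be multiplexed onto a small shared pool. The sketch below is a self-contained illustration of that idea, not SLS code, and every name in it is made up.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustration (not SLS code): instead of one dedicated thread per
// simulated node, each node's heartbeat becomes a short task submitted to
// a small shared pool, so 2000+ simulated nodes no longer need 2000+
// runnable threads competing for CPU on the simulator host.
public class HeartbeatSketch {
  public static int simulate(int nodes, int beatsPerNode) {
    AtomicInteger heartbeats = new AtomicInteger();
    ExecutorService pool = Executors.newFixedThreadPool(8); // small, fixed
    CountDownLatch done = new CountDownLatch(nodes * beatsPerNode);
    for (int n = 0; n < nodes; n++) {
      for (int b = 0; b < beatsPerNode; b++) {
        pool.submit(() -> {
          heartbeats.incrementAndGet(); // stand-in for one NM heartbeat RPC
          done.countDown();
        });
      }
    }
    try {
      done.await(30, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    pool.shutdown();
    return heartbeats.get();
  }
}
```

With a fixed pool, the thread count (and thus the load average contribution) stays constant no matter how many nodes are simulated; only the heartbeat latency distribution changes.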
[jira] [Commented] (YARN-7620) Allow partition filters on Queues page
[ https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296312#comment-16296312 ] genericqa commented on YARN-7620: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 6s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 26m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 34s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7620 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902778/YARN-7620.004.patch |
| Optional Tests | asflicense shadedclient |
| uname | Linux 9ba532bffd0d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 25a36b7 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18978/artifact/out/whitespace-eol.txt |
| Max. process+thread count | 313 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18978/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.

> Allow partition filters on Queues page
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: yarn-ui-v2
> Reporter: Vasudevan Skm
> Assignee: Vasudevan Skm
> Attachments: YARN-7620.001.patch, YARN-7620.002.patch, YARN-7620.003.patch, YARN-7620.004.patch
>
>
> Allow users to filter their queues based on node labels
[jira] [Updated] (YARN-7620) Allow partition filters on Queues page
[ https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Vasudevan Skm updated YARN-7620:
Attachment: YARN-7620.004.patch

> Allow partition filters on Queues page
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: yarn-ui-v2
> Reporter: Vasudevan Skm
> Assignee: Vasudevan Skm
> Attachments: YARN-7620.001.patch, YARN-7620.002.patch, YARN-7620.003.patch, YARN-7620.004.patch
>
>
> Allow users to filter their queues based on node labels
[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296241#comment-16296241 ]
Wilfred Spiegelenburg commented on YARN-7622:
-
Looking good now, thank you for the update. Could you please have a look at the newly introduced checkstyle issues and clean them up? After that we should be good to go.

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: fairscheduler, resourcemanager
> Reporter: Greg Phillips
> Assignee: Greg Phillips
> Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, YARN-7622.003.patch, YARN-7622.004.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local filesystem on the RM node(s). Allowing HDFS to store the allocation file will provide improved redundancy, more options for scheduler updates, and RM failover consistency in HA.
[jira] [Commented] (YARN-7661) NodeManager metrics return wrong value after update node resource
[ https://issues.apache.org/jira/browse/YARN-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296229#comment-16296229 ]
Yang Wang commented on YARN-7661:
-
[~jlowe], thanks for your review and commit.

> NodeManager metrics return wrong value after update node resource
> -
>
> Key: YARN-7661
> URL: https://issues.apache.org/jira/browse/YARN-7661
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yang Wang
> Assignee: Yang Wang
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6
>
> Attachments: YARN-7661.001.patch, YARN-7661.002.patch
>
>
> {code:title=NodeManagerMetrics.java}
> public void addResource(Resource res) {
> availableMB = availableMB + res.getMemorySize();
> availableGB.incr((int)Math.floor(availableMB/1024d));
> availableVCores.incr(res.getVirtualCores());
> }
> {code}
> When the node resource is updated through the RM-NM heartbeat, the NM metrics report wrong values.
> The root cause is that the new resource has already been added to availableMB, so availableGB must not be incremented by the floor of the running total again.
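The double counting described above can be reproduced with a plain-Java model of the two counters. The class and method names below are illustrative, not the real NodeManagerMetrics API: incrementing the GB gauge by the floor of the running MB total re-adds everything already counted, whereas recomputing it from the total keeps the two in sync.

```java
// Plain-Java model of the YARN-7661 bug (names are illustrative, not the
// real NodeManagerMetrics API). addResourceBuggy mirrors the shape of the
// original addResource(): it accumulates the delta into availableMB but
// then increments availableGB by floor(availableMB/1024), i.e. by the
// whole running total. addResourceFixed recomputes the gauge from the
// total instead, one way to keep the two counters consistent.
public class MetricsSketch {
  long availableMB;
  long availableGB;

  void addResourceBuggy(long deltaMB) {
    availableMB += deltaMB;
    availableGB += availableMB / 1024; // re-adds previously counted GB
  }

  void addResourceFixed(long deltaMB) {
    availableMB += deltaMB;
    availableGB = availableMB / 1024; // derived from the running total
  }
}
```

After two 2 GB updates, the buggy variant reports 6 GB (2 on the first call, then 4 more on the second) while the corrected variant reports the true 4 GB.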
[jira] [Commented] (YARN-6266) Extend the resource class to support ports management
[ https://issues.apache.org/jira/browse/YARN-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296222#comment-16296222 ]
genericqa commented on YARN-6266:
-
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} YARN-6266 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\ \\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867498/YARN-6266.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18976/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.

> Extend the resource class to support ports management
> -
>
> Key: YARN-6266
> URL: https://issues.apache.org/jira/browse/YARN-6266
> Project: Hadoop YARN
> Issue Type: New Feature
> Reporter: jialei weng
> Attachments: YARN-6266.001.patch
>
>
> Just like vcores and memory, ports are an important resource for jobs to allocate. We should add port management logic to YARN. It would allow two jobs with the same port requirement to be allocated to different machines.
[jira] [Commented] (YARN-7079) to support nodemanager ports management
[ https://issues.apache.org/jira/browse/YARN-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296223#comment-16296223 ]
genericqa commented on YARN-7079:
-
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} YARN-7079 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\ \\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7079 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883317/YARN_7079.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18977/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.

> to support nodemanager ports management
> -
>
> Key: YARN-7079
> URL: https://issues.apache.org/jira/browse/YARN-7079
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: 田娟娟
> Attachments: YARN_7079.001.patch
>
>
> Just like vcores and memory, ports are important resource information for job allocation, so we add port management logic to YARN. It can satisfy jobs' port requests and never allocate two jobs with the same port requirement to the same machine.
[jira] [Commented] (YARN-7507) TestNodeLabelContainerAllocation failing in trunk
[ https://issues.apache.org/jira/browse/YARN-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296216#comment-16296216 ] Eric Yang commented on YARN-7507: - This test was broken by YARN-7466. > TestNodeLabelContainerAllocation failing in trunk > - > > Key: YARN-7507 > URL: https://issues.apache.org/jira/browse/YARN-7507 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Bibin A Chundatt > > > https://builds.apache.org/job/PreCommit-YARN-Build/18498/testReport/ > {code} > TestNodeLabelContainerAllocation.testPreferenceOfNeedyPrioritiesUnderSameAppTowardsNodePartitions:786->checkPendingResource:557 > expected:<1024> but was:<0> > > TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions:985->checkPendingResource:557 > expected:<5120> but was:<0> > TestNodeLabelContainerAllocation.testQueueMetricsWithLabels:1962 > expected:<0> but was:<1024> > > TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode:2065 > expected:<1024> but was:<2048> > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container
[ https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296214#comment-16296214 ]
Eric Yang commented on YARN-7466:
-
This patch breaks the Resource Manager unit tests.
{code}
mvn clean test -Dtest=TestNodeLabelContainerAllocation
{code}
Please verify with this unit test.

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Chandni Singh
> Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7466.001.patch
>
>
> The default value of allocationRequestId is inconsistent:
> it is -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}.
[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle
[ https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296151#comment-16296151 ] genericqa commented on YARN-5366: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 17 new + 595 unchanged - 0 fixed = 612 total (was 595) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 3s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} | |
[jira] [Commented] (YARN-7612) Add Placement Processor Framework
[ https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296117#comment-16296117 ]
Konstantinos Karanasos commented on YARN-7612:
--
[~leftnoteasy]/[~asuresh], I was just checking the package structure. At the moment we have resourcemanager/placement, resourcemanager/scheduler/placement, and resourcemanager/scheduler/constraint. Do we need to have two placement packages? It seems that we should either consolidate them or find a better name for one of the two. Is the first one just queue placement (that's what I see), or can it be anything else? I know it is not directly related to this JIRA, but since we are at this part of the code, I wanted to hear your opinion; we can open another JIRA if needed.

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Arun Suresh
> Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework that handles placement constraints and scheduling requests from an app and places them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API
[ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296071#comment-16296071 ] Eric Yang commented on YARN-7605: - The hadoop-common unit test failure is caused by HADOOP-10054. The Resource Manager unit test failures are tracked by YARN-7507 and YARN-7559. Neither failure is related to this patch. > Implement doAs for Api Service REST API > --- > > Key: YARN-7605 > URL: https://issues.apache.org/jira/browse/YARN-7605 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Eric Yang > Fix For: yarn-native-services > > Attachments: YARN-7605.001.patch, YARN-7605.004.patch, > YARN-7605.005.patch, YARN-7605.006.patch > > > In YARN-7540, all client entry points for the API service were centralized to > use the REST API instead of direct file system and resource manager RPC > calls. This change helped to centralize yarn metadata under the yarn user > instead of crawling through every user's home directory to find metadata. > The next step is to make sure "doAs" calls work properly for the API Service. > The metadata is stored by the YARN user, but the actual workload still needs > to be performed as the end user, so the API service must authenticate the end > user's Kerberos credentials and perform a doAs call when requesting > containers via ServiceClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296046#comment-16296046 ] genericqa commented on YARN-7622: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 6 new + 40 unchanged - 2 fixed = 46 total (was 42) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 43s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7622 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902736/YARN-7622.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5da1ed1ae3c0 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c7a4dda | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/18973/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit |
[jira] [Commented] (YARN-6632) Backport YARN-3425 to branch 2.7
[ https://issues.apache.org/jira/browse/YARN-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295997#comment-16295997 ] Íñigo Goiri commented on YARN-6632: --- The whitespace seems to be coming from yarn-default. I think this is OK. > Backport YARN-3425 to branch 2.7 > > > Key: YARN-6632 > URL: https://issues.apache.org/jira/browse/YARN-6632 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: YARN-3425-branch-2.7.patch > > > NPE from RMNodeLabelsManager.serviceStop when NodeLabelsManager.serviceInit > failed -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7671) Improve Diagnostic message for stop yarn native service
Yesha Vora created YARN-7671: Summary: Improve Diagnostic message for stop yarn native service Key: YARN-7671 URL: https://issues.apache.org/jira/browse/YARN-7671 Project: Hadoop YARN Issue Type: Bug Affects Versions: 3.0.0 Reporter: Yesha Vora Steps: 1) Install a Hadoop 3.0 cluster 2) Run a Yarn service application {code:title=sleeper.json}{ "name": "sleeper-service", "components" : [ { "name": "sleeper", "number_of_containers": 1, "launch_command": "sleep 90", "resource": { "cpus": 1, "memory": "256" } } ] }{code} {code:title=cmd} yarn app -launch my-sleeper1 sleeper.json{code} 3) Stop the yarn service app {code:title=cmd} yarn app -stop my-sleeper1{code} On stopping the yarn service, the appId finishes with YarnApplicationState: FINISHED, FinalStatus Reported by AM: ENDED, and Diagnostics: Navigate to the failed component for more details. This diagnostics message should be improved: when an application is explicitly stopped by the user, the diagnostics message should say "Application stopped by user" -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7612) Add Placement Processor Framework
[ https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-7612: -- Attachment: YARN-7612-YARN-6592.007.patch Updating patch. Attaching only the Processor and related classes. This now depends on YARN-7669 and YARN-7670. > Add Placement Processor Framework > - > > Key: YARN-7612 > URL: https://issues.apache.org/jira/browse/YARN-7612 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7612-YARN-6592.001.patch, > YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, > YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, > YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, > YARN-7612-v2.wip.patch, YARN-7612.wip.patch > > > This introduces a Placement Processor and a planning algorithm framework to > handle placement constraints and scheduling requests from an app and places > them on nodes. > The actual planning algorithm(s) will be handled in YARN-7613. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests
[ https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-7670: -- Attachment: YARN-7670-YARN-6592.001.patch Moving the ResourceScheduler and CapacityScheduler changes from YARN-7612 here > Modifications to the ResourceScheduler to support SchedulingRequests > > > Key: YARN-7670 > URL: https://issues.apache.org/jira/browse/YARN-7670 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7670-YARN-6592.001.patch > > > As per discussions in YARN-7612, this JIRA tracks the changes to the > ResourceScheduler interface and implementation in CapacityScheduler to > support SchedulingRequests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7669) [API] Introduce interfaces for placement constraint processing
[ https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-7669: -- Attachment: YARN-7669-YARN-6592.001.patch Moving all API changes and package changes from YARN-7612 > [API] Introduce interfaces for placement constraint processing > -- > > Key: YARN-7669 > URL: https://issues.apache.org/jira/browse/YARN-7669 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7669-YARN-6592.001.patch > > > As per discussions in YARN-7612, this JIRA will introduce the generic > interfaces which will be implemented in YARN-7612 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6632) Backport YARN-3425 to branch 2.7
[ https://issues.apache.org/jira/browse/YARN-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295982#comment-16295982 ] genericqa commented on YARN-6632: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 27s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 36 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 | | JIRA Issue | YARN-6632 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869352/YARN-3425-branch-2.7.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e3832960359d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.7 / fb19423 | | maven | version: Apache Maven 3.0.5 | | Default Java | 1.7.0_151 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18974/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18974/testReport/ | | Max. process+thread count | 80 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18974/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Backport YARN-3425 to branch 2.7 > > > Key: YARN-6632 > URL:
[jira] [Commented] (YARN-7543) FileNotFoundException due to a broken link when creating a yarn service and missing max cpu limit check
[ https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295932#comment-16295932 ] Gour Saha commented on YARN-7543: - Ok, I see that the description mentions the cpu limit check. I updated the summary to reflect it. > FileNotFoundException due to a broken link when creating a yarn service and > missing max cpu limit check > --- > > Key: YARN-7543 > URL: https://issues.apache.org/jira/browse/YARN-7543 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-7543.01.patch > > > The hadoop lib dir had a broken link to a ojdb jar which was not really > required for a YARN service creation. The app submission failed with the > below FNFE. Ideally it should be handled and app should be successfully > submitted and let the app fail if it really needed the jar of the broken link > - > {code} > [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch > gour-sleeper sleeper > WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of > YARN_LOG_DIR. > WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of > YARN_LOGFILE. > WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of > YARN_PID_DIR. > WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS. > 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your platform... using builtin-java classes where applicable > 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at > ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050 > 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit > local reads feature cannot be used because libhadoop cannot be loaded. 
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at > ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050 > 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from > local FS: > /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json > 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper > at > hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json > 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found > 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't > exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties > 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to > HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using > command: yarn app -enableFastLaunch > Exception in thread "main" java.io.FileNotFoundException: File > /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > at > org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399) > at > org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434) > at > org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409) > at > org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138) > at > org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695) > at > 
org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553) > at > org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212) > at > org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional
[jira] [Updated] (YARN-7543) FileNotFoundException due to a broken link when creating a yarn service and missing max cpu limit check
[ https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-7543: Summary: FileNotFoundException due to a broken link when creating a yarn service and missing max cpu limit check (was: FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory) > FileNotFoundException due to a broken link when creating a yarn service and > missing max cpu limit check > --- > > Key: YARN-7543 > URL: https://issues.apache.org/jira/browse/YARN-7543 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Jian He > Fix For: yarn-native-services > > Attachments: YARN-7543.01.patch > > > The hadoop lib dir had a broken link to a ojdb jar which was not really > required for a YARN service creation. The app submission failed with the > below FNFE. Ideally it should be handled and app should be successfully > submitted and let the app fail if it really needed the jar of the broken link > - > {code} > [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch > gour-sleeper sleeper > WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of > YARN_LOG_DIR. > WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of > YARN_LOGFILE. > WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of > YARN_PID_DIR. > WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS. > 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your platform... using builtin-java classes where applicable > 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at > ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050 > 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit > local reads feature cannot be used because libhadoop cannot be loaded. 
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at > ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050 > 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from > local FS: > /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json > 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper > at > hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json > 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found > 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't > exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties > 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to > HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using > command: yarn app -enableFastLaunch > Exception in thread "main" java.io.FileNotFoundException: File > /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454) > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > at > org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399) > at > org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434) > at > org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409) > at > org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138) > at > org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695) > at > 
org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553) > at > org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212) > at > org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) -
[jira] [Created] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests
Arun Suresh created YARN-7670: - Summary: Modifications to the ResourceScheduler to support SchedulingRequests Key: YARN-7670 URL: https://issues.apache.org/jira/browse/YARN-7670 Project: Hadoop YARN Issue Type: Sub-task Reporter: Arun Suresh Assignee: Arun Suresh As per discussions in YARN-7612, this JIRA tracks the changes to the ResourceScheduler interface and implementation in CapacityScheduler to support SchedulingRequests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Greg Phillips updated YARN-7622: Attachment: YARN-7622.004.patch > Allow fair-scheduler configuration on HDFS > -- > > Key: YARN-7622 > URL: https://issues.apache.org/jira/browse/YARN-7622 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Reporter: Greg Phillips >Assignee: Greg Phillips >Priority: Minor > Attachments: YARN-7622.001.patch, YARN-7622.002.patch, > YARN-7622.003.patch, YARN-7622.004.patch > > > The FairScheduler requires the allocation file to be hosted on the local > filesystem on the RM node(s). Allowing HDFS to store the allocation file will > provide improved redundancy, more options for scheduler updates, and RM > failover consistency in HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Greg Phillips updated YARN-7622: Attachment: (was: YARN-7622.004.patch) > Allow fair-scheduler configuration on HDFS > -- > > Key: YARN-7622 > URL: https://issues.apache.org/jira/browse/YARN-7622 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Reporter: Greg Phillips >Assignee: Greg Phillips >Priority: Minor > Attachments: YARN-7622.001.patch, YARN-7622.002.patch, > YARN-7622.003.patch, YARN-7622.004.patch > > > The FairScheduler requires the allocation file to be hosted on the local > filesystem on the RM node(s). Allowing HDFS to store the allocation file will > provide improved redundancy, more options for scheduler updates, and RM > failover consistency in HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295920#comment-16295920 ] genericqa commented on YARN-7622: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} YARN-7622 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-7622 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902706/YARN-7622.004.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18972/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Allow fair-scheduler configuration on HDFS > -- > > Key: YARN-7622 > URL: https://issues.apache.org/jira/browse/YARN-7622 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Reporter: Greg Phillips >Assignee: Greg Phillips >Priority: Minor > Attachments: YARN-7622.001.patch, YARN-7622.002.patch, > YARN-7622.003.patch, YARN-7622.004.patch > > > The FairScheduler requires the allocation file to be hosted on the local > filesystem on the RM node(s). Allowing HDFS to store the allocation file will > provide improved redundancy, more options for scheduler updates, and RM > failover consistency in HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7669) [API] Introduce interfaces for placement constraint processing
Arun Suresh created YARN-7669: - Summary: [API] Introduce interfaces for placement constraint processing Key: YARN-7669 URL: https://issues.apache.org/jira/browse/YARN-7669 Project: Hadoop YARN Issue Type: Sub-task Reporter: Arun Suresh Assignee: Arun Suresh As per discussions in YARN-7612, this JIRA will introduce the generic interfaces which will be implemented in YARN-7612 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7668) Remove unused variables from ContainerLocalizer
Ray Chiang created YARN-7668: Summary: Remove unused variables from ContainerLocalizer Key: YARN-7668 URL: https://issues.apache.org/jira/browse/YARN-7668 Project: Hadoop YARN Issue Type: Task Reporter: Ray Chiang Assignee: Ray Chiang Priority: Trivial While figuring out something else, I found two class constants in ContainerLocalizer that look like they aren't being used anymore. {noformat} public static final String OUTPUTDIR = "output"; public static final String WORKDIR = "work"; {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7668) Remove unused variables from ContainerLocalizer
[ https://issues.apache.org/jira/browse/YARN-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated YARN-7668: - Labels: newbie (was: ) > Remove unused variables from ContainerLocalizer > --- > > Key: YARN-7668 > URL: https://issues.apache.org/jira/browse/YARN-7668 > Project: Hadoop YARN > Issue Type: Task >Reporter: Ray Chiang >Assignee: Ray Chiang >Priority: Trivial > Labels: newbie > > While figuring out something else, I found two class constants in > ContainerLocalizer that look like they aren't being used anymore. > {noformat} > public static final String OUTPUTDIR = "output"; > public static final String WORKDIR = "work"; > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7612) Add Placement Processor Framework
[ https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295858#comment-16295858 ] Konstantinos Karanasos commented on YARN-7612: -- I looked at the latest patch and had an offline discussion with [~asuresh]. First, we agreed to split the current JIRA into three parts to better review it: * API of the processor framework * Implementation of the processor framework * Coupling with Capacity Scheduler Some initial comments in the meantime: * I know it might not be that easy, but let's try to remove the isConstraintedAllocation from the RMContainer. It will greatly simplify the code of the CapacityScheduler and the FiCaSchedulerApp. * One downside of the current implementation is that it relies on the commit API that is there only in the CapacityScheduler... This is not a blocker for now, but we should see in another JIRA what it will take to make this work for the Fair Scheduler too. * Do we need the placeApplication in the RMAppImpl? * CapacityScheduler: ** Can we unify the two createResourceCommitRequest methods in the CapacityScheduler that seem to duplicate a lot of code? ** In the createResourceCommitRequest, since it assumes we request a single container in the SchedulingRequest, shouldn't we add a check for that? ** The SchedulerContainer is confusing... Looks like an inner class to me; it certainly deserves a better name. * Does the NodeCandidateSelector belong to the constraint/processor package? * ApplicationMasterService: since the biggest part of the serviceInit is now the AMS-processor-chain-related stuff, let's put all that initialization in a separate amsProcessorInit method. Also some minor comments: * YarnConfiguration: placement.algorithm -> constraint-placement.algorithm * yarn_protos.proto: ** RR prefix reminds me of ResourceRequest, instead of RejectionReason. Same in ProtoUtils. Maybe we can do something like PRR (PlacementRR). 
** COULD_NOT_SCHEDULE_ON_PLACED_NODE -> COULD_NOT_SCHEDULE_ON_NODE * In the ApplicationMasterServiceUtils, I would put the setRejectedSchedulingRequests inside the first if clause, assuming most responses will not have a rejection. * Typo in RMActiveServiceContext, RMContext, and RMContextImpl: PlacementConstriantsManager * RMContainerImpl: isConstraintAllocation vs. isConstraintPlacement… Make these consistent * ResourceScheduler: ** True is -> true if ** tryAllocate -> attemptAllocation? Or better attemptAllocationOnNode? ** "Propose a SchedulerRequest for the Scheduler to try allocate" -> "attempt to place a SchedulerRequest" * constraint/api/Algorithm: rename to something like PlacementAlgorithm > Add Placement Processor Framework > - > > Key: YARN-7612 > URL: https://issues.apache.org/jira/browse/YARN-7612 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7612-YARN-6592.001.patch, > YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, > YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, > YARN-7612-YARN-6592.006.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch > > > This introduces a Placement Processor and a Planning algorithm framework to > handle placement constraints and scheduling requests from an app and places > them on nodes. > The actual planning algorithm(s) will be handled in YARN-7613. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.
[ https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295818#comment-16295818 ] Varun Saxena commented on YARN-7662: Latest patch looks good. Will wait for a few hours before committing to give others a chance to review. By the way in yarn-default.xml, description of yarn reader bind host config can say " The actual address the reader will bind to. " instead of saying "The actual address the server will bind to. ". No need to give a patch for it though. Can change this while committing. > [Atsv2] Define new set of configurations for reader and collectors to bind. > --- > > Key: YARN-7662 > URL: https://issues.apache.org/jira/browse/YARN-7662 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-7662.01.patch, YARN-7662.01.patch, > YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch > > > While starting Timeline Reader in secure mode, login happens using > timeline.service.address even though timeline.service.bindhost is configured > with 0.0.0.0. This causes exact principal name that matches address name to > be present in keytabs. > It is always better to login using getLocalHost that gives machine hostname > which is configured in /etc/hosts unlike NodeManager does in serviceStart. > And timeline.service.address is not required in non-secure mode, so its > better to keep consistent in secure and non-secure mode -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7612) Add Placement Processor Framework
[ https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantinos Karanasos updated YARN-7612: - Summary: Add Placement Processor Framework (was: Add Placement Processor and planner framework) > Add Placement Processor Framework > - > > Key: YARN-7612 > URL: https://issues.apache.org/jira/browse/YARN-7612 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-7612-YARN-6592.001.patch, > YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, > YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, > YARN-7612-YARN-6592.006.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch > > > This introduces a Placement Processor and a Planning algorithm framework to > handle placement constraints and scheduling requests from an app and places > them on nodes. > The actual planning algorithm(s) will be handled in a YARN-7613. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle
[ https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shane Kumpf updated YARN-5366: -- Attachment: YARN-5366.007.patch Attaching a reworked patch that addresses some, but not all, of the issues listed in the description. Items 1, 2, 3, and 6 are covered. I'll rework the description and open the follow on tasks once it appears we are getting close on this issue. > Improve handling of the Docker container life cycle > --- > > Key: YARN-5366 > URL: https://issues.apache.org/jira/browse/YARN-5366 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Shane Kumpf >Assignee: Shane Kumpf > Labels: oct16-medium > Attachments: YARN-5366.001.patch, YARN-5366.002.patch, > YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, > YARN-5366.006.patch, YARN-5366.007.patch > > > There are several paths that need to be improved with regard to the Docker > container lifecycle when running Docker containers on YARN. > 1) Provide the ability to keep a container on the NodeManager for a set > period of time for debugging purposes. > 2) Support sending signals to the process in the container to allow for > triggering stack traces, heap dumps, etc. > 3) Support for Docker's live restore, which means moving away from the use of > {{docker wait}}. (YARN-5818) > 4) Improve the resiliency of liveliness checks (kill -0) by adding retries. > 5) Improve the resiliency of container removal by adding retries. > 6) Only attempt to stop, kill, and remove containers if the current container > state allows for it. > 7) Better handling of short lived containers when the container is stopped > before the PID can be retrieved. (YARN-6305) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7598) Document how to use classpath isolation for aux-services in YARN
[ https://issues.apache.org/jira/browse/YARN-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-7598: Attachment: YARN-7598.trunk.1.patch > Document how to use classpath isolation for aux-services in YARN > > > Key: YARN-7598 > URL: https://issues.apache.org/jira/browse/YARN-7598 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-7598.trunk.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API
[ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295774#comment-16295774 ] genericqa commented on YARN-7605: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 38s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 9s{color} | {color:orange} root: The patch generated 17 new + 160 unchanged - 2 fixed = 177 total (was 162) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 5s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 6s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 11s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 51s{color} | {color:green} hadoop-yarn-services-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}207m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem | | | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce
[jira] [Commented] (YARN-6632) Backport YARN-3425 to branch 2.7
[ https://issues.apache.org/jira/browse/YARN-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295727#comment-16295727 ] Konstantin Shvachko commented on YARN-6632: --- Hey [~elgoiri] can you change the status to "Patch Available"? Otherwise Jenkins [refuses to run|https://builds.apache.org/job/PreCommit-YARN-Build/18957/console]. > Backport YARN-3425 to branch 2.7 > > > Key: YARN-6632 > URL: https://issues.apache.org/jira/browse/YARN-6632 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: YARN-3425-branch-2.7.patch > > > NPE from RMNodeLabelsManager.serviceStop when NodeLabelsManager.serviceInit > failed -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7661) NodeManager metrics return wrong value after update node resource
[ https://issues.apache.org/jira/browse/YARN-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295720#comment-16295720 ] Hudson commented on YARN-7661: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13396 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13396/]) YARN-7661. NodeManager metrics return wrong value after update node (jlowe: rev 811fabdebe881248756c0165bf7667bfc22be9bb) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java > NodeManager metrics return wrong value after update node resource > - > > Key: YARN-7661 > URL: https://issues.apache.org/jira/browse/YARN-7661 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yang Wang >Assignee: Yang Wang > Attachments: YARN-7661.001.patch, YARN-7661.002.patch > > > {code:title=NodeManagerMetrics.java} > public void addResource(Resource res) { > availableMB = availableMB + res.getMemorySize(); > availableGB.incr((int)Math.floor(availableMB/1024d)); > availableVCores.incr(res.getVirtualCores()); > } > {code} > When the node resource was updated through RM-NM heartbeat, the NM metric > will get wrong value. > The root cause of this issue is that new resource has been added to > availableMB, so not needed to increase for availableGB again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
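The double-counting described in YARN-7661 can be reproduced with a minimal standalone sketch (this models the cumulative counter behavior only; it is not the actual NodeManagerMetrics code). Because availableMB already holds the updated running total, calling incr() with the full recomputed GB value re-adds the whole total on every resource update instead of just the change:

```java
public class NodeMetricsSketch {
    // Buggy pattern: every update adds the *entire* current GB total
    // to the cumulative counter, as in the quoted addResource().
    static long buggyGBCounter(long[] mbDeltas) {
        long mb = 0, gb = 0;
        for (long d : mbDeltas) {
            mb += d;
            gb += mb / 1024;            // wrong: whole running total re-added
        }
        return gb;
    }

    // Fixed pattern: increment the counter by the change in whole GB only.
    static long fixedGBCounter(long[] mbDeltas) {
        long mb = 0, gb = 0;
        for (long d : mbDeltas) {
            long oldGB = mb / 1024;
            mb += d;
            gb += mb / 1024 - oldGB;    // only the delta is added
        }
        return gb;
    }

    public static void main(String[] args) {
        // A 4 GB node whose resource is later raised by 2 GB via heartbeat.
        long[] updates = {4096, 2048};
        System.out.println(buggyGBCounter(updates)); // 4 + 6 = 10 (wrong)
        System.out.println(fixedGBCounter(updates)); // 6 (correct total)
    }
}
```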
[jira] [Commented] (YARN-7661) NodeManager metrics return wrong value after update node resource
[ https://issues.apache.org/jira/browse/YARN-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295667#comment-16295667 ] Jason Lowe commented on YARN-7661: -- Thanks for updating the patch! +1 lgtm. Committing this. > NodeManager metrics return wrong value after update node resource > - > > Key: YARN-7661 > URL: https://issues.apache.org/jira/browse/YARN-7661 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yang Wang >Assignee: Yang Wang > Attachments: YARN-7661.001.patch, YARN-7661.002.patch > > > {code:title=NodeManagerMetrics.java} > public void addResource(Resource res) { > availableMB = availableMB + res.getMemorySize(); > availableGB.incr((int)Math.floor(availableMB/1024d)); > availableVCores.incr(res.getVirtualCores()); > } > {code} > When the node resource was updated through RM-NM heartbeat, the NM metric > will get wrong value. > The root cause of this issue is that new resource has been added to > availableMB, so not needed to increase for availableGB again. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS
[ https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Greg Phillips updated YARN-7622: Attachment: YARN-7622.004.patch [~wilfreds] - Thanks again for the review. The two fields that triggered the findbugs synchronization issues have been reverted to volatile (Atomic types removed), and an additional allocation-file check has been added to ensure only hdfs or local file schemes are provided. A unit test has been added to confirm that providing an invalid URI scheme will fail prior to {{serviceInit}}. In the case where an absolute path for the allocation file is provided and the file doesn't exist, the original behavior would throw an exception inside the {{serviceInit}} thread at the call to lastModified(). This exception is handled within the thread without breaking the loop. {{getAllocationFile}} only checks for file existence if it is a resource accessible to the classloader; otherwise a potentially bogus URL would still throw an exception within the {{serviceInit}} thread. > Allow fair-scheduler configuration on HDFS > -- > > Key: YARN-7622 > URL: https://issues.apache.org/jira/browse/YARN-7622 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Reporter: Greg Phillips >Assignee: Greg Phillips >Priority: Minor > Attachments: YARN-7622.001.patch, YARN-7622.002.patch, > YARN-7622.003.patch, YARN-7622.004.patch > > > The FairScheduler requires the allocation file to be hosted on the local > filesystem on the RM node(s). Allowing HDFS to store the allocation file will > provide improved redundancy, more options for scheduler updates, and RM > failover consistency in HA. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
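The scheme check described in the comment above can be sketched roughly as follows. This is a hypothetical standalone helper, not the actual YARN-7622 patch code; the class and method names are invented for illustration. A URI with no scheme is treated as a plain local path, which remains allowed:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Arrays;
import java.util.List;

public class AllocFileCheckSketch {
    // Schemes the allocation file may use, per the comment: HDFS or local files.
    private static final List<String> SUPPORTED = Arrays.asList("hdfs", "file");

    // Returns true if the allocation-file location uses a supported scheme.
    // Malformed URIs and unsupported schemes (e.g. ftp://) are rejected,
    // so the failure surfaces before any background reload thread starts.
    static boolean isSupportedScheme(String allocFile) {
        try {
            String scheme = new URI(allocFile).getScheme();
            return scheme == null || SUPPORTED.contains(scheme.toLowerCase());
        } catch (URISyntaxException e) {
            return false;
        }
    }
}
```

Validating eagerly like this, rather than letting a bogus URL throw inside the reload thread, matches the behavior the comment says the unit test asserts.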
[jira] [Created] (YARN-7667) Docker Stop grace period should be configurable
Eric Badger created YARN-7667: - Summary: Docker Stop grace period should be configurable Key: YARN-7667 URL: https://issues.apache.org/jira/browse/YARN-7667 Project: Hadoop YARN Issue Type: Sub-task Reporter: Eric Badger {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never called, so the stop uses the 10-second default grace period from Docker. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
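As a hedged illustration of what wiring up a configurable grace period could amount to: the docker CLI's {{docker stop}} accepts a {{--time}} flag (SIGTERM, wait N seconds, then SIGKILL; 10 seconds when omitted). The class below is a hypothetical standalone sketch, not the actual DockerStopCommand API:

```java
import java.util.ArrayList;
import java.util.List;

public class DockerStopSketch {
    // Builds a `docker stop` command line with an explicit grace period.
    // Without --time, docker falls back to its 10-second default, which is
    // the behavior YARN-7667 proposes making configurable.
    static List<String> buildStopCommand(String containerId, int gracePeriodSecs) {
        List<String> cmd = new ArrayList<>();
        cmd.add("docker");
        cmd.add("stop");
        cmd.add("--time=" + gracePeriodSecs);
        cmd.add(containerId);
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(buildStopCommand("container_1234", 30));
    }
}
```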
[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.
[ https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295494#comment-16295494 ] genericqa commented on YARN-7662: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 51m 54s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | |
[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API
[ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated YARN-7605: Attachment: YARN-7605.006.patch - Fix some status report issues. > Implement doAs for Api Service REST API > --- > > Key: YARN-7605 > URL: https://issues.apache.org/jira/browse/YARN-7605 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Eric Yang >Assignee: Eric Yang > Fix For: yarn-native-services > > Attachments: YARN-7605.001.patch, YARN-7605.004.patch, > YARN-7605.005.patch, YARN-7605.006.patch > > > In YARN-7540, all client entry points for API service is centralized to use > REST API instead of having direct file system and resource manager rpc calls. > This change helped to centralize yarn metadata to be owned by yarn user > instead of crawling through every user's home directory to find metadata. > The next step is to make sure "doAs" calls work properly for API Service. > The metadata is stored by YARN user, but the actual workload still need to be > performed as end users, hence API service must authenticate end user kerberos > credential, and perform doAs call when requesting containers via > ServiceClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7661) NodeManager metrics return wrong value after update node resource
[ https://issues.apache.org/jira/browse/YARN-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295372#comment-16295372 ] genericqa commented on YARN-7661: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 51m 53s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 27s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7661 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902580/YARN-7661.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cc346e158dc2 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0010089 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18968/testReport/ | | Max. process+thread count | 438 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18968/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NodeManager metrics return wrong value after update node resource
[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
[ https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295326#comment-16295326 ] Miklos Szegedi commented on YARN-7577: -- The other unit test errors should be unrelated. I only changed this single test. > Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart > -- > > Key: YARN-7577 > URL: https://issues.apache.org/jira/browse/YARN-7577 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7577.000.patch, YARN-7577.001.patch, > YARN-7577.002.patch, YARN-7577.003.patch, YARN-7577.004.patch, > YARN-7577.005.patch, YARN-7577.006.patch > > > This happens, if Fair Scheduler is the default. The test should run with both > schedulers > {code} > java.lang.AssertionError: > Expected :-102 > Actual :-106 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > 
{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.
[ https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295228#comment-16295228 ] genericqa commented on YARN-7662: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 10m 10s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-7662 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902672/YARN-7662.05.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18969/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [Atsv2] Define new set of configurations for reader and collectors to bind. > --- > > Key: YARN-7662 > URL: https://issues.apache.org/jira/browse/YARN-7662 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-7662.01.patch, YARN-7662.01.patch, > YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch > > > While starting Timeline Reader in secure mode, login happens using > timeline.service.address even though timeline.service.bindhost is configured > with 0.0.0.0. This causes exact principal name that matches address name to > be present in keytabs. > It is always better to login using getLocalHost that gives machine hostname > which is configured in /etc/hosts unlike NodeManager does in serviceStart. 
> And timeline.service.address is not required in non-secure mode, so its > better to keep consistent in secure and non-secure mode -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.
[ https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-7662: Attachment: YARN-7662.05.patch Updated the patch fixing review comment! > [Atsv2] Define new set of configurations for reader and collectors to bind. > --- > > Key: YARN-7662 > URL: https://issues.apache.org/jira/browse/YARN-7662 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-7662.01.patch, YARN-7662.01.patch, > YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch > > > While starting Timeline Reader in secure mode, login happens using > timeline.service.address even though timeline.service.bindhost is configured > with 0.0.0.0. This causes exact principal name that matches address name to > be present in keytabs. > It is always better to login using getLocalHost that gives machine hostname > which is configured in /etc/hosts unlike NodeManager does in serviceStart. > And timeline.service.address is not required in non-secure mode, so its > better to keep consistent in secure and non-secure mode -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.
[ https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295155#comment-16295155 ] Varun Saxena commented on YARN-7662: [~rohithsharma], thanks for the latest patch. As we have already released ATSv2 and the addresses depend on yarn.timeline-service.bind-host, to maintain backward compatibility, we should fall back to it if yarn.timeline-service.reader/collector.bind-host is not set > [Atsv2] Define new set of configurations for reader and collectors to bind. > --- > > Key: YARN-7662 > URL: https://issues.apache.org/jira/browse/YARN-7662 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-7662.01.patch, YARN-7662.01.patch, > YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch > > > While starting Timeline Reader in secure mode, login happens using > timeline.service.address even though timeline.service.bindhost is configured > with 0.0.0.0. This causes exact principal name that matches address name to > be present in keytabs. > It is always better to login using getLocalHost that gives machine hostname > which is configured in /etc/hosts unlike NodeManager does in serviceStart. > And timeline.service.address is not required in non-secure mode, so its > better to keep consistent in secure and non-secure mode -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
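The backward-compatible fallback Varun describes above (prefer the new reader/collector-specific bind-host key, fall back to the legacy yarn.timeline-service.bind-host when it is unset) can be sketched as below. This is an illustrative stand-in using a plain map, not Hadoop's Configuration class and not the YARN-7662 patch; the property names come from the comment, the class and method names are made up for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

public class BindHostFallback {
    private final Map<String, String> conf = new HashMap<>();

    public void set(String key, String value) {
        conf.put(key, value);
    }

    // Prefer the new component-specific key; fall back to the legacy
    // yarn.timeline-service.bind-host for compatibility with already
    // released ATSv2 configs, and finally to a wildcard default.
    public String getBindHost(String specificKey) {
        String v = conf.get(specificKey);
        if (v != null) {
            return v;
        }
        return conf.getOrDefault("yarn.timeline-service.bind-host", "0.0.0.0");
    }

    public static void main(String[] args) {
        BindHostFallback c = new BindHostFallback();
        c.set("yarn.timeline-service.bind-host", "10.0.0.5");
        // New key unset: the legacy value wins.
        System.out.println(c.getBindHost("yarn.timeline-service.reader.bind-host"));
        // New key set: it takes precedence over the legacy one.
        c.set("yarn.timeline-service.reader.bind-host", "0.0.0.0");
        System.out.println(c.getBindHost("yarn.timeline-service.reader.bind-host"));
    }
}
```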
[jira] [Updated] (YARN-7664) Several javadoc errors
[ https://issues.apache.org/jira/browse/YARN-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated YARN-7664: Attachment: YARN-7664-branch-3.0.patch Attaching the patch for branch-3.0. > Several javadoc errors > -- > > Key: YARN-7664 > URL: https://issues.apache.org/jira/browse/YARN-7664 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Blocker > Fix For: 3.1.0, 3.0.1 > > Attachments: YARN-7664-branch-3.0.patch, YARN-7664.001.patch > > > Not sure if I'm somehow on a different version as our Yetus infra or what, > but I'm unable to build Hadoop due to some recent changes that are throwing > JavaDoc errors. Most significantly: > {code} > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java:425: > error: reference not found > [ERROR] * @throws IllegalArgumentExcpetion if units contain non alpha > characters > [ERROR] ^ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-7664) Several javadoc errors
[ https://issues.apache.org/jira/browse/YARN-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned YARN-7664: --- Assignee: Sean Mackrory > Several javadoc errors > -- > > Key: YARN-7664 > URL: https://issues.apache.org/jira/browse/YARN-7664 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Blocker > Fix For: 3.1.0, 3.0.1 > > Attachments: YARN-7664.001.patch > > > Not sure if I'm somehow on a different version as our Yetus infra or what, > but I'm unable to build Hadoop due to some recent changes that are throwing > JavaDoc errors. Most significantly: > {code} > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java:425: > error: reference not found > [ERROR] * @throws IllegalArgumentExcpetion if units contain non alpha > characters > [ERROR] ^ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7664) Several javadoc errors
[ https://issues.apache.org/jira/browse/YARN-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294993#comment-16294993 ] Hudson commented on YARN-7664: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13394 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13394/]) YARN-7664. Several javadoc errors. Contributed by Sean Mackrory. (aajisaka: rev 001008958d8da008ed2e3be370ea4431fd023c97) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java > Several javadoc errors > -- > > Key: YARN-7664 > URL: https://issues.apache.org/jira/browse/YARN-7664 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Priority: Blocker > Attachments: YARN-7664.001.patch > > > Not sure if I'm somehow on a different version as our Yetus infra or what, > but I'm unable to build Hadoop due to some recent changes that are throwing > JavaDoc errors. 
Most significantly: > {code} > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java:425: > error: reference not found > [ERROR] * @throws IllegalArgumentExcpetion if units contain non alpha > characters > [ERROR] ^ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7664) Several javadoc errors
[ https://issues.apache.org/jira/browse/YARN-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294953#comment-16294953 ] Akira Ajisaka commented on YARN-7664: - LGTM, +1. Committing this. > Several javadoc errors > -- > > Key: YARN-7664 > URL: https://issues.apache.org/jira/browse/YARN-7664 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Priority: Blocker > Attachments: YARN-7664.001.patch > > > Not sure if I'm somehow on a different version as our Yetus infra or what, > but I'm unable to build Hadoop due to some recent changes that are throwing > JavaDoc errors. Most significantly: > {code} > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397: > error: self-closing element not allowed > [ERROR] * > [ERROR] ^ > [ERROR] > /home/sean/src/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java:425: > error: reference not found > [ERROR] * @throws IllegalArgumentExcpetion if units contain non alpha > characters > [ERROR] ^ > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7609) mvn package fails by javadoc error
[ https://issues.apache.org/jira/browse/YARN-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294950#comment-16294950 ] Akira Ajisaka commented on YARN-7609: - Thank you for the update, but unfortunately, YARN-7119 and YARN-7586 were committed recently and they broke javadocs. The patch in YARN-7664 fixes all of them and I'll commit this. > mvn package fails by javadoc error > -- > > Key: YARN-7609 > URL: https://issues.apache.org/jira/browse/YARN-7609 > Project: Hadoop YARN > Issue Type: Bug > Components: build, documentation >Reporter: Akira Ajisaka >Assignee: Chandni Singh > Attachments: YARN-7609.001.patch, YARN-7609.002.patch > > > {{mvn package -Pdist -DskipTests}} failed. > {noformat} > [ERROR] > /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:379: > error: self-closing element not allowed > [ERROR]* > [ERROR] ^ > [ERROR] > /home/centos/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java:397: > error: self-closing element not allowed > [ERROR]* > [ERROR] ^ > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service
[ https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294857#comment-16294857 ] genericqa commented on YARN-7616: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core: The patch generated 1 new + 83 unchanged - 2 fixed = 84 total (was 85) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 42s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7616 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12902618/YARN-7616.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5c92e8c3377f 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9289641 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/18967/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18967/testReport/ | | Max. process+thread count | 591 (vs. ulimit of 5000) | | modules | C:
[jira] [Commented] (YARN-7457) Delay scheduling should be an individual policy instead of part of scheduler implementation
[ https://issues.apache.org/jira/browse/YARN-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294836#comment-16294836 ] Sunil G commented on YARN-7457: --- Thank you. I raised YARN-7666 to track this. > Delay scheduling should be an individual policy instead of part of scheduler > implementation > --- > > Key: YARN-7457 > URL: https://issues.apache.org/jira/browse/YARN-7457 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Tao Yang > > Currently, different schedulers have slightly different delay scheduling > implementations. Ideally we should make delay scheduling independent from > scheduler implementation. Benefits of doing this: > 1) Applications can choose which delay scheduling policy to use, it could be > time-based / missed-opportunistic-based or whatever new delay scheduling > policy supported by the cluster. Now it is global config of scheduler. > 2) Make scheduler implementations simpler and reusable. > h2. {color:red}Running design doc: > https://docs.google.com/document/d/1rY-CJPLbGk3Xj_8sxre61y2YkHJFK8oqKOshro1ZY3A/edit#heading=h.xnzvh9nn283a{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7666) Introduce scheduler specific environment variable support in ASC for better scheduling placement configurations
[ https://issues.apache.org/jira/browse/YARN-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7666: -- Attachment: YARN-7666.001.patch Attaching an initial version of patch. cc/ [~leftnoteasy] [~kkaranasos] [~Tao Yang] > Introduce scheduler specific environment variable support in ASC for better > scheduling placement configurations > --- > > Key: YARN-7666 > URL: https://issues.apache.org/jira/browse/YARN-7666 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7666.001.patch > > > Introduce a scheduler specific key-value map to hold env variables in ASC. > And also convert AppPlacementAllocator initialization to each app based on > policy configured at each app. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7616) App status does not return state STABLE for a running and stable service
[ https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated YARN-7616: Attachment: YARN-7616.002.patch [~billie.rinaldi], as per your suggestion, uploading 002 patch with all state update logic moved to AM. Please review. > App status does not return state STABLE for a running and stable service > > > Key: YARN-7616 > URL: https://issues.apache.org/jira/browse/YARN-7616 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: yarn-native-services > > Attachments: YARN-7616.001.patch, YARN-7616.002.patch > > > state currently returns null for a running and stable service. Looks like the > code does not return ServiceState.STABLE under any circumstance. Will need to > wire this in. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7666) Introduce scheduler specific environment variable support in ASC for better scheduling placement configurations
Sunil G created YARN-7666: - Summary: Introduce scheduler specific environment variable support in ASC for better scheduling placement configurations Key: YARN-7666 URL: https://issues.apache.org/jira/browse/YARN-7666 Project: Hadoop YARN Issue Type: Sub-task Reporter: Sunil G Assignee: Sunil G Introduce a scheduler specific key-value map to hold env variables in ASC. And also convert AppPlacementAllocator initialization to each app based on policy configured at each app. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7665) Allow FS scheduler state dump to be turned on/off separate from FS debug
[ https://issues.apache.org/jira/browse/YARN-7665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294722#comment-16294722 ] Wilfred Spiegelenburg commented on YARN-7665: - No new test was added because this is just a log-level cleanup. The failing junit tests are known and logged as YARN-7507. > Allow FS scheduler state dump to be turned on/off separate from FS debug > > > Key: YARN-7665 > URL: https://issues.apache.org/jira/browse/YARN-7665 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg > Labels: fairscheduler > Attachments: YARN-7665.001.patch > > > The FS state dump currently cannot be turned on or off independently of the > FS debug logging. > The logic for dumping the state uses a mixture of {{FairScheduler}} and > {{FairScheduler.statedump}} to check whether it dumps. It should be just using > the state dump. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
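The separation described in YARN-7665 amounts to gating the state dump on its own named logger instead of a mixture of the scheduler logger and the state-dump logger. A minimal sketch using the JDK's java.util.logging (the FairScheduler actually uses a different logging framework, and the logger names here are assumptions):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class StateDumpGate {
    // Parent scheduler logger and a dedicated child logger for the dump.
    static final Logger FS_LOG = Logger.getLogger("FairScheduler");
    static final Logger DUMP_LOG = Logger.getLogger("FairScheduler.statedump");

    // Check ONLY the state-dump logger, so the dump can be enabled
    // without turning on scheduler-wide debug logging (and vice versa).
    static boolean shouldDumpState() {
        return DUMP_LOG.isLoggable(Level.FINE);
    }

    public static void main(String[] args) {
        FS_LOG.setLevel(Level.INFO);    // scheduler debug stays off
        DUMP_LOG.setLevel(Level.FINE);  // state dump turned on alone
        System.out.println(shouldDumpState());
    }
}
```

Because the child logger's explicit level overrides the parent's, the two can now be toggled independently, which is the point of the fix.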
[jira] [Updated] (YARN-7032) [ATSv2] NPE while starting hbase co-processor
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-7032: Attachment: hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log Attached the log of the RegionServer that failed to start. > [ATSv2] NPE while starting hbase co-processor > - > > Key: YARN-7032 > URL: https://issues.apache.org/jira/browse/YARN-7032 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Priority: Critical > Attachments: > hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log > > > It is seen randomly that hbase co-processor fails to start with NPE. But > again starting RegionServer, able to succeed in starting RS. > {noformat} > 2017-08-17 05:53:13,535 ERROR > [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] > coprocessor.CoprocessorHost: The coprocessor > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > threw java.lang.NullPointerException > java.lang.NullPointerException > at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187) > at > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Reopened] (YARN-7032) [ATSv2] NPE while starting hbase co-processor
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S reopened YARN-7032: - Reopening this issue since it keeps appearing again and again. This looks to be an issue in our code where we are passing a null in an ArrayList, which causes the RegionServer to shut down. > [ATSv2] NPE while starting hbase co-processor > - > > Key: YARN-7032 > URL: https://issues.apache.org/jira/browse/YARN-7032 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S > > It is seen randomly that hbase co-processor fails to start with NPE. But > again starting RegionServer, able to succeed in starting RS. > {noformat} > 2017-08-17 05:53:13,535 ERROR > [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] > coprocessor.CoprocessorHost: The coprocessor > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > threw java.lang.NullPointerException > java.lang.NullPointerException > at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187) > at > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
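The failure mode described above (a null element in the list handed to the tag helper) can be reproduced with a minimal sketch. The nested Tag class below only imitates the shape of HBase's Tag.fromList for illustration; it is not the actual HBase class, and the byte layout is invented.

```java
import java.util.ArrayList;
import java.util.List;

public class NullTagSketch {
    static class Tag {
        final byte[] bytes;

        Tag(byte[] bytes) {
            this.bytes = bytes;
        }

        // Imitates the shape of HBase's Tag.fromList: summing element
        // lengths dereferences every entry, so a null entry throws an
        // NPE exactly where the coprocessor stack trace points.
        static byte[] fromList(List<Tag> tags) {
            int length = 0;
            for (Tag t : tags) {
                length += t.bytes.length; // NPE when t is null
            }
            return new byte[length];
        }
    }

    public static void main(String[] args) {
        List<Tag> tags = new ArrayList<>();
        tags.add(new Tag(new byte[4]));
        tags.add(null); // the suspected bug: a caller put null into the list
        try {
            Tag.fromList(tags);
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the coprocessor log");
        }
    }
}
```

The fix on the YARN side would be to filter or never add null entries before calling the helper, rather than relying on the helper to tolerate them.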
[jira] [Updated] (YARN-7032) [ATSv2] NPE while starting hbase co-processor
[ https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-7032: Priority: Critical (was: Major) > [ATSv2] NPE while starting hbase co-processor > - > > Key: YARN-7032 > URL: https://issues.apache.org/jira/browse/YARN-7032 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Rohith Sharma K S >Priority: Critical > > It is seen randomly that hbase co-processor fails to start with NPE. But > again starting RegionServer, able to succeed in starting RS. > {noformat} > 2017-08-17 05:53:13,535 ERROR > [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] > coprocessor.CoprocessorHost: The coprocessor > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor > threw java.lang.NullPointerException > java.lang.NullPointerException > at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187) > at > org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org