[jira] [Commented] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134450#comment-16134450 ] Varun Saxena commented on YARN-6861: Committed to YARN-5355. [~rohithsharma], the patch isn't compiling in YARN-5355_branch2. Can you provide a patch for YARN-5355_branch2? > Reader API for sub application entities > --- > > Key: YARN-6861 > URL: https://issues.apache.org/jira/browse/YARN-6861 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: atsv2-subapp, yarn-5355-merge-blocker > Attachments: YARN-6861-YARN-5355.001.patch, > YARN-6861-YARN-5355.002.patch, YARN-6861-YARN-5355.003.patch, > YARN-6861-YARN-5355.004.patch, YARN-6861-YARN-5355.005.patch, > YARN-6861-YARN-5355.006.patch > > > YARN-6733 and YARN-6734 write data into the sub application table. There should > be a way to read those entities. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Description: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" replaced with ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements. was: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN.
It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" replaced with ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements.
> yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{"
> and "}}"
> ---
>
> Key: YARN-7062
> URL: https://issues.apache.org/jira/browse/YARN-7062
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: applications
>Reporter: Lingfeng Su
>Assignee: Lingfeng Su
>
> I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args
> when running Spark jobs on YARN.
> It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy,
> {Age: 18" ("}}" replaced with "").
> I found the final arg in launch_container.sh of the YARN container as:
> --arg '{User: Billy, {Age: 18'
> {code:java}
> exec /bin/bash -c
> "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH"
> $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp
> -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01
> org.apache.spark.deploy.yarn.ApplicationMaster --class
> 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar
> file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18'
> --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1>
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout
> 2>
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr"
> {code}
> We could make some improvements.
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Priority: Major (was: Minor)
> yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{"
> and "}}"
> ---
>
> Key: YARN-7062
> URL: https://issues.apache.org/jira/browse/YARN-7062
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: applications
>Reporter: Lingfeng Su
>Assignee: Lingfeng Su
>
> I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args
> when running Spark jobs on YARN. It may be changed by
> ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" replaced with "").
> I found the final arg in launch_container.sh of the YARN container as:
> --arg '{User: Billy, {Age: 18'
> {code:java}
> exec /bin/bash -c
> "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH"
> $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp
> -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01
> org.apache.spark.deploy.yarn.ApplicationMaster --class
> 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar
> file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18'
> --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1>
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout
> 2>
> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr"
> {code}
> We could make some improvements.
[jira] [Commented] (YARN-6047) Documentation updates for TimelineService v2
[ https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134548#comment-16134548 ] Hadoop QA commented on YARN-6047: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} YARN-5355 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 17s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} YARN-5355 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6047 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882792/YARN-6047-YARN-5355.004.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 27f0b5b065ba 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-5355 / 73ee0d4 | | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17026/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Documentation updates for TimelineService v2 > > > Key: YARN-6047 > URL: https://issues.apache.org/jira/browse/YARN-6047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: documentation, timelineserver >Reporter: Varun Saxena >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Fix For: YARN-5355 > > Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.001.patch, > YARN-6047-YARN-5355.002.patch, YARN-6047-YARN-5355.003.patch, > YARN-6047-YARN-5355.004.patch > >
[jira] [Commented] (YARN-7044) TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails on trunk
[ https://issues.apache.org/jira/browse/YARN-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134442#comment-16134442 ] Lingfeng Su commented on YARN-7044: --- I met the same problem when running the JUnit test in [YARN-7007|https://issues.apache.org/jira/browse/YARN-7007]
> TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails on
> trunk
> --
>
> Key: YARN-7044
> URL: https://issues.apache.org/jira/browse/YARN-7044
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler, test
>Reporter: Wangda Tan
>
> {code}
> Failed
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable
> Failing for the past 2 builds (Since Failed#16961 )
> Took 30 sec.
> Error Message
> test timed out after 30000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 30000 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:330)
> {code}
[jira] [Updated] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6979: -- Summary: Add flag to notify all types of container updates to NM via NodeHeartbeatResponse (was: Add flag to allow all container updates to be initiated via NodeHeartbeatResponse)
> Add flag to notify all types of container updates to NM via
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
> Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch,
> YARN-6979.003.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow
> ALL container updates (increase, decrease, promote and demote) to be
> initiated via the node HB.
> The AM is still free to use the ContainerManagementProtocol's
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is
> very low and the AM needs to update the container as soon as possible. In
> these situations, if the Node HB arrives before the updateContainer API call,
> the call would error out due to a version mismatch, and the AM is required to
> handle it.
[jira] [Reopened] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh reopened YARN-6979: --- Re-opening to fix a class name and add some minor javadocs.
> Add flag to notify all types of container updates to NM via
> NodeHeartbeatResponse
> -
>
> Key: YARN-6979
> URL: https://issues.apache.org/jira/browse/YARN-6979
> Project: Hadoop YARN
> Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6979.001.patch, YARN-6979.002.patch,
> YARN-6979.003.patch
>
>
> Currently, only the Container Resource decrease command is sent to the NM via
> NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow
> ALL container updates (increase, decrease, promote and demote) to be
> initiated via the node HB.
> The AM is still free to use the ContainerManagementProtocol's
> {{updateContainer}} API in cases where, for instance, the Node HB frequency is
> very low and the AM needs to update the container as soon as possible. In
> these situations, if the Node HB arrives before the updateContainer API call,
> the call would error out due to a version mismatch, and the AM is required to
> handle it.
[jira] [Commented] (YARN-7057) FSAppAttempt#getResourceUsage doesn't need to consider resources queued for preemption
[ https://issues.apache.org/jira/browse/YARN-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134487#comment-16134487 ] Karthik Kambatla commented on YARN-7057: The test failure in CapacityScheduler should be unrelated. Existing tests should verify both the behavior of getResourceUsage and canContainerBePreempted. Hence, no new tests.
> FSAppAttempt#getResourceUsage doesn't need to consider resources queued for
> preemption
> --
>
> Key: YARN-7057
> URL: https://issues.apache.org/jira/browse/YARN-7057
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-7057.001.patch
>
>
> FSAppAttempt#getResourceUsage excludes resources that are currently allocated
> to the app but are about to be preempted. This inconsistency shows in the UI
> and can affect scheduling of containers.
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Description: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" replaced with ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements to org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.expandEnvironment():
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var,
    Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);
  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}
was: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" replaced with ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements. Method:
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var,
    Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);
  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}
> yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{"
> and "}}"
> ---
>
> Key: YARN-7062
> URL: https://issues.apache.org/jira/browse/YARN-7062
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: applications
>Reporter: Lingfeng Su
>Assignee: Lingfeng Su
>
> I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args
> when running Spark jobs on YARN.
> It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age:
> 18" ("}}" replaced with "").
> I found the final arg in launch_container.sh of the YARN container as:
> --arg '{User: Billy, {Age: 18'
> {code:java}
> exec /bin/bash -c
>
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Priority: Minor (was: Major) > yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" > and "}}" > --- > > Key: YARN-7062 > URL: https://issues.apache.org/jira/browse/YARN-7062 > Project: Hadoop YARN > Issue Type: Improvement > Components: applications >Reporter: Lingfeng Su >Assignee: Lingfeng Su >Priority: Minor > > I passed a json string like "{User: Billy, {Age: 18}}" to main method args, > when running spark jobs on yarn. > It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: > 18" ("}}" to "") > I found the final arg in launch_container.sh of yarn containers, as: > --arg '{User: Billy, {Age: 18' > {code:java} > exec /bin/bash -c > "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" > $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp > -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 > org.apache.spark.deploy.yarn.ApplicationMaster --class > 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar > file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' > --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout > 2> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" > {code} > We could make some improvements > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.expandEnvironment(): > > {code:java} > @VisibleForTesting > public static String expandEnvironment(String var, > Path containerLogDir) { > var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR, > containerLogDir.toString()); 
> var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR, > File.pathSeparator); > // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced > // as %VAR% and on Linux replaced as "$VAR" > if (Shell.WINDOWS) { > var = var.replaceAll("(\\{\\{)|(\\}\\})", "%"); > } else { > var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$"); > var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, ""); > } > return var; > } > {code}
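For illustration, here is a minimal, self-contained sketch of the Linux branch of the expandEnvironment() logic quoted above (the class and method names here are mine, not Hadoop's): "{{" is rewritten to "$" and "}}" to the empty string, so a literal "}}" inside an application argument is silently stripped, which is exactly the corruption reported for the JSON-like arg.

```java
public class ExpandDemo {
    // Mirrors the Linux branch of ContainerLaunch.expandEnvironment():
    // PARAMETER_EXPANSION_LEFT ("{{") becomes "$",
    // PARAMETER_EXPANSION_RIGHT ("}}") becomes "" (empty string).
    public static String expand(String var) {
        var = var.replace("{{", "$");
        var = var.replace("}}", "");
        return var;
    }

    public static void main(String[] args) {
        // Intended use: {{JAVA_HOME}}/bin/java expands to $JAVA_HOME/bin/java
        System.out.println(expand("{{JAVA_HOME}}/bin/java"));
        // Collateral damage: a JSON-ish argument loses its trailing braces,
        // because "}}" is treated as an expansion marker even with no "{{".
        System.out.println(expand("{User: Billy, {Age: 18}}"));
    }
}
```

The second call shows the reported bug: the replacements are unconditional string substitutions, not a matched-pair expansion, so any argument that happens to contain "}}" is mangled.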
[jira] [Commented] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
[ https://issues.apache.org/jira/browse/YARN-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134454#comment-16134454 ] Hadoop QA commented on YARN-7058: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed with JDK v1.7.0_151 {color} | || || || || {color:brown} Patch 
Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 10 new + 61 unchanged - 9 fixed = 71 total (was 70) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 50s{color} | {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_151. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}159m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_144 Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient | | JDK v1.7.0_151 Timed out junit tests | org.apache.hadoop.yarn.client.TestRMFailover | | | org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling | | | org.apache.hadoop.yarn.client.cli.TestYarnCLI | | | org.apache.hadoop.yarn.client.TestHedgingRequestRMFailoverProxyProvider | | | org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA | | | org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA | | | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient | | | org.apache.hadoop.yarn.client.api.impl.TestYarnClient | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Updated] (YARN-6960) definition of active queue allows idle long-running apps to distort fair shares
[ https://issues.apache.org/jira/browse/YARN-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steven Rand updated YARN-6960: -- Attachment: YARN-6960.002.patch Attaching a slightly modified patch that sets the fair share of an inactive queue equal to its current utilization. This doesn't change the behavior for queues with no running applications, since the fair share before the patch and with the patch are both equal to zero. It does protect AM containers in queues that are inactive by the new definition from being preempted though, since queues containing those AMs are no longer over their fair shares. > definition of active queue allows idle long-running apps to distort fair > shares > --- > > Key: YARN-6960 > URL: https://issues.apache.org/jira/browse/YARN-6960 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.8.1, 3.0.0-alpha4 >Reporter: Steven Rand >Assignee: Steven Rand > Attachments: YARN-6960.001.patch, YARN-6960.002.patch > > > YARN-2026 introduced the notion of only considering active queues when > computing the fair share of each queue. The definition of an active queue is > a queue with at least one runnable app: > {code} > public boolean isActive() { > return getNumRunnableApps() > 0; > } > {code} > One case that this definition of activity doesn't account for is that of > long-running applications that scale dynamically. Such an application might > request many containers when jobs are running, but scale down to very few > containers, or only the AM container, when no jobs are running. > Even when such an application has scaled down to a negligible amount of > demand and utilization, the queue that it's in is still considered to be > active, which defeats the purpose of YARN-2026. For example, consider this > scenario: > 1. We have queues {{root.a}}, {{root.b}}, {{root.c}}, and {{root.d}}, all of > which have the same weight. > 2. 
Queues {{root.a}} and {{root.b}} contain long-running applications that > currently have only one container each (the AM). > 3. An application in queue {{root.c}} starts, and uses the whole cluster > except for the small amount in use by {{root.a}} and {{root.b}}. An > application in {{root.d}} starts, and has a high enough demand to be able to > use half of the cluster. Because all four queues are active, the app in > {{root.d}} can only preempt the app in {{root.c}} up to roughly 25% of the > cluster's resources, while the app in {{root.c}} keeps about 75%. > Ideally in this example, the app in {{root.d}} would be able to preempt the > app in {{root.c}} up to 50% of the cluster, which would be possible if the > idle apps in {{root.a}} and {{root.b}} didn't cause those queues to be > considered active. > One way to address this is to update the definition of an active queue to be > a queue containing 1 or more non-AM containers. This way if all apps in a > queue scale down to only the AM, other queues' fair shares aren't affected. > The benefit of this approach is that it's quite simple. The downside is that > it doesn't account for apps that are idle and using almost no resources, but > still have at least one non-AM container. > There are a couple of other options that seem plausible to me, but they're > much more complicated, and it seems to me that this proposal makes good > progress while adding minimal extra complexity. > Does this seem like a reasonable change? I'm certainly open to better ideas > as well. > Thanks, > Steve -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
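The fair-share arithmetic in the scenario above can be sketched with a simplified equal-weight model (this is illustrative only, not the actual FairScheduler computation; {{fairShare}} and {{FairShareDemo}} are hypothetical names):

```java
public class FairShareDemo {
    // Simplified model: with equal weights, each *active* queue's
    // instantaneous fair share is 1/N of the cluster (the YARN-2026
    // behavior described above, with all resources reduced to one number).
    static double fairShare(int activeQueues) {
        return 1.0 / activeQueues;
    }

    public static void main(String[] args) {
        // Idle AM-only apps keep root.a and root.b "active", so all four
        // queues share: root.d can only preempt root.c down to ~25%.
        System.out.println(fairShare(4)); // 0.25
        // Under the proposed definition (no non-AM containers => inactive),
        // only root.c and root.d count, giving each a 50% fair share.
        System.out.println(fairShare(2)); // 0.5
    }
}
```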
[jira] [Created] (YARN-7060) Consider using delegation token for publishing entities from NM
Varun Saxena created YARN-7060: -- Summary: Consider using delegation token for publishing entities from NM Key: YARN-7060 URL: https://issues.apache.org/jira/browse/YARN-7060 Project: Hadoop YARN Issue Type: Sub-task Reporter: Varun Saxena -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7059) Use app user to publish entities from NM
Varun Saxena created YARN-7059: -- Summary: Use app user to publish entities from NM Key: YARN-7059 URL: https://issues.apache.org/jira/browse/YARN-7059 Project: Hadoop YARN Issue Type: Sub-task Reporter: Varun Saxena Currently the NM login UGI is used to publish entities from the NM to the app collector. This can lead to writes happening in the sub application table. This JIRA is to use the app user to publish entities from the NM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7061) Use token for communication between NM and node collector
[ https://issues.apache.org/jira/browse/YARN-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-7061: --- Description: We can consider using token for communication between NM and node collector once collector runs outside NM. (was: Use token for communication between NM and node collector once collector runs outside NM,) > Use token for communication between NM and node collector > - > > Key: YARN-7061 > URL: https://issues.apache.org/jira/browse/YARN-7061 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineclient, timelinereader, timelineserver >Reporter: Varun Saxena > > We can consider using token for communication between NM and node collector > once collector runs outside NM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134505#comment-16134505 ] Hadoop QA commented on YARN-6979: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 70 unchanged - 1 fixed = 70 total (was 71) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 22s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 35m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6979 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882787/YARN-6979.addendum-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9d5d75afdc26 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8410d86 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17024/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17024/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17024/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Updated] (YARN-6047) Documentation updates for TimelineService v2
[ https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6047: --- Attachment: TimelineServiceV2.html > Documentation updates for TimelineService v2 > > > Key: YARN-6047 > URL: https://issues.apache.org/jira/browse/YARN-6047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: documentation, timelineserver >Reporter: Varun Saxena >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Fix For: YARN-5355 > > Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.001.patch, > YARN-6047-YARN-5355.002.patch, YARN-6047-YARN-5355.003.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6047) Documentation updates for TimelineService v2
[ https://issues.apache.org/jira/browse/YARN-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-6047: --- Attachment: YARN-6047-YARN-5355.004.patch Made some changes in documentation for security and changed the current status section. Also attached the HTML form of documentation > Documentation updates for TimelineService v2 > > > Key: YARN-6047 > URL: https://issues.apache.org/jira/browse/YARN-6047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: documentation, timelineserver >Reporter: Varun Saxena >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Fix For: YARN-5355 > > Attachments: TimelineServiceV2.html, YARN-6047-YARN-5355.001.patch, > YARN-6047-YARN-5355.002.patch, YARN-6047-YARN-5355.003.patch, > YARN-6047-YARN-5355.004.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1613#comment-1613 ] Arun Suresh commented on YARN-6979: --- +1, Will check this in shortly > Add flag to allow all container updates to be initiated via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementPrototol's > {{updateContainer}} API in cases where for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out, due to a version mismatch and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kartheek muthyala updated YARN-6979: Attachment: YARN-6979.addendum-001.patch Attaching the addendum patch for renaming CMgrDecreaseContainersResourceEvent to CMgrUpdateContainersEvent and changing DECREASE_CONTAINERS_RESOURCE to UPDATE_RESOURCE > Add flag to notify all types of container updates to NM via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch, YARN-6979.addendum-001.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementPrototol's > {{updateContainer}} API in cases where for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out, due to a version mismatch and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Description: I passed a json string like "{User: Billy, {Age: 18}}" to main method args, when running spark jobs on yarn. It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" to "") I found the final arg in launch_container.sh of yarn containers, as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements was: I passed a json string like "{User: Billy, {Age: 18}}" to main method args, when running spark jobs on yarn. 
It may exchange by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" to "") I found the final arg in launch_container.sh of yarn containers, as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements > yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" > and "}}" > --- > > Key: YARN-7062 > URL: https://issues.apache.org/jira/browse/YARN-7062 > Project: Hadoop YARN > Issue Type: Improvement > Components: applications >Reporter: Lingfeng Su >Assignee: Lingfeng Su > > I passed a json string like "{User: Billy, {Age: 18}}" to main method args, > when running spark jobs on yarn. 
> It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: > 18" ("}}" to "") > I found the final arg in launch_container.sh of yarn containers, as: > --arg '{User: Billy, {Age: 18' > {code:java} > exec /bin/bash -c > "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" > $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp > -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 > org.apache.spark.deploy.yarn.ApplicationMaster --class > 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar > file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' > --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout > 2> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" > {code} > We could make some improvements -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Description: I passed a json string like "{User: Billy, {Age: 18}}" to main method args, when running spark jobs on yarn. It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" to "") I found the final arg in launch_container.sh of yarn containers, as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements method:
{code:java}
@VisibleForTesting
public static String expandEnvironment(String var, Path containerLogDir) {
  var = var.replace(ApplicationConstants.LOG_DIR_EXPANSION_VAR,
      containerLogDir.toString());
  var = var.replace(ApplicationConstants.CLASS_PATH_SEPARATOR,
      File.pathSeparator);
  // replace parameter expansion marker. e.g. {{VAR}} on Windows is replaced
  // as %VAR% and on Linux replaced as "$VAR"
  if (Shell.WINDOWS) {
    var = var.replaceAll("(\\{\\{)|(\\}\\})", "%");
  } else {
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_LEFT, "$");
    var = var.replace(ApplicationConstants.PARAMETER_EXPANSION_RIGHT, "");
  }
  return var;
}
{code}
was: I passed a json string like "{User: Billy, {Age: 18}}" to main method args, when running spark jobs on yarn. It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" to "") I found the final arg in launch_container.sh of yarn containers, as: --arg '{User: Billy, {Age: 18' {code:java} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements > yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" > and "}}" > --- > > Key: YARN-7062 > URL: https://issues.apache.org/jira/browse/YARN-7062 > Project: Hadoop YARN > Issue Type: Improvement > Components: applications >Reporter: Lingfeng Su >Assignee: Lingfeng Su > > I passed a json string like "{User: Billy, {Age: 18}}" to main method args, > when running spark jobs on yarn. 
> It was changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: > 18" ("}}" to "") > I found the final arg in launch_container.sh of yarn containers, as: > --arg '{User: Billy, {Age: 18' > {code:java} > exec /bin/bash -c > "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" > $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp > -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 > org.apache.spark.deploy.yarn.ApplicationMaster --class > 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar > file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' > --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout > 2> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" > {code} > We could make some improvements > method: >
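The mangling reported above follows directly from the Linux branch of the quoted expandEnvironment() method, where "{{" is the parameter-expansion left marker (replaced with "$") and "}}" the right marker (replaced with ""). A minimal standalone reproduction (not the actual Hadoop class; marker constants inlined for brevity):

```java
public class ExpandDemo {
    // Mirrors only the Linux marker-replacement branch of the quoted
    // ContainerLaunch.expandEnvironment().
    static String expand(String var) {
        var = var.replace("{{", "$"); // PARAMETER_EXPANSION_LEFT
        var = var.replace("}}", "");  // PARAMETER_EXPANSION_RIGHT
        return var;
    }

    public static void main(String[] args) {
        String arg = "{User: Billy, {Age: 18}}";
        // The trailing "}}" is stripped, corrupting the JSON argument:
        System.out.println(expand(arg)); // prints {User: Billy, {Age: 18
    }
}
```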
[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues
[ https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134517#comment-16134517 ] Arun Suresh commented on YARN-6475: --- [~templedf] / [~soumabrata], can we cherry-pick this to branch-2 as well? Cherry-picking changes from trunk to branch-2 for classes like {{NodeStatusUpdaterImpl}} is becoming harder now since changes like {{StatusUpdaterRunnable}} are not in branch-2, and we end up doing a lot of copy-paste. > Fix some long function checkstyle issues > > > Key: YARN-6475 > URL: https://issues.apache.org/jira/browse/YARN-6475 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Soumabrata Chakraborty >Priority: Trivial > Labels: newbie > Fix For: 3.0.0-alpha4 > > Attachments: YARN-6475.001.patch > > > I am talking about these two: > {code} > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441: > @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength] > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159: > @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134540#comment-16134540 ] Varun Saxena commented on YARN-6861: BTW, should we return the app user id as well in INFO? There is no way for the end user to differentiate right now. For the Tez LLAP use case though, all AMs would start with the same user, I assume. > Reader API for sub application entities > --- > > Key: YARN-6861 > URL: https://issues.apache.org/jira/browse/YARN-6861 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: atsv2-subapp, yarn-5355-merge-blocker > Attachments: YARN-6861-YARN-5355.001.patch, > YARN-6861-YARN-5355.002.patch, YARN-6861-YARN-5355.003.patch, > YARN-6861-YARN-5355.004.patch, YARN-6861-YARN-5355.005.patch, > YARN-6861-YARN-5355.006.patch > > > YARN-6733 and YARN-6734 writes data into sub application table. There should > be a way to read those entities. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134472#comment-16134472 ] Hudson commented on YARN-6979: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12215 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12215/]) YARN-6979. Add flag to notify all types of container updates to NM via (arun suresh: rev 8410d862d3a72740f461ef91dddb5325955e1ca5) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestYarnServerApiClasses.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeUpdateContainerEvent.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeEventType.java * (edit) hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestIncreaseAllocationExpirer.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java * (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeDecreaseContainerEvent.java * (edit) hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NodeHeartbeatResponse.java > Add flag to notify all types of container updates to NM via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun
[jira] [Created] (YARN-7061) Use token for communication between NM and node collector
Varun Saxena created YARN-7061: -- Summary: Use token for communication between NM and node collector Key: YARN-7061 URL: https://issues.apache.org/jira/browse/YARN-7061 Project: Hadoop YARN Issue Type: Sub-task Reporter: Varun Saxena Use token for communication between NM and node collector once the collector runs outside the NM. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134512#comment-16134512 ] Hudson commented on YARN-6979: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12216 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12216/]) YARN-6979. [Addendum patch] Fixed classname and added javadocs. (arun suresh: rev 7a82d7bcea8124e1b65c275fac15bf2047d17471) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerManagerEventType.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java * (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrDecreaseContainersResourceEvent.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrUpdateContainersEvent.java > Add flag to notify all types of container updates to NM via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch, YARN-6979.addendum-001.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. 
This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementProtocol's > {{updateContainer}} API in cases where, for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out due to a version mismatch, and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
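The race described above (a heartbeat-driven update landing before the AM's updateContainer call) can be modeled with a toy versioned record. The classes below are hypothetical illustrations, not YARN's actual protocol records:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: a container whose update version is bumped by whichever
// path (node heartbeat or AM call) applies an update first.
public class ContainerUpdateModel {
    public static class VersionMismatchException extends RuntimeException {
        public VersionMismatchException(String m) { super(m); }
    }

    private final AtomicInteger version = new AtomicInteger(0);

    public int currentVersion() { return version.get(); }

    // Node-heartbeat path: the NM learns of the update and the version moves on.
    public void applyViaHeartbeat() { version.incrementAndGet(); }

    // AM path: succeeds only if the AM's view of the version is still current.
    public void updateContainer(int expectedVersion) {
        if (!version.compareAndSet(expectedVersion, expectedVersion + 1)) {
            throw new VersionMismatchException(
                "expected version " + expectedVersion + " but was " + version.get());
        }
    }

    public static void main(String[] args) {
        ContainerUpdateModel c = new ContainerUpdateModel();
        int amView = c.currentVersion(); // AM read version 0
        c.applyViaHeartbeat();           // heartbeat lands first, version -> 1
        try {
            c.updateContainer(amView);   // AM's stale call now fails
        } catch (VersionMismatchException e) {
            // Per the description, the AM is expected to handle this,
            // e.g. refresh its view of the container and retry.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This is only a sketch of why the AM must be prepared for the error path; the real protocol carries the version inside the container token.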
[jira] [Created] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
Lingfeng Su created YARN-7062: - Summary: yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}" Key: YARN-7062 URL: https://issues.apache.org/jira/browse/YARN-7062 Project: Hadoop YARN Issue Type: Improvement Components: applications Reporter: Lingfeng Su Assignee: Lingfeng Su Priority: Minor I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" becomes ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:bash} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements here. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7062) yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" and "}}"
[ https://issues.apache.org/jira/browse/YARN-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lingfeng Su updated YARN-7062: -- Description: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" becomes ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:bash} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements here. was: I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args when running Spark jobs on YARN. 
It may be changed by ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" becomes ""). I found the final arg in launch_container.sh of the YARN container as: --arg '{User: Billy, {Age: 18' {code:bash} exec /bin/bash -c "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout 2> /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" {code} We could make some improvements here. > yarn job args changed by ContainerLaunch.expandEnvironment(), such as "{{" > and "}}" > --- > > Key: YARN-7062 > URL: https://issues.apache.org/jira/browse/YARN-7062 > Project: Hadoop YARN > Issue Type: Improvement > Components: applications >Reporter: Lingfeng Su >Assignee: Lingfeng Su >Priority: Minor > > I passed a JSON string like "{User: Billy, {Age: 18}}" to the main method args, > when running Spark jobs on YARN. 
It may be changed by > ContainerLaunch.expandEnvironment() to "{User: Billy, {Age: 18" ("}}" becomes "") > I found the final arg in launch_container.sh of the YARN container as: > --arg '{User: Billy, {Age: 18' > {code:bash} > exec /bin/bash -c > "LD_LIBRARY_PATH="$HADOOP_COMMON_HOME/../../../CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/lib/native:$LD_LIBRARY_PATH" > $JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp > -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01 > org.apache.spark.deploy.yarn.ApplicationMaster --class > 'org.apache.spark.examples.sql.hive.HiveFromSpark' --jar > file:/opt/spark-submit/spark_sql_test-1.0.jar --arg '{User: Billy, {Age: 18' > --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stdout > 2> > /var/log/hadoop-yarn/container/application_1503214867081_0015/container_1503214867081_0015_01_01/stderr" > {code} > We could make some improvements here. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
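For illustration, the substitution reported above can be mimicked with a toy replacement. This is not the actual ContainerLaunch.expandEnvironment implementation; it only reproduces the reported effect, where "{{" and "}}" are treated as parameter-expansion markers and a trailing "}}" inside a JSON argument is consumed:

```java
// Illustration only: mimics the reported behavior of the expansion step.
public class ExpandDemo {
    // Simplified stand-in: "{{" becomes the shell variable prefix and
    // "}}" its suffix, which on Linux effectively disappears.
    public static String expand(String arg) {
        return arg.replace("{{", "$").replace("}}", "");
    }

    public static void main(String[] args) {
        // Intended use: expanding a parameter marker works as designed.
        System.out.println(expand("{{JAVA_HOME}}/bin/java")); // $JAVA_HOME/bin/java

        // But a user arg with nested closing braces is corrupted,
        // exactly as reported in this issue.
        String arg = "{User: Billy, {Age: 18}}";
        System.out.println(expand(arg)); // {User: Billy, {Age: 18
    }
}
```

The demo makes the problem concrete: the expansion cannot distinguish a parameter marker from literal braces in user data, which is what any improvement here would need to address (e.g. an escape syntax or restricting expansion to known markers).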
[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134701#comment-16134701 ] Yeliang Cang commented on YARN-7047: Thanks [~ajisakaa] for the review! Submitted patch 004 to undo the changes in WindowsSecureContainerExecutor.java! > Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch, YARN-7047.004.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134685#comment-16134685 ] kartheek muthyala edited comment on YARN-6979 at 8/21/17 4:08 AM: -- Thank you [~asuresh] for the review and committing this to trunk. was (Author: kartheek): Thank you Arun for the review and committing this to trunk. > Add flag to notify all types of container updates to NM via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch, YARN-6979.addendum-001.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementProtocol's > {{updateContainer}} API in cases where, for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out due to a version mismatch, and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to notify all types of container updates to NM via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134685#comment-16134685 ] kartheek muthyala commented on YARN-6979: - Thank you Arun for the review and committing this to trunk. > Add flag to notify all types of container updates to NM via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch, YARN-6979.addendum-001.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementProtocol's > {{updateContainer}} API in cases where, for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out due to a version mismatch, and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6384) Add configuration to set max cpu usage when strict-resource-usage is false with cgroups
[ https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengkai updated YARN-6384: -- Attachment: YARN-6384-0.patch > Add configuration to set max cpu usage when strict-resource-usage is false > with cgroups > -- > > Key: YARN-6384 > URL: https://issues.apache.org/jira/browse/YARN-6384 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: dengkai > Attachments: YARN-6384-0.patch > > > When using cgroups on yarn, if > yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is > false, a user may get much more CPU time than expected based on the vcores. > There should be an upper limit even when resource usage is not strict, for > example a percentage by which a user can exceed what the vcores promise. I think it's > important in a shared cluster. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
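The cap proposed above maps naturally onto the cgroups CFS bandwidth controls (cpu.cfs_period_us / cpu.cfs_quota_us), where a quota of -1 means unlimited. A minimal sketch of the quota arithmetic, assuming a hypothetical over-use percentage setting (the helper and property semantics are illustrative, not YARN's actual configuration):

```java
// Sketch of the quota arithmetic behind the proposed cap.
public class CpuQuota {
    // period: CFS scheduling period in microseconds (commonly 100000).
    // Returns the cpu.cfs_quota_us value that lets a container use at most
    // (vcores * overusePercent/100) cores, or -1 (cgroups "unlimited")
    // when no cap is configured.
    public static long quotaMicros(int vcores, int overusePercent, long period) {
        if (overusePercent <= 0) {
            return -1L; // no cap configured: keep cgroups unlimited
        }
        return period * vcores * overusePercent / 100;
    }

    public static void main(String[] args) {
        // A container with 2 vcores allowed up to 150% of its nominal share:
        System.out.println(quotaMicros(2, 150, 100_000L)); // 300000
    }
}
```

The NodeManager would write such a value into the container's cgroup; 100% reduces to the strict-resource-usage behavior, and larger percentages allow bounded over-use of idle CPU.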
[jira] [Commented] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134640#comment-16134640 ] Rohith Sharma K S commented on YARN-6861: - bq. should we return the app user id as well in INFO By default, the user who submitted the application is returned as part of the FROM_ID info as well. bq. For Tez LLAP use case though, all AMs' would start with same user, I assume. The AM starts with the same user, but it publishes data as the doAs user. The whole DAG execution happens with one doAs user. > Reader API for sub application entities > --- > > Key: YARN-6861 > URL: https://issues.apache.org/jira/browse/YARN-6861 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: atsv2-subapp, yarn-5355-merge-blocker > Attachments: YARN-6861-YARN-5355.001.patch, > YARN-6861-YARN-5355.002.patch, YARN-6861-YARN-5355.003.patch, > YARN-6861-YARN-5355.004.patch, YARN-6861-YARN-5355.005.patch, > YARN-6861-YARN-5355.006.patch > > > YARN-6733 and YARN-6734 write data into the sub application table. There should > be a way to read those entities. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yeliang Cang updated YARN-7047: --- Attachment: YARN-7047.004.patch > Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch, YARN-7047.004.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7035) Add health checker to ResourceManager
[ https://issues.apache.org/jira/browse/YARN-7035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134636#comment-16134636 ] sandflee commented on YARN-7035: Thanks [~yufeigu], YARN-6061 is very useful for handling critical thread exits; for deadlocks, we use ThreadMXBean to detect them. > Add health checker to ResourceManager > - > > Key: YARN-7035 > URL: https://issues.apache.org/jira/browse/YARN-7035 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: sandflee > > The RM may become unhealthy but stay alive, for example when the scheduling thread > exits or a deadlock happens. It seems useful to add a health-checker service; if > the check fails, let the RM exit. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
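The ThreadMXBean-based detection mentioned in the comment can be sketched with the standard java.lang.management API; wiring it into an actual RM health-checker service (periodic checks, controlled exit) is an assumption beyond what the issue specifies:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch of in-JVM deadlock detection for a health-checker service.
public class DeadlockProbe {
    // Returns true if the JVM currently has threads deadlocked on
    // monitors or ownable synchronizers.
    public static boolean hasDeadlock() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads(); // null when none
        return ids != null && ids.length > 0;
    }

    public static void main(String[] args) {
        if (hasDeadlock()) {
            // A health checker could log the offending stacks before
            // letting the RM exit.
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.getThreadInfo(mx.findDeadlockedThreads())) {
                System.err.println(info);
            }
        } else {
            System.out.println("no deadlock detected");
        }
    }
}
```

A periodic task running hasDeadlock(), plus a liveness check on critical threads (covered by YARN-6061), would give the RM the "unhealthy but alive" detection the issue asks for.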
[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134690#comment-16134690 ] Akira Ajisaka commented on YARN-7047: - bq. LOG in "IOUtils.cleanup(LOG, os);" is defined in org.apache.hadoop.fs.FileSystem in hadoop-common module, which can be called by other modules. So I leave it unchanged. Okay. Would you undo the change in WindowsSecureContainerExecutor.java in the patch? It looks like a no-op. I'm +1 if that is addressed. > Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134727#comment-16134727 ] Hadoop QA commented on YARN-7047: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 24 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 16 unchanged - 6 fixed = 16 total (was 22) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 4 new + 1513 unchanged - 11 fixed = 1517 total (was 1524) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-7047 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882812/YARN-7047.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c562619d72b5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7a82d7b | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/17027/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17027/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/17027/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
[jira] [Comment Edited] (YARN-7047) Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager
[ https://issues.apache.org/jira/browse/YARN-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16132989#comment-16132989 ] Yeliang Cang edited comment on YARN-7047 at 8/20/17 11:06 AM: -- In WindowsSecureContainerExecutor.java,
{code:java}
@Override
protected OutputStream createOutputStreamWithMode(Path f, boolean append,
    FsPermission permission) throws IOException {
  if (LOG.isDebugEnabled()) {
    LOG.debug(String.format("EFS:createOutputStreamWithMode: %s %b %s",
        f, append, permission));
  }
  boolean success = false;
  OutputStream os = Native.Elevated.create(f, append);
  try {
    setPermission(f, permission);
    success = true;
    return os;
  } finally {
    if (!success) {
      IOUtils.cleanup(LOG, os);
    }
  }
}
{code}
LOG in "IOUtils.cleanup(LOG, os);" is defined in org.apache.hadoop.fs.FileSystem in the hadoop-common module, which can be called by other modules, so I leave it unchanged. [~ajisakaa], [~vincent he], what is your opinion? Would you help me review the patch? was (Author: cyl): In WindowsSecureContainerExecutor.java,
{code:java}
@Override
protected OutputStream createOutputStreamWithMode(Path f, boolean append,
    FsPermission permission) throws IOException {
  if (LOG.isDebugEnabled()) {
    LOG.debug(String.format("EFS:createOutputStreamWithMode: %s %b %s",
        f, append, permission));
  }
  boolean success = false;
  OutputStream os = Native.Elevated.create(f, append);
  try {
    setPermission(f, permission);
    success = true;
    return os;
  } finally {
    if (!success) {
      IOUtils.cleanup(LOG, os);
    }
  }
}
{code}
LOG in "IOUtils.cleanup(LOG, os);" is defined in org.apache.hadoop.fs.FileSystem in the hadoop-common module, which can be called by other modules, so I leave it unchanged. [~vincent he], what is your opinion? Would you help me review the patch? 
> Moving logging APIs over to slf4j in hadoop-yarn-server-nodemanager > --- > > Key: YARN-7047 > URL: https://issues.apache.org/jira/browse/YARN-7047 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha4 >Reporter: Yeliang Cang >Assignee: Yeliang Cang > Attachments: YARN-7047.001.patch, YARN-7047.002.patch, > YARN-7047.003.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134388#comment-16134388 ] Hadoop QA commented on YARN-6979: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 17s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 9s{color} | {color:green} root: The patch generated 0 new + 548 unchanged - 10 fixed = 548 total (was 558) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 32s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 52s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation | | | hadoop.yarn.sls.appmaster.TestAMSimulator | | |
[jira] [Created] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
Kousuke Saruta created YARN-7058: Summary: Add null check in AMRMClientImpl#getMatchingRequest Key: YARN-7058 URL: https://issues.apache.org/jira/browse/YARN-7058 Project: Hadoop YARN Issue Type: Bug Components: client Affects Versions: 2.9.0 Reporter: Kousuke Saruta As of YARN-4889, the behavior of AMRMClientImpl#getMatchingRequests has changed slightly: the method now throws an NPE if it is called before addContainerRequest has been called. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
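The guard being proposed, returning safely when getMatchingRequests is called before any addContainerRequest, can be sketched as follows. This is a standalone toy, not the actual AMRMClientImpl code; the class and the string-keyed table are simplified stand-ins for the real per-(priority, resourceName) request structures:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: the lookup that YARN-4889 made nullable is
// guarded before dereferencing, so callers get an empty list instead
// of an NPE when no request has been added yet.
public class MatchingRequestSketch {
    // Simulates the request table that is only populated once
    // addContainerRequest has been called.
    private final Map<String, List<String>> remoteRequestsTable = new HashMap<>();

    public List<String> getMatchingRequests(String location) {
        List<String> reqs = remoteRequestsTable.get(location);
        // Null check: before any addContainerRequest call there is no
        // entry for this location, so return an empty list.
        return (reqs == null) ? Collections.<String>emptyList() : reqs;
    }

    public void addContainerRequest(String location, String request) {
        remoteRequestsTable
            .computeIfAbsent(location, k -> new ArrayList<>())
            .add(request);
    }

    public static void main(String[] args) {
        MatchingRequestSketch client = new MatchingRequestSketch();
        // Safe even though nothing has been added yet.
        System.out.println(client.getMatchingRequests("*").size()); // 0
        client.addContainerRequest("*", "2048MB,1vcore");
        System.out.println(client.getMatchingRequests("*").size()); // 1
    }
}
```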
[jira] [Assigned] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
[ https://issues.apache.org/jira/browse/YARN-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kousuke Saruta reassigned YARN-7058: Assignee: Kousuke Saruta > Add null check in AMRMClientImpl#getMatchingRequest > --- > > Key: YARN-7058 > URL: https://issues.apache.org/jira/browse/YARN-7058 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 2.9.0 >Reporter: Kousuke Saruta >Assignee: Kousuke Saruta > Attachments: YARN-7058-branch-2.001.patch > > > As of YARN-4889, the behavior of AMRMClientImpl#getMatchingRequests has > slightly changed. > After YARN-4889, the method will throw NPE if the method is called before > calling addContainerRequest. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7058) Add null check in AMRMClientImpl#getMatchingRequest
[ https://issues.apache.org/jira/browse/YARN-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kousuke Saruta updated YARN-7058: - Attachment: YARN-7058-branch-2.001.patch > Add null check in AMRMClientImpl#getMatchingRequest > --- > > Key: YARN-7058 > URL: https://issues.apache.org/jira/browse/YARN-7058 > Project: Hadoop YARN > Issue Type: Bug > Components: client >Affects Versions: 2.9.0 >Reporter: Kousuke Saruta > Attachments: YARN-7058-branch-2.001.patch > > > As of YARN-4889, the behavior of AMRMClientImpl#getMatchingRequests has > slightly changed. > After YARN-4889, the method will throw NPE if the method is called before > calling addContainerRequest. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134410#comment-16134410 ] kartheek muthyala commented on YARN-6979: - [~asuresh], I have fixed the unit test according to your suggestion. The failing unit tests in the Hadoop QA report seem to be unrelated to this patch and are failing in trunk too. I have also taken care of the checkstyle and whitespace issues in the latest patch. > Add flag to allow all container updates to be initiated via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementProtocol's > {{updateContainer}} API in cases where, for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out due to a version mismatch, and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
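If the flag described in this JIRA lands as an RM configuration toggle, enabling it would presumably look like a yarn-site.xml entry along these lines. The property name here is an assumption drawn from the JIRA discussion, not a committed configuration key; check the merged patch for the actual name and default:

```xml
<!-- Hypothetical yarn-site.xml entry; the exact key is an assumption. -->
<property>
  <name>yarn.resourcemanager.auto-update.containers</name>
  <value>true</value>
  <description>When true, the RM sends ALL container updates (increase,
  decrease, promote, demote) to the NM via NodeHeartbeatResponse instead
  of relying on the AM to call the NM's updateContainer API.</description>
</property>
```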
[jira] [Updated] (YARN-6960) definition of active queue allows idle long-running apps to distort fair shares
[ https://issues.apache.org/jira/browse/YARN-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steven Rand updated YARN-6960: -- Attachment: YARN-6960.001.patch > definition of active queue allows idle long-running apps to distort fair > shares > --- > > Key: YARN-6960 > URL: https://issues.apache.org/jira/browse/YARN-6960 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.8.1, 3.0.0-alpha4 >Reporter: Steven Rand >Assignee: Steven Rand > Attachments: YARN-6960.001.patch > > > YARN-2026 introduced the notion of only considering active queues when > computing the fair share of each queue. The definition of an active queue is > a queue with at least one runnable app: > {code} > public boolean isActive() { > return getNumRunnableApps() > 0; > } > {code} > One case that this definition of activity doesn't account for is that of > long-running applications that scale dynamically. Such an application might > request many containers when jobs are running, but scale down to very few > containers, or only the AM container, when no jobs are running. > Even when such an application has scaled down to a negligible amount of > demand and utilization, the queue that it's in is still considered to be > active, which defeats the purpose of YARN-2026. For example, consider this > scenario: > 1. We have queues {{root.a}}, {{root.b}}, {{root.c}}, and {{root.d}}, all of > which have the same weight. > 2. Queues {{root.a}} and {{root.b}} contain long-running applications that > currently have only one container each (the AM). > 3. An application in queue {{root.c}} starts, and uses the whole cluster > except for the small amount in use by {{root.a}} and {{root.b}}. An > application in {{root.d}} starts, and has a high enough demand to be able to > use half of the cluster. 
Because all four queues are active, the app in > {{root.d}} can only preempt the app in {{root.c}} up to roughly 25% of the > cluster's resources, while the app in {{root.c}} keeps about 75%. > Ideally in this example, the app in {{root.d}} would be able to preempt the > app in {{root.c}} up to 50% of the cluster, which would be possible if the > idle apps in {{root.a}} and {{root.b}} didn't cause those queues to be > considered active. > One way to address this is to update the definition of an active queue to be > a queue containing 1 or more non-AM containers. This way if all apps in a > queue scale down to only the AM, other queues' fair shares aren't affected. > The benefit of this approach is that it's quite simple. The downside is that > it doesn't account for apps that are idle and using almost no resources, but > still have at least one non-AM container. > There are a couple of other options that seem plausible to me, but they're > much more complicated, and it seems to me that this proposal makes good > progress while adding minimal extra complexity. > Does this seem like a reasonable change? I'm certainly open to better ideas > as well. > Thanks, > Steve -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6960) definition of active queue allows idle long-running apps to distort fair shares
[ https://issues.apache.org/jira/browse/YARN-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134416#comment-16134416 ] Steven Rand commented on YARN-6960: --- [~dan...@cloudera.com], I've uploaded a patch proposing a new definition of queue activity. It also needs tests, but I wanted to first see how the community feels about this change, and revise it as necessary based on feedback before writing tests for it. My understanding of a queue's demand is that it's the cumulative current usage of all apps in the queue plus the cumulative requested additional resources for all apps in the queue. Therefore if no apps are requesting additional resources, the demand will be equal to the usage of the AMs. Then, as soon as any app attempts to do anything, its demand will be greater than the AM usage, and the queue will become active. I've tested this patch and it seems to have the desired effect. Going back to the example in the description, {{root.c}} and {{root.d}} have equal fair shares despite the idle applications in {{root.a}} and {{root.b}}. > definition of active queue allows idle long-running apps to distort fair > shares > --- > > Key: YARN-6960 > URL: https://issues.apache.org/jira/browse/YARN-6960 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.8.1, 3.0.0-alpha4 >Reporter: Steven Rand >Assignee: Steven Rand > Attachments: YARN-6960.001.patch > > > YARN-2026 introduced the notion of only considering active queues when > computing the fair share of each queue. The definition of an active queue is > a queue with at least one runnable app: > {code} > public boolean isActive() { > return getNumRunnableApps() > 0; > } > {code} > One case that this definition of activity doesn't account for is that of > long-running applications that scale dynamically. 
Such an application might > request many containers when jobs are running, but scale down to very few > containers, or only the AM container, when no jobs are running. > Even when such an application has scaled down to a negligible amount of > demand and utilization, the queue that it's in is still considered to be > active, which defeats the purpose of YARN-2026. For example, consider this > scenario: > 1. We have queues {{root.a}}, {{root.b}}, {{root.c}}, and {{root.d}}, all of > which have the same weight. > 2. Queues {{root.a}} and {{root.b}} contain long-running applications that > currently have only one container each (the AM). > 3. An application in queue {{root.c}} starts, and uses the whole cluster > except for the small amount in use by {{root.a}} and {{root.b}}. An > application in {{root.d}} starts, and has a high enough demand to be able to > use half of the cluster. Because all four queues are active, the app in > {{root.d}} can only preempt the app in {{root.c}} up to roughly 25% of the > cluster's resources, while the app in {{root.c}} keeps about 75%. > Ideally in this example, the app in {{root.d}} would be able to preempt the > app in {{root.c}} up to 50% of the cluster, which would be possible if the > idle apps in {{root.a}} and {{root.b}} didn't cause those queues to be > considered active. > One way to address this is to update the definition of an active queue to be > a queue containing 1 or more non-AM containers. This way if all apps in a > queue scale down to only the AM, other queues' fair shares aren't affected. > The benefit of this approach is that it's quite simple. The downside is that > it doesn't account for apps that are idle and using almost no resources, but > still have at least one non-AM container. > There are a couple of other options that seem plausible to me, but they're > much more complicated, and it seems to me that this proposal makes good > progress while adding minimal extra complexity. 
> Does this seem like a reasonable change? I'm certainly open to better ideas > as well. > Thanks, > Steve -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
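The activity test discussed above, where a queue stays inactive while its demand does not exceed what its AM containers alone are using, can be sketched roughly like this. This is a standalone toy with illustrative names, not the actual FSLeafQueue code or the attached patch:

```java
// Illustrative sketch of the proposed queue-activity check.
public class QueueActivitySketch {

    // Old definition was simply runnableApps > 0. The proposed refinement
    // also requires demand beyond the AM-only footprint, so idle
    // long-running apps (AM container only, no outstanding asks) don't
    // keep their queue "active" and distort other queues' fair shares.
    public static boolean isActive(long demandMb, long amUsageMb,
                                   int runnableApps) {
        return runnableApps > 0 && demandMb > amUsageMb;
    }

    public static void main(String[] args) {
        // Idle long-running app: one AM container, no outstanding asks,
        // so demand equals AM usage.
        System.out.println(isActive(1024, 1024, 1)); // false
        // Same app once a job starts requesting containers.
        System.out.println(isActive(8192, 1024, 1)); // true
    }
}
```

With this check, queues like {{root.a}} and {{root.b}} in the example would drop out of the fair-share computation while their apps are idle.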
[jira] [Commented] (YARN-7043) Cleanup ResourceProfileManager
[ https://issues.apache.org/jira/browse/YARN-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134355#comment-16134355 ] Hadoop QA commented on YARN-7043:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| YARN-3926 Compile Tests ||
| 0 | mvndep | 0m 46s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 42s | YARN-3926 passed |
| +1 | compile | 10m 18s | YARN-3926 passed |
| +1 | checkstyle | 1m 0s | YARN-3926 passed |
| +1 | mvnsite | 2m 34s | YARN-3926 passed |
| +1 | findbugs | 4m 18s | YARN-3926 passed |
| +1 | javadoc | 1m 58s | YARN-3926 passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 57s | the patch passed |
| +1 | compile | 6m 9s | the patch passed |
| +1 | javac | 6m 9s | the patch passed |
| -0 | checkstyle | 1m 7s | hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 156 unchanged - 4 fixed = 158 total (was 160) |
| +1 | mvnsite | 2m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 6m 8s | the patch passed |
| +1 | javadoc | 2m 15s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 43s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 2m 57s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 48m 4s | hadoop-yarn-server-resourcemanager in the patch failed. |
| -1 | unit | 21m 36s | hadoop-yarn-client in the patch failed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 140m 8s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| | hadoop.yarn.client.api.impl.TestAMRMClient |
| Timed out junit tests | org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7043 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882764/YARN-7043.YARN-3926.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 91ba27629468 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
[jira] [Commented] (YARN-6861) Reader API for sub application entities
[ https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134333#comment-16134333 ] Hadoop QA commented on YARN-6861:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| YARN-5355 Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 0s | YARN-5355 passed |
| +1 | compile | 1m 49s | YARN-5355 passed |
| +1 | checkstyle | 0m 38s | YARN-5355 passed |
| +1 | mvnsite | 1m 4s | YARN-5355 passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests |
| +1 | findbugs | 0m 57s | YARN-5355 passed |
| +1 | javadoc | 0m 41s | YARN-5355 passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 1m 44s | the patch passed |
| +1 | javac | 1m 44s | the patch passed |
| -0 | checkstyle | 0m 36s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 6 new + 35 unchanged - 0 fixed = 41 total (was 35) |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests |
| +1 | findbugs | 1m 9s | the patch passed |
| +1 | javadoc | 0m 36s | the patch passed |
|| Other Tests ||
| +1 | unit | 0m 47s | hadoop-yarn-server-timelineservice in the patch passed. |
| +1 | unit | 0m 21s | hadoop-yarn-server-timelineservice-hbase in the patch passed. |
| +1 | unit | 4m 53s | hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 38m 23s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6861 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882746/YARN-6861-YARN-5355.006.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 341133283dcd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar
[jira] [Updated] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse
[ https://issues.apache.org/jira/browse/YARN-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kartheek muthyala updated YARN-6979: Attachment: YARN-6979.003.patch On top of the last version of the patch, this patch enhances testContainerAutoUpdateContainer with promotion, demotion, increase and decrease logic > Add flag to allow all container updates to be initiated via > NodeHeartbeatResponse > - > > Key: YARN-6979 > URL: https://issues.apache.org/jira/browse/YARN-6979 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala > Attachments: YARN-6979.001.patch, YARN-6979.002.patch, > YARN-6979.003.patch > > > Currently, only the Container Resource decrease command is sent to the NM via > NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow > ALL container updates (increase, decrease, promote and demote) to be > initiated via node HB. > The AM is still free to use the ContainerManagementPrototol's > {{updateContainer}} API in cases where for instance, the Node HB frequency is > very low and the AM needs to update the container as soon as possible. In > these situations, if the Node HB arrives before the updateContainer API call, > the call would error out, due to a version mismatch and the AM is required to > handle it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7043) Cleanup ResourceProfileManager
[ https://issues.apache.org/jira/browse/YARN-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7043: -- Attachment: YARN-7043.YARN-3926.004.patch Fixed jenkins. > Cleanup ResourceProfileManager > -- > > Key: YARN-7043 > URL: https://issues.apache.org/jira/browse/YARN-7043 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Critical > Attachments: YARN-7043.YARN-3926.001.patch, > YARN-7043.YARN-3926.002.patch, YARN-7043.YARN-3926.003.patch, > YARN-7043.YARN-3926.004.patch > > > Several cleanups we can do for ResourceProfileManager: > 1) Move GetResourceTypesInfo from profile manager to ResourceUtils. > 2) Move logics to check profile enabled, etc. from ClientRMService to > ResourceUtils. > 3) Throw exception when resource profile is disabled and method accessed by > other modules. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
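Cleanup item 3 in the description above (throw when resource profiles are disabled and a method is accessed by other modules) might look roughly like this. The class name and the map-based profile representation are illustrative stand-ins, not the actual ResourceProfilesManager API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: guard profile-manager entry points behind the
// enabled flag so callers fail fast instead of getting stale/empty data.
public class ResourceProfilesSketch {
    private final boolean profilesEnabled;
    private final Map<String, Map<String, Long>> profiles = new HashMap<>();

    public ResourceProfilesSketch(boolean enabled) {
        this.profilesEnabled = enabled;
        Map<String, Long> minimum = new HashMap<>();
        minimum.put("memory-mb", 1024L);
        minimum.put("vcores", 1L);
        profiles.put("minimum", minimum);
    }

    public Map<String, Long> getProfile(String name) {
        // Cleanup item 3: reject access while profiles are disabled.
        if (!profilesEnabled) {
            throw new IllegalStateException(
                "Resource profiles are disabled; enable them before querying.");
        }
        return profiles.get(name);
    }

    public static void main(String[] args) {
        ResourceProfilesSketch mgr = new ResourceProfilesSketch(true);
        System.out.println(mgr.getProfile("minimum").get("memory-mb")); // 1024
        try {
            new ResourceProfilesSketch(false).getProfile("minimum");
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```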