[jira] [Commented] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965217#comment-16965217 ] Hadoop QA commented on YARN-9942:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 42s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 29s | trunk passed |
| +1 | compile | 7m 24s | trunk passed |
| +1 | checkstyle | 1m 23s | trunk passed |
| +1 | mvnsite | 1m 49s | trunk passed |
| +1 | shadedclient | 15m 19s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 7s | trunk passed |
| +1 | javadoc | 1m 26s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 6m 43s | the patch passed |
| +1 | javac | 6m 43s | the patch passed |
| +1 | checkstyle | 1m 19s | the patch passed |
| +1 | mvnsite | 1m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 29s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 18s | the patch passed |
| +1 | javadoc | 1m 22s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 58s | hadoop-yarn-api in the patch passed. |
| -1 | unit | 96m 57s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 177m 40s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9942 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984670/YARN-9942.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 911bfd88bb86 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / de6b8b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit |
[jira] [Updated] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-9942:
Attachment: YARN-9942.002.patch

> Reset overcommit timeout
> ------------------------
>
> Key: YARN-9942
> URL: https://issues.apache.org/jira/browse/YARN-9942
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.2.1
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Attachments: YARN-9942.000.patch, YARN-9942.001.patch, YARN-9942.002.patch
>
> Once the overcommit requirement has been satisfied, we should reset the
> timeout.
> In addition, there are a few instances where we change the amount of
> resources (e.g., decommissioning) using a value of 0.
> This triggers preemption events.
> We should set it to the default (not do anything).

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965145#comment-16965145 ] Hadoop QA commented on YARN-9942:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 22s | trunk passed |
| +1 | compile | 7m 26s | trunk passed |
| +1 | checkstyle | 1m 21s | trunk passed |
| +1 | mvnsite | 1m 52s | trunk passed |
| +1 | shadedclient | 16m 18s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 20s | trunk passed |
| +1 | javadoc | 1m 27s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 20s | the patch passed |
| +1 | compile | 6m 56s | the patch passed |
| +1 | javac | 6m 56s | the patch passed |
| +1 | checkstyle | 1m 17s | the patch passed |
| +1 | mvnsite | 1m 42s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 41s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 30s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 1m 22s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 59s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 81m 52s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 161m 49s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.overcommitTimeout; locked 85% of time. Unsynchronized access at SchedulerNode.java:[line 146] |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9942 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984663/YARN-9942.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
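The FindBugs finding above ("Inconsistent synchronization of SchedulerNode.overcommitTimeout; locked 85% of time") typically means the field is written while holding the object lock but read without it on at least one path. A minimal, hypothetical Java sketch of the pattern and its usual fix — synchronizing every access to the field — follows; the class and method names are illustrative, not the actual YARN-9942 code:

```java
// Hypothetical sketch of the IS2_INCONSISTENT_SYNC pattern. The field is
// guarded by the instance lock on the write paths, so the read path must be
// synchronized as well (or the field made volatile); an unguarded getter is
// what produces the "Unsynchronized access" warning.
class SchedulerNodeSketch {
  // -1 stands in for "no overcommit timeout configured" (the default).
  private long overcommitTimeout = -1L;

  public synchronized void setOvercommitTimeout(long timeoutMillis) {
    overcommitTimeout = timeoutMillis;
  }

  // Before the fix this getter would be unsynchronized; making it
  // synchronized gives every access a consistent happens-before ordering.
  public synchronized long getOvercommitTimeout() {
    return overcommitTimeout;
  }

  // Resetting to the default once the overcommit requirement is satisfied,
  // per the issue description.
  public synchronized void resetOvercommitTimeout() {
    overcommitTimeout = -1L;
  }
}
```

An alternative with the same effect for a single primitive field is declaring it `volatile`, which trades lock acquisition for a visibility guarantee on each read and write.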
[jira] [Updated] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-9942:
Attachment: YARN-9942.001.patch

> Reset overcommit timeout
> ------------------------
>
> Key: YARN-9942
> URL: https://issues.apache.org/jira/browse/YARN-9942
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.2.1
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Attachments: YARN-9942.000.patch, YARN-9942.001.patch
>
> Once the overcommit requirement has been satisfied, we should reset the
> timeout.
> In addition, there are a few instances where we change the amount of
> resources (e.g., decommissioning) using a value of 0.
> This triggers preemption events.
> We should set it to the default (not do anything).
[jira] [Updated] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-9942:
Description:
Once the overcommit requirement has been satisfied, we should reset the timeout. In addition, there are a few instances where we change the amount of resources (e.g., decommissioning) using a value of 0. This triggers preemption events. We should set it to the default (not do anything).

was: Currently, there are a few instances where we change the amount of resources (e.g., decommissioning) using a value of 0. This triggers preemption events. We should set it to the default (not do anything).

> Reset overcommit timeout
> ------------------------
>
> Key: YARN-9942
> URL: https://issues.apache.org/jira/browse/YARN-9942
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.2.1
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Attachments: YARN-9942.000.patch
>
> Once the overcommit requirement has been satisfied, we should reset the
> timeout.
> In addition, there are a few instances where we change the amount of
> resources (e.g., decommissioning) using a value of 0.
> This triggers preemption events.
> We should set it to the default (not do anything).
[jira] [Updated] (YARN-9942) Reset overcommit timeout
[ https://issues.apache.org/jira/browse/YARN-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated YARN-9942:
Summary: Reset overcommit timeout (was: Node resource update should use OVER_COMMIT_TIMEOUT_MILLIS_DEFAULT)

> Reset overcommit timeout
> ------------------------
>
> Key: YARN-9942
> URL: https://issues.apache.org/jira/browse/YARN-9942
> Project: Hadoop YARN
> Issue Type: Bug
> Affects Versions: 3.2.1
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Attachments: YARN-9942.000.patch
>
> Currently, there are a few instances where we change the amount of resources
> (e.g., decommissioning) using a value of 0. This triggers preemption events.
> We should set it to the default (not do anything).
[jira] [Commented] (YARN-9561) Add C changes for the new RuncContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965064#comment-16965064 ] Jim Brennan commented on YARN-9561:

Actually, it might be better in this case to just do the stat and fail if it doesn't exist.

> Add C changes for the new RuncContainerRuntime
> ----------------------------------------------
>
> Key: YARN-9561
> URL: https://issues.apache.org/jira/browse/YARN-9561
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Badger
> Assignee: Eric Badger
> Priority: Major
> Attachments: YARN-9561.001.patch, YARN-9561.002.patch,
> YARN-9561.003.patch, YARN-9561.004.patch, YARN-9561.005.patch,
> YARN-9561.006.patch, YARN-9561.007.patch, YARN-9561.008.patch
>
> This JIRA will be used to add the C changes to the container-executor native
> binary that are necessary for the new RuncContainerRuntime. There should be
> no changes to existing code paths.
[jira] [Commented] (YARN-9561) Add C changes for the new RuncContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965061#comment-16965061 ] Jim Brennan commented on YARN-9561:

Thanks for updating the patch [~ebadger]! I tested this along with the patches for YARN-9562 and YARN-9564. Everything seems to be working well.

I did run into one issue with the container executor unit tests (cetest). I normally compile with this option: -Dcontainer-executor.conf.dir=${HADOOP_CONF_DIR}

This causes some failures in cetest:
{noformat}
[----------] 7 tests from TestRunc
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_valid
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_valid (1 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_bad_container_id
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_bad_container_id (0 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_existing_pidfile
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_existing_pidfile (0 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_invalid_media_type
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_media_type (0 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_invalid_num_reap_layers_keep
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_num_reap_layers_keep (0 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_valid_mounts
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_valid_mounts (0 ms)
[ RUN      ] TestRunc.test_parse_runc_launch_cmd_invalid_mounts
Could not create /home/gs/hadoop/conf/container-executor.cfg
/home/jbrennan02/git/y-hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_runc_util.cc:47: Failure
Expected: ret
  Which is: 1
To be equal to: 0
Container executor cfg setup failed
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_mounts (0 ms)
[----------] 7 tests from TestRunc (2 ms total)

[----------] Global test environment tear-down
[==========] 89 tests from 10 test cases ran. (82 ms total)
[  PASSED  ] 82 tests.
[  FAILED  ] 7 tests, listed below:
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_valid
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_bad_container_id
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_existing_pidfile
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_media_type
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_num_reap_layers_keep
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_valid_mounts
[  FAILED  ] TestRunc.test_parse_runc_launch_cmd_invalid_mounts
{noformat}
It's failing because I already have a container-executor.cfg there and it is owned by root. If I run without defining {{container-executor.conf.dir}}, all of the tests pass. I was able to get this to work by modifying test_runc_util.cc::create_ce_file():
{noformat}
int create_ce_file() {
[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime
[ https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965048#comment-16965048 ] Jim Brennan commented on YARN-9562:

Thanks for the updates [~ebadger]! I am +1 (non-binding) on patch 013. I tested it with the patches for YARN-9561 and YARN-9564. I was able to run with the runc container executor on a one-node cluster. I verified that I could use the {{YARN_CONTAINER_RUNTIME_RUNC_MOUNTS}} environment variable to specify the mounts. I also ran all of the relevant unit tests.

> Add Java changes for the new RuncContainerRuntime
> -------------------------------------------------
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Badger
> Assignee: Eric Badger
> Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch,
> YARN-9562.003.patch, YARN-9562.004.patch, YARN-9562.005.patch,
> YARN-9562.006.patch, YARN-9562.007.patch, YARN-9562.008.patch,
> YARN-9562.009.patch, YARN-9562.010.patch, YARN-9562.011.patch,
> YARN-9562.012.patch, YARN-9562.013.patch
>
> This JIRA will be used to add the Java changes for the new
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the
> existing DockerLinuxContainerRuntime code once it is moved up into an
> abstract class that can be extended.
[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion
[ https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16965038#comment-16965038 ] Jim Brennan commented on YARN-9564:

Based on my testing with the latest patches for YARN-9561, YARN-9562, and this patch, I am +1 (non-binding) on patch 004. I was able to use docker2squash.py to pull a docker image, squash it, and push the layers to my local hdfs. I was then able to run some test jobs using the runc container runtime.

> Create docker-to-squash tool for image conversion
> -------------------------------------------------
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Badger
> Assignee: Eric Badger
> Priority: Major
> Attachments: YARN-9564.001.patch, YARN-9564.002.patch,
> YARN-9564.003.patch, YARN-9564.004.patch
>
> The new runc runtime uses docker images that are converted into multiple
> squashfs images. Each layer of the docker image will get its own squashfs
> image. We need a tool to help automate the creation of these squashfs images
> when all we have is a docker image.
[jira] [Created] (YARN-9948) Remove attempts that are beyond max-attempt limit from RMAppImpl
Hu Ziqian created YARN-9948:
-------------------------------

         Summary: Remove attempts that are beyond max-attempt limit from RMAppImpl
             Key: YARN-9948
             URL: https://issues.apache.org/jira/browse/YARN-9948
         Project: Hadoop YARN
      Issue Type: Improvement
      Components: resourcemanager
Affects Versions: 3.1.3
        Reporter: Hu Ziqian
[jira] [Commented] (YARN-9789) Disable Option for Write Ahead Logs of LogMutation
[ https://issues.apache.org/jira/browse/YARN-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964766#comment-16964766 ] Hadoop QA commented on YARN-9789:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 33m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 24s | trunk passed |
| +1 | compile | 0m 45s | trunk passed |
| +1 | checkstyle | 0m 37s | trunk passed |
| +1 | mvnsite | 0m 50s | trunk passed |
| +1 | shadedclient | 13m 2s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 11s | trunk passed |
| +1 | javadoc | 0m 34s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 43s | the patch passed |
| +1 | compile | 0m 39s | the patch passed |
| +1 | javac | 0m 39s | the patch passed |
| +1 | checkstyle | 0m 29s | the patch passed |
| +1 | mvnsite | 0m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 27s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 16s | the patch passed |
| +1 | javadoc | 0m 30s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 81m 44s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 167m 37s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9789 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984585/YARN-9789-002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a40733b3b79a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef9d12d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25078/testReport/ |
| Max. process+thread count | 882 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25078/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Disable Option for Write Ahead Logs of LogMutation
[jira] [Commented] (YARN-9788) Queue Management API does not support parallel updates
[ https://issues.apache.org/jira/browse/YARN-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964765#comment-16964765 ] Hadoop QA commented on YARN-9788:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 34m 35s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 38s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 17s | trunk passed |
| +1 | compile | 7m 56s | trunk passed |
| +1 | checkstyle | 1m 18s | trunk passed |
| +1 | mvnsite | 1m 30s | trunk passed |
| +1 | shadedclient | 15m 51s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 1s | trunk passed |
| +1 | javadoc | 1m 5s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 7m 15s | the patch passed |
| +1 | javac | 7m 15s | the patch passed |
| +1 | checkstyle | 1m 16s | the patch passed |
| +1 | mvnsite | 1m 22s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 38s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 16s | the patch passed |
| +1 | javadoc | 1m 2s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 85m 2s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | unit | 26m 27s | hadoop-yarn-client in the patch passed. |
| +1 | asflicense | 0m 54s | The patch does not generate ASF License warnings. |
| | | 224m 21s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9788 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984580/YARN-9788-010.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux bb59d8921d30 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef9d12d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25077/testReport/ |
| Max. process+thread count | 816 (vs. ulimit of 5500) | |
[jira] [Commented] (YARN-9780) SchedulerConf Mutation API does not Allow Stop and Remove Queue in a single call
[ https://issues.apache.org/jira/browse/YARN-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964704#comment-16964704 ] Prabhu Joseph commented on YARN-9780: - [~snemeth] Thanks for reviewing. That condition still holds: a queue can be deleted only after it is stopped. This patch provides a way in the Mutation API to combine both Stop and Delete in a single call; the existing behavior is not changed. 1. Without the Mutation API, the queue has to be stopped using the refresh option, and only then can it be removed. 2. With the Mutation API, the user can perform the delete in a separate call, but the stop has to be done before that for the delete to work. > SchedulerConf Mutation API does not Allow Stop and Remove Queue in a single > call > > > Key: YARN-9780 > URL: https://issues.apache.org/jira/browse/YARN-9780 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9780-001.patch, YARN-9780-002.patch, > YARN-9780-003.patch, YARN-9780-004.patch > > > SchedulerConf Mutation API does not Allow Stop and Remove Queue in a single > call. The queue has to be stopped before it is removed, so it is useful to > allow both Stop and Remove Queue in a single call. > *Repro:*
> {code:java}
> Capacity-Scheduler.xml:
> yarn.scheduler.capacity.root.queues = new, default, dummy
> yarn.scheduler.capacity.root.default.capacity = 60
> yarn.scheduler.capacity.root.dummy.capacity = 30
> yarn.scheduler.capacity.root.new.capacity = 10
>
> curl -v -X PUT -d @abc.xml -H "Content-type: application/xml" 'http://:8088/ws/v1/cluster/scheduler-conf'
>
> abc.xml
> <sched-conf>
>   <update-queue>
>     <queue-name>root.default</queue-name>
>     <params>
>       <entry>
>         <key>capacity</key>
>         <value>70</value>
>       </entry>
>     </params>
>   </update-queue>
>   <update-queue>
>     <queue-name>root.new</queue-name>
>     <params>
>       <entry>
>         <key>state</key>
>         <value>STOPPED</value>
>       </entry>
>     </params>
>   </update-queue>
>   <remove-queue>root.new</remove-queue>
> </sched-conf>
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9947) lazy init appLogAggregatorImpl when log aggregation
[ https://issues.apache.org/jira/browse/YARN-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hu Ziqian updated YARN-9947: Summary: lazy init appLogAggregatorImpl when log aggregation (was: lazy-init-appLogAggregatorImpl-when-log-aggregation) > lazy init appLogAggregatorImpl when log aggregation > --- > > Key: YARN-9947 > URL: https://issues.apache.org/jira/browse/YARN-9947 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 3.1.3 >Reporter: Hu Ziqian >Priority: Major >
[jira] [Commented] (YARN-9789) Disable Option for Write Ahead Logs of LogMutation
[ https://issues.apache.org/jira/browse/YARN-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964694#comment-16964694 ] Prabhu Joseph commented on YARN-9789: - Thanks [~snemeth] for reviewing. I have added a new testcase in TestLeveldbConfigurationStore in [^YARN-9789-002.patch]. YARN-9789, YARN-9780, YARN-9781 and YARN-9788 all modify the same files, so I will rebase again after any of the other patches is committed. > Disable Option for Write Ahead Logs of LogMutation > -- > > Key: YARN-9789 > URL: https://issues.apache.org/jira/browse/YARN-9789 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9789-001.patch, YARN-9789-002.patch > > > When yarn.scheduler.configuration.store.max-logs is set to zero, the > YarnConfigurationStore (ZK, LevelDB) reads the write-ahead logs from the > backend, which is not needed.
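The guard described above (skip write-ahead-log reads entirely when max-logs is zero) can be sketched as follows. This is an illustration only: the class and method names below are hypothetical stand-ins, not the actual YarnConfigurationStore API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the proposed behavior (hypothetical names): when max-logs is
 * zero the write-ahead log is disabled, so the store should neither read
 * from nor append to the backend (ZK / LevelDB) log storage.
 */
class ConfStoreSketch {
    private final int maxLogs;                            // yarn.scheduler.configuration.store.max-logs
    private final List<String> backend = new ArrayList<>(); // stands in for ZK/LevelDB log storage
    private int backendReads = 0;                          // counts (expensive) backend reads

    ConfStoreSketch(int maxLogs) {
        this.maxLogs = maxLogs;
    }

    /** Append a mutation to the WAL, trimming to maxLogs; no-op when disabled. */
    void logMutation(String mutation) {
        if (maxLogs <= 0) {
            return;                                        // WAL disabled: skip the backend round trip
        }
        List<String> logs = readLogs();                    // existing behavior: read-modify-write
        logs.add(mutation);
        while (logs.size() > maxLogs) {
            logs.remove(0);                                // drop oldest entries beyond the cap
        }
        backend.clear();
        backend.addAll(logs);
    }

    private List<String> readLogs() {
        backendReads++;
        return new ArrayList<>(backend);
    }

    int getBackendReads() {
        return backendReads;
    }

    int getLogSize() {
        return backend.size();
    }
}
```

With max-logs set to zero, logMutation returns before ever touching the backend, which is the read the issue wants to avoid.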
[jira] [Updated] (YARN-9789) Disable Option for Write Ahead Logs of LogMutation
[ https://issues.apache.org/jira/browse/YARN-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9789: Attachment: YARN-9789-002.patch > Disable Option for Write Ahead Logs of LogMutation > -- > > Key: YARN-9789 > URL: https://issues.apache.org/jira/browse/YARN-9789 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9789-001.patch, YARN-9789-002.patch > > > When yarn.scheduler.configuration.store.max-logs is set to zero, the > YarnConfigurationStore (ZK, LevelDB) reads the write-ahead logs from the > backend, which is not needed.
[jira] [Created] (YARN-9947) lazy-init-appLogAggregatorImpl-when-log-aggregation
Hu Ziqian created YARN-9947: --- Summary: lazy-init-appLogAggregatorImpl-when-log-aggregation Key: YARN-9947 URL: https://issues.apache.org/jira/browse/YARN-9947 Project: Hadoop YARN Issue Type: Improvement Components: nodemanager Affects Versions: 3.1.3 Reporter: Hu Ziqian
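The issue gives no design details; as an illustration of the lazy-initialization pattern the summary suggests, a per-application aggregator could be created on first use rather than up front. All names below are hypothetical stand-ins, not the actual AppLogAggregatorImpl API in the NodeManager.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch: construct the (potentially expensive) per-application aggregator
 * only when the first log-aggregation request for that app arrives, instead
 * of eagerly at app start.
 */
class LazyAggregators {
    /** Counts real constructions so laziness is observable. */
    static final AtomicInteger constructed = new AtomicInteger();

    /** Stand-in for AppLogAggregatorImpl. */
    static final class AppLogAggregator {
        final String appId;

        AppLogAggregator(String appId) {
            this.appId = appId;
            constructed.incrementAndGet();
        }
    }

    private final Map<String, AppLogAggregator> aggregators = new ConcurrentHashMap<>();

    /** Thread-safe lazy init: at most one aggregator is built per appId. */
    AppLogAggregator aggregatorFor(String appId) {
        return aggregators.computeIfAbsent(appId, AppLogAggregator::new);
    }
}
```

ConcurrentHashMap.computeIfAbsent gives the once-per-key guarantee without explicit locking, which is why it is a common shape for this kind of change.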
[jira] [Updated] (YARN-9946) Support container.watcher for watching container process
[ https://issues.apache.org/jira/browse/YARN-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoukang updated YARN-9946: --- Component/s: nodemanager > Support container.watcher for watching container process > - > > Key: YARN-9946 > URL: https://issues.apache.org/jira/browse/YARN-9946 > Project: Hadoop YARN > Issue Type: New Feature > Components: nodemanager >Reporter: zhoukang >Assignee: zhoukang >Priority: Major > Attachments: example-pmap.png > > > Support running a watcher script to watch the container process, e.g. to > print jstack or pmap output. > !example-pmap.png! >
[jira] [Updated] (YARN-9946) Support container.watcher for watching container process
[ https://issues.apache.org/jira/browse/YARN-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoukang updated YARN-9946: --- Attachment: (was: 选区_003.png) > Support container.watcher for watching container process > - > > Key: YARN-9946 > URL: https://issues.apache.org/jira/browse/YARN-9946 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: zhoukang >Assignee: zhoukang >Priority: Major > Attachments: example-pmap.png > > > Support running a watcher script to watch the container process, e.g. to > print jstack or pmap output. > !选区_003.png! >
[jira] [Updated] (YARN-9946) Support container.watcher for watching container process
[ https://issues.apache.org/jira/browse/YARN-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoukang updated YARN-9946: --- Description: Support running a watcher script to watch the container process, e.g. to print jstack or pmap output. !example-pmap.png! was: Support running a watcher script to watch the container process, e.g. to print jstack or pmap output. !选区_003.png!!example-pmap.png! > Support container.watcher for watching container process > - > > Key: YARN-9946 > URL: https://issues.apache.org/jira/browse/YARN-9946 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: zhoukang >Assignee: zhoukang >Priority: Major > Attachments: example-pmap.png > > > Support running a watcher script to watch the container process, e.g. to > print jstack or pmap output. > !example-pmap.png! >
[jira] [Updated] (YARN-9946) Support container.watcher for watching container process
[ https://issues.apache.org/jira/browse/YARN-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoukang updated YARN-9946: --- Description: Support running a watcher script to watch the container process, e.g. to print jstack or pmap output. !选区_003.png!!example-pmap.png! was: Support running a watcher script to watch the container process, e.g. to print jstack or pmap output. !选区_003.png! > Support container.watcher for watching container process > - > > Key: YARN-9946 > URL: https://issues.apache.org/jira/browse/YARN-9946 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: zhoukang >Assignee: zhoukang >Priority: Major > Attachments: example-pmap.png > > > Support running a watcher script to watch the container process, e.g. to > print jstack or pmap output. > !选区_003.png!!example-pmap.png! >
[jira] [Updated] (YARN-9946) Support container.watcher for watching container process
[ https://issues.apache.org/jira/browse/YARN-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoukang updated YARN-9946: --- Attachment: example-pmap.png > Support container.watcher for watching container process > - > > Key: YARN-9946 > URL: https://issues.apache.org/jira/browse/YARN-9946 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: zhoukang >Assignee: zhoukang >Priority: Major > Attachments: example-pmap.png > > > Support running a watcher script to watch the container process, e.g. to > print jstack or pmap output. > !选区_003.png! >
[jira] [Created] (YARN-9946) Support container.watcher for watching container process
zhoukang created YARN-9946: -- Summary: Support container.watcher for watching container process Key: YARN-9946 URL: https://issues.apache.org/jira/browse/YARN-9946 Project: Hadoop YARN Issue Type: New Feature Reporter: zhoukang Assignee: zhoukang Attachments: 选区_003.png Support running a watcher script to watch the container process, e.g. to print jstack or pmap output. !选区_003.png!
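The issue proposes running a diagnostic command such as jstack or pmap against a container process but gives no design. Purely as an illustration, one pass of such a watcher might capture the command's output as below; the class and method are hypothetical, and the command is parameterized so the sketch stays self-contained.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;

/**
 * Sketch of one container-watcher pass: run a diagnostic command
 * (e.g. ["pmap", pid] or ["jstack", pid]) and capture its output so the
 * NodeManager could log or persist it.
 */
class ContainerWatcher {
    /** Run one diagnostic pass and return the command's combined output. */
    static String runOnce(List<String> command) {
        try {
            // Merge stderr into stdout so one reader captures everything.
            Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = r.readLine()) != null) {
                    out.append(line).append('\n');
                }
            }
            p.waitFor();
            return out.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
```

A real implementation would run this on a schedule per container and handle timeouts; this only shows the capture step.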
[jira] [Commented] (YARN-9788) Queue Management API does not support parallel updates
[ https://issues.apache.org/jira/browse/YARN-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964657#comment-16964657 ] Prabhu Joseph commented on YARN-9788: - Thanks [~snemeth] for reviewing the patch. I have addressed the above review comments in [^YARN-9788-010.patch]. > Queue Management API does not support parallel updates > -- > > Key: YARN-9788 > URL: https://issues.apache.org/jira/browse/YARN-9788 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9788-001.patch, YARN-9788-002.patch, > YARN-9788-003.patch, YARN-9788-004.patch, YARN-9788-005.patch, > YARN-9788-006.patch, YARN-9788-007.patch, YARN-9788-008.patch, > YARN-9788-009.patch, YARN-9788-010.patch > > > The Queue Management API does not support parallel updates: when there are two > parallel scheduler conf updates (logAndApplyMutation), the first update is > overwritten by the second one. > Currently logAndApplyMutation creates a LogMutation and stores it in the > pendingMutation field, so at any given time there is only one LogMutation; two > parallel logAndApplyMutation calls overwrite pendingMutation and only the > later one remains. > The fix is to have logAndApplyMutation return the LogMutation object, which > can then be passed to confirmMutation. This fixes the parallel updates.
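The race and the fix described in the issue can be sketched as follows (simplified, hypothetical signatures, not the real configuration-store API): instead of stashing the pending mutation in a single shared field, logAndApplyMutation hands the mutation back to the caller, who later passes it to confirmMutation, so two concurrent updates no longer clobber each other.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the fix: each caller holds its own LogMutation reference, so
 * there is no shared pendingMutation field for a second update to overwrite.
 */
class ConfMutationSketch {
    static final class LogMutation {
        final String update;

        LogMutation(String update) {
            this.update = update;
        }
    }

    private final List<String> confirmed = new ArrayList<>();

    /** Log the mutation and return it to the caller (no shared pending field). */
    LogMutation logAndApplyMutation(String update) {
        return new LogMutation(update);
    }

    /** Confirm exactly the mutation that was logged, even when calls interleave. */
    synchronized void confirmMutation(LogMutation m, boolean accepted) {
        if (accepted) {
            confirmed.add(m.update);
        }
    }

    List<String> getConfirmed() {
        return confirmed;
    }
}
```

With the old pendingMutation-field shape, the second logAndApplyMutation would replace the first mutation before it was confirmed; returning the object makes each update's lifecycle independent.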
[jira] [Updated] (YARN-9788) Queue Management API does not support parallel updates
[ https://issues.apache.org/jira/browse/YARN-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated YARN-9788: Attachment: YARN-9788-010.patch > Queue Management API does not support parallel updates > -- > > Key: YARN-9788 > URL: https://issues.apache.org/jira/browse/YARN-9788 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > Attachments: YARN-9788-001.patch, YARN-9788-002.patch, > YARN-9788-003.patch, YARN-9788-004.patch, YARN-9788-005.patch, > YARN-9788-006.patch, YARN-9788-007.patch, YARN-9788-008.patch, > YARN-9788-009.patch, YARN-9788-010.patch > > > The Queue Management API does not support parallel updates: when there are two > parallel scheduler conf updates (logAndApplyMutation), the first update is > overwritten by the second one. > Currently logAndApplyMutation creates a LogMutation and stores it in the > pendingMutation field, so at any given time there is only one LogMutation; two > parallel logAndApplyMutation calls overwrite pendingMutation and only the > later one remains. > The fix is to have logAndApplyMutation return the LogMutation object, which > can then be passed to confirmMutation. This fixes the parallel updates.