[jira] [Updated] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager
[ https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prabhu Joseph updated YARN-10319:
---------------------------------
    Attachment: YARN-10319-004.patch

> Record Last N Scheduler Activities from ActivitiesManager
> ----------------------------------------------------------
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Prabhu Joseph
> Assignee: Prabhu Joseph
> Priority: Major
> Labels: activitiesmanager
> Attachments: Screen Shot 2020-06-18 at 1.26.31 PM.png, YARN-10319-001-WIP.patch, YARN-10319-002.patch, YARN-10319-003.patch, YARN-10319-004.patch
>
> ActivitiesManager records the call flow for a given nodeId, or the last call flow. This is useful when debugging an issue live, where the user queries with the right nodeId. But capturing the last N scheduler activities during the issue period can help to debug the issue offline.
[jira] [Comment Edited] (YARN-10315) Avoid sending RMNodeResourceUpdate event if resource is the same
[ https://issues.apache.org/jira/browse/YARN-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143537#comment-17143537 ]

yehuanhuan edited comment on YARN-10315 at 6/24/20, 5:33 AM:
-------------------------------------------------------------
Hi [~Sushil-K-S], can it be solved by comparing node resource usage before and after processing the completed container at AbstractYarnScheduler#nodeUpdate?

was (Author: yehuanhuan):
Can it be solved by comparing node resource usage before and after processing the completed container at AbstractYarnScheduler#nodeUpdate?

> Avoid sending RMNodeResourceUpdate event if resource is the same
> -----------------------------------------------------------------
>
> Key: YARN-10315
> URL: https://issues.apache.org/jira/browse/YARN-10315
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: Bibin Chundatt
> Assignee: Sushil Ks
> Priority: Major
>
> When the node is in DECOMMISSIONING state, an RMNodeResourceUpdateEvent is sent for every heartbeat, which results in a scheduler resource update. Avoid sending the event when the resource is unchanged.
> The scheduler node resource update iterates through all the queues, which is costly.
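A minimal sketch of the before/after comparison suggested above, assuming YARN's Resource API; the class name and the literal values are illustrative, not the actual patch:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

// Illustrative only: snapshot the node resource before completed containers are
// processed in AbstractYarnScheduler#nodeUpdate, compare it afterwards, and only
// send RMNodeResourceUpdateEvent when the value actually changed.
public final class NodeUpdateSketch {
  public static void main(String[] args) {
    Resource before = Resource.newInstance(8192, 8); // snapshot on entering nodeUpdate
    Resource after = Resource.newInstance(8192, 8);  // value after completed containers

    if (before.equals(after)) {
      // Skipping the event avoids a scheduler resource update that iterates all queues.
      System.out.println("resource unchanged, skip RMNodeResourceUpdateEvent");
    } else {
      System.out.println("resource changed, send RMNodeResourceUpdateEvent");
    }
  }
}
{code}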
[jira] [Commented] (YARN-10315) Avoid sending RMNodeResourceUpdate event if resource is the same
[ https://issues.apache.org/jira/browse/YARN-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143537#comment-17143537 ]

yehuanhuan commented on YARN-10315:
-----------------------------------
Can it be solved by comparing node resource usage before and after processing the completed container at AbstractYarnScheduler#nodeUpdate?

> Avoid sending RMNodeResourceUpdate event if resource is the same
> -----------------------------------------------------------------
>
> Key: YARN-10315
> URL: https://issues.apache.org/jira/browse/YARN-10315
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: Bibin Chundatt
> Assignee: Sushil Ks
> Priority: Major
>
> When the node is in DECOMMISSIONING state, an RMNodeResourceUpdateEvent is sent for every heartbeat, which results in a scheduler resource update. Avoid sending the event when the resource is unchanged.
> The scheduler node resource update iterates through all the queues, which is costly.
[jira] [Commented] (YARN-10319) Record Last N Scheduler Activities from ActivitiesManager
[ https://issues.apache.org/jira/browse/YARN-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143416#comment-17143416 ]

Tao Yang commented on YARN-10319:
---------------------------------
Thanks [~prabhujoseph] for this improvement. I agree that it may be a helpful addition to the single-node lookup mechanism, with which users can get all-node activities over multiple scheduling cycles at once for better debugging. Some comments about the patch:
* Is it better to rename "bulkactivities" (the REST API name) to "bulk-activities"?
* SchedulerActivitiesInfo is similar to ActivitiesInfo, which also means scheduler activities info; can we rename it to BulkActivitiesInfo?
* For consistency, we can also rename RMWebServices#getLastNActivities to RMWebServices#getBulkActivities.
* ActivitiesManager#recordCount can be affected by both the activities and bulk-activities REST APIs; we can use `recordCount.compareAndSet(0, 1)` instead of `recordCount.set(1)` to avoid getting an unexpected number of bulk activities, right?
* The fetching approaches of the activities and bulk-activities REST APIs are different (asynchronous vs. synchronous); I think we should elaborate on this in the documentation.

> Record Last N Scheduler Activities from ActivitiesManager
> ----------------------------------------------------------
>
> Key: YARN-10319
> URL: https://issues.apache.org/jira/browse/YARN-10319
> Project: Hadoop YARN
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Prabhu Joseph
> Assignee: Prabhu Joseph
> Priority: Major
> Labels: activitiesmanager
> Attachments: Screen Shot 2020-06-18 at 1.26.31 PM.png, YARN-10319-001-WIP.patch, YARN-10319-002.patch, YARN-10319-003.patch
>
> ActivitiesManager records the call flow for a given nodeId, or the last call flow. This is useful when debugging an issue live, where the user queries with the right nodeId. But capturing the last N scheduler activities during the issue period can help to debug the issue offline.
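For the recordCount point, a minimal sketch of the race being described; `recordCount` and the compareAndSet idea come from the review comment, while the surrounding class and method names are assumed for illustration, not ActivitiesManager's real API:

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// If /activities unconditionally called recordCount.set(1) while /bulk-activities
// had already set it to N, the bulk request would be truncated to one record.
// compareAndSet(0, 1) only arms single recording when no recording is in progress.
public final class RecordCountSketch {
  private final AtomicInteger recordCount = new AtomicInteger(0);

  void startBulkRecording(int n) {     // bulk-activities: capture N scheduling cycles
    recordCount.compareAndSet(0, n);
  }

  void startSingleRecording() {        // activities: capture one cycle, only if idle
    recordCount.compareAndSet(0, 1);   // instead of recordCount.set(1)
  }

  boolean recordingInProgress() {
    return recordCount.get() > 0;
  }
}
{code}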
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143308#comment-17143308 ]

Hadoop QA commented on YARN-10229:
----------------------------------
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 56s | trunk passed |
| +1 | compile | 1m 21s | trunk passed |
| +1 | checkstyle | 0m 34s | trunk passed |
| +1 | mvnsite | 0m 43s | trunk passed |
| +1 | shadedclient | 16m 53s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 27s | trunk passed |
| 0 | spotbugs | 1m 17s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 14s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 37s | the patch passed |
| +1 | compile | 1m 1s | the patch passed |
| +1 | javac | 1m 1s | the patch passed |
| +1 | checkstyle | 0m 22s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 160 unchanged - 6 fixed = 160 total (was 166) |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 24s | the patch passed |
| +1 | findbugs | 1m 22s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 22m 8s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 86m 59s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26202/artifact/out/Dockerfile |
| JIRA Issue | YARN-10229 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006297/YARN-10229.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0c11fb48e591 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 84110d850e2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/26202/testReport/ |
| Max. process+thread count | 310 (vs. ulimit
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143294#comment-17143294 ]

Young Chen commented on YARN-10229:
-----------------------------------
Two thoughts:
* We're checking both the amrmProxyEnabled flag and the federationEnabled flag before forwarding the requests. Is just checking amrmProxyEnabled sufficient, since that's a federation component? Not sure if there is some edge-case usage of the AMRM proxy without federation.
* Can we move the bulk of this logic into AMRMProxyService? E.g. the state store check, etc. Since the federation facade is already loaded inside the AMRMProxyService, it may be better to reduce ContainerManagerImpl's dependency on the Federation implementation and keep this logic inside Federation components. This way we also don't have to expose getFederationStateStoreFacade from AMRMProxyService.

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch, YARN-10229.003.patch, YARN-10229.004.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
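The check proposed in the quoted description boils down to comparing the scheduler address in the job's configuration against the local AMRM proxy address. A minimal sketch, assuming the job's Configuration has been loaded from its job.xml; the class and helper names are illustrative, not the committed patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public final class TokenCheckSketch {
  // Returns true when the AM was configured to talk to the local AMRM proxy,
  // i.e. the proxy's AMRMToken validation should run; false when the AM talks
  // to the RM directly and that validation should be skipped.
  static boolean needsAMRMTokenValidation(Configuration jobConf) {
    String schedulerAddr = jobConf.get(YarnConfiguration.RM_SCHEDULER_ADDRESS);
    String amrmProxyAddr = jobConf.get(YarnConfiguration.AMRM_PROXY_ADDRESS,
        YarnConfiguration.DEFAULT_AMRM_PROXY_ADDRESS);
    return amrmProxyAddr.equals(schedulerAddr);
  }
}
{code}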
[jira] [Updated] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated YARN-10229:
-----------------------------
    Attachment: YARN-10229.004.patch

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch, YARN-10229.003.patch, YARN-10229.004.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143252#comment-17143252 ]

Hadoop QA commented on YARN-10229:
----------------------------------
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 19s | trunk passed |
| +1 | compile | 1m 6s | trunk passed |
| +1 | checkstyle | 0m 30s | trunk passed |
| +1 | mvnsite | 0m 41s | trunk passed |
| +1 | shadedclient | 17m 28s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 29s | trunk passed |
| 0 | spotbugs | 1m 27s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 1m 17s | the patch passed |
| +1 | javac | 1m 17s | the patch passed |
| -0 | checkstyle | 0m 30s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 2 new + 160 unchanged - 6 fixed = 162 total (was 166) |
| +1 | mvnsite | 0m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 23s | the patch passed |
| +1 | findbugs | 1m 26s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 22m 5s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 87m 14s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26201/artifact/out/Dockerfile |
| JIRA Issue | YARN-10229 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006287/YARN-10229.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux bd7a8b635614 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 03f855e3e7a |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle |
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143177#comment-17143177 ]

Bilwa S T commented on YARN-10229:
----------------------------------
Fixed checkstyle issues in the .003 patch.

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch, YARN-10229.003.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Updated] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated YARN-10229:
-----------------------------
    Attachment: YARN-10229.003.patch

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch, YARN-10229.003.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143172#comment-17143172 ]

Bilwa S T commented on YARN-10229:
----------------------------------
[~elgoiri] Do I need to add {{@SuppressWarnings("checkstyle:visibilitymodifier")}} for the VisibilityModifier issue?

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
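For reference, the suppression being asked about would look like the sketch below; this assumes the build wires up Checkstyle's SuppressWarningsHolder module and SuppressWarningsFilter, and the class and field are made up for illustration:

{code:java}
public class ProtoFieldsExample {
  // VisibilityModifier would normally flag this non-private, non-final field;
  // the annotation silences only that check, only on this member.
  @SuppressWarnings("checkstyle:visibilitymodifier")
  public int numContainers;
}
{code}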
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143163#comment-17143163 ]

Íñigo Goiri commented on YARN-10229:
------------------------------------
Not all the checkstyle warnings are solvable, but the ones about the indentation and the hidden field make sense to fix.

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143159#comment-17143159 ]

Hadoop QA commented on YARN-10229:
----------------------------------
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 59s | trunk passed |
| +1 | compile | 1m 4s | trunk passed |
| +1 | checkstyle | 0m 28s | trunk passed |
| +1 | mvnsite | 0m 38s | trunk passed |
| +1 | shadedclient | 16m 41s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 27s | trunk passed |
| 0 | spotbugs | 1m 17s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 15s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 36s | the patch passed |
| +1 | compile | 1m 3s | the patch passed |
| +1 | javac | 1m 3s | the patch passed |
| -0 | checkstyle | 0m 23s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 3 new + 165 unchanged - 1 fixed = 168 total (was 166) |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 27s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 23s | the patch passed |
| +1 | findbugs | 1m 19s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 22m 2s | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 86m 12s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26200/artifact/out/Dockerfile |
| JIRA Issue | YARN-10229 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006279/YARN-10229.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 1e07a19a61f1 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 03f855e3e7a |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle |
[jira] [Commented] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143099#comment-17143099 ]

Szilard Nemeth commented on YARN-10278:
---------------------------------------
OK. We have a green build, so the latest patch fixed the test issue. This is ready for review.

> CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10278.001.patch, YARN-10278.002.patch
>
> This test framework class mocks a bit too heavily, and simulates CS internal behaviour with mock methods beyond the point where it is reasonably maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but I think it's worth examining whether this class can be made a bit more resilient to changes and easier to maintain. Or at least document it better.
[jira] [Commented] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143095#comment-17143095 ]

Hadoop QA commented on YARN-10278:
----------------------------------
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 15s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 1s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 20 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 22m 23s | trunk passed |
| +1 | compile | 1m 6s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 10s | trunk passed |
| +1 | shadedclient | 19m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 33s | trunk passed |
| 0 | spotbugs | 1m 48s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 46s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 0m 46s | the patch passed |
| +1 | javac | 0m 46s | the patch passed |
| -0 | checkstyle | 0m 34s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 82 new + 70 unchanged - 42 fixed = 152 total (was 112) |
| +1 | mvnsite | 0m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 28s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 30s | the patch passed |
| +1 | findbugs | 1m 45s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 92m 2s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 160m 52s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26199/artifact/out/Dockerfile |
| JIRA Issue | YARN-10278 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006275/YARN-10278.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c5b29d14f84d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 03f855e3e7a |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle |
[jira] [Commented] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143079#comment-17143079 ]

Bilwa S T commented on YARN-10229:
----------------------------------
[~elgoiri] Thanks for reviewing. Updated the patch: handled removing the empty line and adding the comment. I don't think extracting the facade is required, as it's used just once.

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Updated] (YARN-10229) [Federation] Client should be able to submit application to RM directly using normal client conf
[ https://issues.apache.org/jira/browse/YARN-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated YARN-10229:
-----------------------------
    Attachment: YARN-10229.002.patch

> [Federation] Client should be able to submit application to RM directly using normal client conf
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10229
> URL: https://issues.apache.org/jira/browse/YARN-10229
> Project: Hadoop YARN
> Issue Type: Bug
> Components: amrmproxy, federation
> Affects Versions: 3.1.1
> Reporter: JohnsonGuo
> Assignee: Bilwa S T
> Priority: Major
> Attachments: YARN-10229.001.patch, YARN-10229.002.patch
>
> Scenario: when the YARN federation feature is enabled across multiple YARN clusters, one can submit jobs to yarn-router by *modifying* the client configuration with the yarn-router address.
> But if one still wants to submit jobs to the RM directly via the original client (from before federation was enabled), they will encounter an AMRMToken exception. That means that once federation is enabled, anyone who wants to submit a job has to modify the client conf.
> One possible solution for this scenario is, in the NodeManager, when the client ApplicationMaster request comes:
> * get the client job.xml from HDFS "".
> * parse the "yarn.resourcemanager.scheduler.address" parameter in job.xml
> * if the value of the parameter is "localhost:8049" (the AMRM proxy address), then do the AMRMToken validation process
> * if the value of the parameter is "rm:port" (the RM address), then skip the AMRMToken validation process
[jira] [Commented] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy
[ https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143035#comment-17143035 ]

Gergely Pollak commented on YARN-10277:
---------------------------------------
Thank you for the fix [~snemeth], the patch seems fine to me. LGTM +1 (non-binding).

> CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy
> ----------------------------------------------------------------------------------------
>
> Key: YARN-10277
> URL: https://issues.apache.org/jira/browse/YARN-10277
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10277.001.patch, YARN-10277.002.patch
>
> Since the CapacityScheduler internal implementation depends more and more on queues being hierarchical, the test gets really hard to maintain. A lot of test cases were failing because they used non-existing queues; the older placement rule solution ignored missing parents, but since the leaf queue change in CS we must be able to get the full path for any queue, since all queues are referenced by their full path.
> This test should reflect this: instead of creating and expecting the existence of fictional queues, it should create a proper queue hierarchy, with a better way to describe it.
> Currently we set up a bunch of Mockito "when" statements to simulate the queue behaviour, but this is a hassle to maintain, and it is easy to miss a few methods.
[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142988#comment-17142988 ]

Szilard Nemeth commented on YARN-10316:
---------------------------------------
Thanks [~pbacsko]. Committed the branch-3.3 patch as well. Resolved the JIRA.

> FS-CS converter: convert maxAppsDefault, maxRunningApps settings
> -----------------------------------------------------------------
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Peter Bacsko
> Assignee: Peter Bacsko
> Priority: Major
> Labels: fs2cs
> Fix For: 3.4.0, 3.3.1
> Attachments: YARN-10136-001.patch, YARN-10316-002.patch, YARN-10316-003.patch, YARN-10316-branch-3.3.001.patch
>
> In YARN-9930, support for maximum running applications (called "max parallel apps") was introduced.
> The converter can now handle the following settings in {{fair-scheduler.xml}}:
> * {{maxRunningApps}} per user
> * {{maxRunningApps}} per queue
> * {{userMaxAppsDefault}}
> * {{queueMaxAppsDefault}}
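For context, a minimal sketch of the property mapping involved, assuming the max-parallel-apps property names introduced by YARN-9930; the class and method are illustrative, not actual fs2cs converter code:

{code:java}
public final class MaxAppsMappingSketch {
  // A fair-scheduler.xml <maxRunningApps> on a queue maps to the per-queue
  // Capacity Scheduler max-parallel-apps property introduced by YARN-9930.
  static String queueMaxParallelAppsKey(String queuePath) {
    return "yarn.scheduler.capacity." + queuePath + ".max-parallel-apps";
  }

  public static void main(String[] args) {
    // e.g. <queue name="users"><maxRunningApps>10</maxRunningApps></queue>
    System.out.println(queueMaxParallelAppsKey("root.users") + " = 10");
  }
}
{code}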
[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Szilard Nemeth updated YARN-10316:
----------------------------------
    Fix Version/s: 3.3.1

> FS-CS converter: convert maxAppsDefault, maxRunningApps settings
> -----------------------------------------------------------------
>
> Key: YARN-10316
> URL: https://issues.apache.org/jira/browse/YARN-10316
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Peter Bacsko
> Assignee: Peter Bacsko
> Priority: Major
> Labels: fs2cs
> Fix For: 3.4.0, 3.3.1
> Attachments: YARN-10136-001.patch, YARN-10316-002.patch, YARN-10316-003.patch, YARN-10316-branch-3.3.001.patch
>
> In YARN-9930, support for maximum running applications (called "max parallel apps") was introduced.
> The converter can now handle the following settings in {{fair-scheduler.xml}}:
> * {{maxRunningApps}} per user
> * {{maxRunningApps}} per queue
> * {{userMaxAppsDefault}}
> * {{queueMaxAppsDefault}}
[jira] [Commented] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142962#comment-17142962 ]

Szilard Nemeth commented on YARN-10278:
---------------------------------------
Found one issue with patch 001; it was a tricky one: basically, app ids were not incremented correctly for mock applications. Let's hope this fixes both unit test failures.

> CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10278.001.patch, YARN-10278.002.patch
>
> This test framework class mocks a bit too heavily, and simulates CS internal behaviour with mock methods beyond the point where it is reasonably maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but I think it's worth examining whether this class can be made a bit more resilient to changes and easier to maintain. Or at least document it better.
[jira] [Updated] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Szilard Nemeth updated YARN-10278:
----------------------------------
    Attachment: YARN-10277.002.patch

> CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10278.001.patch
>
> This test framework class mocks a bit too heavily, and simulates CS internal behaviour with mock methods beyond the point where it is reasonably maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but I think it's worth examining whether this class can be made a bit more resilient to changes and easier to maintain. Or at least document it better.
[jira] [Updated] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Szilard Nemeth updated YARN-10278:
----------------------------------
    Attachment: YARN-10278.002.patch

> CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10278.001.patch, YARN-10278.002.patch
>
> This test framework class mocks a bit too heavily, and simulates CS internal behaviour with mock methods beyond the point where it is reasonably maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but I think it's worth examining whether this class can be made a bit more resilient to changes and easier to maintain. Or at least document it better.
[jira] [Updated] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
[ https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Szilard Nemeth updated YARN-10278:
----------------------------------
    Attachment: (was: YARN-10277.002.patch)

> CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework needs some review
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
> Issue Type: Task
> Reporter: Gergely Pollak
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-10278.001.patch
>
> This test framework class mocks a bit too heavily, and simulates CS internal behaviour with mock methods beyond the point where it is reasonably maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but I think it's worth examining whether this class can be made a bit more resilient to changes and easier to maintain. Or at least document it better.
[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query
[ https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142948#comment-17142948 ]

Hadoop QA commented on YARN-10304:
----------------------------------
| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 27m 10s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 13s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 3s | trunk passed |
| +1 | compile | 19m 22s | trunk passed |
| +1 | checkstyle | 2m 50s | trunk passed |
| +1 | mvnsite | 2m 17s | trunk passed |
| +1 | shadedclient | 21m 6s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 2s | trunk passed |
| 0 | spotbugs | 0m 56s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 4m 11s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 44s | the patch passed |
| +1 | compile | 19m 3s | the patch passed |
| +1 | javac | 19m 3s | the patch passed |
| +1 | checkstyle | 2m 59s | the patch passed |
| +1 | mvnsite | 2m 17s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 36s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 59s | the patch passed |
| +1 | findbugs | 4m 32s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 4m 14s | hadoop-yarn-common in the patch passed. |
| +1 | unit | 2m 45s | hadoop-yarn-server-common in the patch passed. |
| +1 | unit | 3m 51s | hadoop-mapreduce-client-hs in the patch passed. |
| +1 | asflicense | 0m 48s | The patch does not generate ASF License warnings. |
| | | 160m 58s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26198/artifact/out/Dockerfile |
| JIRA Issue | YARN-10304 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006257/YARN-10304.006.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142941#comment-17142941 ] Hadoop QA commented on YARN-10316: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 33m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} branch-3.3 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 39s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} branch-3.3 passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 4s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} branch-3.3 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26196/artifact/out/Dockerfile | | JIRA Issue | YARN-10316 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006251/YARN-10316-branch-3.3.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2f36b350691e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | branch-3.3 / 8f1b70e | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | unit |
[jira] [Commented] (YARN-10166) Add detail log for ApplicationAttemptNotFoundException
[ https://issues.apache.org/jira/browse/YARN-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142908#comment-17142908 ] Youquan Lin commented on YARN-10166: Could you please take a look? [~sunilg] > Add detail log for ApplicationAttemptNotFoundException > -- > > Key: YARN-10166 > URL: https://issues.apache.org/jira/browse/YARN-10166 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Reporter: Youquan Lin >Assignee: Youquan Lin >Priority: Minor > Labels: patch > Attachments: YARN-10166-001.patch, YARN-10166-002.patch, > YARN-10166-003.patch, YARN-10166-004.patch > > > Suppose user A killed the app; ApplicationMasterService will then call > unregisterAttempt() for this app. Sometimes the app's AM continues to call the > allocate() method and reports an error as follows. {code:java} > Application attempt appattempt_1582520281010_15271_01 doesn't exist in > ApplicationMasterService cache. {code} > If user B has been watching the AM log, they will be confused about why the > attempt is no longer in the ApplicationMasterService cache. So I think we can > add a detailed log for ApplicationAttemptNotFoundException as follows. {code:java} > Application attempt appattempt_1582630210671_14658_01 doesn't exist in > ApplicationMasterService cache. App state: KILLED, finalStatus: KILLED, > diagnostics: App application_1582630210671_14658 killed by userA from > 127.0.0.1 {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
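For context, a minimal sketch of how such an enriched message could be assembled, assuming the standard RMApp accessors (getState(), getFinalApplicationStatus(), getDiagnostics()); the helper class and the exact wiring are invented here, not taken from the attached patches.
{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;

final class AttemptNotFoundMessages {
  private AttemptNotFoundMessages() {}

  // Builds the enriched exception carrying the app's final state and
  // diagnostics alongside the original cache-miss message.
  static ApplicationAttemptNotFoundException enriched(
      ApplicationAttemptId attemptId, RMApp app) {
    return new ApplicationAttemptNotFoundException(
        "Application attempt " + attemptId
            + " doesn't exist in ApplicationMasterService cache."
            + " App state: " + app.getState()
            + ", finalStatus: " + app.getFinalApplicationStatus()
            + ", diagnostics: " + app.getDiagnostics());
  }
}
{code}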
[jira] [Commented] (YARN-10325) Document max-parallel-apps for Capacity Scheduler
[ https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142871#comment-17142871 ] Hadoop QA commented on YARN-10325: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 48s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-3.3 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 31s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} branch-3.3 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 42m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-YARN-Build/26197/artifact/out/Dockerfile | | JIRA Issue | YARN-10325 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006254/YARN-10325-branch-3.3.001.patch | | Optional Tests | dupname asflicense mvnsite markdownlint | | uname | Linux bdd0b3fb055f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | branch-3.3 / 8f1b70e | | Max. process+thread count | 465 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/26197/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. 
> Document max-parallel-apps for Capacity Scheduler > - > > Key: YARN-10325 > URL: https://issues.apache.org/jira/browse/YARN-10325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, capacityscheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: YARN-10325-001.patch, YARN-10325-branch-3.3.001.patch > > > The new feature introduced by YARN-9930 should be reflected in the upstream > documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
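For reference, a sketch of the property names this documentation change covers, as introduced by YARN-9930; the queue path and user name are example values, and the keys should be checked against the committed docs.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class MaxParallelAppsExample {
  public static void main(String[] args) {
    Configuration csConf = new Configuration(false);
    // Cluster-wide default applied to every queue.
    csConf.setInt("yarn.scheduler.capacity.max-parallel-apps", 15);
    // Override for a specific queue ("root.default" is an example path).
    csConf.setInt("yarn.scheduler.capacity.root.default.max-parallel-apps", 10);
    // Default limit applied per user.
    csConf.setInt("yarn.scheduler.capacity.user.max-parallel-apps", 5);
    // Override for a specific user ("alice" is an example).
    csConf.setInt("yarn.scheduler.capacity.user.alice.max-parallel-apps", 2);
  }
}
{code}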
[jira] [Commented] (YARN-10304) Create an endpoint for remote application log directory path query
[ https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142847#comment-17142847 ] Andras Gyori commented on YARN-10304: - Thank you [~adam.antal] for the review. I have addressed all the remaining concerns: I extended the logic with application-specific path assembly and added two more unit tests to cover the scenarios. > Create an endpoint for remote application log directory path query > -- > > Key: YARN-10304 > URL: https://issues.apache.org/jira/browse/YARN-10304 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Andras Gyori >Assignee: Andras Gyori >Priority: Minor > Attachments: YARN-10304.001.patch, YARN-10304.002.patch, > YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch, > YARN-10304.006.patch > > > The logic that determines the aggregated log directory path (currently based > on configuration) is scattered around the codebase and duplicated multiple > times. Providing a separate class that creates the path for a specific > user gives an abstraction over this logic. It could be used in > place of the previously duplicated logic; moreover, we could provide an > endpoint to query this path. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10304) Create an endpoint for remote application log directory path query
[ https://issues.apache.org/jira/browse/YARN-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Gyori updated YARN-10304: Attachment: YARN-10304.006.patch > Create an endpoint for remote application log directory path query > -- > > Key: YARN-10304 > URL: https://issues.apache.org/jira/browse/YARN-10304 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Andras Gyori >Assignee: Andras Gyori >Priority: Minor > Attachments: YARN-10304.001.patch, YARN-10304.002.patch, > YARN-10304.003.patch, YARN-10304.004.patch, YARN-10304.005.patch, > YARN-10304.006.patch > > > The logic that determines the aggregated log directory path (currently based > on configuration) is scattered around the codebase and duplicated multiple > times. Providing a separate class that creates the path for a specific > user gives an abstraction over this logic. It could be used in > place of the previously duplicated logic; moreover, we could provide an > endpoint to query this path. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
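As a rough illustration of the abstraction under discussion, a hypothetical helper that assembles the remote application log directory from its parts; the class and method names are invented, and the {remote-root}/{user}/{suffix}/{appId} layout mirrors the usual aggregation convention rather than the patch's exact code.
{code:java}
import org.apache.hadoop.fs.Path;

public final class RemoteAppLogDirBuilder {
  private RemoteAppLogDirBuilder() {}

  // e.g. build(new Path("/tmp/logs"), "alice", "logs",
  //            "application_1234567890123_0001")
  //   -> /tmp/logs/alice/logs/application_1234567890123_0001
  public static Path build(Path remoteRoot, String user, String suffix,
      String appId) {
    return new Path(new Path(new Path(remoteRoot, user), suffix), appId);
  }
}
{code}
Centralizing this in one class is what lets an endpoint expose the path later: the web service can call the same builder instead of re-deriving the path from configuration.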
[jira] [Updated] (YARN-10325) Document max-parallel-apps for Capacity Scheduler
[ https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10325: Attachment: YARN-10325-branch-3.3.001.patch > Document max-parallel-apps for Capacity Scheduler > - > > Key: YARN-10325 > URL: https://issues.apache.org/jira/browse/YARN-10325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, capacityscheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: YARN-10325-001.patch, YARN-10325-branch-3.3.001.patch > > > The new feature introduced by YARN-9930 should be reflected in the upstream > documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142824#comment-17142824 ] Hudson commented on YARN-10316: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18376 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18376/]) YARN-10316. FS-CS converter: convert maxAppsDefault, maxRunningApps (snemeth: rev 03f855e3e7a4505362e221c8a07ae9317af773d0) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigRuleHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSQueueConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigRuleHandler.java > FS-CS converter: convert maxAppsDefault, maxRunningApps settings > > > Key: YARN-10316 > URL: https://issues.apache.org/jira/browse/YARN-10316 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Fix For: 3.4.0 > > Attachments: YARN-10136-001.patch, YARN-10316-002.patch, > YARN-10316-003.patch, YARN-10316-branch-3.3.001.patch > > > In YARN-9930, support for maximum running applications (called "max parallel > apps") has been introduced. > The converter can now handle the following settings in {{fair-scheduler.xml}}: > * {{<maxRunningApps>}} per user > * {{<maxRunningApps>}} per queue > * {{<userMaxAppsDefault>}} > * {{<queueMaxAppsDefault>}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10316: Attachment: YARN-10316-branch-3.3.001.patch > FS-CS converter: convert maxAppsDefault, maxRunningApps settings > > > Key: YARN-10316 > URL: https://issues.apache.org/jira/browse/YARN-10316 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Fix For: 3.4.0 > > Attachments: YARN-10136-001.patch, YARN-10316-002.patch, > YARN-10316-003.patch, YARN-10316-branch-3.3.001.patch > > > In YARN-9930, support for maximum running applications (called "max parallel > apps") has been introduced. > The converter can now handle the following settings in {{fair-scheduler.xml}}: > * {{<maxRunningApps>}} per user > * {{<maxRunningApps>}} per queue > * {{<userMaxAppsDefault>}} > * {{<queueMaxAppsDefault>}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142812#comment-17142812 ] Szilard Nemeth commented on YARN-10316: --- Hi [~pbacsko], Thanks for working on this. Latest patch LGTM, committed to trunk. Thanks [~gandras] for the review. Do you want to backport this to any branch? > FS-CS converter: convert maxAppsDefault, maxRunningApps settings > > > Key: YARN-10316 > URL: https://issues.apache.org/jira/browse/YARN-10316 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Fix For: 3.4.0 > > Attachments: YARN-10136-001.patch, YARN-10316-002.patch, > YARN-10316-003.patch > > > In YARN-9930, support for maximum running applications (called "max parallel > apps") has been introduced. > The converter can now handle the following settings in {{fair-scheduler.xml}}: > * {{<maxRunningApps>}} per user > * {{<maxRunningApps>}} per queue > * {{<userMaxAppsDefault>}} > * {{<queueMaxAppsDefault>}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10316) FS-CS converter: convert maxAppsDefault, maxRunningApps settings
[ https://issues.apache.org/jira/browse/YARN-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10316: -- Fix Version/s: 3.4.0 > FS-CS converter: convert maxAppsDefault, maxRunningApps settings > > > Key: YARN-10316 > URL: https://issues.apache.org/jira/browse/YARN-10316 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: fs2cs > Fix For: 3.4.0 > > Attachments: YARN-10136-001.patch, YARN-10316-002.patch, > YARN-10316-003.patch > > > In YARN-9930, support for maximum running applications (called "max parallel > apps") has been introduced. > The converter can now handle the following settings in {{fair-scheduler.xml}}: > * {{<maxRunningApps>}} per user > * {{<maxRunningApps>}} per queue > * {{<userMaxAppsDefault>}} > * {{<queueMaxAppsDefault>}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
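As a sketch of the mapping direction: the target key follows YARN-9930's max-parallel-apps naming, while the helper name and the way the FS value is obtained are simplifications, not the converter's actual code.
{code:java}
import org.apache.hadoop.conf.Configuration;

final class MaxRunningAppsMapping {
  private MaxRunningAppsMapping() {}

  // Writes a queue's <maxRunningApps> value as the equivalent
  // Capacity Scheduler max-parallel-apps property.
  static void mapQueueLimit(String queuePath, int maxRunningApps,
      Configuration csConfig) {
    if (maxRunningApps > 0) {
      csConfig.setInt("yarn.scheduler.capacity." + queuePath
          + ".max-parallel-apps", maxRunningApps);
    }
  }
  // e.g. mapQueueLimit("root.users", 10, csConfig) yields
  // yarn.scheduler.capacity.root.users.max-parallel-apps=10
}
{code}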
[jira] [Commented] (YARN-8047) RMWebApp make external class pluggable
[ https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142799#comment-17142799 ] Prabhu Joseph commented on YARN-8047: - [~BilwaST] Thanks for the patch. Below are some comments. 1. yarn-default.xml a. The description of yarn.http.rmwebapp.external.classes says "Used to specify custom web application pages". This does not look correct; can we change it to "Used to specify custom web services for ResourceManager"? 2. RMWebApp.java a. The code below is not necessary, as getClasses won't return null. {code:java} +if (externalClasses == null) { + return; +} {code} 3. RmController.java a. The line below needs to be removed. + System.out.println(schedulerName); b. This affects the existing behavior for custom schedulers: we need to show DefaultSchedulerPage.class if hadoop.http.rmwebapp.scheduler.page.class is not configured, and log a warning saying the custom page class was not found if the user has configured a class which does not exist. + renderText("Not Found"); 4. Can you also include a unit test case which shows that a custom web service and a custom page work fine? > RMWebApp make external class pluggable > -- > > Key: YARN-8047 > URL: https://issues.apache.org/jira/browse/YARN-8047 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bibin Chundatt >Assignee: Bilwa S T >Priority: Minor > Attachments: YARN-8047-001.patch, YARN-8047-002.patch, > YARN-8047-003.patch, YARN-8047.004.patch, YARN-8047.005.patch > > > This jira should make sure we are able to plug in web services and web pages > of the scheduler in the ResourceManager > * RMWebApp should allow binding external classes > * RMController should allow plugging in scheduler classes -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
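For orientation, a minimal sketch of the plug-in mechanism under review, using the yarn.http.rmwebapp.external.classes key from the patch; the Consumer stands in for RMWebApp's real binding hook and is purely illustrative.
{code:java}
import java.util.function.Consumer;
import org.apache.hadoop.conf.Configuration;

final class ExternalClassBinding {
  private ExternalClassBinding() {}

  static void bindExternalClasses(Configuration conf,
      Consumer<Class<?>> binder) {
    // Configuration.getClasses returns an empty array (not null) when the
    // key is unset, which is why the null check flagged above is redundant.
    Class<?>[] externalClasses =
        conf.getClasses("yarn.http.rmwebapp.external.classes");
    for (Class<?> clazz : externalClasses) {
      binder.accept(clazz);  // register each custom web service / page
    }
  }
}
{code}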
[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations
[ https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142789#comment-17142789 ] Szilard Nemeth commented on YARN-10279: --- [~mhudaky], [~shuzirra], I think it's worth filing a jira to track these FS flakies, if one hasn't been created yet. > Avoid unnecessary QueueMappingEntity creations > -- > > Key: YARN-10279 > URL: https://issues.apache.org/jira/browse/YARN-10279 > Project: Hadoop YARN > Issue Type: Task >Reporter: Gergely Pollak >Assignee: Hudáky Márton Gyula >Priority: Minor > Attachments: YARN-10279.001.patch, YARN-10279.003.patch, > YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch > > > In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule classes > we create new instances of the QueueMappingEntity class. In some cases we simply > copy the instance we already received, duplicating it, which is > unnecessary since the class is immutable. > This is just a minor improvement and probably doesn't have much impact, but it > still puts some unnecessary load on the GC. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations
[ https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142786#comment-17142786 ] Gergely Pollak commented on YARN-10279: --- [~mhudaky] thank you for the patch, LGTM +1 (non-binding). The tests are really flaky, but across the multiple uploads every test has passed at least once, and the FS failures are completely unrelated, since this change is to a class deep in the CS codebase. I had similar problems with these tests. > Avoid unnecessary QueueMappingEntity creations > -- > > Key: YARN-10279 > URL: https://issues.apache.org/jira/browse/YARN-10279 > Project: Hadoop YARN > Issue Type: Task >Reporter: Gergely Pollak >Assignee: Hudáky Márton Gyula >Priority: Minor > Attachments: YARN-10279.001.patch, YARN-10279.003.patch, > YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch > > > In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule classes > we create new instances of the QueueMappingEntity class. In some cases we simply > copy the instance we already received, duplicating it, which is > unnecessary since the class is immutable. > This is just a minor improvement and probably doesn't have much impact, but it > still puts some unnecessary load on the GC. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations
[ https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142690#comment-17142690 ] Adam Antal commented on YARN-10279: --- Thanks for the patch [~mhudaky]. The unit test failures seem unrelated, so I give a +1 to this patch. > Avoid unnecessary QueueMappingEntity creations > -- > > Key: YARN-10279 > URL: https://issues.apache.org/jira/browse/YARN-10279 > Project: Hadoop YARN > Issue Type: Task >Reporter: Gergely Pollak >Assignee: Hudáky Márton Gyula >Priority: Minor > Attachments: YARN-10279.001.patch, YARN-10279.003.patch, > YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch > > > In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule classes > we create new instances of the QueueMappingEntity class. In some cases we simply > copy the instance we already received, duplicating it, which is > unnecessary since the class is immutable. > This is just a minor improvement and probably doesn't have much impact, but it > still puts some unnecessary load on the GC. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
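To illustrate the pattern being removed, a hedged sketch: QueueMappingEntity's real API differs in detail, and the %user substitution branch is a simplified example of when a copy is genuinely needed.
{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.placement.QueueMappingEntity;

final class MappingReuse {
  private MappingReuse() {}

  static QueueMappingEntity resolve(QueueMappingEntity mapping, String user) {
    if (!mapping.getQueue().contains("%user")) {
      // No substitution needed: the immutable instance can be shared as-is,
      // avoiding a pointless allocation (and the GC load it implies).
      return mapping;
    }
    // Allocate a new instance only when the content actually changes.
    return new QueueMappingEntity(mapping.getSource(),
        mapping.getQueue().replace("%user", user));
  }
}
{code}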
[jira] [Commented] (YARN-10325) Document max-parallel-apps for Capacity Scheduler
[ https://issues.apache.org/jira/browse/YARN-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142649#comment-17142649 ] Andras Gyori commented on YARN-10325: - The patch LGTM, +1. > Document max-parallel-apps for Capacity Scheduler > - > > Key: YARN-10325 > URL: https://issues.apache.org/jira/browse/YARN-10325 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, capacityscheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: YARN-10325-001.patch > > > The new feature introduced by YARN-9930 should be reflected in the upstream > documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org