[jira] [Updated] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-10233:
---------------------------
    Target Version/s: 3.3.0

> [YARN UI2] No Logs were found in "YARN Daemon Logs" page
>
>                 Key: YARN-10233
>                 URL: https://issues.apache.org/jira/browse/YARN-10233
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn-ui-v2
>            Reporter: Akhil PB
>            Assignee: Akhil PB
>            Priority: Blocker
>             Fix For: 3.3.0

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-10233:
---------------------------
    Priority: Blocker  (was: Major)
[jira] [Commented] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082934#comment-17082934 ]

Sunil G commented on YARN-10233:
--------------------------------
This is a regression, and because of it YARN UI2 can't show daemon logs. Marking as a blocker. Thanks [~akhilpb] and [~prabhujoseph] for finding this. Let's get this in quickly. [~prabhujoseph], could you please work with the RM of 3.3.0 to get this in?
[jira] [Commented] (YARN-10219) YARN service placement constraints is broken
[ https://issues.apache.org/jira/browse/YARN-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082936#comment-17082936 ]

Hudson commented on YARN-10219:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18140 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18140/])
YARN-10219. Fix YARN Native Service Placement Constraints with Node (pjoseph: rev c791b0e90e0d9c7cb05d162d605e0679942bcbfb)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/utils/TestServiceApiUtil.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md

> YARN service placement constraints is broken
>
>                 Key: YARN-10219
>                 URL: https://issues.apache.org/jira/browse/YARN-10219
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2, 3.3.0, 3.2.1, 3.1.3
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>            Priority: Blocker
>         Attachments: YARN-10219.001.patch, YARN-10219.002.patch, YARN-10219.003.patch, YARN-10219.004.patch, YARN-10219.005.patch
>
> YARN service placement constraints do not work with node labels or node attributes.
> Example of placement constraints:
> {code}
> "placement_policy": {
>   "constraints": [
>     {
>       "type": "AFFINITY",
>       "scope": "NODE",
>       "node_attributes": {
>         "label": ["genfile"]
>       },
>       "target_tags": [
>         "ping"
>       ]
>     }
>   ]
> },
> {code}
> Node attribute added:
> {code}
> ./bin/yarn nodeattributes -add "host-3.example.com:label=genfile"
> {code}
> Scheduling activities show:
> {code}
> Node does not match partition or placement constraints, unsatisfied PC expression="in,node,ping", target-type=ALLOCATION_TAG
> 1
> host-3.example.com:45454
> {code}
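For contrast, a tag-only constraint of the same shape is sketched below. This follows the {{placement_policy}} format used in the example above (the tag name "ping" is reused from the report); it is illustrative only, showing the allocation-tag base case, whereas the {{node_attributes}} extension above is the part this issue reports as broken.

```json
"placement_policy": {
  "constraints": [
    {
      "type": "ANTI_AFFINITY",
      "scope": "NODE",
      "target_tags": [ "ping" ]
    }
  ]
}
```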
[jira] [Commented] (YARN-10219) YARN service placement constraints is broken
[ https://issues.apache.org/jira/browse/YARN-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082939#comment-17082939 ]

Prabhu Joseph commented on YARN-10219:
--------------------------------------
Thanks [~eyang] for fixing this issue. Have pushed the [^YARN-10219.005.patch] to trunk and cherry-picked it to branch-3.3.
[jira] [Assigned] (YARN-10159) TimelineConnector does not destroy the jersey client
[ https://issues.apache.org/jira/browse/YARN-10159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prabhu Joseph reassigned YARN-10159:
------------------------------------
    Assignee: Tanu Ajmera  (was: Prabhu Joseph)

> TimelineConnector does not destroy the jersey client
>
>                 Key: YARN-10159
>                 URL: https://issues.apache.org/jira/browse/YARN-10159
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: ATSv2
>    Affects Versions: 3.3.0
>            Reporter: Prabhu Joseph
>            Assignee: Tanu Ajmera
>            Priority: Major
>
> TimelineConnector does not destroy the jersey client. The destroy method must be called when there are no responses pending; otherwise undefined behavior will occur.
> http://javadox.com/com.sun.jersey/jersey-client/1.8/com/sun/jersey/api/client/Client.html#destroy()
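A minimal sketch of the cleanup pattern the report asks for, destroying the client in the service-stop path once no responses are pending. Note that {{StubClient}} here is a self-contained stand-in for the real {{com.sun.jersey.api.client.Client}}, and {{serviceStop}} mirrors the shape of a Hadoop service-stop hook; neither is the actual TimelineConnector code.

```java
// Sketch only: StubClient stands in for com.sun.jersey.api.client.Client,
// whose destroy() must run when no responses are pending.
public class TimelineConnectorSketch {

    /** Minimal stand-in for the Jersey Client; not the real class. */
    static class StubClient {
        private boolean destroyed = false;

        /** Releases client-side resources, mirroring Client#destroy(). */
        void destroy() {
            destroyed = true;
        }

        boolean isDestroyed() {
            return destroyed;
        }
    }

    private final StubClient client = new StubClient();

    /**
     * Service-stop hook sketch: destroy the client exactly once, after all
     * outstanding requests have completed (guaranteed by the caller here).
     */
    public void serviceStop() {
        if (client != null && !client.isDestroyed()) {
            client.destroy();
        }
    }

    public StubClient getClient() {
        return client;
    }
}
```

The guard against double-destroy matters because service stop can be invoked more than once during shutdown.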
[jira] [Assigned] (YARN-7826) Yarn service status cli does not update lifetime if its updated with -appId
[ https://issues.apache.org/jira/browse/YARN-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prabhu Joseph reassigned YARN-7826:
-----------------------------------
    Assignee: Tanu Ajmera

> Yarn service status cli does not update lifetime if its updated with -appId
>
>                 Key: YARN-7826
>                 URL: https://issues.apache.org/jira/browse/YARN-7826
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn-native-services
>            Reporter: Yesha Vora
>            Assignee: Tanu Ajmera
>            Priority: Critical
>
> 1) Create an Httpd yarn service with lifetime = 3600 sec.
> 2) Run yarn application -status , the lifetime field has 3600 sec.
> 3) Update the lifetime of the service using the applicationId:
> {code}
> yarn application -appId application_1516919074719_0001 -updateLifetime 48000
> {code}
> 4) Verify application status using the ApplicationId. The lifetime detail is updated correctly.
> 5) Verify lifetime using the application name:
> {code}
> [hrt_qa@xxx hadoopqe]$ yarn application -status httpd-hrt-qa-n
> {
>   "uri" : null,
>   "name" : "httpd-hrt-qa-n",
>   "id" : "application_1516919074719_0001",
>   "artifact" : null,
>   "resource" : null,
>   "launch_time" : null,
>   "number_of_running_containers" : null,
>   "lifetime" : 3600,
>   "placement_policy" : null,
>   "components" : [ {
>     "name" : "httpd",
>     "dependencies" : [ ],
>     "readiness_check" : null,
>     "artifact" : {
>       "id" : "centos/httpd-24-centos7:latest",
>       "type" : "DOCKER",
>       "uri" : null
>     },
>     "launch_command" : "/usr/bin/run-httpd",
>     "resource" : {
>       "uri" : null,
>       "profile" : null,
>       "cpus" : 1,
>       "memory" : "1024",
>       "additional" : null
>     },
>     "number_of_containers" : 2,
>     "run_privileged_container" : false,
>     "placement_policy" : null,
>     "state" : "STABLE",
>     "configuration" : {
>       "properties" : { },
>       "env" : { },
>       "files" : [ {
>         "type" : "TEMPLATE",
>         "dest_file" : "/var/www/html/index.html",
>         "src_file" : null,
>         "properties" : {
>           "content" : "TitleHello from ${COMPONENT_INSTANCE_NAME}!"
>         }
>       } ]
>     },
>     "quicklinks" : [ ],
>     "containers" : [ {
>       "uri" : null,
>       "id" : "container_e07_1516919074719_0001_01_02",
>       "launch_time" : 1516919372633,
>       "ip" : "xxx.xxx.xxx.xxx",
>       "hostname" : "httpd-0.httpd-hrt-qa-n.hrt_qa.test.com",
>       "bare_host" : "xxx",
>       "state" : "READY",
>       "component_instance_name" : "httpd-0",
>       "resource" : null,
>       "artifact" : null,
>       "privileged_container" : null
>     }, {
>       "uri" : null,
>       "id" : "container_e07_1516919074719_0001_01_03",
>       "launch_time" : 1516919372637,
>       "ip" : "xxx.xxx.xxx.xxx",
>       "hostname" : "httpd-1.httpd-hrt-qa-n.hrt_qa.test.com",
>       "bare_host" : "xxx",
>       "state" : "READY",
>       "component_instance_name" : "httpd-1",
>       "resource" : null,
>       "artifact" : null,
>       "privileged_container" : null
>     } ]
>   }, {
>     "name" : "httpd-proxy",
>     "dependencies" : [ ],
>     "readiness_check" : null,
>     "artifact" : {
>       "id" : "centos/httpd-24-centos7:latest",
>       "type" : "DOCKER",
>       "uri" : null
>     },
>     "launch_command" : "/usr/bin/run-httpd",
>     "resource" : {
>       "uri" : null,
>       "profile" : null,
>       "cpus" : 1,
>       "memory" : "1024",
>       "additional" : null
>     },
>     "number_of_containers" : 1,
>     "run_privileged_container" : false,
>     "placement_policy" : null,
>     "state" : "STABLE",
>     "configuration" : {
>       "properties" : { },
>       "env" : { },
>       "files" : [ {
>         "type" : "TEMPLATE",
>         "dest_file" : "/etc/httpd/conf.d/httpd-proxy.conf",
>         "src_file" : "httpd-proxy.conf",
>         "properties" : { }
>       } ]
>     },
>     "quicklinks" : [ ],
>     "containers" : [ {
>       "uri" : null,
>       "id" : "container_e07_1516919074719_0001_01_04",
>       "launch_time" : 1516919372638,
>       "ip" : "xxx.xxx.xxx.xxx",
>       "hostname" : "httpd-proxy-0.httpd-hrt-qa-n.hrt_qa.test.com",
>       "bare_host" : "xxx",
>       "state" : "READY",
>       "component_instance_name" : "httpd-proxy-0",
>       "resource" : null,
>       "artifact" : null,
>       "privileged_container" : null
>     } ]
>   } ],
>   "configuration" : {
>     "properties" : { },
>     "env" : { },
>     "files" : [ ]
>   },
>   "state" : "STABLE",
>   "quicklinks" : {
>     "Apache HTTP Server" : "http://httpd-proxy-0.httpd-hrt-qa-n.hrt_qa.test.com:8080"
>   },
>   "queue" : null,
>   "kerberos_principal" : {
>     "principal_name" : null,
>     "keytab" : null
>   }
> }
> {code}
> Here, App status with app-name d
[jira] [Assigned] (YARN-10183) Auto Created Leaf Queues does not start
[ https://issues.apache.org/jira/browse/YARN-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prabhu Joseph reassigned YARN-10183:
------------------------------------
    Assignee: Tanu Ajmera  (was: Prabhu Joseph)

> Auto Created Leaf Queues does not start
>
>                 Key: YARN-10183
>                 URL: https://issues.apache.org/jira/browse/YARN-10183
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 3.3.0
>            Reporter: Prabhu Joseph
>            Assignee: Tanu Ajmera
>            Priority: Major
>
> Auto created leaf queues do not start. Have a parent queue with auto-create leaf queue enabled:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.batch.auto-create-child-queue.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.batch.leaf-queue-template.capacity</name>
>   <value>30</value>
> </property>
> {code}
> Stopping the parent queue stops the auto-created leaf queues. But starting the parent queue / auto-created leaf queue does not start the leaf queue, causing subsequently submitted jobs to fail.
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.batch.state</name>
>   <value>RUNNING</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.batch.hive.state</name>
>   <value>RUNNING</value>
> </property>
> {code}
> A subsequent job fails:
> {code}
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1583503947651_0002 to YARN : org.apache.hadoop.security.AccessControlException: Queue root.batch.hive is STOPPED. Cannot accept submission of application: application_1583503947651_0002
>         at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:327)
>         at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:303)
>         at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
> {code}
[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082996#comment-17082996 ]

Adam Antal commented on YARN-10215:
-----------------------------------
+1 (non-binding). I'd attempt to rebase & reupload the patch to get a clean Jenkins build, but it's fine.

> Endpoint for obtaining direct URL for the logs
>
>                 Key: YARN-10215
>                 URL: https://issues.apache.org/jira/browse/YARN-10215
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn
>    Affects Versions: 3.3.0
>            Reporter: Adam Antal
>            Assignee: Andras Gyori
>            Priority: Major
>         Attachments: YARN-10025.001.patch, YARN-10025.002.patch, YARN-10025.003.patch
>
> If CORS-protected UIs are set up, there is an issue when the browser tries to access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS; that fails, so it falls back to JHS.
> - From the RM the browser received basic app info, so we know that the application is running.
> - From the JHS we got the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to the web UI of the NM on which the container is running. This redirect is performed by the browser automatically. In this setup the host is considered protected information, so the browser omits the "Origin" field from the request when this redirect is done. The browser then denies access to the NodeManager's web UI, because a CORS header is set up for the NM but the Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to the CORS violation.
> We should fix this. As an approach, we can expose another endpoint that only returns the URL of the NodeManager, which the UIv2 should call directly in order to retrieve the log. This adds a bit of complexity, but will enable users to keep the CORS-protected setup.
[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Antal updated YARN-10215:
------------------------------
    Target Version/s: 3.3.0, 3.4.0  (was: 3.4.0)
[jira] [Updated] (YARN-10138) Document the new JHS API
[ https://issues.apache.org/jira/browse/YARN-10138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Antal updated YARN-10138:
------------------------------
    Target Version/s: 3.4.0, 3.3.1  (was: 3.4.0)

> Document the new JHS API
>
>                 Key: YARN-10138
>                 URL: https://issues.apache.org/jira/browse/YARN-10138
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn
>    Affects Versions: 3.3.0
>            Reporter: Adam Antal
>            Assignee: Adam Antal
>            Priority: Major
>
> A new API has been introduced in YARN-10028, but we did not document it in the JHS API documentation. Let's add it.
[jira] [Assigned] (YARN-9710) [UI2] Yarn Daemon Logs displays the URL instead of log name
[ https://issues.apache.org/jira/browse/YARN-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akhil PB reassigned YARN-9710:
------------------------------
    Assignee: Akhil PB  (was: Prabhu Joseph)

> [UI2] Yarn Daemon Logs displays the URL instead of log name
>
>                 Key: YARN-9710
>                 URL: https://issues.apache.org/jira/browse/YARN-9710
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn-ui-v2
>    Affects Versions: 3.2.0
>            Reporter: Prabhu Joseph
>            Assignee: Akhil PB
>            Priority: Minor
>         Attachments: Screen Shot 2019-07-26 at 8.53.50 PM.png
>
> Yarn Daemon Logs displays the URL instead of the log name.
> !Screen Shot 2019-07-26 at 8.53.50 PM.png|height=300!
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated YARN-10234:
--------------------------------
    Summary: FS-CS converter: don't enable auto-create queue property for root  (was: FS-CS converter: don't enale auto-create queue property for root)

> FS-CS converter: don't enable auto-create queue property for root
>
>                 Key: YARN-10234
>                 URL: https://issues.apache.org/jira/browse/YARN-10234
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Critical
>
> The auto-create-child-queue property should not be enabled for root, otherwise it causes an exception inside the capacity scheduler.
> {noformat}
> 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: Trying to re-establish ZK session
> 2020-04-14 09:48:54,117 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: RefreshAll operation failed
>         at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772)
>         at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307)
>         at org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144)
>         at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896)
>         at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476)
>         at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636)
>         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
> Caused by: java.io.IOException: Failed to re-init queues : null
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489)
>         at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430)
>         at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761)
>         ... 6 more
> Caused by: java.lang.ClassCastException
> {noformat}
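A minimal sketch of the configuration shape the converter should emit instead: auto-queue creation on a non-root parent is fine, only the property on "root" itself triggers the failure above. The queue name "root.users" is hypothetical, used purely for illustration; the property name follows the standard capacity-scheduler convention.

```xml
<!-- Sketch: auto-queue creation enabled on a non-root parent queue.
     "root.users" is a hypothetical queue name for illustration. -->
<property>
  <name>yarn.scheduler.capacity.root.users.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
<!-- The equivalent property set directly on "root" is what leads to the
     ClassCastException during CapacityScheduler.reinitialize above. -->
```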
[jira] [Created] (YARN-10234) FS-CS converter: don't enale auto-create queue property for root
Peter Bacsko created YARN-10234:
-----------------------------------
             Summary: FS-CS converter: don't enale auto-create queue property for root
                 Key: YARN-10234
                 URL: https://issues.apache.org/jira/browse/YARN-10234
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko

The auto-create-child-queue property should not be enabled for root, otherwise it causes an exception inside the capacity scheduler (see the stack trace in the issue description above, ending in java.lang.ClassCastException).
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated YARN-10234:
--------------------------------
    Attachment: YARN-10234-001.patch
[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs
[ https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Antal updated YARN-10215:
------------------------------
    Target Version/s: 3.4.0, 3.3.1  (was: 3.3.0, 3.4.0)
[jira] [Commented] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083132#comment-17083132 ]

Adam Antal commented on YARN-9954:
----------------------------------
Hi [~BilwaST],
Thanks for the patch. Though generally keeping the {{IllegalArgumentException}} is a good idea, {{YarnException}} might be a better fit for {{ClientRMService#submitApplication()}} judging by the context. Otherwise looks good to me. +1 (non-binding).

> Configurable max application tags and max tag length
>
>                 Key: YARN-9954
>                 URL: https://issues.apache.org/jira/browse/YARN-9954
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Jonathan Hung
>            Assignee: Bilwa S T
>            Priority: Major
>         Attachments: YARN-9954.001.patch
>
> Currently the max tags and max tag length are hardcoded; they should be configurable:
> {noformat}
> @Evolving
> public static final int APPLICATION_MAX_TAGS = 10;
>
> @Evolving
> public static final int APPLICATION_MAX_TAG_LENGTH = 100;
> {noformat}
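A sketch of what the configurable variant might look like in yarn-site.xml. The property names below are assumptions for illustration only (they are not confirmed anywhere in this thread); the values mirror the current hardcoded defaults quoted above.

```xml
<!-- Assumed property names (illustrative only); values mirror the hardcoded
     APPLICATION_MAX_TAGS and APPLICATION_MAX_TAG_LENGTH defaults. -->
<property>
  <name>yarn.resourcemanager.application.max-tags</name>
  <value>10</value>
</property>
<property>
  <name>yarn.resourcemanager.application.max-tag.length</name>
  <value>100</value>
</property>
```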
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083181#comment-17083181 ]

Hadoop QA commented on YARN-10234:
----------------------------------
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 47s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 42s | trunk passed |
| +1 | compile | 0m 47s | trunk passed |
| +1 | checkstyle | 0m 34s | trunk passed |
| +1 | mvnsite | 0m 50s | trunk passed |
| +1 | shadedclient | 15m 57s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 47s | trunk passed |
| +1 | javadoc | 0m 39s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| +1 | checkstyle | 0m 32s | the patch passed |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 2s | the patch passed |
| +1 | javadoc | 0m 30s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 93m 24s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 158m 14s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.converter.TestFSQueueConverter |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d |
| JIRA Issue | YARN-10234 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12999876/YARN-10234-001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c87ae91d6a9c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / aeeebc5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/25888/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25888/testReport/ |
| Max. process+thread count | 808 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yar
[jira] [Updated] (YARN-10226) NPE in Capacity Scheduler while using %primary_group queue mapping
[ https://issues.apache.org/jira/browse/YARN-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10226: Attachment: (was: YARN-10234-002.patch) > NPE in Capacity Scheduler while using %primary_group queue mapping > -- > > Key: YARN-10226 > URL: https://issues.apache.org/jira/browse/YARN-10226 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10226-001.patch > > > If we use the following queue mapping: > {{u:%user:%primary_group}} > then we get a NPE inside ResourceManager: > {noformat} > 2020-04-06 11:59:13,883 ERROR resourcemanager.ResourceManager > (ResourceManager.java:serviceStart(881)) - Failed to load/recover state > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.getQueue(CapacitySchedulerQueueManager.java:138) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getContextForPrimaryGroup(UserGroupMappingPlacementRule.java:163) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getPlacementForUser(UserGroupMappingPlacementRule.java:118) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getPlacementForApp(UserGroupMappingPlacementRule.java:227) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager.placeApplication(PlacementManager.java:67) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.placeApplication(RMAppManager.java:827) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:378) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:367) > at > 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:594) > ... > {noformat} > We need to check if the parent queue is null in > {{UserGroupMappingPlacementRule.getContextForPrimaryGroup()}}. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
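The stack trace points at {{getContextForPrimaryGroup()}} looking up a queue named after the user's primary group and then dereferencing the null returned by {{CapacitySchedulerQueueManager.getQueue()}} when no such queue exists. A minimal sketch of the null guard the description asks for, using simplified stand-in types rather than the real scheduler classes (this is not the actual patch):

```java
import java.util.Map;

// Simplified model of the primary-group placement lookup: when the user's
// primary group has no matching queue, getQueue() returns null and the
// original code dereferenced it, causing the NPE during RM state recovery.
// The guard below returns "no placement" instead of crashing.
public class PrimaryGroupPlacement {
    // Stand-in for CapacitySchedulerQueueManager: queue name -> full path.
    private final Map<String, String> queuesByName;

    public PrimaryGroupPlacement(Map<String, String> queuesByName) {
        this.queuesByName = queuesByName;
    }

    /**
     * Returns the queue path for the user's primary group, or null when the
     * group does not correspond to an existing queue.
     */
    public String getContextForPrimaryGroup(String primaryGroup) {
        String parentQueue = queuesByName.get(primaryGroup);
        if (parentQueue == null) {
            // Guard from the issue description: no queue exists for this
            // group, so signal "no placement" rather than NPE later on.
            return null;
        }
        return parentQueue;
    }
}
```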
[jira] [Updated] (YARN-10226) NPE in Capacity Scheduler while using %primary_group queue mapping
[ https://issues.apache.org/jira/browse/YARN-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10226: Attachment: YARN-10234-002.patch > NPE in Capacity Scheduler while using %primary_group queue mapping > -- > > Key: YARN-10226 > URL: https://issues.apache.org/jira/browse/YARN-10226 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10226-001.patch > > > If we use the following queue mapping: > {{u:%user:%primary_group}} > then we get a NPE inside ResourceManager: > {noformat} > 2020-04-06 11:59:13,883 ERROR resourcemanager.ResourceManager > (ResourceManager.java:serviceStart(881)) - Failed to load/recover state > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.getQueue(CapacitySchedulerQueueManager.java:138) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getContextForPrimaryGroup(UserGroupMappingPlacementRule.java:163) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getPlacementForUser(UserGroupMappingPlacementRule.java:118) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.getPlacementForApp(UserGroupMappingPlacementRule.java:227) > at > org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementManager.placeApplication(PlacementManager.java:67) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.placeApplication(RMAppManager.java:827) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:378) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:367) > at > 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:594) > ... > {noformat} > We need to check if the parent queue is null in > {{UserGroupMappingPlacementRule.getContextForPrimaryGroup()}}. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated YARN-10234: Attachment: YARN-10234-002.patch > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
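For context, the flag in question is the per-queue auto-create property in {{capacity-scheduler.xml}}; the converter should emit it only for non-root managed parent queues. A sketch of the distinction, using a hypothetical queue name ({{root.users}}):

```xml
<!-- Valid: auto-create enabled on a non-root managed parent queue. -->
<property>
  <name>yarn.scheduler.capacity.root.users.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>

<!-- Invalid per this issue: the same flag on root itself makes
     CapacityScheduler.reinitialize() fail with the ClassCastException
     shown in the description. -->
```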
[jira] [Commented] (YARN-9710) [UI2] Yarn Daemon Logs displays the URL instead of log name
[ https://issues.apache.org/jira/browse/YARN-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083231#comment-17083231 ] Akhil PB commented on YARN-9710: This will be fixed in YARN-10233 > [UI2] Yarn Daemon Logs displays the URL instead of log name > --- > > Key: YARN-9710 > URL: https://issues.apache.org/jira/browse/YARN-9710 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Assignee: Akhil PB >Priority: Minor > Attachments: Screen Shot 2019-07-26 at 8.53.50 PM.png > > > Yarn Daemon Logs displays the URL instead of log name. > !Screen Shot 2019-07-26 at 8.53.50 PM.png|height=300! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Resolved] (YARN-9710) [UI2] Yarn Daemon Logs displays the URL instead of log name
[ https://issues.apache.org/jira/browse/YARN-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB resolved YARN-9710. Resolution: Duplicate > [UI2] Yarn Daemon Logs displays the URL instead of log name > --- > > Key: YARN-9710 > URL: https://issues.apache.org/jira/browse/YARN-9710 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Assignee: Akhil PB >Priority: Minor > Attachments: Screen Shot 2019-07-26 at 8.53.50 PM.png > > > Yarn Daemon Logs displays the URL instead of log name. > !Screen Shot 2019-07-26 at 8.53.50 PM.png|height=300! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated YARN-8680: -- Attachment: YARN-8680-branch-2.10.002.patch > YARN NM: Implement Iterable Abstraction for LocalResourceTracker state > -- > > Key: YARN-8680 > URL: https://issues.apache.org/jira/browse/YARN-8680 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Pradeep Ambati >Assignee: Pradeep Ambati >Priority: Critical > Fix For: 3.2.0, 3.1.2, 2.10.1 > > Attachments: YARN-8680-branch-2.10.001.patch, > YARN-8680-branch-2.10.002.patch, YARN-8680.00.patch, YARN-8680.01.patch, > YARN-8680.02.patch, YARN-8680.03.patch, YARN-8680.04.patch > > > Similar to YARN-8242, implement iterable abstraction for > LocalResourceTrackerState to load completed and in progress resources when > needed rather than loading them all at a time for a respective state. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083274#comment-17083274 ] Jim Brennan commented on YARN-8680: --- Put up patch 002 to fix checkstyle issue. > YARN NM: Implement Iterable Abstraction for LocalResourceTracker state > -- > > Key: YARN-8680 > URL: https://issues.apache.org/jira/browse/YARN-8680 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Pradeep Ambati >Assignee: Pradeep Ambati >Priority: Critical > Fix For: 3.2.0, 3.1.2, 2.10.1 > > Attachments: YARN-8680-branch-2.10.001.patch, > YARN-8680-branch-2.10.002.patch, YARN-8680.00.patch, YARN-8680.01.patch, > YARN-8680.02.patch, YARN-8680.03.patch, YARN-8680.04.patch > > > Similar to YARN-8242, implement iterable abstraction for > LocalResourceTrackerState to load completed and in progress resources when > needed rather than loading them all at a time for a respective state. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083292#comment-17083292 ] Vinod Kumar Vavilapalli commented on YARN-9848: --- +1 for revert and +1 for making this a 3.3.0 blocker. I think we should revert it in a 3.2 maintenance release too. > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Major > Attachments: YARN-9848-01.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083326#comment-17083326 ] Hadoop QA commented on YARN-8680: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 8s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_242 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} branch-2.10 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in branch-2.10 has 4 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_242 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed with JDK v1.8.0_242 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 243 unchanged - 5 fixed = 243 total (was 248) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed with JDK v1.8.0_242 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:bfeec47759e | | JIRA Issue | YARN-8680 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1208/YARN-8680-branch-2.10.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dd362762ef71 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit
[jira] [Commented] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083337#comment-17083337 ] Jim Brennan commented on YARN-8680: --- [~epayne], [~ebadger], [~jhung] can we cherry-pick this back to branch-2.10? The only change in the patch I put up was the checkstyle change (removed an unneeded import). > YARN NM: Implement Iterable Abstraction for LocalResourceTracker state > -- > > Key: YARN-8680 > URL: https://issues.apache.org/jira/browse/YARN-8680 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Pradeep Ambati >Assignee: Pradeep Ambati >Priority: Critical > Fix For: 3.2.0, 3.1.2, 2.10.1 > > Attachments: YARN-8680-branch-2.10.001.patch, > YARN-8680-branch-2.10.002.patch, YARN-8680.00.patch, YARN-8680.01.patch, > YARN-8680.02.patch, YARN-8680.03.patch, YARN-8680.04.patch > > > Similar to YARN-8242, implement iterable abstraction for > LocalResourceTrackerState to load completed and in progress resources when > needed rather than loading them all at a time for a respective state. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083362#comment-17083362 ] Hadoop QA commented on YARN-10234: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | YARN-10234 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12999895/YARN-10234-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 0a20b5059b70 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aeeebc5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25889/testReport/ | | Max. process+thread count | 834 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25889/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > FS-CS converter: don't enable au
[jira] [Commented] (YARN-10219) YARN service placement constraints is broken
[ https://issues.apache.org/jira/browse/YARN-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083402#comment-17083402 ] Eric Yang commented on YARN-10219: -- Thank you [~prabhujoseph]. > YARN service placement constraints is broken > > > Key: YARN-10219 > URL: https://issues.apache.org/jira/browse/YARN-10219 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2, 3.3.0, 3.2.1, 3.1.3 >Reporter: Eric Yang >Assignee: Eric Yang >Priority: Blocker > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10219.001.patch, YARN-10219.002.patch, > YARN-10219.003.patch, YARN-10219.004.patch, YARN-10219.005.patch > > > YARN service placement constraints do not work with node labels or node > attributes. Example of placement constraints: > {code} > "placement_policy": { > "constraints": [ > { > "type": "AFFINITY", > "scope": "NODE", > "node_attributes": { > "label":["genfile"] > }, > "target_tags": [ > "ping" > ] > } > ] > }, > {code} > Node attribute added: > {code} ./bin/yarn nodeattributes -add "host-3.example.com:label=genfile" > {code} > Scheduling activities shows: > {code} Node does not match partition or placement constraints, > unsatisfied PC expression="in,node,ping", target-type=ALLOCATION_TAG > > 1 > host-3.example.com:45454{code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-9848: -- Target Version/s: 3.3.0 > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083431#comment-17083431 ] Brahma Reddy Battula commented on YARN-9848: [~Steven Rand] can you please upload the patch, as this is a blocker for the 3.3.0 release. > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Major > Attachments: YARN-9848-01.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-9848: -- Priority: Blocker (was: Major) > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083434#comment-17083434 ] Sunil G commented on YARN-9848: --- [~Steven Rand], I can help in reviewing this today. Thanks. > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steven Rand updated YARN-9848: -- Attachment: YARN-9848.002.patch > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch, YARN-9848.002.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083449#comment-17083449 ] Steven Rand commented on YARN-9848: --- From trying to apply the patch locally, it seems that trunk has changed since I wrote it, and it no longer applies cleanly. I'll upload a new one soon. > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch, YARN-9848.002.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-10233: Attachment: YARN-10233.001.patch > [YARN UI2] No Logs were found in "YARN Daemon Logs" page > > > Key: YARN-10233 > URL: https://issues.apache.org/jira/browse/YARN-10233 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Akhil PB >Assignee: Akhil PB >Priority: Blocker > Fix For: 3.3.0 > > Attachments: YARN-10233.001.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083452#comment-17083452 ] Akhil PB commented on YARN-10233: - [~sunilg] [~prabhujoseph] Could you please review the patch. > [YARN UI2] No Logs were found in "YARN Daemon Logs" page > > > Key: YARN-10233 > URL: https://issues.apache.org/jira/browse/YARN-10233 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Akhil PB >Assignee: Akhil PB >Priority: Blocker > Fix For: 3.3.0 > > Attachments: YARN-10233.001.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steven Rand updated YARN-9848: -- Attachment: YARN-9848.003.patch > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch, YARN-9848.002.patch, > YARN-9848.003.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083449#comment-17083449 ] Steven Rand edited comment on YARN-9848 at 4/14/20, 5:30 PM: - From trying to apply the patch locally, it seems that trunk has changed since I wrote it, and it no longer applies cleanly. I'll upload a new one soon. EDIT: The {{YARN-9848.003.patch}} file accounts for the changes from YARN-9886 and applies to trunk. was (Author: steven rand): From trying to apply the patch locally, it seems that trunk has changed since I wrote it, and it no longer applies cleanly. I'll upload a new one soon. > revert YARN-4946 > > > Key: YARN-9848 > URL: https://issues.apache.org/jira/browse/YARN-9848 > Project: Hadoop YARN > Issue Type: Bug > Components: log-aggregation, resourcemanager >Reporter: Steven Rand >Priority: Blocker > Attachments: YARN-9848-01.patch, YARN-9848.002.patch, > YARN-9848.003.patch > > > In YARN-4946, we've been discussing a revert due to the potential for keeping > more applications in the state store than desired, and the potential to > greatly increase RM recovery times. > > I'm in favor of reverting the patch, but other ideas along the lines of > YARN-9571 would work as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated YARN-8680: -- Target Version/s: 3.2.0, 3.0.2, 2.10.1 (was: 3.0.2, 3.2.0) Thanks for the cherry-pick patch [~Jim_Brennan]. I committed this to branch-2.10. > YARN NM: Implement Iterable Abstraction for LocalResourceTracker state > -- > > Key: YARN-8680 > URL: https://issues.apache.org/jira/browse/YARN-8680 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Pradeep Ambati >Assignee: Pradeep Ambati >Priority: Critical > Fix For: 3.2.0, 3.1.2, 2.10.1 > > Attachments: YARN-8680-branch-2.10.001.patch, > YARN-8680-branch-2.10.002.patch, YARN-8680.00.patch, YARN-8680.01.patch, > YARN-8680.02.patch, YARN-8680.03.patch, YARN-8680.04.patch > > > Similar to YARN-8242, implement iterable abstraction for > LocalResourceTrackerState to load completed and in progress resources when > needed rather than loading them all at a time for a respective state. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTracker state
[ https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083532#comment-17083532 ] Jim Brennan commented on YARN-8680: --- Thanks [~ebadger]! > YARN NM: Implement Iterable Abstraction for LocalResourceTracker state > -- > > Key: YARN-8680 > URL: https://issues.apache.org/jira/browse/YARN-8680 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Pradeep Ambati >Assignee: Pradeep Ambati >Priority: Critical > Fix For: 3.2.0, 3.1.2, 2.10.1 > > Attachments: YARN-8680-branch-2.10.001.patch, > YARN-8680-branch-2.10.002.patch, YARN-8680.00.patch, YARN-8680.01.patch, > YARN-8680.02.patch, YARN-8680.03.patch, YARN-8680.04.patch > > > Similar to YARN-8242, implement iterable abstraction for > LocalResourceTrackerState to load completed and in progress resources when > needed rather than loading them all at a time for a respective state. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9848) revert YARN-4946
[ https://issues.apache.org/jira/browse/YARN-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083570#comment-17083570 ] Hadoop QA commented on YARN-9848: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 194 unchanged - 6 fixed = 197 total (was 200) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 57s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | YARN-9848 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1233/YARN-9848.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8fdda6cc7f1b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7b2d84d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/25891/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25891/testReport/ | | Max. process+thread count | 855 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083594#comment-17083594 ] Peter Bacsko commented on YARN-10234: - [~sunilg] [~snemeth] please review & commit. > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
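[Editor's note] The root-level property that triggers the exception above can be illustrated with a hypothetical capacity-scheduler.xml fragment. The property naming follows the standard Capacity Scheduler convention for auto-created leaf queues; the queue name `root.users` is invented for illustration and is not taken from the actual converter output:

```xml
<!-- Problematic: auto-create enabled directly on root. Root cannot be a
     managed parent queue, and the scheduler fails on reinitialize with the
     ClassCastException shown in the stack trace above. -->
<property>
  <name>yarn.scheduler.capacity.root.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>

<!-- Fine: the same flag scoped to a non-root parent queue. -->
<property>
  <name>yarn.scheduler.capacity.root.users.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
```

The fix discussed in this issue is for the FS-CS converter to skip emitting the first form for the root queue.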
[jira] [Commented] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083602#comment-17083602 ] Jonathan Hung commented on YARN-9954: - Thanks [~BilwaST], a few comments * Let's change {{/**Max size of application tags.*/}} -> {{/** Max number of application tags.*/}} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? > Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083602#comment-17083602 ] Jonathan Hung edited comment on YARN-9954 at 4/14/20, 8:52 PM: --- Thanks [~BilwaST], a few comments * Let's change {{/** Max size of application tags.*/}} -> {{/** Max number of application tags.*/}} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? was (Author: jhung): Thanks [~BilwaST], a few comments * Let's change {{/**Max size of application tags.*/}} -> {{/** Max number of application tags.*/}} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? 
> Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083602#comment-17083602 ] Jonathan Hung edited comment on YARN-9954 at 4/14/20, 8:53 PM: --- Thanks [~BilwaST], a few comments * Let's change {noformat}/**Max size of application tags.*/{noformat} -> {noformat}/** Max number of application tags.*/{noformat} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? was (Author: jhung): Thanks [~BilwaST], a few comments * Let's change {{/** Max size of application tags.*/}} -> {{/** Max number of application tags.*/}} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? 
> Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083602#comment-17083602 ] Jonathan Hung edited comment on YARN-9954 at 4/14/20, 8:54 PM: --- Thanks [~BilwaST], a few comments * Let's change {noformat}/**Max size of application tags.*/{noformat} -> {noformat}/** Max number of application tags.*/{noformat} * Also in yarn-default.xml, let's change {noformat}Max size of application tags {noformat} -> {noformat}Max number of application tags {noformat} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? was (Author: jhung): Thanks [~BilwaST], a few comments * Let's change {noformat}/**Max size of application tags.*/{noformat} -> {noformat}/** Max number of application tags.*/{noformat} * {{Max size of application tags }} -> {{Max number of application tags }} * Agree with Adam, let's wrap the IllegalArgumentExceptions as YarnException via RPCUtil.getRemoteException Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? 
> Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
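[Editor's note] The validation and exception-wrapping pattern discussed in the review comments above can be sketched as a standalone illustration. This is a hypothetical sketch, not the actual YARN-9954 patch: the class name, constructor, and messages are invented, and in the real change both limits would be read from configuration (replacing the hardcoded APPLICATION_MAX_TAGS / APPLICATION_MAX_TAG_LENGTH constants) and the IllegalArgumentException would be wrapped into a YarnException via RPCUtil.getRemoteException before crossing the RPC boundary:

```java
import java.util.List;

// Hypothetical sketch of configurable application-tag validation.
// In the RM these limits would come from yarn-site.xml rather than
// constructor arguments, and the thrown IllegalArgumentException would
// be wrapped via RPCUtil.getRemoteException as suggested in the review.
public class AppTagValidator {
    private final int maxTags;       // configurable max number of tags
    private final int maxTagLength;  // configurable max length per tag

    public AppTagValidator(int maxTags, int maxTagLength) {
        this.maxTags = maxTags;
        this.maxTagLength = maxTagLength;
    }

    // Rejects tag sets that exceed either configured limit.
    public void validate(List<String> tags) {
        if (tags.size() > maxTags) {
            throw new IllegalArgumentException(
                "Too many application tags: " + tags.size()
                    + ", limit is " + maxTags);
        }
        for (String tag : tags) {
            if (tag.length() > maxTagLength) {
                throw new IllegalArgumentException(
                    "Tag exceeds " + maxTagLength + " characters: " + tag);
            }
        }
    }
}
```

Setting the defaults to the old constant values (10 tags, 100 characters), as proposed in the comment, would keep existing behavior unchanged while allowing larger limits where needed.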
[jira] [Commented] (YARN-10233) [YARN UI2] No Logs were found in "YARN Daemon Logs" page
[ https://issues.apache.org/jira/browse/YARN-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083741#comment-17083741 ] Hadoop QA commented on YARN-10233: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 41m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | YARN-10233 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1232/YARN-10233.001.patch | | Optional Tests | dupname asflicense shadedclient | | uname | Linux 31bd9373b533 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ae474e1 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 416 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/25892/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > [YARN UI2] No Logs were found in "YARN Daemon Logs" page > > > Key: YARN-10233 > URL: https://issues.apache.org/jira/browse/YARN-10233 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Akhil PB >Assignee: Akhil PB >Priority: Blocker > Fix For: 3.3.0 > > Attachments: YARN-10233.001.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-10235) Yarn New Features
Chengwei Wang created YARN-10235: Summary: Yarn New Features Key: YARN-10235 URL: https://issues.apache.org/jira/browse/YARN-10235 Project: Hadoop YARN Issue Type: New Feature Reporter: Chengwei Wang -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10235) Set running task limit when mapreduce job is running
[ https://issues.apache.org/jira/browse/YARN-10235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengwei Wang updated YARN-10235: - Summary: Set running task limit when mapreduce job is running (was: Yarn New Features) > Set running task limit when mapreduce job is running > > > Key: YARN-10235 > URL: https://issues.apache.org/jira/browse/YARN-10235 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Chengwei Wang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10235) Enable to set running task limit when mapreduce job is running
[ https://issues.apache.org/jira/browse/YARN-10235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengwei Wang updated YARN-10235: - Summary: Enable to set running task limit when mapreduce job is running (was: Set running task limit when mapreduce job is running) > Enable to set running task limit when mapreduce job is running > -- > > Key: YARN-10235 > URL: https://issues.apache.org/jira/browse/YARN-10235 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Chengwei Wang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10234: -- Fix Version/s: 3.4.0 > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.4.0 > > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
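For readers tracking this converter issue, the guard amounts to only ever emitting the auto-create property for non-root parent queues. A minimal capacity-scheduler.xml sketch of the safe shape (queue name {{root.users}} is illustrative, not from the patch):

```xml
<!-- Illustrative only: enable auto-created leaf queues under a non-root
     parent such as root.users; never under root itself. -->
<property>
  <name>yarn.scheduler.capacity.root.users.auto-create-child-queue.enabled</name>
  <value>true</value>
</property>
<!-- Auto-created leaf queues also need a leaf-queue template, e.g.: -->
<property>
  <name>yarn.scheduler.capacity.root.users.leaf-queue-template.capacity</name>
  <value>10</value>
</property>
```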
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083809#comment-17083809 ] Szilard Nemeth commented on YARN-10234: --- Hi [~pbacsko], Patch looks good to me, committed to trunk. Checking if this can be committed to branch-3.3 without conflicts. > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10234: -- Fix Version/s: 3.3.1 > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.4.0, 3.3.1 > > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083812#comment-17083812 ] Szilard Nemeth commented on YARN-10234: --- Committed to branch-3.3 as well. Given that other jiras under YARN-9698 are not backported to branch-3.2, I'm resolving this jira. > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.4.0, 3.3.1 > > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init 
queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083814#comment-17083814 ] Surendra Singh Lilhore commented on YARN-9954: -- {quote}Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? {quote} Better now mark it deprecated and remove it in hadoop 3.4.0. > Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
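The configurable limits under discussion boil down to a validation step at submission time. A self-contained sketch of that check (class and method names here are illustrative stand-ins, not the actual YARN code; the real patch would read the limits from YarnConfiguration using the keys discussed above):

```java
import java.util.Set;

public class TagLimits {
    // Defaults mirroring the hardcoded values quoted in the issue description;
    // in the proposed change these become configurable defaults.
    static final int DEFAULT_MAX_TAGS = 10;
    static final int DEFAULT_MAX_TAG_LENGTH = 100;

    // Reject a tag set that exceeds the configured count or length limits.
    static void validateTags(Set<String> tags, int maxTags, int maxTagLength) {
        if (tags.size() > maxTags) {
            throw new IllegalArgumentException(
                "Too many applicationTags: " + tags.size() + ", max allowed " + maxTags);
        }
        for (String tag : tags) {
            if (tag.length() > maxTagLength) {
                throw new IllegalArgumentException(
                    "Tag " + tag + " is longer than " + maxTagLength + " characters");
            }
        }
    }
}
```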
[jira] [Comment Edited] (YARN-9954) Configurable max application tags and max tag length
[ https://issues.apache.org/jira/browse/YARN-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083814#comment-17083814 ] Surendra Singh Lilhore edited comment on YARN-9954 at 4/15/20, 5:36 AM: {quote}Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? {quote} Better now mark it deprecated and remove it in hadoop 3.4.0. was (Author: surendrasingh): {quote}Also, this jira will be useful to have in older minor versions. But we cannot remove @Evolving fields within a minor version. Shall we open a separate jira to remove these fields, and in this jira set DEFAULT_RM_APPLICATION_MAX_TAGS to APPLICATION_MAX_TAGS and set DEFAULT_RM_APPLICATION_MAX_TAG_LENGTH to APPLICATION_MAX_TAG_LENGTH ? {quote} Better now mark it deprecated now and remove it in hadoop 3.4.0. > Configurable max application tags and max tag length > > > Key: YARN-9954 > URL: https://issues.apache.org/jira/browse/YARN-9954 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Bilwa S T >Priority: Major > Attachments: YARN-9954.001.patch > > > Currently max tags and max tag length is hardcoded, it should be configurable > {noformat} > @Evolving > public static final int APPLICATION_MAX_TAGS = 10; > @Evolving > public static final int APPLICATION_MAX_TAG_LENGTH = 100; {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5277) When localizers fail due to resource timestamps being out, provide more diagnostics
[ https://issues.apache.org/jira/browse/YARN-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083817#comment-17083817 ] Szilard Nemeth commented on YARN-5277: -- Thanks [~sahuja], Committed patches for both branches. Thanks a lot. Resolving jira. > When localizers fail due to resource timestamps being out, provide more > diagnostics > --- > > Key: YARN-5277 > URL: https://issues.apache.org/jira/browse/YARN-5277 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.4.0 > > Attachments: YARN-5277-branch-3.2.003.patch, > YARN-5277-branch-3.3.004.patch, YARN-5277.001.patch, YARN-5277.002.patch > > > When an NM fails a resource D/L as the timestamps are wrong, there's not much > info, just two long values. > It would be good to also include the local time values, *and the current wall > time*. These are the things people need to know when trying to work out what > went wrong -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
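The improvement is essentially about enriching the error string: alongside the two raw long timestamps, print them as human-readable instants plus the current wall time. A sketch of such a message builder (wording and names assumed, not the committed patch):

```java
import java.time.Instant;

public class LocalizerDiagnostics {
    // Include both the raw longs and readable instants, plus the current wall
    // time, so a mismatch can be correlated with clock skew or a re-uploaded file.
    static String timestampMismatch(String resource, long expectedTs, long actualTs) {
        return String.format(
            "Resource %s changed on src filesystem - expected: \"%s\" (%d), was: \"%s\" (%d), current time: \"%s\"",
            resource,
            Instant.ofEpochMilli(expectedTs), expectedTs,
            Instant.ofEpochMilli(actualTs), actualTs,
            Instant.now());
    }
}
```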
[jira] [Updated] (YARN-5277) When localizers fail due to resource timestamps being out, provide more diagnostics
[ https://issues.apache.org/jira/browse/YARN-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-5277: - Fix Version/s: 3.2.2 3.3.0 > When localizers fail due to resource timestamps being out, provide more > diagnostics > --- > > Key: YARN-5277 > URL: https://issues.apache.org/jira/browse/YARN-5277 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-5277-branch-3.2.003.patch, > YARN-5277-branch-3.3.004.patch, YARN-5277.001.patch, YARN-5277.002.patch > > > When an NM fails a resource D/L as the timestamps are wrong, there's not much > info, just two long values. > It would be good to also include the local time values, *and the current wall > time*. These are the things people need to know when trying to work out what > went wrong -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10234: -- Fix Version/s: (was: 3.3.1) 3.3.0 > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. > {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10207) CLOSE_WAIT socket connection leaks during rendering of (corrupted) aggregated logs on the JobHistoryServer Web UI
[ https://issues.apache.org/jira/browse/YARN-10207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10207: -- Fix Version/s: (was: 3.3.1) 3.3.0 > CLOSE_WAIT socket connection leaks during rendering of (corrupted) aggregated > logs on the JobHistoryServer Web UI > - > > Key: YARN-10207 > URL: https://issues.apache.org/jira/browse/YARN-10207 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Reporter: Siddharth Ahuja >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-10207.001.patch, YARN-10207.002.patch, > YARN-10207.003.patch, YARN-10207.004.patch, YARN-10207.branch-3.2.001.patch > > > File descriptor leaks are observed coming from the JobHistoryServer process > while it tries to render a "corrupted" aggregated log on the JHS Web UI. > Issue reproduced using the following steps: > # Ran a sample Hadoop MR Pi job, it had the id - > application_1582676649923_0026. > # Copied an aggregated log file from HDFS to local FS: > {code} > hdfs dfs -get > /tmp/logs/systest/logs/application_1582676649923_0026/_8041 > {code} > # Updated the TFile metadata at the bottom of this file with some junk to > corrupt the file : > *Before:* > {code} > > ^@^GVERSION*(^@&container_1582676649923_0026_01_03^F^Dnone^A^Pª5²ª5²^C^Qdata:BCFile.index^Dnoneª5þ^M^M^Pdata:TFile.index^Dnoneª5È66^Odata:TFile.meta^Dnoneª5Â^F^F^@^@^@^@^@^B6^K^@^A^@^@Ñ^QÓh<91>µ×¶9ßA@<92>ºáP > {code} > *After:* > {code} > > ^@^GVERSION*(^@&container_1582676649923_0026_01_03^F^Dnone^A^Pª5²ª5²^C^Qdata:BCFile.index^Dnoneª5þ^M^M^Pdata:TFile.index^Dnoneª5È66^Odata:TFile.meta^Dnoneª5Â^F^F^@^@^@^@^@^B6^K^@^A^@^@Ñ^QÓh<91>µ×¶9ßA@<92>ºáPblah > {code} > Notice "blah" (junk) added at the very end. 
> # Remove the existing aggregated log file that will need to be replaced by > our modified copy from step 3 (as otherwise HDFS will prevent it from placing > the file with the same name as it already exists): > {code} > hdfs dfs -rm -r -f > /tmp/logs/systest/logs/application_1582676649923_0026/_8041 > {code} > # Upload the corrupted aggregated file back to HDFS: > {code} > hdfs dfs -put _8041 > /tmp/logs/systest/logs/application_1582676649923_0026 > {code} > # Visit HistoryServer Web UI > # Click on job_1582676649923_0026 > # Click on "logs" link against the AM (assuming the AM ran on nm_hostname) > # Review the JHS logs, following exception will be seen: > {code} > 2020-03-24 20:03:48,484 ERROR org.apache.hadoop.yarn.webapp.View: Error > getting logs for job_1582676649923_0026 > java.io.IOException: Not a valid BCFile. > at > org.apache.hadoop.io.file.tfile.BCFile$Magic.readAndVerify(BCFile.java:927) > at > org.apache.hadoop.io.file.tfile.BCFile$Reader.(BCFile.java:628) > at > org.apache.hadoop.io.file.tfile.TFile$Reader.(TFile.java:804) > at > org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.(AggregatedLogFormat.java:588) > at > org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.TFileAggregatedLogsBlock.render(TFileAggregatedLogsBlock.java:111) > at > org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController.renderAggregatedLogsBlock(LogAggregationTFileController.java:341) > at > org.apache.hadoop.yarn.webapp.log.AggregatedLogsBlock.render(AggregatedLogsBlock.java:117) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69) > at > org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79) > at org.apache.hadoop.yarn.webapp.View.render(View.java:235) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49) > at > org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117) > at > 
org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848) > at > org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71) > at > org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82) > at > org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212) > at > org.apache.hadoop.mapreduce.v2.hs.webapp.HsController.logs(HsController.java:202) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorI
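The CLOSE_WAIT leak above follows the classic pattern: a reader constructor throws on a corrupted header ("Not a valid BCFile") and the already-opened stream is never closed, so its file descriptor lingers. The shape of the fix, independent of the TFile specifics, is to guarantee the close on the failure path too. A sketch with a stand-in header check (not the actual patch):

```java
import java.io.IOException;
import java.io.InputStream;

public class SafeRender {
    // Stand-in for TFile/BCFile magic validation: throws on corrupt input.
    static void checkMagic(InputStream in) throws IOException {
        if (in.read() != 'T') {
            throw new IOException("Not a valid BCFile.");
        }
    }

    // try-with-resources guarantees close() runs even when parsing throws,
    // so the descriptor behind the stream is always released.
    static boolean render(InputStream in) throws IOException {
        try (InputStream stream = in) {
            checkMagic(stream);
            return true;
        }
    }
}
```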
[jira] [Updated] (YARN-9995) Code cleanup in TestSchedConfCLI
[ https://issues.apache.org/jira/browse/YARN-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9995: - Fix Version/s: (was: 3.3.1) 3.3.0 > Code cleanup in TestSchedConfCLI > > > Key: YARN-9995 > URL: https://issues.apache.org/jira/browse/YARN-9995 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Bilwa S T >Priority: Minor > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-9995.001.patch, YARN-9995.002.patch, > YARN-9995.003.patch, YARN-9995.004.patch, YARN-9995.branch-3.2.patch > > > Some tests are too verbose: > - add / delete / remove queues testcases: Creating SchedConfUpdateInfo > instances could be simplified with a helper method or something like that. > - Some fields can be converted to local variables: sysOutStream, sysOut, > sysErr, csConf > - Any additional cleanup -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9995) Code cleanup in TestSchedConfCLI
[ https://issues.apache.org/jira/browse/YARN-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9995: - Fix Version/s: 3.2.2 > Code cleanup in TestSchedConfCLI > > > Key: YARN-9995 > URL: https://issues.apache.org/jira/browse/YARN-9995 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Bilwa S T >Priority: Minor > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-9995.001.patch, YARN-9995.002.patch, > YARN-9995.003.patch, YARN-9995.004.patch, YARN-9995.branch-3.2.patch > > > Some tests are too verbose: > - add / delete / remove queues testcases: Creating SchedConfUpdateInfo > instances could be simplified with a helper method or something like that. > - Some fields can be converted to local variables: sysOutStream, sysOut, > sysErr, csConf > - Any additional cleanup -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9995) Code cleanup in TestSchedConfCLI
[ https://issues.apache.org/jira/browse/YARN-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083823#comment-17083823 ] Szilard Nemeth commented on YARN-9995: -- Thanks [~BilwaST], Committed your patch to branch-3.2. Thanks for filing the jira as well. Resolving this. > Code cleanup in TestSchedConfCLI > > > Key: YARN-9995 > URL: https://issues.apache.org/jira/browse/YARN-9995 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Bilwa S T >Priority: Minor > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-9995.001.patch, YARN-9995.002.patch, > YARN-9995.003.patch, YARN-9995.004.patch, YARN-9995.branch-3.2.patch > > > Some tests are too verbose: > - add / delete / remove queues testcases: Creating SchedConfUpdateInfo > instances could be simplified with a helper method or something like that. > - Some fields can be converted to local variables: sysOutStream, sysOut, > sysErr, csConf > - Any additional cleanup -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9354: - Fix Version/s: 3.4.0 > Resources should be created with ResourceTypesTestHelper instead of TestUtils > - > > Key: YARN-9354 > URL: https://issues.apache.org/jira/browse/YARN-9354 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Trivial > Labels: newbie, newbie++ > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-9354.001.patch, YARN-9354.002.patch, > YARN-9354.003.patch, YARN-9354.004.patch, YARN-9354.branch-3.2.001.patch, > YARN-9354.branch-3.2.002.patch, YARN-9354.branch-3.2.003.patch > > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils#createResource > has not identical, but very similar implementation to > org.apache.hadoop.yarn.resourcetypes.ResourceTypesTestHelper#newResource. > Since these 2 methods are doing the same essentially and > ResourceTypesTestHelper is newer and used more, TestUtils#createResource > should be replaced with ResourceTypesTestHelper#newResource with all > occurrence. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10234) FS-CS converter: don't enable auto-create queue property for root
[ https://issues.apache.org/jira/browse/YARN-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083826#comment-17083826 ] Hudson commented on YARN-10234: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18145 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18145/]) YARN-10234. FS-CS converter: don't enable auto-create queue property for (snemeth: rev 55fcbcb5c2a096f98f273fda52ae25ecaa1d8bb6) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSQueueConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java > FS-CS converter: don't enable auto-create queue property for root > - > > Key: YARN-10234 > URL: https://issues.apache.org/jira/browse/YARN-10234 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Critical > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10234-001.patch, YARN-10234-002.patch > > > The auto-create-child-queue property should not be enabled for root, > otherwise it creates an exception inside capacity scheduler. 
> {noformat} > 2020-04-14 09:48:54,117 INFO org.apache.hadoop.ha.ActiveStandbyElector: > Trying to re-establish ZK session > 2020-04-14 09:48:54,117 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received > RMFatalEvent of type TRANSITION_TO_ACTIVE_FAILED, caused by failure to > refresh configuration settings: org.apache.hadoop.ha.ServiceFailedException: > RefreshAll operation failed > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:772) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307) > at > org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144) > at > org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:896) > at > org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:476) > at > org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:636) > at > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) > Caused by: java.io.IOException: Failed to re-init queues : null > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:467) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:489) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:430) > at > org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshAll(AdminService.java:761) > ... 6 more > Caused by: java.lang.ClassCastException > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9354: - Fix Version/s: 3.2.2 > Resources should be created with ResourceTypesTestHelper instead of TestUtils > - > > Key: YARN-9354 > URL: https://issues.apache.org/jira/browse/YARN-9354 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Trivial > Labels: newbie, newbie++ > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-9354.001.patch, YARN-9354.002.patch, > YARN-9354.003.patch, YARN-9354.004.patch, YARN-9354.branch-3.2.001.patch, > YARN-9354.branch-3.2.002.patch, YARN-9354.branch-3.2.003.patch > > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils#createResource > has an implementation that is not identical, but very similar, to > org.apache.hadoop.yarn.resourcetypes.ResourceTypesTestHelper#newResource. > Since these two methods essentially do the same thing, and > ResourceTypesTestHelper is newer and more widely used, TestUtils#createResource > should be replaced with ResourceTypesTestHelper#newResource in all > occurrences.
[jira] [Commented] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083829#comment-17083829 ] Szilard Nemeth commented on YARN-9354: -- Hi [~gandras], Thanks for the branch-3.2 patch; committed it to the branch. Resolving this jira. > Resources should be created with ResourceTypesTestHelper instead of TestUtils > - > > Key: YARN-9354 > URL: https://issues.apache.org/jira/browse/YARN-9354 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Trivial > Labels: newbie, newbie++ > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-9354.001.patch, YARN-9354.002.patch, > YARN-9354.003.patch, YARN-9354.004.patch, YARN-9354.branch-3.2.001.patch, > YARN-9354.branch-3.2.002.patch, YARN-9354.branch-3.2.003.patch > > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils#createResource > has an implementation that is not identical, but very similar, to > org.apache.hadoop.yarn.resourcetypes.ResourceTypesTestHelper#newResource. > Since these two methods essentially do the same thing, and > ResourceTypesTestHelper is newer and more widely used, TestUtils#createResource > should be replaced with ResourceTypesTestHelper#newResource in all > occurrences.
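The consolidation proposed in YARN-9354 can be illustrated with a sketch along these lines. The `Resource` class below is a simplified stand-in, not the real org.apache.hadoop.yarn.api.records.Resource, and the exact signature of ResourceTypesTestHelper#newResource is assumed; the idea is that one shared factory taking memory, vcores, and a map of custom resource-type values replaces near-duplicate per-test-class helpers.

```java
import java.util.HashMap;
import java.util.Map;

public class ResourceHelperSketch {

    // Minimal stand-in for org.apache.hadoop.yarn.api.records.Resource.
    public static class Resource {
        public final long memoryMb;
        public final int vcores;
        public final Map<String, Long> custom = new HashMap<>();

        Resource(long memoryMb, int vcores) {
            this.memoryMb = memoryMb;
            this.vcores = vcores;
        }
    }

    // One shared factory in the spirit of ResourceTypesTestHelper#newResource:
    // memory, vcores, plus custom resource-type values given as strings.
    public static Resource newResource(long memoryMb, int vcores,
            Map<String, String> customValues) {
        Resource r = new Resource(memoryMb, vcores);
        for (Map.Entry<String, String> e : customValues.entrySet()) {
            r.custom.put(e.getKey(), Long.parseLong(e.getValue()));
        }
        return r;
    }

    public static void main(String[] args) {
        Map<String, String> custom = new HashMap<>();
        custom.put("gpu", "2");
        Resource r = newResource(4096, 4, custom);
        if (r.custom.get("gpu") != 2L) {
            throw new AssertionError("custom type not parsed");
        }
        System.out.println("ok");
    }
}
```

With a single factory like this, the various test classes only differ in the values they pass, which is what makes the TestUtils#createResource duplication removable.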
[jira] [Updated] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10002: -- Fix Version/s: 3.4.0 > Code cleanup and improvements in ConfigurationStoreBaseTest > --- > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch, YARN-10002.branch-3.2.001.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses
[jira] [Updated] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10002: -- Fix Version/s: 3.2.2 > Code cleanup and improvements in ConfigurationStoreBaseTest > --- > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch, YARN-10002.branch-3.2.001.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses
[jira] [Commented] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083838#comment-17083838 ] Szilard Nemeth commented on YARN-10002: --- Thanks [~bteke] for the branch-3.2 patch, LGTM and committed to the respective branch. Resolving this jira. > Code cleanup and improvements in ConfigurationStoreBaseTest > --- > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Fix For: 3.3.0, 3.2.2, 3.4.0 > > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch, YARN-10002.branch-3.2.001.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses
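The helper suggested in the YARN-10002 description might look like the following. This is a hedged sketch: `LogMutation` here is a simplified stand-in for YarnConfigurationStore.LogMutation, and the varargs `prepareLogMutation` signature is an assumption about how the patch could shape the API, not the actual committed code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogMutationSketch {

    // Simplified stand-in for YarnConfigurationStore.LogMutation.
    public static class LogMutation {
        public final Map<String, String> updates;
        public final String user;

        LogMutation(Map<String, String> updates, String user) {
            this.updates = updates;
            this.user = user;
        }
    }

    // Varargs helper covering the 1-, 2- and 3-update cases in one call,
    // e.g. prepareLogMutation("key1", "val1", "key2", "val2").
    public static LogMutation prepareLogMutation(String... keysAndValues) {
        if (keysAndValues.length % 2 != 0) {
            throw new IllegalArgumentException("expected key/value pairs");
        }
        Map<String, String> updates = new LinkedHashMap<>();
        for (int i = 0; i < keysAndValues.length; i += 2) {
            updates.put(keysAndValues[i], keysAndValues[i + 1]);
        }
        return new LogMutation(updates, "testUser");
    }

    public static void main(String[] args) {
        LogMutation m = prepareLogMutation("key1", "val1", "key2", "val2");
        if (m.updates.size() != 2 || !"val1".equals(m.updates.get("key1"))) {
            throw new AssertionError("unexpected mutation contents");
        }
        System.out.println("ok");
    }
}
```

Subclasses then build each mutation in one line instead of repeating the map-construction boilerplate the description calls out.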
[jira] [Commented] (YARN-10001) Add explanation of unimplemented methods in InMemoryConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-10001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083839#comment-17083839 ] Szilard Nemeth commented on YARN-10001: --- Hi [~sahuja], Just informing you that YARN-10002 got committed. > Add explanation of unimplemented methods in InMemoryConfigurationStore > -- > > Key: YARN-10001 > URL: https://issues.apache.org/jira/browse/YARN-10001 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10001-branch-3.2.003.patch, YARN-10001.001.patch, > YARN-10001.002.patch > >
[jira] [Commented] (YARN-9999) TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, general code cleanup
[ https://issues.apache.org/jira/browse/YARN-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083847#comment-17083847 ] Szilard Nemeth commented on YARN-9999: -- Hi [~bteke], Latest patch looks good to me, committed to trunk. Could you please create backports for branch-3.3 and branch-3.2? I assume branch-3.3 would be trivial, but I'm not sure about 3.2. Anyway, it's better to have Jenkins results for both branches here. > TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, > general code cleanup > --- > > Key: YARN-9999 > URL: https://issues.apache.org/jira/browse/YARN-9999 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Attachments: YARN-9999.001.patch, YARN-9999.002.patch, > YARN-9999.003.patch, YARN-9999.004.patch > > > All config store tests extend ConfigurationStoreBaseTest: > * TestInMemoryConfigurationStore > * TestLeveldbConfigurationStore > * TestZKConfigurationStore > TestFSSchedulerConfigurationStore should also extend from it. > Additionally, some general code cleanup can be applied as well.
[jira] [Updated] (YARN-9999) TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, general code cleanup
[ https://issues.apache.org/jira/browse/YARN-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9999: - Fix Version/s: 3.4.0 > TestFSSchedulerConfigurationStore: Extend from ConfigurationStoreBaseTest, > general code cleanup > --- > > Key: YARN-9999 > URL: https://issues.apache.org/jira/browse/YARN-9999 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Fix For: 3.4.0 > > Attachments: YARN-9999.001.patch, YARN-9999.002.patch, > YARN-9999.003.patch, YARN-9999.004.patch > > > All config store tests extend ConfigurationStoreBaseTest: > * TestInMemoryConfigurationStore > * TestLeveldbConfigurationStore > * TestZKConfigurationStore > TestFSSchedulerConfigurationStore should also extend from it. > Additionally, some general code cleanup can be applied as well.
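The base-test pattern this jira asks TestFSSchedulerConfigurationStore to adopt can be sketched roughly as follows. The names mirror the jira (ConfigurationStoreBaseTest, a per-store factory method) but everything here is a simplified, hypothetical stand-in for the real JUnit classes: the common checks live in an abstract base class, and each concrete store test only supplies its own store instance.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfStoreTestSketch {

    // Minimal stand-in for a scheduler configuration store.
    public interface ConfStore {
        void put(String key, String value);
        String get(String key);
    }

    // Common test logic shared by all store implementations,
    // in the spirit of ConfigurationStoreBaseTest.
    public static abstract class ConfigurationStoreBaseTest {
        abstract ConfStore createConfStore();

        public void testStoreAndRetrieve() {
            ConfStore store = createConfStore();
            store.put("yarn.scheduler.capacity.root.queues", "default");
            if (!"default".equals(store.get("yarn.scheduler.capacity.root.queues"))) {
                throw new AssertionError("stored value not retrieved");
            }
        }
    }

    // A concrete test only wires in its store; the checks come from the base class.
    public static class TestInMemoryStore extends ConfigurationStoreBaseTest {
        @Override
        ConfStore createConfStore() {
            Map<String, String> backing = new HashMap<>();
            return new ConfStore() {
                public void put(String k, String v) { backing.put(k, v); }
                public String get(String k) { return backing.get(k); }
            };
        }
    }

    public static void main(String[] args) {
        new TestInMemoryStore().testStoreAndRetrieve();
        System.out.println("ok");
    }
}
```

Moving TestFSSchedulerConfigurationStore onto this shape is what lets it share assertions with the in-memory, leveldb and ZK store tests instead of duplicating them.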
[jira] [Updated] (YARN-10001) Add explanation of unimplemented methods in InMemoryConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-10001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja updated YARN-10001: --- Attachment: (was: YARN-10001-branch-3.2.003.patch) > Add explanation of unimplemented methods in InMemoryConfigurationStore > -- > > Key: YARN-10001 > URL: https://issues.apache.org/jira/browse/YARN-10001 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10001-branch-3.2.001.patch, YARN-10001.001.patch, > YARN-10001.002.patch > >
[jira] [Updated] (YARN-10001) Add explanation of unimplemented methods in InMemoryConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-10001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Ahuja updated YARN-10001: --- Attachment: YARN-10001-branch-3.2.001.patch > Add explanation of unimplemented methods in InMemoryConfigurationStore > -- > > Key: YARN-10001 > URL: https://issues.apache.org/jira/browse/YARN-10001 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10001-branch-3.2.001.patch, YARN-10001.001.patch, > YARN-10001.002.patch > >
[jira] [Commented] (YARN-10001) Add explanation of unimplemented methods in InMemoryConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-10001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083852#comment-17083852 ] Siddharth Ahuja commented on YARN-10001: Hey [~snemeth], thanks for the update. I have gone ahead and created a patch for branch-3.2 now (removed the existing patch and replaced it with a new one). Will wait on the Jenkins build now, I suppose. > Add explanation of unimplemented methods in InMemoryConfigurationStore > -- > > Key: YARN-10001 > URL: https://issues.apache.org/jira/browse/YARN-10001 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Siddharth Ahuja >Priority: Major > Fix For: 3.3.0, 3.4.0 > > Attachments: YARN-10001-branch-3.2.001.patch, YARN-10001.001.patch, > YARN-10001.002.patch > >