[ https://issues.apache.org/jira/browse/YARN-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749720#comment-17749720 ]

ASF GitHub Bot commented on YARN-7402:
--------------------------------------

slfan1989 commented on code in PR #5901:
URL: https://github.com/apache/hadoop/pull/5901#discussion_r1280592893


##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java:
##########
@@ -57,15 +58,23 @@ public static <T> T invokeRMWebService(String webAddr, String path, final Class<
     T obj = null;
 
     WebResource webResource = client.resource(webAddr);
-    ClientResponse response = webResource.path("ws/v1/cluster").path(path)
-        .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
-    if (response.getStatus() == HttpServletResponse.SC_OK) {
-      obj = response.getEntity(returnType);
-    } else {
-      throw new YarnRuntimeException("Bad response from remote web service: "
-          + response.getStatus());
+    ClientResponse response = null;
+    try {
+      response = webResource.path("ws/v1/cluster").path(path)
+          .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
+      if (response.getStatus() == SC_OK) {
+        obj = response.getEntity(returnType);
+      } else {
+        throw new YarnRuntimeException(
+            "Bad response from remote web service: " + response.getStatus());
+      }
+      return obj;
+    } finally {
+      if (response != null) {
+        response.close();

Review Comment:
   Thank you very much for reviewing the code! I will fix it.
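
   For reference, below is a minimal, self-contained sketch of the close-in-finally pattern this hunk adopts for the Jersey 1.x client. The class name RMWebServiceClientSketch and the in-method Client.create()/Client.destroy() calls are illustrative simplifications for the sketch, not the actual GPGUtils code.

    import javax.servlet.http.HttpServletResponse;
    import javax.ws.rs.core.MediaType;

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.ClientResponse;
    import com.sun.jersey.api.client.WebResource;

    import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

    public final class RMWebServiceClientSketch {

      // Illustrative sketch: GET ws/v1/cluster/<path> from an RM web address
      // and unmarshal the XML entity into returnType.
      public static <T> T invokeRMWebService(String webAddr, String path,
          Class<T> returnType) {
        Client client = Client.create();
        WebResource webResource = client.resource(webAddr);
        ClientResponse response = null;
        try {
          response = webResource.path("ws/v1/cluster").path(path)
              .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
          if (response.getStatus() == HttpServletResponse.SC_OK) {
            return response.getEntity(returnType);
          }
          throw new YarnRuntimeException(
              "Bad response from remote web service: " + response.getStatus());
        } finally {
          // Release the underlying connection even when the status check throws.
          if (response != null) {
            response.close();
          }
          client.destroy();
        }
      }
    }

   Closing the response in finally returns the connection to the client even on the non-200 path, which is the point of the change under review.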



##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java:
##########
@@ -44,10 +44,7 @@
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts;
-import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo;
-import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo;
-import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo;
-import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.*;

Review Comment:
   I will fix it.
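
   As a sketch of the intended fix, the wildcard would presumably be replaced by the explicit dao imports removed in this hunk:

    import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo;
    import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo;
    import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo;
    import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo;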





> Federation V2: Global Optimizations
> -----------------------------------
>
>                 Key: YARN-7402
>                 URL: https://issues.apache.org/jira/browse/YARN-7402
>             Project: Hadoop YARN
>          Issue Type: New Feature
>          Components: federation
>            Reporter: Carlo Curino
>            Assignee: Carlo Curino
>            Priority: Major
>              Labels: pull-request-available
>
> YARN Federation today requires manual configuration of queues within each 
> sub-cluster, and each RM operates "in isolation". This has a few issues:
> # Preemption is computed locally (and might far exceed the global need)
> # Jobs within a queue are forced to consume their resources "evenly" based on 
> queue mapping
> This umbrella JIRA tracks a new feature that leverages the 
> FederationStateStore as a synchronization mechanism among RMs, and allows for 
> allocation and preemption decisions to be based on a (close to up-to-date) 
> global view of the cluster allocation and demand. The JIRA also tracks 
> algorithms to automatically generate policies for Router and AMRMProxy to 
> shape the traffic to each sub-cluster, and general "maintenance" of the 
> FederationStateStore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
