[ 
https://issues.apache.org/jira/browse/GOBBLIN-1728?focusedWorklogId=818948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-818948
 ]

ASF GitHub Bot logged work on GOBBLIN-1728:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Oct/22 21:32
            Start Date: 20/Oct/22 21:32
    Worklog Time Spent: 10m 
      Work Description: hanghangliu commented on code in PR #3586:
URL: https://github.com/apache/gobblin/pull/3586#discussion_r1001137288


##########
gobblin-yarn/src/main/java/org/apache/gobblin/yarn/YarnService.java:
##########
@@ -845,8 +877,6 @@ public void onContainersAllocated(List<Container> containers) {
               instanceName = null;
             }
           }
-          allocatedContainerCountMap.put(containerHelixTag,

Review Comment:
   Yes, so your change should be better, since it adds protection when 
decreasing the container count. 





Issue Time Tracking
-------------------

    Worklog Id:     (was: 818948)
    Time Spent: 3h 50m  (was: 3h 40m)

> Yarn Service requests too many containers due to improper calculation
> ---------------------------------------------------------------------
>
>                 Key: GOBBLIN-1728
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-1728
>             Project: Apache Gobblin
>          Issue Type: New Feature
>            Reporter: Matthew Ho
>            Priority: Major
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Yarn Service is responsible for calculating the number of instances needed 
> for the Helix tasks. It tracks the instance count by asking YARN for the 
> number of outstanding resource requests and the number of allocated 
> containers.
>  
> It uses this count to decide whether to request more containers or shrink 
> the pool. This calculation is currently done improperly, so we keep 
> requesting containers even when enough have already been requested.
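
The reconciliation described above can be sketched as follows. This is a minimal illustration, not the actual Gobblin/YarnService code: all class and parameter names are hypothetical. The key point is that the delta is computed against both in-flight requests and allocations, and clamped at zero so an over-provisioned service never issues further requests.

```java
// Hedged sketch (illustrative names, not the real YarnService API):
// reconcile the desired container count against what YARN already
// knows about, so the service never re-requests containers that are
// either already allocated or already in flight.
public final class ContainerRequestCalculator {

  /**
   * @param targetCount         containers needed for the current Helix task load
   * @param outstandingRequests resource requests sent to YARN but not yet fulfilled
   * @param allocatedCount      containers YARN has already allocated
   * @return additional containers to request (never negative)
   */
  public static int additionalContainersToRequest(int targetCount,
      int outstandingRequests, int allocatedCount) {
    int alreadyCovered = outstandingRequests + allocatedCount;
    // Clamp at zero: a surplus means "request nothing", not a negative request.
    return Math.max(0, targetCount - alreadyCovered);
  }

  public static void main(String[] args) {
    // Target 10, 4 requests in flight, 5 allocated -> only 1 more is needed.
    System.out.println(additionalContainersToRequest(10, 4, 5)); // 1
    // Already over-provisioned -> request nothing rather than go negative.
    System.out.println(additionalContainersToRequest(10, 8, 5)); // 0
  }
}
```

Omitting the in-flight requests from `alreadyCovered` is exactly the failure mode this issue describes: each scheduling pass would see too few containers and ask YARN for the full shortfall again.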



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
