wilfred-s edited a comment on pull request #281:
URL: 
https://github.com/apache/incubator-yunikorn-k8shim/pull/281#issuecomment-890948140


   Looking at the code and the tests, I think there is a problem with the 
calculation.
   This is the base you gave:
   ```
        // pod
        // initcontainers
        // IC1{500mi, 1000m, 1}
        // IC2{1024mi, 2000m, 4}
        // containers
        // C1{4096mi, 2000m, 2}
        // C2{1024mi, 5000m, 2}
        // result is {4096mi, 5000m, 5}
   ```
   There is a three step process:
   1) Get the maximum for the init containers. They run serially, which means 
that I just need to make sure that I can accommodate the largest request for 
each resource type.
   That gives me in this case: memory 1024Mi, cpu 2000m and gpu 4. In this 
specific case it all maps to IC2, as IC1 has all types smaller than IC2.
   2) Get the maximum usage for the regular containers. They run in parallel, 
which means I need to sum up the resources requested by all containers.
   The total usage of all regular containers is: memory 5120Mi, cpu 7000m and 
gpu 4.
   3) Calculate the maximum number of resources needed between the init 
container startup phase and the normal running phase:
   ```
   init memory: 1024Mi, normal memory: 5120Mi --> memory: 5120Mi
   init cpu: 2000m, normal cpu: 7000m --> cpu: 7000m
   init gpu: 4, normal gpu: 4 --> gpu: 4
   ```
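   The three-step rule above can be sketched in Go as follows. This is only an illustration of the calculation, not the shim's actual code: the `Resource` map type and the `podRequest`, `maxOf` and `add` helpers are made up for this example, and quantities are plain integers (memory in Mi, cpu in millicores, gpu as a count).

```go
package main

import "fmt"

// Resource maps a resource name to a quantity (illustrative type, not
// the shim's real one): memory in Mi, cpu in millicores, gpu as a count.
type Resource map[string]int64

// maxOf returns the element-wise maximum of two resource lists.
func maxOf(a, b Resource) Resource {
	out := Resource{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if v > out[k] {
			out[k] = v
		}
	}
	return out
}

// add returns the element-wise sum of two resource lists.
func add(a, b Resource) Resource {
	out := Resource{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		out[k] += v
	}
	return out
}

// podRequest applies the three steps:
// 1) max over the init containers (they run serially),
// 2) sum over the regular containers (they run in parallel),
// 3) element-wise max of the two results.
func podRequest(initContainers, containers []Resource) Resource {
	initMax := Resource{}
	for _, ic := range initContainers {
		initMax = maxOf(initMax, ic)
	}
	sum := Resource{}
	for _, c := range containers {
		sum = add(sum, c)
	}
	return maxOf(initMax, sum)
}

func main() {
	ic1 := Resource{"memory": 500, "cpu": 1000, "gpu": 1}
	ic2 := Resource{"memory": 1024, "cpu": 2000, "gpu": 4}
	c1 := Resource{"memory": 4096, "cpu": 2000, "gpu": 2}
	c2 := Resource{"memory": 1024, "cpu": 5000, "gpu": 2}
	fmt.Println(podRequest([]Resource{ic1, ic2}, []Resource{c1, c2}))
	// map[cpu:7000 gpu:4 memory:5120]
}
```

   Running this on the example containers reproduces the result discussed above: memory 5120Mi, cpu 7000m, gpu 4.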
   
   The request for this specific pod passed on by the shim to the core is: 
memory 5120Mi, cpu 7000m, gpu 4.
   
   If for example the init container IC2 would request 10000Mi for memory and 0 
(zero) gpu, the request would become: memory 10000Mi, cpu 7000m, gpu 4. CPU does 
not change: it is still the sum of C1 and C2; memory is now based on the IC2 
max; gpu remains at 4 as the sum of the regular containers is now the larger 
value.
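   That modified case can be checked with the same three steps. Again a minimal sketch, not the shim's code; the `maxQty` helper is made up for this example:

```go
package main

import "fmt"

// maxQty returns the larger of two quantities (illustrative helper).
func maxQty(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Step 1: init container maximum with the changed IC2 {memory 10000Mi,
	// cpu 2000m, gpu 0} and the unchanged IC1 {memory 500Mi, cpu 1000m, gpu 1}.
	initMem, initCPU, initGPU := maxQty(500, 10000), maxQty(1000, 2000), maxQty(1, 0)
	// Step 2: sum of the regular containers C1{4096Mi, 2000m, 2} and C2{1024Mi, 5000m, 2}.
	sumMem, sumCPU, sumGPU := int64(4096+1024), int64(2000+5000), int64(2+2)
	// Step 3: element-wise maximum of the two phases.
	fmt.Printf("memory %dMi, cpu %dm, gpu %d\n",
		maxQty(initMem, sumMem), maxQty(initCPU, sumCPU), maxQty(initGPU, sumGPU))
	// prints: memory 10000Mi, cpu 7000m, gpu 4
}
```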

