[ https://issues.apache.org/jira/browse/YUNIKORN-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated YUNIKORN-1747:
-----------------------------------
    Description: 
As with YUNIKORN-1746, the method {{nodeInfoListerImpl.HavePodsWithAffinityList()}} is called very frequently: once for every pod.
{noformat}
func (n nodeInfoListerImpl) HavePodsWithAffinityList() ([]*framework.NodeInfo, error) {
        nodes := n.cache.GetNodesInfoMap()
        result := make([]*framework.NodeInfo, 0, len(nodes))
        for _, node := range nodes {
                if len(node.PodsWithAffinity) > 0 {
                        result = append(result, node)
                }
        }
        return result, nil
}
{noformat}

This one is slightly trickier, but still doable. We need to know whether a node should be included in the "result" slice or not. Since removing elements from or adding elements to a slice results in new memory allocations anyway, we simply build a fresh slice, but only when needed. For that we have to detect whether a node update actually changed anything; this is tracked by the {{Generation}} field in {{NodeInfo}}:

{noformat}
// NodeInfo is node level aggregated information.
type NodeInfo struct {
        // Overall node information.
        node *v1.Node

        // Pods running on the node.
        Pods []*PodInfo

        ...
        // Whenever NodeInfo changes, generation is bumped.
        // This is used to avoid cloning it if the object didn't change.
        Generation int64
}
{noformat}

When this field changes (compare its value before and after an update), we just bump our own single counter inside the scheduler cache (we don't maintain per-node generation values); that counter tells us whether we can reuse the previously built slice or have to create a new one.
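Below is a minimal sketch of the idea. The field and method names introduced here ({{currentGeneration}}, {{cachedGeneration}}, {{affinityNodes}}, {{updateNode}}) are illustrative assumptions, not the actual YuniKorn code, and locking is simplified:

{noformat}
package cache

import (
        "sync"

        "k8s.io/kubernetes/pkg/scheduler/framework"
)

// SchedulerCache keeps one cache-wide generation counter plus the generation
// the cached slice was built at. Field names here are assumptions for this sketch.
type SchedulerCache struct {
        lock sync.RWMutex

        nodesInfoMap map[string]*framework.NodeInfo

        currentGeneration int64                 // bumped whenever any node actually changes
        cachedGeneration  int64                 // generation the cached slice reflects
        affinityNodes     []*framework.NodeInfo // cached result of HavePodsWithAffinityList()
}

type nodeInfoListerImpl struct {
        cache *SchedulerCache
}

func (n nodeInfoListerImpl) HavePodsWithAffinityList() ([]*framework.NodeInfo, error) {
        c := n.cache
        c.lock.Lock()
        defer c.lock.Unlock()

        // Fast path: no node has changed since the slice was last built.
        if c.affinityNodes != nil && c.cachedGeneration == c.currentGeneration {
                return c.affinityNodes, nil
        }

        // Slow path: rebuild the slice and remember which generation it reflects.
        result := make([]*framework.NodeInfo, 0, len(c.nodesInfoMap))
        for _, node := range c.nodesInfoMap {
                if len(node.PodsWithAffinity) > 0 {
                        result = append(result, node)
                }
        }
        c.affinityNodes = result
        c.cachedGeneration = c.currentGeneration
        return result, nil
}

// updateNode shows the before/after check on NodeInfo.Generation: the
// cache-wide counter is bumped only when the update really changed the node.
func (c *SchedulerCache) updateNode(node *framework.NodeInfo, update func(*framework.NodeInfo)) {
        c.lock.Lock()
        defer c.lock.Unlock()

        before := node.Generation
        update(node)
        if node.Generation != before {
                c.currentGeneration++
        }
}
{noformat}

One consequence of sharing the cached slice between calls is that callers must treat the returned value as read-only; mutating it would corrupt the cache for subsequent callers.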


> Improve the performance of nodeInfoListerImpl.HavePodsWithAffinityList()
> ------------------------------------------------------------------------
>
>                 Key: YUNIKORN-1747
>                 URL: https://issues.apache.org/jira/browse/YUNIKORN-1747
>             Project: Apache YuniKorn
>          Issue Type: Sub-task
>          Components: shim - kubernetes
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: image-2023-05-17-20-02-57-177.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
