On 11/22/21 20:54, Vladislav Odintsov wrote:
> Hi Ilya,
> 
> I’ve tested both patches; the problem with high CPU seems to be solved, 
> thanks.
> I noticed that if I add/delete one lsp to/from a port group with a negative 
> match rule, ovs-vswitchd each time (add or remove) consumes +4-18 MB RSS.
> 
> # ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3 lsp4
> 
> [root@ovn-1 ~]# while :; do ps-mem -p $(pidof ovs-vswitchd) 2> /dev/null | grep ovs; sleep 1; done
>  32.6 MiB +   1.4 MiB =  34.0 MiB  ovs-vswitchd
>  32.6 MiB +   1.4 MiB =  34.0 MiB  ovs-vswitchd
>  32.6 MiB +   1.4 MiB =  34.0 MiB  ovs-vswitchd
>  32.5 MiB +   1.4 MiB =  33.8 MiB  ovs-vswitchd
>  88.7 MiB +   1.4 MiB =  90.1 MiB  ovs-vswitchd
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd  # ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd
> 130.7 MiB +   1.4 MiB = 132.1 MiB  ovs-vswitchd
> 141.2 MiB +   1.4 MiB = 142.6 MiB  ovs-vswitchd  # ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3 lsp4
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 158.4 MiB +   1.4 MiB = 159.8 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd  # ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 162.3 MiB +   1.4 MiB = 163.7 MiB  ovs-vswitchd
> 170.9 MiB +   1.4 MiB = 172.2 MiB  ovs-vswitchd  # ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3 lsp4
> 170.9 MiB +   1.4 MiB = 172.2 MiB  ovs-vswitchd
> 170.9 MiB +   1.4 MiB = 172.2 MiB  ovs-vswitchd
> 170.9 MiB +   1.4 MiB = 172.2 MiB  ovs-vswitchd
> 
> Can this also be related to the above problem?

I let the script that sets a port group back and forth every 10 seconds
run for half an hour while logging the RSS.  I'm not familiar with
ps-mem and I don't know how it works, but the usual 'ps -aux' showed that
RSS goes up and down between 140 and 170 MB.  Values seem to be always
slightly different, even by a few MBs, but RSS stayed within the mentioned
interval.  I suppose this is just how the memory allocator works in this
fairly complex case.  To me it doesn't look like it's growing continuously,
so it should not be a problem.

Does it continuously grow in your case for an extended period of time?
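
For reference, the back-and-forth script was roughly the following (a minimal
sketch, assuming a 10-second interval and the port names from your example;
RSS is logged with plain ps, in kB):

  # Toggle pg_1 membership and log ovs-vswitchd RSS after each change.
  while :; do
      ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3
      sleep 10
      ps -o rss= -p $(pidof ovs-vswitchd)
      ovn-nbctl pg-set-ports pg_1 lsp1 lsp2 lsp3 lsp4
      sleep 10
      ps -o rss= -p $(pidof ovs-vswitchd)
  done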

> 
> Regards,
> Vladislav Odintsov
> 
>> On 22 Nov 2021, at 18:25, Ilya Maximets <[email protected]> wrote:
>>
>> On 11/22/21 10:18, Vladislav Odintsov wrote:
>>> Ilya,
>>>
>>> there’s a problem still in place, just with another case.
>>> Initially I’d tested only the creation of the topology, but didn’t think 
>>> about testing modification of those flows.
>>>
>>> Create the topology from the initial mail and then modify it somehow. For 
>>> instance, change the LSPs in the port group.
>>> Consider we’ve got lsp1, lsp2, lsp3, lsp4 in pg1 with a negative ACL match, 
>>> then remove lsp4 from pg1 (ovn-nbctl pg-set-ports pg1 lsp1 lsp2 lsp3).
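>>>
>>> Spelled out, the repro sequence is roughly the following (a minimal sketch;
>>> the ACL match here is an assumption standing in for the actual negative
>>> match from the initial mail):
>>>
>>>   # Port group with four LSPs and an ACL with a negated address-set match.
>>>   ovn-nbctl pg-add pg1 lsp1 lsp2 lsp3 lsp4
>>>   ovn-nbctl acl-add pg1 to-lport 1002 \
>>>       'outport == @pg1 && ip4 && ip4.src != $pg1_ip4' allow
>>>   # Shrinking the group is what triggers the problem.
>>>   ovn-nbctl pg-set-ports pg1 lsp1 lsp2 lsp3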
>>> Symptoms are the same as with the initial addition of flows:
>>> high ovn-controller & ovs-vswitchd CPU usage, and ovs-vswitchd consumed all 
>>> memory and then got killed by the OOM killer.
>>>
>>> Let me know if you need any additional info.
>>
>> Thanks for testing!  I can reproduce that too.
>> It's a very similar issue, but with a different flavor, so
>> it will require a separate fix.
>>
>> Could you try the following patch together with the previous one?
>>
>> https://patchwork.ozlabs.org/project/openvswitch/patch/[email protected]/
>>
>> Best regards, Ilya Maximets.
> 

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
