Hello,

It seems there is a memory leak in dpdk-19.11.1. Everything works fine when I enable DPDK in ovs-vswitchd and create a netdev bridge, but once I add DPDK ports to the netdev bridge, ovs-vswitchd starts consuming memory until the system runs out of memory. I'm using ovs-2.13.0 + dpdk-19.11.1 on CentOS 7.6. When I upgrade DPDK from 19.11.1 to 19.11.3, the memory leak seems to be gone. See the details below:
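For reference, the bridge and bond were created roughly as follows (a sketch reconstructed from the `ovs-vsctl show` output below; the bridge name, bond name, and PCI addresses are specific to this host):

```shell
# Create the userspace (netdev) bridge -- memory usage is still stable here.
ovs-vsctl add-br sw-000003 -- set bridge sw-000003 datapath_type=netdev

# Adding the DPDK ports is the step after which memory starts growing.
# dpdk-devargs values are the PCI addresses of the two NICs on this host.
ovs-vsctl add-bond sw-000003 sw-000003-bond enp217s0f0 enp219s0f0 \
    -- set Interface enp217s0f0 type=dpdk options:dpdk-devargs=0000:d9:00.0 \
    -- set Interface enp219s0f0 type=dpdk options:dpdk-devargs=0000:db:00.0
```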
# ovs-vsctl list op
_uuid               : 800ba786-5c0c-4b67-8565-eb04c7a3f495
bridges             : [732be0aa-b377-4b7a-9994-e9e9470ce918, 7cc6e09c-2cc6-4ef1-bc3e-1768476f6222, b873f99b-25b3-4864-bf60-988b2fe95dd6, fd6d5636-1a50-45df-a9c8-3950aa919506]
cur_cfg             : 3991
datapath_types      : [netdev, system]
datapaths           : {}
db_version          : "8.2.0"
dpdk_initialized    : true
dpdk_version        : "DPDK 19.11.1"
external_ids        : {hostname=node-135, ovn-bridge=br-ovn, ovn-bridge-mappings="default:br-provider,public:br-provider,default1:br-provider,public1:br-provider", rundir="/var/run/openvswitch", system-id="7cf7596c-ef34-42ad-9c4e-bf9736172d1b"}
iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system, tap, vxlan]
manager_options     : [fd59708c-7125-443a-b0b7-ede3a945b66d]
next_cfg            : 3991
other_config        : {dpdk-extra="--single-file-segments", dpdk-init=try, dpdk-socket-limit="1024,1024,1024,1024", dpdk-socket-mem="1024,1024,1024,1024", pmd-cpu-mask="0xf00f00f01e", stats-update-interval="10000", userspace-tso-enable="true", vlan-limit="2"}
ovs_version         : "2.13.1"

# ovs-vsctl show
800ba786-5c0c-4b67-8565-eb04c7a3f495
    Manager "ptcp:6640:127.0.0.1"
    Bridge sw-000003
        datapath_type: netdev
        Port sw-000003
            Interface sw-000003
                type: internal
        Port sw-000003-bond
            Interface enp217s0f0
                type: dpdk
                options: {dpdk-devargs="0000:d9:00.0"}
            Interface enp219s0f0
                type: dpdk
                options: {dpdk-devargs="0000:db:00.0"}

The top command shows that the memory of ovs-vswitchd keeps growing:

top - 14:40:43 up 4 days,  4:48,  5 users,  load average: 3.50, 3.64, 3.51
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.6 us,  1.0 sy,  0.0 ni, 89.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13153808+total, 39495820 free, 91428160 used,   614112 buff/cache
KiB Swap: 30719996 total, 29890564 free,   829432 used. 39331276 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
198504 root      10 -10  524.1g   4.6g  26472 S 208.0  3.7   4:25.33 ovs-vswitchd

top - 14:40:58 up 4 days,  4:49,  5 users,  load average: 3.53, 3.64, 3.51
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5.6 us,  1.1 sy,  0.0 ni, 93.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13153808+total, 38849932 free, 92051616 used,   636544 buff/cache
KiB Swap: 30719996 total, 29893380 free,   826616 used. 38685504 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
198504 root      10 -10  524.7g   5.2g  26472 S 217.6  4.1   4:59.89 ovs-vswitchd

top - 14:41:40 up 4 days,  4:49,  5 users,  load average: 4.00, 3.73, 3.55
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.1 us,  0.9 sy,  0.0 ni, 90.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13153808+total, 37690576 free, 93230832 used,   616684 buff/cache
KiB Swap: 30719996 total, 29900036 free,   819960 used. 37526172 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
198504 root      10 -10  525.9g   6.3g  26472 S 210.0  5.0   6:29.15 ovs-vswitchd

top - 14:45:14 up 4 days,  4:53,  5 users,  load average: 2.98, 3.39, 3.45
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.1 us,  0.8 sy,  0.0 ni, 90.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13153808+total, 32005748 free, 98914240 used,   618104 buff/cache
KiB Swap: 30719996 total, 29933316 free,   786680 used. 31842124 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
198504 root      10 -10  531.2g  11.7g  26472 S 213.3  9.3  14:02.89 ovs-vswitchd

Because ovs-vswitchd consumed too much memory, the system OOM killer was triggered and ovs-vswitchd was eventually killed.

Looking forward to your reply.

Thanks
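In case it helps to reproduce the measurement, the growth shown in top can also be sampled directly (a generic sketch that reads VmRSS from /proc on Linux; it is not an OVS command, and the PID defaults to the current shell only for demonstration):

```shell
#!/bin/sh
# Sample the resident set size (VmRSS, in kB) of a process a few times,
# one second apart, so a leak rate can be read off directly.
# Usage: rss-watch.sh PID   (here it falls back to this shell's own PID)
pid=${1:-$$}
for _ in 1 2 3; do
    # /proc/<pid>/status carries VmRSS: <value> kB on Linux
    awk -v t="$(date +%T)" '/^VmRSS:/ {print t, $2, "kB"}' "/proc/$pid/status"
    sleep 1
done
```

With the ovs-vswitchd PID (198504 above) this prints one RSS sample per second; a steadily rising second column confirms the leak independently of top.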