I read in a few places that mixing OS networking features (like bonding) with 
OVS is not a good idea and that the recommendation is to do everything at the 
OVS level. That's why I assumed the configuration was not OK (even though it 
worked correctly for around two years, apart from the high memory usage I 
detected).
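
For anyone following along, recreating a bond at the OVS level looks roughly 
like this (a sketch only; the bridge and NIC names br0, eth0 and eth1 are 
placeholders, not my actual devices):

  # Create an OVS-level bond from two physical NICs (placeholder names).
  ovs-vsctl add-bond br0 bond0 eth0 eth1 -- set port bond0 bond_mode=balance-slb
  # Check the resulting bond state.
  ovs-appctl bond/show bond0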

How many MB of RAM would you consider normal in a small setup like this one? 
Just to get an idea.

I just finished a maintenance window on this server that required a reboot.
Right after reboot ovs-vswitchd is using 14MB of RAM.
I will keep monitoring the process memory usage and report back after two 
weeks or so.
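
In case it is useful, this is roughly how I am checking it (standard tools 
plus OVS's own counters; nothing here is specific to my setup):

  # Resident set size (in KB) of the ovs-vswitchd process.
  ps -o rss= -p "$(pidof ovs-vswitchd)"
  # OVS's own report of memory-relevant objects.
  ovs-appctl memory/show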

Would it make sense to get a process dump for analysis even if memory usage 
does not go as high (several GB) as before the config change? In other words, 
if the process memory usage grows to around 500MB but then holds steady and 
stops growing, would it still make sense to collect a dump for analysis?
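
If so, I was planning to capture the dump with gcore from gdb, along these 
lines (the output prefix is just an example):

  # Dump the running ovs-vswitchd process without stopping it.
  # gcore writes the core to <prefix>.<pid>; /tmp/ovs-vswitchd is an example prefix.
  gcore -o /tmp/ovs-vswitchd "$(pidof ovs-vswitchd)"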

On Mon, Feb 25, 2019 at 5:48 PM, Ben Pfaff <[email protected]> wrote:

Both configurations should work, so probably you did find a bug causing a 
memory leak in the former configuration. 464 MB actually sounds like a lot 
also.

On Sun, Feb 24, 2019 at 02:58:02PM +0000, Fernando Casas Schössow wrote:

Hi Ben,

In my case I think I found the cause of the issue, and it was indeed a 
misconfiguration on my side. Yet I'm not really sure why the misconfiguration 
was causing the high memory usage in OVS.

The server has 4 NICs, bonded in two bonds of two. The problem, I think, was 
that the bonding was done at OS level (Linux kernel bonding) instead of at OVS 
level, so there were two interfaces at OS level (bond0 and bond1), with bond0 
added to OVS as an uplink port. I changed that configuration: I removed all 
the bonding at OS level and instead created the bonds at OVS level, then 
restarted the service so I could monitor memory usage.

After this change, memory usage grew from 10MB (at service start) to 464MB 
after a few hours and then stayed at that level until today (a week later). 
I'm still monitoring the process memory usage, but as I said it has been 
steady for almost a week, so I will keep monitoring it for a couple more weeks 
just in case and report back.

Thanks.

Kind regards,
Fernando

On Sat, Feb 23, 2019 at 12:23 AM, Ben Pfaff <[email protected]> wrote:

It's odd that two people would notice the same problem at the same time on 
old branches. Anyway, I'm attaching the scripts I have. They are rough. The 
second one invokes the first one as a subprocess; it is probably the one you 
should use. I might have to walk you through how to use it, or write better 
documentation myself. Anyway, it should be a start.

On Wed, Feb 20, 2019 at 07:15:26PM +0400, Oleg Bondarev wrote:

Ah, sorry, I missed the "ovs-vswitchd memory consumption behavior" thread. So 
I guess I'm also interested in the scripts for analyzing the heap in a core 
dump :)

Thanks,
Oleg

On Wed, Feb 20, 2019 at 7:00 PM Oleg Bondarev <[email protected]> wrote:

> Hi,
>
> OVS 2.8.0, uptime 197 days, 44G RAM.
> ovs-appctl memory/show reports:
> "handlers:35 ofconns:4 ports:73 revalidators:13 rules:1099 udpif keys:686"
>
> Similar data on other nodes of the OpenStack cluster.
> Seems usage grows gradually over time.
> Are there any known issues, like
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-14970?
> Please advise on the best way to debug.
>
> Thanks,
> Oleg


