I think you'll probably have to get someone else to track down the
issue.  I haven't found the time to try to reproduce it.

On Wed, Jan 16, 2019 at 08:33:27AM +0000, Ani Sinha wrote:
> Hi Ben:
> 
> Any luck reproducing the issue? Looking at similar reports from the past, I 
> see we have encountered something like this before:
> 
> https://mail.openvswitch.org/pipermail/ovs-dev/2018-February/344088.html
> 
> Did we find out where the memory leak was? Was there a patch to fix it?
> 
> Thanks
> Ani
> 
> On Jan 3, 2019, 1:57 PM +0530, Ani Sinha <[email protected]>, wrote:
> Oh and I forgot to mention that we are seeing this issue on OVS versions 
> 2.5.0 and 2.5.2.
> 
> thanks
> Ani
> 
> On Jan 3, 2019, 1:10 PM +0530, Ani Sinha <[email protected]>, wrote:
> Hi Ben:
> 
> To reproduce, put the node on a network with a flood of ARP broadcast 
> packets, for example an environment where machines are performing 
> peer-to-peer upgrades at scheduled intervals. If you have a packet 
> generator, you can generate ARP broadcast packets with random source MACs 
> and flood the network with them at varying transmission rates. If you see 
> the RSS of OVS increasing monotonically, you have recreated the issue. If 
> the host has the OOM killer enabled, it should eventually kill the OVS 
> daemon. The confusing part is that OVS never seems to free the memory once 
> the slow-path packets have been processed, which is why the RSS in 
> allocated huge pages keeps increasing. We are not sure why this happens.
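For illustration, the reproduction steps above can be sketched with a small stdlib-only Python script. The interface name and IP addresses are placeholders for your setup, and the `flood()` loop needs root and Linux `AF_PACKET`, so it is defined but not invoked here:

```python
# Sketch of the reproduction above: craft broadcast ARP requests with
# random source MACs, using only the Python standard library.
import os
import socket
import struct
import time

BROADCAST_MAC = b"\xff" * 6

def random_mac() -> bytes:
    """Random locally administered unicast MAC (multicast bit clear, local bit set)."""
    mac = bytearray(os.urandom(6))
    mac[0] = (mac[0] & 0xFE) | 0x02
    return bytes(mac)

def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    """Build a 42-byte broadcast ARP request (14-byte Ethernet II header + 28-byte ARP)."""
    eth = BROADCAST_MAC + src_mac + struct.pack("!H", 0x0806)  # EtherType: ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # htype, ptype, hlen, plen, op=request
    arp += src_mac + src_ip + b"\x00" * 6 + dst_ip   # sender/target hardware + protocol addrs
    return eth + arp

def flood(ifname: str, rate_pps: int) -> None:
    """Send ARP broadcasts forever. Requires root and Linux AF_PACKET; not called here."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((ifname, 0))                      # e.g. "eth0" facing the OVS bridge
    src_ip = socket.inet_aton("10.0.0.1")    # placeholder addresses
    dst_ip = socket.inet_aton("10.0.0.2")
    while True:
        s.send(build_arp_request(random_mac(), src_ip, dst_ip))
        time.sleep(1.0 / rate_pps)           # vary to change the transmission rate
```

Varying `rate_pps` mimics the "varying transmission rates" step; each frame carries a fresh random source MAC, which is what drives the upcall flow count up.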
> 
> Let me know if this helps. Happy New Year.
> 
> Thanks
> Ani
> On Dec 28, 2018, 11:21 PM +0530, Ben Pfaff <[email protected]>, wrote:
> On Fri, Dec 28, 2018 at 07:30:52AM +0000, Ani Sinha wrote:
> We are performing an experiment based on behavior observed on some of our 
> systems. We are using a Python packet generator called scapy to generate 
> ARP broadcast packet bursts with random MACs. What we observe is that 
> within 20 to 30 seconds, the upcall flow count reaches about 30K, with 
> about 1.6G of memory consumed by OVS, at which point the kernel OOM killer 
> kicks in and kills the OVS daemon.
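Tracking the memory consumption described above can be done by polling `VmRSS` in `/proc`. A minimal sketch; the ovs-vswitchd pidfile path varies by install, so resolving the pid is left to the caller:

```python
# Minimal RSS watcher: read VmRSS from /proc/<pid>/status (Linux).
def rss_kib(pid: int) -> int:
    """Return the resident set size of `pid` in KiB."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):   # line looks like: "VmRSS:   123456 kB"
                return int(line.split()[1])
    raise ValueError(f"no VmRSS entry for pid {pid}")
```

Polling this in a loop while the ARP burst runs makes the monotonic growth easy to log and graph.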
> 
> We are wondering if this is expected behavior and, if it is, whether there 
> is a setting to limit the size of the internal data structures OVS uses for 
> ARP packets or for processing slow-path ARP packets (I have not looked into 
> the code or done any analysis).
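On the question of a limiting knob: one setting that does exist is `other_config:flow-limit` in the Open_vSwitch table (see ovs-vswitchd.conf.db(5)), which caps the number of datapath flows the revalidators will keep installed. Whether it bounds the memory growth seen in this thread is a separate question; a sketch:

```shell
# Cap the datapath flow table at 10000 flows (default is 200000).
# Note: this bounds installed datapath flows, not necessarily the
# userspace allocations discussed in this thread.
ovs-vsctl set Open_vSwitch . other_config:flow-limit=10000

# Check the current datapath flow count against the limit:
ovs-dpctl show | grep flows
```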
> 
> It's not expected. How do we reproduce it?
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev