Hello OVS team,
I would like to share and get feedback on the OVN functionality/scalability 
testing that we tried out in our local lab.



It's a two-node setup with 100 containers, each running its own OVS instance.
There are 400 veth ports in total attached to the OVS instances inside each
host (every OVS instance has 4-5 ports attached to it). All of these ports are
uniformly distributed across 10 logical networks.
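
For context, each port is wired up in the usual OVN way, roughly as in the
minimal sketch below (the switch/port/interface names, addresses and the
Southbound DB endpoint are placeholders, not my exact values):

    # Central node: one logical switch per logical network,
    # one logical port per container veth
    ovn-nbctl ls-add net0
    ovn-nbctl lsp-add net0 lport-net0-1
    ovn-nbctl lsp-set-addresses lport-net0-1 "00:00:00:00:01:01 192.168.0.11"

    # Each container's OVS instance: point it at the Southbound DB and set
    # up the Geneve encapsulation (IP addresses are placeholders)
    ovs-vsctl set open_vswitch . \
        external-ids:ovn-remote="tcp:10.0.0.1:6642" \
        external-ids:ovn-encap-type=geneve \
        external-ids:ovn-encap-ip=10.0.0.11

    # Bind the container veth to its logical port on the integration bridge
    ovs-vsctl add-port br-int veth0 -- \
        set Interface veth0 external-ids:iface-id=lport-net0-1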


Here are a few findings and questions on this testing approach:

*         Is there a better approach that can be run in a local environment to
test OVN+OVS functionality at scale?

*         I verified that the rules are getting populated correctly on every
OVS instance inside the containers. I still have some issues with traffic
forwarding, though. Traffic is forwarded locally between containers on each
node without any problem, but the performance is very poor (my chassis has 20
cores and this test case eats up 100% of the CPU). The bigger issue is that
inter-node traffic is always dropped. This works fine when I reduce the number
of hypervisors/containers to about 20-30. It looks to me like all of the
physical interface bandwidth is being used by some heartbeat/sync messages.
I'm curious to know what the purpose of these messages is and why they
generate enough traffic to flood the physical interface.

*         Are there any measured values for the management traffic generated
between ovn-controller <----> Southbound DB and Northbound DB <----> ovn-northd
<----> Southbound DB? (A rough way to measure this locally is sketched after
this list.)

*         What's the maximum number of hypervisors that has been tested so far?
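
To try to break down what's actually on the wire (and to get rough numbers for
the management traffic question above), something like the sketch below should
work; it assumes the default OVN ports (6642 for the Southbound DB, 6641 for
the Northbound DB), Geneve on UDP 6081, and uses eth0 as a placeholder for the
physical NIC:

    # Flows programmed by ovn-controller on a chassis
    ovs-ofctl dump-flows br-int | wc -l

    # Chassis and port bindings as seen by the Southbound DB
    ovn-sbctl show

    # Split the traffic on the physical NIC into control plane vs. tunnels
    tcpdump -ni eth0 'tcp port 6642'   # ovn-controller <--> Southbound DB
    tcpdump -ni eth0 'tcp port 6641'   # ovn-northd/CMS <--> Northbound DB
    tcpdump -ni eth0 'udp port 6081'   # Geneve-encapsulated data traffic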



Regards
_Sugesh
