> This patchset optimizes for two cases when using shared mempools.
> 
> If there are ports with different MTUs, that usually leads
> to multiple shared mempools being created, because mempool
> mbuf size, and hence mempool creation, is based on MTU.
> 
> In fact, a port with a smaller MTU could share a mempool with
> mbuf sizes that can accommodate larger MTUs (assuming same NUMA).
> So instead of multiple shared mempools being created based on MTU,
> the ports can use a single shared mempool.
> 
> Another case optimized for: there may be an intended MTU of,
> say, 9000 for a port, but if the port is initially added without
> an MTU specified, it falls back to a default MTU of 1500.
> 
> As it is not mandatory for the user to set MTU, it cannot be assumed
> that a new MTU will be set after a port is added, so mempools based
> on an MTU of 1500 are used.
> 
> When the MTU is subsequently set to 9000, the 1500 mempool will
> not be needed and may be freed, but during the in-between time
> both mempools are required.
> 
> Both these cases can be optimized for. However, automatically
> switching to mempools based on larger mbuf sizes could increase
> memory consumption and, depending on config, break upgrades.
> 
> So the user should give a hint about the MTUs they want the mempool
> mbufs size to be based on. While it is flexible for multiple sizes
> and numa, it would most likely be used with a single value. e.g.
> 
>      $ ovs-vsctl --no-wait set Open_vSwitch . \
>        other_config:shared-mempool-config=9000
> 
> With this, all DPDK ports will share mempools on the relevant
> NUMA node with an mbuf size based on an MTU of 9000.

Thanks for the series, Kevin. Given that this was already under discussion 
prior to the soft freeze, I've merged it to master for the 2.18 release.

There are some follow-up items that could be addressed (e.g. expansion of the 
unit tests), but I see no reason to block on that for the moment.

Thanks
Ian
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
