On Fri, May 13, 2022 at 5:35 PM Kevin Traynor <[email protected]> wrote:
>
> A mempool is currently created for a vhost port when it is added.
>
> The NUMA info of the vhost port is not known until a device is added to
> the port, so on multi-NUMA systems the initial NUMA node for the mempool
> is a best guess based on vswitchd affinity.
>
> When a device is added to the vhost port, the NUMA info can be checked
> and, if the guess was incorrect, a mempool is created on the correct
> NUMA node.
>
> The current scheme can create a mempool on a NUMA node where it will
> never be needed, while still consuming memory for at least a certain
> period of time.
>
> It also makes it difficult for a user to provision memory on different
> NUMA nodes when they are not sure which NUMA node the initial mempool
> for a vhost port will be on.
>
> This patch delays the creation of the mempool for a vhost port on
> multi-NUMA systems until the vhost NUMA info is known.
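For context, here is a minimal sketch of the deferred, NUMA-aware
mempool creation the commit message describes. Names such as vhost_port
and vhost_port_ensure_mempool are hypothetical and simplified, not the
actual netdev-dpdk code (which shares mempools between ports and defers
freeing until in-flight mbufs are returned); the pool size parameters
are illustrative.

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

struct vhost_port {
    struct rte_mempool *mp; /* NULL until a device attaches (multi-NUMA). */
    int numa_id;            /* NUMA node the mempool lives on, or -1. */
};

/* Called from the vhost new_device() callback, i.e. once the device's
 * NUMA placement is actually known. */
static int
vhost_port_ensure_mempool(struct vhost_port *port, int vid)
{
    char name[RTE_MEMPOOL_NAMESIZE];
    int numa = rte_vhost_get_numa_node(vid);

    if (numa < 0) {
        return -1;                      /* NUMA info unavailable. */
    }
    if (port->mp && port->numa_id == numa) {
        return 0;                       /* Existing mempool already fits. */
    }
    if (port->mp) {
        /* The initial guess was wrong: drop the old pool. (Unsafe if
         * mbufs are still in flight; the real code refcounts pools.) */
        rte_mempool_free(port->mp);
    }
    snprintf(name, sizeof name, "ovs_vhost_mp_%d", vid);
    port->mp = rte_pktmbuf_pool_create(name, 262144, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE, numa);
    port->numa_id = port->mp ? numa : -1;
    return port->mp ? 0 : -1;
}

With this shape, no pool exists until new_device() fires, which matches
the "<no mempool>" output in the test results below.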

I would prefer a single behavior for single- and multi-NUMA systems (so
there would be no question about which behavior resulted in mempool
creation/attribution), though I don't have a strong opinion against
having this difference in behavior.


>
> Signed-off-by: Kevin Traynor <[email protected]>

Otherwise, this new behavior for multi-NUMA systems and the patch LGTM.

I tested on a dual-NUMA system: OVS running with PMD threads on NUMA 1,
and one bridge with vhost-user ports serving one VM on NUMA 0 and one
VM on NUMA 1.

Running ovs-appctl netdev-dpdk/get-mempool-info | grep ovs:
- before patch, no VM started:
mempool <ovsf4b05b4a00021580262144>@0x17f703180
                    ^^
                    We can notice that a NUMA 0 mempool was created
even though PMD threads are on NUMA 1 and no port uses this mempool.
- before patch, once the VM on NUMA 1 is started:
mempool <ovsf4b05b4a00021580262144>@0x17f703180
mempool <ovsaa10b11501021580262144>@0x11ffe01e40
- before patch, once the VM on NUMA 0 is started too:
mempool <ovsf4b05b4a00021580262144>@0x17f703180
mempool <ovsaa10b11501021580262144>@0x11ffe01e40


- after patch, no VM started:
<no mempool>
- after patch, once the VM on NUMA 1 is started:
mempool <ovsaa10b11501021580262144>@0x11ffe01e40
                    ^^
                    Only one mempool, on NUMA 1.
- after patch, once the VM on NUMA 0 is started too:
mempool <ovsaa10b11501021580262144>@0x11ffe01e40
mempool <ovscc694b5f00021580262144>@0x17f51dcc0


Reviewed-by: David Marchand <[email protected]>

-- 
David Marchand
