>From: [email protected] [mailto:[email protected]]
>On Behalf Of Kavanagh, Mark B
>Sent: Friday, October 20, 2017 9:40 AM
>To: Fischetti, Antonio <[email protected]>; [email protected]
>Subject: Re: [ovs-dev] [PATCH v8 2/6] Fix mempool names to reflect socket id.
>
>>From: Fischetti, Antonio
>>Sent: Thursday, October 19, 2017 5:54 PM
>>To: [email protected]
>>Cc: Kavanagh, Mark B <[email protected]>; Aaron Conole
>><[email protected]>; Fischetti, Antonio <[email protected]>
>>Subject: [PATCH v8 2/6] Fix mempool names to reflect socket id.
>>
>>Include the NUMA socket id when creating mempool names, so that a
>>name reflects which socket the mempool is allocated on.
>>This change is needed for the NUMA-awareness feature.
>>
>>CC: Mark B Kavanagh <[email protected]>
>>CC: Aaron Conole <[email protected]>
>>Acked-by: Kevin Traynor <[email protected]>
>>Reported-by: Ciara Loftus <[email protected]>
>>Tested-by: Ciara Loftus <[email protected]>
>>Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
>>Signed-off-by: Antonio Fischetti <[email protected]>
>
>LGTM - Signed-off-by: Mark Kavanagh <[email protected]>
s/Signed-off/Acked/
Haven't had coffee yet...
>
>
>>---
>>Mempool names now contain the requested socket id and look like:
>>"ovs_4adb057e_1_2030_20512".
>>
>>Tested with DPDK 17.05.2 (from dpdk-stable branch).
>>NUMA-awareness feature enabled (DPDK/config/common_base).
>>
>>Created 1 single dpdkvhostuser port type.
>>OvS pmd-cpu-mask=FF00003 # enable cores on both numa nodes
>>QEMU core mask = 0xFC000 # cores for qemu on numa node 1 only
>>
>> Before launching the VM:
>> ------------------------
>>ovs-appctl dpif-netdev/pmd-rxq-show
>>shows core #1 is serving the vhu port.
>>
>>pmd thread numa_id 0 core_id 1:
>> isolated : false
>> port: dpdkvhostuser0 queue-id: 0
>>
>> After launching the VM:
>> -----------------------
>>the vhu port is now managed by core #27
>>pmd thread numa_id 1 core_id 27:
>> isolated : false
>> port: dpdkvhostuser0 queue-id: 0
>>
>>and the log shows a new mempool is allocated on NUMA node 1, while
>>the previous one is released:
>>
>>2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated
>>"ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
>>2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing
>>"ovs_4adb057e_0_2030_20512" mempool
>>---
>> lib/netdev-dpdk.c | 13 +++++++------
>> 1 file changed, 7 insertions(+), 6 deletions(-)
>>
>>diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>index 45a81f2..7e95f36 100644
>>--- a/lib/netdev-dpdk.c
>>+++ b/lib/netdev-dpdk.c
>>@@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
>> {
>> uint32_t h = hash_string(dmp->if_name, 0);
>> char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
>>- int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
>>- h, dmp->mtu, dmp->mp_size);
>>+ int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
>>+ h, dmp->socket_id, dmp->mtu, dmp->mp_size);
>> if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
>> return NULL;
>> }
>>@@ -534,10 +534,11 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool
>>*mp_exists)
>> do {
>> char *mp_name = dpdk_mp_name(dmp);
>>
>>- VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
>>- "with %d Rx and %d Tx queues.",
>>- dmp->mp_size, dev->up.name,
>>- dev->requested_n_rxq, dev->requested_n_txq);
>>+ VLOG_DBG("Port %s: Requesting a mempool of %u mbufs "
>>+ "on socket %d for %d Rx and %d Tx queues.",
>>+ dev->up.name, dmp->mp_size,
>>+ dev->requested_socket_id,
>>+ dev->requested_n_rxq, dev->requested_n_txq);
>>
>> dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
>> MP_CACHE_SZ,
>>--
>>2.4.11
>
>_______________________________________________
>dev mailing list
>[email protected]
>https://mail.openvswitch.org/mailman/listinfo/ovs-dev