Re: [ovs-dev] [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket id.

2017-10-19 Thread Kevin Traynor
On 10/19/2017 12:45 PM, Fischetti, Antonio wrote:
> Thanks, Kevin, for your review.
> I could re-spin a new version to take your comments on patch #6 into account.
> I was wondering about a better way to rephrase the debug message in this patch.
> Would something like this be OK:
> 
>     VLOG_DBG("Port %s: Requesting a mempool of %u mbufs "
>              "on socket %d for %d Rx and %d Tx queues.",
>              dev->up.name, dmp->mp_size, dev->requested_socket_id,
>              dev->requested_n_rxq, dev->requested_n_txq);
> 

Yep, that reads better, thanks.

> If you have any suggestions, let me know.
> 
> -Antonio
> 
>> -----Original Message-----
>> From: Kevin Traynor [mailto:ktray...@redhat.com]
>> Sent: Thursday, October 19, 2017 11:33 AM
>> To: Fischetti, Antonio ; d...@openvswitch.org
>> Cc: Kavanagh, Mark B ; Aaron Conole
>> 
>> Subject: Re: [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket id.
>>
>> On 10/18/2017 05:01 PM, antonio.fische...@intel.com wrote:
>>> Create mempool names by also considering the NUMA socket number,
>>> so that a name reflects which socket the mempool is allocated on.
>>> This change is needed for the NUMA-awareness feature.
>>>
>>> CC: Mark B Kavanagh 
>>> CC: Kevin Traynor 
>>> CC: Aaron Conole 
>>> Reported-by: Ciara Loftus 
>>> Tested-by: Ciara Loftus 
>>> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
>>> Signed-off-by: Antonio Fischetti 
>>
>> If you have to re-spin anyway I'd make the debug sentence a little more
>> natural sounding, but it's ok as is and I don't think it's worth
>> re-spinning just for that.
>>
>> Otherwise LGTM.
>> Acked-by: Kevin Traynor 
>>
>>
>>> ---
>>> Mempool names now contain the requested socket id and look like:
>>> "ovs_4adb057e_1_2030_20512".
>>>
>>> Tested with DPDK 17.05.2 (from dpdk-stable branch).
>>> NUMA-awareness feature enabled (DPDK/config/common_base).
>>>
>>> Created a single port of type dpdkvhostuser.
>>> OvS pmd-cpu-mask=FF3 # enable cores on both numa nodes
>>> QEMU core mask = 0xFC000 # cores for qemu on numa node 1 only
>>>
>>>  Before launching the VM:
>>>  ------------------------
>>> ovs-appctl dpif-netdev/pmd-rxq-show
>>> shows core #1 is serving the vhu port.
>>>
>>> pmd thread numa_id 0 core_id 1:
>>> isolated : false
>>> port: dpdkvhostuser0   queue-id: 0
>>>
>>>  After launching the VM:
>>>  -----------------------
>>> The vhu port is now managed by core #27:
>>> pmd thread numa_id 1 core_id 27:
>>> isolated : false
>>> port: dpdkvhostuser0   queue-id: 0
>>>
>>> and the log shows a new mempool is allocated on NUMA node 1, while
>>> the previous one is released:
>>>
>>> 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
>>> 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
>>> ---
>>> ---
>>>  lib/netdev-dpdk.c | 9 +
>>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>> index d49afd8..3155505 100644
>>> --- a/lib/netdev-dpdk.c
>>> +++ b/lib/netdev-dpdk.c
>>> @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
>>>  {
>>>  uint32_t h = hash_string(dmp->if_name, 0);
>>>  char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
>>> -int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
>>> -   h, dmp->mtu, dmp->mp_size);
>>> +int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
>>> +   h, dmp->socket_id, dmp->mtu, dmp->mp_size);
>>>  if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
>>>  return NULL;
>>>  }
>>> @@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
>>>  char *mp_name = dpdk_mp_name(dmp);
>>>
>>>  VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
>>> - "with %d Rx and %d Tx queues.",
>>> + "with %d Rx and %d Tx queues, socket id:%d.",
>>>   dmp->mp_size, dev->up.name,
>>> - dev->requested_n_rxq, dev->requested_n_txq);
>>> + dev->requested_n_rxq, dev->requested_n_txq,
>>> + dev->requested_socket_id);
>>>
>>>  dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
>>>MP_CACHE_SZ,
>>>
> 

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket id.

2017-10-19 Thread Fischetti, Antonio
Thanks, Kevin, for your review.
I could re-spin a new version to take your comments on patch #6 into account.
I was wondering about a better way to rephrase the debug message in this patch.
Would something like this be OK:

    VLOG_DBG("Port %s: Requesting a mempool of %u mbufs "
             "on socket %d for %d Rx and %d Tx queues.",
             dev->up.name, dmp->mp_size, dev->requested_socket_id,
             dev->requested_n_rxq, dev->requested_n_txq);

If you have any suggestions, let me know.

-Antonio

> -----Original Message-----
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Thursday, October 19, 2017 11:33 AM
> To: Fischetti, Antonio ; d...@openvswitch.org
> Cc: Kavanagh, Mark B ; Aaron Conole
> 
> Subject: Re: [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket id.
> 
> On 10/18/2017 05:01 PM, antonio.fische...@intel.com wrote:
> > Create mempool names by also considering the NUMA socket number,
> > so that a name reflects which socket the mempool is allocated on.
> > This change is needed for the NUMA-awareness feature.
> >
> > CC: Mark B Kavanagh 
> > CC: Kevin Traynor 
> > CC: Aaron Conole 
> > Reported-by: Ciara Loftus 
> > Tested-by: Ciara Loftus 
> > Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> > Signed-off-by: Antonio Fischetti 
> 
> If you have to re-spin anyway I'd make the debug sentence a little more
> natural sounding, but it's ok as is and I don't think it's worth
> re-spinning just for that.
> 
> Otherwise LGTM.
> Acked-by: Kevin Traynor 
> 
> 
> > ---
> > Mempool names now contain the requested socket id and look like:
> > "ovs_4adb057e_1_2030_20512".
> >
> > Tested with DPDK 17.05.2 (from dpdk-stable branch).
> > NUMA-awareness feature enabled (DPDK/config/common_base).
> >
> > Created a single port of type dpdkvhostuser.
> > OvS pmd-cpu-mask=FF3 # enable cores on both numa nodes
> > QEMU core mask = 0xFC000 # cores for qemu on numa node 1 only
> >
> >  Before launching the VM:
> >  ------------------------
> > ovs-appctl dpif-netdev/pmd-rxq-show
> > shows core #1 is serving the vhu port.
> >
> > pmd thread numa_id 0 core_id 1:
> > isolated : false
> > port: dpdkvhostuser0   queue-id: 0
> >
> >  After launching the VM:
> >  -----------------------
> > The vhu port is now managed by core #27:
> > pmd thread numa_id 1 core_id 27:
> > isolated : false
> > port: dpdkvhostuser0   queue-id: 0
> >
> > and the log shows a new mempool is allocated on NUMA node 1, while
> > the previous one is released:
> >
> > 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
> > 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
> > ---
> > ---
> >  lib/netdev-dpdk.c | 9 +
> >  1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> > index d49afd8..3155505 100644
> > --- a/lib/netdev-dpdk.c
> > +++ b/lib/netdev-dpdk.c
> > @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
> >  {
> >  uint32_t h = hash_string(dmp->if_name, 0);
> >  char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
> > -int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
> > -   h, dmp->mtu, dmp->mp_size);
> > +int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
> > +   h, dmp->socket_id, dmp->mtu, dmp->mp_size);
> >  if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
> >  return NULL;
> >  }
> > @@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
> >  char *mp_name = dpdk_mp_name(dmp);
> >
> >  VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
> > - "with %d Rx and %d Tx queues.",
> > + "with %d Rx and %d Tx queues, socket id:%d.",
> >   dmp->mp_size, dev->up.name,
> > - dev->requested_n_rxq, dev->requested_n_txq);
> > + dev->requested_n_rxq, dev->requested_n_txq,
> > + dev->requested_socket_id);
> >
> >  dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
> >MP_CACHE_SZ,
> >



[ovs-dev] [PATCH v7 2/6] netdev-dpdk: Fix mempool names to reflect socket id.

2017-10-18 Thread antonio.fischetti
Create mempool names by also considering the NUMA socket number,
so that a name reflects which socket the mempool is allocated on.
This change is needed for the NUMA-awareness feature.

CC: Mark B Kavanagh 
CC: Kevin Traynor 
CC: Aaron Conole 
Reported-by: Ciara Loftus 
Tested-by: Ciara Loftus 
Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
Signed-off-by: Antonio Fischetti 
---
Mempool names now contain the requested socket id and look like:
"ovs_4adb057e_1_2030_20512".

Tested with DPDK 17.05.2 (from dpdk-stable branch).
NUMA-awareness feature enabled (DPDK/config/common_base).

Created a single port of type dpdkvhostuser.
OvS pmd-cpu-mask=FF3 # enable cores on both numa nodes
QEMU core mask = 0xFC000 # cores for qemu on numa node 1 only

 Before launching the VM:
 ------------------------
ovs-appctl dpif-netdev/pmd-rxq-show
shows core #1 is serving the vhu port.

pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdkvhostuser0   queue-id: 0

 After launching the VM:
 -----------------------
The vhu port is now managed by core #27:
pmd thread numa_id 1 core_id 27:
isolated : false
port: dpdkvhostuser0   queue-id: 0

and the log shows a new mempool is allocated on NUMA node 1, while
the previous one is released:

2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
---
---
 lib/netdev-dpdk.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index d49afd8..3155505 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
 {
 uint32_t h = hash_string(dmp->if_name, 0);
 char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
-int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
-   h, dmp->mtu, dmp->mp_size);
+int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
+   h, dmp->socket_id, dmp->mtu, dmp->mp_size);
 if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
 return NULL;
 }
@@ -535,9 +535,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
 char *mp_name = dpdk_mp_name(dmp);
 
 VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
- "with %d Rx and %d Tx queues.",
+ "with %d Rx and %d Tx queues, socket id:%d.",
  dmp->mp_size, dev->up.name,
- dev->requested_n_rxq, dev->requested_n_txq);
+ dev->requested_n_rxq, dev->requested_n_txq,
+ dev->requested_socket_id);
 
 dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
   MP_CACHE_SZ,
-- 
2.4.11
