> > -----Original Message-----
> > From: Kevin Traynor [mailto:[email protected]]
> > Sent: Tuesday, January 23, 2018 6:43 PM
> > To: [email protected]; Wojciechowicz, RobertX <[email protected]>;
> > [email protected]; [email protected]; Stokes, Ian
> > <[email protected]>; [email protected]; Kavanagh, Mark B
> > <[email protected]>; [email protected]; [email protected]
> > Cc: Kevin Traynor <[email protected]>; Fischetti, Antonio <[email protected]>
> > Subject: [RFC] netdev-dpdk: Update amount of mbufs requested.
> >
> > As each DPDK port now has its own mempool, depending on the number of
> > ports and their configuration we can now require a lot more memory
> > than was previously needed.
> >
> > Reduce the number of extra mbufs requested for each port, and set as
> > a minimum the number of mbufs needed when the queues, caches and
> > in-flight buffers associated with that port are full.

Thanks for this Kevin. I understand you had only compile-tested this, so I
ran it through the vsperf integration test suite, and there were issues
found affecting existing features.

Ian.
Scratch that, on further investigation the issues were unrelated to this
patch. With that in mind, I think this patch could be used to alleviate the
memory consumption going forward.

Ian

> > CC: Antonio Fischetti <[email protected]>
> > CC: Robert Wojciechowicz <[email protected]>
> > Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> > Reported-by: Venkatesan Pradeep <[email protected]>
> > Signed-off-by: Kevin Traynor <[email protected]>
> > ---
> >  lib/netdev-dpdk.c | 23 ++++++++++++-----------
> >  1 file changed, 12 insertions(+), 11 deletions(-)
> >
> > diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> > index ac2e38e..0c72ab9 100644
> > --- a/lib/netdev-dpdk.c
> > +++ b/lib/netdev-dpdk.c
> > @@ -92,9 +92,8 @@ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
> >  #define NETDEV_DPDK_MAX_PKT_LEN 9728
> >
> > -/* Min number of packets in the mempool. OVS tries to allocate a mempool with
> > - * roughly estimated number of mbufs: if this fails (because the system doesn't
> > - * have enough hugepages) we keep halving the number until the allocation
> > - * succeeds or we reach MIN_NB_MBUF */
> > -#define MIN_NB_MBUF          (4096 * 4)
> > +/*
> > + * Amount of additional packets requested with the minimum for port mempool.
> > + */
> > +#define NB_MBUF_ADD          (4096)
> >  #define MP_CACHE_SZ          RTE_MEMPOOL_CACHE_MAX_SIZE
> >
> > @@ -518,5 +517,5 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
> >      const char *netdev_name = netdev_get_name(&dev->up);
> >      int socket_id = dev->requested_socket_id;
> > -    uint32_t n_mbufs;
> > +    uint32_t n_mbufs, min_mbufs;
> >      uint32_t hash = hash_string(netdev_name, 0);
> >      struct rte_mempool *mp = NULL;
> > @@ -529,8 +528,9 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
> >       * + <additional memory for corner cases>
> >       */
> > -    n_mbufs = dev->requested_n_rxq * dev->requested_rxq_size
> > -              + dev->requested_n_txq * dev->requested_txq_size
> > -              + MIN(RTE_MAX_LCORE, dev->requested_n_rxq) * NETDEV_MAX_BURST
> > -              + MIN_NB_MBUF;
> > +    min_mbufs = dev->requested_n_rxq * dev->requested_rxq_size
> > +                + dev->requested_n_txq * dev->requested_txq_size
> > +                + MIN(RTE_MAX_LCORE, dev->requested_n_rxq) * NETDEV_MAX_BURST
> > +                + MIN(RTE_MAX_LCORE, dev->requested_n_rxq) * MP_CACHE_SZ;
> > +    n_mbufs = min_mbufs + NB_MBUF_ADD;
> >
> >      ovs_mutex_lock(&dpdk_mp_mutex);
> > @@ -579,5 +579,6 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
> >                        mp_name, n_mbufs);
> >          }
> > -    } while (!mp && rte_errno == ENOMEM && (n_mbufs /= 2) >= MIN_NB_MBUF);
> > +        n_mbufs = n_mbufs == min_mbufs ? 0 : MAX(min_mbufs, n_mbufs / 2);
> > +    } while (!mp && rte_errno == ENOMEM && n_mbufs);
> >
> >      ovs_mutex_unlock(&dpdk_mp_mutex);
> > --
> > 1.8.3.1

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
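To make the new sizing and fallback behaviour concrete, here is a minimal
standalone sketch of the logic the patch introduces. The constant values
(NETDEV_MAX_BURST, MP_CACHE_SZ, RTE_MAX_LCORE), the port configuration and
the try_alloc() stub are illustrative stand-ins for the real OVS/DPDK
definitions and for rte_pktmbuf_pool_create(); only the arithmetic and the
retry loop mirror the patch.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NETDEV_MAX_BURST 32    /* stand-in for the OVS burst size */
#define MP_CACHE_SZ      512   /* stand-in for RTE_MEMPOOL_CACHE_MAX_SIZE */
#define RTE_MAX_LCORE    128   /* stand-in for the DPDK lcore limit */
#define NB_MBUF_ADD      4096  /* extra headroom, as in the patch */

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Stub standing in for rte_pktmbuf_pool_create(): pretend the system
 * only has hugepage memory for 'avail' mbufs. */
static const uint32_t avail = 20000;

static bool
try_alloc(uint32_t n_mbufs)
{
    if (n_mbufs <= avail) {
        return true;
    }
    errno = ENOMEM;
    return false;
}

int
main(void)
{
    /* Hypothetical port configuration. */
    uint32_t n_rxq = 4, rxq_size = 2048;
    uint32_t n_txq = 4, txq_size = 2048;

    /* Hard floor: every queue, per-core cache and in-flight burst full. */
    uint32_t min_mbufs = n_rxq * rxq_size
                         + n_txq * txq_size
                         + MIN(RTE_MAX_LCORE, n_rxq) * NETDEV_MAX_BURST
                         + MIN(RTE_MAX_LCORE, n_rxq) * MP_CACHE_SZ;
    uint32_t n_mbufs = min_mbufs + NB_MBUF_ADD;
    bool ok;

    do {
        ok = try_alloc(n_mbufs);
        printf("requested %u mbufs: %s\n", n_mbufs, ok ? "ok" : "ENOMEM");
        if (!ok) {
            /* Halve, but never go below min_mbufs; once min_mbufs itself
             * has failed, set 0 so the loop gives up cleanly. */
            n_mbufs = n_mbufs == min_mbufs ? 0 : MAX(min_mbufs, n_mbufs / 2);
        }
    } while (!ok && errno == ENOMEM && n_mbufs);

    return ok ? 0 : 1;
}

The design point the sketch illustrates: the halving loop now bottoms out
at min_mbufs rather than the fixed MIN_NB_MBUF, so a pool can never be
created smaller than what full queues, caches and in-flight bursts require,
and n_mbufs dropping to 0 gives a clean exit once even that minimum fails.
With the illustrative numbers above, min_mbufs works out to
4*2048 + 4*2048 + 4*32 + 4*512 = 18560 mbufs, so the first request is
22656; the stub rejects it with ENOMEM and the retry succeeds at the
18560 floor.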
