On 11/21/2019 3:12 PM, David Marchand wrote:
> Following [1], testpmd memory consumption has skyrocketed.
> The rte_port structure has gotten quite fat.
>
> struct rte_port {
> [...]
> 	struct rte_eth_rxconf      rx_conf[65536];	/*  266280 3145728 */
> 	/* --- cacheline 53312 boundary (3411968 bytes) was 40 bytes ago --- */
> 	struct rte_eth_txconf      tx_conf[65536];	/* 3412008 3670016 */
> 	/* --- cacheline 110656 boundary (7081984 bytes) was 40 bytes ago --- */
> [...]
> 	/* size: 8654936, cachelines: 135234, members: 31 */
> [...]
>
> testpmd handles RTE_MAX_ETHPORTS ports (32 by default) which means that it
> needs ~256MB just for this internal representation.
>
> The reason is that a testpmd rte_port (the name is quite confusing, as
> it is a local type) maintains configurations for all queues of a port.
> But where you would expect testpmd to use RTE_MAX_QUEUES_PER_PORT as the
> maximum queue count, the rte_port uses MAX_QUEUE_ID set to 64k.
>
> Prefer the ethdev maximum value.
>
> After this patch:
> struct rte_port {
> [...]
> 	struct rte_eth_rxconf      rx_conf[1025];	/*    8240   49200 */
> 	/* --- cacheline 897 boundary (57408 bytes) was 32 bytes ago --- */
> 	struct rte_eth_txconf      tx_conf[1025];	/*   57440   57400 */
> 	/* --- cacheline 1794 boundary (114816 bytes) was 24 bytes ago --- */
> [...]
> 	/* size: 139488, cachelines: 2180, members: 31 */
> [...]
>
> [1]: https://git.dpdk.org/dpdk/commit/?id=436b3a6b6e62
>
> Signed-off-by: David Marchand <david.march...@redhat.com>
Thanks for figuring this out,

Acked-by: Ferruh Yigit <ferruh.yi...@intel.com>

<...>

> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 90694a3309..217d577018 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -58,8 +58,6 @@ typedef uint16_t portid_t;
>  typedef uint16_t queueid_t;
>  typedef uint16_t streamid_t;
>
> -#define MAX_QUEUE_ID ((1 << (sizeof(queueid_t) * 8)) - 1)

No strong opinion, but would it be simpler to assign 'MAX_QUEUE_ID' to
'RTE_MAX_QUEUES_PER_PORT' instead of removing it?

#define MAX_QUEUE_ID RTE_MAX_QUEUES_PER_PORT