Hi,

I see your point, but there is another issue which currently makes increasing 
the NM_MAX_DESC define rather futile. In netmap it is not possible to map a 
set of RX/TX rings to a single nm descriptor (e.g. rings 0-3 -> nm_desc0, 
rings 4-7 -> nm_desc1). You can only map a single ring or all of them. So, in 
the ODP MQ implementation we have to map one nm_desc to each ring, no matter 
how many RX/TX queues are actually configured in ODP (the single-queue case is 
an exception). When the number of NIC rings increases, netmap eventually runs 
out of memory for storing the nm_desc objects. If I remember correctly, the 
maximum number of nm_desc objects is somewhere between 64 and 128.

An exception to this is the aforementioned single-queue case, where all nm 
rings can be mapped to a single nm descriptor. So, instead of failing in 
pktio_open(), we could set capa.max_input_queues/capa.max_output_queues to 1 
and return successfully.

-Matias

> -----Original Message-----
> From: Tilli, Juha-Matti (Nokia - FI/Espoo)
> Sent: Wednesday, March 02, 2016 3:50 PM
> To: [email protected]; Elo, Matias (Nokia - FI/Espoo)
> <[email protected]>
> Subject: RE: [lng-odp] [PATCH] linux-generic: netmap: increase maximum
> descriptor count
> 
> Hi,
> 
> Isn't 64 input/output queues quite low? Typically, the number of queues is
> the same as the number of virtual CPUs. Considering that typical high-end
> CPUs have 10 cores and HyperThreading (20 virtual CPUs total) and that
> high-end servers can have two CPU sockets (40 virtual CPUs total), the limit
> is quite close on high-end servers.
> 
> According to some quick googling, the 82599 supports up to 128 RX and 128 TX
> queues, higher than 64. The XL710 supports up to 1536 TX and RX queues.
> 
> One can get Xeon E5-2699 v4 CPUs that have 22 cores and HyperThreading, and
> with two such processors in a dual-socket system, that's 88 queues. There's
> also the Xeon E7-8895 v3, which has 18 cores and supports eight-socket
> configurations; with eight such processors, that's 288 queues. Yes, these
> are really top-of-the-line models, so most users won't have such powerful
> machines, but some may. As a matter of fact, newer RHEL versions do support
> 288 CPUs.
> 
> I would therefore consider much higher values for the constant, although I
> only have 40 virtual CPUs (and therefore 40 queues) on the most powerful of
> my systems.
> 
> Of course, if you're running multiple threads and have the custom netmap
> drivers, actually mapping all of the queues takes so long that you may want to
> reduce the queue count to a more manageable level.
> 
> -----Original Message-----
> From: lng-odp [mailto:[email protected]] On Behalf Of EXT Maxim
> Uvarov
> Sent: Wednesday, March 02, 2016 12:38 PM
> To: [email protected]
> Subject: Re: [lng-odp] [PATCH] linux-generic: netmap: increase maximum
> descriptor count
> 
> Merged,
> Maxim.
> 
> On 02/26/16 17:02, Matias Elo wrote:
> > Increase maximum descriptor count to support NICs with up to
> > 64 input/output queues. Related debug messages are also
> > improved.
> >
> > Signed-off-by: Matias Elo <[email protected]>
> > ---
> >   platform/linux-generic/include/odp_packet_netmap.h | 2 +-
> >   platform/linux-generic/pktio/netmap.c              | 6 ++++--
> >   2 files changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/platform/linux-generic/include/odp_packet_netmap.h
> b/platform/linux-generic/include/odp_packet_netmap.h
> > index 26a8da1..b7990d9 100644
> > --- a/platform/linux-generic/include/odp_packet_netmap.h
> > +++ b/platform/linux-generic/include/odp_packet_netmap.h
> > @@ -17,7 +17,7 @@
> >   #include <linux/if_ether.h>
> >   #include <net/if.h>
> >
> > -#define NM_MAX_DESC 32
> > +#define NM_MAX_DESC 64
> >
> >   /** Ring for mapping pktin/pktout queues to netmap descriptors */
> >   struct netmap_ring_t {
> > diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-
> generic/pktio/netmap.c
> > index 0554171..168b76a 100644
> > --- a/platform/linux-generic/pktio/netmap.c
> > +++ b/platform/linux-generic/pktio/netmap.c
> > @@ -288,7 +288,8 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
> pktio_entry_t *pktio_entry,
> >             goto error;
> >     }
> >     if (desc->nifp->ni_rx_rings > NM_MAX_DESC) {
> > -           ODP_ERR("Unable to store all rx rings\n");
> > +           ODP_ERR("Unable to store all %" PRIu32 " rx rings (max %d)\n",
> > +                   desc->nifp->ni_rx_rings, NM_MAX_DESC);
> >             nm_close(desc);
> >             goto error;
> >     }
> > @@ -298,7 +299,8 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
> pktio_entry_t *pktio_entry,
> >             pkt_nm->capa.max_input_queues = desc->nifp->ni_rx_rings;
> >
> >     if (desc->nifp->ni_tx_rings > NM_MAX_DESC) {
> > -           ODP_ERR("Unable to store all tx rings\n");
> > +           ODP_ERR("Unable to store all %" PRIu32 " tx rings (max %d)\n",
> > +                   desc->nifp->ni_tx_rings, NM_MAX_DESC);
> >             nm_close(desc);
> >             goto error;
> >     }
> 
> _______________________________________________
> lng-odp mailing list
> [email protected]
> https://lists.linaro.org/mailman/listinfo/lng-odp
