Bill, Petri, Bogdan,

Thank you for the quick feedback. It's been incredibly instructive. We were
looking for something like the input hash that Petri points out, even at
the price of not having classification (which we could implement manually).

I'll dive into the details of queue scheduling and pinning queues to a CPU.

Thank you all.

--
Oriol Arcas
Software Engineer
Starflow Networks

On Thu, Jan 4, 2018 at 11:01 AM, Bogdan Pricope <bogdan.pric...@linaro.org>
wrote:

> I guess the issue is not how to hash traffic into different scheduled
> queues but how to lock a scheduled queue to a single thread (core):
> sched.sync guarantees that at any moment a queue is scheduled to a
> single thread, but not to the same thread every time. This may be
> enough for some implementations (to avoid some locks) but not enough
> for others.
>
> One problem is that all pktio sched queues are assigned to a single
> sched group that is assigned to a single group of threads/cores. If I
> understand correctly, Bill suggests classification + sched queues,
> where each queue is assigned to a different sched group that is
> assigned to a single thread/core.
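>
> A minimal sketch of that per-core idea (the group/queue names and the
> lack of error handling are illustrative, not from an existing example):
> build a thread mask containing only the current worker, create a
> schedule group from it, and create the queue in that group.
> Classification would then steer each flow into one of these queues.
>
> odp_thrmask_t thrmask;
> odp_schedule_group_t grp;
> odp_queue_param_t qparam;
> odp_queue_t queue;
>
> /* Mask containing only this worker's thread id */
> odp_thrmask_zero(&thrmask);
> odp_thrmask_set(&thrmask, odp_thread_id());
>
> /* Schedule group that only this thread belongs to */
> grp = odp_schedule_group_create("per_core_grp", &thrmask);
>
> /* Scheduled queue that the scheduler will only give to this thread */
> odp_queue_param_init(&qparam);
> qparam.type = ODP_QUEUE_TYPE_SCHED;
> qparam.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
> qparam.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
> qparam.sched.group = grp;
> queue = odp_queue_create("per_core_queue", &qparam);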
>
> Another idea is to use direct mode (+ RSS), where each worker polls
> its own pktin all the time (odp_pktin_recv()).
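>
> A rough sketch of that direct-mode variant (num_workers, MAX_WORKERS
> and BURST are illustrative placeholders): enable the input hash on
> direct pktin queues, one per worker, then let each worker poll its
> own queue.
>
> odp_pktin_queue_param_init(&pktin_param);
> pktin_param.op_mode     = ODP_PKTIO_OP_MT_UNSAFE; /* one thread per queue */
> pktin_param.hash_enable = 1;
> pktin_param.hash_proto.proto.ipv4_udp = 1;
> pktin_param.num_queues  = num_workers;
> if (odp_pktin_queue_config(pktio, &pktin_param))
>         return -1;
>
> odp_pktin_queue_t pktin[MAX_WORKERS];
> odp_pktin_queue(pktio, pktin, num_workers);
>
> /* In worker i: receive a burst directly from its own queue */
> odp_packet_t pkts[BURST];
> int num = odp_pktin_recv(pktin[i], pkts, BURST);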
>
> On 4 January 2018 at 10:24, Savolainen, Petri (Nokia - FI/Espoo)
> <petri.savolai...@nokia.com> wrote:
> >
> >
> >> -----Original Message-----
> >> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
> >> Oriol Arcas
> >> Sent: Wednesday, January 03, 2018 7:12 PM
> >> To: LNG ODP Mailman List <lng-odp@lists.linaro.org>
> >> Subject: [lng-odp] RSS in ODP
> >>
> >> Hello and happy new year,
> >>
> >> In our company we are looking into scaling the odp_schedule() calls.
> >> Currently we are manually doing Receive Side Scaling, which requires one
> >> CPU to receive all the packets and pass them to the other worker CPUs in
> >> a flow-deterministic way (i.e., not spreading the packets from a TCP
> >> flow to different CPUs). Obviously this is a bottleneck.
> >>
> >> It would be great if ODP had optional RSS policies, which ultimately
> >> would assign packets from the same flow to a single thread in the
> >> schedule group (usually hashing the address tuple). Would this mean
> >> having dedicated queues?
> >>
> >> I don't know if there is already something similar in ODP that I have
> >> missed. I'd appreciate any feedback!
> >>
> >> Best regards,
> >>
> >> --
> >> Oriol Arcas
> >> Software Engineer
> >> Starflow Networks
> >
> >
> > Our l2fwd test application (odp_l2fwd.c) configures packet input
> > hashing, which is in practice RSS, but could also be some other
> > implementation-defined packet input hash function. You can take a look
> > there. The same hash configuration is possible for both direct pktin
> > queues and scheduled event queues. For scheduled queues you would
> > enable it something like this:
> >
> > /* Normal interface open and config steps */
> > pktio = odp_pktio_open(dev, pool, &pktio_param);
> > odp_pktio_config(pktio, &config);
> >
> > /* Query capabilities to bound the number of input queues */
> > odp_pktio_capability(pktio, &capa);
> >
> > /*
> >  * Set up packet input hashing into scheduled event queues
> >  */
> > if (num_rx_queues > capa.max_input_queues)
> >         num_rx_queues = capa.max_input_queues;
> >
> > odp_pktin_queue_param_init(&pktin_param);
> >
> > pktin_param.queue_param.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
> > pktin_param.queue_param.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
> > pktin_param.queue_param.sched.group = ODP_SCHED_GROUP_ALL;
> > pktin_param.hash_enable = 1;
> > pktin_param.hash_proto.proto.ipv4_udp = 1;
> > pktin_param.num_queues  = num_rx_queues;
> >
> > if (odp_pktin_queue_config(pktio, &pktin_param))
> >         return -1;
> >
> > /* Optionally, see which event queues have been created by the previous
> >  * call. May e.g. want to set queue contexts here.
> >  */
> > if (odp_pktin_event_queue(pktio, rx_queues, num_rx_queues) !=
> >     num_rx_queues)
> >         return -1;
> >
> > /* Start packet input */
> > odp_pktio_start(pktio);
> >
> > /* Use scheduler to receive packets ...*/
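> >
> > For example, a minimal receive loop could look like this (a sketch;
> > BURST is an illustrative constant, event handling omitted):
> >
> > odp_event_t ev[BURST];
> > int num = odp_schedule_multi(NULL, ODP_SCHED_WAIT, ev, BURST);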
> >
> >
>
