On 12 January 2018 at 17:43, Oriol Arcas <or...@starflownetworks.com> wrote:

> Hi again,
>
> I have two doubts about the CoS RSS implementation. The new
> odp_cls_cos_param_t supports a custom odp_queue_param_t for the newly
> created queues:
>
> https://github.com/Linaro/odp/blob/master/platform/linux-generic/odp_classification.c#L215
>
> odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param)
> {
>
> ...
>
>     if (param->num_queue > 1) {
>             odp_queue_param_init(&cos->s.queue_param);
>             cos->s.queue_group = true;
>             cos->s.queue = ODP_QUEUE_INVALID;
>             cos->s.num_queue = param->num_queue;
>             _odp_cls_update_hash_proto(cos, param->hash_proto);
>             tbl_index = cos->s.index * CLS_COS_QUEUE_MAX;
>             for (j = 0; j < CLS_COS_QUEUE_MAX; j++) {
>                     queue = odp_queue_create(NULL, &cos->s.queue_param);
>                     if (queue == ODP_QUEUE_INVALID) {
>                             UNLOCK(&cos->s.lock);
>                             return ODP_COS_INVALID;
>                     }
>                     queue_grp_tbl->s.queue[tbl_index + j] = queue;
>             }
>
>     } else {
>             cos->s.queue = param->queue;
>     }
>
> ...
>
> }
>
> My understanding is that the queue parameters in the 'param' argument are
> not used when initializing 'cos->s.queue_param', only the 'num_queue' and
> 'hash_proto' fields. So the newly created queues will ignore the queue
> parameters (scheduling, type, etc.).
>

The newly created queues will have to use the queue parameters configured
by the application.
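
A minimal sketch of the intended behavior (my interpretation, not a
committed patch): start from the application supplied parameters in the
union rather than the defaults, e.g.

    if (param->num_queue > 1) {
            /* Copy the queue parameters from the param union so that
             * scheduling, type, etc. are honored for the hash queues */
            cos->s.queue_param = param->queue_param;
            ...
    }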


> A second question is that for num_queue > 1 the queues are created
> automatically by ODP, but in odp_cos_destroy() they are not destroyed. So I
> assume either it should be done there, or the user is responsible for
> retrieving and destroying those queues.
>

The queues created using this mechanism will have to be destroyed by the
implementation. The application will not have to worry about these
implementation-created queues.
This is a bug in the linux-generic implementation, and we will fix it.
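
A rough sketch of the missing cleanup in odp_cos_destroy() (hypothetical:
get_cos_entry() stands in for however linux-generic resolves the handle):

    int odp_cos_destroy(odp_cos_t cos_id)
    {
            cos_t *cos = get_cos_entry(cos_id); /* hypothetical lookup */
            uint32_t j, tbl_index;
            ...
            if (cos->s.queue_group) {
                    tbl_index = cos->s.index * CLS_COS_QUEUE_MAX;

                    /* Destroy the queues created in odp_cls_cos_create() */
                    for (j = 0; j < CLS_COS_QUEUE_MAX; j++)
                            odp_queue_destroy(
                                    queue_grp_tbl->s.queue[tbl_index + j]);
            }
            ...
    }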


> Regards,
>
>
>
> --
> Oriol Arcas
> Software Engineer
> Starflow Networks
>
> On Thu, Jan 4, 2018 at 5:37 PM, Bala Manoharan <bala.manoha...@linaro.org>
> wrote:
>
>>
>> Regards,
>> Bala
>>
>> On 4 January 2018 at 21:52, Oriol Arcas <or...@starflownetworks.com>
>> wrote:
>>
>>> Hi Bala,
>>>
>>> I didn't find any hashing parameter in the CoS API. Is it implemented,
>>> or is it a suggestion?
>>>
>>
>> It has been added as part of TigerMoth. Please check the following
>> parameters in CoS.
>>
>> typedef struct odp_cls_cos_param {
>>         /** Number of queues to be linked to this CoS.
>>          *
>>          * If the number is greater than 1, then hashing is enabled.
>>          * If the number is equal to 1, then hashing is disabled.
>>          *
>>          * When hashing is enabled, the queues are created by the
>>          * implementation and the application need not configure any
>>          * queue for the class of service.
>>          * When hashing is disabled, the application has to configure
>>          * the queue for the class of service.
>>          *
>>          * Depending on the implementation, this number might be
>>          * rounded off to the nearest supported value (e.g. a power of 2).
>>          */
>>         uint32_t num_queue;
>>
>>         /** Variant mapping for queue hash configuration */
>>         union {
>>                 /** Mapping used when num_queue = 1. Hashing is disabled
>>                  * in this case; the application has to configure this
>>                  * queue, and packets are delivered to it. */
>>                 odp_queue_t queue;
>>
>>                 /** Mapping used when num_queue > 1. Hashing is enabled
>>                  * in this case, and the queues are created by the
>>                  * implementation. */
>>                 struct {
>>                         /** Queue parameters */
>>                         odp_queue_param_t queue_param;
>>
>>                         /** Protocol header fields which are included in
>>                          * packet input hash calculation */
>>                         odp_pktin_hash_proto_t hash_proto;
>>                 };
>>         };
>>
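>> Something like this (untested sketch; the CoS name and field values are
>> illustrative, not from this thread):
>>
>> odp_cls_cos_param_t cos_param;
>> odp_cos_t cos;
>>
>> odp_cls_cos_param_init(&cos_param);
>>
>> /* num_queue > 1 enables hashing; the queues are created by the
>>  * implementation */
>> cos_param.num_queue = 4;
>>
>> /* Parameters applied to the implementation-created queues */
>> odp_queue_param_init(&cos_param.queue_param);
>> cos_param.queue_param.type = ODP_QUEUE_TYPE_SCHED;
>> cos_param.queue_param.sched.sync = ODP_SCHED_SYNC_ATOMIC;
>>
>> /* Include the IPv4/UDP tuple in the input hash */
>> cos_param.hash_proto.proto.ipv4_udp = 1;
>>
>> cos = odp_cls_cos_create("udp-rss-cos", &cos_param);
>>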
>>>
>>> --
>>> Oriol Arcas
>>> Software Engineer
>>> Starflow Networks
>>>
>>> On Thu, Jan 4, 2018 at 5:11 PM, Bala Manoharan <
>>> bala.manoha...@linaro.org> wrote:
>>>
>>>> Hi,
>>>>
>>>> In addition to configuring hashing on the pktio interface as Petri
>>>> suggested, you can also configure hashing on a CoS.
>>>> By configuring hashing on a CoS, you can effectively configure hashing
>>>> for a particular flow.
>>>>
>>>> Regards,
>>>> Bala
>>>>
>>>> On 4 January 2018 at 20:51, Oriol Arcas <or...@starflownetworks.com>
>>>> wrote:
>>>>
>>>>> Bill, Petri, Bogdan,
>>>>>
>>>>> Thank you for your fast feedback. It's been incredibly instructive.
>>>>> We were looking for something like the input hash that Petri points
>>>>> out, even at the price of not having classification (which we could
>>>>> implement manually).
>>>>>
>>>>> I'll dive into the details of queue scheduling and sticking to a CPU.
>>>>>
>>>>> Thank you all.
>>>>>
>>>>> --
>>>>> Oriol Arcas
>>>>> Software Engineer
>>>>> Starflow Networks
>>>>>
>>>>> On Thu, Jan 4, 2018 at 11:01 AM, Bogdan Pricope <
>>>>> bogdan.pric...@linaro.org>
>>>>> wrote:
>>>>>
>>>>> > I guess the issue is not how to hash traffic into different scheduled
>>>>> > queues but how to lock a scheduled queue to a single thread (core):
>>>>> > sched.sync guarantees that at any one moment a queue is scheduled to a
>>>>> > single thread, but not to the same thread every time - this may be
>>>>> > enough for some implementations (to avoid some locks) but not enough
>>>>> > for others.
>>>>> >
>>>>> > One problem is that all pktio sched queues are assigned to a single
>>>>> > sched group that is assigned to a single group of threads/cores. If I
>>>>> > understand correctly, Bill suggests classification + sched queues,
>>>>> > where each queue is assigned to a different sched group that is
>>>>> > assigned to a single thread/core.
>>>>> >
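>>>>> > A sketch of that per-thread group idea (names are illustrative;
>>>>> > assumes one worker thread per group):
>>>>> >
>>>>> > odp_thrmask_t mask;
>>>>> > odp_schedule_group_t grp;
>>>>> > odp_queue_param_t qparam;
>>>>> >
>>>>> > /* Group containing only this worker's thread */
>>>>> > odp_thrmask_zero(&mask);
>>>>> > odp_thrmask_set(&mask, odp_thread_id());
>>>>> > grp = odp_schedule_group_create("worker_grp", &mask);
>>>>> >
>>>>> > /* Create the classification target queue in that group */
>>>>> > odp_queue_param_init(&qparam);
>>>>> > qparam.type = ODP_QUEUE_TYPE_SCHED;
>>>>> > qparam.sched.group = grp;
>>>>> >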
>>>>> > Another idea is to use direct mode (+ RSS), where each worker polls
>>>>> > from its own pktin all the time (odp_pktin_recv()).
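>>>>> >
>>>>> > A rough sketch of that direct-mode worker loop (illustrative; BURST,
>>>>> > exit_flag and process_packet() are application defined):
>>>>> >
>>>>> > /* pktin is this worker's odp_pktin_queue_t, obtained with
>>>>> >  * odp_pktin_queue() after configuring hash_enable = 1 and
>>>>> >  * num_queues = number of workers */
>>>>> > odp_packet_t pkt_tbl[BURST];
>>>>> > int i, num;
>>>>> >
>>>>> > while (!exit_flag) {
>>>>> >         num = odp_pktin_recv(pktin, pkt_tbl, BURST);
>>>>> >
>>>>> >         for (i = 0; i < num; i++)
>>>>> >                 process_packet(pkt_tbl[i]);
>>>>> > }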
>>>>> >
>>>>> > On 4 January 2018 at 10:24, Savolainen, Petri (Nokia - FI/Espoo)
>>>>> > <petri.savolai...@nokia.com> wrote:
>>>>> > >
>>>>> > >
>>>>> > >> -----Original Message-----
>>>>> > >> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf
>>>>> > >> Of Oriol Arcas
>>>>> > >> Sent: Wednesday, January 03, 2018 7:12 PM
>>>>> > >> To: LNG ODP Mailman List <lng-odp@lists.linaro.org>
>>>>> > >> Subject: [lng-odp] RSS in ODP
>>>>> > >>
>>>>> > >> Hello and happy new year,
>>>>> > >>
>>>>> > >> In our company we are looking into scaling the odp_schedule()
>>>>> > >> calls. Currently we are manually doing Receive Side Scaling,
>>>>> > >> which requires one CPU to receive all the packets and pass them
>>>>> > >> to the other worker CPUs in a flow-deterministic way (i.e., not
>>>>> > >> spreading the packets from a TCP flow across different CPUs).
>>>>> > >> Obviously this is a bottleneck.
>>>>> > >>
>>>>> > >> It would be great if ODP had optional RSS policies, which would
>>>>> > >> ultimately assign packets from the same flow to a single thread
>>>>> > >> in the schedule group (usually by hashing the address tuple).
>>>>> > >> Would this mean having dedicated queues?
>>>>> > >>
>>>>> > >> I don't know if there is something similar in ODP already that I
>>>>> > >> have missed. I'd appreciate any feedback!
>>>>> > >>
>>>>> > >> Best regards,
>>>>> > >>
>>>>> > >> --
>>>>> > >> Oriol Arcas
>>>>> > >> Software Engineer
>>>>> > >> Starflow Networks
>>>>> > >
>>>>> > >
>>>>> > > Our l2fwd test application (odp_l2fwd.c) configures packet input
>>>>> > > hashing, which is in practice RSS, but could also be some other
>>>>> > > implementation-defined packet input hash function. You can take a
>>>>> > > look from there. The same hash configuration is possible for both
>>>>> > > direct pktin queues and scheduled event queues. For scheduled
>>>>> > > queues you would enable it something like this:
>>>>> > >
>>>>> > > /* Normal interface open and config steps */
>>>>> > > pktio = odp_pktio_open(dev, pool, &pktio_param);
>>>>> > > odp_pktio_config(pktio, &config);
>>>>> > >
>>>>> > > /*
>>>>> > >  * Setup packet input hashing into scheduled event queues
>>>>> > >  */
>>>>> > > if (num_rx_queues > capa.max_input_queues)
>>>>> > >         num_rx_queues = capa.max_input_queues;
>>>>> > >
>>>>> > > odp_pktin_queue_param_init(&pktin_param);
>>>>> > >
>>>>> > > pktin_param.queue_param.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
>>>>> > > pktin_param.queue_param.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
>>>>> > > pktin_param.queue_param.sched.group = ODP_SCHED_GROUP_ALL;
>>>>> > > pktin_param.hash_enable = 1;
>>>>> > > pktin_param.hash_proto.proto.ipv4_udp = 1;
>>>>> > > pktin_param.num_queues  = num_rx_queues;
>>>>> > >
>>>>> > > if (odp_pktin_queue_config(pktio, &pktin_param))
>>>>> > >         return -1;
>>>>> > >
>>>>> > > /* Optionally, see which event queues have been created by the
>>>>> > >  * previous call. May e.g. want to set queue contexts here.
>>>>> > >  */
>>>>> > > if (odp_pktin_event_queue(pktio, rx_queues, num_rx_queues) !=
>>>>> > >     num_rx_queues)
>>>>> > >         return -1;
>>>>> > >
>>>>> > > /* Starts packet input */
>>>>> > > odp_pktio_start(pktio);
>>>>> > >
>>>>> > > /* Use scheduler to receive packets ...*/
>>>>> > >
>>>>> > >
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
>
