Hi Maxim, thanks for your quick answer!

Glad to see new people interested in ODP. There are many ways to
participate: the mailing list, the regular public meeting on Tuesdays,
or becoming a member of the Linaro LNG team with its benefits (driving
next API development, using the LNG infrastructure, setting and
prioritizing tasks).

I will join the Tuesday meetings.
Regarding Linaro membership, I couldn't find much information on the Linaro website. Do you have pointers about what is required and what it means? One of my concerns is that our architecture is not ARM-based. ODP is ISA-agnostic, but I am not sure about Linaro.

If you go to http://www.opendataplane.org/ you can find the different
repos for ODP.
[...]
So the best thing is to start by looking at the linux-generic
implementation, then branch it out to your local tree and start
implementing the pktio and queue functions. As a reference for how
best to do it, you can take a look at the TI Keystone2, DPDK, or
Netmap implementations.

Yes, this is what we started to do, and the only real issue we have seen so far is this pktio <--> pool association.
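
For reference, these are the pktio entry points we started from (a sketch based on the signatures in our linux-generic checkout; the API may have evolved since, so double-check against the current tree):

    /* Public pktio API a platform port has to provide (early ODP
     * signatures; odp_pktio_open() takes the default packet pool). */
    odp_pktio_t odp_pktio_open(const char *dev, odp_pool_t pool);
    int odp_pktio_close(odp_pktio_t pktio);
    int odp_pktio_recv(odp_pktio_t pktio, odp_packet_t pkt_table[],
                       unsigned len);
    int odp_pktio_send(odp_pktio_t pktio, odp_packet_t pkt_table[],
                       unsigned len);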

About address spaces, I think you should be OK with implementing the
odp_shm_reserve() function, which should take care of all your
internal address spaces.

Right. The packet pool API itself fits our needs nicely.
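
For instance, per address space we would do something like this (a minimal sketch; names and sizes are made up, and I am using the odp_pool_create() variant that takes a pre-reserved odp_shm_t, which may differ in your tree):

    /* One shm region and one packet pool per address space. */
    for (int i = 0; i < N_ADDR_SPACES; i++) { /* N_ADDR_SPACES is ours */
        char name[32];
        odp_shm_t shm;
        odp_pool_param_t params;

        snprintf(name, sizeof(name), "pkt_pool_as%d", i);
        shm = odp_shm_reserve(name, 16 * 1024 * 1024, /* arbitrary size */
                              ODP_CACHE_LINE_SIZE, 0);

        memset(&params, 0, sizeof(params));
        params.type    = ODP_POOL_PACKET;
        params.pkt.num = 1024;   /* arbitrary */
        params.pkt.len = 1518;   /* arbitrary */
        pool[i] = odp_pool_create(name, shm, &params);
    }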

Then you can create a bunch of ODP pools, one for each segment, call
odp_pktio_open() for each pool, and then bind it to a queue with
odp_queue_create(). Does that work for you?
Or maybe in your configuration we should consider segmented pool
support, i.e. a pool represented by several memory chunks.
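
If I understand correctly, that corresponds to something like the following per address space (a sketch assuming the early linux-generic API, where odp_pktio_inq_setdef() binds the default input queue; device and queue names are made up):

    odp_pktio_t pktio;
    odp_queue_t inq;
    odp_queue_param_t qparam;

    /* RX packets for this interface land in the pool of the target
     * address space. */
    pktio = odp_pktio_open("eth0", pool[i]);

    /* Scheduled input queue bound to that pktio. */
    memset(&qparam, 0, sizeof(qparam));
    qparam.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
    qparam.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
    qparam.sched.group = ODP_SCHED_GROUP_DEFAULT;
    inq = odp_queue_create("eth0_inq", ODP_QUEUE_TYPE_PKTIN, &qparam);
    odp_pktio_inq_setdef(pktio, inq);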

I am not sure whether that would fix our issue. Let me try to explain better; here is what we have on the packet RX path:
            +------------+
            | DISPATCHER |
iface0 ---> |     X      | ---> address space 0
iface1 ---> |    PMR     | ---> address space 1
iface2 ---> |    CoS     |          ...
iface3 ---> | Scheduling | ---> address space N
            +------------+

iface[0-3] are physical Ethernet interfaces. The DISPATCHER block receives packets from the various interfaces and is responsible for classifying incoming packets (i.e. assigning them a CoS) based on PMRs. Once the packet CoS is determined, it schedules the packet in a CoS-aware and flow-aware manner. The DISPATCHER can schedule packets to any address space. Packets are not buffered by the DISPATCHER: they go through it, and they are buffered for processing in the target address space.
What this means is that:
 - packet pools are created in the various address spaces
 - each configured CoS can be scheduled (in a flow-aware manner) to any address space, or only to some of them (this is user-defined)

My initial idea was to consider each iface as a pktio, create queues in the various address spaces, and associate a CoS with a group of queues. When the DISPATCHER determines the packet CoS, it can schedule it to one of the queues belonging to the CoS queue group, and the packet will end up in the address space of the selected queue. But if I do that, I need to associate the packet pool with a queue rather than with a pktio. A packet would have the following path: ingress pktio --> PMR --> CoS --> queue --> pool, as sketched below.
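
In (entirely hypothetical) code, what I would like to write is something like this, with one such queue per (CoS, address space) pair:

    /* HYPOTHETICAL: odp_queue_pool_set() does not exist in ODP; it is
     * only here to illustrate the queue <-> pool association I need. */
    odp_queue_t q = odp_queue_create("cos2_as1_q",
                                     ODP_QUEUE_TYPE_SCHED, NULL);
    odp_queue_pool_set(q, pool[1]);  /* hypothetical: pool per queue */
    odp_cos_set_queue(cos2, q);      /* classification API, modulo the
                                        exact function name */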

Now, your proposal is to open a pktio in each address space. But it looks to me like this means that packets from different CoSes will use the same pool? I would like to avoid that. Moreover, as PMRs and CoSes are defined per pktio, this would mean that each address space could have its own PMR and CoS setup. This won't map well onto our HW.

Thanks,
ben

--
Benoît GANNE
Field Application Engineer, Kalray
+33 (0)648 125 843