On 04/06/15 14:07, Bill Fischofer wrote:
Yes, that should be updated to clarify this point. Thanks.

+1 for that.

Maxim.

On Mon, Apr 6, 2015 at 5:16 AM, Ciprian Barbu <[email protected]> wrote:

    On Fri, Apr 3, 2015 at 8:39 PM, Bill Fischofer
    <[email protected]> wrote:
    > The pool specified on odp_pktio_open() is simply the default
    > pool to use if a PMR doesn't provide a more specific match to a
    > CoS.  In the latter case the pool associated with the CoS
    > applies.

    Should we update the doxygen documentation of odp_pktio_open to
    mention the pool is only used by the default CoS?
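
    A minimal sketch of that behaviour, assuming the ODP v1.0-era
    two-argument odp_pktio_open() signature (pool and interface names
    are placeholders):

        #include <odp.h>

        /* Default pool: used only for packets that no PMR maps to a
         * more specific CoS. */
        odp_pool_t default_pool = odp_pool_lookup("default_pool");

        /* Packets arriving on "eth0" that match no PMR are allocated
         * from default_pool; packets matched to a CoS are allocated
         * from that CoS's pool instead. */
        odp_pktio_t pktio = odp_pktio_open("eth0", default_pool);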

    >
    > On Fri, Apr 3, 2015 at 12:32 PM, Benoît Ganne <[email protected]> wrote:
    >>
    >> Hi Bill,
    >>
    >> Thanks for the feedback. It was more or less what I was
    >> thinking about, but the fact that pktio_open() takes a pool as
    >> a parameter made me think it was per-iface.
    >>
    >> Thanks,
    >> Ben.
    >>
    >>
    >> Sent from Samsung Mobile.
    >>
    >>
    >> -------- Original message --------
    >> From: Bill Fischofer
    >> Date: 03/04/2015 18:14 (GMT+01:00)
    >> To: Benoît Ganne
    >> Cc: LNG ODP Mailman List
    >> Subject: Re: [lng-odp] ODP port to a new architecture
    >>
    >> Hi Benoît,
    >>
    >> What you describe seems quite doable with the current ODP API
    >> set; however, we're always looking to refine the APIs to best
    >> match the capabilities of the various platforms that support
    >> ODP--especially those that embody novel HW architectures--so
    >> I'd encourage you to participate in the mailing list and the
    >> regular weekly calls that Maxim has already mentioned.
    >>
    >> The ODP classifier APIs are intended to be very general and
    >> not tied to specific embodiments.  PMRs can be associated with
    >> PktIO objects, but that's not a requirement.  The intended
    >> flow is that packets are matched against PMRs to find the
    >> most-specific match, and that process assigns a CoS to the
    >> arriving packet.  A CoS, in turn, specifies both the pool that
    >> should be used to store the packet as well as the queue (or
    >> queue group) that it should be added to for scheduling.  Queue
    >> groups are not part of ODP v1.0 but provide a means of
    >> distributing flow-related packets to individual related queues
    >> that form the queue group.  We expect to be adding this
    >> capability to the APIs this year.
    >>
    >> So it sounds like you'd want a pool per address space and use
    >> PMRs to sort arriving packets into a set of CoSes that map to
    >> the appropriate per-AS pool.
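
    A rough sketch of that arrangement, using the v1.0-era
    classification calls (odp_cos_create(), odp_cos_set_queue(),
    odp_pmr_create_match(), odp_pktio_pmr_cos()); odp_cos_set_pool()
    is an assumed setter that may be named differently or absent in a
    given ODP release. Object names are placeholders and pktio is the
    handle returned by odp_pktio_open():

        /* One pool, queue and CoS per address space. */
        odp_pool_t  as0_pool = odp_pool_lookup("as0_pool");
        odp_queue_t as0_q    = odp_queue_lookup("as0_q");
        odp_cos_t   as0_cos  = odp_cos_create("as0_cos");

        /* Bind the CoS to the per-AS pool and queue. */
        odp_cos_set_pool(as0_cos, as0_pool);   /* assumed setter */
        odp_cos_set_queue(as0_cos, as0_q);

        /* Steer, e.g., UDP destination port 5000 to that CoS. */
        uint16_t val = 5000, mask = 0xffff;
        odp_pmr_t pmr = odp_pmr_create_match(ODP_PMR_UDP_DPORT,
                                             &val, &mask, sizeof(val));
        odp_pktio_pmr_cos(pmr, pktio, as0_cos);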
    >>
    >> Bill
    >>
    >> On Fri, Apr 3, 2015 at 10:20 AM, Benoît Ganne <[email protected]> wrote:
    >>>
    >>> Hi Maxim, thanks for your quick answer!
    >>>
    >>>> Glad to see new people interested in ODP. There are many
    >>>> ways to participate: the mailing list, the regular public
    >>>> meeting on Tuesday, or being a member of the Linaro LNG team
    >>>> with its benefits (drive next API development, use LNG
    >>>> infrastructure, set and prioritize tasks).
    >>>
    >>>
    >>> I will join the Tuesday meetings.
    >>> Regarding Linaro membership, I couldn't find much information
    >>> on the Linaro website. Do you have pointers about what is
    >>> required and what it means? One of my concerns is that our
    >>> architecture is not ARM-based. ODP is ISA-agnostic, but I'm
    >>> not sure about Linaro.
    >>>
    >>>> If you go to http://www.opendataplane.org/ you can find the
    >>>> different repos for odp.
    >>>
    >>> [...]
    >>>>
    >>>> So the best thing is to start by looking at the
    >>>> linux-generic implementation, then branch it out to your
    >>>> local tree and start implementing the pktio and queue
    >>>> functions. As a reference for how best to do it, you can
    >>>> take a look at the TI Keystone2 implementation, or at DPDK
    >>>> or Netmap.
    >>>
    >>>
    >>> Yes, this is what we started to do. And the only real issue
    >>> we have seen so far is this pktio <--> pool association.
    >>>
    >>>> Regarding address spaces, I think you should be OK with
    >>>> implementing the odp_shm_reserve() function, which should
    >>>> take care of all your internal address spaces.
    >>>
    >>>
    >>> Right. The packet pool API itself fits our needs nicely.
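
    For reference, a sketch of that shm-plus-pool pairing, assuming
    the v1.0-era three-argument odp_pool_create() that takes an
    odp_shm_t (later releases dropped the shm argument); all names
    and sizes are placeholders:

        #define AS0_SHM_SIZE (8 * 1024 * 1024)  /* placeholder size */

        /* Reserve backing memory inside this address space... */
        odp_shm_t shm = odp_shm_reserve("as0_pkt_mem", AS0_SHM_SIZE,
                                        ODP_CACHE_LINE_SIZE, 0);

        /* ...and carve a packet pool out of it. */
        odp_pool_param_t params;
        params.type        = ODP_POOL_PACKET;
        params.pkt.seg_len = 1856;   /* placeholder segment length */
        params.pkt.len     = 1856;   /* placeholder packet length */
        params.pkt.num     = 1024;   /* placeholder packet count */

        odp_pool_t pool = odp_pool_create("as0_pool", shm, &params);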
    >>>
    >>>> Then you can create a bunch of odp pools, one for each
    >>>> segment, then call odp_pktio_open() for each pool and then
    >>>> bind it to a queue with odp_queue_create(). Does that work
    >>>> for you?
    >>>> Or maybe in your configuration we should consider segmented
    >>>> pool support, i.e. a pool represented by several memory
    >>>> chunks.
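
    Roughly, per that suggestion, with odp_pktio_inq_setdef() as the
    v1.0-era call for binding a default input queue; "eth0" and
    seg_pool (one of the per-segment pools) are placeholders:

        /* One pool per memory segment, one pktio fed from it. */
        odp_pktio_t pktio = odp_pktio_open("eth0", seg_pool);

        /* Create a default input queue and attach it to the pktio. */
        odp_queue_t inq = odp_queue_create("eth0_inq",
                                           ODP_QUEUE_TYPE_PKTIN, NULL);
        odp_pktio_inq_setdef(pktio, inq);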
    >>>
    >>>
    >>> Not sure whether it could fix our issue. I will try to
    >>> explain better; here is what we have on the packet RX path:
    >>>             +------------+
    >>>             | DISPATCHER |
    >>> iface0 ---> |     X      | ---> address space 0
    >>> iface1 ---> |    PMR     | ---> address space 1
    >>> iface2 ---> |    CoS     |          ...
    >>> iface3 ---> | Scheduling | ---> address space N
    >>>             +------------+
    >>>
    >>> iface[0-3] are physical Ethernet interfaces. The DISPATCHER
    >>> block receives packets from the various interfaces and is
    >>> responsible for classifying incoming packets (i.e. assigning
    >>> them a CoS) based on PMRs. Once the packet's CoS is
    >>> determined, it schedules the packet in a CoS-aware and
    >>> flow-aware manner. The DISPATCHER can schedule packets to any
    >>> address space.
    >>> Packets are not buffered by the DISPATCHER. They go through
    >>> it and are buffered for processing in the target address
    >>> space.
    >>> What this means is that:
    >>>  - packet pools are created in the various address spaces
    >>>  - each configured CoS can be scheduled (in a flow-aware
    >>>    manner) to any address space, or only to some of them
    >>>    (this is user-defined)
    >>>
    >>> My initial idea was to consider each iface as a pktio, create
    >>> queues in the various address spaces, and associate a CoS
    >>> with a group of queues. When the DISPATCHER determines the
    >>> packet's CoS, it can schedule it to one of the queues
    >>> belonging to the CoS queue group, and the packet will end up
    >>> in the address space of the selected queue.
    >>> But if I do that, I need to associate the packet pool with a
    >>> queue rather than with a pktio. A packet would have the
    >>> following path: ingress pktio --> PMR --> CoS --> queue -->
    >>> pool.
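
    Purely to make that ask concrete: the setter below does not exist
    in any ODP release; odp_queue_pool_set() is hypothetical, as are
    the object names:

        /* HYPOTHETICAL API -- not part of ODP: attach a pool to an
         * individual queue instead of to the pktio or to the CoS. */
        odp_queue_pool_set(as1_q, as1_pool);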
    >>>
    >>> Now, your proposal is to open a pktio in each address space.
    >>> But it looks to me that this means packets from different
    >>> CoSes would use the same pool? I would like to avoid that.
    >>> Moreover, as PMRs and CoSes are defined per pktio, this would
    >>> mean that each address space could have its own PMR and CoS
    >>> setup. This won't map well onto our HW.
    >>>
    >>> Thanks,
    >>> ben
    >>>
    >>> --
    >>> Benoît GANNE
    >>> Field Application Engineer, Kalray
    >>> +33 (0)648 125 843




_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
