Forwarded to the list.

---------- Forwarded message ----------
From: Mike Holmes <[email protected]>
Date: 14 January 2015 at 14:05
Subject: Re: odp_schedule.h what do we need to resolve this for 1.0
To: Alexandru Badicioiu <[email protected]>
Cc: "Savolainen, Petri (NSN - FI/Espoo)" <[email protected]>, Robert
King <[email protected]>, Bill Fischofer <[email protected]>


I added this as a point in the 2015 delta doc as a placeholder, assuming it
may not be addressed in 1.0:

https://docs.google.com/a/linaro.org/document/d/1Nld3fPAgpYAv8ClqYxxJjMbtcEH7zBHDxXTc7w15m24/edit#

Please clean up the suggestion.

On 14 January 2015 at 07:00, Alexandru Badicioiu <
[email protected]> wrote:

> The scenario I had in mind is:
> 1 - packet 1 is dequeued from ATOMIC queue
> 2 - packet 1 is fragmented in N fragments, each of them is a new packet
> 3 - packet 1 first fragment is enqueued, this will release the atomic
> context implicitly
> 4 - packet 2 is dequeued from the same ATOMIC queue by another core
> 5 - packet 1 second fragment is enqueued, this will release the atomic
> context
> 6 - packet 3 is dequeued from the same ATOMIC queue by another core
>
> Now we have packets 2 and 3 processed in parallel from the same atomic
> queue, which contradicts the definition of an ATOMIC queue.
> How is this scenario handled by the current scheduler API?
>
> Alex
>
>
>
>
> On 14 January 2015 at 13:08, Savolainen, Petri (NSN - FI/Espoo) <
> [email protected]> wrote:
>
>> That’s true if multiple source queues share a destination queue. If you
>> want to avoid interleaving, you should store references to those fragments
>> in the original packet (in user data or headroom) or in a new message, and
>> enqueue that instead of enqueuing the fragments individually.
>>
>>
>>
>> I think we’ll add e.g. a packet list structure later this year to support
>> packet linking in the application. Potentially, that could be extended into
>> a standard solution for bundling packets during queueing.
>>
>>
>>
>> -Petri
>>
>>
>>
>>
>>
>>
>>
>> *From:* ext Alexandru Badicioiu [mailto:[email protected]]
>> *Sent:* Wednesday, January 14, 2015 11:53 AM
>> *To:* Savolainen, Petri (NSN - FI/Espoo)
>> *Cc:* Mike Holmes; Robert King; Bill Fischofer
>>
>> *Subject:* Re: odp_schedule.h what do we need to resolve this for 1.0
>>
>>
>>
>> I think even with an ATOMIC source queue, when a packet is fragmented and
>> the resulting fragments are enqueued, the next dequeue should happen only
>> after all fragments have been enqueued; otherwise the flow can contain
>> packets interleaved with fragments of another packet, or fragments of
>> different packets interleaved with each other. Shouldn't we add something
>> like __keep_atomic_context__ for these cases?
>>
>> Would it be reasonable to assume that fragments of a given packet are
>> always ordered relative to each other (they are if they are produced by a
>> single core)?
>>
>>
>>
>> Alex
>>
>>
>>
>>
>>
>>
>>
>> On 14 January 2015 at 11:02, Savolainen, Petri (NSN - FI/Espoo) <
>> [email protected]> wrote:
>>
>> This is a valid use case: how to insert new packets (in order) into a
>> flow of ordered packets. However, we can leave possible new APIs for that
>> until after v1.0. Currently, if everything (including the new packets) has
>> to maintain order in the destination queue, the user must use an atomic
>> queue as the source queue (i.e. there is no support for that from an
>> ordered queue).
>>
>>
>>
>> If it’s only important to keep the first fragments in order, the current
>> ordered queue definition supports that.
>>
>>
>>
>> -Petri
>>
>>
>>
>>
>>
>> *From:* ext Alexandru Badicioiu [mailto:[email protected]]
>> *Sent:* Wednesday, January 14, 2015 10:10 AM
>> *To:* Mike Holmes
>> *Cc:* Petri Savolainen; Robert King; Bill Fischofer
>> *Subject:* Re: odp_schedule.h what do we need to resolve this for 1.0
>>
>>
>>
>> Hi Mike,
>>
>> the use case I highlighted is when a packet is dequeued from an ORDERED
>> queue and fragmentation is required before the next enqueue - e.g. for
>> IPsec tunnels, fragmenting before tunneling to avoid possible later
>> fragmentation of the ESP packet due to the length increase, and the need
>> to reassemble the ESP packet at the destination before decryption. I think
>> we need to handle this case explicitly with some additional call(s).
>>
>>
>>
>> Alex
>>
>>
>>
>>
>>
>> On 13 January 2015 at 21:57, Mike Holmes <[email protected]> wrote:
>>
>> Hi Alex
>>
>>
>>
>> You had a note in this doc
>>
>>
>>
>>
>> https://docs.google.com/a/linaro.org/document/d/1BRVyW8IIVMTq4nhB_vUz5y-te6TEdu5g1XgolujjY6c/edit#
>>  odp_schedule.h
>>
>> Petri  >>>> Alex has use cases that appear to need API changes - have we
>> addressed these ?
>>
>>
>>
>> · Remove: odp_schedule_one (Mike: patch posted)
>>
>>   o Only way to break out from the schedule loop is to first call
>>     odp_schedule_pause() and then call odp_schedule()/_multi() until an
>>     INVALID buffer is returned
>>
>>   o Configuration options to optimize for throughput vs. latency/QoS
>>     will be added later
>>
>> · Add odp_schedule_skip_order()
>>
>>   o Merge the patch
>> odp_schedule_one()
>>
>>   Action: Delete
>>   Notes:  Waiting on timers
>>
>> odp_schedule_skip_order()
>>
>>   Action: NEW
>>   Notes:  New API. Signature:
>>           void odp_schedule_skip_order(odp_queue_t dest, odp_buffer_t buf);
>>
>> Petri  >>>> What does this do ?
>>
>> Configuration option to define max per-thread scheduler caching (down to
>> one).
>>
>>
>>
>>
>>
>> --
>>
>> *Mike Holmes*
>>
>> Linaro  Sr Technical Manager
>>
>> LNG - ODP
>>
>>
>>
>>
>>
>
>


-- 
*Mike Holmes*
Linaro  Sr Technical Manager
LNG - ODP
_______________________________________________
lng-odp mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/lng-odp
