On Mon, May 4, 2015 at 10:19 PM, Bill Fischofer
<[email protected]> wrote:
> This is a non-trivial problem because it's not obvious what the "correct"
> answer is. While one can certainly engineer test cases where it's better to
> schedule completions back to the same core (e.g., cache is already "warm",
> etc.), that's not always the case.  If, at the time of completion, the
> originating core has just started some other transaction but another core is
> free, should you wait or go ahead and use the idle core?  What if there
> are now N events in the same situation?
>
> In general it's poor design for the application to try to "outthink" the
> implementation on such things, just like you don't want to try to outthink
> the compiler when writing C code.  That's essentially designing an
> implementation model into the application, which is contrary to the
> performance and portability goals that ODP is trying to encourage.  A better
> approach might be to have event attributes that are visible to the scheduler
> and then let the implementations tune themselves to take such
> considerations into account over time. Right now linux-generic, at least,
> has a very basic scheduler but you'd expect other implementations to have
> more sophisticated ones, especially as this model becomes more widespread.

I only brought this up because queue schedule groups are defined in
ODP, but the only value available today is ODP_SCHED_GROUP_ALL:
https://git.linaro.org/lng/odp.git/blob/HEAD:/include/odp/api/queue.h#l151
The concept was taken from Event Machine, but not in its entirety, so
I cannot help but think that the design around it is incomplete or
misleading.

Also, if I'm not mistaken, plenty of HW platforms support queue
schedule groups; exposing this in ODP would greatly increase
flexibility.
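To make the use case concrete, here is a minimal, self-contained toy sketch in plain C of what schedule groups would buy. None of the names below are the real ODP API (today ODP only defines ODP_SCHED_GROUP_ALL); the queue, the group tag, and the per-core dispatch function are all stand-ins for illustration. The idea is simply that a completion event can be tagged so that only the initiating core's dispatch loop will pick it up, while untagged events stay available to any core:

```c
#include <assert.h>

/* Toy stand-in for an ODP-style scheduler with per-core schedule
 * groups. All names here are hypothetical illustrations, NOT the
 * real ODP API. */

#define GROUP_ALL  (-1)   /* any core may receive the event */
#define MAX_EVENTS 16

typedef struct {
    int payload;          /* stand-in for the completion event data */
    int group;            /* GROUP_ALL, or the id of the only eligible core */
} event_t;

static event_t queue[MAX_EVENTS];
static int q_head, q_tail;

/* Post a completion event, tagged with the group allowed to consume it. */
static void enqueue(int payload, int group)
{
    queue[q_tail].payload = payload;
    queue[q_tail].group   = group;
    q_tail = (q_tail + 1) % MAX_EVENTS;
}

/* The scheduler as seen by one core: deliver the first queued event
 * whose group matches this core (or GROUP_ALL). Returns the payload,
 * or -1 if no eligible event is pending. */
static int schedule(int core_id)
{
    for (int i = q_head; i != q_tail; i = (i + 1) % MAX_EVENTS) {
        if (queue[i].group == GROUP_ALL || queue[i].group == core_id) {
            int payload = queue[i].payload;
            /* Compact the queue over the consumed slot (toy code;
             * a real scheduler would not shift events like this). */
            for (int j = i; j != q_tail; ) {
                int next = (j + 1) % MAX_EVENTS;
                queue[j] = queue[next];
                j = next;
            }
            q_tail = (q_tail - 1 + MAX_EVENTS) % MAX_EVENTS;
            return payload;
        }
    }
    return -1;
}
```

With this, the crypto scenario from the original question maps directly: the initiating core would tag the completion with its own group (`enqueue(op, my_core_id)`), other cores' `schedule()` calls would skip it, and the initiating core would still receive GROUP_ALL work in the meantime.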

>
> On Mon, May 4, 2015 at 9:57 AM, Ciprian Barbu <[email protected]>
> wrote:
>>
>> Hi,
>>
>> I received this question internally from some people looking at ODP, I
>> don't know how to answer. It goes like this.
>>
>> Say you want to execute a crypto session in async mode, using a
>> completion queue. A core would initiate the session and then go about
>> its business. ODP will execute the operation asynchronously and push
>> the completion event onto the specified queue when ready. Now, the
>> application must either poll that queue in a receive loop or call
>> odp_schedule, for a more general programming model where everything
>> is dispatched through the scheduler.
>>
>> The question is, isn't there a way to make sure the completion event
>> will be received by the core that initiated the operation, and not by
>> some random one depending on how the scheduler treats the event? I
>> remember something about a cpumask for queues, but we abandoned that
>> concept.
>>
>> /Ciprian
>> _______________________________________________
>> lng-odp mailing list
>> [email protected]
>> https://lists.linaro.org/mailman/listinfo/lng-odp
>
>