Yes.  Basically you want a test that creates 10,000+ queues that have work
on them and then measures things like average queue wait time, etc.
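
A minimal sketch of such a test, in plain C (the queue/event plumbing
below is a hypothetical stand-in for the real ODP queue and scheduler
calls; the point is stamping events at enqueue and accumulating the
dequeue-time delta):

    /*
     * Sketch: pre-load many queues with events, stamp each event at
     * enqueue, then accumulate (dequeue time - enqueue time) as the
     * "queue wait time" while draining.
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NUM_QUEUES       10000
    #define EVENTS_PER_QUEUE 16

    typedef struct { uint64_t enq_ns; } event_t;
    typedef struct { event_t *ev[EVENTS_PER_QUEUE]; int head, tail; } queue_t;

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    static void q_enq(queue_t *q, event_t *e)
    {
        e->enq_ns = now_ns();            /* stamp at enqueue */
        q->ev[q->tail++] = e;
    }

    static event_t *q_deq(queue_t *q)
    {
        return q->head < q->tail ? q->ev[q->head++] : NULL;
    }

    int main(void)
    {
        static queue_t queues[NUM_QUEUES];
        uint64_t total_wait = 0, total_events = 0;

        /* Give every queue some work. */
        for (int i = 0; i < NUM_QUEUES; i++)
            for (int j = 0; j < EVENTS_PER_QUEUE; j++)
                q_enq(&queues[i], malloc(sizeof(event_t)));

        /* Drain in scheduler order (plain round robin here). */
        for (int i = 0; i < NUM_QUEUES; i++) {
            event_t *e;
            while ((e = q_deq(&queues[i])) != NULL) {
                total_wait += now_ns() - e->enq_ns;
                total_events++;
                free(e);
            }
        }

        printf("avg queue wait: %" PRIu64 " ns over %" PRIu64 " events\n",
               total_wait / total_events, total_events);
        return 0;
    }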

On Tue, Sep 8, 2015 at 9:03 AM, Mike Holmes <[email protected]> wrote:

> Do we need to add a specific test to the performance suite to aid in
> assessing these issues?
>
> On 8 September 2015 at 09:57, Nicolas Morey-Chaisemartin <[email protected]> wrote:
>
>>
>>
>> On 09/08/2015 03:33 PM, Ola Liljedahl wrote:
>> > Sorry I missed this discussion. It is really interesting. IMO the
>> > linux-generic scheduler is too simplistic to be used as is or to have
>> > its behaviour copied. We have seen some undesirable behaviour in our
>> > internal work where we use ODP. Very simplified, we essentially have
>> > two levels of processing, both fed by the scheduler to the same shared
>> > set of CPUs (sometimes only one CPU). The first level is fed using one
>> > queue and the second level is fed using N queues. Both levels are fed
>> > the same number of packets, so the first level queue transfers N
>> > packets while the second level queues transfer 1 packet each (on
>> > average). The linux-generic scheduler does round robin over queues,
>> > trying to give *equal* attention to each queue. But in our design, all
>> > queues are not equal and the first level queue should get more attention.
>> >
>> > One idea was to specify weights with the queues and do some kind of
>> > weighted queueing. But I don't like the idea of trying to know the
>> > optimal weights and then configuring how the scheduler should work.
>> > Possibly these weights are not constant. I think the scheduler needs to
>> > dynamically adapt to the current workload; this will be much more
>> > robust and it also requires less configuration by the application
>> > (which is always difficult).
>>
>> Agreed.
>> >
>> > I think the scheduler needs to give each queue attention corresponding
>> > to how many events it (currently) contains. One potential way of doing
>> > that is to keep scheduling from a queue until it is either empty or you
>> > move on to the next queue with probability 2^-n (n being the number of
>> > events on the queue). The more events on a queue, the longer you serve
>> > that queue, but with some increasing probability (as the queue drains)
>> > you move on to the next eligible (non-empty) queue. Over time this
>> > should give some kind of "fair" attention to all queues without
>> > stalling indefinitely on some queue that refuses to drain.
>> >
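
A rough sketch of this probabilistic policy (illustrative only; the
declarations below are hypothetical stand-ins, not the linux-generic
scheduler API):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct queue queue_t;
    typedef struct event event_t;
    event_t *q_deq(queue_t *q);        /* hypothetical: pop or NULL     */
    uint64_t q_len(const queue_t *q);  /* hypothetical: events on queue */
    void process(event_t *e);          /* hypothetical: handle event    */

    /* Return 1 with probability 2^-n: n random bits must all be zero.
     * rand() portably provides at least 15 random bits. */
    static int move_on(uint64_t n)
    {
        if (n >= 15)
            return 0;  /* probability is negligible for long queues */
        return ((unsigned)rand() & ((1U << n) - 1)) == 0;
    }

    /* Keep draining the current queue; after each event, move to the
     * next queue with probability 2^-n (n = events still queued). */
    void schedule_loop(queue_t *queues, int num_queues)
    {
        int cur = 0;

        for (;;) {
            event_t *e = q_deq(&queues[cur]);

            if (e != NULL) {
                process(e);
                if (!move_on(q_len(&queues[cur])))
                    continue;  /* queue still busy: keep serving it */
            }
            /* Queue empty, or the 2^-n coin came up: move on. */
            cur = (cur + 1) % num_queues;
        }
    }
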
>> The issue with this kind of algorithm is that it can be quite difficult
>> to build an efficient (and fair) parallel version of it.
>> If you want to be fair, you need to synchronize your threads so that your
>> probability depends not only on what your thread saw in its queues but
>> also on what the others saw.
>> It often means adding a big lock or something of the sort.
>>
>> I'm not saying it's impossible, but being both scalable (with more cores)
>> and fair is not an easy problem.
>>
>> > The changes to a scheduler that does unconditional round robin
>> > shouldn't be too complicated.
>>
>> A very easy change that would fix the starvation issue would be to start
>> iterating from the previous offset (+1?) instead of our thread ID.
>> This way we can ensure that all threads will go around all the queues.
>> It's not good for cache locality, though.
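
In rough pseudo-C, the suggested change looks like this (illustrative
only, not the actual linux-generic code; try_schedule() is a
hypothetical stand-in). The cache cost is presumably that threads no
longer keep revisiting the same queues, so queue state migrates between
cores more often:

    typedef struct queue queue_t;
    int try_schedule(queue_t *q);  /* hypothetical: 1 if work was found */

    void scan_queues(queue_t *queues, int num_queues)
    {
        static __thread int last;             /* per-thread resume point */
        int start = (last + 1) % num_queues;  /* was: thread_id % num_queues */

        for (int i = 0; i < num_queues; i++) {
            int idx = (start + i) % num_queues;

            if (try_schedule(&queues[idx])) {
                last = idx;  /* next scan resumes just after this queue */
                return;
            }
        }
    }
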
>>
>> I can prepare a patch but I will need someone to evaluate the performance
>> impact on x86. I don't have a clean setup available for benchmarking.
>>
>> Nicolas
>>
>>
>
>
>
> --
> Mike Holmes
> Technical Manager - Linaro Networking Group
> Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
>
>
>
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
