On Wednesday, 21 September 2016, Martin Pieuchot wrote:

> On 21/09/16(Wed) 16:29, David Gwynne wrote:
> > [...]
> > The point I was trying to make was that the existing stuff (tasks,
> > timeouts) can be used together to get the effect we want. My point
> > was very poorly made, though.
> >
> > I think your point is that you can make a clever change to timeouts
> > and not have to do a ton of flow-on code changes to take advantage
> > of it.
>
> I'm trying to fix one problem at a time.
>
> > [...]
> > If timeouts are the way to schedule stuff to run in the future, then
> > we're going to get head-of-line blocking problems. Pending timeouts
> > will end up stuck behind work that is waiting on an arbitrary lock,
> > because there's an implicit single thread that will run them all.
> > Right now that is mitigated by timeouts running in interrupt context;
> > we just don't put a lot of work like that in there at the moment.
>
> Really?  Is it worse than it is now with the KERNEL_LOCK()?
>
> > The benefit of taskqs is that you explicitly steer work to threads
> > that can sleep independently of each other. They lack the ability to
> > schedule work to run in the future, though.
> >
> > It turns out it isn't that hard to make taskqs use a priority queue
> > internally instead of a TAILQ. This allows you to specify that tasks
> > get executed in the future (or right now, like the current behaviour)
> > in an explicit thread (or pool of threads). It does mean a lot of
> > changes to code that's using timeouts now, though.
>
> I agree with you, but these thoughts are IMHO too far ahead.  Everything
> is still serialized in our kernel.
>
>
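To illustrate the priority-queue idea quoted above, here is a minimal
user-space sketch in C. Everything in it (ptaskq, ptask, pt_deadline
and so on) is a made-up name for illustration, not the OpenBSD taskq
API, and a real version would need locking plus a worker that sleeps
until the earliest deadline:

/*
 * Hypothetical sketch only: a taskq whose pending list is a binary
 * min-heap keyed on an absolute deadline instead of a TAILQ.  A
 * deadline of 0 means "run now"; anything else defers the task.
 */
#include <stdio.h>
#include <time.h>

struct ptask {
    void    (*pt_func)(void *);
    void     *pt_arg;
    time_t    pt_deadline;    /* absolute time the task may run */
};

struct ptaskq {
    struct ptask    *pq_heap[64];
    int              pq_len;
};

/* sift the new task up until the heap property holds */
static void
ptaskq_add(struct ptaskq *pq, struct ptask *pt)
{
    int i = pq->pq_len++;

    pq->pq_heap[i] = pt;
    while (i > 0) {
        int parent = (i - 1) / 2;

        if (pq->pq_heap[parent]->pt_deadline <= pt->pt_deadline)
            break;
        pq->pq_heap[i] = pq->pq_heap[parent];
        pq->pq_heap[parent] = pt;
        i = parent;
    }
}

/* pop the task with the earliest deadline, or NULL if empty */
static struct ptask *
ptaskq_next(struct ptaskq *pq)
{
    struct ptask *pt, *last;
    int i = 0;

    if (pq->pq_len == 0)
        return (NULL);
    pt = pq->pq_heap[0];
    last = pq->pq_heap[--pq->pq_len];
    for (;;) {
        int l = 2 * i + 1, r = l + 1, min = i;

        pq->pq_heap[i] = last;
        if (l < pq->pq_len &&
            pq->pq_heap[l]->pt_deadline < last->pt_deadline)
            min = l;
        if (r < pq->pq_len &&
            pq->pq_heap[r]->pt_deadline < pq->pq_heap[min]->pt_deadline)
            min = r;
        if (min == i)
            break;
        pq->pq_heap[i] = pq->pq_heap[min];
        i = min;
    }
    return (pt);
}

int
main(void)
{
    struct ptaskq pq = { .pq_len = 0 };
    struct ptask later = { .pt_deadline = time(NULL) + 5 };
    struct ptask now = { .pt_deadline = 0 };
    struct ptask *pt;

    ptaskq_add(&pq, &later);
    ptaskq_add(&pq, &now);
    /* "now" pops first even though it was added second */
    while ((pt = ptaskq_next(&pq)) != NULL)
        printf("deadline %lld\n", (long long)pt->pt_deadline);
    return (0);
}

The useful property is that a deadline of 0 behaves like the current
"run as soon as possible" semantics, so immediate and deferred work can
share one queue and one pool of worker threads.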
The diff as it is will deadlock against SCHED_LOCK.
tsleep() calls sleep_setup(), which grabs SCHED_LOCK, and then
sleep_setup_timeout() grabs timeout_mutex inside timeout_add().

In softclock() you have the opposite order: it grabs timeout_mutex and
then does a wakeup(), which grabs SCHED_LOCK. Two paths taking the same
pair of locks in opposite orders can each end up holding the lock the
other one is waiting for.
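For what it's worth, the reversal is easy to reproduce in user space.
The following is only a sketch: pthread mutexes stand in for SCHED_LOCK
and timeout_mutex, and sleeper()/ticker() are just labels for the two
kernel paths described above (build with -pthread):

/*
 * Sketch only: two threads take the same pair of locks in opposite
 * orders, which is the reversal described above.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t timeout_mutex = PTHREAD_MUTEX_INITIALIZER;
static volatile unsigned long nsleeps, nticks;    /* racy, demo only */

/* the tsleep() path: SCHED_LOCK first, then timeout_mutex */
static void *
sleeper(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&sched_lock);      /* sleep_setup() */
        pthread_mutex_lock(&timeout_mutex);   /* timeout_add() */
        nsleeps++;
        pthread_mutex_unlock(&timeout_mutex);
        pthread_mutex_unlock(&sched_lock);
    }
}

/* the softclock() path: timeout_mutex first, then SCHED_LOCK */
static void *
ticker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&timeout_mutex);   /* softclock() */
        pthread_mutex_lock(&sched_lock);      /* wakeup() */
        nticks++;
        pthread_mutex_unlock(&sched_lock);
        pthread_mutex_unlock(&timeout_mutex);
    }
}

int
main(void)
{
    pthread_t ts, tt;
    unsigned long s, t;

    pthread_create(&ts, NULL, sleeper, NULL);
    pthread_create(&tt, NULL, ticker, NULL);
    sleep(1);
    s = nsleeps;
    t = nticks;
    sleep(1);
    if (nsleeps == s && nticks == t)
        printf("wedged after %lu sleeps, %lu ticks\n", s, t);
    else
        printf("got lucky, threads are still making progress\n");
    return (0);
}

On most runs the two threads wedge almost immediately, each holding one
lock and blocked on the other, which is exactly the hang the diff would
produce between tsleep() and softclock().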
