On Fri, 2010-12-17 at 07:57 +0100, Mike Galbraith wrote:
> On Thu, 2010-12-16 at 14:49 -0500, Rik van Riel wrote:

> > >> +static void yield_to_fair(struct rq *rq, struct task_struct *p)
> > >> +{
> > >> +        struct sched_entity *se = &p->se;
> > >> +        struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > >> +        u64 remain = slice_remain(current);
> > >> +
> > >> +        dequeue_task(rq, p, 0);
> > >> +        se->vruntime -= remain;
> > >> +        if (se->vruntime < cfs_rq->min_vruntime)
> > >> +                se->vruntime = cfs_rq->min_vruntime;
> > >
> > > This has an excellent chance of moving the recipient rightward.. and the
> > > yielding task didn't yield anything.  This may achieve the desired
> > > result or may just create a nasty latency spike... but it makes no
> > > arithmetic sense.
> > 
> > Good point, the current task calls yield() in the function
> > that calls yield_to_fair, but I seem to have lost the code
> > that penalizes the current task's runtime...
> > 
> > I'll reinstate that.
> 
> See comment in parentheses above :)

BTW, with this vruntime donation thingy, what prevents a task from
forking off accomplices who do nothing but wait for a wakeup and
yield_to(exploit)?

Even swapping vruntimes in the same cfs_rq is dangerous as hell, because
one party is going backward.

        -Mike

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
