On Tue, 2014-07-15 at 14:59 +0200, Thomas Gleixner wrote:
> On Tue, 15 Jul 2014, Peter Zijlstra wrote:
> 
> > On Tue, Jul 15, 2014 at 11:50:45AM +0200, Peter Zijlstra wrote:
> > > So you already have an idle notifier (which is x86 only, we should fix
> > > that I suppose), and you then double check there really isn't anything
> > > else running.
> > 
> > Note that we've already done a large part of the expense of going idle
> > by the time we call that idle notifier -- in specific, we've
> > reprogrammed the clock to stop the tick.
> > 
> > It's really wasteful to then generate work again, which means we have
> > to reprogram the clock again, etc.
> 
> Doing anything which is not related to idle itself in the idle
> notifier is just plain wrong.

I don't like kicking the multi-buffer job flush via the idle_notifier
path either.  I'll try another version of the patch that does the flush
in the multi-buffer job handler path.
 
> 
> If that stuff wants to utilize idle slots, we really need to come up
> with a generic and general solution. Otherwise we'll grow those warts
> all over the architecture space, each with a slightly different way of
> wrecking the world, and then some.
> 
> This whole attitude of people thinking that they need their own
> specialized scheduling around the real scheduler is a PITA. All this
> stuff is just damaging any sensible approach to power saving, load
> balancing, etc.
> 
> What we really want is infrastructure, which allows the scheduler to
> actively query the async work situation and based on the results
> actively decide when to process it and where.

I agree with you.  It would be great to have such infrastructure.

Thanks.

Tim

