On Fri, Jul 10, 2015 at 05:11:30PM +0900, byungchul.p...@lge.com wrote:
> From: Byungchul Park <byungchul.p...@lge.com>
>
> __sched_period() returns a period which a rq can have. The period has to be
> stretched by the number of tasks *the rq has* when nr_running > nr_latency.
> Otherwise, a task's slice can be much smaller than sysctl_sched_min_granularity,
> depending on its position in the tg hierarchy.
On Tue, 2015-07-14 at 11:07 +0900, Byungchul Park wrote:
> but.. is there any reason meaningless code should be kept in the source? :(
> it also harms readability. of course, i need to modify my patch a little
> bit to prevent non-group sched entities from getting a large slice.
By all means proceed,
On Mon, 2015-07-13 at 20:07 +0900, Byungchul Park wrote:
> i still think stretching with local cfs's nr_running should be replaced with
> stretching with a top(=root) level one.
I think we just can't take 'slice' _too_ seriously. Not only is it
annoying with cgroups, the scheduler simply doesn't
On Mon, 2015-07-13 at 09:56 +0900, Byungchul Park wrote:
> and i agree that it makes latency increase for non-grouped tasks.
It's not only a latency hit for the root group, it's across the board.
I suspect an overloaded group foo/bar/baz would prefer small slices over
a large wait as well.