On Thu, Dec 15, 2016 at 12:30:43PM +0100, Tommaso Cucinotta wrote:
> Hi Peter,
>
> On 13/12/2016 11:21, Peter Zijlstra wrote:
> >On Thu, Nov 10, 2016 at 11:01:59AM +0100, Tommaso Cucinotta wrote:
> >>Just a note: if you want to recover arbitrary task affinities, you can
> >>re-cast your above
Hi Peter,
On 13/12/2016 11:21, Peter Zijlstra wrote:
> On Thu, Nov 10, 2016 at 11:01:59AM +0100, Tommaso Cucinotta wrote:
> > Just a note: if you want to recover arbitrary task affinities, you can
> > re-cast your above test like this:
> >
> > for_each_processor(cpu)
> >   \sum U[t]/A[t] \leq 1 (or U_max), for each task t on cpu, with
> >   utilization U[t]
On Thu, Nov 10, 2016 at 11:01:59AM +0100, Tommaso Cucinotta wrote:
>
> Just a note: if you want to recover arbitrary task affinities, you can
> re-cast your above test like this:
>
> for_each_processor(cpu)
> \sum U[t]/A[t] \leq 1 (or U_max), for each task t on cpu, with utilization
> U[t]
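Tommaso's re-cast admission test sums, per CPU, each task's utilization scaled down by A[t]. The snippet does not define A[t]; assuming it is the number of CPUs in t's affinity mask, the test can be sketched in plain Python (an illustration, not code from the thread):

```python
# Per-CPU admission test from the snippet above:
#   for each cpu:  sum over tasks t runnable on cpu of U[t]/A[t] <= U_max
# U[t] = task utilization, A[t] = |affinity mask| (assumed meaning of A[t]).

def admissible(tasks, nr_cpus, u_max=1.0):
    """tasks: list of (utilization, affinity) with affinity a set of CPU ids."""
    for cpu in range(nr_cpus):
        load = sum(u / len(aff) for (u, aff) in tasks if cpu in aff)
        if load > u_max:
            return False
    return True

# Two CPU-0-affine tasks plus one task free to run on either of 2 CPUs:
tasks = [(0.4, {0}), (0.4, {0}), (0.6, {0, 1})]
print(admissible(tasks, nr_cpus=2))  # 0.4 + 0.4 + 0.3 = 1.1 > 1 on CPU 0 -> False
```

Note how the free task only charges U/2 to each CPU, which is what lets the test accept arbitrary affinity mixes rather than only fully-partitioned or fully-global task sets.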
Hi Henrik,
On Thu, 10 Nov 2016 13:56:35 +0100
Henrik Austad wrote:
> On Thu, Nov 10, 2016 at 01:38:40PM +0100, luca abeni wrote:
> > Hi Henrik,
>
> Hi Luca,
>
> > On Thu, 10 Nov 2016 13:21:00 +0100
> > Henrik Austad wrote:
> > > On Thu, Nov 10, 2016 at 09:08:07AM +0100, Peter Zijlstra
On Thu, 10 Nov 2016 12:03:47 +0100
Tommaso Cucinotta wrote:
> On 10/11/2016 10:06, luca abeni wrote:
> > is equivalent to the "least laxity first" (LLF) algorithm.
> > Giving precedence to tasks with 0 laxity is a technique that is
> > often used to improve the schedulability on multi-processor
On Thu, Nov 10, 2016 at 01:38:40PM +0100, luca abeni wrote:
> Hi Henrik,
Hi Luca,
> On Thu, 10 Nov 2016 13:21:00 +0100
> Henrik Austad wrote:
> > On Thu, Nov 10, 2016 at 09:08:07AM +0100, Peter Zijlstra wrote:
> [...]
> > > We define the time to fail as:
> > >
> > > ttf(t) := t_d - t_b;
On Thu, Nov 10, 2016 at 01:21:00PM +0100, Henrik Austad wrote:
> On Thu, Nov 10, 2016 at 09:08:07AM +0100, Peter Zijlstra wrote:
> > We define the time to fail as:
> >
> > ttf(t) := t_d - t_b; where
> >
> > t_d is t's absolute deadline
> > t_b is t's remaining budget
> >
> > This is
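Peter's ttf(t) := t_d - t_b is the latest instant at which t can still begin running and consume its remaining budget t_b before its absolute deadline t_d. A toy illustration in Python (not the kernel implementation):

```python
def ttf(t_d, t_b):
    """Time to fail: the latest instant by which the task must be running
    (and stay on a CPU) to use its remaining budget t_b before its
    absolute deadline t_d."""
    return t_d - t_b

# Deadline at t=100 with 30 units of budget left: the task must be
# executing from t=70 at the latest, or the deadline will be missed.
print(ttf(100, 30))  # 70

# Ordering runnable tasks by smallest ttf favors the task closest to
# failure, even when it does not have the earliest deadline:
tasks = {"A": (100, 30), "B": (90, 5)}
print(min(tasks, key=lambda k: ttf(*tasks[k])))  # "A": ttf 70 < 85
```

In the example, plain EDF would pick B (deadline 90 < 100), while the ttf ordering picks A, whose larger remaining budget makes it the first to become unschedulable.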
Hi Henrik,
On Thu, 10 Nov 2016 13:21:00 +0100
Henrik Austad wrote:
> On Thu, Nov 10, 2016 at 09:08:07AM +0100, Peter Zijlstra wrote:
[...]
> > We define the time to fail as:
> >
> > ttf(t) := t_d - t_b; where
> >
> > t_d is t's absolute deadline
> > t_b is t's remaining budget
> >
>
Hi Peter,
On Thu, 10 Nov 2016 11:59:18 +0100
Peter Zijlstra wrote:
[...]
> > > MIXED CRITICALITY SCHEDULING
> > >
> > > Since we want to provide better guarantees for single CPU affine
> > > tasks than the G-EDF scheduler provides for the single CPU tasks,
> > > we need to somehow alter the
On Thu, Nov 10, 2016 at 09:08:07AM +0100, Peter Zijlstra wrote:
>
>
> Add support for single CPU affinity to SCHED_DEADLINE; the supposed reason for
> wanting single CPU affinity is better QoS than provided by G-EDF.
>
> Therefore the aim is to provide harder guarantees, similar to UP, for
On 10/11/2016 10:06, luca abeni wrote:
> is equivalent to the "least laxity first" (LLF) algorithm.
> Giving precedence to tasks with 0 laxity is a technique that is often
> used to improve the schedulability on multi-processor systems.
EDZL (EDF / Zero Laxity first), right? AFAICR, there's quite a
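The zero-laxity rule Luca and Tommaso discuss (EDZL) boosts a task the moment its laxity, deadline minus current time minus remaining budget, reaches zero; otherwise scheduling stays plain EDF. A minimal picker sketch (function and tuple layout are mine, not from the patch):

```python
def pick_next(tasks, now):
    """tasks: list of (absolute_deadline, remaining_budget) tuples.
    EDZL: a task whose laxity has hit zero preempts plain EDF order."""
    def laxity(t):
        deadline, budget = t
        return deadline - now - budget

    zero_lax = [t for t in tasks if laxity(t) <= 0]
    if zero_lax:
        # A task with no slack must run immediately or miss its deadline.
        return min(zero_lax, key=laxity)
    # Otherwise fall back to EDF: earliest absolute deadline first.
    return min(tasks, key=lambda t: t[0])

tasks = [(50, 10), (80, 45)]      # (deadline, remaining budget)
print(pick_next(tasks, now=0))    # laxities 40 and 35 -> EDF picks (50, 10)
print(pick_next(tasks, now=36))   # (80, 45) has laxity -1 -> boosted
```

This is the sense in which the zero-laxity tweak "improves schedulability": the second task would silently run out of slack under EDF, while EDZL forces it onto a CPU exactly when waiting any longer guarantees a miss.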
On Thu, Nov 10, 2016 at 10:06:02AM +0100, luca abeni wrote:
> Hi Peter,
>
> On Thu, 10 Nov 2016 09:08:07 +0100
> Peter Zijlstra wrote:
>
> > Add support for single CPU affinity to SCHED_DEADLINE; the supposed
> > reason for wanting single CPU affinity is better QoS than provided by
> > G-EDF.
>
Hi,
On 10/11/2016 09:08, Peter Zijlstra wrote:
> Add support for single CPU affinity to SCHED_DEADLINE; the supposed reason for
> wanting single CPU affinity is better QoS than provided by G-EDF.
>
> Therefore the aim is to provide harder guarantees, similar to UP, for single
> CPU affine tasks. This
Hi Peter,
On Thu, 10 Nov 2016 09:08:07 +0100
Peter Zijlstra wrote:
> Add support for single CPU affinity to SCHED_DEADLINE; the supposed
> reason for wanting single CPU affinity is better QoS than provided by
> G-EDF.
This looks very interesting, thanks for sharing!
I have some "theoretical"
Add support for single CPU affinity to SCHED_DEADLINE; the supposed reason for
wanting single CPU affinity is better QoS than provided by G-EDF.
Therefore the aim is to provide harder guarantees, similar to UP, for single
CPU affine tasks. This then leads to a mixed criticality scheduling