On 11/13/2013 03:31 AM, Steven Rostedt wrote:
> On Thu, 7 Nov 2013 14:43:37 +0100
> Juri Lelli wrote:
>
>> From: Dario Faggioli
>
>> --- /dev/null
>> +++ b/include/linux/sched/deadline.h
>> @@ -0,0 +1,24 @@
>> +#ifndef _SCHED_DEADLINE_H
>> +#define _SCHED_DEADLINE_H
>> +
>> +/*
>> + *
On Thu, 7 Nov 2013 14:43:37 +0100
Juri Lelli wrote:
> From: Dario Faggioli
> --- /dev/null
> +++ b/include/linux/sched/deadline.h
> @@ -0,0 +1,24 @@
> +#ifndef _SCHED_DEADLINE_H
> +#define _SCHED_DEADLINE_H
> +
> +/*
> + * SCHED_DEADLINE tasks has negative priorities, reflecting
> + * the
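The quoted comment is cut off above; its point is that -deadline tasks sit below priority 0, underneath the whole real-time range. A minimal sketch of the helpers such a header provides, modelled on the mainline scheduler headers (the exact comment text under review is elided in the archive):

	#define MAX_DL_PRIO	0

	static inline int dl_prio(int prio)
	{
		/* Anything below 0 is below every real-time priority. */
		if (unlikely(prio < MAX_DL_PRIO))
			return 1;
		return 0;
	}

	static inline int dl_task(struct task_struct *p)
	{
		return dl_prio(p->prio);
	}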
From: Dario Faggioli
Introduces the data structures, constants and symbols needed for the
SCHED_DEADLINE implementation.
The core data structures of SCHED_DEADLINE are defined, along with their
initializers. Hooks for checking whether a task belongs to the new policy
are also added where they are needed.
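As a reader's aid, a condensed sketch of the central per-task structure the changelog refers to; the field names follow the posted series, but the replenishment timer and the state flags are omitted here:

	struct sched_dl_entity {
		struct rb_node	rb_node;	/* node in the -deadline rb-tree */

		/*
		 * Static parameters, as passed in by the user: worst-case
		 * runtime per instance, relative deadline and period, all
		 * in nanoseconds.
		 */
		u64 dl_runtime;
		u64 dl_deadline;
		u64 dl_period;

		/*
		 * Dynamic state of the current instance: leftover runtime
		 * and current absolute deadline.
		 */
		s64 runtime;
		u64 deadline;
		unsigned int flags;
	};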
On 10/14/2013 01:51 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +static void set_cpus_allowed_dl(struct task_struct *p,
>> +				const struct cpumask *new_mask)
>> +{
>> +	int weight = cpumask_weight(new_mask);
>> +
>> +
On 10/14/2013 07:34 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 06:58:51PM +0200, Juri Lelli wrote:
>> On 10/14/2013 01:44 PM, Peter Zijlstra wrote:
>>> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> We discussed on this point in the past...
>
> Ah, completely forgot about that; please update the comment that we indeed use
On 10/14/2013 01:49 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +/*
>> + * Yield task semantic for -deadline tasks is:
>> + *
>> + * get off from the CPU until our next instance, with
>> + * a new runtime.
>> + */
>
> Could you amend that comment with a reason for why this is so? I have vague recollections of
On Mon, Oct 14, 2013 at 06:58:51PM +0200, Juri Lelli wrote:
> On 10/14/2013 01:44 PM, Peter Zijlstra wrote:
> > On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> We discussed on this point in the past...
Ah, completely forgot about that; please update the comment that we
indeed use
On 10/14/2013 01:44 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +static void update_curr_dl(struct rq *rq)
>> +{
>> +	struct task_struct *curr = rq->curr;
>> +	struct sched_dl_entity *dl_se = &curr->dl;
>> +	u64 delta_exec;
>> +
>> +	if (!dl_task(curr) || !on_dl_rq(dl_se))
On Mon, Oct 14, 2013 at 06:16:50PM +0200, Juri Lelli wrote:
>
> When disassembled everything seems fine, at least for x86 and ARM. Do I add
> the
> fake data hazard anyway?
nah, lets add it when we find it's needed.
On 10/14/2013 01:33 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +static void replenish_dl_entity(struct sched_dl_entity *dl_se)
>> +{
>> +	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>> +	struct rq *rq = rq_of_dl_rq(dl_rq);
>> +
>> +	/*
>> +	 * We keep moving the deadline away until we get some
On 10/14/2013 01:24 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +/*
>> + * We are being explicitly informed that a new instance is starting,
>> + * and this means that:
>> + * - the absolute deadline of the entity has to be placed at
>> + *   current time + relative deadline;
On Mon, Oct 14, 2013 at 03:05:56PM +0200, Juri Lelli wrote:
> Yes, I already considered and used that. But, it is slipped into next patch
> :\.
> I'll bring the change to this patch.
Ah yes, the wandering hunks problem. I'm only too familiar with it :-(
On 10/14/2013 01:10 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
>> +struct sched_dl_entity {
>> +	struct rb_node rb_node;
>> +	int nr_cpus_allowed;
>> +
>
> Please see:
>
> 29baa7478ba4 sched: Move nr_cpus_allowed out of 'struct sched_rt_entity'
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +static void set_cpus_allowed_dl(struct task_struct *p,
> + const struct cpumask *new_mask)
> +{
> + int weight = cpumask_weight(new_mask);
> +
> + BUG_ON(!dl_task(p));
> +
> +
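The hunk is cut short above; what follows in a function like this is affinity bookkeeping. A plausible sketch, assuming the deadline runqueue keeps a dl_nr_migratory count the way the RT class tracks rt_nr_migratory; the body below is reconstructed, not quoted from the patch, and keeps the per-entity nr_cpus_allowed field as posted, before the review comment further down asks for it to be moved:

	static void set_cpus_allowed_dl(struct task_struct *p,
					const struct cpumask *new_mask)
	{
		int weight = cpumask_weight(new_mask);
		struct rq *rq = task_rq(p);

		BUG_ON(!dl_task(p));

		/*
		 * Keep the count of queued -deadline tasks that are free
		 * to migrate in sync with the new affinity mask.
		 */
		if (on_dl_rq(&p->dl)) {
			if (p->dl.nr_cpus_allowed > 1 && weight <= 1)
				rq->dl.dl_nr_migratory--;
			else if (p->dl.nr_cpus_allowed <= 1 && weight > 1)
				rq->dl.dl_nr_migratory++;
		}

		cpumask_copy(&p->cpus_allowed, new_mask);
		p->dl.nr_cpus_allowed = weight;
	}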
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +/*
> + * Yield task semantic for -deadline tasks is:
> + *
> + * get off from the CPU until our next instance, with
> + * a new runtime.
> + */
Could you amend that comment with a reason for why this is so? I have
vague recollections of
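For reference, one way the quoted semantic can be implemented is by simply exhausting the current instance. A minimal sketch, assuming the accounting path throttles a task whose budget hits zero until its replenishment timer fires, and a dl_new flag marking that the next activation starts a fresh instance (both per the series' own comments; the posted code may differ in detail):

	static void yield_task_dl(struct rq *rq)
	{
		struct task_struct *p = rq->curr;

		/*
		 * Forcing the leftover runtime to zero makes update_curr_dl()
		 * stop the task; the bandwidth timer then wakes it at its
		 * next instance with fresh parameters.
		 */
		if (p->dl.runtime > 0) {
			p->dl.dl_new = 1;
			p->dl.runtime = 0;
		}
		update_curr_dl(rq);
	}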
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +static void update_curr_dl(struct rq *rq)
> +{
> + struct task_struct *curr = rq->curr;
> + struct sched_dl_entity *dl_se = &curr->dl;
> + u64 delta_exec;
> +
> + if (!dl_task(curr) || !on_dl_rq(dl_se))
> + return;
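The quoted hunk stops at the guard; the body that follows charges the elapsed CPU time against the instance's budget and throttles on depletion. A simplified sketch (statistics and corner cases omitted; helpers such as start_dl_timer() follow the posted series' naming, but their exact signatures are assumptions here):

	static void update_curr_dl(struct rq *rq)
	{
		struct task_struct *curr = rq->curr;
		struct sched_dl_entity *dl_se = &curr->dl;
		u64 delta_exec;

		if (!dl_task(curr) || !on_dl_rq(dl_se))
			return;

		/* CPU time consumed since we last accounted for it. */
		delta_exec = rq_clock_task(rq) - curr->se.exec_start;
		curr->se.exec_start = rq_clock_task(rq);

		/* Charge it against the budget of the current instance. */
		dl_se->runtime -= delta_exec;

		/*
		 * Budget exhausted: dequeue and throttle until the
		 * per-entity timer replenishes it at the next period.
		 */
		if (dl_se->runtime <= 0) {
			__dequeue_task_dl(rq, curr, 0);
			if (likely(start_dl_timer(dl_se)))
				dl_se->dl_throttled = 1;
			resched_task(curr);
		}
	}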
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +static void replenish_dl_entity(struct sched_dl_entity *dl_se)
> +{
> + struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
> + struct rq *rq = rq_of_dl_rq(dl_rq);
> +
> + /*
> + * We keep moving the deadline away until we get some
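The truncated comment continues into the loop that implements it: while the entity has overdrawn its budget, push the deadline one period further out and hand back one period's worth of runtime. A condensed sketch (the sanity checks of the posted version are left out):

	static void replenish_dl_entity(struct sched_dl_entity *dl_se)
	{
		struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
		struct rq *rq = rq_of_dl_rq(dl_rq);

		/*
		 * We keep moving the deadline away until we get some
		 * runtime available; this handles arbitrarily large
		 * overruns, one period at a time.
		 */
		while (dl_se->runtime <= 0) {
			dl_se->deadline += dl_se->dl_period;
			dl_se->runtime += dl_se->dl_runtime;
		}

		/*
		 * If the deadline still ended up in the past (e.g. after
		 * a very long throttle), start a fresh instance from now.
		 */
		if (dl_time_before(dl_se->deadline, rq_clock(rq))) {
			dl_se->deadline = rq_clock(rq) + dl_se->dl_deadline;
			dl_se->runtime = dl_se->dl_runtime;
		}
	}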
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +/*
> + * We are being explicitly informed that a new instance is starting,
> + * and this means that:
> + * - the absolute deadline of the entity has to be placed at
> + *   current time + relative deadline;
> + * - the runtime of
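Spelled out in code, the two rules of that comment translate directly into the new-instance setup; a minimal sketch, reusing the dl_rq_of_se()/rq_of_dl_rq() helpers visible in the other quoted hunks:

	static void setup_new_dl_entity(struct sched_dl_entity *dl_se)
	{
		struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
		struct rq *rq = rq_of_dl_rq(dl_rq);

		/*
		 * Absolute deadline = now + relative deadline, and a full
		 * budget of runtime for the opening instance.
		 */
		dl_se->deadline = rq_clock(rq) + dl_se->dl_deadline;
		dl_se->runtime = dl_se->dl_runtime;
	}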
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> @@ -1693,8 +1701,14 @@ void sched_fork(struct task_struct *p)
> p->sched_reset_on_fork = 0;
> }
>
> - if (!rt_prio(p->prio))
> + if (dl_prio(p->prio)) {
> + put_cpu();
> + return -EAGAIN;
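Completed, the hunk reads roughly as below: a child of a -deadline task must not silently inherit its parent's bandwidth, so the fork is refused (the fair/rt fallthrough mirrors mainline; treat the exact shape as a sketch):

	if (dl_prio(p->prio)) {
		/* -deadline bandwidth is not inheritable: refuse the fork. */
		put_cpu();
		return -EAGAIN;
	} else if (rt_prio(p->prio)) {
		p->sched_class = &rt_sched_class;
	} else {
		p->sched_class = &fair_sched_class;
	}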
On Mon, Oct 14, 2013 at 12:43:35PM +0200, Juri Lelli wrote:
> +struct sched_dl_entity {
> + struct rb_node rb_node;
> + int nr_cpus_allowed;
> +
Please see:
29baa7478ba4 sched: Move nr_cpus_allowed out of 'struct sched_rt_entity'
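For context, that commit hoisted the field into task_struct so that every scheduling class shares one cached copy of the affinity weight; the review point is that sched_dl_entity should read p->nr_cpus_allowed rather than carry its own. A toy layout sketch with stand-in types, only to make the difference concrete:

	/* Stand-in for the kernel's rb-tree node, illustration only. */
	struct rb_node_sketch { void *parent, *left, *right; };

	/* After 29baa7478ba4 the per-class entity carries no CPU count... */
	struct sched_dl_entity_sketch {
		struct rb_node_sketch rb_node;
	};

	/* ...because task_struct holds the single shared copy instead. */
	struct task_struct_sketch {
		int nr_cpus_allowed;	/* kept up to date on affinity changes */
		struct sched_dl_entity_sketch dl;
	};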