Re: [PATCH] sched: change nr_uninterruptible to be signed

2015-11-30 Thread Peter Zijlstra
On Fri, Nov 27, 2015 at 12:09:44PM +0800, yalin wang wrote:
> nr_uninterruptible can go negative at runtime:
> this happens when a TASK_UNINTERRUPTIBLE task is dequeued
> from rq1 and is then woken up and queued on rq2;
> rq2->nr_uninterruptible-- will then sometimes result in a
> negative value.

Why!? Isn't signed stuff more likely affected by over/underflow
muck? I'd be more inclined to remove the signed muck altogether, like so:

That relies on C having defined overflow semantics for unsigned types
and the (hard) assumption that the hardware uses 2s-complement (which
the kernel assumes in many many places already).

diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index ef7159012cf3..62cff13ed120 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -80,13 +80,14 @@ void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
 
 long calc_load_fold_active(struct rq *this_rq)
 {
-   long nr_active, delta = 0;
+   unsigned long nr_active;
+   long delta = 0;
 
nr_active = this_rq->nr_running;
-   nr_active += (long)this_rq->nr_uninterruptible;
+   nr_active += this_rq->nr_uninterruptible;
 
if (nr_active != this_rq->calc_load_active) {
-   delta = nr_active - this_rq->calc_load_active;
+   delta = (long)(nr_active - this_rq->calc_load_active);
this_rq->calc_load_active = nr_active;
}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2eb2002aa336..2f9b4ce759dc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -652,7 +652,7 @@ struct rq {
 
/* calc_load related fields */
unsigned long calc_load_update;
-   long calc_load_active;
+   unsigned long calc_load_active;
 
 #ifdef CONFIG_SCHED_HRTICK
 #ifdef CONFIG_SMP
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH] sched: change nr_uninterruptible to be signed

2015-11-26 Thread yalin wang
nr_uninterruptible can go negative at runtime:
this happens when a TASK_UNINTERRUPTIBLE task is dequeued
from rq1 and is then woken up and queued on rq2;
rq2->nr_uninterruptible-- will then sometimes result in a
negative value.

Signed-off-by: yalin wang 
---
 kernel/sched/loadavg.c | 2 +-
 kernel/sched/sched.h   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index ef71590..39504c6 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -83,7 +83,7 @@ long calc_load_fold_active(struct rq *this_rq)
long nr_active, delta = 0;
 
nr_active = this_rq->nr_running;
-   nr_active += (long)this_rq->nr_uninterruptible;
+   nr_active += this_rq->nr_uninterruptible;
 
if (nr_active != this_rq->calc_load_active) {
delta = nr_active - this_rq->calc_load_active;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 84d4879..7b5f67b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -605,7 +605,7 @@ struct rq {
 * one CPU and if it got migrated afterwards it may decrease
 * it on another CPU. Always updated under the runqueue lock:
 */
-   unsigned long nr_uninterruptible;
+   long nr_uninterruptible;
 
struct task_struct *curr, *idle, *stop;
unsigned long next_balance;
-- 
1.9.1


