2.6.36-stable review patch.  If anyone has any objections, please let us know.

------------------

From: Ken Chen <[email protected]>

commit 38715258aa2e8cd94bd4aafadc544e5104efd551 upstream.

The per-task latencytop accumulator prematurely terminates due to erroneous
placement of the latency_record_count increment.  It should be incremented
whenever a new record is allocated, not on every latencytop event.

Also fix the search iterator to search only known records instead of
blindly searching all pre-allocated space.
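
For reference, here is the corrected pattern reduced to a self-contained
sketch; the names (struct record_table, account_latency, the "key" field
standing in for the backtrace comparison) are illustrative, not the kernel
interfaces, and LT_SAVECOUNT is assumed to be the 32-slot cap the in-code
comment refers to:

#include <string.h>

#define LT_SAVECOUNT 32         /* illustrative cap, per the "> 32" comment */

struct latency_record {
        unsigned long key;      /* stands in for the backtrace comparison */
        unsigned long count;
        unsigned long time;
};

struct record_table {
        int count;              /* number of allocated records, not events */
        struct latency_record rec[LT_SAVECOUNT];
};

static void account_latency(struct record_table *t,
                             const struct latency_record *lat)
{
        int i;

        /* Search only the records that actually exist. */
        for (i = 0; i < t->count; i++) {
                if (t->rec[i].key == lat->key) {
                        t->rec[i].count++;
                        t->rec[i].time += lat->time;
                        return;
                }
        }

        /* Table full: drop the event (future: recycle). */
        if (t->count >= LT_SAVECOUNT)
                return;

        /* Allocate a new record; only here does the count advance. */
        i = t->count++;
        memcpy(&t->rec[i], lat, sizeof(*lat));
}

With the increment tied to allocation, a task keeps accumulating latency
hits past 32 events as long as fewer than 32 distinct records exist, and
the search loop never reads uninitialized slots.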

Signed-off-by: Ken Chen <[email protected]>
Reviewed-by: Arjan van de Ven <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 kernel/latencytop.c |   17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -194,14 +194,7 @@ __account_scheduler_latency(struct task_
 
        account_global_scheduler_latency(tsk, &lat);
 
-       /*
-        * short term hack; if we're > 32 we stop; future we recycle:
-        */
-       tsk->latency_record_count++;
-       if (tsk->latency_record_count >= LT_SAVECOUNT)
-               goto out_unlock;
-
-       for (i = 0; i < LT_SAVECOUNT; i++) {
+       for (i = 0; i < tsk->latency_record_count; i++) {
                struct latency_record *mylat;
                int same = 1;
 
@@ -227,8 +220,14 @@ __account_scheduler_latency(struct task_
                }
        }
 
+       /*
+        * short term hack; if we're > 32 we stop; future we recycle:
+        */
+       if (tsk->latency_record_count >= LT_SAVECOUNT)
+               goto out_unlock;
+
        /* Allocated a new one: */
-       i = tsk->latency_record_count;
+       i = tsk->latency_record_count++;
        memcpy(&tsk->latency_record[i], &lat, sizeof(struct latency_record));
 
 out_unlock:

