ChangeSet 1.2065.3.26, 2005/03/12 08:27:39-08:00, [EMAIL PROTECTED]
[PATCH] re-inline sched functions
This could account for part of the unexplained 2% performance regression
seen in the db transaction processing benchmark.
The four functions in the following patch used to be inlined; they have
been un-inlined since 2.6.7.
We measured that re-inlining them on 2.6.9 improves performance on the
db transaction processing benchmark by +0.2% (on real hardware :-).
The cost is certainly a larger kernel image: 928 bytes of text on x86 and
2728 bytes on ia64 (see the size output and the sketch below). But that is
well worth it for enterprise customers, since it improves performance on
an enterprise workload.
# size vmlinux.*
   text    data    bss     dec    hex filename
3261844  717184 262020 4241048 40b698 vmlinux.x86.orig
3262772  717488 262020 4242280 40bb68 vmlinux.x86.inline
   text    data    bss     dec    hex filename
5836933  903828 201940 6942701 69efed vmlinux.ia64.orig
5839661  903460 201940 6945061 69f925 vmlinux.ia64.inline
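As a minimal userspace sketch (not part of the patch) of the trade-off
involved: marking a small, hot function 'static inline' lets the compiler
expand it at every call site, removing the call/return overhead at the
price of duplicated text in the image. The scale_prio() helper below is a
hypothetical stand-in for task_timeslice(), and the MAX_PRIO /
MAX_USER_PRIO values are assumptions for illustration, not authoritative
kernel constants.

#include <stdio.h>

#define MAX_PRIO       140   /* assumed value, for illustration only */
#define MAX_USER_PRIO   40   /* assumed value, for illustration only */

/*
 * With 'inline', each call below compiles down to the arithmetic
 * itself; without it, each call pays call/return overhead, which is
 * what this patch is buying back on the scheduler fast path.
 */
static inline unsigned int scale_prio(unsigned int x, unsigned int prio)
{
	return x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2);
}

int main(void)
{
	printf("timeslice: %u\n", scale_prio(100, 120));
	return 0;
}

The flip side of that expansion is visible in the size(1) output above:
the text segment grows by 3262772 - 3261844 = 928 bytes on x86 and by
5839661 - 5836933 = 2728 bytes on ia64.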
Could we possibly reintroduce them?
Signed-off-by: Ken Chen <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
sched.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff -Nru a/kernel/sched.c b/kernel/sched.c
--- a/kernel/sched.c 2005-03-12 21:30:08 -08:00
+++ b/kernel/sched.c 2005-03-12 21:30:09 -08:00
@@ -166,7 +166,7 @@
#define SCALE_PRIO(x, prio) \
max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)
-static unsigned int task_timeslice(task_t *p)
+static inline unsigned int task_timeslice(task_t *p)
{
if (p->static_prio < NICE_TO_PRIO(0))
return SCALE_PRIO(DEF_TIMESLICE*4, p->static_prio);
@@ -282,7 +282,7 @@
* interrupts. Note the ordering: we can safely lookup the task_rq without
* explicitly disabling preemption.
*/
-static runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
+static inline runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
__acquires(rq->lock)
{
struct runqueue *rq;
@@ -402,7 +402,7 @@
/*
* rq_lock - lock a given runqueue and disable interrupts.
*/
-static runqueue_t *this_rq_lock(void)
+static inline runqueue_t *this_rq_lock(void)
__acquires(rq->lock)
{
runqueue_t *rq;
@@ -1308,7 +1308,7 @@
* with the lock held can cause deadlocks; see schedule() for
* details.)
*/
-static void finish_task_switch(task_t *prev)
+static inline void finish_task_switch(task_t *prev)
__releases(rq->lock)
{
runqueue_t *rq = this_rq();