Re: IO queueing and complete affinity w/ threads: Some results

2008-02-14 Thread Alan D. Brunelle
Taking a step back, I went to a very simple test environment:
o 4-way IA64
o 2 disks (on separate RAID controller, handled by separate ports on the same FC HBA - generates different IRQs).
o Using write-cached tests - keep all IOs inside of the RAID controller's cache, so no perturbations

Re: IO queueing and complete affinity w/ threads: Some results

2008-02-13 Thread Alan D. Brunelle
Comparative results between the original affinity patch and the kthreads-based patch on the 32-way running the kernel make sequence. It may be easier to compare/contrast with the graphs provided at http://free.linux.hp.com/~adb/jens/kernmk.png (kernmk.agr also provided, if you want to run

Re: IO queueing and complete affinity w/ threads: Some results

2008-02-12 Thread Alan D. Brunelle
Alan D. Brunelle wrote: Hopefully, the first column is self-explanatory - these are the settings applied to the queue_affinity, completion_affinity and rq_affinity tunables. Due to the fact that the standard deviations are so large coupled with the very close average results, I'm
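For readers wanting to reproduce the tunable sweep: the knobs are per-queue sysfs attributes. Below is a minimal sketch of setting them from user space. The queue_affinity and completion_affinity file names come from the patch series under discussion and are assumptions here (of the three, only rq_affinity later landed in mainline kernels):

```python
import os

def set_affinity_tunables(dev, sysfs_root="/sys/block", **tunables):
    """Write block-queue affinity tunables for a device, e.g.
    set_affinity_tunables("sdc", rq_affinity=1).

    The file names (queue_affinity, completion_affinity, rq_affinity)
    follow the patch series under test; they are assumptions here,
    not a stable kernel ABI.
    """
    written = {}
    for name, value in tunables.items():
        path = os.path.join(sysfs_root, dev, "queue", name)
        with open(path, "w") as f:
            f.write(str(value))
        written[name] = value
    return written
```

The `sysfs_root` parameter exists only so the helper can be exercised against a scratch directory instead of a live `/sys`.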

Re: IO queueing and complete affinity w/ threads: Some results

2008-02-12 Thread Alan D. Brunelle
Back on the 32-way, in this set of tests we're running 12 disks spread out through the 8 cells of the 32-way. Each disk will have an Ext2 FS placed on it, a clean Linux kernel source untar()ed onto it, then a full make (-j4) and then a make clean performed. The 12 series are done in parallel -
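The per-disk sequence described above (untar a clean kernel tree, make -j4, make clean, with all disks driven in parallel) can be sketched as a small driver. Mount points, tarball name, and tree path are illustrative assumptions, and the mkfs/mount step is omitted:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def disk_workload(mountpoint, tarball="linux.tar", jobs=4, run=subprocess.run):
    """One disk's sequence: untar a clean kernel source tree onto the
    disk, do a full make -jN, then a make clean.  The `run` hook is
    injectable so the command sequence can be tested without building."""
    tree = f"{mountpoint}/linux"
    for cmd in (["tar", "-xf", tarball, "-C", mountpoint],
                ["make", "-C", tree, f"-j{jobs}"],
                ["make", "-C", tree, "clean"]):
        run(cmd, check=True)

def run_all(mountpoints, **kw):
    """Drive every disk's workload in parallel, as in the posted test."""
    with ThreadPoolExecutor(max_workers=len(mountpoints)) as ex:
        for f in [ex.submit(disk_workload, m, **kw) for m in mountpoints]:
            f.result()  # re-raise any failure from a worker
```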

Re: IO queueing and complete affinity w/ threads: Some results

2008-02-12 Thread Alan D. Brunelle
Whilst running a series of file system related loads on our 32-way*, I dropped down to a 16-way w/ only 24 disks, and ran two kernels: the original set of Jens' patches and then his subsequent kthreads-based set. Here are the results: Original: A Q C | MBPS Avg Lat StdDev | Q-local Q-remote

IO queueing and complete affinity w/ threads: Some results

2008-02-11 Thread Alan D. Brunelle
The test case chosen may not be a very good start, but anyways, here are some initial test results with the "nasty arch bits". This was performed on a 32-way ia64 box with 1 terabyte of RAM, and 144 FC disks (contained in 24 HP MSA1000 RAID controllers attached to 12 dual-port adapters). Each

Re: IO queuing and complete affinity with threads (was Re: [PATCH 0/8] IO queuing and complete affinity)

2008-02-07 Thread Alan D. Brunelle
Jens Axboe wrote: > Hi, > > Here's a variant using kernel threads only, the nasty arch bits are then > not needed. Works for me, no performance testing (that's a hint for Alan > to try and queue up some testing for this variant as well :-) > > I'll get to that, working my way through the first

Re: [PATCH 0/8] IO queuing and complete affinity

2008-02-07 Thread Alan D. Brunelle
stable given this. The application used was doing 64KiB asynchronous direct reads, and had a minimum average per-IO latency of 42.426310 milliseconds, and average of 42.486557 milliseconds (std dev of 0.0041561), and a max of 42.561360 milliseconds I'm going to do some runs on a 16-way NUMA box
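Per-IO latency figures like the min/avg/max/stddev quoted above can be gathered with a small harness along these lines. Synchronous os.pread stands in for the asynchronous O_DIRECT engine the original application used, so this is a simplification for illustration, not the actual test tool:

```python
import os
import statistics
import time

IO_SIZE = 64 * 1024  # 64 KiB per IO, matching the posted test

def time_reads(fd, count, io_size=IO_SIZE):
    """Issue `count` sequential reads and return per-IO latencies in ms.
    Synchronous pread is a stand-in for the async direct-IO engine
    used in the original runs -- an assumption/simplification."""
    lats, off = [], 0
    for _ in range(count):
        t0 = time.perf_counter()
        data = os.pread(fd, io_size, off)
        lats.append((time.perf_counter() - t0) * 1000.0)
        off += len(data)
    return lats

def summarize(lats):
    """min / avg / max / stddev in ms -- the four figures quoted above."""
    return min(lats), statistics.fmean(lats), max(lats), statistics.pstdev(lats)
```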

Re: 2.6.24 regression w/ QLA2300

2008-02-05 Thread Alan D. Brunelle
Andrew Vasquez wrote: > On Tue, 05 Feb 2008, Andrew Vasquez wrote: > >> On Tue, 05 Feb 2008, Alan D. Brunelle wrote: >> >>> commit 9b73e76f3cf63379dcf45fcd4f112f5812418d0a >>> Merge: 50d9a12... 23c3e29... >>> Author: Linus Torvalds <[EMAIL PROTEC

Re: 2.6.24 regression w/ QLA2300

2008-02-05 Thread Alan D. Brunelle
Andrew Vasquez wrote: > On Tue, 05 Feb 2008, Alan D. Brunelle wrote: > >> commit 9b73e76f3cf63379dcf45fcd4f112f5812418d0a >> Merge: 50d9a12... 23c3e29... >> Author: Linus Torvalds <[EMAIL PROTECTED]> >> Date: Fri Jan 25 17:19:08 2008 -0800 >> >&

[PATCH] Moved UNPLUG traces to match 1-to-1 with PLUG traces

2008-02-01 Thread Alan D. Brunelle
The proposed patch was tested with a 2.6.22-based kernel, and compile tested with a 2.6.24-based tree from 31 January 2008 (85004cc367abc000aa36c0d0e270ab609a68b0cb). Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]> --- block/blk-core.c | 12 1 files changed, 4 insertions(+), 8 deletions

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-16 Thread Alan D. Brunelle
Ray Lee wrote: Out of curiosity, what are the mount options for the freshly created ext3 fs? In particular, are you using noatime, nodiratime? Ray Nope, just mount. However, the tool I'm using to read the large file & overwrite the large file does open with O_NOATIME for reads... The

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-16 Thread Alan D. Brunelle
Alan D. Brunelle wrote: Read large file:

Kernel    Min     Avg     Max     Std Dev   %user   %system   %iowait
---------------------------------------------------------------------
base :    201.6   215.1   275.5   22.8      0.26%    4.69%    33.54%
arjan:    198.0   210.3   261.5   18.5      0.33%   10.24%    54.00

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-16 Thread Alan D. Brunelle
Here are the results for the latest tests, some notes:
o The machine actually has 8GiB of RAM, so the tests still may end up using (some) page cache. (But at least it was the same for both kernels! :-) )
o Sorry the results took so long - the updated tree size caused the runs to take > 12

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-14 Thread Alan D. Brunelle
Oh, and the runs were done in single-user mode... Alan - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-14 Thread Alan D. Brunelle
Arjan van de Ven wrote: On Wed, 14 Nov 2007 18:18:05 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote: * Andrew Morton <[EMAIL PROTECTED]> wrote: ooh, more performance testing. Thanks * The overwriter task (on an 8GiB file), average over 10 runs: o 2.6.24 -

Re: [patch] Give kjournald a IOPRIO_CLASS_RT io priority

2007-11-14 Thread Alan D. Brunelle
Andrew Morton wrote: (cc lkml restored, with permission) On Wed, 14 Nov 2007 10:48:10 -0500 "Alan D. Brunelle" <[EMAIL PROTECTED]> wrote: Andrew Morton wrote: On Mon, 15 Oct 2007 16:13:15 -0400 Rik van Riel <[EMAIL PROTECTED]> wrote: Since you ha

[PATCH] Add UNPLUG traces to all appropriate places

2007-11-08 Thread Alan D. Brunelle
-by: Alan D. Brunelle <[EMAIL PROTECTED]> --- block/ll_rw_blk.c | 24 +++- drivers/md/bitmap.c|3 +-- drivers/md/dm-table.c |3 +-- drivers/md/linear.c|3 +-- drivers/md/md.c|4 ++-- drivers/md/multipath.c |3 +-- drivers/md/raid0.c

Re: Linux Kernel Markers - performance characterization with large IO load on large-ish system

2007-10-02 Thread Alan D. Brunelle
Mathieu Desnoyers wrote: >> remember that we have seen and discussed something like this before, >> it's still a puzzle to me... >> >> > I do wonder about that performance _increase_ with blktrace enabled. I > > Interesting question indeed. > > In those tests, when blktrace is running, are

[PATCH] Some IO scheduler cleanup in Documentation/block

2007-09-27 Thread Alan D. Brunelle
ult IO scheduler. (From as-iosched.txt) o Added in sysfs mount instructions. (From deadline-iosched.txt) Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]> --- Documentation/block/as-iosched.txt | 21 + Documentation/block/deadline-iosche

Re: Linux Kernel Markers - performance characterization with large IO load on large-ish system

2007-09-26 Thread Alan D. Brunelle
Mathieu Desnoyers wrote: * Alan D. Brunelle ([EMAIL PROTECTED]) wrote: Taking Linux 2.6.23-rc6 + 2.6.23-rc6-mm1 as a basis, I took some sample runs of the following on both it and after applying Mathieu Desnoyers 11-patch sequence (19 September 2007). * 32-way IA64 + 132GiB + 10 FC

Re: Linux Kernel Markers - performance characterization with large IO load on large-ish system

2007-09-25 Thread Alan D. Brunelle
ble, we'll try to get some "real" Oracle benchmark runs done to gauge the impact of the markers changes to performance... Alan D. Brunelle Hewlett-Packard / Open Source and Linux Organization / Scalability and Performance Group - To unsubscribe from this list: send the line

[PATCH] Fix remap handling by blktrace

2007-08-07 Thread Alan D. Brunelle
to be in the right order o Sent up mapped-from and mapped-to device information Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]> --- block/ll_rw_blk.c|4 drivers/md/dm.c |4 ++-- include/linux/blktrace_api.h |3 ++- 3 files changed, 8 insertions(+), 3 del

Re: CFQ IO scheduler patch series - AIM7 DBase results on a 16-way IA64

2007-05-21 Thread Alan D. Brunelle
Jens Axboe wrote: On Mon, May 21 2007, Alan D. Brunelle wrote: Jens Axboe wrote: On Tue, May 01 2007, Alan D. Brunelle wrote: Jens Axboe wrote: On Mon, Apr 30 2007, Alan D. Brunelle wrote: The results from a single run of an AIM7 DBase load

Re: CFQ IO scheduler patch series - AIM7 DBase results on a 16-way IA64

2007-05-21 Thread Alan D. Brunelle
Jens Axboe wrote: On Tue, May 01 2007, Alan D. Brunelle wrote: Jens Axboe wrote: On Mon, Apr 30 2007, Alan D. Brunelle wrote: The results from a single run of an AIM7 DBase load on a 16-way ia64 box (64GB RAM + 144 FC disks) showed a slight regression (~0.5%) by adding

Re: CFQ IO scheduler patch series - AIM7 DBase results on a 16-way IA64

2007-05-01 Thread Alan D. Brunelle
Jens Axboe wrote: On Mon, Apr 30 2007, Alan D. Brunelle wrote: The results from a single run of an AIM7 DBase load on a 16-way ia64 box (64GB RAM + 144 FC disks) showed a slight regression (~0.5%) by adding in this patch. (Graph can be found at http://free.linux.hp.com/~adb/cfq

Re: CFQ IO scheduler patch series - AIM7 DBase results on a 16-way IA64

2007-04-30 Thread Alan D. Brunelle
Jens Axboe wrote: On Mon, Apr 30 2007, Alan D. Brunelle wrote: The results from a single run of an AIM7 DBase load on a 16-way ia64 box (64GB RAM + 144 FC disks) showed a slight regression (~0.5%) by adding in this patch. (Graph can be found at http://free.linux.hp.com/~adb/cfq

CFQ IO scheduler patch series - AIM7 DBase results on a 16-way IA64

2007-04-30 Thread Alan D. Brunelle
, but it is something to keep an eye on as the regression showed itself across the complete run. Alan D. Brunelle - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo

[PATCH linux-2.6-block.git] Fix blktrace trace ordering for plug branch

2007-04-27 Thread Alan D. Brunelle
Thanks, Alan From: Alan D. Brunelle <[EMAIL PROTECTED]> Fix unplug/insert trace inversion problem. Signed-off-by: Alan D. Brunelle <[EMAIL PROTECTED]> --- block/ll_rw_blk.c |8 include/linux/blkdev.h |1 + 2 files changed, 5 insertions(+), 4 deletions(-)

Re: [PATCH 5/15] cfq-iosched: speed up rbtree handling

2007-04-26 Thread Alan D. Brunelle
Jens Axboe wrote: On Wed, Apr 25 2007, Jens Axboe wrote: On Wed, Apr 25 2007, Jens Axboe wrote: On Wed, Apr 25 2007, Alan D. Brunelle wrote: Hi Jens - The attached patch speeds it up even more - I'm finding a >9% reduction in %system with no loss in IO performance. This just sets the cac

Re: [PATCH 5/15] cfq-iosched: speed up rbtree handling

2007-04-25 Thread Alan D. Brunelle
Hi Jens - The attached patch speeds it up even more - I'm finding a >9% reduction in %system with no loss in IO performance. This just sets the cached element when the first is looked for. Alan From: Alan D. Brunelle <[EMAIL PROTECTED]> Update cached leftmost every time it is found
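The optimization being discussed — caching the leftmost (rbtree-first) element whenever it is looked up, so repeated first-element scans are O(1) — can be illustrated with a small stand-in structure. A Python sorted list replaces the kernel rbtree here, purely as an assumption for illustration:

```python
import bisect

class CachedFirst:
    """Stand-in for the cfq service tree: a sorted container that
    caches its leftmost element, mirroring the patch's idea of
    setting the cached element whenever the first is looked up.
    (A sorted Python list replaces the kernel rbtree -- an assumption.)"""
    def __init__(self):
        self._keys = []
        self._cached_first = None

    def insert(self, key):
        bisect.insort(self._keys, key)
        if self._cached_first is not None and key < self._cached_first:
            self._cached_first = key  # a new leftmost updates the cache

    def first(self):
        if self._cached_first is None and self._keys:
            self._cached_first = self._keys[0]  # cache on lookup, per the patch
        return self._cached_first

    def pop_first(self):
        if not self._keys:
            return None
        key = self._keys.pop(0)
        self._cached_first = self._keys[0] if self._keys else None
        return key
```

The point of the design is that `first()` only walks the structure once between modifications; subsequent calls return the cached element directly.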

Re: [PATCH 0/15] CFQ IO scheduler patch series

2007-04-25 Thread Alan D. Brunelle
gly enough this patch also seems to remove some noise during the run - see the chart at http://free.linux.hp.com/~adb/cfq/rkb_s.png Alan D. Brunelle HP / Open Source and Linux Organization / Scalability and Performance Group - To unsubscribe from this list: send the line "unsubscribe linu
