These patches improve sequential write IO patterns and reduce ordered write log contention.
The first patch is simply for diagnostic purposes - it enabled me to see
where IO was being dispatched from, and led directly to the fix in the
second patch. The third patch removes the use of WRITE_SYNC_PLUG for
async writes (data, metadata and log), and the fourth moves the AIL
pushing out from under the log lock so that incoming writes can still
proceed while the log is being flushed.

The difference: on a local disk where XFS can do 85MB/s sequential
write, gfs2 can do:

		cfq	noop
	vanilla	38MB/s	48MB/s
	+2	48MB/s	65MB/s
	+3	48MB/s	65MB/s
	+4	51MB/s	75MB/s

The improvement is due to the IO patterns resulting in the disk becoming
IO bound, and the subsequent improvements in IO patterns translate
directly into more throughput.

On a faster 4-disk dm stripe array on the same machine, where XFS can do
265MB/s (@ 550iop/s) sequential write, gfs2 can do:

		cfq			noop
	vanilla	135MB/s @ 400iop/s	130MB/s @ 800iop/s
	+4	135MB/s @ 400iop/s	130MB/s @ 500iop/s

No improvement or degradation in throughput is seen here as the disks
never become IO bound - the write is CPU bound. However, there is an
improvement in iops (fewer, larger IOs for the same throughput) under
the noop scheduler as a result of the improved IO dispatch patterns.

The patches have not seen much testing, so this is really just a posting
for comments/feedback at this point.