Grab NWFS at vger.timpanogas.org.  It has a really good async I/O
abstraction in the kernel that is pluggable and will allow very
aggressive testing of the elevator code in 2.4.0.  Check the file
BLOCK.C for the 2.4 support, and ASYNC.C.  There's a nice way to pump
tons of AIO requests into Linux with this interface, and it's fully
asynchronous.
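
I haven't pasted the NWFS interface here, so as a rough stand-in,
here's what keeping lots of requests in flight looks like from user
space with plain POSIX AIO (glibc services these from a small thread
pool, so several reads really are outstanding at once).  A sketch
only - device name, request count, and offsets are placeholders:

    /* Rough stand-in, not the NWFS ASYNC.C interface: keep a pile of
     * reads outstanding at once with POSIX AIO so the elevator has a
     * queue of requests to sort.  Build with -lrt. */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define NREQ  64            /* requests kept in flight (placeholder) */
    #define BLKSZ (64 * 1024)   /* size of each read */

    int main(void)
    {
        int fd = open("/dev/hdc1", O_RDONLY);  /* scratch device: placeholder */
        if (fd < 0) { perror("open"); return 1; }

        static struct aiocb cb[NREQ];
        static char buf[NREQ][BLKSZ];

        /* Queue NREQ reads at scattered offsets, then wait for all. */
        for (int i = 0; i < NREQ; i++) {
            memset(&cb[i], 0, sizeof cb[i]);
            cb[i].aio_fildes = fd;
            cb[i].aio_buf    = buf[i];
            cb[i].aio_nbytes = BLKSZ;
            cb[i].aio_offset = (off_t)(rand() % 4096) * BLKSZ;
            if (aio_read(&cb[i]) < 0) { perror("aio_read"); return 1; }
        }
        for (int i = 0; i < NREQ; i++) {
            const struct aiocb *list[1] = { &cb[i] };
            while (aio_error(&cb[i]) == EINPROGRESS)
                aio_suspend(list, 1, NULL);
            if (aio_return(&cb[i]) < 0)
                fprintf(stderr, "request %d failed\n", i);
        }
        close(fd);
        return 0;
    }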

Jeff

"J. Robert von Behren" wrote:
> 
> Greetings, Robert.
> 
> Looking over your test program, I don't think you are actually testing
> the elevator algorithm at all.  There are a few key flaws:
> 
>   * The reads and writes are synchronous, so the elevator algorithm
>     at _most_ gets to affect the blocks within a single read or
>     write (i.e., inside the same file).
> 
>   * The fact that you have written the files out all at once will not
>     place all 240 megs of data consecutively on the disk.  The file
>     system spreads data out on disk to leave breathing room for
>     adding new files, or appending to existing files.  In particular,
>     although runs of blocks that are adjacent in the logical file
>     _will_ be close by on disk, you cannot conclude that those runs
>     will be close to each other - either within a single file, or
>     between the files created by the test program.  (The FIBMAP
>     sketch after this list is one way to check the actual placement.)
> 
>   * Unless the partition is completely empty, existing file data will
>     affect where new data is placed (and in particular how well
>     co-located it can be).
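> 
> One way to see this for yourself is to dump the logical-to-physical
> block mapping of the test files with the FIBMAP ioctl (it needs
> root).  A minimal sketch, with error handling kept short:
> 
>     /* Print the logical -> physical block mapping of a file with the
>      * FIBMAP ioctl, to see how far apart "adjacent" file blocks
>      * really land on disk.  Needs root. */
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <sys/ioctl.h>
>     #include <unistd.h>
>     #include <linux/fs.h>           /* FIBMAP, FIGETBSZ */
> 
>     int main(int argc, char **argv)
>     {
>         if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
> 
>         int fd = open(argv[1], O_RDONLY);
>         if (fd < 0) { perror("open"); return 1; }
> 
>         int bsz;
>         if (ioctl(fd, FIGETBSZ, &bsz) < 0) { perror("FIGETBSZ"); return 1; }
> 
>         long nblocks = lseek(fd, 0, SEEK_END) / bsz;
>         for (long i = 0; i < nblocks; i++) {
>             int blk = (int)i;       /* logical block in, physical out */
>             if (ioctl(fd, FIBMAP, &blk) < 0) { perror("FIBMAP"); return 1; }
>             printf("logical %ld -> physical %d\n", i, blk);
>         }
>         close(fd);
>         return 0;
>     }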
> 
> A better approach to testing the elevator algorithm would be to write
> directly to a raw device, instead of going through the file system.
> This would give you better control over the actual disk placement of
> blocks, and let you know what you were testing.  It would also make
> subsequent re-tests repeatable, since they shouldn't be affected by
> existing data.  Finally, to do this test right, you need to be able
> to issue multiple IO requests to the disk at essentially the same
> time.  To do this, you'll probably need to go multithreaded, and use
> a barrier of some sort to make sure the threads stay synchronized.
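> 
> A minimal sketch of that approach, assuming a scratch disk with no
> file system on it (device name, thread count, and offsets are just
> placeholders):
> 
>     /* N threads line up on a barrier and then issue their writes to
>      * an unused device at (nearly) the same moment, so several
>      * requests sit in the queue together for the elevator to
>      * reorder.  Build with -lpthread. */
>     #define _GNU_SOURCE
>     #include <fcntl.h>
>     #include <pthread.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>     #include <unistd.h>
> 
>     #define NTHREADS 8
>     #define CHUNK    (256 * 1024)
> 
>     static pthread_barrier_t barrier;
>     static const char *dev = "/dev/hdc";    /* scratch disk: placeholder */
> 
>     static void *writer(void *arg)
>     {
>         long id = (long)arg;
>         int fd = open(dev, O_WRONLY);
>         if (fd < 0) { perror("open"); return NULL; }
> 
>         char *buf = malloc(CHUNK);
>         memset(buf, (int)id, CHUNK);
> 
>         /* Start each thread far from the others so the request stream
>          * is badly interleaved unless the elevator sorts it. */
>         off_t offset = (off_t)id * 1024 * CHUNK;
> 
>         pthread_barrier_wait(&barrier);     /* all threads fire together */
>         for (int i = 0; i < 64; i++) {
>             if (pwrite(fd, buf, CHUNK, offset) != CHUNK)
>                 perror("pwrite");
>             offset += CHUNK;
>         }
>         free(buf);
>         close(fd);
>         return NULL;
>     }
> 
>     int main(void)
>     {
>         pthread_t tid[NTHREADS];
>         pthread_barrier_init(&barrier, NULL, NTHREADS);
>         for (long i = 0; i < NTHREADS; i++)
>             pthread_create(&tid[i], NULL, writer, (void *)i);
>         for (int i = 0; i < NTHREADS; i++)
>             pthread_join(tid[i], NULL);
>         pthread_barrier_destroy(&barrier);
>         return 0;
>     }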
> 
> Best regards,
> 
> -Rob von Behren
> 
> Robert Cohen wrote:
> >
> > Here are the latest results for my elv_test elevator benchmark.
> > This benchmark reports three numbers: a baseline figure for writing
> > sequentially, plus write and read results for writing and reading
> > in a pattern designed to give the elevator a hard time.
> > The source code is available at
> > http://tltsu.anu.edu.au/~robert/elv_test.c.