Cc'ing linux-aio for the AIO part of the discussion. You might be able
to find some of your answers in the archives.

On Fri, Mar 25, 2005 at 03:26:23PM -0600, Steve French wrote:
> Christoph,
> I had time to add the generic vectored i/o and async i/o calls to cifs 
> that you had suggested last month.  They are within the ifdef for the 
> CIFS_EXPERIMENTAL config option for the time being.   I would like to do 
> more testing of these though - are there any tests (even primitive ones) 
> for readv/writev and async i/o?
> 
> Is there an easy way of measuring the performance benefit of these (vs. 
> the fallback routines in fs/read_write.c)?  Presumably async and 
> vectored i/o never kick in for a standard copy command such as cp or 
> dd, and require a modified application that is vectored-i/o or 
> async-i/o aware.
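For the readv/writev side you don't need anything elaborate: any program
that calls readv(2)/writev(2) directly will exercise the vectored entry
points, since cp and dd only ever issue plain read/write. A minimal
sketch (the file argument and buffer sizes are arbitrary):

#include <stdio.h>
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char a[4096], b[4096];
        struct iovec iov[2] = {
                { .iov_base = a, .iov_len = sizeof(a) },
                { .iov_base = b, .iov_len = sizeof(b) },
        };
        ssize_t n;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* one syscall, two destination buffers */
        n = readv(fd, iov, 2);
        if (n < 0)
                perror("readv");
        else
                printf("readv returned %zd bytes\n", n);

        close(fd);
        return 0;
}

Timing a loop of that against an equivalent read() loop over the same
data should show whether your cifs readv path actually buys anything
over the fs/read_write.c fallback.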

There are several tests for AIO - I tend to use Chris Mason's aio-stress,
which can compare throughput for streaming reads/writes across different
combinations of options.

(the following page isn't exactly up-to-date, but should still give
you some pointers: lse.sf.net/io/aio.html)
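For a quick sanity check before reaching for aio-stress, a small libaio
program is enough to drive the in-kernel aio path; a sketch, assuming
libaio is installed (link with -laio):

#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        io_context_t ctx;
        struct iocb cb, *cbs[1];
        struct io_event ev;
        char *buf;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* aligned allocation matters only if you retry with O_DIRECT */
        buf = malloc(65536);
        if (!buf)
                return 1;

        memset(&ctx, 0, sizeof(ctx));
        if (io_setup(8, &ctx) < 0) {            /* room for 8 in-flight iocbs */
                perror("io_setup");
                return 1;
        }

        io_prep_pread(&cb, fd, buf, 65536, 0);  /* 64K read at offset 0 */
        cbs[0] = &cb;
        if (io_submit(ctx, 1, cbs) != 1) {
                perror("io_submit");
                return 1;
        }

        /* block until the one completion arrives */
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) {
                perror("io_getevents");
                return 1;
        }
        printf("aio read returned %ld\n", (long)ev.res);

        io_destroy(ctx);
        close(fd);
        return 0;
}

Note that on the buffered path 2.6 will often complete this more or less
synchronously, so for a real parallelism test you'd queue several iocbs
(or use O_DIRECT) and compare wall-clock time against the same reads
issued serially.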

> 
> You had mentioned do_sync_read - is there a reason to change the current 
> call to generic_file_read in the cifs read entry point to do_sync_read?  
> Some filesystems that export aio routines still call generic_file_read, 
> others call do_sync_read, and it was not obvious to me what that change 
> would affect.

I think you could keep it the way it is - generic_file_read will take care
of things. But perhaps I should comment only after seeing your patch. Are
you planning to post it sometime?
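For reference, the usual 2.6 arrangement (a sketch of the common pattern,
not CIFS's actual table) routes the synchronous entry points through the
aio ones, so there is only one real I/O path:

#include <linux/fs.h>

/* do_sync_read()/do_sync_write() in fs/read_write.c wrap a kiocb
 * around ->aio_read()/->aio_write() and wait for it, so a filesystem
 * that implements the aio methods gets the sync ones for free. */
static struct file_operations example_file_ops = {
        .read      = do_sync_read,
        .aio_read  = generic_file_aio_read,
        .write     = do_sync_write,
        .aio_write = generic_file_aio_write,
        /* ... */
};

generic_file_read() does essentially the same wrapping internally, which
is why either choice behaves the same for a plain synchronous read.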

Regards
Suparna

> 
> This is partly to better limit reading from the pagecache when the read 
> oplock is lost (i.e. when we do not have the network caching token 
> allowing readahead from the server), but primarily because I would like 
> to see if this could help get more parallelism in the single client to 
> single server large file sequential copy case.  Currently CIFS can do 
> large operations (as large as 128K for read or write in some cases), 
> which is much more efficient for network transfer, but without mounting 
> with forcedirectio I had limited my cifs_readpages to 16K (typically 4 
> pages) - and because I do the SMBread synchronously I am severely 
> limiting parallelism in the case of a single threaded app.  Where I 
> would like to get to is having, during readahead, multiple SMB reads 
> for the same inode on the wire at one time - each larger than a page 
> (between 4 and 30 pages) - and I was hoping that the aio and 
> readv/writev support would make that easier.
> 
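That should be doable. Just to make the shape of it concrete, something
along these lines - note that cifs_async_read()/cifs_wait_read() and the
request struct are hypothetical names, purely for illustration:

#include <linux/fs.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/pagemap.h>

/* Hypothetical sketch: keep several large SMB reads in flight during
 * readahead instead of issuing one synchronous SMBread at a time.
 * cifs_async_read() and cifs_wait_read() do not exist in the tree. */
struct cifs_read_req;
extern struct cifs_read_req *cifs_async_read(struct inode *, loff_t, size_t);
extern int cifs_wait_read(struct cifs_read_req *);

#define CIFS_READS_IN_FLIGHT 4

static int cifs_readpages_pipelined(struct inode *inode, loff_t offset,
                                    size_t len)
{
        struct cifs_read_req *reqs[CIFS_READS_IN_FLIGHT];
        size_t chunk = 16 * PAGE_CACHE_SIZE;    /* 4-30 pages per SMB read */
        int i, n = 0, rc = 0;

        /* fire off up to N reads before waiting on any of them */
        while (len && n < CIFS_READS_IN_FLIGHT) {
                size_t this = min(len, chunk);

                reqs[n] = cifs_async_read(inode, offset, this);
                if (IS_ERR(reqs[n]))
                        return PTR_ERR(reqs[n]);
                offset += this;
                len -= this;
                n++;
        }

        /* completions copy the replies into the page cache as they
         * arrive; only the waiting is serialized, not the wire time */
        for (i = 0; i < n; i++)
                rc = cifs_wait_read(reqs[i]);

        return rc;
}

The aio retry infrastructure could drive the waiting side of this, but
the fan-out of requests on the wire is really up to the filesystem.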
> I probably need to look more at the NFS direct i/o example to see if 
> there are easy changes I can make to enable it on a per-inode basis 
> (rather than only as a mount option), and to double check what other 
> filesystems do about returning errors on mmap and sendfile for inodes 
> that are marked direct i/o.
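On the per-inode question, the mechanics would presumably look something
like the check below in the mmap entry point - CIFS_INO_DIRECT is an
invented flag, and which errno to return is exactly the open question
you mention:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical: refuse mmap once an inode has been switched to direct
 * i/o. The flag and the errno choice are placeholders, not settled. */
#define CIFS_INO_DIRECT 0x1     /* invented per-inode flag */

static int cifs_file_mmap_direct(struct file *file,
                                 struct vm_area_struct *vma)
{
        struct inode *inode = file->f_dentry->d_inode;

        if (CIFS_I(inode)->flags & CIFS_INO_DIRECT)
                return -EINVAL;         /* or -ENODEV? worth checking NFS */

        return generic_file_mmap(file, vma);
}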

-- 
Suparna Bhattacharya ([EMAIL PROTECTED])
Linux Technology Center
IBM Software Lab, India

