> Given your question, are you about to come back with a case where you
> are not seeing this?

Actually, the case where I saw the bad behavior was on Linux using the CFQ I/O 
scheduler. When reading the same file sequentially, adding processes 
drastically reduced total disk throughput (single-disk machine). The Linux 
anticipatory scheduler worked just fine: no additional I/O cost as processes 
were added.
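For what it's worth, here is a minimal sketch of the kind of test I ran, in Python. The file size, block size, and process counts are arbitrary choices for illustration; a real test would use a file much larger than RAM so the reads actually hit the disk rather than the page cache.

```python
# Sketch: N processes each read the same file sequentially; report
# aggregate throughput. Illustrative only -- names and parameters are
# my own choices, not from any particular benchmark tool.
import os
import tempfile
import time
from multiprocessing import Pool

BLOCK = 128 * 1024  # read in 128 KiB chunks


def read_all(path):
    """Sequentially read the whole file; return total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    return total


def throughput(path, nprocs):
    """Aggregate MB/s when nprocs processes read the file concurrently."""
    start = time.time()
    with Pool(nprocs) as pool:
        totals = pool.map(read_all, [path] * nprocs)
    elapsed = time.time() - start
    return sum(totals) / elapsed / 1e6


if __name__ == "__main__":
    # Small temp file for demonstration; use a file far larger than RAM
    # (and drop caches between runs) to measure real disk behavior.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4 * 1024 * 1024))
        path = f.name
    try:
        for n in (1, 2, 4):
            print(f"{n} readers: {throughput(path, n):.1f} MB/s")
    finally:
        os.unlink(path)
```

Under CFQ the per-reader and aggregate numbers dropped sharply as readers were added; under the anticipatory scheduler they stayed roughly flat.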

That got me worried about the project I'm working on, and I wanted to 
understand ZFS's caching behavior well enough to convince myself that the 
problem wouldn't happen under ZFS. Clearly the blocks will be in cache on the 
second read; what I'd like to know is whether ZFS will issue long, efficient 
sequential reads to the disk, or whether it will fail to recognize the access 
pattern as sequential because the requests come from different processes.
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
