[zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-27 Thread Jeff Davis
 
> Given your question, are you about to come back with a case where you
> are not seeing this?
 

As a follow-up, I tested this on both UFS and ZFS. UFS does very poorly: the
I/O rate drops off quickly as you add processes reading the same blocks from
the same file at the same time. I don't know why that is, and I'd appreciate
it if someone could explain it.

ZFS did a lot better: there was no apparent drop-off after the first
process. The I/O rate did fall as I kept adding processes, but by that point
the CPU was at 100%, so the test looked CPU-bound rather than disk-bound. I
haven't had a chance to try this on a bigger box, but I suspect ZFS can keep
the sequential read going at full speed (at least if the blocks happen to
have been written sequentially).

I ran these tests with each process being a "dd if=bigfile of=/dev/null",
all started at the same time, and I measured the I/O rate with
"zpool iostat mypool 2" and "iostat -Md 2".
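
For anyone who wants to reproduce this, here is a minimal sketch of the
setup I described; the pool name, file path, and reader count are just
placeholders, so adjust them for your system:

  #!/bin/sh
  # Sketch of the test above (pool and file names are placeholders):
  # start N sequential readers of the same large file at the same time
  # and watch the pool's aggregate read bandwidth while they run.
  POOL=mypool
  FILE=/mypool/bigfile      # any file much larger than physical memory
  N=4

  zpool iostat $POOL 2 &    # report pool I/O statistics every 2 seconds
  MON=$!

  PIDS=""
  i=0
  while [ $i -lt $N ]; do
      dd if="$FILE" of=/dev/null bs=128k &
      PIDS="$PIDS $!"
      i=`expr $i + 1`
  done

  wait $PIDS                # wait only for the dd readers
  kill $MON                 # stop the monitor when they finish

Running "iostat -Md 2" in another terminal gives the per-device view of the
same thing.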
 
 


[zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-27 Thread Jeff Davis
> On February 26, 2007 9:05:21 AM -0800 Jeff Davis wrote:
> But you have to be aware that logically sequential reads do not
> necessarily translate into physically sequential reads with zfs.  zfs

I understand that the COW design can fragment files. I'm still trying to 
understand how that would affect a database. It seems like that may be bad for 
performance on single disks due to the seeking, but I would expect that to be 
less significant when you have many spindles. I've read the following blogs 
regarding the topic, but didn't find a lot of details:

http://blogs.sun.com/bonwick/entry/zfs_block_allocation
http://blogs.sun.com/realneel/entry/zfs_and_databases
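
If anyone wants to see the COW effect directly, here is a rough sketch (the
dataset, file, and block offsets are made up, an 8K recordsize is assumed,
and zdb output varies between releases): overwrite a few blocks of a file in
place, the way a database would, and compare the block-pointer dumps before
and after.

  #!/bin/sh
  # Illustration only: show how in-place overwrites relocate blocks under COW.
  DATASET=mypool/db                 # placeholder dataset
  FILE=/mypool/db/table.dat         # placeholder file

  # A file's ZFS object number is the same as its inode number.
  OBJ=`ls -i "$FILE" | awk '{print $1}'`

  zdb -ddddd $DATASET $OBJ > /tmp/layout.before

  # Rewrite two 8K blocks in place, as a database page write would.
  dd if=/dev/urandom of="$FILE" bs=8k count=1 seek=100 conv=notrunc
  dd if=/dev/urandom of="$FILE" bs=8k count=1 seek=500 conv=notrunc
  sync; sleep 10                    # let the txg reach disk; zdb reads disk state

  zdb -ddddd $DATASET $OBJ > /tmp/layout.after

  # The DVA offsets of the overwritten blocks move to freshly allocated
  # space, which is what scatters a heavily updated table over time.
  diff /tmp/layout.before /tmp/layout.after | head -20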
 
 


[zfs-discuss] Re: Efficiency when reading the same file blocks

2007-02-26 Thread Jeff Davis
> Given your question, are you about to come back with a case where you
> are not seeing this?

Actually, the case where I saw the bad behavior was on Linux with the CFQ
I/O scheduler. When reading the same file sequentially, adding processes
drastically reduced total disk throughput (this was a single-disk machine).
With the Linux anticipatory scheduler it worked just fine: there was no
additional I/O cost as processes were added.
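
(For reference, and not part of the original test: on Linux the scheduler
can be checked and switched per disk through sysfs. "sda" below is a
placeholder device name, and the change requires root.)

  # The active scheduler is shown in brackets (example output below).
  cat /sys/block/sda/queue/scheduler
  noop anticipatory deadline [cfq]

  # Switch that disk to the anticipatory scheduler, then re-run the test.
  echo anticipatory > /sys/block/sda/queue/scheduler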

That got me worried about the project I'm working on, and I wanted to
understand ZFS's caching behavior well enough to convince myself that the
problem wouldn't happen under ZFS. Clearly a block will already be in the
cache when the second process reads it; what I'd like to know is whether ZFS
will still issue long, efficient sequential reads to the disk, or whether it
will fail to recognize the access pattern as sequential because the requests
are coming from different processes.
 
 


[zfs-discuss] Efficiency when reading the same file blocks

2007-02-25 Thread Jeff Davis
If you have N processes reading the same file sequentially from the same
starting position (where the file size is much greater than physical
memory), should I expect all N processes to finish in about the same time as
a single process would?

In other words, if you have one process that reads blocks from a file, is it 
free (meaning no additional total I/O cost) to have another process read the 
same blocks from the same file at the same time?
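
A rough way to test this empirically (just a sketch: the path is a
placeholder, bash is assumed, and the file needs to be far larger than RAM
so caching doesn't hide the cost) is to time one reader, then time N readers
started together, and compare the elapsed times.

  #!/bin/bash
  # If the extra readers are essentially free, the elapsed time of the
  # concurrent run should be close to the single-reader run rather than
  # N times longer.
  FILE=/tank/bigfile        # placeholder; must be much larger than memory
  N=4

  echo "1 reader:"
  time dd if="$FILE" of=/dev/null bs=128k

  echo "$N concurrent readers:"
  time {
      for ((i = 0; i < N; i++)); do
          dd if="$FILE" of=/dev/null bs=128k &
      done
      wait
  }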
 
 