In order to be reasonably representative of a real-world situation, I'd suggest 
the following additions:

> 1) create a large file (bigger than main memory) on
> an empty ZFS pool.

1a.  The pool should include entire disks, not small partitions (else seeks 
will be artificially short).

1b.  The file needs to be a *lot* bigger than the cache available to it, else 
caching effects on the reads will be non-negligible.

1c.  Unless the file fills up a large percentage of the pool, the rest of the 
pool needs to be fairly full (else the seeks generated by updating the file 
will, again, be artificially short ones).

> 2) time a sequential scan of the file
> 3) random write i/o over say, 50% of the file (either
> with or without
> matching blocksize)

3a.  Unless the file itself fills up a large percentage of the pool, do this 
while other significant updating activity is also occurring in the pool, so 
that the local holes in the original file's layout created by some of its 
updates don't get favored for use by subsequent updates to the same file 
(again, artificially shortening seeks).
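And a similarly rough sketch of steps 2 and 3: time a sequential read of the 
whole file, then rewrite about 50% of its blocks in random order. The path 
and block size are assumptions on my part; use the dataset's recordsize for 
the "matching blocksize" case, or something smaller to see the 
read-modify-write penalty.

#!/usr/bin/env python3
# Rough sketch of steps 2 and 3: time a sequential scan of the file, then
# rewrite ~50% of its blocks at random offsets.  Path and block size are
# assumptions; match BLOCK to the dataset recordsize for the "matching
# blocksize" case.
import os
import random
import time

PATH = "/testpool/bench/bigfile"   # assumed path from step 1
BLOCK = 128 * 1024                 # assumed 128 KiB (default recordsize)

nblocks = os.path.getsize(PATH) // BLOCK

# Step 2: sequential scan of the whole file.
start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
print("sequential scan: %.1f s" % (time.time() - start))

# Step 3: random writes covering ~50% of the file's blocks.
targets = random.sample(range(nblocks), nblocks // 2)
start = time.time()
with open(PATH, "r+b") as f:
    for i in targets:
        f.seek(i * BLOCK)
        f.write(os.urandom(BLOCK))
    f.flush()
    os.fsync(f.fileno())
print("random rewrite of 50%% of blocks: %.1f s" % (time.time() - start))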

- bill
 
 