dd uses a default block size of 512 bytes. Does this map to your
expected usage? When I quickly tested the CPU cost of small
reads from cache, I did see that ZFS was more costly than UFS
up to a crossover between 8K and 16K. We might need a more
comprehensive study of that (data in/out of cache, different
recordsize and alignment constraints). But for small
syscalls, I think we might need some work in ZFS to make it
CPU efficient.
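For what it's worth, a quick way to reproduce that kind of check is to
re-read a cached file with dd at several block sizes; the path and file
size below are arbitrary placeholders, not the setup I actually used:

```shell
# Sketch only: /tmp/bs_test and the 8 MB size are assumptions.
# Create a test file, then read it back at varying block sizes;
# after the first pass each read should be served from cache
# (the ARC on ZFS, the page cache on UFS).
dd if=/dev/zero of=/tmp/bs_test bs=1048576 count=8 2>/dev/null

for bs in 512 4096 8192 16384; do
    # Warm the cache, then count bytes on a second read; timing
    # this second pass (e.g. with time(1)) exposes the per-syscall
    # CPU cost difference between small and large block sizes.
    dd if=/tmp/bs_test of=/dev/null bs=$bs 2>/dev/null
    bytes=$(dd if=/tmp/bs_test bs=$bs 2>/dev/null | wc -c)
    echo "bs=$bs read=$bytes"
done

rm -f /tmp/bs_test
```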

So first, does small sequential write to a large file
match an interesting use case?
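To make the pattern concrete, this is the kind of workload I mean: many
512-byte sequential writes into a single large file, as dd produces by
default. The path and count here are placeholders:

```shell
# Hypothetical instance of the workload in question: 16384 sequential
# 512-byte writes into one file. Path and count are assumptions.
dd if=/dev/zero of=/tmp/small_seq_write bs=512 count=16384 2>/dev/null

written=$(wc -c < /tmp/small_seq_write)   # 16384 * 512 = 8388608 bytes
rm -f /tmp/small_seq_write
```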


-r

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss