On Nov 29, 2007 11:41 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> It depends on the read pattern.  If you will be reading these small
> files randomly, then there may be a justification to tune recordsize.
> In general, backup/restore workloads are not random reads, so you
> may be ok with the defaults.  Try it and see if it meets your
> performance requirements.
>  -- richard
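Tuning it is at least cheap to try; recordsize is a per-dataset
property, so something like the following is all it takes (dataset
name is just a placeholder, and the change only affects files written
afterward):

    zfs set recordsize=8k tank/smallfiles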

It seems as though backup/restore of small files would be a random
read pattern, unless you are using zfs send/receive.  Since no
enterprise backup solution that I am aware of uses zfs send/receive,
most people backing up zfs are running something that does roughly
the following:

while readdir ; do
    open file
    read from file
    write to backup stream
    close file
done
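
In more concrete (runnable) form, the same loop in shell, where the
paths are placeholders, find supplies the readdir walk, and cat does
the open/read/close with the stream on stdout:

    find /export/data -type f | while read -r f ; do
        cat "$f"            # open, read, write to stream, close
    done > /backup/stream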

Since the files are unlikely to be laid out contiguously on disk, this
looks like a random read workload to me.
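
You can watch the per-file syscall pattern yourself with truss
(assuming tar as the archiver here; the exact syscall names vary,
e.g. open64 or openat on some builds):

    truss -t open,read,close -f tar cf /backup/stream . 2>&1 | head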

Am I wrong?

-- 
Mike Gerdts
http://mgerdts.blogspot.com/