Mike Gerdts wrote:
> On Nov 29, 2007 11:41 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
>   
>> It depends on the read pattern.  If you will be reading these small
>> files randomly, then there may be a justification to tune recordsize.
>> In general, backup/restore workloads are not random reads, so you
>> may be ok with the defaults.  Try it and see if it meets your
>> performance requirements.
>>  -- richard
>>     
>
> It seems as though backup/restore of small files would be a random
> pattern, unless you are using zfs send/receive.  Since no enterprise
> backup solution that I am aware of uses zfs send/receive, most people
> doing backups of zfs are using something that does something along the
> lines of
>
> while readdir ; do
>     open file
>     read from file
>     write to backup stream
>     close file
> done
>
> Since files are unlikely to be on disk in a contiguous manner, this
> looks like a random read operation to me.
>
> Am I wrong?
>
>   
I don't think you are wrong.  I think it will depend on whether the
read order is the same as the write order.  We'd need to know more
about these details to comment further.
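
For example, a typical file-level backup boils down to something like the
following (a sketch only; the path and stream name are hypothetical):

    # visits files in readdir order, which rarely matches the order
    # in which their blocks were written to disk
    find /export/data -type f -print | cpio -o > /backup/stream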

The penalty here is that you might read more than 2kBytes to get
2kBytes of interesting data.  The extra data will be cached in several
places, so it is not necessarily wasted effort, but it can be
inefficient.
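
As a rough illustration: with the default 128kByte recordsize, satisfying a
2kByte read from within a large file can pull in 64 times more data than is
actually needed.  If the workload really is dominated by small, random reads,
the recordsize can be lowered before the data is written.  A sketch, with a
hypothetical dataset name (the setting only affects blocks written after the
change):

    # match recordsize to the typical I/O size for this dataset
    zfs set recordsize=8k tank/smallfiles
    zfs get recordsize tank/smallfiles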
 -- richard
