On Oct 11, 2012, at 4:47 PM, andy thomas wrote:

> According to a Sun document called something like 'ZFS best practice' I read 
> some time ago, best practice was to use the entire disk for ZFS and not to 
> partition or slice it in any way. Does this advice hold good for FreeBSD as 
> well?

        My understanding of the best practice was that Solaris, prior to ZFS, 
disabled the volatile disk write cache.  When ZFS is given the whole disk, the 
cache is enabled, but after each transaction group commit a cache-flush command 
is issued to ensure the data made it to the platters.  If you slice the disk, 
enabling the disk cache for the whole disk is dangerous because other file 
systems (meaning UFS) won't issue the cache flush, and there is a risk of data 
loss should the cache be lost due to, say, a power outage.
        Can't speak to how BSD deals with the disk cache.
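
        For what it's worth, FreeBSD does let you inspect a drive's write 
cache setting.  A rough sketch with camcontrol(8); the device name da0 is just 
a placeholder of mine:

            # Show mode page 8 (caching) for a SAS/SCSI disk; WCE=1 means the
            # volatile write cache is enabled (da0 is a placeholder device).
            camcontrol modepage da0 -m 8

            # For SATA disks attached via ada(4), the write cache is governed
            # by a sysctl/loader tunable instead:
            sysctl kern.cam.ada.write_cache

        Whether a ZFS cache flush actually reaches stable storage still 
depends on the drive and any intervening RAID controller honouring the flush, 
so treat the above as a starting point, not a guarantee.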

> I looked at a server earlier this week that was running FreeBSD 8.0 and had 2 
> x 1 TB SAS disks in a ZFS v13 mirror with a third identical disk as a spare. 
> Large file I/O throughput was OK but the mail jail it hosted had periods when 
> it was very slow accessing lots of small files. All three disks (the two in 
> the ZFS mirror plus the spare) had been partitioned with gpart so that 
> partition 1 was a 6 GB swap partition and partition 2, of type 'freebsd-zfs', 
> filled the rest of the disk. It was these second partitions that were part of 
> the mirror.
> 
> This doesn't sound like a very good idea to me as surely disk seeks for swap 
> and for ZFS file I/O are bound to clash, aren't they?

        It surely would make a slow, memory-starved, swapping system even 
slower.  :)
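
        For reference, the layout described above would have been built with 
something roughly like the following; the device names and the pool name 
'tank' are placeholders of mine, not taken from the original post:

            # Partition each disk: 6 GB of swap first, freebsd-zfs in the rest.
            gpart create -s gpt da0
            gpart add -t freebsd-swap -s 6G da0
            gpart add -t freebsd-zfs da0
            # ...repeat for da1 and da2, then mirror the second partitions
            # and add the third disk's second partition as a spare:
            zpool create tank mirror da0p2 da1p2
            zpool add tank spare da2p2

        Keeping swap on a separate spindle (or giving the box enough RAM that 
it rarely swaps) avoids the seek contention described above.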

> Another point about the Sun ZFS paper - it mentioned that optimum performance 
> would be obtained with RAIDZ pools if the number of disks was between 3 and 
> 9. So I've always limited my pools to a maximum of 9 active disks plus spares, 
> but the other day someone here was talking about seeing hundreds of disks in 
> a single pool! So what is the current advice for ZFS on Solaris and FreeBSD?

        That number was drives per vdev, not per pool.
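
        A pool simply stripes across its vdevs, so a pool with hundreds of 
disks is just many 3-to-9-disk RAIDZ vdevs side by side.  A minimal sketch 
with placeholder device names:

            # 18 disks in one pool, arranged as three 6-disk raidz2 vdevs;
            # the 3-9 disk guideline applies to each vdev, not to the pool.
            zpool create tank \
                raidz2 da0  da1  da2  da3  da4  da5 \
                raidz2 da6  da7  da8  da9  da10 da11 \
                raidz2 da12 da13 da14 da15 da16 da17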

-Phil
