[EMAIL PROTECTED] said:
> The situation: a three-disk raidz array of 500GB drives.  One disk breaks
> and you replace it with a new one.  But the new 500GB disk is slightly
> smaller than the smallest disk in the array.
> . . .
> So I figure the only way to build smaller-than-max-disk-size functionality
> into a raidz array is to make a slice on each disk that is slightly smaller
> than the max disk size, and then build the array out of those slices.  Am I
> correct here?

Actually, you can manually adjust the "whole disk" label so it takes up
less than the whole disk; ZFS doesn't seem to notice.  One way of doing
this is to create a temporary whole-disk pool on an unlabelled disk,
letting ZFS set up its standard EFI label.  Then destroy that temporary
pool and use "format" to adjust slice 0 down to whatever smaller block
count you want.  Later "zpool create", "add", or "attach" operations
seem to simply follow the existing label, rather than adjusting it upward
to the maximum block count that will fit on the disk.
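As a rough sketch of that sequence (the device names, pool names, and
block counts here are made up for illustration, not a tested recipe):

    zpool create tmppool c1t3d0     # temporary pool, so ZFS writes its standard EFI label
    zpool destroy tmppool
    format c1t3d0                   # partition menu: shrink slice 0 to the block count you want
    # ...repeat for each disk, then build the real pool with whole-disk names...
    zpool create tank raidz c1t1d0 c1t2d0 c1t3d0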

I'm just reporting what I've observed (Solaris 10 U3); naturally this
could change in future releases, although the current behavior seems
like a pretty safe one.


> If so, is there a downside to using slice(s) instead of whole disks?  The
> zpool manual says "ZFS can use individual slices or partitions, though the
> recommended mode of operation is to use whole disks." ["Virtual Devices
> (vdevs)", 1]   

The only downside I know of is a potential one: you can get competing
uses of the same spindle if more than one slice on the same physical
drive is in use at the same time.  That can definitely slow things down
a lot, depending on the workload, since ZFS seems to try to use all of
the available performance of the drives it has been configured to use.

Note that slicing up a boot drive, with the boot filesystems on part of
the disk and a ZFS data pool on the rest, works just fine, likely
because you don't typically see much I/O on the OS/boot filesystems
unless you're short on RAM (in which case things slow down for other
reasons).
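For instance, something like this, where s7 is just whichever slice is
left over after the OS slices (the names and slice number are illustrative):

    zpool create datapool c0t0d0s7  # ZFS data pool on the leftover slice of the boot disk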

Regards,

Marion

