ZFS can generally detect device changes on Sun hardware, but for other
hardware, the behavior is unknown.

The most harmful pool problem I see, besides inadequate redundancy levels
or no backups, is device changes. Recovery can be difficult.

Follow recommended practices for replacing devices in a live pool.
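
For example, a minimal replacement sequence on a live pool might look
like the following, where the pool name (tank) and device names
(c1t2d0, c1t3d0) are only placeholders:

# zpool offline tank c1t2d0          (take the failing disk offline)
# zpool replace tank c1t2d0 c1t3d0   (start the resilver onto the new disk)
# zpool status tank                  (watch the resilver finish)

If the new disk goes into the same physical slot as the old one,
"zpool replace tank c1t2d0" by itself is usually enough.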

In general, ZFS can handle controller/device changes if the driver
generates or fabricates device IDs. You can view device IDs with this
command:

# zdb -l /dev/dsk/cvtxdysz
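
For example, with an assumed device name of c1t0d0s0, you can pull just
the device ID out of the label:

# zdb -l /dev/dsk/c1t0d0s0 | grep devid

If no devid entry shows up in the label, the driver isn't supplying
device IDs and ZFS has to fall back on device paths.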

If you are unsure what impact device changes will have on your pool,
export the pool first. If the device ID changes along with the hardware
change while the pool is exported (use prtconf -v to view device IDs
while the pool is exported), then the resulting pool behavior is
unknown.
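
A rough way to check, assuming a pool named tank:

# zpool export tank
# prtconf -v | grep -i devid     (record the device IDs before the change)
  ... make the hardware change ...
# prtconf -v | grep -i devid     (compare against what you recorded)
# zpool import tank

If the IDs are unchanged, the import should find the devices cleanly;
if they differ, you're in the unknown-behavior case described above.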

Thanks,

Cindy

On 02/01/10 11:28, Freddie Cash wrote:
10 disks connected in the following order:

0 1 2 3 4 5 6 7 8 9

Export pool.  Remove three drives from the system:

0 1   3 4   6 7 8

Plug them back in, but into different slots:

0 1 9 3 4 2 6 7 8 5

Import the pool.
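
In commands, with an assumed pool name of tank, that sequence is roughly:

# zpool export tank
  (pull drives 2, 5, and 9, then reinsert them into different slots)
# zpool import tank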

What's supposed to happen is that ZFS detects the drives, figures out where they 
"logically" belong, and continues on its merry way as if nothing happened.

In this case, the OP gets some weird output where a device is listed twice 
(once in the vdev, once as a spare) and one device is missing from the list.

Make sense now?