Hi again,

> Follow recommended practices for replacing devices in
> a live pool.

Fair enough. On the other hand, I guess it has become clear that the pool went 
offline as part of the procedure. That was partly because I am not sure about 
the hotplug capabilities of the controller, and partly because I wanted to 
simulate an incident that forces me to shut down the machine. I also assumed 
that a controlled procedure of atomic, legitimate steps (export, reboot, 
import) should avoid unexpected gotchas.
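
For reference, the sequence I used was roughly this ("tank" standing in for 
my actual pool name):

# zpool export tank
# init 6                (reboot; drives get moved/recabled while the box is down)
# zpool import tank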

> 
> In general, ZFS can handle controller/device changes if the driver
> generates or fabricates device IDs. You can view device IDs with this
> command:
> 
> # zdb -l /dev/dsk/cvtxdysz
> 
> If you are unsure what impact device changes will have on your pool,
> then export the pool first. If you see that the device ID has changed
> with the hardware change while the pool is exported (use prtconf -v to
> view device IDs while the pool is exported), then the resulting pool
> behavior is unknown.

That's interesting. I understand I should do this to get a better idea of what 
may happen before ripping the drives out of their respective slots. Now: in the 
case of an enclosure transfer or a controller change, how do I find out whether 
the receiving configuration will be able to handle it? The test will obviously 
only tell me about the IDs the sending configuration has produced. Which layer 
interprets the IDs, the driver or ZFS? Are the IDs written to disk? 
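
Concretely, I suppose the before/after check would look something like this 
(c1t0d0s0 and "tank" are just placeholders, not my real devices):

# zdb -l /dev/dsk/c1t0d0s0      (note the device ID reported for each disk)
# zpool export tank
# prtconf -v                    (check the device IDs the driver reports while
                                 the pool is exported)

and then repeat the prtconf -v step on the receiving configuration after the 
move and compare.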

The reason I am doing this is to find out what I need to watch out for with 
respect to failover strategies for controllers, mainboards, etc. on the 
hardware that I am using, which is naturally non-Sun.

Regards,

Sebastian