Roman,
Let's consider the following scenario:
There is a ZFS pool being replicated with AVS. The volume sets are in
replication mode and everything is fine.
What happens if I decide to add a spare disk to the existing zpool?
You have the option of enabling this spare disk either before or after
the disk is added to the ZFS storage pool, but if it is done beforehand
you can save 100% of the initial synchronization costs.
Before adding to ZFS storage pool:
sndradm -nE <Primary-node> <Primary-node spare disk> <Primary-node bitmap> <Secondary-node> <Secondary-node disk> <Secondary-node bitmap> ip async
sndradm -g <Group-name> -q a <Secondary-node>:<Secondary-node disk>
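For example, with purely hypothetical hostnames, device paths and group
name substituted into the commands above (use your own values):

  sndradm -nE host-a /dev/rdsk/c2t3d0s0 /dev/rdsk/c2t4d0s0 host-b /dev/rdsk/c2t3d0s0 /dev/rdsk/c2t4d0s0 ip async
  sndradm -g zpool-group -q a host-b:/dev/rdsk/c2t3d0s0

The uppercase -E enables the set with its bitmap marked clean (both
sides assumed identical), so no initial synchronization traffic is
generated; that assumption only holds while ZFS has not yet written to
the spare.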
After adding to ZFS storage pool:
sndradm -ne <Primary-node> <Primary-node spare disk> <Primary-node bitmap> <Secondary-node> <Secondary-node disk> <Secondary-node bitmap> ip async
sndradm -g <Group-name> -q a <Secondary-node>:<Secondary-node disk>
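Again with the same hypothetical names, the only difference is the
lowercase -e:

  sndradm -ne host-a /dev/rdsk/c2t3d0s0 /dev/rdsk/c2t4d0s0 host-b /dev/rdsk/c2t3d0s0 /dev/rdsk/c2t4d0s0 ip async
  sndradm -g zpool-group -q a host-b:/dev/rdsk/c2t3d0s0

Here the set is enabled with its bitmap marked as needing a full copy,
so a full initial synchronization (typically started with sndradm -m)
still has to run; that is the 100% cost avoided by enabling the set
beforehand.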
There are no problems on the ZFS side; the disk can be added to the
existing pool.
Now the question: is it possible to add that new volume set to an I/O
group that is already synchronized and being replicated?
Yes.
It's quite simple if exporting the zpool on the primary node is
acceptable (just delete the existing configuration and recreate the
sets in sndradm).
But is this possible without exporting the zpool on the primary host
and recreating the volume sets?
Yes
Thanks,
Roman Naumenko
- Jim
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss