Hi all,

We're attempting to make a production change to a CloudStack deployment
(currently at 4.4.1): we want to change the scope of the attached primary
storage from ZONE to CLUSTER (this deployment currently has exactly 1 zone,
1 pod and 1 cluster in use).

The purpose of the change is to allow us to deploy a second primary storage
device to a new cluster, so that there is no single point of failure in the
system.

(Note: adding 2 primary storage devices to the same zone has had the
opposite effect, effectively halving the reliability of the system, since
each node considers failure of _any_ NFS primary storage to be sufficient
grounds to initiate an HA reboot event.)

All VMs are KVM, and all storage in use is shared NFS.  There are dozens of
deployed VMs running and 20 hosts deployed.

The expectation is that this change would be made during a system
maintenance event (and shutdown), but we hope to effect it without
completely removing the storage and redeploying all the VMs.

I note that the design notes for 4.2 (where storage scope was introduced)
indicate that the major design change to be accounted for was that a new
column, 'scope', was added to the storage_pool table (see:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
).
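If that's accurate, the current state should be visible directly in the
database. As a read-only sanity check (a sketch only, assuming the standard
'cloud' schema), something like:

    SELECT id, name, scope, data_center_id, pod_id, cluster_id
    FROM cloud.storage_pool
    WHERE removed IS NULL;

should show the existing pool with scope = 'ZONE' and, presumably, NULL
pod_id/cluster_id.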

This suggests that the change could be as trivial as (a hedged SQL sketch
of steps 1-3 follows below):

0. Shut down the CloudStack management server.
1. Identify the primary storage device in cloud.storage_pool.
2. Change the 'scope' field from 'ZONE' to 'CLUSTER'.
3. Populate the 'pod_id' and 'cluster_id' fields to reflect the appropriate
pod & cluster.
4. Restart CloudStack.
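For concreteness, steps 1-3 might look like the following. This is only a
sketch: the pool id of 200 and the pod/cluster ids of 1 are placeholders
that would need to be looked up first, and we'd take a full database backup
before touching anything:

    -- Find the target pod and cluster ids (the ids below are placeholders).
    SELECT id, name FROM cloud.host_pod_ref WHERE removed IS NULL;
    SELECT id, name FROM cloud.cluster WHERE removed IS NULL;

    -- Re-scope the pool identified in step 1 (id = 200 is a placeholder).
    UPDATE cloud.storage_pool
    SET scope = 'CLUSTER', pod_id = 1, cluster_id = 1
    WHERE id = 200;

Whether this leaves the rest of the schema (e.g. storage_pool_host_ref or
capacity accounting) consistent is exactly what we're unsure about, hence
the questions below.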

Questions:

1. Has anyone attempted a similar migration?
2. Is this the way it's designed to work (in 4.4.1)?  Are there any other
values to be accounted for?
3. Is there another way to effect this through the API/UI? (We would
consider an upgrade if this is supported in a later release.)



Thanks in advance,

Rohan
