Good points, Sateesh! Thanks for chiming in. :)

On Sep 30, 2017, at 4:03 AM, Sateesh Chodapuneedi <sateesh.chodapune...@accelerite.com> wrote:
Hi Andrija,

I’ve converted cluster-wide NFS based storage pools to zone-wide in the past. For NFS and Ceph there are basically two steps:

1. Update the DB.
2. If there is more than one cluster in the zone, un-manage and then re-manage all clusters except the original one.

In addition to Mike’s suggestion, you need to do the following:

• Set ‘scope’ of the storage pool to ‘ZONE’ in the `cloud`.`storage_pool` table.

Example SQL looks like below, given that the hypervisor in my setup is VMware:

mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL, hypervisor='VMware' where id=<STORAGE_POOL_ID>;

With the DB update, the change is reflected in the UI as well. After the DB update, it is important to un-manage and then re-manage the clusters (except the original cluster the storage pool belongs to) so that the hosts in all other clusters also connect to this storage pool, making it a full-fledged zone-wide storage pool.

Hope this helps you!

Regards,
Sateesh Ch, CloudStack Development, Accelerite
www.accelerite.com
@accelerite

-----Original Message-----
From: "Tutkowski, Mike" <mike.tutkow...@netapp.com>
Reply-To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
Date: Friday, 29 September 2017 at 6:57 PM
To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>, "us...@cloudstack.apache.org" <us...@cloudstack.apache.org>
Subject: Re: Advice on converting zone-wide to cluster-wide storage

Hi Andrija,

I just took a look at the SolidFire logic around adding primary storage at the zone level versus the cluster scope.
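[Editor's note: before and after running Sateesh's UPDATE above, it may be worth inspecting the affected columns. A minimal read-only sketch, assuming the standard `cloud` database and that `storage_pool` carries a `removed` column for soft-deleted rows, as in stock CloudStack schemas:]

```sql
-- Read-only sanity check: list active pools with their current scope.
-- Safe to run in production; compare scope/pod_id/cluster_id before
-- and after the conversion.
SELECT id, name, scope, pod_id, cluster_id, hypervisor
FROM cloud.storage_pool
WHERE removed IS NULL;
```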
I recommend you try this in development prior to production, but it looks like you can make the following changes for SolidFire:

• In cloud.storage_pool, enter the applicable value for pod_id (this should be NULL when the pool is used as zone-wide storage and an integer when it is cluster-scoped).
• In cloud.storage_pool, enter the applicable value for cluster_id (likewise NULL for zone-wide storage and an integer for cluster-scoped storage).
• In cloud.storage_pool, change the hypervisor_type from Any to (in your case) KVM.

Talk to you later!
Mike

On 9/29/17, 5:18 AM, "Andrija Panic" <andrija.pa...@gmail.com> wrote:

Hi all,

I was wondering if anyone has experience hacking the DB and converting zone-wide primary storage to cluster-wide.

We have:
1 x NFS primary storage, zone-wide
1 x CEPH primary storage, zone-wide
1 x SolidFire primary storage, zone-wide
1 zone, 1 pod, 1 cluster, Advanced zone, and 1 regular NFS secondary storage (SS not relevant here).

I'm assuming a few DB changes would do it (storage_pool table: scope, cluster_id, pod_id fields), but I have not yet had time to really play with it.

Any advice on whether this is OK to do in a production environment would be very much appreciated. We plan to expand to many more racks, so we might move from single-everything (pod/cluster) to multiple pods/clusters etc., and thus design primary storage accordingly.

Thanks!

--
Andrija Panić
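[Editor's note: putting Mike's three bullet points together with Sateesh's example, the zone-wide-to-cluster-wide direction that Andrija asked about would look roughly like the statement below. This is a sketch only, not confirmed by the thread: <POD_ID>, <CLUSTER_ID>, and <STORAGE_POOL_ID> are placeholders you would look up first, 'CLUSTER' is assumed to be the mirror-image scope value of the 'ZONE' in Sateesh's example, and the column is `hypervisor` as in his statement (Mike's wording "hypervisor_type" appears to refer to the same column).]

```sql
-- Hedged sketch: convert a zone-wide pool to cluster scope.
-- Look up <POD_ID> and <CLUSTER_ID> in cloud.host_pod_ref and
-- cloud.cluster first; test in development before production.
UPDATE cloud.storage_pool
SET scope = 'CLUSTER',
    pod_id = <POD_ID>,
    cluster_id = <CLUSTER_ID>,
    hypervisor = 'KVM'
WHERE id = <STORAGE_POOL_ID>;
```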