GitHub user Jayd603 edited a comment on the discussion: Restoring volume 
snapshots from secondary storage in new CS cluster

This is hilarious. So I added my replicated (read-only ZFS) partition that 
contains all my backups, and ALL the servers in the cluster rebooted.

2026-01-07T20:51:25.512331+00:00 az-3 heartbeat: kvmheartbeat.sh will reboot 
system because it was unable to write the heartbeat to the storage.
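
For context, this is roughly what the fencing logic behind that log line does. A simplified sketch only: the script name and message are from the log above, but the function names, file path handling, and exact reboot mechanism here are illustrative assumptions, not the real kvmheartbeat.sh shipped with the CloudStack agent:

```shell
# Simplified sketch of KVM HA heartbeat fencing (illustrative only).
# The agent periodically writes a heartbeat to a file on each NFS/shared
# primary storage pool; if the write fails, the host fences itself by
# rebooting -- which is why every host mounting a bad pool reboots at once.

write_heartbeat() {
    # Try to write the current timestamp to the heartbeat file ($1)
    # on the shared primary storage mount.
    date +%s 2>/dev/null > "$1"
}

check_storage() {
    if write_heartbeat "$1"; then
        echo "heartbeat ok: $1"
    else
        # The real script reboots the host here to fence it off from
        # shared storage; this sketch just reports what would happen.
        echo "kvmheartbeat.sh will reboot system: unable to write the heartbeat to the storage"
        return 1
    fi
}
```

The design assumes every pool is critical to every host, which is exactly the complaint here: one unwritable NAS pool fences hosts whose VMs live entirely on local storage.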

Ok, so there is still a problem with how CloudStack works with primary 
network-attached storage. I already had lots of local host storage that was 
working great; I had no shared primary storage. I wanted to add this primary 
storage to test restoring backups. BOOM. Here is the thing: on another 
cluster I wanted to add some supplemental network-attached storage, and I did. 
Then there was an issue with that storage, which rendered the entire 
CloudStack cluster unusable. There HAS to be a better way. Creating a single 
point of failure for an entire KVM cluster just by adding some primary NAS 
storage is crazy. It should fail gracefully: let the management server and 
agents keep running, and let locally stored VMs start, if primary NAS storage 
becomes unavailable.





> @Jayd603 Since you’re restoring backups into a totally separate CloudStack 
> environment, it’s going to be tough to get them to show up without some 
> database changes. However, I was thinking you might be able to bypass the DB 
> issues entirely by using the KVM QCOW2 import feature.
> 
> The trick is to register your backup target (the NAS) as Primary Storage 
> instead of Secondary. If you do that, you can use the 'Import QCOW2 image 
> from Shared Storage' option under Tools > Import-Export Instances.
> 
> I am referring to this feature: 
> https://docs.cloudstack.apache.org/en/4.22.0.0/adminguide/virtual_machines/importing_unmanaging_vms.html#import-instances-from-shared-storage 
> You can do this from the GUI as well (Tools --> Import-Export Instances 
> --> Select KVM --> Select "Import QCOW2 image from Shared Storage" under 
> Action).
> 
> It’s a bit of a 'hack' because that tool is technically for migrating 
> external KVM VMs, but I think it can work for your use case. It lets you pick 
> the raw .qcow2 files directly from the storage and spin them up as managed 
> instances in your new setup without worrying about the old metadata. The only 
> catch is that you’ll need a way to map the files back to the right machines. 
> Since the files are named with UUIDs, you'll need to reference your old 
> database (or a file list) to figure out which .qcow2 belongs to which VM 
> before you start the import.
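
That UUID-to-VM mapping step can be scripted. A minimal sketch, assuming you have exported a `path,vm_name` CSV from the old database (for example via something like `SELECT v.path, vm.name FROM volumes v JOIN vm_instance vm ON v.instance_id = vm.id;` against the old `cloud` schema — verify those table and column names against your own DB before relying on them):

```shell
# map_qcow2: match UUID-named .qcow2 files on the storage against a
# "path,vm_name" CSV exported from the old CloudStack database.
#   $1: CSV file (path,vm_name per line)
#   $2: directory on the primary storage containing the .qcow2 files
map_qcow2() {
    csv="$1"; dir="$2"
    for f in "$dir"/*.qcow2; do
        [ -e "$f" ] || continue          # skip if the glob matched nothing
        base=$(basename "$f" .qcow2)     # the file's UUID is its basename
        # Look up the UUID in column 1 of the export; print the VM name.
        vm=$(awk -F, -v u="$base" '$1 == u { print $2 }' "$csv")
        if [ -n "$vm" ]; then
            echo "$f -> $vm"
        else
            echo "$f -> (no match in old DB export)"
        fi
    done
}
```

Run it once before importing so you know which "Import QCOW2 image from Shared Storage" selection corresponds to which original VM.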



GitHub link: 
https://github.com/apache/cloudstack/discussions/12254#discussioncomment-15438402
