When I leave out the “details[0].UEFI=LEGACY” parameter, the job seems to 
complete. I haven’t tried to boot the instance yet because I inadvertently 
removed all the instances. Running the CloudMonkey command created a long list 
of instances despite the error. I cleaned up, and the cleanup removed the 
qcow2 file, which I guess is a good thing; it means the file was in the right 
place.
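
In case it’s useful, the stray instances can be listed and expunged from cmk 
with something like this (the instance UUID is a placeholder):

(la1) 🐱 > list virtualmachines listall=true filter=id,name,state
(la1) 🐱 > destroy virtualmachine id=<instance-uuid> expunge=true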

I’m recopying the qcow2 file and will try to start the instance again.
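
The recopy itself is just a plain file copy back onto the primary storage 
mount, along these lines (the source path is a placeholder):

cp /backups/ca-fs-corp.qcow2 /mnt/630a7d01-1f6d-36f1-b587-1f561ccc82dd/8629c278-8eae-4c92-a3b6-831ab20a9c2b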

Thanks for your help.

> On Thursday, Feb 05, 2026 at 8:59 PM, Jithin Raju <[email protected]> wrote:
> Hi Jeremy,
>
> The CMK command looks fine; you need to review the logs to find what caused 
> the import failure. [1]
> In the UI, you may want to change the selection to shared storage instead of 
> local storage.
>
> [1] 
> https://docs.cloudstack.apache.org/en/4.22.0.0/adminguide/troubleshooting.html
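>
> A quick way to pull the relevant lines is to grep the management server log 
> for the job id, e.g. (assuming the default log location):
>
> grep 3f4f6be4-f67a-4673-9678-e49bec95c701 /var/log/cloudstack/management/management-server.log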
>
> -Jithin
>
>
>
> From: Jeremy Hansen <[email protected]>
> Date: Friday, 6 February 2026 at 8:13 AM
> To: Jithin Raju <[email protected]>
> Subject: Re: Adding an existing cluster as a new region?
>
> Thank you so much for your help. I just want to clarify that I’m going 
> through the proper steps.
>
> I placed the qcow2 here:
>
> nas.sifu.intra:/volume1/cloud/primary 42T 18T 25T 42% 
> /mnt/630a7d01-1f6d-36f1-b587-1f561ccc82dd/8629c278-8eae-4c92-a3b6-831ab20a9c2b
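>
> To sanity-check the file in place, qemu-img can inspect it (assuming 
> qemu-img is installed on the host):
>
> qemu-img info /mnt/630a7d01-1f6d-36f1-b587-1f561ccc82dd/8629c278-8eae-4c92-a3b6-831ab20a9c2b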
>
> Then I ran this command:
>
> cmk -p la1
> Apache CloudStack 🐵 CloudMonkey 6.4.0-rc
> Report issues: https://github.com/apache/cloudstack-cloudmonkey/issues
>
> (la1) 🐱 > import vm name=ca-fs-corp displayname=ca-fs-corp 
> importsource=shared hypervisor=kvm 
> storageid=630a7d01-1f6d-36f1-b587-1f561ccc82dd 
> diskpath=8629c278-8eae-4c92-a3b6-831ab20a9c2b 
> networkid=d3619d80-11cd-483b-86f2-3245c688a52c migrateallowed=true 
> zoneid=758cd49e-a193-4445-a8a9-576d16f4c24b 
> clusterid=38e832ba-8837-4aa6-8b0b-e92a841c0ef2 details[0].UEFI=LEGACY 
> serviceofferingid=88de7d7a-02c1-4a51-831f-8b96ce2227ae
>
>
>
> Which results in this:
>
> import vm name=ca-fs-corp displayname=ca-fs-corp importsource=shared 
> hypervisor=kvm storageid=630a7d01-1f6d-36f1-b587-1f561ccc82dd 
> diskpath=8629c278-8eae-4c92-a3b6-831ab20a9c2b 
> networkid=d3619d80-11cd-483b-86f2-3245c688a52c migrateallowed=true 
> zoneid=758cd49e-a193-4445-a8a9-576d16f4c24b 
> clusterid=38e832ba-8837-4aa6-8b0b-e92a841c0ef2 details[0].UEFI=LEGACY 
> serviceofferingid=88de7d7a-02c1-4a51-831f-8b96ce2227ae
> {
>   "account": "admin",
>   "accountid": "1c4424d2-f4e8-11f0-888d-df0da2073b14",
>   "cmd": "org.apache.cloudstack.api.command.admin.vm.ImportVmCmd",
>   "completed": "2026-02-05T18:38:48-0800",
>   "created": "2026-02-05T18:38:48-0800",
>   "domainid": "deefed4b-f4e7-11f0-888d-df0da2073b14",
>   "domainpath": "ROOT",
>   "jobid": "3f4f6be4-f67a-4673-9678-e49bec95c701",
>   "jobprocstatus": 0,
>   "jobresult": {
>     "errorcode": 530,
>     "errortext": "Import failed for Vm: i-2-54-VM. Suitable deployment destination not found"
>   },
>   "jobresultcode": 530,
>   "jobresulttype": "object",
>   "jobstatus": 2,
>   "userid": "1c4465a9-f4e8-11f0-888d-df0da2073b14"
> }
> 🙈 Error: async API failed for job 3f4f6be4-f67a-4673-9678-e49bec95c701
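>
> In case it helps narrow down the “Suitable deployment destination not 
> found” part, is this a reasonable way to check what the allocator has to 
> work with (just a sketch)?
>
> (la1) 🐱 > list hosts type=Routing state=Up filter=name,clusterid,state
> (la1) 🐱 > list capacity zoneid=758cd49e-a193-4445-a8a9-576d16f4c24b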
>
>
>
> I also tried to do this through the web interface, but when I go to this 
> section (see attached screenshot), Storage Pool doesn’t give me any options.
>
>
>
>
> Thank you
> -jeremy
>
>
>
> > On Wednesday, Feb 04, 2026 at 8:37 PM, Jithin Raju <[email protected]> wrote:
> > You should be able to import the QCOW2 files from the backup in one region 
> > to another region using the KVM import methods: Import QCOW2 image from 
> > local/shared storage [1].
> >
> > You need to copy the QCOW2 files to the target Primary storage.
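> >
> > For an NFS-backed pool that is a plain file copy onto the pool mount on 
> > the KVM host, e.g. (paths are placeholders):
> >
> > cp /backup/<vm-root-disk>.qcow2 /mnt/<primary-storage-uuid>/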
> >
> > CloudMonkey example:
> > cmk import vm \
> > name=<vm-name> \
> > displayname=<vm-display-name> \
> > importsource=shared \
> > hypervisor=kvm \
> > storageid=<primary-storage-uuid> \
> > diskpath=<disk-file.qcow2> \
> > networkid=<guest-network-uuid> \
> > serviceofferingid=<service-offering-uuid> \
> > migrateallowed=true \
> > zoneid=<zone-uuid> \
> > clusterid=<cluster-uuid> \
> > details[0].UEFI=LEGACY
> >
> > Note: optionally pass templateid if required.
> >
> > cmk import volume \
> > name=<volume-name> \
> > zoneid=<zone-uuid> \
> > diskofferingid=<disk-offering-uuid> \
> > storageid=<primary-storage-uuid> \
> > diskpath=<data-disk.qcow2> \
> > format=QCOW2
> >
> > cmk attach volume \
> > id=<volume-uuid> \
> > virtualmachineid=<vm-uuid>
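> >
> > To confirm the attach, list the VM’s volumes afterwards:
> >
> > cmk list volumes virtualmachineid=<vm-uuid>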
> >
> > [1] https://www.shapeblue.com/kvm-import/
> >
> >
> > -Jithin
> >
> > From: [email protected] <[email protected]>
> > Date: Thursday, 5 February 2026 at 7:34 AM
> > To: [email protected] <[email protected]>
> > Subject: Re: Adding an existing cluster as a new region?
> >
> > Am I off base here? Just trying to get some clarification. Can a standalone 
> > cluster be converted to a region so I would be able to restore from a 
> > backup made in a different location?
> >
> > Thanks
> >
> >
> >
> > On Wednesday, Feb 04, 2026 at 12:18 PM, <[email protected]> wrote:
> > Is this just not possible with CloudStack?
> >
> > Is there an alternative to backing up a VM and being able to restore it in 
> > a totally different standalone cluster?
> >
> > My overall goal is to be able to move these VMs to a new cluster that has 
> > no relation to the other.
> >
> > Thanks.
> >
> >
> >
> > On Tuesday, Feb 03, 2026 at 9:57 PM, <[email protected]> wrote:
> > I’m trying to test the backup and restore process in 4.22. As of right now 
> > I have two completely separate 4.22 clusters in two separate locations. To 
> > use the cross-region backup and restore features in 4.22, it seems I need 
> > to link the clusters or add the second cluster as a region.
> >
> > The docs say to specify the region during the second cluster’s DB 
> > creation. What if the second cluster was already built and running?
> >
> > Any advice is greatly appreciated.
> >
> > -jeremy
> >
> >
