spdinis commented on issue #8632:
URL: https://github.com/apache/cloudstack/issues/8632#issuecomment-2097564272

   Hi, I have been doing a lot of testing around this, and it does indeed seem that if the target storage isn't cluster-wide it will give the mentioned error, something that needs to be addressed.
   
   There are other issues that I should probably address separately, but just so you know: if your source VMware infrastructure uses distributed switches, you won't be able to migrate unless you use a recent EL release, version 9.2 or above.
   
   This is because libvirt, when reading the .vmx file, always expects the network labels to be in the standard-switch format, so anything running libvirt 8.0.0, such as Ubuntu 22.04, EL8 and others, won't migrate it successfully.
   
   This is documented here: https://bugzilla.redhat.com/show_bug.cgi?id=1988211 and was fixed by the RHBA-2023:2171 errata, from 9.2 onwards.
   
   Ubuntu 24.04 is documented to ship libvirt 9, so I guess it will work, but 4.19 doesn't officially support it.
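   If you want to confirm what your converter host is running before attempting a conversion, a quick check like the one below should be enough (the package query is for an EL-family host; this is just an illustrative sketch, not part of the migration itself):
   ```
   # Report the libvirt version on the converter host; anything on 8.x
   # (e.g. EL8 or Ubuntu 22.04) will hit the distributed-switch vmx
   # parsing problem described above.
   virsh --version
   rpm -q libvirt virt-v2v   # EL-family; on Ubuntu use: dpkg -l | grep libvirt
   ```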
   
   The other issue I have found is that the migration, as documented and as a by-product of how virt-v2v works, relies on connectivity from the converter to vCenter and from vCenter to the management network of the ESXi host. There is never a direct connection from the converter to the ESXi host itself, which means that in most cases the migration is painfully slow.
   
   The other thing is that the migration claimed as online isn't really online, so don't even bother with it; you will end up with potential issues. Just shut the VM off.
   
   Finally, the way I managed to make it snappy, and which works very well for us, is to present secondary storage or any other NFS volume to both the converter and the ESXi host. Out of an excess of caution I clone the original VM onto the NFS mount after powering it off; that is typically quite fast.
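   A rough sketch of that staging step, with made-up host names and paths, looks like this:
   ```
   # Mount the same NFS export on the converter host that is also presented
   # to the ESXi host as a datastore (hostname and paths are hypothetical).
   mkdir -p /mnt/migration
   mount -t nfs nfs01.example.com:/export/migration /mnt/migration

   # After cloning the powered-off VM onto that datastore from vCenter,
   # the clone's files are visible directly from the converter:
   ls /mnt/migration/myvm-clone/
   ```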
   
   Then, on the converter node (Rocky 9.3 in my case), I simply run virt-v2v -i vmx [NFS mount path to the cloned VM's .vmx file] with -o libvirt -of qcow2 -os [target virsh storage pool]. This output mode is useful when the converter is part of an existing cluster: you can point it at the primary storage you want simply by replacing the pool name with the UUID you can see in CloudStack. Alternatively, you can use -o local -of qcow2 -os [server target mount path], which is useful if the converter isn't part of an existing cluster and isn't aware of the CloudStack pools.
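   To make the two variants concrete, the invocations look roughly like this (the .vmx path, pool UUID and target path below are placeholders, not values from my environment):
   ```
   # Variant 1: the converter is a KVM host already in a CloudStack cluster,
   # so the primary storage is visible as a libvirt storage pool; use the
   # pool UUID shown in CloudStack.
   virt-v2v -i vmx /mnt/migration/myvm-clone/myvm-clone.vmx \
            -o libvirt -of qcow2 -os <primary-storage-pool-uuid>

   # Variant 2: the converter is a standalone host that knows nothing about
   # the CloudStack pools; just write the converted qcow2 to a mounted path.
   virt-v2v -i vmx /mnt/migration/myvm-clone/myvm-clone.vmx \
            -o local -of qcow2 -os /mnt/target
   ```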
   
   The final step for us (it can also be the first) is to create a dummy VM with the exact specs you need in CloudStack, boot it up, shut it down, and note where its disks are. Then, on the primary storage, simply mv the converted disk over the disk created by the dummy instance. Power it back up and at most you may have to correct the network.
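   For NFS (or otherwise mounted) primary storage, that swap amounts to something like the sketch below; the volume file name is a placeholder for the dummy instance's ROOT volume UUID, which you can look up in CloudStack:
   ```
   # On a host that has the primary storage mounted, keep a copy of the dummy
   # instance's ROOT volume and drop the converted disk in its place.
   cd /mnt/primary-storage
   mv <dummy-root-volume-uuid> <dummy-root-volume-uuid>.bak   # keep a backup, just in case
   mv /mnt/target/myvm-clone-sda <dummy-root-volume-uuid>     # disk produced by virt-v2v -o local
   ```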
   
   After many, many attempts, I found this method takes about 15 minutes end to end on an average VM with a 30 to 40 GB disk, and it has worked 100% of the time so far with no hassle. The native CloudStack migration just doesn't work for us because of the data path it takes; it is especially penalizing in our environment. If you have firewalls between things, 1G management networks, or one vCenter serving different locations, avoid the native migration for now; you will be banging your head against it quite a bit, with mixed results.
   
   I will likely open separate issues for some of these findings, since the distributed-switch issue in particular will leave a lot of people stuck, especially if they are an Ubuntu shop or on EL8.

