tsinik-dw edited a comment on issue #5924:
URL: https://github.com/apache/cloudstack/issues/5924#issuecomment-1042690245


   Hi @nvazquez, 
   
   I've got good news! I tried the command you proposed, and the VM named MIGR-VM-SF-2 was successfully migrated from the XEN cluster to the XCP cluster, with the following cmk output:
   
   ~~~
   (localcloud) 🐱 > migrate virtualmachinewithvolume virtualmachineid=28fb0eed-4277-4f2e-a179-38ed3051c723 hostid=73cd3fa4-01ce-4b55-970f-14efd0143aa1 migrateto[0].volume=291563e5-2ea0-4aaa-b14e-b747d48231cd migrateto[0].pool=2a30020a-76f9-4f47-89f1-a447739b934d
   virtualmachine = map[account:admin affinitygroup:[] cpunumber:2 
cpuspeed:1200 cpuused:3.68% created:2022-02-16T20:53:28+0200 
details:map[hypervisortoolsversion:xenserver61 
platform:vga:std;videoram:8;apic:true;viridian:false;device_id:0001;timeoffset:-1;pae:true;acpi:1;hpet:true;secureboot:false;nx:true
 rootdisksize:6] diskioread:0 diskiowrite:0 diskkbsread:2.2594307e+07 
diskkbswrite:1.1737023e+07 diskofferingid:ed9e9144-c46a-4b33-a16d-6a16a32cbafa 
diskofferingname:VOL-on-SF-200 displayname:MIGR-VM-SF-2 displayvm:true 
domain:ROOT domainid:ce4fbcaf-8031-11ec-8660-4626a7ce6ce6 
guestosid:1dcfb598-8032-11ec-8660-4626a7ce6ce6 haenable:false 
hasannotations:false hostid:73cd3fa4-01ce-4b55-970f-14efd0143aa1 
hostname:xcp1.noc-dev.localdomain hypervisor:XenServer 
id:28fb0eed-4277-4f2e-a179-38ed3051c723 instancename:i-2-37-VM 
isdynamicallyscalable:false isodisplaytext:CentOS-7-x86_64-Minimal-2009 
isoid:129a82d9-86cd-4323-a5af-f2f0767bb70e isoname:CentOS-7-x86_64-Minimal-2009 
lastupdated:2022-02-16T22:20:34+0200 memory:2048 memoryintfreekbs:0 memorykbs:2.09714e+06 
memorytargetkbs:2.097152e+06 name:MIGR-VM-SF-2 networkkbsread:0 
networkkbswrite:0 nic:[map[broadcasturi:vlan://138 deviceid:0 
extradhcpoption:[] gateway:10.1.1.1 id:08b1c5b0-3846-4aa8-a25f-2841254d7aed 
ipaddress:10.1.1.53 isdefault:true isolationuri:vlan://138 
macaddress:02:00:1e:11:00:0d netmask:255.255.255.0 
networkid:78a105a3-0e08-4077-89f5-fa929be5d833 networkname:xenserver-net-1 
secondaryip:[] traffictype:Guest type:Isolated]] osdisplayname:CentOS 7 
ostypeid:1dcfb598-8032-11ec-8660-4626a7ce6ce6 passwordenabled:false 
pooltype:Iscsi receivedbytes:0 rootdeviceid:0 rootdevicetype:ROOT 
securitygroup:[] sentbytes:0 
serviceofferingid:e3a933aa-269e-4de7-9876-4b79dab6b885 
serviceofferingname:SF-200 state:Running tags:[] 
templatedisplaytext:CentOS-7-x86_64-Minimal-2009 
templateid:129a82d9-86cd-4323-a5af-f2f0767bb70e 
templatename:CentOS-7-x86_64-Minimal-2009 
userid:48bd974a-8032-11ec-8660-4626a7ce6ce6 username:admin
  zoneid:7980e0b4-ebbd-425a-97d3-c09af5890cee zonename:ZONE1]
   +
   +
   (localcloud) 🐱 > 
   ~~~
   
   I've attached the related management server log: [nv_interclust_vm_migr_sf_success.txt](https://github.com/apache/cloudstack/files/8086515/nv_interclust_vm_migr_sf_success.txt)
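   
   For anyone reproducing this: one way to confirm where the VM and its disks ended up after the migration is to query them back through cmk. This is just a sketch of the checks I'd run (the `filter=` field lists are illustrative, not required):
   ~~~
   # Confirm the VM is running on the destination XCP host
   (localcloud) 🐱 > list virtualmachines id=28fb0eed-4277-4f2e-a179-38ed3051c723 filter=name,state,hostname
   
   # Confirm each volume now resides on the expected destination pool
   (localcloud) 🐱 > list volumes virtualmachineid=28fb0eed-4277-4f2e-a179-38ed3051c723 filter=name,type,state,storage
   ~~~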
   
   It is also **important** to note that I was able to migrate a VM with its `ROOT` disk on managed storage (SolidFire) **and** a `DATA` disk on non-managed (NFS) storage, by adding the corresponding `migrateto` entries for the extra disk to the command. The `DATA` disk was migrated to the XCP cluster-wide NFS storage pool.
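   
   For reference, the pattern is one `migrateto[i].volume`/`migrateto[i].pool` pair per disk. The IDs can be gathered up front with something like the following (a sketch; `<destination-cluster-uuid>` is a placeholder for the target XCP cluster's ID):
   ~~~
   # List the VM's disks to get one volume ID per migrateto[] entry
   (localcloud) 🐱 > list volumes virtualmachineid=28fb0eed-4277-4f2e-a179-38ed3051c723 filter=id,name,type,storage
   
   # List the storage pools on the destination cluster to pick the pool IDs
   (localcloud) 🐱 > list storagepools clusterid=<destination-cluster-uuid> filter=id,name,type,scope
   ~~~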
   
   The cmk output was:
   ~~~
   (localcloud) 🐱 > migrate virtualmachinewithvolume virtualmachineid=28fb0eed-4277-4f2e-a179-38ed3051c723 hostid=73cd3fa4-01ce-4b55-970f-14efd0143aa1 migrateto[0].volume=291563e5-2ea0-4aaa-b14e-b747d48231cd migrateto[0].pool=2a30020a-76f9-4f47-89f1-a447739b934d migrateto[1].volume=e64b3984-65e2-4786-8189-d654deda52ec migrateto[1].pool=bda4aa2b-d38b-341b-a968-98762ac528cf
   virtualmachine = map[account:admin affinitygroup:[] cpunumber:2 
cpuspeed:1200 cpuused:9.4% created:2022-02-16T20:53:28+0200 
details:map[Message.ReservedCapacityFreed.Flag:false 
hypervisortoolsversion:xenserver61 
platform:device-model:qemu-upstream-compat;vga:std;videoram:8;apic:true;viridian:false;device_id:0001;timeoffset:-1;pae:true;acpi:1;hpet:true;secureboot:false;nx:true
 rootdisksize:6] diskioread:0 diskiowrite:0 diskkbsread:3.0130144e+07 
diskkbswrite:1.651675e+07 diskofferingid:ed9e9144-c46a-4b33-a16d-6a16a32cbafa 
diskofferingname:VOL-on-SF-200 displayname:MIGR-VM-SF-2 displayvm:true 
domain:ROOT domainid:ce4fbcaf-8031-11ec-8660-4626a7ce6ce6 
guestosid:1dcfb598-8032-11ec-8660-4626a7ce6ce6 haenable:false 
hasannotations:false hostid:73cd3fa4-01ce-4b55-970f-14efd0143aa1 
hostname:xcp1.noc-dev.localdomain hypervisor:XenServer 
id:28fb0eed-4277-4f2e-a179-38ed3051c723 instancename:i-2-37-VM 
isdynamicallyscalable:false lastupdated:2022-02-17T10:21:24+0200 memory:2048 
memoryintfreekbs:1346 memorykbs:2.09714e+06 memorytargetkbs:668210 name:MIGR-VM-SF-2 
networkkbsread:51031 networkkbswrite:51031 nic:[map[broadcasturi:vlan://138 
deviceid:0 extradhcpoption:[] gateway:10.1.1.1 
id:08b1c5b0-3846-4aa8-a25f-2841254d7aed ipaddress:10.1.1.53 isdefault:true 
isolationuri:vlan://138 macaddress:02:00:1e:11:00:0d netmask:255.255.255.0 
networkid:78a105a3-0e08-4077-89f5-fa929be5d833 networkname:xenserver-net-1 
secondaryip:[] traffictype:Guest type:Isolated]] osdisplayname:CentOS 7 
ostypeid:1dcfb598-8032-11ec-8660-4626a7ce6ce6 passwordenabled:false 
pooltype:Iscsi receivedbytes:0 rootdeviceid:0 rootdevicetype:ROOT 
securitygroup:[] sentbytes:0 
serviceofferingid:e3a933aa-269e-4de7-9876-4b79dab6b885 
serviceofferingname:SF-200 state:Running tags:[] 
templatedisplaytext:CentOS-7-x86_64-Minimal-2009 
templateid:129a82d9-86cd-4323-a5af-f2f0767bb70e 
templatename:CentOS-7-x86_64-Minimal-2009 
userid:48bd974a-8032-11ec-8660-4626a7ce6ce6 username:admin 
zoneid:7980e0b4-ebbd-425a-97d3-c09af5890cee zonename:ZONE1]
   ~~~
   
   Here is the management server log for this second trial: [nv_interclust_vm_migr_sf_nfs_success.txt](https://github.com/apache/cloudstack/files/8086805/nv_interclust_vm_migr_sf_nfs_success.txt)
    

