Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi
>>On 24/03/17, 1:51 AM, "Tutkowski, Mike"  wrote:

>>Thanks, Simon
>>I wonder if we support that in CloudStack.

IIRC, CloudStack supports this. I am testing this out in my 4.10 environment, 
will update shortly.

Regards,
Sateesh

On 3/23/17, 2:18 PM, "Simon Weller"  wrote:

Mike,


It is possible to do this on vCenter, but I believe it requires a special license.


Here's the info on it:


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html


- Si

From: Tutkowski, Mike 
Sent: Thursday, March 23, 2017 3:09 PM
To: dev@cloudstack.apache.org
Subject: Re: Cannot migrate VMware VM with root disk to host in 
different cluster (CloudStack 4.10)

This is interesting:

If I shut the VM down and then migrate its root disk to storage in the 
other cluster, then start up the VM, the VM gets started up correctly (running 
on the new host using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from one 
cluster to another with VMware? This works for XenServer and I probably just 
assumed it would work in VMware, but maybe it doesn’t?

The reason I’m asking now is because I’m investigating the support of 
cross-cluster migration of a VM that uses managed storage. This works for 
XenServer as of 4.9 and I was looking to implement similar functionality for 
VMware.

On 3/23/17, 2:01 PM, "Tutkowski, Mike"  
wrote:

Another piece of info:

I tried this same VM + storage migration using NFS for both 
datastores instead of iSCSI for both datastores and it failed with the same 
error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0
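The trace above says each VirtualMachineRelocateSpecDiskLocator passed to RelocateVM_Task was missing its required datastore property. A minimal, hypothetical Python sketch (plain dicts mirroring the shape of vim.vm.RelocateSpec, not the actual vSphere SDK or CloudStack code) of what a well-formed spec needs:

```python
# Hypothetical sketch: mirrors the shape of vim.vm.RelocateSpec to
# illustrate the failure mode in the SOAP error above. Not SDK code.

def build_disk_locator(disk_key, datastore_ref):
    """Build one DiskLocator-like entry for a relocate spec.

    vCenter rejects the request when 'datastore' is absent, matching the
    'Required property datastore is missing' error in this thread.
    """
    if datastore_ref is None:
        raise ValueError("datastore is required for each disk locator")
    return {"diskId": disk_key, "datastore": datastore_ref}

def build_relocate_spec(host_ref, target_datastore_ref, disk_keys):
    """Assemble a RelocateSpec-like dict: target host plus one locator per disk."""
    return {
        "host": host_ref,
        "disk": [build_disk_locator(k, target_datastore_ref) for k in disk_keys],
    }

# Illustrative references ("host-42", "datastore-7", disk key 2000 are made up).
spec = build_relocate_spec("host-42", "datastore-7", [2000])
assert all("datastore" in d for d in spec["disk"])
```

If the bug is in how CloudStack assembles the spec, the fix would be to ensure every disk locator carries the target datastore reference before the call is serialized.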

On 3/23/17, 12:33 PM, "Tutkowski, Mike"  
wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within 
the same VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within 
the same VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike" 
 wrote:

A little update here:

In the debugger, I made sure we asked for the correct 
source datastore (I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked with the proper source 
and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: 
Unable to access file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are 
within the same VMware datastore.

The source datastore and the target datastore are both 
using iSCSI.
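The "not accessible on the host" failure reads like a visibility precondition: in this setup each host sees only its own cluster's datastore. A toy sketch of such a check (purely illustrative, not vSphere API behavior — as noted elsewhere in the thread, vSphere can combine vMotion and Storage vMotion even without shared datastores, so a code path that insists on shared visibility may itself be the bug):

```python
# Toy visibility check, illustrating why "Unable to access file [SIOC-1]"
# can occur when neither endpoint host sees both datastores.
# Host and datastore names here are illustrative.

def can_relocate(host_datastores, src_host, dst_host, src_ds, dst_ds):
    """True if at least one endpoint host can see both datastores."""
    src_visible = host_datastores.get(src_host, set())
    dst_visible = host_datastores.get(dst_host, set())
    return {src_ds, dst_ds} <= src_visible or {src_ds, dst_ds} <= dst_visible

# The setup in this thread: each host sees only its own cluster's datastore.
access = {"host-a": {"SIOC-1"}, "host-b": {"SIOC-2"}}
assert not can_relocate(access, "host-a", "host-b", "SIOC-1", "SIOC-2")
```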

On 3/23/17, 11:53 AM, "Tutkowski, Mike" 
 wrote:

Also, in case it matters, both datastores are iSCSI 
based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
>
> My version is 5.5 in both clusters.
>
Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi
>>On 24/03/17, 4:18 AM, "Tutkowski, Mike"  wrote:

>>OK, yeah, it does.

>>The source host has access to the source datastore and the destination host 
>>has access to the destination datastore.
>>The source host does not have access to the destination datastore nor does 
>>the destination host have access to the source datastore.
Still, this should be supported by CloudStack.

>>I've been focusing on doing this with a source and a target datastore that are 
>>both either NFS or iSCSI (but I think you should be able to go NFS to iSCSI 
>>or vice versa, as well).

Mike, I will try this scenario with 4.10 and will share the update.

Regards,
Sateesh

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Not sure. Unfortunately my dev environment is currently being used for 4.10, so 
I don't have the resources to test prior releases at present.

It's hard to say at the moment when this was broken, but it does seem pretty 
important.

> On Mar 23, 2017, at 6:17 PM, Sergey Levitskiy  
> wrote:
> 
> Was it working before in 4.9?
Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
I opened the following ticket for this issue:

https://issues.apache.org/jira/browse/CLOUDSTACK-9849

On 3/23/17, 6:03 PM, "Tutkowski, Mike"  wrote:

I think I should open a blocker for this for 4.10. Perhaps one of our 
VMware people can take a look. It sounds like it’s a critical issue.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
Was it working before in 4.9?

On 3/23/17, 5:03 PM, "Tutkowski, Mike"  wrote:

I think I should open a blocker for this for 4.10. Perhaps one of our 
VMware people can take a look. It sounds like it’s a critical issue.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
OK, yeah, it does.

The source host has access to the source datastore and the destination host has 
access to the destination datastore.

The source host does not have access to the destination datastore nor does the 
destination host have access to the source datastore.

I've been focusing on doing this with a source and a target datastore that are 
both either NFS or iSCSI (but I think you should be able to go NFS to iSCSI or 
vice versa, as well).

> On Mar 23, 2017, at 4:09 PM, Sergey Levitskiy  
> wrote:
> 
> It shouldn’t, as long as the destination host has access to the 
destination datastore.
Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
It shouldn’t, as long as the destination host has access to the destination 
datastore.

On 3/23/17, 1:34 PM, "Tutkowski, Mike"  wrote:

So, in my case, both the source and target datastores are cluster-scoped 
primary storage in CloudStack (not zone wide). Would that matter? For 
XenServer, that cluster-scoped configuration (but using storage repositories, 
of course) works.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
It looks like a bug. For VMware, moving a root volume with migrateVolume with 
livemigrate=true for zone-wide primary storage works just fine for us; in the 
background it uses Storage vMotion. From another angle, MigrateVirtualMachine 
also works perfectly fine. I know for a fact that VMware supports moving from 
host to host and storage to storage at the same time, so it seems to be a bug 
in the migrateVirtualMachineWithVolume implementation. A vSphere Standard 
license is enough for both regular and storage vMotion.
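To make the distinction concrete, here is a hypothetical sketch of the three migration paths named above. The API command names come from the thread; the routing logic is illustrative only, not the actual CloudStack implementation:

```python
# Illustrative routing of the three CloudStack migration APIs mentioned
# in this thread. Not the real CloudStack dispatch logic.

def choose_migration_api(host_changes: bool, storage_changes: bool) -> str:
    if host_changes and storage_changes:
        # Combined vMotion + Storage vMotion: a single RelocateVM_Task call
        # carrying both a target host and per-disk target datastores.
        return "migrateVirtualMachineWithVolume"
    if storage_changes:
        # Storage vMotion only (livemigrate=true).
        return "migrateVolume"
    if host_changes:
        # vMotion only.
        return "migrateVirtualMachine"
    return "no-op"

assert choose_migration_api(True, True) == "migrateVirtualMachineWithVolume"
```

The bug discussed here sits on the first branch, where one RelocateVM_Task must describe both the host change and every disk's new datastore.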

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
So, in my case, both the source and target datastores are cluster-scoped 
primary storage in CloudStack (not zone wide). Would that matter? For 
XenServer, that cluster-scoped configuration (but using storage repositories, 
of course) works.

On 3/23/17, 2:31 PM, "Sergey Levitskiy"  wrote:

It looks like a bug. For VMware, moving a root volume with migrateVolume with 
livemigrate=true for zone-wide PS works just fine for us. In the background, it 
uses Storage vMotion. From another angle, MigrateVirtualMachine also works 
perfectly fine. I know for a fact that VMware supports moving from host to host 
and storage to storage at the same time, so it seems to be a bug in the 
migrateVirtualMachineWithVolume implementation. A vSphere Standard license is 
enough for both regular and Storage vMotion.
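
[Editorial sketch] The two code paths compared above are driven by different CloudStack API commands. A rough sketch of the unsigned query strings follows; the endpoint and IDs are placeholders, and a real call would also need apiKey/signature parameters or a session:

```java
public class ApiUrls {
    // Builds the (unsigned) query string for the two CloudStack commands
    // discussed above. Parameter names follow the 4.x API docs; the
    // management-server host and all IDs are placeholders.
    static String migrateVolume(String volumeId, String storagePoolId) {
        return "http://mgmt:8080/client/api?command=migrateVolume"
             + "&volumeid=" + volumeId
             + "&storageid=" + storagePoolId
             + "&livemigrate=true";
    }

    static String migrateVmWithVolume(String vmId, String hostId,
                                      String volumeId, String poolId) {
        return "http://mgmt:8080/client/api?command=migrateVirtualMachineWithVolume"
             + "&virtualmachineid=" + vmId
             + "&hostid=" + hostId
             + "&migrateto[0].volume=" + volumeId
             + "&migrateto[0].pool=" + poolId;
    }

    public static void main(String[] args) {
        System.out.println(migrateVolume("vol-1", "pool-2"));
        System.out.println(migrateVmWithVolume("vm-1", "host-9", "vol-1", "pool-2"));
    }
}
```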

On 3/23/17, 1:21 PM, "Tutkowski, Mike"  wrote:

Thanks, Simon

I wonder if we support that in CloudStack.

On 3/23/17, 2:18 PM, "Simon Weller"  wrote:

Mike,


It is possible to do this on vCenter, but I believe it requires a 
special license.


Here's the info on it:


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html


- Si

From: Tutkowski, Mike 
Sent: Thursday, March 23, 2017 3:09 PM
To: dev@cloudstack.apache.org
Subject: Re: Cannot migrate VMware VM with root disk to host in 
different cluster (CloudStack 4.10)

This is interesting:

If I shut the VM down and then migrate its root disk to storage in 
the other cluster, then start up the VM, the VM gets started up correctly 
(running on the new host using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from 
one cluster to another with VMware? This works for XenServer and I probably 
just assumed it would work in VMware, but maybe it doesn’t?

The reason I’m asking now is because I’m investigating the support 
of cross-cluster migration of a VM that uses managed storage. This works for 
XenServer as of 4.9 and I was looking to implement similar functionality for 
VMware.

On 3/23/17, 2:01 PM, "Tutkowski, Mike"  
wrote:

Another piece of info:

I tried this same VM + storage migration using NFS for both 
datastores instead of iSCSI for both datastores and it failed with the same 
error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

On 3/23/17, 12:33 PM, "Tutkowski, Mike" 
 wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are 
within the same VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are 
within the same VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike" 
 wrote:

A little update here:

In the debugger, I made sure we asked for the correct 
source datastore (I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked 
having the proper source and target datastores, I now 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Thanks, Simon

I wonder if we support that in CloudStack.

On 3/23/17, 2:18 PM, "Simon Weller"  wrote:

Mike,


It is possible to do this on vCenter, but I believe it requires a special 
license.


Here's the info on it:


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html


- Si

From: Tutkowski, Mike 
Sent: Thursday, March 23, 2017 3:09 PM
To: dev@cloudstack.apache.org
Subject: Re: Cannot migrate VMware VM with root disk to host in different 
cluster (CloudStack 4.10)

This is interesting:

If I shut the VM down and then migrate its root disk to storage in the 
other cluster, then start up the VM, the VM gets started up correctly (running 
on the new host using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from one 
cluster to another with VMware? This works for XenServer and I probably just 
assumed it would work in VMware, but maybe it doesn’t?

The reason I’m asking now is because I’m investigating the support of 
cross-cluster migration of a VM that uses managed storage. This works for 
XenServer as of 4.9 and I was looking to implement similar functionality for 
VMware.

On 3/23/17, 2:01 PM, "Tutkowski, Mike"  wrote:

Another piece of info:

I tried this same VM + storage migration using NFS for both datastores 
instead of iSCSI for both datastores and it failed with the same error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

On 3/23/17, 12:33 PM, "Tutkowski, Mike"  
wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the 
same VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the 
same VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike"  
wrote:

A little update here:

In the debugger, I made sure we asked for the correct source 
datastore (I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having 
the proper source and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: 
Unable to access file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within 
the same VMware datastore.

The source datastore and the target datastore are both using 
iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike" 
 wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
>
> My version is 5.5 in both clusters.
>
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>>
>>
 On 23/03/17, 7:21 PM, "Tutkowski, Mike" 
 wrote:
>>
 However, perhaps someone can clear this up for me:
 With XenServer, we are able to migrate a VM and its 
volumes from a host using a shared SR in one cluster to a host using a shared 
SR in another cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the 
source host have to be able to see the target datastore? If so, does that mean 
the target datastore has to be zone-wide primary storage when using VMware to 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
This is interesting:

If I shut the VM down and then migrate its root disk to storage in the other 
cluster, then start up the VM, the VM gets started up correctly (running on the 
new host using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from one cluster to 
another with VMware? This works for XenServer and I probably just assumed it 
would work in VMware, but maybe it doesn’t?

The reason I’m asking now is because I’m investigating the support of 
cross-cluster migration of a VM that uses managed storage. This works for 
XenServer as of 4.9 and I was looking to implement similar functionality for 
VMware.

On 3/23/17, 2:01 PM, "Tutkowski, Mike"  wrote:

Another piece of info:

I tried this same VM + storage migration using NFS for both datastores 
instead of iSCSI for both datastores and it failed with the same error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

On 3/23/17, 12:33 PM, "Tutkowski, Mike"  wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike"  
wrote:

A little update here:

In the debugger, I made sure we asked for the correct source 
datastore (I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having the 
proper source and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to 
access file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the 
same VMware datastore.

The source datastore and the target datastore are both using iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike"  
wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike" 
 wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its 
volumes from a host using a shared SR in one cluster to a host using a shared 
SR in another cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source 
host have to be able to see the target datastore? If so, does that mean the 
target datastore has to be zone-wide primary storage when using VMware to make 
this work?
>> Yes, Mike. But that’s the case with versions less than 5.1 
only. In vSphere 5.1 and later, vMotion does not require environments with 
shared storage. This is useful for performing cross-cluster migrations, when 
the target cluster machines might not have access to the source cluster's 
storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike" 
 wrote:
>> 
>>   This looks a little suspicious to me (in 
VmwareResource before we call VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Another piece of info:

I tried this same VM + storage migration using NFS for both datastores instead 
of iSCSI for both datastores and it failed with the same error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

On 3/23/17, 12:33 PM, "Tutkowski, Mike"  wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike"  wrote:

A little update here:

In the debugger, I made sure we asked for the correct source datastore 
(I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having the 
proper source and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to 
access file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datastore.

The source datastore and the target datastore are both using iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike"  
wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike" 
 wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its volumes 
from a host using a shared SR in one cluster to a host using a shared SR in 
another cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source host 
have to be able to see the target datastore? If so, does that mean the target 
datastore has to be zone-wide primary storage when using VMware to make this 
work?
>> Yes, Mike. But that’s the case with versions less than 5.1 only. 
In vSphere 5.1 and later, vMotion does not require environments with shared 
storage. This is useful for performing cross-cluster migrations, when the 
target cluster machines might not have access to the source cluster's storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike" 
 wrote:
>> 
>>   This looks a little suspicious to me (in VmwareResource 
before we call VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
filerTo.getUuid());
>>   morDsAtSource = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
filerTo.getUuid());
>>   if (morDsAtTarget == null) {
>>   String msg = "Unable to find the 
target datastore: " + filerTo.getUuid() + " on target host: " + 
tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
>>   s_logger.error(msg);
>>   throw new Exception(msg);
>>   }
>> 
>>   We use filerTo.getUuid() when trying to get a pointer to 
both the target and source datastores. Since filerTo.getUuid() has the UUID for 
the target datastore, that works for morDsAtTarget, but morDsAtSource ends up 
being null.
>> 
>>   For some reason, we only check if morDsAtTarget is null 
(I’m not sure why we don’t check if 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike"  wrote:

A little update here:

In the debugger, I made sure we asked for the correct source datastore (I 
edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having the proper 
source and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to access 
file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datastore.

The source datastore and the target datastore are both using iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike"  wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike" 
 wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its volumes from a 
host using a shared SR in one cluster to a host using a shared SR in another 
cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source host 
have to be able to see the target datastore? If so, does that mean the target 
datastore has to be zone-wide primary storage when using VMware to make this 
work?
>> Yes, Mike. But that’s the case with versions less than 5.1 only. In 
vSphere 5.1 and later, vMotion does not require environments with shared 
storage. This is useful for performing cross-cluster migrations, when the 
target cluster machines might not have access to the source cluster's storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike"  
wrote:
>> 
>>   This looks a little suspicious to me (in VmwareResource before 
we call VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
filerTo.getUuid());
>>   morDsAtSource = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
filerTo.getUuid());
>>   if (morDsAtTarget == null) {
>>   String msg = "Unable to find the target 
datastore: " + filerTo.getUuid() + " on target host: " + 
tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
>>   s_logger.error(msg);
>>   throw new Exception(msg);
>>   }
>> 
>>   We use filerTo.getUuid() when trying to get a pointer to both 
the target and source datastores. Since filerTo.getUuid() has the UUID for the 
target datastore, that works for morDsAtTarget, but morDsAtSource ends up being 
null.
>> 
>>   For some reason, we only check if morDsAtTarget is null (I’m 
not sure why we don’t check if morDsAtSource is null, too).
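
[Editorial sketch] The lookup bug described above can be shown with a toy model: each host only "sees" the datastores mounted on it, so resolving the source datastore by the target pool's UUID necessarily returns null in a cross-cluster migration. The names mirror the snippet, but this map-based model is illustrative only, not the real HypervisorHostHelper:

```java
import java.util.Map;

// Toy model of findDatastoreWithBackwardsCompatibility: a host can only
// resolve UUIDs of datastores mounted on it.
public class DatastoreLookup {
    static final Map<String, Map<String, String>> SEEN = Map.of(
        "srcHost", Map.of("uuid-src", "datastore-12"),
        "tgtHost", Map.of("uuid-tgt", "datastore-66"));

    static String findDatastore(String host, String uuid) {
        return SEEN.getOrDefault(host, Map.of()).get(uuid);
    }

    public static void main(String[] args) {
        String targetUuid = "uuid-tgt"; // filerTo.getUuid() -- the TARGET pool's UUID
        // Bug: the same (target) UUID is used for both lookups.
        String morDsAtTarget = findDatastore("tgtHost", targetUuid); // found
        String morDsAtSource = findDatastore("srcHost", targetUuid); // null!
        System.out.println(morDsAtTarget + " / " + morDsAtSource);
        // Fix sketch: resolve the source datastore by the SOURCE volume's own
        // pool UUID, and fail fast if either lookup comes back null.
        String morDsAtSourceFixed = findDatastore("srcHost", "uuid-src");
        System.out.println(morDsAtSourceFixed); // datastore-12
    }
}
```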
>> 
>>   On 3/23/17, 7:31 AM, "Tutkowski, Mike" 
 wrote:
>> 
>>   Hi,
>> 
>>   The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).
>> 
>>   A MigrateWithStorageCommand is sent to VmwareResource, 
which eventually calls VirtualMachineMO.changeDatastore.
>> 
>>   public boolean 
changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
>>   ManagedObjectReference morTask = 
_context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
VirtualMachineMovePriority.DEFAULT_PRIORITY);
>>   boolean result = 
_context.getVimClient().waitForTask(morTask);
>>   if (result) {
>>   _context.waitForTaskProgressDone(morTask);
>>   return true;
>>   } else {
>>  

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
A little update here:

In the debugger, I made sure we asked for the correct source datastore (I 
edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having the proper source 
and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to access file 
[SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datastore.

The source datastore and the target datastore are both using iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike"  wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike  
wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike"  
wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its volumes from a 
host using a shared SR in one cluster to a host using a shared SR in another 
cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source host have to 
be able to see the target datastore? If so, does that mean the target datastore 
has to be zone-wide primary storage when using VMware to make this work?
>> Yes, Mike. But that’s the case with versions less than 5.1 only. In 
vSphere 5.1 and later, vMotion does not require environments with shared 
storage. This is useful for performing cross-cluster migrations, when the 
target cluster machines might not have access to the source cluster's storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike"  
wrote:
>> 
>>   This looks a little suspicious to me (in VmwareResource before we 
call VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
filerTo.getUuid());
>>   morDsAtSource = 
HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
filerTo.getUuid());
>>   if (morDsAtTarget == null) {
>>   String msg = "Unable to find the target 
datastore: " + filerTo.getUuid() + " on target host: " + 
tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
>>   s_logger.error(msg);
>>   throw new Exception(msg);
>>   }
>> 
>>   We use filerTo.getUuid() when trying to get a pointer to both the 
target and source datastores. Since filerTo.getUuid() has the UUID for the 
target datastore, that works for morDsAtTarget, but morDsAtSource ends up being 
null.
>> 
>>   For some reason, we only check if morDsAtTarget is null (I’m not 
sure why we don’t check if morDsAtSource is null, too).
>> 
>>   On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
wrote:
>> 
>>   Hi,
>> 
>>   The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).
>> 
>>   A MigrateWithStorageCommand is sent to VmwareResource, which 
eventually calls VirtualMachineMO.changeDatastore.
>> 
>>   public boolean changeDatastore(VirtualMachineRelocateSpec 
relocateSpec) throws Exception {
>>   ManagedObjectReference morTask = 
_context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
VirtualMachineMovePriority.DEFAULT_PRIORITY);
>>   boolean result = 
_context.getVimClient().waitForTask(morTask);
>>   if (result) {
>>   _context.waitForTaskProgressDone(morTask);
>>   return true;
>>   } else {
>>   s_logger.error("VMware RelocateVM_Task to change 
datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
>>   }
>>   return false;
>>   }
>> 
>>   The parameter, VirtualMachineRelocateSpec, looks like this:
>> 
>>   http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
>> 
>>   The following error message is returned:
>> 
>>   Required property datastore is missing from data object of 
type 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike  
> wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
>>  wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its volumes from a host 
 using a shared SR in one cluster to a host using a shared SR in another 
 cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source host have to be 
 able to see the target datastore? If so, does that mean the target 
 datastore has to be zone-wide primary storage when using VMware to make 
 this work?
>> Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 
>> 5.1 and later, vMotion does not require environments with shared storage. 
>> This is useful for performing cross-cluster migrations, when the target 
>> cluster machines might not have access to the source cluster's storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:
>> 
>>   This looks a little suspicious to me (in VmwareResource before we call 
>> VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 
>> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
>> filerTo.getUuid());
>>   morDsAtSource = 
>> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
>> filerTo.getUuid());
>>   if (morDsAtTarget == null) {
>>   String msg = "Unable to find the target datastore: 
>> " + filerTo.getUuid() + " on target host: " + 
>> tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
>>   s_logger.error(msg);
>>   throw new Exception(msg);
>>   }
>> 
>>   We use filerTo.getUuid() when trying to get a pointer to both the 
>> target and source datastores. Since filerTo.getUuid() has the UUID for the 
>> target datastore, that works for morDsAtTarget, but morDsAtSource ends up 
>> being null.
>> 
>>   For some reason, we only check if morDsAtTarget is null (I’m not sure 
>> why we don’t check if morDsAtSource is null, too).
>> 
>>   On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
>> wrote:
>> 
>>   Hi,
>> 
>>   The CloudStack API that the GUI is invoking is 
>> migrateVirtualMachineWithVolume (which is expected since I’m asking to 
>> migrate a VM from a host in one cluster to a host in another cluster).
>> 
>>   A MigrateWithStorageCommand is sent to VmwareResource, which 
>> eventually calls VirtualMachineMO.changeDatastore.
>> 
>>   public boolean changeDatastore(VirtualMachineRelocateSpec 
>> relocateSpec) throws Exception {
>>   ManagedObjectReference morTask = 
>> _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
>> VirtualMachineMovePriority.DEFAULT_PRIORITY);
>>   boolean result = 
>> _context.getVimClient().waitForTask(morTask);
>>   if (result) {
>>   _context.waitForTaskProgressDone(morTask);
>>   return true;
>>   } else {
>>   s_logger.error("VMware RelocateVM_Task to change 
>> datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
>>   }
>>   return false;
>>   }
>> 
>>   The parameter, VirtualMachineRelocateSpec, looks like this:
>> 
>>   http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
>> 
>>   The following error message is returned:
>> 
>>   Required property datastore is missing from data object of type 
>> VirtualMachineRelocateSpecDiskLocator
>> 
>>   while parsing serialized DataObject of type 
>> vim.vm.RelocateSpec.DiskLocator
>>   at line 1, column 327
>> 
>>   while parsing property "disk" of static type 
>> ArrayOfVirtualMachineRelocateSpecDiskLocator
>> 
>>   while parsing serialized DataObject of type vim.vm.RelocateSpec
>>   at line 1, column 187
>> 
>>   while parsing call information for method RelocateVM_Task
>>   at line 1, column 110
>> 
>>   while parsing SOAP body
>>   at line 1, column 102
>> 
>>   while parsing SOAP envelope
>>   at line 1, column 38
>> 
>>   while parsing HTTP request for method relocate
>>   on object of type vim.VirtualMachine
>>   at line 1, column 0
>> 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
My version is 5.5 in both clusters.

> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
>  wrote:
> 
> 
>>> On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:
> 
>>> However, perhaps someone can clear this up for me:   
>>> With XenServer, we are able to migrate a VM and its volumes from a host 
>>> using a shared SR in one cluster to a host using a shared SR in another 
>>> cluster even though the source host can’t see the target SR.
>>> Is the same thing possible with VMware or does the source host have to be 
>>> able to see the target datastore? If so, does that mean the target 
>>> datastore has to be zone-wide primary storage when using VMware to make 
>>> this work?
> Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 
> 5.1 and later, vMotion does not require environments with shared storage. 
> This is useful for performing cross-cluster migrations, when the target 
> cluster machines might not have access to the source cluster's storage.
> BTW, what is the version of ESXi hosts in this setup? 
> 
> Regards,
> Sateesh,
> CloudStack development,
> Accelerite, CA-95054
> 
>On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:
> 
>This looks a little suspicious to me (in VmwareResource before we call 
> VirtualMachineMO.changeDatastore):
> 
>morDsAtTarget = 
> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
> filerTo.getUuid());
>morDsAtSource = 
> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
> filerTo.getUuid());
>if (morDsAtTarget == null) {
>String msg = "Unable to find the target datastore: 
> " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() 
> + " to execute MigrateWithStorageCommand";
>s_logger.error(msg);
>throw new Exception(msg);
>}
> 
>We use filerTo.getUuid() when trying to get a pointer to both the 
> target and source datastores. Since filerTo.getUuid() has the UUID for the 
> target datastore, that works for morDsAtTarget, but morDsAtSource ends up 
> being null.
> 
>For some reason, we only check if morDsAtTarget is null (I’m not sure 
> why we don’t check if morDsAtSource is null, too).
> 
>On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
> wrote:
> 
>Hi,
> 
>The CloudStack API that the GUI is invoking is 
> migrateVirtualMachineWithVolume (which is expected since I’m asking to 
> migrate a VM from a host in one cluster to a host in another cluster).
> 
>A MigrateWithStorageCommand is sent to VmwareResource, which 
> eventually calls VirtualMachineMO.changeDatastore.
> 
>public boolean changeDatastore(VirtualMachineRelocateSpec 
> relocateSpec) throws Exception {
>ManagedObjectReference morTask = 
> _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
> VirtualMachineMovePriority.DEFAULT_PRIORITY);
>boolean result = 
> _context.getVimClient().waitForTask(morTask);
>if (result) {
>_context.waitForTaskProgressDone(morTask);
>return true;
>} else {
>s_logger.error("VMware RelocateVM_Task to change 
> datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
>}
>return false;
>}
> 
>The parameter, VirtualMachineRelocateSpec, looks like this:
> 
>http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
> 
>The following error message is returned:
> 
>Required property datastore is missing from data object of type 
> VirtualMachineRelocateSpecDiskLocator
> 
>while parsing serialized DataObject of type 
> vim.vm.RelocateSpec.DiskLocator
>at line 1, column 327
> 
>while parsing property "disk" of static type 
> ArrayOfVirtualMachineRelocateSpecDiskLocator
> 
>while parsing serialized DataObject of type vim.vm.RelocateSpec
>at line 1, column 187
> 
>while parsing call information for method RelocateVM_Task
>at line 1, column 110
> 
>while parsing SOAP body
>at line 1, column 102
> 
>while parsing SOAP envelope
>at line 1, column 38
> 
>while parsing HTTP request for method relocate
>on object of type vim.VirtualMachine
>at line 1, column 0
> 
>Thoughts?
> 
>Thanks!
>Mike
> 
>On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
>  wrote:
> 
> 
>Can you trace which API call is being used and what parameters were specified?

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi

>> On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:

>>However, perhaps someone can clear this up for me:   
>>With XenServer, we are able to migrate a VM and its volumes from a host using 
>>a shared SR in one cluster to a host using a shared SR in another cluster 
>>even though the source host can’t see the target SR.
>>Is the same thing possible with VMware or does the source host have to be 
>>able to see the target datastore? If so, does that mean the target datastore 
>>has to be zone-wide primary storage when using VMware to make this work?
Yes, Mike, but that is the case only with versions earlier than 5.1. In vSphere 5.1 
and later, vMotion does not require shared storage. This is useful for performing 
cross-cluster migrations, when the target cluster's machines might not have access 
to the source cluster's storage.
BTW, what version of ESXi are the hosts in this setup? 

Regards,
Sateesh
CloudStack development
Accelerite, CA-95054

On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:

This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

    morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
    morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
    if (morDsAtTarget == null) {
        String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
        s_logger.error(msg);
        throw new Exception(msg);
    }

We use filerTo.getUuid() when trying to get a pointer to both the 
target and source datastores. Since filerTo.getUuid() has the UUID for the 
target datastore, that works for morDsAtTarget, but morDsAtSource ends up being 
null.

For some reason, we only check if morDsAtTarget is null (I’m not sure 
why we don’t check if morDsAtSource is null, too).

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which 
eventually calls VirtualMachineMO.changeDatastore.

    public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
        ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
        boolean result = _context.getVimClient().waitForTask(morTask);
        if (result) {
            _context.waitForTaskProgressDone(morTask);
            return true;
        } else {
            s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
        }
        return false;
    }

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
 wrote:


Can you trace which API call is being used and what parameters 
were specified? 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

    morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
    morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
    if (morDsAtTarget == null) {
        String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
        s_logger.error(msg);
        throw new Exception(msg);
    }

We use filerTo.getUuid() when trying to get a pointer to both the target and 
source datastores. Since filerTo.getUuid() has the UUID for the target 
datastore, that works for morDsAtTarget, but morDsAtSource ends up being null.

For some reason, we only check if morDsAtTarget is null (I’m not sure why we 
don’t check if morDsAtSource is null, too).
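If both lookups are meant to succeed, a symmetric guard would surface the failure immediately instead of letting the null morDsAtSource propagate into the relocate spec. Here is a minimal, self-contained sketch of that pattern; the Map lookups stand in for HypervisorHostHelper.findDatastoreWithBackwardsCompatibility and the names are illustrative, not CloudStack's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class DatastoreGuard {
    // Stand-ins for the datastore inventory visible to each host.
    static Map<String, String> targetHostDs = new HashMap<>();
    static Map<String, String> sourceHostDs = new HashMap<>();

    // Returns the morDs for a datastore UUID, or null when the host cannot see it.
    static String findDatastore(Map<String, String> hostDs, String uuid) {
        return hostDs.get(uuid);
    }

    // Guard BOTH lookups, not just the target one.
    static void checkDatastores(String uuid) {
        String morDsAtTarget = findDatastore(targetHostDs, uuid);
        String morDsAtSource = findDatastore(sourceHostDs, uuid);
        if (morDsAtTarget == null) {
            throw new IllegalStateException("Unable to find the target datastore: " + uuid);
        }
        if (morDsAtSource == null) {
            // This is the check the current code skips; without it the null
            // only surfaces later inside the serialized relocate spec.
            throw new IllegalStateException("Unable to find the source datastore: " + uuid);
        }
    }

    public static void main(String[] args) {
        targetHostDs.put("uuid-66", "datastore-66");
        try {
            checkDatastores("uuid-66");
        } catch (IllegalStateException e) {
            // Fails on the source side early, with a readable message.
            System.out.println(e.getMessage());
        }
    }
}
```

With a guard like this, the cross-cluster case would fail with a clear "source datastore" message rather than the opaque SOAP parse error further down the thread.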

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually 
calls VirtualMachineMO.changeDatastore.

    public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
        ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
        boolean result = _context.getVimClient().waitForTask(morTask);
        if (result) {
            _context.waitForTaskProgressDone(morTask);
            return true;
        } else {
            s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
        }
        return false;
    }

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy"  
wrote:


Can you trace which API call is being used and what parameters were 
specified? migrateVirtualMachineWithVolume vs migrateVirtualMachine







Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
However, perhaps someone can clear this up for me:

With XenServer, we are able to migrate a VM and its volumes from a host using a 
shared SR in one cluster to a host using a shared SR in another cluster even 
though the source host can’t see the target SR.

Is the same thing possible with VMware or does the source host have to be able 
to see the target datastore? If so, does that mean the target datastore has to 
be zone-wide primary storage when using VMware to make this work?

On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:

This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

    morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
    morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
    if (morDsAtTarget == null) {
        String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
        s_logger.error(msg);
        throw new Exception(msg);
    }

We use filerTo.getUuid() when trying to get a pointer to both the target 
and source datastores. Since filerTo.getUuid() has the UUID for the target 
datastore, that works for morDsAtTarget, but morDsAtSource ends up being null.

For some reason, we only check if morDsAtTarget is null (I’m not sure why 
we don’t check if morDsAtSource is null, too).

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually 
calls VirtualMachineMO.changeDatastore.

    public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
        ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
        boolean result = _context.getVimClient().waitForTask(morTask);
        if (result) {
            _context.waitForTaskProgressDone(morTask);
            return true;
        } else {
            s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
        }
        return false;
    }

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
 wrote:


Can you trace which API call is being used and what parameters were 
specified? migrateVirtualMachineWithVolume vs migrateVirtualMachine









Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Hi,

The CloudStack API that the GUI is invoking is migrateVirtualMachineWithVolume 
(which is expected since I’m asking to migrate a VM from a host in one cluster 
to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually calls 
VirtualMachineMO.changeDatastore.

    public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
        ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
        boolean result = _context.getVimClient().waitForTask(morTask);
        if (result) {
            _context.waitForTaskProgressDone(morTask);
            return true;
        } else {
            s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
        }
        return false;
    }

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0
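Since the SOAP fault says the required datastore property of a disk locator was never populated, a cheap client-side pre-flight check on the spec before submitting RelocateVM_Task would turn the server-side parse error into an immediate, readable one. A hypothetical sketch follows; the nested DiskLocator class is only a stand-in mirroring the shape of vim25's VirtualMachineRelocateSpecDiskLocator, not the real SDK type:

```java
import java.util.ArrayList;
import java.util.List;

public class RelocateSpecCheck {
    // Minimal stand-in for VirtualMachineRelocateSpecDiskLocator.
    static class DiskLocator {
        int diskId;
        String datastore; // morDs reference; leaving this null reproduces the SOAP fault

        DiskLocator(int diskId, String datastore) {
            this.diskId = diskId;
            this.datastore = datastore;
        }
    }

    // Collects the disk IDs of any locators missing their required datastore.
    static List<Integer> disksMissingDatastore(List<DiskLocator> locators) {
        List<Integer> missing = new ArrayList<>();
        for (DiskLocator loc : locators) {
            if (loc.datastore == null) {
                missing.add(loc.diskId);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        List<DiskLocator> disks = new ArrayList<>();
        disks.add(new DiskLocator(2000, "datastore-66"));
        disks.add(new DiskLocator(2001, null)); // source datastore never resolved
        System.out.println("locators missing datastore: " + disksMissingDatastore(disks));
        // → locators missing datastore: [2001]
    }
}
```

Run against the spec in the imgur screenshot, a check like this would flag the locator whose datastore is null before vCenter ever sees the request.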

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy"  wrote:


Can you trace which API call is being used and what parameters were specified? 
migrateVirtualMachineWithVolume vs migrateVirtualMachine





Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-22 Thread Sergey Levitskiy

Can you trace which API call is being used and what parameters were specified? 
migrateVirtualMachineWithVolume vs migrateVirtualMachine



Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-22 Thread Tutkowski, Mike
Slight correction:

That image on Imgur is of the VirtualMachineRelocateSpec that’s passed to the 
VirtualMachineMO.changeDatastore method to attempt to migrate the VM and its 
root disk.

From: "Tutkowski, Mike" 
Date: Wednesday, March 22, 2017 at 11:10 PM
To: "dev@cloudstack.apache.org" 
Subject: Cannot migrate VMware VM with root disk to host in different cluster 
(CloudStack 4.10)

Hi,

Within CloudStack 4.10, I have two VMware 5.5 clusters (both in the same VMware 
datacenter).

Each cluster has a shared iSCSI-based datastore.

I can initiate the process of migrating a VM and its root disk from a host in 
one cluster to a host in another cluster. However, it fails with the following 
error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

When I re-ran the test in the debugger and looked at the 
VirtualMachineRelocateSpecDiskLocator instance mentioned in the error message, 
I saw the following:

http://imgur.com/a/vtKcq

It looks like the code is not specifying the source datastore.

Can anyone else reproduce this?

Thanks!
Mike


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-22 Thread Tutkowski, Mike
Also, by the way, datastore-66 (referenced in the image) is the target 
datastore.

From: "Tutkowski, Mike" 
Date: Wednesday, March 22, 2017 at 11:13 PM
To: "dev@cloudstack.apache.org" 
Subject: Re: Cannot migrate VMware VM with root disk to host in different 
cluster (CloudStack 4.10)

Slight correction:

That image on Imgur is of the VirtualMachineRelocateSpec that’s passed to the 
VirtualMachineMO.changeDatastore method to attempt to migrate the VM and its 
root disk.

From: "Tutkowski, Mike" 
Date: Wednesday, March 22, 2017 at 11:10 PM
To: "dev@cloudstack.apache.org" 
Subject: Cannot migrate VMware VM with root disk to host in different cluster 
(CloudStack 4.10)

Hi,

Within CloudStack 4.10, I have two VMware 5.5 clusters (both in the same VMware 
datacenter).

Each cluster has a shared iSCSI-based datastore.

I can initiate the process of migrating a VM and its root disk from a host in 
one cluster to a host in another cluster. However, it fails with the following 
error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

When I re-ran the test in the debugger and looked at the 
VirtualMachineRelocateSpecDiskLocator instance mentioned in the error message, 
I saw the following:

http://imgur.com/a/vtKcq

It looks like the code is not specifying the source datastore.

Can anyone else reproduce this?

Thanks!
Mike


Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-22 Thread Tutkowski, Mike
Hi,

Within CloudStack 4.10, I have two VMware 5.5 clusters (both in the same VMware 
datacenter).

Each cluster has a shared iSCSI-based datastore.

I can initiate the process of migrating a VM and its root disk from a host in 
one cluster to a host in another cluster. However, it fails with the following 
error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

When I re-ran the test in the debugger and looked at the 
VirtualMachineRelocateSpecDiskLocator instance mentioned in the error message, 
I saw the following:

http://imgur.com/a/vtKcq

It looks like the code is not specifying the source datastore.
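One way to avoid ever building a locator without a datastore is to choose it explicitly per disk: the target morDs for a disk that is moving, the disk's current morDs otherwise, failing fast if neither is known. This is a hypothetical sketch of that decision, not CloudStack's actual code; the method and parameter names are made up for illustration:

```java
public class DiskDatastoreChooser {
    // Every disk locator in a relocate spec must reference SOME datastore:
    // the target one if the disk is migrating, its current one if it stays put.
    static String datastoreFor(boolean diskIsMigrating, String targetMorDs, String currentMorDs) {
        String chosen = diskIsMigrating ? targetMorDs : currentMorDs;
        if (chosen == null) {
            // Fail here, client-side, instead of sending vCenter a
            // DiskLocator with its required datastore property unset.
            throw new IllegalStateException("disk locator would be built without a datastore");
        }
        return chosen;
    }

    public static void main(String[] args) {
        System.out.println(datastoreFor(true, "datastore-66", "datastore-12"));  // → datastore-66
        System.out.println(datastoreFor(false, "datastore-66", "datastore-12")); // → datastore-12
    }
}
```

Applied to the failing case above, the unresolved source datastore would trip the exception before RelocateVM_Task is called, rather than surfacing as the serialization error from vCenter.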

Can anyone else reproduce this?

Thanks!
Mike