Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi
>>On 24/03/17, 1:51 AM, "Tutkowski, Mike"  wrote:

>>Thanks, Simon
>>I wonder if we support that in CloudStack.

IIRC, CloudStack supports this. I am testing this out in my 4.10 environment 
and will update shortly.

Regards,
Sateesh

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi
>>On 24/03/17, 4:18 AM, "Tutkowski, Mike"  wrote:

>>OK, yeah, it does.

>>The source host has access to the source datastore and the destination host 
>>has access to the destination datastore.
>>The source host does not have access to the destination datastore nor does 
>>the destination host have access to the source datastore.
Still, this should be supported by CloudStack.
  
>>I've been focusing on doing this with a source and a target datastore that are 
>>both either NFS or iSCSI (but I think you should be able to go NFS to iSCSI 
>>or vice versa, as well).

Mike, I will try this scenario with 4.10 and will share the update.

Regards,
Sateesh


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Not sure. Unfortunately my dev environment is currently being used for 4.10, so 
I don't have the resources to test prior releases at present.

It's hard to say at the moment when this was broken, but it does seem pretty 
important.

> On Mar 23, 2017, at 6:17 PM, Sergey Levitskiy  
> wrote:
> 
> Was it working before in 4.9?

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
I opened the following ticket for this issue:

https://issues.apache.org/jira/browse/CLOUDSTACK-9849


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
Was it working before in 4.9?

On 3/23/17, 5:03 PM, "Tutkowski, Mike"  wrote:

I think I should open a blocker for this for 4.10. Perhaps one of our 
VMware people can take a look. It sounds like it’s a critical issue.


IPv6 in Basic Networking at CCC Miami (16-18 May)

2017-03-23 Thread Wido den Hollander
Hi,

CS 4.10 will have IPv6 support in Basic Networking, with support for security 
grouping as well.

I will be present at CCC in Miami in May ( http://us.cloudstackcollab.org/ ) 
and I will give a talk there about IPv6 in Basic Networking.

If you are looking into deploying IPv6 in Basic Networking, please come to 
this event!

I will make sure I have a live demo environment ready in one of our datacenters 
to show how it works and help you with drafting a design for your own cloud.

Another reason to come to CCC in Miami!

Register through http://us.cloudstackcollab.org/

See you in Miami!

Wido


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
OK, yeah, it does.

The source host has access to the source datastore and the destination host has 
access to the destination datastore.

The source host does not have access to the destination datastore nor does the 
destination host have access to the source datastore.

I've been focusing on doing this with a source and a target datastore that are 
both either NFS or iSCSI (but I think you should be able to go NFS to iSCSI or 
vice versa, as well).

> On Mar 23, 2017, at 4:09 PM, Sergey Levitskiy  wrote:
> 
> It shouldn’t, as long as the destination host has access to the destination 
> datastore.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
It shouldn’t, as long as the destination host has access to the destination 
datastore.

On 3/23/17, 1:34 PM, "Tutkowski, Mike"  wrote:

So, in my case, both the source and target datastores are cluster-scoped 
primary storage in CloudStack (not zone wide). Would that matter? For 
XenServer, that cluster-scoped configuration (but using storage repositories, 
of course) works.


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sergey Levitskiy
It looks like a bug. For VMware, moving a root volume with migrateVolume and 
livemigrate=true on zone-wide primary storage works just fine for us; in the 
background, it uses Storage vMotion. From another angle, MigrateVirtualMachine 
also works perfectly fine. I know for a fact that VMware supports moving from 
host to host and from storage to storage at the same time, so it seems to be a 
bug in the migrateVirtualMachineWithVolume implementation. A vSphere Standard 
license is enough for both regular and Storage vMotion.
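
For reference, the combined move described above maps to a single 
RelocateVM_Task whose relocate spec names both the destination host and the 
destination datastore. A minimal vim25 sketch (a connected VimPortType stub and 
placeholder MORs are assumed; this is illustrative, not CloudStack code):

    import com.vmware.vim25.ManagedObjectReference;
    import com.vmware.vim25.VimPortType;
    import com.vmware.vim25.VirtualMachineMovePriority;
    import com.vmware.vim25.VirtualMachineRelocateSpec;

    // Placeholders: vimPort is a connected VimPortType stub; vmMor, host, pool,
    // and datastore reference the VM and the destination cluster's resources.
    static ManagedObjectReference relocate(VimPortType vimPort, ManagedObjectReference vmMor,
            ManagedObjectReference host, ManagedObjectReference pool,
            ManagedObjectReference datastore) throws Exception {
        VirtualMachineRelocateSpec spec = new VirtualMachineRelocateSpec();
        spec.setHost(host);           // vMotion to a host in the other cluster
        spec.setPool(pool);           // a resource pool in the destination cluster
        spec.setDatastore(datastore); // Storage vMotion to the other datastore
        // One RelocateVM_Task performs the host move and the storage move together.
        return vimPort.relocateVMTask(vmMor, spec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    }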

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
So, in my case, both the source and target datastores are cluster-scoped 
primary storage in CloudStack (not zone wide). Would that matter? For 
XenServer, that cluster-scoped configuration (but using storage repositories, 
of course) works.


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Thanks, Simon

I wonder if we support that in CloudStack.

On 3/23/17, 2:18 PM, "Simon Weller"  wrote:

Mike,


It is possible to do this on vCenter, but it requires a special license, 
I believe.


Here's the info on it:


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html


https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html


- Si

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
This is interesting:

If I shut the VM down, migrate its root disk to storage in the other cluster, 
and then start the VM back up, it comes up correctly (running on the new host 
using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from one cluster to 
another with VMware? This works for XenServer, and I probably just assumed it 
would work in VMware, but maybe it doesn’t?

The reason I’m asking now is that I’m investigating support for cross-cluster 
migration of a VM that uses managed storage. This works for XenServer as of 4.9, 
and I was looking to implement similar functionality for VMware.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Another piece of info:

I tried this same VM + storage migration using NFS for both datastores instead 
of iSCSI for both datastores and it failed with the same error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0
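
The parse error is vCenter rejecting a VirtualMachineRelocateSpecDiskLocator 
whose required datastore field was never set. For illustration, a locator that 
satisfies the schema would be built roughly like this (a vim25 sketch with 
placeholder names, not the actual CloudStack code path):

    import com.vmware.vim25.ManagedObjectReference;
    import com.vmware.vim25.VirtualMachineRelocateSpec;
    import com.vmware.vim25.VirtualMachineRelocateSpecDiskLocator;

    // Placeholders: diskDeviceKey is the virtual disk's device key on the VM,
    // morTargetDatastore is the destination datastore's managed object reference.
    static void addDiskLocator(VirtualMachineRelocateSpec spec, int diskDeviceKey,
            ManagedObjectReference morTargetDatastore) {
        VirtualMachineRelocateSpecDiskLocator locator = new VirtualMachineRelocateSpecDiskLocator();
        locator.setDiskId(diskDeviceKey);         // which virtual disk to relocate
        locator.setDatastore(morTargetDatastore); // required; leaving it null yields the error above
        spec.getDisk().add(locator);              // per-disk override of the spec's datastore
    }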

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datacenter.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
A little update here:

In the debugger, I made sure we asked for the correct source datastore (I 
edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked with the proper source 
and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to access file 
[SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the same VMware 
datastore.

The source datastore and the target datastore are both using iSCSI.

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike  
> wrote:
> 
> My version is 5.5 in both clusters.
> 
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
>>  wrote:
>> 
>> 
 On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:
>> 
 However, perhaps someone can clear this up for me:   
 With XenServer, we are able to migrate a VM and its volumes from a host 
 using a shared SR in one cluster to a host using a shared SR in another 
 cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source host have to be 
 able to see the target datastore? If so, does that mean the target 
 datastore has to be zone-wide primary storage when using VMware to make 
 this work?
>> Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 
>> 5.1 and later, vMotion does not require environments with shared storage. 
>> This is useful for performing cross-cluster migrations, when the target 
>> cluster machines might not have access to the source cluster's storage.
>> BTW, what is the version of ESXi hosts in this setup? 
>> 
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>> 
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:
>> 
>>   This looks a little suspicious to me (in VmwareResource before we call 
>> VirtualMachineMO.changeDatastore):
>> 
>>   morDsAtTarget = 
>> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
>> filerTo.getUuid());
>>   morDsAtSource = 
>> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
>> filerTo.getUuid());
>>   if (morDsAtTarget == null) {
>>   String msg = "Unable to find the target datastore: 
>> " + filerTo.getUuid() + " on target host: " + 
>> tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
>>   s_logger.error(msg);
>>   throw new Exception(msg);
>>   }
>> 
>>   We use filerTo.getUuid() when trying to get a pointer to both the 
>> target and source datastores. Since filerTo.getUuid() has the UUID for the 
>> target datastore, that works for morDsAtTarget, but morDsAtSource ends up 
>> being null.
>> 
>>   For some reason, we only check if morDsAtTarget is null (I’m not sure 
>> why we don’t check if morDsAtSource is null, too).
>> 
>>   On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
>> wrote:
>> 
>>   Hi,
>> 
>>   The CloudStack API that the GUI is invoking is 
>> migrateVirtualMachineWithVolume (which is expected since I’m asking to 
>> migrate a VM from a host in one cluster to a host in another cluster).
>> 
>>   A MigrateWithStorageCommand is sent to VmwareResource, which 
>> eventually calls VirtualMachineMO.changeDatastore.
>> 
>>   public boolean changeDatastore(VirtualMachineRelocateSpec 
>> relocateSpec) throws Exception {
>>   ManagedObjectReference morTask = 
>> _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
>> VirtualMachineMovePriority.DEFAULT_PRIORITY);
>>   boolean result = 
>> _context.getVimClient().waitForTask(morTask);
>>   if (result) {
>>   _context.waitForTaskProgressDone(morTask);
>>   return true;
>>   } else {
>>   s_logger.error("VMware RelocateVM_Task to change 
>> datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
>>   }
>>   return false;
>>   }
>> 
>>   The parameter, VirtualMachineRelocateSpec, looks like this:
>> 
>>   http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
>> 
>>   The following error message is returned:
>> 
>>   Required property datastore is missing from data object of type 
>> VirtualMachineRelocateSpecDiskLocator
>> 
>>   while parsing serialized DataObject of type 
>> vim.vm.RelocateSpec.DiskLocator
>>   at line 1, column 327
>> 
>>   while parsing property "disk" of static type 
>> ArrayOfVirtualMachineRelocateSpecDiskLocator
>> 
>>   while parsing serialized DataObject of type vim.vm.RelocateSpec
>>   at line 1, column 187
>> 
>>   while parsing call information for method RelocateVM_Task
>>   at line 1, column 110
>> 
>>   while parsing SOAP body
>>   at line 1, column 102
>> 
>>   while parsing SOAP envelope
>>   at line 1, column 38
>> 
>>   while parsing HTTP request for method relocate
>>   on object of type vim.VirtualMachine
>>   at line 1, column 0
>> 

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
My version is 5.5 in both clusters.

> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
>  wrote:
> 
> 
>>> On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:
> 
>>> However, perhaps someone can clear this up for me:   
>>> With XenServer, we are able to migrate a VM and its volumes from a host 
>>> using a shared SR in one cluster to a host using a shared SR in another 
>>> cluster even though the source host can’t see the target SR.
>>> Is the same thing possible with VMware or does the source host have to be 
>>> able to see the target datastore? If so, does that mean the target 
>>> datastore has to be zone-wide primary storage when using VMware to make 
>>> this work?
> Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 
> 5.1 and later, vMotion does not require environments with shared storage. 
> This is useful for performing cross-cluster migrations, when the target 
> cluster machines might not have access to the source cluster's storage.
> BTW, what is the version of ESXi hosts in this setup? 
> 
> Regards,
> Sateesh,
> CloudStack development,
> Accelerite, CA-95054
> 
>On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:
> 
>This looks a little suspicious to me (in VmwareResource before we call 
> VirtualMachineMO.changeDatastore):
> 
>morDsAtTarget = 
> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, 
> filerTo.getUuid());
>morDsAtSource = 
> HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, 
> filerTo.getUuid());
>if (morDsAtTarget == null) {
>String msg = "Unable to find the target datastore: 
> " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() 
> + " to execute MigrateWithStorageCommand";
>s_logger.error(msg);
>throw new Exception(msg);
>}
> 
>We use filerTo.getUuid() when trying to get a pointer to both the 
> target and source datastores. Since filerTo.getUuid() has the UUID for the 
> target datastore, that works for morDsAtTarget, but morDsAtSource ends up 
> being null.
> 
>For some reason, we only check if morDsAtTarget is null (I’m not sure 
> why we don’t check if morDsAtSource is null, too).
> 
>On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
> wrote:
> 
>Hi,
> 
>The CloudStack API that the GUI is invoking is 
> migrateVirtualMachineWithVolume (which is expected since I’m asking to 
> migrate a VM from a host in one cluster to a host in another cluster).
> 
>A MigrateWithStorageCommand is sent to VmwareResource, which 
> eventually calls VirtualMachineMO.changeDatastore.
> 
>public boolean changeDatastore(VirtualMachineRelocateSpec 
> relocateSpec) throws Exception {
>ManagedObjectReference morTask = 
> _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, 
> VirtualMachineMovePriority.DEFAULT_PRIORITY);
>boolean result = 
> _context.getVimClient().waitForTask(morTask);
>if (result) {
>_context.waitForTaskProgressDone(morTask);
>return true;
>} else {
>s_logger.error("VMware RelocateVM_Task to change 
> datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
>}
>return false;
>}
> 
>The parameter, VirtualMachineRelocateSpec, looks like this:
> 
>http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
> 
>The following error message is returned:
> 
>Required property datastore is missing from data object of type 
> VirtualMachineRelocateSpecDiskLocator
> 
>while parsing serialized DataObject of type 
> vim.vm.RelocateSpec.DiskLocator
>at line 1, column 327
> 
>while parsing property "disk" of static type 
> ArrayOfVirtualMachineRelocateSpecDiskLocator
> 
>while parsing serialized DataObject of type vim.vm.RelocateSpec
>at line 1, column 187
> 
>while parsing call information for method RelocateVM_Task
>at line 1, column 110
> 
>while parsing SOAP body
>at line 1, column 102
> 
>while parsing SOAP envelope
>at line 1, column 38
> 
>while parsing HTTP request for method relocate
>on object of type vim.VirtualMachine
>at line 1, column 0
> 
>Thoughts?
> 
>Thanks!
>Mike
> 
>On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
>  wrote:
> 
> 
>Can you trace which API call is being used and what parameters were specified? migrateVirtualMachineWithVolumeAttempts vs migrateVirtualMachine

Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Sateesh Chodapuneedi

>> On 23/03/17, 7:21 PM, "Tutkowski, Mike"  wrote:

>>However, perhaps someone can clear this up for me:   
>>With XenServer, we are able to migrate a VM and its volumes from a host using 
>>a shared SR in one cluster to a host using a shared SR in another cluster 
>>even though the source host can’t see the target SR.
>>Is the same thing possible with VMware or does the source host have to be 
>>able to see the target datastore? If so, does that mean the target datastore 
>>has to be zone-wide primary storage when using VMware to make this work?
Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 5.1 
and later, vMotion does not require environments with shared storage. This is 
useful for performing cross-cluster migrations, when the target cluster 
machines might not have access to the source cluster's storage.
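
As a rough illustration (just a sketch, not CloudStack's exact code; morTargetHost, morTargetPool, and morTargetDs are placeholders for already-resolved ManagedObjectReferences), a shared-nothing vMotion is a single RelocateVM_Task whose spec names both the compute and the storage target:

    // Sketch only: shared-nothing vMotion (vSphere 5.1+) moves compute and
    // storage in one RelocateVM_Task call.
    VirtualMachineRelocateSpec spec = new VirtualMachineRelocateSpec();
    spec.setHost(morTargetHost);       // target ESXi host, possibly in another cluster
    spec.setPool(morTargetPool);       // resource pool of the target cluster
    spec.setDatastore(morTargetDs);    // target datastore the source host cannot see
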
BTW, what is the version of ESXi hosts in this setup? 

Regards,
Sateesh,
CloudStack development,
Accelerite, CA-95054

On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:

This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
if (morDsAtTarget == null) {
    String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
    s_logger.error(msg);
    throw new Exception(msg);
}

We use filerTo.getUuid() when trying to get a pointer to both the 
target and source datastores. Since filerTo.getUuid() has the UUID for the 
target datastore, that works for morDsAtTarget, but morDsAtSource ends up being 
null.

For some reason, we only check if morDsAtTarget is null (I’m not sure 
why we don’t check if morDsAtSource is null, too).

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  
wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which 
eventually calls VirtualMachineMO.changeDatastore.

public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
    ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    boolean result = _context.getVimClient().waitForTask(morTask);
    if (result) {
        _context.waitForTaskProgressDone(morTask);
        return true;
    } else {
        s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
    }
    return false;
}

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
 wrote:


Can you trace which API call is being used and what parameters were specified?

[GitHub] cloudstack issue #1740: CLOUDSTACK-9572 Snapshot on primary storage not clea...

2017-03-23 Thread syed
Github user syed commented on the issue:

https://github.com/apache/cloudstack/pull/1740
  
   Thanks @yvsubhash. I did the code review and it LGTM.




Re: Controlling Boot order of VMs

2017-03-23 Thread Syed Ahmed
We tried to use the ISO approach. There are some problems we found: the ISO
is not shareable, i.e., you can only attach it to one VM at a time. We also
found that some of the VMs crashed when we tried to unmount the ISO. I'll
look into the advanced options as Sergey suggested. We are using XenServer,
so we would have to add that support there; a rough sketch of what that
could look like is below.
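
Just a sketch of the XenAPI side, assuming conn and vm are an authenticated Connection and the VM's handle (this uses the XenAPI Java binding and is not existing CloudStack code):

    import java.util.HashMap;
    import java.util.Map;

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.VM;

    // Sketch only: make an HVM guest try PXE first by setting the BIOS boot
    // order (n = network, c = disk, d = DVD).
    static void setNetworkFirstBoot(Connection conn, VM vm) throws Exception {
        Map<String, String> bootParams = new HashMap<>();
        bootParams.put("order", "ncd");
        vm.setHVMBootPolicy(conn, "BIOS order");
        vm.setHVMBootParams(conn, bootParams);
    }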

On Thu, Mar 23, 2017 at 3:21 AM, Wido den Hollander  wrote:
> Hi,
>
> You can always try attaching an iPXE ISO to the VMs permanently; that will do
> the network boot for you.
>
> Wido
>
>> Op 22 maart 2017 om 22:59 schreef Syed Ahmed :
>>
>>
>> Hi Guys,
>>
>> I was wondering if it is possible to set the boot order of VMs that
>> get created. We have a use case where we want to boot the VM from the
>> network. Does something like this exist? If not, would people be
>> interested in having this functionality.
>>
>> Thanks,
>> -Syed


Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
if (morDsAtTarget == null) {
    String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
    s_logger.error(msg);
    throw new Exception(msg);
}

We use filerTo.getUuid() when trying to get a pointer to both the target and 
source datastores. Since filerTo.getUuid() has the UUID for the target 
datastore, that works for morDsAtTarget, but morDsAtSource ends up being null.

For some reason, we only check if morDsAtTarget is null (I’m not sure why we 
don’t check if morDsAtSource is null, too).
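
Something like the following is what I would expect instead (just a sketch, not a tested fix; srcDsUuid is an assumed variable holding the UUID of the datastore the volume currently lives on, rather than the target UUID from filerTo):

    morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
    morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, srcDsUuid);
    if (morDsAtTarget == null || morDsAtSource == null) {
        // Fail fast on either lookup so we never hand a half-built spec to vCenter.
        String msg = "Unable to find datastore (target: " + filerTo.getUuid() + ", source: " + srcDsUuid + ") to execute MigrateWithStorageCommand";
        s_logger.error(msg);
        throw new Exception(msg);
    }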

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually 
calls VirtualMachineMO.changeDatastore.

public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
    ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    boolean result = _context.getVimClient().waitForTask(morTask);
    if (result) {
        _context.waitForTaskProgressDone(morTask);
        return true;
    } else {
        s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
    }
    return false;
}

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy"  
wrote:


Can you trace which API call is being used and what parameters were specified? migrateVirtualMachineWithVolumeAttempts vs migrateVirtualMachine







Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
However, perhaps someone can clear this up for me:

With XenServer, we are able to migrate a VM and its volumes from a host using a 
shared SR in one cluster to a host using a shared SR in another cluster even 
though the source host can’t see the target SR.

Is the same thing possible with VMware or does the source host have to be able 
to see the target datastore? If so, does that mean the target datastore has to 
be zone-wide primary storage when using VMware to make this work?

On 3/23/17, 7:47 AM, "Tutkowski, Mike"  wrote:

This looks a little suspicious to me (in VmwareResource before we call 
VirtualMachineMO.changeDatastore):

morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
if (morDsAtTarget == null) {
    String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
    s_logger.error(msg);
    throw new Exception(msg);
}

We use filerTo.getUuid() when trying to get a pointer to both the target 
and source datastores. Since filerTo.getUuid() has the UUID for the target 
datastore, that works for morDsAtTarget, but morDsAtSource ends up being null.

For some reason, we only check if morDsAtTarget is null (I’m not sure why 
we don’t check if morDsAtSource is null, too).

On 3/23/17, 7:31 AM, "Tutkowski, Mike"  wrote:

Hi,

The CloudStack API that the GUI is invoking is 
migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate 
a VM from a host in one cluster to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually 
calls VirtualMachineMO.changeDatastore.

public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
    ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    boolean result = _context.getVimClient().waitForTask(morTask);
    if (result) {
        _context.waitForTaskProgressDone(morTask);
        return true;
    } else {
        s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
    }
    return false;
}

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type 
vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy" 
 wrote:


Can you trace which API call is being used and what parameters were specified? migrateVirtualMachineWithVolumeAttempts vs migrateVirtualMachine









Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Tutkowski, Mike
Hi,

The CloudStack API that the GUI is invoking is migrateVirtualMachineWithVolume 
(which is expected since I’m asking to migrate a VM from a host in one cluster 
to a host in another cluster).

A MigrateWithStorageCommand is sent to VmwareResource, which eventually calls 
VirtualMachineMO.changeDatastore.

public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
    ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    boolean result = _context.getVimClient().waitForTask(morTask);
    if (result) {
        _context.waitForTaskProgressDone(morTask);
        return true;
    } else {
        s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
    }
    return false;
}

The parameter, VirtualMachineRelocateSpec, looks like this:

http://imgur.com/a/vtKcq (datastore-66 is the target datastore)

The following error message is returned:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 327

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0
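
Reading that error, it looks like every disk locator in the relocate spec has to carry its own datastore reference. Just a sketch of what I would expect the spec-building to look like with the vim25 classes (morTargetDs and diskKey are placeholders for the resolved target datastore MOR and the disk's device key):

    // Sketch only: the parser complains precisely because the per-disk
    // datastore property is unset.
    VirtualMachineRelocateSpec relocateSpec = new VirtualMachineRelocateSpec();
    relocateSpec.setDatastore(morTargetDs);      // default datastore for the VM's files

    VirtualMachineRelocateSpecDiskLocator diskLocator = new VirtualMachineRelocateSpecDiskLocator();
    diskLocator.setDiskId(diskKey);              // device key of "Hard disk 1"
    diskLocator.setDatastore(morTargetDs);       // the "required property datastore"
    relocateSpec.getDisk().add(diskLocator);     // vim25 exposes the disk list directly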

Thoughts?

Thanks!
Mike

On 3/22/17, 11:50 PM, "Sergey Levitskiy"  wrote:


Can you trace which API call is being used and what parameters were specified? migrateVirtualMachineWithVolumeAttempts vs migrateVirtualMachine





[GitHub] cloudstack pull request #2018: CLOUDSTACK-9848: Added exit status checking f...

2017-03-23 Thread jayapalu
GitHub user jayapalu opened a pull request:

https://github.com/apache/cloudstack/pull/2018

CLOUDSTACK-9848: Added exit status checking for the iptables commands

Added checking of the exit status of the iptables commands.
On failure, an error is returned so that it is propagated to the management server.
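
As a generic illustration of the pattern (the actual change lives in the virtual router scripts, not in Java; this is just a sketch):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Sketch only: run an iptables command and surface a non-zero exit status
    // instead of silently ignoring it.
    static void runIptables(String... args) throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>();
        cmd.add("iptables");
        cmd.addAll(Arrays.asList(args));
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        int rc = p.waitFor();
        if (rc != 0) {
            throw new IOException("iptables exited with status " + rc);
        }
    }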

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack exitstatus

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2018.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2018


commit ab087cce9584c319a1e6144731aec32c2dcea0da
Author: Jayapal 
Date:   2017-03-23T13:16:55Z

CLOUDSTACK-9848: Added  exit status checking for the iptables commands






[GitHub] cloudstack issue #1726: CLOUDSTACK-9560 Root volume of deleted VM left unrem...

2017-03-23 Thread ustcweizhou
Github user ustcweizhou commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
LGTM now




[GitHub] cloudstack issue #2017: CLOUDSTACK-9847 vm_network_map is not getting cleane...

2017-03-23 Thread SudharmaJain
Github user SudharmaJain commented on the issue:

https://github.com/apache/cloudstack/pull/2017
  
@serg38 This is different from #1613. In #1613, vm_network_map was getting cleaned only when the nic reservation strategy is Start, but not when it is Create.




[GitHub] cloudstack issue #1994: CLOUDSTACK-9827: Storage tags stored in multiple pla...

2017-03-23 Thread nvazquez
Github user nvazquez commented on the issue:

https://github.com/apache/cloudstack/pull/1994
  
Thanks @mike-tutkowski! I force-pushed to kick off Travis again.




[GitHub] cloudstack issue #2014: WIP

2017-03-23 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2014
  
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-599




[GitHub] cloudstack issue #1726: CLOUDSTACK-9560 Root volume of deleted VM left unrem...

2017-03-23 Thread yvsubhash
Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1726
  
@ustcweizhou the coding convention issue is taken care of.




[GitHub] cloudstack issue #1960: [4.11/Future] CLOUDSTACK-9782: Host HA and KVM HA pr...

2017-03-23 Thread borisstoyanov
Github user borisstoyanov commented on the issue:

https://github.com/apache/cloudstack/pull/1960
  
@rhtyd PRs 2003 and 2011 got merged; could you please rebase against master so it picks up the fixes? Once we have packages, I'll continue with the verification on the physical hosts.
 




[GitHub] cloudstack issue #2014: WIP

2017-03-23 Thread blueorangutan
Github user blueorangutan commented on the issue:

https://github.com/apache/cloudstack/pull/2014
  
@abhinandanprateek a Jenkins job has been kicked to build packages. I'll 
keep you posted as I make progress.




[GitHub] cloudstack issue #1740: CLOUDSTACK-9572 Snapshot on primary storage not clea...

2017-03-23 Thread yvsubhash
Github user yvsubhash commented on the issue:

https://github.com/apache/cloudstack/pull/1740
  
@syed 
On XenServer we always keep the latest snapshot on primary storage, as XenServer needs it to create delta snapshots. If the snapshot is not present on XenServer, the next snapshot would be a full snapshot rather than a delta snapshot. Please refer to the related bug:
https://issues.apache.org/jira/browse/CLOUDSTACK-2630





[GitHub] cloudstack issue #2014: WIP

2017-03-23 Thread abhinandanprateek
Github user abhinandanprateek commented on the issue:

https://github.com/apache/cloudstack/pull/2014
  
@blueorangutan package




[GitHub] cloudstack issue #2017: CLOUDSTACK-9847 vm_network_map is not getting cleane...

2017-03-23 Thread serg38
Github user serg38 commented on the issue:

https://github.com/apache/cloudstack/pull/2017
  
@SudharmaJain Isn't it duplicate to PR1613 that is already merged?




[GitHub] cloudstack pull request #2017: CLOUDSTACK-9847 vm_network_map is not getting...

2017-03-23 Thread SudharmaJain
GitHub user SudharmaJain opened a pull request:

https://github.com/apache/cloudstack/pull/2017

CLOUDSTACK-9847 vm_network_map is not getting cleaned 

While releasing a nic with the Create reservation strategy, vm_network_map is not getting cleaned up. I have added the corresponding code to perform the cleanup.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack cs-9847

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2017.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2017


commit 159ee0e1fae99be44b77119aeb5235b0b1571b0d
Author: Sudharma Jain 
Date:   2017-03-23T10:36:31Z

CLOUDSTACK-9847 vm_network_map is not getting cleaned while releasing nic






[GitHub] cloudstack pull request #2016: CLOUDSTACK-9835 : To make management server a...

2017-03-23 Thread harikrishna-patnala
GitHub user harikrishna-patnala opened a pull request:

https://github.com/apache/cloudstack/pull/2016

CLOUDSTACK-9835 : To make management server and SSVM be in time sync

Added a new configuration parameter "ntp.server.list" to configure the NTP server IP(s) in the NTP settings of the SSVM.
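
For reference, a sketch of how such a setting is typically declared as a CloudStack ConfigKey (the names, category, and default below are illustrative assumptions, not necessarily the PR's exact code):

    // org.apache.cloudstack.framework.config.ConfigKey; sketch only.
    static final ConfigKey<String> NtpServerList = new ConfigKey<String>(
            "Advanced",               // category
            String.class,             // value type
            "ntp.server.list",        // global setting name
            null,                     // default: unset, so no NTP config is pushed
            "Comma separated list of NTP servers to configure inside the SSVM",
            true);                    // dynamic: changeable without a restart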

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Accelerite/cloudstack CLOUDSTACK-9835

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2016


commit dd312d6b479062a711b67dba0e9a614e9e64c608
Author: Harikrishna Patnala 
Date:   2017-03-15T06:23:30Z

CLOUDSTACK-9835 : Management server and SSVM should be in time sync

Added a new configuration parameter "ntp.server.list" to configure the NTP server IP(s) in the NTP settings of the SSVM






[GitHub] cloudstack pull request #2011: CLOUDSTACK-9811: fix duplicated nics on VR ca...

2017-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/2011




[GitHub] cloudstack pull request #2003: CLOUDSTACK-9811: fixed an issue if the dev is...

2017-03-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/2003




[GitHub] cloudstack issue #2003: CLOUDSTACK-9811: fixed an issue if the dev is not in...

2017-03-23 Thread karuturi
Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/2003
  
ok. Thanks everyone. I am merging this.




[GitHub] cloudstack issue #2011: CLOUDSTACK-9811: fix duplicated nics on VR caused by...

2017-03-23 Thread karuturi
Github user karuturi commented on the issue:

https://github.com/apache/cloudstack/pull/2011
  
ok. Thanks everyone. I am merging this.




[GitHub] cloudstack pull request #2015: CLOUDSTACK-9843 : Performance improvement of ...

2017-03-23 Thread sudhansu7
GitHub user sudhansu7 opened a pull request:

https://github.com/apache/cloudstack/pull/2015

CLOUDSTACK-9843 : Performance improvement of SSHHelper

A delay of 1 sec had been introduced in the SSHHelper class as fail-safe code. Removing it improves the performance of deployVm by 4 sec, and of createFirewallRule and createPortForwardingRule by 1 sec each.

We have not faced any issues after removing the delay; it was introduced when we were using an older version of the Trilead library.
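
For context, just a sketch of the bounded wait the newer Trilead API supports, which makes a fixed sleep unnecessary (this is illustrative, not the exact SSHHelper code):

    import com.trilead.ssh2.ChannelCondition;
    import com.trilead.ssh2.Session;

    // Sketch only: block on the channel state with a bounded timeout and
    // return as soon as output or EOF is available, instead of sleeping.
    static void waitForOutput(Session sess, long timeoutMillis) throws Exception {
        int conditions = sess.waitForCondition(
                ChannelCondition.STDOUT_DATA | ChannelCondition.STDERR_DATA | ChannelCondition.EOF,
                timeoutMillis);
        if ((conditions & ChannelCondition.TIMEOUT) != 0) {
            throw new Exception("Timed out waiting for SSH command output");
        }
    }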

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sudhansu7/cloudstack CLOUDSTACK-9843

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/2015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2015


commit 05f1982c19e949e90e47d0afbf3afbdadb845b6b
Author: Sudhansu 
Date:   2017-03-21T10:49:44Z

CLOUDSTACK-9843 : Performance improvement of deployVirtualMachine, 
createFirewallRule, createPortForwardingRule

Removed 1 sec sleep in SSHHelper.






[GitHub] cloudstack issue #1707: CLOUDSTACK-9397: Add Watchdog timer to KVM Instance

2017-03-23 Thread wido
Github user wido commented on the issue:

https://github.com/apache/cloudstack/pull/1707
  
I rebased the code against master; it merges again. Tests are looking good.




[GitHub] cloudstack issue #2011: CLOUDSTACK-9811: fix duplicated nics on VR caused by...

2017-03-23 Thread DaanHoogland
Github user DaanHoogland commented on the issue:

https://github.com/apache/cloudstack/pull/2011
  
thanks @ustcweizhou 
@karuturi can we merge this? and merge forward?




Re: Controlling Boot order of VMs

2017-03-23 Thread Wido den Hollander
Hi,

You can always try attaching an iPXE ISO to the VMs permanently; that will do
the network boot for you.

Wido

> Op 22 maart 2017 om 22:59 schreef Syed Ahmed :
> 
> 
> Hi Guys,
> 
> I was wondering if it is possible to set the boot order of VMs that
> get created. We have a use case where we want to boot the VM from the
> network. Does something like this exist? If not, would people be
> interested in having this functionality.
> 
> Thanks,
> -Syed