>>On 24/03/17, 1:51 AM, "Tutkowski, Mike" wrote:
>>Thanks, Simon
>>I wonder if we support that in CloudStack.
IIRC, CloudStack supports this. I am testing this out in my 4.10 environment,
will update shortly.
Regards,
Sateesh
>>On 24/03/17, 4:18 AM, "Tutkowski, Mike" wrote:
>>OK, yeah, it does.
>>The source host has access to the source datastore and the destination host
>>has access to the destination datastore.
>>The source host does not have access to the destination datastore nor does
>>the destination host have access to the source datastore.
Not sure. Unfortunately my dev environment is currently being used for 4.10, so
I don't have the resources to test prior releases at present.
It's hard to say at the moment when this was broken, but it does seem pretty
important.
> On Mar 23, 2017, at 6:17 PM, Sergey Levitskiy
I opened the following ticket for this issue:
https://issues.apache.org/jira/browse/CLOUDSTACK-9849
On 3/23/17, 6:03 PM, "Tutkowski, Mike" wrote:
I think I should open a blocker for this for 4.10. Perhaps one of our
VMware people can take a look. It sounds like it’s a critical issue.
Was it working before in 4.9?
On 3/23/17, 5:03 PM, "Tutkowski, Mike" wrote:
I think I should open a blocker for this for 4.10. Perhaps one of our
VMware people can take a look. It sounds like it’s a critical issue.
On 3/23/17, 4:48 PM, "Tutkowski, Mike" wrote:
OK, yeah, it does.
The source host has access to the source datastore and the destination host has
access to the destination datastore.
The source host does not have access to the destination datastore nor does the
destination host have access to the source datastore.
I've been focusing on
It shouldn’t as long as the destination host has access to the destination
datastore.
On 3/23/17, 1:34 PM, "Tutkowski, Mike" wrote:
So, in my case, both the source and target datastores are cluster-scoped
primary storage in CloudStack (not zone wide). Would that matter?
It looks like a bug. For VMware, moving a root volume with migrateVolume with
livemigrate=true for zone-wide primary storage works just fine for us. In the
background, it uses Storage vMotion. From another angle, MigrateVirtualMachine
also works perfectly fine. I know for a fact that VMware supports moving from
So, in my case, both the source and target datastores are cluster-scoped
primary storage in CloudStack (not zone wide). Would that matter? For
XenServer, that cluster-scoped configuration (but using storage repositories,
of course) works.
On 3/23/17, 2:31 PM, "Sergey Levitskiy"
Thanks, Simon
I wonder if we support that in CloudStack.
On 3/23/17, 2:18 PM, "Simon Weller" wrote:
Mike,
It is possible to do this on vCenter, but I believe it requires a special
license.
Here's the info on it:
This is interesting:
If I shut the VM down and then migrate its root disk to storage in the other
cluster, then start up the VM, the VM gets started up correctly (running on the
new host using the other datastore).
Perhaps you simply cannot live migrate a VM and its storage from one cluster to
another.
Another piece of info:
I tried this same VM + storage migration using NFS for both datastores instead
of iSCSI for both datastores and it failed with the same error message:
Required property datastore is missing from data object of type
VirtualMachineRelocateSpecDiskLocator
while parsing
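For context, in the vSphere API each disk being moved is described by a VirtualMachineRelocateSpecDiskLocator inside the VirtualMachineRelocateSpec, and the error above is what comes back when a locator's datastore reference is left unset. Below is a minimal sketch of that failure condition using plain-Java stand-ins rather than the real vim25 classes (names like DiskLocator and diskKey here are illustrative assumptions, not the SDK types):

```java
// Plain-Java stand-ins for the vSphere SDK's disk-locator objects
// (illustrative only; not the real vim25 classes).
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Stand-in for VirtualMachineRelocateSpecDiskLocator: each disk being
    // relocated must name its target datastore, otherwise vCenter rejects
    // the request with "Required property datastore is missing ...".
    static class DiskLocator {
        final int diskKey;
        final String datastore;  // null here reproduces the failing case
        DiskLocator(int diskKey, String datastore) {
            this.diskKey = diskKey;
            this.datastore = datastore;
        }
    }

    // Returns the keys of all disks whose locator has no datastore set.
    static List<Integer> missingDatastores(List<DiskLocator> disks) {
        List<Integer> missing = new ArrayList<>();
        for (DiskLocator d : disks) {
            if (d.datastore == null) missing.add(d.diskKey);
        }
        return missing;
    }

    public static void main(String[] args) {
        List<DiskLocator> disks = new ArrayList<>();
        disks.add(new DiskLocator(2000, "datastore-66"));
        disks.add(new DiskLocator(2001, null));  // no datastore -> the error
        System.out.println(missingDatastores(disks));
    }
}
```

In other words, the NFS-vs-iSCSI result is consistent with a spec-building bug on the client side rather than a storage-protocol issue: some disk locator is being serialized without its datastore set.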
Slight typo:
Both ESXi hosts are version 5.5 and both clusters are within the same VMware
datastore.
Should be (datastore changed to datacenter):
Both ESXi hosts are version 5.5 and both clusters are within the same VMware
datacenter.
On 3/23/17, 12:31 PM, "Tutkowski, Mike" wrote:
A little update here:
In the debugger, I made sure we asked for the correct source datastore (I
edited the UUID we were using for the source datastore).
When VirtualMachineMO.changeDatastore is later invoked having the proper source
and target datastores, I now see this error message:
Virtual
Also, in case it matters, both datastores are iSCSI-based.
> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike wrote:
My version is 5.5 in both clusters.
> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi wrote:
>> On 23/03/17, 7:21 PM, "Tutkowski, Mike" wrote:
>>However, perhaps someone can clear this up for me:
>>With XenServer, we are able to migrate a VM and its volumes from a host using
>>a shared SR in one cluster to a host using a shared SR in another cluster
This looks a little suspicious to me (in VmwareResource before we call
VirtualMachineMO.changeDatastore):
morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
morDsAtSource =
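To illustrate why that fragment looks suspicious: if the source-side lookup were also keyed off filerTo (the target filer's UUID), the source host, which cannot see the target datastore, would resolve nothing. A toy sketch of that pattern with hypothetical stand-ins (the maps and names below are illustrative, not the actual CloudStack code):

```java
// Hypothetical stand-in for the datastore lookup under suspicion; the real
// code resolves a MoRef via
// HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(host, uuid).
import java.util.Map;

public class Main {
    // Stand-in: each host only "sees" the datastores in its own cluster.
    static String findDatastore(Map<String, String> hostDatastores, String uuid) {
        return hostDatastores.get(uuid);  // null when the host cannot see it
    }

    public static void main(String[] args) {
        Map<String, String> sourceHostView = Map.of("src-uuid", "datastore-13");
        Map<String, String> targetHostView = Map.of("tgt-uuid", "datastore-66");

        String filerToUuid = "tgt-uuid";    // UUID of the *target* datastore
        String filerFromUuid = "src-uuid";  // UUID of the *source* datastore

        // Suspicious pattern: the source-side lookup keyed off the target
        // UUID. The source host cannot see the target datastore, so this
        // resolves to nothing.
        String morDsAtSourceWrong = findDatastore(sourceHostView, filerToUuid);

        // Expected pattern: the source-side lookup keyed off the source UUID.
        String morDsAtSourceRight = findDatastore(sourceHostView, filerFromUuid);

        System.out.println(morDsAtSourceWrong);
        System.out.println(morDsAtSourceRight);
    }
}
```

A null source-datastore reference at this point would leave the relocate spec's disk locator without a datastore, matching the serialization error reported above.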
However, perhaps someone can clear this up for me:
With XenServer, we are able to migrate a VM and its volumes from a host using a
shared SR in one cluster to a host using a shared SR in another cluster even
though the source host can’t see the target SR.
Is the same thing possible with VMware?
Hi,
The CloudStack API that the GUI is invoking is migrateVirtualMachineWithVolume
(which is expected since I’m asking to migrate a VM from a host in one cluster
to a host in another cluster).
A MigrateWithStorageCommand is sent to VmwareResource, which eventually calls
VirtualMachineMO.changeDatastore.
Can you trace which API call is being used and what parameters were specified?
migrateVirtualMachineWithVolume vs migrateVirtualMachine
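For reference, the two APIs differ mainly in that migrateVirtualMachineWithVolume carries a per-volume target-pool mapping alongside the target host, which plain migrateVirtualMachine (host-only) does not. A sketch of the parameter shape (parameter names as in the CloudStack API; the IDs are placeholders, not values from this thread):

```java
// Builds the query parameters for a migrateVirtualMachineWithVolume call.
// Parameter names follow the CloudStack API; IDs are placeholders.
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    static Map<String, String> migrateWithVolumeParams(
            String vmId, String hostId, Map<String, String> volumeToPool) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("command", "migrateVirtualMachineWithVolume");
        params.put("virtualmachineid", vmId);
        params.put("hostid", hostId);
        // The per-volume mapping is what distinguishes this call from
        // plain migrateVirtualMachine.
        int i = 0;
        for (Map.Entry<String, String> e : volumeToPool.entrySet()) {
            params.put("migrateto[" + i + "].volume", e.getKey());
            params.put("migrateto[" + i + "].pool", e.getValue());
            i++;
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> volumeToPool = new LinkedHashMap<>();
        volumeToPool.put("root-vol-id", "dest-pool-id");
        Map<String, String> p =
                migrateWithVolumeParams("vm-id", "dest-host-id", volumeToPool);
        System.out.println(p.get("command"));
    }
}
```

Seeing which of these shapes appears in the management-server log confirms which code path (and therefore which resource command) is being exercised.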
Slight correction:
That image on Imgur is of the VirtualMachineRelocateSpec that’s passed to the
VirtualMachineMO.changeDatastore method to attempt to migrate the VM and its
root disk.
From: "Tutkowski, Mike"
Date: Wednesday, March 22, 2017 at 11:10 PM
To:
Also, by the way, datastore-66 (referenced in the image) is the target
datastore.
From: "Tutkowski, Mike"
Date: Wednesday, March 22, 2017 at 11:13 PM
To: "dev@cloudstack.apache.org"
Subject: Re: Cannot migrate VMware VM with root disk to
Hi,
Within CloudStack 4.10, I have two VMware 5.5 clusters (both in the same VMware
datacenter).
Each cluster has a shared iSCSI-based datastore.
I can initiate the process of migrating a VM and its root disk from a host in
one cluster to a host in another cluster. However, it fails with the