Hi Ilya,

Thanks for getting back to me. Hmm, perhaps then it isn't using the SSVM,
which would explain why I can't see anything happening there while the
migration is running. So if it's just the Xen hosts involved, perhaps, as
you say, a host in the source cluster first copies the disk to secondary
storage, and then a host in the target cluster transfers that disk from
secondary storage to its own primary storage.
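
In the meantime I'll try to confirm it empirically. Below is a minimal
sketch of what I have in mind (assumptions: the secondary store is
NFS-mounted at /mnt/secondary and the intermediate copy lands under its
volumes/ directory; both the mount point and the layout are my guesses).
It just polls the size of that directory while a migration runs, so I can
see whether an intermediate copy of the volume shows up there.

#!/usr/bin/env python
# Rough check: poll the total size of the secondary store's volumes/
# directory while a volume migration is running, to see whether an
# intermediate copy gets staged there. /mnt/secondary and the volumes/
# layout are assumptions; adjust to however secondary storage is mounted
# in your environment.
import os
import time

SECONDARY_VOLUMES = "/mnt/secondary/volumes"  # assumed mount point / layout
INTERVAL_SECONDS = 30

def dir_size_bytes(path):
    """Total size of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # files can disappear mid-walk, e.g. during cleanup
    return total

if __name__ == "__main__":
    while True:
        gib = dir_size_bytes(SECONDARY_VOLUMES) / (1024.0 ** 3)
        print("%s  %.2f GiB under %s"
              % (time.strftime("%H:%M:%S"), gib, SECONDARY_VOLUMES))
        time.sleep(INTERVAL_SECONDS)

If the numbers don't move at all during a migration, that would suggest
secondary storage isn't being used as a staging area after all.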

It would be great if a dev could confirm this; someone on this list must
know for sure! :-)

Also, for very large disks, does anyone have a list of parameters which
would need to be changed to make sure there is enough time for a copy to
complete?

When I first tried it, the migration timed out after 3 hours with the
following error:

Resource [StoragePool:220] is unreachable: Migrate volume failed: Failed to
copy volume to secondary:
java.util.concurrent.TimeoutException: Async 10800 seconds timeout for task
com.xensource.xenapi.Task@1b417e44

I found the following parameters, all of which are set to 10800 (seconds):

copy.volume.wait                           10800
create.private.template.from.snapshot.wait 10800
create.private.template.from.volume.wait   10800
create.volume.from.snapshot.wait           10800
primary.storage.download.wait              10800

Any idea which one of those is relevant? I presume it's copy.volume.wait.
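
If copy.volume.wait does turn out to be the right one, this is roughly how
I was planning to bump it. It's just a sketch using the third-party "cs"
Python client (pip install cs); listConfigurations and updateConfiguration
are the standard API calls, but the endpoint, keys and the 28800-second
value below are placeholders, not recommendations:

#!/usr/bin/env python
# Check and raise copy.volume.wait through the CloudStack API.
# Uses the third-party "cs" client (pip install cs); the endpoint, keys
# and the 28800-second value are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="http://management-server:8080/client/api",  # placeholder
    key="YOUR_API_KEY",                                    # placeholder
    secret="YOUR_SECRET_KEY",                              # placeholder
)

# Show the current value; should report 10800.
current = api.listConfigurations(name="copy.volume.wait")
print(current["configuration"][0]["value"])

# Raise it to 8 hours so a large volume has time to copy.
api.updateConfiguration(name="copy.volume.wait", value="28800")

The same thing can of course be done from the global settings page in the
UI or with CloudMonkey; and I believe a management server restart is needed
before the new value is actually picked up, though I'm not 100% sure.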

Thanks!




On Wed, Aug 24, 2016 at 11:54 PM, ilya <[email protected]> wrote:

> Not certain how Xen Storage Migration is implemented in 4.5.2
>
> I'd suspect legacy mode would be
>
> 1) copy disks from primary store to secondary NFS
> 2) copy disks from secondary NFS to new primary store
>
> it might be slow... but if you have enough space - it should work...
>
> My understanding is that NFS is mounted directly on hypervisors. I'd ask
> someone else to confirm though...
>
> On 8/24/16 7:20 AM, cs user wrote:
> > Hi All,
> >
> > XenServer 6.5, CloudStack 4.5.2, NFS primary storage volumes.
> >
> > Let's say I have 1 pod with 2 clusters, and each cluster has its own
> > primary storage.
> >
> > If I migrate a volume from one primary storage to the other one, using
> > cloudstack, what aspect of the environment is responsible for this copy?
> >
> > I'm trying to identify bottlenecks but I can't see what is responsible
> > for this copying. Is it the Xen hosts themselves or the secondary
> > storage VM?
> >
> > Thanks!
> >
>
