Hi,

Timeouts while moving big volumes on top of XenServer are a pretty common
thing. Last time we had an issue with bigger volumes, the following global
settings were changed (we just added one extra zero at the end of each
setting; this was on XenServer 6.2 and ACS 4.4.2):

migratewait: 3600
storage.pool.max.waitseconds: 3600
vm.op.cancel.interval: 3600
vm.op.cleanup.wait: 3600
wait: 1800
vm.tranisition.wait.interval: 3600
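
If you want to script this instead of clicking through the GUI, below is a
rough, untested Python sketch that pushes the same values through the
standard updateConfiguration API call. The endpoint and API key pair are
placeholders for your deployment, and keep in mind that some global settings
only take effect after a management server restart. Also, the 3 hour limit
you hit sounds like it could be the copy.volume.wait setting (default 10800
seconds, i.e. 3 hours), so that one may be worth bumping as well.

import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "http://mgmt.example.com:8080/client/api"  # placeholder
API_KEY = "your-api-key"                              # placeholder
SECRET_KEY = "your-secret-key"                        # placeholder

# The same global settings as above; note that vm.tranisition.wait.interval
# really is spelled that way in the ACS global settings.
SETTINGS = {
    "migratewait": "3600",
    "storage.pool.max.waitseconds": "3600",
    "vm.op.cancel.interval": "3600",
    "vm.op.cleanup.wait": "3600",
    "wait": "1800",
    "vm.tranisition.wait.interval": "3600",
}

def sign(params, secret):
    # CloudStack request signing: sort the parameters, build a lowercased
    # URL-encoded query string, HMAC-SHA1 it with the secret key, and
    # base64-encode the digest.
    query = "&".join(
        "%s=%s" % (k.lower(), urllib.parse.quote(v, safe="").lower())
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    digest = hmac.new(secret.encode(), query.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def update_configuration(name, value):
    params = {
        "command": "updateConfiguration",
        "name": name,
        "value": value,
        "response": "json",
        "apiKey": API_KEY,
    }
    params["signature"] = sign(params, SECRET_KEY)
    url = ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        print(name, "->", resp.read().decode())

for name, value in SETTINGS.items():
    update_configuration(name, value)

The move itself should just be the migrateVolume API call under the hood,
so you can drive the whole thing from the same script if you want.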

--
Makrand


On Thu, Aug 25, 2016 at 11:56 AM, cs user <[email protected]> wrote:

> Hi Makrand,
>
> Thanks for responding!
>
> Yes, this does work for us for small disks. Even with small disks, however,
> it seems to take a while, 5 mins or so, on a pretty fast 10Gb network.
>
> However, currently I am trying to move 2TB disks, which is taking a while,
> and I was hitting a 3 hour time limit which seemed to be a CloudStack
> default for cloning disks.
>
> Migrations within the cluster, as you say, seem to use Storage XenMotion,
> and these work fine with small disks. Haven't really tried huge ones with
> that.
>
> I think you are right that it is using the SSVM, despite the fact that I
> can't seem to see much activity when the copy is occurring.
>
> Thanks!
>
>
>
> On Wed, Aug 24, 2016 at 7:36 PM, Makrand <[email protected]> wrote:
>
> > Hi,
> >
> > I think you must be seeing an option like "storage migration required"
> > when you move volumes between primary storage pools. I've seen people in
> > the past complaining about this option not working (via GUI or API) with
> > a setup similar to yours. Did you get this working?
> >
> > Anyway, I think it has to be the system VM, because primary storage A has
> > no idea about primary storage B at the hypervisor level; only CloudStack
> > (via the SSVM) can see both as part of one cloud zone.
> >
> > In the normal case of moving a volume within a cluster, Storage XenMotion
> > is what it uses.
> >
> >
> >
> > --
> > Makrand
> >
> >
> > On Wed, Aug 24, 2016 at 7:50 PM, cs user <[email protected]> wrote:
> >
> > > Hi All,
> > >
> > > XenServer 6.5, CloudStack 4.5.2, NFS primary storage volumes.
> > >
> > > Let's say I have 1 pod with 2 clusters, and each cluster has its own
> > > primary storage.
> > >
> > > If I migrate a volume from one primary storage to the other using
> > > CloudStack, what aspect of the environment is responsible for this copy?
> > >
> > > I'm trying to identify bottlenecks, but I can't see what is responsible
> > > for this copying. Is it the Xen hosts themselves or the secondary
> > > storage VM?
> > >
> > > Thanks!
> > >
> >
>
