AFAIK, the concept of Storage XenMotion, while the volume is attached to
a running VM, works like this:

1) Placeholder volumes are created on the destination storage: a base
copy and a disk.
2) The original volume is snapshotted to produce the base copy, and a
new disk is created to receive the current writes. The same writes are
mirrored in real time to the placeholder disk on the destination.
3) The base copy is then synced to the base copy on the destination
volume.
4) Once the whole process is complete, the leftovers on the source are
cleared, and the base copy on the destination is merged with the disk
(a kind of coalescing).

The above is for the case where both source and destination primary
LUNs are mounted under the same cluster.
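
One rough way to watch this from a host's CLI is to list the VDIs
appearing on the destination SR while the mirror runs, and to check the
outstanding task (a sketch; the SR UUID placeholder is hypothetical):

 # the placeholder base copy and mirror disk should show up here
 xe vdi-list sr-uuid=<dest-sr-uuid> params=uuid,name-label,virtual-size

 # the copy task and its progress
 xe task-list params=name-label,status,progress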

If storage migration is what is needed and secondary storage is
involved (moving path: volume on source primary >> secondary >> volume
on destination primary), then it should take a very long time. I am not
sure how Storage XenMotion operations are handled in this case,
especially the current writes to the disk that is being moved.
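
For reference, this kind of move is triggered through CloudStack's
migrateVolume API; a minimal cloudmonkey sketch (UUIDs hypothetical):

 migrate volume volumeid=<vol-uuid> storageid=<dest-primary-uuid> livemigrate=true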

--
Makrand


On Wed, Aug 31, 2016 at 1:18 PM, cs user <[email protected]> wrote:

> Yep, I've narrowed this down to it running on the Xen host. You can see
> it in two ways there:
>
> # xe task-list
> uuid ( RO)                : 69w52567-124h-14r5-16vb-2c4567143541
>           name-label ( RO): Async.VDI.copy
>     name-description ( RO):
>               status ( RO): pending
>             progress ( RO): 0.700
>
> This actually gives you the percentage completed as well.
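>
> To keep an eye on it, the same command can be wrapped in watch (a
> sketch; plain shell, nothing beyond the xe CLI assumed):
>
>  watch -n 10 'xe task-list params=name-label,status,progress'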
>
>
> Also look for the process with:
>
>  ps -ef|grep sparse_dd
>
>
> You will see a process running with a parameter which shows the size of the
> disk, like:
>
> -size 2147483648000 -prezeroed
>
> (   2T in my case :-)   )
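>
> A rough one-liner to pull that size out and print it human-readable (a
> sketch; assumes GNU numfmt is present in dom0, and the [s] in the
> pattern keeps the pipeline from matching itself):
>
>  ps -eo args \
>    | awk '/[s]parse_dd/ { for (i=1; i<NF; i++) if ($i == "-size") print $(i+1) }' \
>    | numfmt --to=iec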
>
>
> So once the migration begins, you can look for it on your Xen hosts in
> the cluster; it usually starts on the same host as the VM to which the
> disk is attached.
>
>
> On Tue, Aug 30, 2016 at 6:00 PM, Yiping Zhang <[email protected]> wrote:
>
> > I think the work is mostly performed by the hypervisors. I have seen
> > the following during storage live migration in XenCenter:
> >
> > Highlight the primary storage for the departing cluster, then select
> > the “Storage” tab on the right side panel. You should see disk volumes
> > on that primary storage. The far-right column is the “Virtual Machine”
> > the disk belongs to.
> >
> > While the live storage migration is running, the migrating volume is
> > shown as attached to a VM with the name “control domain for host xxx”,
> > instead of the VM name it actually belongs to.
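> >
> > The same can be seen from the dom0 CLI (VDI UUID hypothetical):
> >
> >   xe vbd-list vdi-uuid=<vdi-uuid> params=vm-name-label,currently-attached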
> >
> > To me, this is pretty convincing that the Xen cluster is doing the
> > migration.
> >
> > Yiping
> >
> > On 8/27/16, 5:10 AM, "Makrand" <[email protected]> wrote:
> >
> >     Hello ilya,
> >
> >     If I am not mistaken, the NFS server IP and path are all one
> >     specifies when adding secondary storage in CloudStack. Running df -h
> >     on the ACS management server shows you the secondary storage mounted
> >     there. I don't think the hypervisor sees the NFS (even if primary
> >     storage and NFS come from the same storage box). Plus, while doing
> >     activities like VM deploys and snapshots, things always move from
> >     secondary to primary via the SSVM.
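> >
> >     For example, on the management server (a sketch; filtering df by
> >     filesystem type, so the exact mount path doesn't matter):
> >
> >         df -h -t nfs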
> >
> >     Have you actually seen any setup where you have verified this?
> >
> >     @cs user,
> >     When you're moving the volumes, are they attached to a running VM, or
> >     are they just standalone orphan volumes?
> >
> >
> >
> >     --
> >     Makrand
> >
> >
> >     On Thu, Aug 25, 2016 at 4:24 AM, ilya <[email protected]>
> > wrote:
> >
> >     > Not certain how Xen Storage Migration is implemented in 4.5.2
> >     >
> >     > I'd suspect legacy mode would be:
> >     >
> >     > 1) copy disks from primary store to secondary NFS
> >     > 2) copy disks from secondary NFS to new primary store
> >     >
> >     > it might be slow... but if you have enough space - it should
> >     > work...
> >     >
> >     > My understanding is that NFS is mounted directly on hypervisors.
> >     > I'd ask someone else to confirm though...
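> >     >
> >     > A simple check on a hypervisor, for whoever wants to confirm (plain
> >     > shell, nothing CloudStack-specific):
> >     >
> >     >     mount -t nfs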
> >     >
> >     > On 8/24/16 7:20 AM, cs user wrote:
> >     > > Hi All,
> >     > >
> >     > > XenServer 6.5, CloudStack 4.5.2, NFS primary storage volumes.
> >     > >
> >     > > Let's say I have 1 pod with 2 clusters, and each cluster has its
> >     > > own primary storage.
> >     > >
> >     > > If I migrate a volume from one primary storage to the other one
> >     > > using CloudStack, what aspect of the environment is responsible
> >     > > for this copy?
> >     > >
> >     > > I'm trying to identify bottlenecks, but I can't see what is
> >     > > responsible for this copying. Is it the Xen hosts themselves or
> >     > > the secondary storage VM?
> >     > >
> >     > > Thanks!
> >     > >
> >     >
> >
> >
> >
>
