You should create a new disk offering and tag it with NAS2. That way you
can consolidate the VM's volumes on one storage server.
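As a rough illustration (the endpoint, keys, offering name and disk size
below are placeholders, not values from this thread), such a disk offering
carrying the NAS2 storage tag could be created through the CloudStack query
API with something like:

import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "http://mgmt-server:8080/client/api"   # placeholder
API_KEY = "YOUR_API_KEY"                          # placeholder
SECRET_KEY = "YOUR_SECRET_KEY"                    # placeholder

def call(command, **params):
    # Standard CloudStack request signing: sort the parameters, lower-case
    # the encoded query string, HMAC-SHA1 it with the secret key.
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                     for k, v in sorted(params.items()))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    url = "%s?%s&signature=%s" % (ENDPOINT, query, signature)
    return urllib.request.urlopen(url).read().decode()

# Disk offering whose storage tag matches the tag set on the NAS2 primary pool.
print(call("createDiskOffering",
           name="nas2-data",                      # placeholder name
           displaytext="Data disk pinned to NAS2",
           disksize=20,                           # GB, placeholder
           tags="NAS2"))

Data volumes created from that offering should then only be allocated on
primary storage pools tagged NAS2.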


On Mon, Jan 28, 2013 at 11:39 PM, Arnaud Gaillard <
arnaud.gaill...@xtendsys.net> wrote:

> >
> > >This means that the VM was running on a specific primary and the volumes
> > >we created afterwards for this VM were not following the storage tag of
> > >the offering.
> >
> > Were you spinning up subsequent VMs from a template created from the
> > original VM? Were those latter VMs using the same service offering? If
> > so, that shouldn't happen.
> >
> >
> I'm not sure I understand your question. Here is a quick summary of what
> is happening:
>
> 1) I create a new VM with a system offering specifying storage tag NAS2
> (the template is on the secondary storage, and was created from a VM in
> another zone)
> 2) The VM is created on NAS2 and everything is fine (ROOT disk on NAS2)
> 3) I go into the management interface and add a new volume
> 4) The DATA disk for the VM is created on NAS1 and not NAS2
> 5) Now if I use this disk to extend the filesystem of the VM, I end up in a
> situation where the VM and its disks are spread over 2 storages.
>
> So I was just wondering if there was a way to create an additional volume
> on a specific primary, to avoid this kind of problem.
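A minimal sketch of that, assuming the NAS2-tagged disk offering suggested
at the top of this mail exists and reusing the call() helper from that
snippet (the UUIDs below are placeholders):

# Request the additional DATA disk against the NAS2-tagged offering so the
# allocator only considers primary storage tagged NAS2.
print(call("createVolume",
           name="vm01-data01",                    # placeholder
           zoneid="ZONE_UUID",                    # placeholder
           diskofferingid="NAS2_OFFERING_UUID"))  # id returned by createDiskOffering

The new volume can then be attached to the VM with attachVolume
(id=<volume uuid>, virtualmachineid=<vm uuid>).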
>
>
> >
> > >This may create significant incoherence and complexity when we do
> > >operations on primary storage.
> > >
> > >Is there a way to control on which primary a volume is created? Is this
> > >a bug?
> > >
> > >We also have strange behaviour following a primary storage crash, with
> > >some VMs time-traveling backward (the VM is up but the filesystem is in
> > >the state it was in one month ago; this state is different from the
> > >original template...).
> > >Also, some VMs were missing huge parts of their filesystem. This is also
> > >the case for VMs that were not on the impacted storage.
> > >
> > >The data are not corrupted on the NFS server, but after the restart the
> > >VMs are sometimes in an incoherent state from a filesystem point of view.
> > >We tried to understand what might cause this, but so far we have not come
> > >to a satisfactory solution.
> > >
> > >Have other users encountered this type of problem?
> > >
> > >Thanks,
> > >
> > >AG
> > >
> >
> > There seems to be something more fundamentally wrong in your cloud
> > environment. Volumes, even after a crash, should be as recent as the
> > crash. Do you have any data replication service/snapshotting running on
> > your primary storage server?
> >
> >
> We don't have any specific snapshot/replication on the primary storages.
> Please note that the problem arises with different storages from different
> vendors/technologies, and we have no data corruption on the other NFS
> shares of these storages.
>
> The only thing that seems to cause this problem is that we are in a
> multiple-primary architecture. This makes us believe that the problem is
> more related to how CloudStack manages the system images when one of the
> primaries becomes unavailable. However, we have not yet been able to
> determine what is really going on.
>
>
> We also noted that when a primary becomes unavailable and hosts go into
> Alert mode, it is no longer possible to deploy VMs on the node, even if the
> other primaries and the secondary are perfectly fine. This, coupled with
> the HA script that was rebooting nodes and bringing the whole
> infrastructure down if one primary goes offline, makes us believe that in
> the future we will go for a multi-cluster architecture with one primary
> rather than a big cluster with multiple primaries.
>
> Thanks!
>
> AG
>
>
