This is the spec I've been going off of:

https://cwiki.apache.org/CLOUDSTACK/storage-subsystem-20.html

It has a lot of good information, but it's not entirely sufficient for me
to work off of (hence all the related questions).  :)


On Fri, Mar 8, 2013 at 11:04 PM, Marcus Sorensen <shadow...@gmail.com> wrote:

> CloudStack wouldn't/shouldn't even get to the point of letting you do
> that without zone-wide storage. It wouldn't think the LUN was
> available/reachable in that cluster. It would be like trying to start
> a VM in a cluster it's not associated with. If the volume isn't on a
> pool that is "in" the cluster, it's not going to even try to use it
> there.
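A minimal sketch of the scope check Marcus is describing, in Java; the class and field names here are illustrative assumptions, not actual CloudStack code:

```java
// Illustrative sketch (not actual CloudStack code): a volume is only
// usable from a host's cluster if its pool is zone-wide or belongs to
// that same cluster.
class StoragePool {
    final String scope;   // "ZONE" or "CLUSTER"
    final Long clusterId; // null for zone-wide pools

    StoragePool(String scope, Long clusterId) {
        this.scope = scope;
        this.clusterId = clusterId;
    }

    boolean usableFromCluster(long hostClusterId) {
        // Zone-wide pools are reachable from every cluster in the zone;
        // cluster-scoped pools only from the cluster they belong to.
        return "ZONE".equals(scope)
                || (clusterId != null && clusterId == hostClusterId);
    }
}
```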
>
> I'm not sure if zone-wide and cluster-wide are exclusive, but I don't
> think so. It might be that you have to choose at the time of adding
> the primary storage, but hopefully it's implemented such that it can be
> either if it's capable of being zone-wide. The functional spec
> probably has the answers to this.
>
> On Fri, Mar 8, 2013 at 6:45 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
> > There is a method to implement called grantAccess.
> >
> > Edison was telling me it is here where I enforce an ACL.  If a data disk
> > were being migrated from one cluster to another, wouldn't this
> > grantAccess method be called when the hypervisor in the other cluster
> > was ready to access the volume?  At this point, I could add the IQN of
> > the host in question into the ACL and the volume could be accessed by
> > the host in the new cluster.
> >
> > Maybe I'm missing something here?
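The ACL bookkeeping being discussed might look roughly like this; note the names and signature are illustrative assumptions, not the real grantAccess hook in the 4.2 storage framework:

```java
// Illustrative sketch only: tracking an iSCSI volume's ACL as a set of
// host IQNs. The real grantAccess method in the 4.2 storage framework
// has a different signature; this just shows the ACL idea.
import java.util.HashSet;
import java.util.Set;

class IscsiVolumeAcl {
    private final Set<String> allowedIqns = new HashSet<>();

    // Called when a host (e.g. in the destination cluster of a
    // migration) needs access to the volume.
    boolean grantAccess(String hostIqn) {
        return allowedIqns.add(hostIqn); // idempotent for repeat grants
    }

    boolean revokeAccess(String hostIqn) {
        return allowedIqns.remove(hostIqn);
    }

    boolean hasAccess(String hostIqn) {
        return allowedIqns.contains(hostIqn);
    }
}
```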
> >
> >
> > On Fri, Mar 8, 2013 at 6:40 PM, Marcus Sorensen <shadow...@gmail.com>
> > wrote:
> >
> >> On that topic, I hope there's a method in the volume service that allows
> >> plugin writers to handle volume copy directly.
> >> On Mar 8, 2013 6:32 PM, "Marcus Sorensen" <shadow...@gmail.com> wrote:
> >>
> >> > It just depends. A VM will generally be tied to a cluster. There's
> >> > technically no reason why someone couldn't make a giant cluster if
> >> > your storage supports it, so on that side cluster-based seems fine.
> >> > But if you end up wanting to move a data disk from one VM to another,
> >> > and they happen to be in different clusters, that's expensive if you
> >> > don't have zone-wide storage. It usually involves dumping and
> >> > reimporting, and if the same SAN is hosting multiple clusters it may
> >> > seem silly to dump and copy back to the same SAN just so that the
> >> > disk is associated with another cluster.
> >> > On Mar 8, 2013 6:22 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
> >> > wrote:
> >> >
> >> >> Thanks for that explanation, Marcus.
> >> >>
> >> >> I believe the primary use case for me is to allow a cluster of hosts
> >> >> (XenServer, VMware, or KVM in particular) to share access to my iSCSI
> >> >> target (we would have a mapping of one VM per iSCSI target or one
> >> >> data disk per iSCSI target).
> >> >>
> >> >> I can't really see why hosts outside of the cluster would need access
> >> >> to it unless you actually are migrating the VM that's running on that
> >> >> volume to another cluster.
> >> >>
> >> >>
> >> >> On Fri, Mar 8, 2013 at 6:16 PM, Marcus Sorensen <shadow...@gmail.com>
> >> >> wrote:
> >> >>
> >> >> > Cluster-wide is good for storage that requires some sort of
> >> >> > organization at the host level, for example, mounted file systems
> >> >> > that rely on cluster locking, like OCFS, GFS, or cluster LVM, where
> >> >> > hosts that aren't in the cluster can't make use of the storage.
> >> >> > Xen's SRs are sort of like this as well, actually almost identical
> >> >> > to cluster LVM in that they carve volumes out of a pool or LUN,
> >> >> > leveraging locking mechanisms in the Xen cluster. Cluster-wide is
> >> >> > also good for topologies that are simply laid out in a way that
> >> >> > makes sense for it, for example if you had a 10G switch dedicated
> >> >> > to a particular cluster, with NFS services over it.
> >> >> >
> >> >> > It boils down to whether every host in the zone can access/make use
> >> >> > of the storage or whether only certain hosts can.
> >> >> > On Mar 8, 2013 6:04 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
> >> >> > wrote:
> >> >> >
> >> >> > > Hey Edison,
> >> >> > >
> >> >> > > It is entirely possible that zone-wide for my plug-in would make
> >> >> > > sense.  I'm trying to understand what restrictions, if any, are
> >> >> > > in place if it is zone-wide versus cluster-wide.
> >> >> > >
> >> >> > > In my case, the plug-in I'm developing will be creating an iSCSI
> >> >> > > target (volume/LUN) (nothing NFS related), and if that is best
> >> >> > > made available at a zone level, that is totally fine with me.
> >> >> > >
> >> >> > > What would you suggest for my situation?
> >> >> > >
> >> >> > > Thanks!
> >> >> > >
> >> >> > >
> >> >> > > On Fri, Mar 8, 2013 at 5:35 PM, Edison Su <edison...@citrix.com>
> >> >> > > wrote:
> >> >> > >
> >> >> > > > That API will be easy to add, and yes, I'll add it next week.
> >> >> > > >
> >> >> > > > In the last email, I just gave zone-wide primary storage as an
> >> >> > > > example, and I thought your storage box would be zone-wide? As
> >> >> > > > you can see, the createstoragepoolcmd API is quite flexible: it
> >> >> > > > can be used for zone-wide or cluster-wide storage, and so can
> >> >> > > > the storage plugin.
> >> >> > > >
> >> >> > > >
> >> >> > > > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >> >> > > > Sent: Friday, March 08, 2013 4:09 PM
> >> >> > > > To: Edison Su
> >> >> > > > Cc: cloudstack-dev@incubator.apache.org
> >> >> > > > Subject: Re: Making use of a 4.2 storage plug-in from the GUI
> >> >> > > > or API
> >> >> > > >
> >> >> > > >
> >> >> > > > OK, cool - thanks for the info, Edison.
> >> >> > > >
> >> >> > > > When you say, "One API is missing," does that mean you're still
> >> >> > > > working on implementing that functionality?
> >> >> > > >
> >> >> > > > Also, it sounds like these plug-ins are associated with
> >> >> > > > Zone-wide Primary Storage.  I thought Zone-wide Primary Storage
> >> >> > > > wasn't available for all hypervisors?
> >> >> > > >
> >> >> > > > This is from a different e-mail you sent out:
> >> >> > > >
> >> >> > > > "Xenserver and vmware doesn't support zone wide primary
> >> >> > > > storage, currently, this feature is only for NFS/Ceph in KVM.
> >> >> > > > And I think it should be useful for your storage box? I am
> >> >> > > > thinking per data volume per LUN for xenserver."
> >> >> > > >
> >> >> > > > I'm not sure how my plug-in would work with XenServer, VMware,
> >> >> > > > etc. if it has to be Zone-wide.
> >> >> > > >
> >> >> > > > Can you clarify this for me?
> >> >> > > >
> >> >> > > > Thanks!
> >> >> > > >
> >> >> > > > On Fri, Mar 8, 2013 at 4:33 PM, Edison Su <edison...@citrix.com>
> >> >> > > > wrote:
> >> >> > > >
> >> >> > > > One API is missing, liststorageproviderscmd, which will list
> >> >> > > > all the storage providers registered in the mgt server.
> >> >> > > >
> >> >> > > > When adding a zone-wide storage pool in the UI, the UI will
> >> >> > > > have a drop-down list showing all the primary storage
> >> >> > > > providers. The user will then choose one of them and select the
> >> >> > > > other parameters for the storage the user wants to add. At the
> >> >> > > > end, the UI will call createstoragepoolcmd with
> >> >> > > > provider=the-storage-provider-uuid returned from
> >> >> > > > liststorageproviderscmd, scope=zone, and other input
> >> >> > > > parameters. The mgt server will then call the corresponding
> >> >> > > > storage provider, based on the provider UUID, to register the
> >> >> > > > storage into CloudStack.
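The dispatch flow Edison describes could be sketched roughly as below; all class and method names are illustrative assumptions, not the actual mgt-server code:

```java
// Illustrative sketch (not actual mgt-server code): providers register
// themselves by UUID; createStoragePool dispatches to whichever
// provider the caller picked from the listStorageProviders result.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

interface StorageProvider {
    String getUuid();
    String createStoragePool(String scope, Map<String, String> params);
}

class StorageProviderRegistry {
    private final Map<String, StorageProvider> providers = new LinkedHashMap<>();

    void register(StorageProvider p) {
        providers.put(p.getUuid(), p);
    }

    // What the UI's drop-down would be populated from.
    List<String> listStorageProviders() {
        return new ArrayList<>(providers.keySet());
    }

    // Dispatch on the provider UUID chosen by the user.
    String createStoragePool(String providerUuid, String scope,
                             Map<String, String> params) {
        StorageProvider p = providers.get(providerUuid);
        if (p == null) {
            throw new IllegalArgumentException("Unknown provider: " + providerUuid);
        }
        return p.createStoragePool(scope, params);
    }
}
```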
> >> >> > > >
> >> >> > > >
> >> >> > > > From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >> >> > > > Sent: Friday, March 08, 2013 2:46 PM
> >> >> > > > To: cloudstack-dev@incubator.apache.org
> >> >> > > > Cc: Edison Su
> >> >> > > > Subject: Making use of a 4.2 storage plug-in from the GUI or API
> >> >> > > >
> >> >> > > >
> >> >> > > > Hi,
> >> >> > > >
> >> >> > > > As you may remember, I'm leveraging Edison's new (4.2) storage
> >> >> > > > plug-in framework to build what is probably the first such
> >> >> > > > plug-in for CloudStack.
> >> >> > > >
> >> >> > > > I was wondering, does anyone know how to make the system aware
> >> >> > > > of the plug-in?  I believe once the plug-in is ready (i.e.
> >> >> > > > usable), the intent is to be able to select it when creating
> >> >> > > > Primary Storage (instead of having to select a pre-existing
> >> >> > > > iSCSI target).
> >> >> > > >
> >> >> > > > I'm curious how to get this working (i.e. select my plug-in) in
> >> >> > > > the GUI and via the API.
> >> >> > > >
> >> >> > > > Thanks!
> >> >> > > >
> >> >> > > > --
> >> >> > > > *Mike Tutkowski*
> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > e: mike.tutkow...@solidfire.com
> >> >> > > > o: 303.746.7302
> >> >> > > > Advancing the way the world uses the cloud
> >> >> > > > <http://solidfire.com/solution/overview/?video=play> *™*
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> > >
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >>
> >> >
> >>
> >
> >
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solidfire.com/solution/overview/?video=play> *™*
