Hey guys, I think we're talking about a lot of good stuff here.
What do you think about us walking through Edison's Wiki document, updating it, and extending it to include the things we're talking about?

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+subsystem+2.0

We could do this over e-mail and via the Wiki, but it might be a good idea if we could all jump on the phone at the same time and walk through this. What do you guys think? I know we're all possibly in different timezones. :) I'm in Colorado, but I don't mind getting on the phone at an "odd" hour to talk this through.

Talk to you later!

On Thu, Mar 28, 2013 at 8:00 PM, Mike Tutkowski <[email protected]> wrote:
> Hey guys,
>
> I'm thinking through what Vladimir wrote below. My comments are inline.
>
> Thanks!
>
> On Thu, Mar 28, 2013 at 5:38 PM, Vladimir Popovski <[email protected]> wrote:
>> If I read what Edison mentioned properly, the idea was to provide as much freedom for the plugin as possible.
>>
>> The generic code for datastore creation and manipulation will be available, but will be optional. The plugin may decide to use it or do something else.
>>
>> We can consider two main scenarios for plugin operations - shared pools/datastores vs. dedicated. It looks like plugin behavior might be completely different between them.
>>
>> In the case of the shared pool/datastore:
>>
>> 1. At the Initialize (or attachCluster/attachZone) level, the plugin will call the target to create a LUN and attach it to the datastore.
>>
>> 2. At the createAsync/deleteAsync level, the plugin will operate with volumes from this pool only (not LUNs). Optionally, if there is not enough space in the pool, it may allocate another LUN (or expand the previous one).
>>
>> In the case of the dedicated pool:
>
> [Mike T.] This is where my main interest lies, but I am certainly interested in understanding more about the top scenario, as well.
>
>> 1. At the Initialize level (or the same attachCluster/attachZone), the plugin will establish a connection between source(s) and destination, but will not create any LUNs/datastores.
>
> [Mike T.] Let's say "attachCluster" is called on the plug-in. I assume if we are dealing with a VMware cluster that the IP address of vCenter is passed in? At this point, would we run our code to add our iSCSI target to the HBAs of each host in the cluster? If it is a XenServer cluster, I don't think this pre-configuration of HBAs is necessary, so we could do nothing in that case?
>
>> 2. At the createAsync level, it will create a LUN, create the corresponding datastore, and carve a volume from this datastore (size == size of the datastore).
>
> [Mike T.] This makes good sense to me. This is where we are told what kind of a LUN to create (size, for example) on our SAN. After creating said LUN, we can hook it up to the VMware or XenServer (etc.) cluster, creating a datastore or storage repository (etc.) that is based off of the LUN's iSCSI target. The datastore/storage repository would take up all of the space that the LUN provides.
>
>> It will really help if all datastore functions are provided by CS, but it will be up to the plugin to decide when to invoke them.
>
> [Mike T.] I agree...there is a lot of logic that would need to be duplicated among storage plug-ins if we don't put it in a library and make it accessible to them.
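To make the dedicated-pool flow and the shared-library idea above concrete, here is a rough sketch of how a plug-in's volume creation could split the work between array-specific code and a shared hypervisor helper. Every class and method name in it is an illustrative assumption, not existing CloudStack code:

// Sketch only: illustrates the split discussed above; none of these types exist in CloudStack.
public class DedicatedLunDriverSketch {

    /** Array-side operations a vendor plug-in already has (e.g. via the SolidFire API). */
    interface ArrayClient {
        /** Creates a LUN of the given size on the SAN and returns its iSCSI IQN. */
        String createLun(long sizeInBytes);
        /** Address of the array's iSCSI portal. */
        String getTargetIp();
    }

    /** The shared hypervisor helper library proposed in the thread. */
    interface HypervisorStorageHelper {
        /** Creates a datastore/SR/storage pool backed by the given iSCSI target and returns a reference to it. */
        String createPoolForIscsiTarget(String clusterId, String targetIp, String iqn, long sizeInBytes);
    }

    private final ArrayClient array;
    private final HypervisorStorageHelper helper;

    DedicatedLunDriverSketch(ArrayClient array, HypervisorStorageHelper helper) {
        this.array = array;
        this.helper = helper;
    }

    /** Rough shape of createAsync in the dedicated (one LUN per volume) case. */
    String createVolume(String clusterId, long sizeInBytes) {
        // 1. Array-specific: carve a LUN sized for this one volume.
        String iqn = array.createLun(sizeInBytes);
        // 2. Hypervisor-specific, delegated to the shared library: wrap the LUN in a
        //    datastore/SR that uses the whole LUN, as described in point 2 above.
        return helper.createPoolForIscsiTarget(clusterId, array.getTargetIp(), iqn, sizeInBytes);
    }
}

Only createLun() is vendor-specific here; everything behind createPoolForIscsiTarget() is the logic the thread is debating whether to put in a shared library or in the framework itself.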
>> Does it make sense?
>
> [Mike T.] I think this is making more sense. I'm still a bit unclear on how the plug-in behaves in a shared-LUN environment (where more than one VM or data disk lives on the LUN), but my main use case is dedicated service to VMs and data disks (one LUN per VM or data disk).
>
> Do either of you know of any diagrams that convey when each of the methods we implement in our plug-in is invoked?
>
>> Regards,
>>
>> -Vladimir
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Thursday, March 28, 2013 3:07 PM
>> *To:* Edison Su
>> *Cc:* Vladimir Popovski; [email protected]
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Hi Edison,
>>
>> I think I'm seeing a bit more where you're coming from.
>>
>> I guess I was under the impression that when a plug-in was invoked to create storage, the idea was always for that storage to be for a single VM or a single data disk.
>>
>> It sounds like the plug-in architecture, however, is being designed with more than that in mind?
>>
>> I'm not sure how this plug-in model would be used, though, if more than one VM can be assigned to a storage volume. Here's what I'm thinking:
>>
>> * User executes a compute offering.
>>
>> * Storage framework gets a volume from the storage plug-in.
>>
>> * VM is deployed to use the entire volume.
>>
>> By the way, when I say "volume" up there, I mean the same thing as LUN.
>>
>> How could this plug-in framework be used again to deploy another VM to the same volume? I don't understand that part (not that I plan on doing that, but I am curious).
>>
>> Thanks!
>>
>> On Thu, Mar 28, 2013 at 3:50 PM, Edison Su <[email protected]> wrote:
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Thursday, March 28, 2013 2:26 PM
>> *To:* Edison Su
>> *Cc:* Vladimir Popovski; [email protected]
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Hi Edison,
>>
>> Can you clarify what you mean here?
>>
>> [Edison] If there is HV-dependent storage code there (I assume there is some code that can be shared between different storage providers on the hypervisor side), we can generalize it and expose it.
>>
>> I think Vladimir and I are proposing that the storage framework be modified to only expect its plug-ins to write code that deals with their array (not hypervisor-related code). For example, the storage framework could call into my plug-in for a new volume, I would create it using the SolidFire API and return an IQN, and the storage framework would run the logic it needs to in order to, say, create a Datastore for VMware hosts based on that IQN.
>>
>> [Edison] If we do it that way, then we will enforce a per-datastore-per-IQN model, which seems to conflict with what Vladimir is talking about.
>>
>> The CloudStack mgt server will not enforce any kind of policy about how the volume is created or how the volume will be used by the hypervisor. We can share code through a library, instead of through the framework. For example, we can put your create-datastore code into the hypervisor resource code and add a new command, say CreateDatastoreCommand; the provider's driver code at the mgt server can then send that command to the hypervisor resource, whose handler calls the datastore-creation code to create a datastore.
>>
>> What do you think about that, Edison?
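For illustration, the command Edison describes might look roughly like the following. The class name and fields are assumptions made for this sketch, not an existing CloudStack command; in CloudStack it would extend the agent framework's Command base class and be handled by the VMware hypervisor resource:

// Sketch of a command carrying everything the hypervisor resource needs
// to build a datastore on top of one LUN. Illustrative only.
public class CreateDatastoreCommand {
    private final String storageHost;    // iSCSI portal address on the array
    private final String iqn;            // IQN of the LUN backing the datastore
    private final String datastoreName;  // name to give the new datastore
    private final long sizeInBytes;      // the datastore consumes the whole LUN

    public CreateDatastoreCommand(String storageHost, String iqn, String datastoreName, long sizeInBytes) {
        this.storageHost = storageHost;
        this.iqn = iqn;
        this.datastoreName = datastoreName;
        this.sizeInBytes = sizeInBytes;
    }

    public String getStorageHost() { return storageHost; }
    public String getIqn() { return iqn; }
    public String getDatastoreName() { return datastoreName; }
    public long getSizeInBytes() { return sizeInBytes; }
}

The provider's driver on the mgt server would populate this after creating the LUN and send it to the host's resource, so the datastore-creation logic lives in one place (the hypervisor resource) rather than in every storage plug-in.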
>> Either way, I'm working on writing code that creates a VMware Datastore, and we can decide where to place it (either in a utility shared by the storage plug-ins or in the CS storage framework).
>>
>> On Thu, Mar 28, 2013 at 2:12 PM, Edison Su <[email protected]> wrote:
>>
>> Comments embedded below.
>>
>> *From:* Vladimir Popovski [mailto:[email protected]]
>> *Sent:* Wednesday, March 27, 2013 12:22 PM
>> *To:* Mike Tutkowski; Edison Su
>> *Cc:* [email protected]
>> *Subject:* RE: Storage Subsystem 2.0 plugin docs
>>
>> Hi All,
>>
>> I was away for a couple of days, so sorry for the delay.
>>
>> I'm completely with Mike & John (& others) on separating storage plugins from hypervisor-related functions. If every plugin needs to implement hypervisor-related code, it will be a big mess.
>>
>> Our use case is very similar to Mike's: we would like to be able to provision volumes with different QoS characteristics directly to VMs, rather than adding them into "shared" datastores. It might be achieved in two ways:
>>
>> - either create a separate datastore for each volume (storage LUN), and from there create volumes and assign them to instances,
>>
>> - or assign devices recognized by iSCSI initiators directly to instances (I'm not sure whether that is possible in VMware without datastores).
>>
>> It looks like Mike started to work on the 1st approach. In this case, for every volume there will be a separate datastore. If this is the preferred direction for all storage plugins, then the hypervisor-specific logic of creating a datastore and creating/assigning volumes from the datastore will be fairly common to all storage plug-ins. At the same time, the storage plugin should have control over datastore management. It will be up to the plugin whether dedicated or shared datastores are created.
>>
>> For the 2nd option (skipping the datastore layer) there might be plenty of common code as well.
>>
>> So, my questions are:
>>
>> - Do you think it is beneficial to support both options in CS (or are we good with a potentially huge number of datastores)?
>>
>> [Edison] CloudStack will not enforce either of these options; it's all up to the provider's implementation. Either way is OK with me. Do you think, from an architecture point of view, the current storage API is enough for both options? If not, we can come up with some new APIs.
>>
>> - Do we all agree that HV-dependent storage code should be generic and that appropriate interfaces should be exposed?
>>
>> [Edison] If there is HV-dependent storage code there (I assume there is some code that can be shared between different storage providers on the hypervisor side), we can generalize it and expose it.
>>
>> As Mike said, the code dealing with storage pools on the hypervisor side can be shared.
>>
>> Regards,
>>
>> -Vladimir
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Wednesday, March 27, 2013 10:21 AM
>> *To:* Edison Su
>> *Cc:* [email protected]; Vladimir Popovski
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Sounds good, Edison.
>>
>> Last night I finished up code that uses the VI Java API to create a VMware Datastore.
>>
>> I want to test it a bit more before I have you look at it.
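For reference, the sequence Mike describes (add a static iSCSI target to a host's software HBA, rescan, then create a VMFS datastore on the discovered LUN) looks roughly like this with the VI Java (vijava) API. This is not Mike's actual code; the vCenter URL, HBA device name, target details, and the way the new LUN is picked out of the disk list are simplified placeholders:

import java.net.URL;
import com.vmware.vim25.HostInternetScsiHbaStaticTarget;
import com.vmware.vim25.HostScsiDisk;
import com.vmware.vim25.VmfsDatastoreCreateSpec;
import com.vmware.vim25.VmfsDatastoreOption;
import com.vmware.vim25.mo.Datastore;
import com.vmware.vim25.mo.HostDatastoreSystem;
import com.vmware.vim25.mo.HostStorageSystem;
import com.vmware.vim25.mo.HostSystem;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ServiceInstance;

public class CreateIscsiDatastoreSketch {
    public static void main(String[] args) throws Exception {
        ServiceInstance si = new ServiceInstance(
                new URL("https://vcenter.example.com/sdk"), "admin", "password", true);
        HostSystem host = (HostSystem) new InventoryNavigator(si.getRootFolder())
                .searchManagedEntity("HostSystem", "esxi-host-1");

        // 1. Statically register the LUN's iSCSI target on the host's software HBA, then rescan.
        HostStorageSystem hss = host.getHostStorageSystem();
        HostInternetScsiHbaStaticTarget target = new HostInternetScsiHbaStaticTarget();
        target.setAddress("10.0.0.5");
        target.setPort(3260);
        target.setIScsiName("iqn.2010-01.com.example:vol-1");
        hss.addInternetScsiStaticTargets("vmhba33", new HostInternetScsiHbaStaticTarget[] { target });
        hss.rescanAllHba();

        // 2. Create a VMFS datastore that uses the whole LUN. A real implementation
        //    would match the disk to the new LUN instead of taking the first one.
        HostDatastoreSystem hds = host.getHostDatastoreSystem();
        HostScsiDisk disk = hds.queryAvailableDisksForVmfs(null)[0];
        VmfsDatastoreOption[] options = hds.queryVmfsDatastoreCreateOptions(disk.getDevicePath());
        VmfsDatastoreCreateSpec spec = (VmfsDatastoreCreateSpec) options[0].getSpec();
        spec.getVmfs().setVolumeName("vol-1-datastore");
        Datastore ds = hds.createVmfsDatastore(spec);
        System.out.println("Created datastore: " + ds.getName());

        si.getServerConnection().logout();
    }
}

In a per-volume-per-LUN model the target registration would be repeated for every host in the cluster, which is exactly the piece the thread is proposing to share rather than re-implement in each plug-in.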
>> Today there is a Citrix CloudPlatform demo I'm participating in to handle part of the SolidFire section of the demo, so I might not have time to do my Datastore testing, but I should be done with it tomorrow.
>>
>> Talk to you later!
>>
>> On Wed, Mar 27, 2013 at 11:15 AM, Edison Su <[email protected]> wrote:
>>
>> For VMware, CloudStack currently doesn't create a VMware datastore through VMware's API; the admin needs to create the datastore in vCenter first, then add it into CloudStack. I am not familiar with how to create a VMware datastore through VMware's API, but regarding adding a new host into a cluster, the current framework lets a storage provider add a listener that listens for the add-host event.
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Tuesday, March 26, 2013 6:45 PM
>> *To:* Edison Su
>> *Cc:* [email protected]; Vladimir Popovski
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Great - thanks, Edison!
>>
>> I can take a look at that code.
>>
>> I've almost gotten the VMware code written.
>>
>> It's a little more involved than the XenServer code because you have to add static IQNs for discovery to each host in a VMware cluster (this is somehow handled behind the scenes, I suppose, with XenServer) before you can create a Datastore based on your iSCSI target.
>>
>> One thing I was wondering, though, is what happens when you add a new host to this VMware cluster. It will need to "inherit" the list of IQNs to discover. I imagine this is the case today. Do you know anything about that? I might just try it out and see if that works today.
>>
>> On Tue, Mar 26, 2013 at 5:18 PM, Edison Su <[email protected]> wrote:
>>
>> Thanks!
>>
>> FYI, there is some code in both the Xen and KVM hypervisor resource code to deal with storage pool creation.
>>
>> For example, CitrixResourceBase->getNfsSR or getIscsiSR creates an NFS or iSCSI SR, and LibvirtStorageAdaptor can create a storage pool in libvirt.
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Tuesday, March 26, 2013 1:52 PM
>> *To:* Edison Su
>> *Cc:* [email protected]; Vladimir Popovski
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Hi Edison,
>>
>> Sounds good.
>>
>> I already have code to create a XenServer Storage Repository (and optionally use CHAP credentials) and I'm working right now on creating a vSphere Datastore.
>>
>> When I have this working and in a nicer state, I can check in with you to review it and get comments.
>>
>> Once those two hypervisors are handled, I'll move on to KVM and OVM.
>>
>> Thanks!
>>
>> On Tue, Mar 26, 2013 at 2:33 PM, Edison Su <[email protected]> wrote:
>>
>> Yes, code is welcome!!! Currently only the interface on the management server side is defined. On the hypervisor resource side, we may need some kind of utility library or another plugin framework, as John proposed a few months ago.
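As context for the XenServer side (CitrixResourceBase.getIscsiSR, and the SR-with-CHAP code Mike mentions), creating an iSCSI SR through the XenServer Java bindings looks roughly like this. It is only a sketch: the host, LUN, and CHAP values are placeholders, and in real code the SCSIid is first discovered by probing the target, which is what getIscsiSR does:

import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Host;
import com.xensource.xenapi.SR;
import com.xensource.xenapi.Session;

public class IscsiSrSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the pool master (credentials are placeholders).
        Connection conn = new Connection(new URL("https://xenserver-master"));
        Session.loginWithPassword(conn, "root", "password", "1.3");

        Host host = Host.getByNameLabel(conn, "xenserver-master").iterator().next();

        // Device config for an lvmoiscsi SR; the SCSIid would come from the probe step.
        Map<String, String> deviceConfig = new HashMap<String, String>();
        deviceConfig.put("target", "10.0.0.5");                            // array's iSCSI portal
        deviceConfig.put("targetIQN", "iqn.2010-01.com.example:vol-1");
        deviceConfig.put("SCSIid", "scsi-id-from-probe");
        deviceConfig.put("chapuser", "chapUser");                          // optional CHAP credentials
        deviceConfig.put("chappassword", "chapSecret");

        Map<String, String> smConfig = new HashMap<String, String>();

        // Physical size 0: the SR's size is determined by the LUN itself.
        SR sr = SR.create(conn, host, deviceConfig, 0L, "vol-1-sr",
                "SR for one dedicated LUN", "lvmoiscsi", "user", true, smConfig);
        System.out.println("Created SR: " + sr.getUuid(conn));
    }
}

With shared=true, XenServer takes care of attaching the SR on the other hosts in the pool, which matches Mike's observation that the per-host target setup needed on VMware is handled behind the scenes here.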
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Monday, March 25, 2013 2:37 PM
>> *To:* Edison Su; [email protected]; Vladimir Popovski
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Hey Edison,
>>
>> So...if you think my understanding is correct (please check out the e-mail below), then I have a question.
>>
>> Do we really want to have the storage plug-ins taking on the responsibility of talking to the hypervisors to hook up the storage that they just created?
>>
>> I'm a bit familiar with how OpenStack does this, and it seems that it only has its storage plug-ins create a volume (LUN, whatever) and then the framework handles the process of dealing with the hypervisor in question to hook up the storage.
>>
>> It seems like we'd otherwise need to create a utility for all storage plug-ins to share, or they'd be duplicating effort in talking to hypervisors.
>>
>> What do you think?
>>
>> On Thu, Mar 21, 2013 at 7:52 PM, Mike Tutkowski <[email protected]> wrote:
>>
>> Hi Edison,
>>
>> I believe I understand the requirements for the plug-in better now.
>>
>> It sounds like the flow will be as such:
>>
>> * The user executes a Compute or Disk Offering that is tied via a storage tag to a Primary Storage that is associated with a plug-in.
>>
>> * The storage framework will ask the plug-in to create a volume. The plug-in will create a volume and hook the volume up to the appropriate hypervisor. For VMware, this means the plug-in will create a Datastore. For XenServer, this means the plug-in will create a Storage Repository. (So on and so forth for other hypervisors.)
>>
>> * The VM or data disk is then deployed to the hypervisor.
>>
>> Does that sound correct, Edison?
>>
>> Thanks!
>>
>> On Thu, Mar 21, 2013 at 5:44 PM, Edison Su <[email protected]> wrote:
>>
>> *From:* Mike Tutkowski [mailto:[email protected]]
>> *Sent:* Thursday, March 21, 2013 4:18 PM
>> *To:* Edison Su
>> *Subject:* Re: Storage Subsystem 2.0 plugin docs
>>
>> Hi Edison,
>>
>> I wanted to dive into these comments a bit more:
>>
>> [Edison] The plugin's driver->createAsync will be called when the mgt server wants to create a volume on the storage. In the driver's implementation, it can directly call the storage box's API, or send a command to the hypervisor host, which then calls the storage box's API to create an iSCSI LUN. Then create a datastore (for VMware), SR (for XenServer), or storage pool (for KVM) on the hypervisor host, based on the iSCSI IQN.
>>
>> If the volume is created from a template (for a root disk), we need to find a way to import that template (which is NFS-based currently; in the future it will be just a plain HTTP URL) into the root disk.
>>
>> The part about creating a datastore or a storage repository...is that something the plug-in will be responsible for doing, or will the storage framework cover that piece? I'm thinking the storage framework will, since all sorts of plug-ins would seem to need that ability (to have their storage hooked up to a datastore or a storage repository).
>>
>> [Edison] It's a specific requirement for the per-volume-per-LUN case, and it is specific to certain hypervisors (it seems KVM doesn't need to create a storage pool when using an iSCSI LUN), so the storage framework will not deal with it right now.
>>
>> Thanks for your time, Edison! :)
>>
>> On Thu, Mar 21, 2013 at 4:45 PM, Edison Su <[email protected]> wrote:
>>
>> Feedback/comments are appreciated; we need to know your input from the storage vendor point of view.
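On the KVM point above (whether a libvirt storage pool is even needed for an iSCSI LUN): for completeness, defining such a pool through the libvirt Java bindings, in the spirit of what LibvirtStorageAdaptor does for other pool types, looks roughly like this. The pool name, portal address, and IQN are placeholders:

import org.libvirt.Connect;
import org.libvirt.StoragePool;

public class KvmIscsiPoolSketch {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");

        // Transient iSCSI pool backed by one LUN's target; libvirt logs in to the
        // target and exposes its LUN(s) as volumes under /dev/disk/by-path.
        String poolXml =
            "<pool type='iscsi'>" +
            "  <name>vol-1-pool</name>" +
            "  <source>" +
            "    <host name='10.0.0.5'/>" +
            "    <device path='iqn.2010-01.com.example:vol-1'/>" +
            "  </source>" +
            "  <target><path>/dev/disk/by-path</path></target>" +
            "</pool>";

        StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
        pool.refresh(0);
        System.out.println("Pool active: " + pool.getName());
    }
}

Alternatively, as Edison notes, KVM can often skip the pool and use the LUN's block device directly, which is why the framework leaves this choice to the plug-in.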
>> *From:* Vladimir Popovski [mailto:[email protected]]
>> *Sent:* Thursday, March 21, 2013 11:52 AM
>> *To:* Edison Su; cloudstack
>> *Cc:* [email protected]
>> *Subject:* RE: Storage Subsystem 2.0 plugin docs
>>
>> Hi Edison,
>>
>> Thank you for the reply. We will check it out.
>>
>> Regards,
>>
>> -Vladimir
>>
>> *From:* Edison Su [mailto:[email protected]]
>> *Sent:* Thursday, March 21, 2013 11:36 AM
>> *To:* 'Vladimir Popovski'; cloudstack
>> *Cc:* [email protected]
>> *Subject:* RE: Storage Subsystem 2.0 plugin docs
>>
>> *From:* Vladimir Popovski [mailto:[email protected]]
>> *Sent:* Wednesday, March 20, 2013 9:05 AM
>> *To:* cloudstack
>> *Cc:* [email protected]; Edison Su
>> *Subject:* Storage Subsystem 2.0 plugin docs
>>
>> Hi All,
>>
>> Thank you for the great work on CloudStack! We are interested in integrating CS with our storage system and have started to look at your documentation and storage-related code. I see that Mike from SolidFire started working on something similar some time ago and Edison even created an empty plugin for it (in Nov '12?).
>>
>> We have a couple of questions related to that:
>>
>> - Is there any documentation about plugins (other than https://cwiki.apache.org/CLOUDSTACK/storage-subsystem-20.html)?
>>
>> [Edison] There isn't much documentation about the plugins other than the above link. See below.
>>
>> - Are there any exemplary plugins for primary & secondary datastores? Was the SolidFire plugin ever finished?
>>
>> [Edison] Yesterday I checked in some code that separates the existing CloudStack storage code into a standalone Maven project called cloud-plugin-storage-volume-default, which can give you an example of what a storage plugin will look like.
>>
>> - How to activate a new plugin and use it (at least through CLIs/APIs)?
>>
>> [Edison] First, put a bean configuration in client/tomcatconf/componentContext.xml.in for your plugin provider class, like:
>>
>> <bean id="ClassicalPrimaryDataStoreProvider"
>>       class="org.apache.cloudstack.storage.datastore.provider.CloudStackPrimaryDataStoreProviderImpl">
>> </bean>
>>
>> Second, when adding a data store into CloudStack, pass an extra parameter to createstoragepoolcmd: provider=your-provider-name. liststorageproviderscmd can list all the registered providers in the mgt server.
>>
>> - How to integrate it with the UI?
>>
>> [Edison] There is no example UI code for storage yet. The idea is to use the pluggable UI (https://cwiki.apache.org/confluence/display/CLOUDSTACK/UI+Plugin+Tutorial), since each storage provider may need a separate UI to add its storage. For example, in the add-primary-storage UI there will be a drop-down list showing all the registered providers; if the user selects one, the UI will pop up a dialog based on the provider's pluggable UI, and the user can type whatever information is needed for the storage (e.g. NFS server and NFS path, if it's NFS). At the end, the UI will call createstoragepoolcmd to register the storage with CloudStack.
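As a concrete example of the two activation steps Edison describes above, a hypothetical provider would get its own bean entry alongside the one shown, and would then be referenced by name when the pool is registered. The bean id/class, provider name, and CloudMonkey-style calls below are placeholders; check the exact createStoragePool parameters against the API build you are using:

<bean id="MyVendorPrimaryDataStoreProvider"
      class="org.apache.cloudstack.storage.datastore.provider.MyVendorPrimaryDataStoreProvider">
</bean>

list storageproviders
create storagepool zoneid=<zone-id> podid=<pod-id> clusterid=<cluster-id> name=MyVendorPool url=<provider-specific-url> provider=MyVendorProvider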
>> Thanks,
>>
>> -Vladimir
>>
>> -------
>>
>> Vladimir Popovski
>> VP, Cloud Operations
>> Zadara Storage
>> (949) 677-2095
>> [email protected]
>> www.zadarastorage.com

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: [email protected]
o: 303.746.7302
Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
*™*
