Oh, and while we're at it (adding features to disk offerings, that is), we should add an encryption flag (maybe there's already a feature request for this).
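To make that concrete, here's a toy sketch of the key lifecycle such a flag would imply. Nothing here is real CloudStack code: the class, method, and volume names are made up, and the keystream "cipher" is only a stand-in for whatever dm-crypt/LUKS on the host or native encryption on the SAN would actually do.

```python
# Hypothetical per-volume key service: generate a key, hand it to the
# disk layer while the volume is in use, discard it on delete.
import hashlib
import os


class VolumeKeyService:
    """Hypothetical key manager; CloudStack would persist keys securely."""

    def __init__(self):
        self._keys = {}

    def create_key(self, volume_id):
        self._keys[volume_id] = os.urandom(32)
        return self._keys[volume_id]

    def get_key(self, volume_id):
        # Needed if the customer/admin wants to download a backup.
        return self._keys[volume_id]

    def discard_key(self, volume_id):
        # Once the key is gone, the on-disk ciphertext is unreadable --
        # the "discarded disk" guarantee.
        del self._keys[volume_id]


def keystream_xor(key, data):
    """Toy stream cipher: SHA-256 in counter mode, XORed over the data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))


svc = VolumeKeyService()
key = svc.create_key("vol-1234")
ciphertext = keystream_xor(key, b"guest filesystem bytes")

# Unlock for the running VM (the same keystream decrypts)...
assert keystream_xor(svc.get_key("vol-1234"), ciphertext) == b"guest filesystem bytes"

# ...and on volume delete, just discard the key.
svc.discard_key("vol-1234")
```

The point of the sketch is that CloudStack's only job is the key lifecycle; the actual encryption lives in whatever is doing the disk management.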
The implementation details might be tricky, but I'd think that CloudStack could generate and manage the keys, which could then be passed to the SAN, to the Linux box that needs the volume, or to whatever else is doing the disk management. This would basically just ensure that CloudStack can start and use encrypted disks, and that when they are discarded the data is unreadable. While the VM is in use the disk must be unlocked anyway, so this wouldn't protect a live system. You'd also need to be able to provide the customer/admin with the key if they want to download a backup or something.

On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> So, this is what Edison's storage refactor is designed to accomplish.
> Instead of the storage working the way it currently does, creating a
> volume for a VM would consist of the CloudStack server (or volume
> service, as he has created) talking to your SolidFire appliance,
> creating a new LUN, and using that. Now, instead of a giant pool/LUN
> that each VM shares, each VM has its own LUN that is provisioned on
> the fly by CloudStack.
>
> It sounds like maybe this will make it into 4.1 (I have to go through
> my email today, but it sounded close).
>
> Either way, it would be a good idea to add a basic IO and throughput
> limit to the disk offering; then, whether you implement it through
> cgroups on the Linux server, at the SAN level, or through some other
> means on VMware or Xen, the values are there to use.
>
> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
>> Hi everyone,
>>
>> A while back, I had sent out a question regarding storage quality of
>> service. A few of you chimed in with some good ideas.
>>
>> Now that I have a little more experience with CloudStack (these past
>> couple of weeks, I've been able to get a real CS system up and
>> running, create an iSCSI target, and make use of it from XenServer),
>> I would like to pose my question again, but in a more refined way.
>>
>> A little background: I work for a data-storage company in Boulder, CO
>> called SolidFire (http://solidfire.com). We build a highly
>> fault-tolerant, clustered SAN technology consisting exclusively of
>> SSDs. One of our main features is hard quality of service (QoS). You
>> may have heard of QoS before. In our case, we refer to it as hard QoS
>> because the end user has the ability to specify, on a
>> volume-by-volume basis, what the maximum and minimum IOPS for a given
>> volume should be. In other words, we do not have the user assign
>> relative high, medium, and low priorities to volumes (the way you
>> might do with thread priorities), but rather hard IOPS limits.
>>
>> With this in mind, I would like to know how you would recommend I go
>> about enabling CloudStack to support this feature.
>>
>> In my previous e-mail discussion, people suggested using the Storage
>> Tag field. This is a good idea, but it does not fully satisfy my
>> requirements.
>>
>> For example, if I created two large SolidFire volumes (by the way,
>> one SolidFire volume equals one LUN), I could create two Primary
>> Storage types to map onto them. One Primary Storage type could have
>> the tag "high_perf" and the other the tag "normal_perf".
>>
>> I could then create Compute Offerings and Disk Offerings that
>> referenced one Storage Tag or the other.
>>
>> This would guarantee that a VM Instance or Data Disk would run from
>> one SolidFire volume or the other.
>>
>> The problem is that one SolidFire volume could be servicing multiple
>> VM Instances and/or Data Disks.
>> This may not seem like a problem, but it is, because in such a
>> configuration our SAN can no longer guarantee IOPS on a VM-by-VM
>> basis (or a data disk-by-data disk basis). This is called the Noisy
>> Neighbor problem. If, for example, one VM Instance starts getting
>> "greedy," it can degrade the performance of the other VM Instances
>> (or Data Disks) that share that SolidFire volume.
>>
>> Ideally, we would like to have a single VM Instance run on a single
>> SolidFire volume and a single Data Disk be associated with a single
>> SolidFire volume.
>>
>> How might I go about accomplishing this design goal?
>>
>> Thanks!!
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkow...@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the
>> cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
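For what it's worth, here's a toy sketch of the disk-offering idea from the quoted thread: carry min/max IOPS in the offering and hand them to whichever layer enforces them, either the SAN (one CloudStack volume per LUN, so per-volume QoS maps directly) or a host-side cgroup throttle. All class, field, and parameter names below are illustrative assumptions, not actual CloudStack classes or the real SolidFire API.

```python
# Hypothetical mapping from a disk offering with hard IOPS limits to the
# two enforcement points discussed above (SAN QoS vs. cgroups on a KVM
# host). Names are made up for illustration.
from dataclasses import dataclass


@dataclass
class DiskOffering:
    name: str
    size_gb: int
    min_iops: int  # hard QoS floor (only the SAN can guarantee this)
    max_iops: int  # hard QoS ceiling


def solidfire_volume_params(offering: DiskOffering) -> dict:
    # One CloudStack volume -> one LUN, so the offering's limits become
    # the volume's QoS settings (parameter names are assumptions).
    return {
        "totalSize": offering.size_gb * 1024**3,
        "qos": {"minIOPS": offering.min_iops, "maxIOPS": offering.max_iops},
    }


def cgroup_blkio_throttle(offering: DiskOffering, device: str) -> str:
    # Host-side alternative: the "major:minor iops" line you would write
    # to blkio.throttle.read_iops_device / write_iops_device. Note this
    # can only cap IOPS; it cannot guarantee a minimum.
    return f"{device} {offering.max_iops}"


gold = DiskOffering("high_perf", size_gb=100, min_iops=1000, max_iops=5000)
print(solidfire_volume_params(gold)["qos"])  # {'minIOPS': 1000, 'maxIOPS': 5000}
print(cgroup_blkio_throttle(gold, "8:16"))   # 8:16 5000
```

The sketch also shows why the Storage Tag approach falls short: tags can steer a volume to a pool, but only a per-volume LUN gives the SAN something to attach these limits to.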