Cool...thanks, Marcus.

So, how do you recommend I go about this?  Although I've got recent CS code
on my machine and I've built and run it, I've not yet made any changes.  Do
you know of any documentation I could look at to learn the process involved
in making CS changes?


On Thu, Jan 31, 2013 at 4:36 PM, Marcus Sorensen <shadow...@gmail.com> wrote:

> Yes, it would need to be a part of the compute offering separately,
> alongside the CPU/RAM and network limits. Then theoretically they could
> provision an OS drive with relatively slow limits, and a database volume
> with higher limits (and a higher price tag or something).
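As a rough sketch, per-offering hard QoS could be modeled as a pair of limits carried on the offering itself, next to the CPU/RAM fields. All names below are hypothetical illustrations, not CloudStack's actual offering model:

```python
from dataclasses import dataclass

@dataclass
class DiskOffering:
    """Hypothetical disk offering carrying hard QoS limits
    alongside the usual name/size/tag fields."""
    name: str
    min_iops: int  # guaranteed floor for the volume
    max_iops: int  # hard ceiling for the volume

    def __post_init__(self):
        # A floor above the ceiling makes no sense.
        if self.min_iops > self.max_iops:
            raise ValueError("min_iops must not exceed max_iops")

# e.g. a slow OS-drive offering and a faster database-volume offering
os_drive = DiskOffering("os-slow", min_iops=100, max_iops=300)
db_vol = DiskOffering("db-fast", min_iops=2000, max_iops=5000)
```

With something like this in place, the "slow OS drive, fast database volume" split Marcus describes is just two offerings with different limit pairs.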
>
> On Thu, Jan 31, 2013 at 4:33 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
> > Thanks for the info, Marcus!
> >
> > So, you are thinking that when the user creates a new Disk Offering, he
> > or she would be given the option of specifying Max and Min IOPS?  That
> > makes sense when I think of Data Disks, but how does that figure into
> > the kind of storage a VM Instance runs off of?  I thought the way that
> > works today is by specifying a Storage Tag in the Compute Offering.
> >
> > Thanks!
> >
> >
> > On Thu, Jan 31, 2013 at 4:25 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> >
> >> So, this is what Edison's storage refactor is designed to accomplish.
> >> Instead of the storage working the way it currently does, creating a
> >> volume for a VM would consist of the CloudStack server (or the volume
> >> service, as he has created it) talking to your SolidFire appliance,
> >> creating a new LUN, and using that. Then, instead of a giant pool/LUN
> >> that every VM shares, each VM has its own LUN, provisioned on the fly
> >> by CloudStack.
> >>
> >> It sounds like maybe this will make it into 4.1 (I have to go through
> >> my email today, but it sounded close).
> >>
> >> Either way, it would be a good idea to add basic IOPS and throughput
> >> limits to the disk offering; then, whether you implement them through
> >> cgroups on the Linux server, at the SAN level, or through some other
> >> means on VMware or Xen, the values are there to use.
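For the hypervisor-side option Marcus mentions, Linux/KVM already has a hook for this: libvirt exposes per-disk throttling through an `<iotune>` element in the domain XML (backed by QEMU block I/O throttling). A minimal sketch of generating that element from offering values follows; the glue function itself is illustrative, but `<iotune>`, `read_iops_sec`, and `write_iops_sec` are real libvirt element names:

```python
import xml.etree.ElementTree as ET

def build_iotune(read_iops, write_iops):
    """Build a libvirt <iotune> element capping a disk's IOPS.
    This element nests inside a <disk> definition in the domain XML."""
    iotune = ET.Element("iotune")
    ET.SubElement(iotune, "read_iops_sec").text = str(read_iops)
    ET.SubElement(iotune, "write_iops_sec").text = str(write_iops)
    return iotune

# Render the fragment a management layer would splice into the domain XML.
xml_str = ET.tostring(build_iotune(500, 250), encoding="unicode")
print(xml_str)
```

This only gives a ceiling, not a guaranteed floor, so it complements rather than replaces SAN-level enforcement of minimum IOPS.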
> >>
> >> On Thu, Jan 31, 2013 at 4:19 PM, Mike Tutkowski
> >> <mike.tutkow...@solidfire.com> wrote:
> >> > Hi everyone,
> >> >
> >> > A while back, I had sent out a question regarding storage quality of
> >> > service.  A few of you chimed in with some good ideas.
> >> >
> >> > Now that I have a little more experience with CloudStack (these past
> >> > couple weeks, I've been able to get a real CS system up and running,
> >> > create an iSCSI target, and make use of it from XenServer), I would
> >> > like to pose my question again, but in a more refined way.
> >> >
> >> > A little background:  I work for a data-storage company in Boulder,
> >> > CO called SolidFire (http://solidfire.com).  We build a highly
> >> > fault-tolerant, clustered SAN technology consisting exclusively of
> >> > SSDs.  One of our main features is hard quality of service (QoS).
> >> > You may have heard of QoS before.  In our case, we refer to it as
> >> > hard QoS because the end user has the ability to specify, on a
> >> > volume-by-volume basis, what the maximum and minimum IOPS for a
> >> > given volume should be.  In other words, we do not have the user
> >> > assign relative high, medium, and low priorities to volumes (the way
> >> > you might do with thread priorities), but rather hard IOPS limits.
> >> >
> >> > With this in mind, I would like to know how you would recommend I
> >> > go about enabling CloudStack to support this feature.
> >> >
> >> > In my previous e-mail discussion, people suggested using the Storage
> >> > Tag field.  This is a good idea, but does not fully satisfy my
> >> > requirements.
> >> >
> >> > For example, if I created two large SolidFire volumes (by the way,
> >> > one SolidFire volume equals one LUN), I could create two Primary
> >> > Storage types to map onto them.  One Primary Storage type could have
> >> > the tag "high_perf" and the other the tag "normal_perf".
> >> >
> >> > I could then create Compute Offerings and Disk Offerings that
> >> > referenced one Storage Tag or the other.
> >> >
> >> > This would guarantee that a VM Instance or Data Disk would run from
> >> > one SolidFire volume or the other.
> >> >
> >> > The problem is that one SolidFire volume could be servicing multiple
> >> > VM Instances and/or Data Disks.  This may not seem like a problem,
> >> > but it is, because in such a configuration our SAN can no longer
> >> > guarantee IOPS on a VM-by-VM basis (or a Data Disk-by-Data Disk
> >> > basis).  This is called the Noisy Neighbor problem.  If, for
> >> > example, one VM Instance starts getting "greedy," it can degrade the
> >> > performance of the other VM Instances (or Data Disks) that share
> >> > that SolidFire volume.
> >> >
> >> > Ideally, we would like to have a single VM Instance run on a single
> >> > SolidFire volume and a single Data Disk be associated with a single
> >> > SolidFire volume.
> >> >
> >> > How might I go about accomplishing this design goal?
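One way to picture the desired mapping is a provisioning path that creates a dedicated LUN per CloudStack volume, with the hard QoS settings attached to that LUN, rather than carving many volumes out of one shared LUN. Every class and method name below is a hypothetical stand-in, not the real SolidFire API or CloudStack plugin interface:

```python
class FakeSolidFireCluster:
    """Stand-in for the SAN: creates one LUN per request, each
    carrying its own hard QoS settings."""
    def __init__(self):
        self.luns = {}

    def create_lun(self, name, min_iops, max_iops):
        self.luns[name] = {"min_iops": min_iops, "max_iops": max_iops}
        return name

def provision_volume(cluster, volume_name, min_iops, max_iops):
    # One dedicated LUN per CloudStack volume: the SAN can then
    # enforce QoS per volume, avoiding the Noisy Neighbor problem
    # that a shared LUN creates.
    return cluster.create_lun(volume_name, min_iops, max_iops)

san = FakeSolidFireCluster()
provision_volume(san, "vm-42-root", min_iops=500, max_iops=1000)
provision_volume(san, "vm-42-data", min_iops=2000, max_iops=5000)
```

The point of the sketch is only the cardinality: two CloudStack volumes yield two LUNs, each with its own limits, instead of two tenants contending inside one.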
> >> >
> >> > Thanks!!
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkow...@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >>
> >
> >
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
