A major issue for current QoS providers is that a volume cannot exceed its
Max IOPS even when doing so is highly desirable and the storage system can
support it. That's why I'm thinking Burst IOPS will be a feature others
attempt to imitate.
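Conceptually, Burst can be modeled as a credit (token-bucket) scheme: a volume banks unused headroom while it runs below Max, and can later spend that credit to briefly exceed Max, up to Burst. Here is a minimal sketch of that idea in Python -- all names and numbers are hypothetical, and this illustrates the concept only, not SolidFire's actual algorithm:

```python
# Illustrative token-bucket model of min/max/burst IOPS.
# Hypothetical names; not SolidFire's actual implementation.

class QosPolicy:
    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # guaranteed floor (enforced by
                                      # admission control, not modeled here)
        self.max_iops = max_iops      # sustained ceiling
        self.burst_iops = burst_iops  # short-term ceiling above Max


class BurstBucket:
    """Bank unused headroom below Max; spend it to burst above Max."""

    def __init__(self, policy, max_credit):
        self.policy = policy
        self.max_credit = max_credit  # cap on banked credit
        self.credit = 0.0

    def allowed_iops(self, demanded_iops):
        p = self.policy
        if demanded_iops <= p.max_iops:
            # Running below Max: serve the demand and bank the headroom.
            self.credit = min(self.max_credit,
                              self.credit + (p.max_iops - demanded_iops))
            return demanded_iops
        # Demand exceeds Max: burst only while credit lasts,
        # and never above Burst.
        spend = min(self.credit,
                    demanded_iops - p.max_iops,
                    p.burst_iops - p.max_iops)
        self.credit -= spend
        return p.max_iops + spend
```

For example, with Min=100, Max=500, Burst=1000, a volume that spends one interval at 100 IOPS banks 400 credits, letting it later run as high as 800 IOPS until the credit is exhausted, after which it is capped at Max again.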


On Mon, Jun 10, 2013 at 2:44 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> My thinking is that Min and Max are industry standard and Burst is a new
> concept that could catch on.
>
>
> On Mon, Jun 10, 2013 at 2:29 PM, John Burwell <jburw...@basho.com> wrote:
>
>> Wei,
>>
>> In this case, we can have either the hypervisor or the storage providing
>> the quality of service guarantee.  Naively, it seems reasonable to
>> separate hypervisor and storage QoS parameters and enforce mutual
>> exclusion.  Not only does this approach simplify a whole range of
>> operations, it also avoids reconciling the data models.  The only
>> question I have about storage-provisioned IOPS is whether "Burst IOPS"
>> is an industry-standard or vendor-specific concept.
>>
>> Thanks,
>> -John
>>
>> On Jun 10, 2013, at 3:59 PM, Wei ZHOU <ustcweiz...@gmail.com> wrote:
>>
>> > Mike,
>> > I do not think users can select only one of them, as they are
>> implemented
>> > on different sides.
>> > Have you investigated the parameters other storage devices support
>> > besides min/max/burst IOPS? It would be best to include all possible
>> > fields in your implementation.
>> >
>> > What do you think about this?
>> > Hypervisor IOPS is fixed, and there is a drop-down box which includes
>> > all supported storage vendors.
>> > If users select "SolidFire", the min/max/burst IOPS fields will appear.
>> > If users select other vendors, the relevant fields will appear.
>> > Actually, I still maintain that it is better to add the storage-related
>> > fields in another table.
>> >
>> > -Wei
>> >
>> >
>> > 2013/6/10 Mike Tutkowski <mike.tutkow...@solidfire.com>
>> >
>> >> Here is my thinking:
>> >>
>> >> Two radio buttons (whatever we want to call them):
>> >>
>> >> 1) Hypervisor IOPS
>> >> 2) Storage IOPS
>> >>
>> >> Leave them both un-checked by default.
>> >>
>> >> If the user checks one or the other, the relevant fields appear.
>> >>
>> >>
>> >> On Mon, Jun 10, 2013 at 1:38 PM, Mike Tutkowski <
>> >> mike.tutkow...@solidfire.com> wrote:
>> >>
>> >>> What do you think, Wei?
>> >>>
>> >>> Should we come up with a way for only one feature (yours or mine) to
>> be
>> >>> used at a time on the new Disk Offering dialog?
>> >>>
>> >>> Since most storage-side provisioned-IOPS implementations don't break
>> >>> IOPS down into separate read and write categories, I think that's the
>> >>> way to go (only one feature or the other).
>> >>>
>> >>> Any suggestions from a usability standpoint on how we want to
>> >>> implement this?
>> >>> It could be as simple as a radio button to turn your feature on and
>> >>> mine off, or vice versa.
>> >>>
>> >>> Thanks!
>> >>>
>> >>>
>> >>> On Mon, Jun 10, 2013 at 1:33 PM, John Burwell <jburw...@basho.com>
>> >> wrote:
>> >>>
>> >>>> Mike,
>> >>>>
>> >>>> I agree -- I can't imagine a situation where you would want to use IOPS
>> >>>> provisioned by both the hypervisor and storage.  There are two
>> points of
>> >>>> concern -- the UI and the management server.  We have to ensure that
>> the
>> >>>> user can't create a VM from a compute/disk offering combination where
>> >>>> hypervisor-throttled I/O would conflict with storage-provisioned
>> >>>> IOPS.  I think this functional conflict must be resolved in the
>> >> management
>> >>>> server to ensure that API calls are properly validated with a UX that
>> >>>> avoids user confusion.  Have Wei and you worked out an approach to
>> >>>> resolving this conflict?
>> >>>>
>> >>>> Thanks,
>> >>>> -John
>> >>>>
>> >>>> On Jun 10, 2013, at 3:24 PM, Mike Tutkowski <
>> >> mike.tutkow...@solidfire.com>
>> >>>> wrote:
>> >>>>
>> >>>>> Wei has sent me the screen shots.
>> >>>>>
>> >>>>> I don't support Compute Offerings for 4.2, so that's not an issue
>> >> here.
>> >>>>>
>> >>>>> I do support Disk Offerings.
>> >>>>>
>> >>>>> It looks like Wei has added four new fields to the Disk Offering.
>> >>>>>
>> >>>>> I have added three (Min, Max, and Burst IOPS).
>> >>>>>
>> >>>>> We just need to decide if we should toggle between his and mine.
>> >>>>>
>> >>>>> I doubt a user would want to use both features at the same time.
>> >>>>>
>> >>>>>
>> >>>>> On Mon, Jun 10, 2013 at 12:30 PM, John Burwell <jburw...@basho.com>
>> >>>> wrote:
>> >>>>>
>> >>>>>> Mike,
>> >>>>>>
>> >>>>>> Have Wei and you figured this out at the system level as well
>> >>>>>> (e.g. allowing either storage-provisioned IOPS or hypervisor
>> >>>>>> throttling, but not both)?
>> >>>>>>
>> >>>>>> Thanks,
>> >>>>>> -John
>> >>>>>>
>> >>>>>> On Jun 10, 2013, at 2:12 PM, Mike Tutkowski <
>> >>>> mike.tutkow...@solidfire.com>
>> >>>>>> wrote:
>> >>>>>>
>> >>>>>>> Perhaps Wei could send me some screen shots of what he's changed
>> in
>> >>>> the
>> >>>>>> GUI
>> >>>>>>> for his feature?
>> >>>>>>>
>> >>>>>>> Thanks!
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Mon, Jun 10, 2013 at 11:56 AM, John Burwell <
>> jburw...@basho.com>
>> >>>>>> wrote:
>> >>>>>>>
>> >>>>>>>> Wei,
>> >>>>>>>>
>> >>>>>>>> Have Mike Tutkowski and you reconciled the potential conflict
>> >>>>>>>> between a throttled-I/O VM and a provisioned-IOPS volume?  If so,
>> >>>>>>>> what solution did you select?
>> >>>>>>>>
>> >>>>>>>> Thanks,
>> >>>>>>>> -John
>> >>>>>>>>
>> >>>>>>>> On Jun 10, 2013, at 1:54 PM, Wei ZHOU <ustcweiz...@gmail.com>
>> >> wrote:
>> >>>>>>>>
>> >>>>>>>>> Guys,
>> >>>>>>>>>
>> >>>>>>>>> I would like to merge disk_io_throttling branch into master.
>> >>>>>>>>> Please review the code on https://reviews.apache.org/r/11782
>> >>>>>>>>>
>> >>>>>>>>> If nobody objects, I will merge it into master in 72 hours.
>> >>>>>>>>>
>> >>>>>>>>> -Wei
>> >>>>>>>>>
>> >>>>>>>>> 2013/5/30 Wei ZHOU <ustcweiz...@gmail.com>
>> >>>>>>>>>
>> >>>>>>>>>> Hi,
>> >>>>>>>>>> I would like to merge disk_io_throttling branch into master.
>> >>>>>>>>>> If nobody objects, I will merge it into master in 48 hours.
>> >>>>>>>>>> The purpose is:
>> >>>>>>>>>>
>> >>>>>>>>>> Virtual machines run on the same storage device (local storage
>> >>>>>>>>>> or shared storage). Because of the rate limitations of the
>> >>>>>>>>>> device (such as IOPS), if one VM performs heavy disk operations,
>> >>>>>>>>>> it may affect the disk performance of other VMs running on the
>> >>>>>>>>>> same storage device.
>> >>>>>>>>>> It is necessary to set a maximum rate and limit the disk I/O of
>> >>>>>>>>>> VMs.
>> >>>>>>>>>>
>> >>>>>>>>>> The feature includes:
>> >>>>>>>>>>
>> >>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering, and global
>> >>>>>>>>>> configuration)
>> >>>>>>>>>> (2) change the maximum rate of VMs
>> >>>>>>>>>> (3) limit the disk rate (total bps and iops)
>> >>>>>>>>>> JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>> >>>>>>>>>> FS (I will update later):
>> >>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>> >>>>>>>>>> Merge checklist:
>> >>>>>>>>>>
>> >>>>>>>>>> * Did you check the branch's RAT execution success?
>> >>>>>>>>>> Yes
>> >>>>>>>>>>
>> >>>>>>>>>> * Are there new dependencies introduced?
>> >>>>>>>>>> No
>> >>>>>>>>>>
>> >>>>>>>>>> * What automated testing (unit and integration) is included in
>> >> the
>> >>>> new
>> >>>>>>>>>> feature?
>> >>>>>>>>>> Unit tests are added.
>> >>>>>>>>>>
>> >>>>>>>>>> * What testing has been done to check for potential
>> regressions?
>> >>>>>>>>>> (1) Set the bytes rate and IOPS rate in the CloudStack UI.
>> >>>>>>>>>> (2) VM operations, including
>> >>>>>>>>>> deploy, stop, start, reboot, destroy, expunge, migrate, restore
>> >>>>>>>>>> (3) Volume operations, including
>> >>>>>>>>>> attach, detach
>> >>>>>>>>>>
>> >>>>>>>>>> To review the code, you can try
>> >>>>>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7
>> >>>>>>>>>> f2e5591b710d04cc86815044f5823e73a4a58944
>> >>>>>>>>>>
>> >>>>>>>>>> Best regards,
>> >>>>>>>>>> Wei
>> >>>>>>>>>>
>> >>>>>>>>>> [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>> >>>>>>>>>> [2] refs/heads/disk_io_throttling
>> >>>>>>>>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301 <
>> >>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2071>
>> >>>>>>>>>> (CLOUDSTACK-1301 - VM Disk I/O Throttling)
>> >>>>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> --
>> >>>>>>> *Mike Tutkowski*
>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>> >>>>>>> e: mike.tutkow...@solidfire.com
>> >>>>>>> o: 303.746.7302
>> >>>>>>> Advancing the way the world uses the
>> >>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>> >>>>>>> *™*
>> >>>>>>
>> >>>>>>
>> >>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >>
>>
>>
>
>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
