Hi Scott,

Thanks for your input. Please see some comments below.
BR,
Patrick
On 8/6/13 6:58 PM, Scott Devoid wrote:
Some thoughts:

0. Should Climate also address the need for an eviction service? That is, a service that can weight incoming requests and existing resource allocations using some set of policies, and evict existing resource allocations to make room for a higher-weighted request. Eviction is necessary if you want to implement a Spot-like service. And if you want Climate reservations that do not tie physical resources to the reservation, this is also required to ensure that requests against the reservation succeed. (Note that even if you do tie physical resources, as in whole-host reservations, an eviction service can help when physical resources fail.)
Good point. We probably don't want to tie physical resources to a reservation until the lease becomes active.
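For illustration only, here is a minimal sketch of the kind of weight-based eviction described above. All names and the weighting policy are hypothetical, not part of any proposed Climate API:

```python
# Hypothetical sketch: pick the lowest-weight running allocations to evict
# so that an incoming, higher-weight request can be satisfied.

def plan_eviction(running, incoming_weight, needed_hosts):
    """Return allocation ids to evict, or None if the request must be refused.

    running: list of (allocation_id, weight, hosts) tuples.
    """
    # Only allocations strictly lighter than the incoming request are
    # candidates for eviction; evict the lightest first.
    candidates = sorted((a for a in running if a[1] < incoming_weight),
                        key=lambda a: a[1])
    evicted, freed = [], 0
    for alloc_id, _weight, hosts in candidates:
        if freed >= needed_hosts:
            break
        evicted.append(alloc_id)
        freed += hosts
    return evicted if freed >= needed_hosts else None

# Two light spot allocations are sacrificed for a heavier request.
evictable = plan_eviction(
    running=[("spot-1", 1, 2), ("spot-2", 1, 1), ("reserved-1", 10, 4)],
    incoming_weight=5, needed_hosts=3)
```

A real policy would of course also weigh reservation terms and failure handling; this only shows the all-or-nothing selection step.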

1. +1 Let end users continue to use existing APIs for resources and extend those interfaces with reservation attributes. Climate should only handle reservation CRUD and tracking.

2a. As an operator, I want the power to define reservations in terms of host capacity / flavor, min duration, max duration... and limit what kind of reservation requests can come in. Basically define "reservation flavors" and let users submit requests as instances of one "reservation flavor". If you let the end user define all of these parameters I will be rejecting a lot of reservation requests.
Sure. However, it is unclear what the current thinking is about creating host flavor types and extending the Nova API to support that case...? Meanwhile, I think the approach proposed in https://wiki.openstack.org/wiki/WholeHostAllocation of using pre-defined metadata in aggregates should work for categorizing host reservation flavors.
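As a sketch of what such operator-defined "reservation flavors" might look like, something along these lines could bound what users may request (the field names and structure here are invented for illustration, not a proposed schema):

```python
# Hypothetical "reservation flavor": the operator constrains what a
# reservation request may ask for; anything outside the bounds is rejected.
RESERVATION_FLAVORS = {
    "compute-small": {
        "max_hosts": 4,
        "min_duration_hours": 1,
        "max_duration_hours": 24 * 7,  # at most one week
    },
}

def validate_request(flavor_name, hosts, duration_hours):
    """Accept the request only if it fits the named flavor's bounds."""
    flavor = RESERVATION_FLAVORS.get(flavor_name)
    if flavor is None:
        return False
    return (hosts <= flavor["max_hosts"]
            and flavor["min_duration_hours"]
                <= duration_hours
                <= flavor["max_duration_hours"])
```

This matches the point above: the operator publishes a small set of flavors, and out-of-bounds requests are rejected up front rather than manually.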

2b. What's the point of an "immediate lease"? This should be equivalent to making the request against Nova directly, right? Perhaps there's a rationale for this w.r.t. billing? Otherwise I'm not sure what utility this kind of reservation provides.
Well, Amazon uses it as a business enabler for wholesale activities. From the end-user standpoint, it ensures that the resources are available for the duration of the lease. I think it is useful when your cloud has limited capacity and contention for it.

2c. Automatic vs. manual reservation approval:

    What a user wants to know is whether a reservation can be granted
    in an all-or-nothing manner at the time he requests the lease.


This is a very hard problem to solve: you have to model resource availability (MTTF, MTBF), resource demand (how full are we going to be), and bake in explicit policies (this tenant gets priority) to automatically grant / deny such reservations. Having reservations go through a manual request -> operator approval system is extremely simple and allows operators to tackle the automated case as they need to.
I agree, but I think what Dina was referring to when speaking of automatic vs. manual reservation is the ability to express whether or not the resource is started automatically by the reservation service. My point was that reservation and instantiation are two different and separate things, and so the specification of post-lease actions should not be restricted to that, if only because a reservation that is not started automatically by the reservation service could still be started automatically by something else, like auto-scaling.

All I need is a tool that lets a tenant spawn a single critical instance even when another tenant is running an application that's constantly trying to grab as many instances as it can get.

3. This will add a lot of complexity, particularly if you want to tackle #0.

5. (NEW) Note that Amazon's reserved instances feature doesn't tie reservations against specific instances. Effectively you purchase discount coupons to be applied at the end of the billing cycle. I am not sure how Amazon handles tenants with multiple reservations at different utilization levels (prioritize heavy -> light?).
Amazon knows how to handle a tenant's dedicated instances with reservations in the context of VPC. I'm not sure either how, or whether, it works at all when mixed with prioritization levels. That's tough!

~ Scott


On Tue, Aug 6, 2013 at 6:12 AM, Patrick Petit <patrick.pe...@bull.net> wrote:

    Hi Dina and All,
    Please see comments inline. We can drill down on the specifics
    off-line if that's more practical.
    Thanks in advance,
    Patrick

    On 8/5/13 3:19 PM, Dina Belova wrote:

    Hello, everyone!


    Patrick, Julien, thank you so much for your comments. As for the
    moments Patrick mentioned in his letter, I'll describe our vision
    for them below.


    1) Patrick, thank you for the idea! I think it would be great to
    add not only a 'post-lease actions policy', but also a 'start-lease
    actions policy'. I mean having two options for what can be done
    with a (virtual) resource on lease start: 'start VM automatically'
    or 'start VM manually'. This means the user may not use the
    reserved resources at all, if he needs such a behaviour.

    Something along those lines would work, but I think 'start VM
    manually' still over-specifies the behavior IMO, because it assumes
    that reserved resources are always started through the reservation
    service. The term 'manually' is misleading: if not automatically
    started by the reservation service, they can still be automatically
    started elsewhere, like in Heat. In general I agree that users can
    take advantage of being able to specify pre- and post-lease actions
    / conditions, although it shouldn't prescribe something binary like
    'start automatically' vs. 'start manually'. Another beneficial
    usage could be to send parametrized notifications. I would also
    make the pre and post actions optional, so that if the user chooses
    not to associate an action with the realization of a lease, he
    doesn't have to specify anything. Finally, I would suggest that the
    specification of a pre or post action come with a time offset, to
    take into account the lead time to provision certain types of
    resources, like physical hosts. That's a possible solution to
    point 4.
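To make the idea concrete, a lease specification with optional pre/post actions, each carrying a lead-time offset, might look roughly like this. The field names are purely illustrative, not a proposed schema:

```python
from datetime import datetime, timedelta

# Hypothetical lease spec: pre/post actions are optional, and the pre
# action carries an offset so slow-to-provision resources (e.g. physical
# hosts) can be prepared before the lease actually becomes effective.
lease = {
    "start": datetime(2013, 9, 1, 8, 0),
    "end": datetime(2013, 9, 8, 8, 0),
    "pre_action": {"type": "notify", "offset": timedelta(hours=4)},
    "post_action": None,  # optional: nothing happens at lease end
}

def action_trigger_time(lease):
    """When the pre-lease action should fire, if one is defined."""
    action = lease.get("pre_action")
    if action is None:
        return None
    # Fire ahead of the lease start by the declared lead-time offset.
    return lease["start"] - action["offset"]
```

Here the reservation service never decides *what* starts the resources; it only fires the declared action (a notification, a Heat call, nothing at all) at the offset-adjusted time.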


    2) We really believe that creating the lease first, and going with
    its id to all the OpenStack projects, is a better idea than
    'filling' the lease with resources at the moment of its creation.
    I'll try to explain why. First of all, for virtual reservations,
    we'll need to proxy the Nova, Cinder, etc. APIs through the
    Reservation API to reserve a VM, a volume, or something else. The
    workflow for VM/volume/etc. creation is really complicated, and in
    our opinion only the services written for it should do it. Second,
    this makes adding new reservations to an already-created lease
    simple and user-friendly. And the last point: this way we would not
    have to copy all the dashboard pages for instance/volume/...
    creation into the reservation Dashboard tab. As for physical
    reservations, as you mentioned, there is currently no way to
    'create' them like virtual resources in, for example, Nova's API.
    That's why there are two ways to solve this problem and reserve
    them. The first is to reserve them from the Reservation Service, as
    is implemented now and also described in our document (the WF-2b
    part of it). The second variant (which seems more elegant, but also
    more complicated) is to implement the needed parts as a Nova API
    extension, to let Nova do the things it does best: managing hosts,
    VMs, etc. Our concern here is not to do things that Nova (or any
    other service) can do much better.

    Well, I am under the impression that you put forward an
    argumentation that is mostly based on an implementation artifact,
    which takes advantage of the actual resource provisioning workflow
    and dashboard, rather than taking into account the most common use
    cases and practices. There may be use cases that mandate an
    iterative workflow similar to what you describe. I may be wrong,
    but I am doubtful it will be a common one. We tend to think of AWS
    as a reference, and you've probably noticed that reservations in
    AWS are made in chunks (the more I reserve, and for the longer
    period of time, the cheaper). The problem with adding reservations
    to a lease on a continuous basis is that, as a user, I may end up
    undoing what I have done (e.g. I got only 900 out of the 1000 VMs
    I want) and keep trying forever. That's potentially a lot of
    overhead.

    Also, as a cloud operator, I'd like to know what my reservation
    pipeline looks like ahead of time so that I can provision new
    hardware in due time. That's capacity planning. As an operator, I
    also want to be able to grant reservations, and charge for them,
    even if I don't have the capacity right now, provided the lead
    time to provision new hardware doesn't conflict with the terms of
    the pending leases. If a user can add reservations to a lease at
    the last moment, that flexibility may be compromised.

    In any event, this is how we envision the behavior of the
    reservation service for the reservation of physical capacity, and
    so it is important that the service API can support that
    interaction model. I think it's probably okay to do it in two
    separate steps: 1) create the lease, 2) add reservations (although
    that seems problematic in the case of an immediate lease). But the
    actual host reservation request should include a cardinality
    factor, so that if the user wants to reserve x hosts in one chunk,
    he can do it. The reservation service would respond yes or no
    depending on the three possible lease terms (immediate, best
    effort, and scheduled) along with the operator's specific
    reservation policies, which have yet to be made configurable one
    way or another. To be discussed...
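The all-or-nothing grant with a cardinality factor could be sketched as follows. The capacity model here is deliberately naive (free hosts per day of the window) and all names are illustrative:

```python
# Hypothetical all-or-nothing check: a request for N hosts over a window
# is granted only if N hosts are free for the *entire* window; otherwise
# it is refused outright, so the user never ends up with a partial grant
# (e.g. 900 out of 1000) that he then has to undo or retry.
def grant_reservation(free_hosts_by_day, requested_hosts, days):
    """free_hosts_by_day: hosts free on each day of the requested window."""
    if len(free_hosts_by_day) < days:
        return False  # the window extends beyond known capacity
    return all(free >= requested_hosts
               for free in free_hosts_by_day[:days])
```

A real implementation would also fold in the lease term (immediate, best effort, scheduled) and operator policy, including pipelined hardware that will arrive before the lease starts; the point is only that the yes/no answer covers the whole chunk at once.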


    3) We completely agree with you! Our 'nested reservation' vision
    was created only to give the user the opportunity to check the
    reservation status of complex virtual resources (stacks) by
    checking the status of all their 'nested' components, like VMs,
    networks, etc. This can be done just as well by using Heat without
    the reservation service. Now we are thinking of a reservation as
    the reservation of an OpenStack resource that has an ID in the
    OpenStack service DB, no matter how complex it is (a VM, network,
    floating IP, stack, etc.)

    I am not sure I am getting this...? All I wanted to say is that
    orchestration is a pretty big deal, and my recommendation is not
    to do any of this at all in the reservation service but to rely on
    Heat instead when possible. I understand you seem to agree with
    this... Also, I am not sure how you can do stack reservations on
    the basis of a Heat template when it has auto-scaling groups.


    4) We were thinking of the Reservation Scheduler as a service that
    controls the lease life cycle (starting, ending, sending user
    notifications, etc.) and communicates with the Reservation Manager
    via RPC. The Reservation Manager can send user notifications about
    an approaching lease end using Ceilometer (this question has to be
    researched). As for the time needed to set up a physical
    reservation or a complex virtual one (preparations and settings),
    I think it would be better for the user to amortize it within the
    lease period, because for physical resources it depends heavily on
    the hardware, and for virtual ones on the hardware, network, and
    geo-location of the DCs.

    Do you mean make the user aware of the provisioning lead time in
    the lease schedule? How do you suggest they know how to account
    for that? In practice, a lease is a contract, and so the
    reservations must be available at the exact time the lease becomes
    effective.


    Thank you,

    DIna.



    On Mon, Aug 5, 2013 at 1:22 PM, Julien Danjou <jul...@danjou.info> wrote:

        On Fri, Aug 02 2013, Patrick Petit wrote:

        > 3. The proposal specifies that a lease can contain a combo of
        >    different resource type reservations (instances, volumes,
        >    hosts, Heat stacks, ...) that can even be nested, and that
        >    the reservation service will somehow orchestrate their
        >    deployment when the lease kicks in. In my opinion, many
        >    use cases (at least ours) do not warrant that level of
        >    complexity, and so, if that's something that is needed to
        >    support your use cases, then it should be delivered as a
        >    module that can be loaded optionally in the system. Our
        >    preferred approach is to use Heat for deployment
        >    orchestration.

        I agree that this is not something Climate should be in charge
        of. If the user wants to reserve a set of services and deploy
        them automatically, Climate should provide the lease and Heat
        the deployment orchestration. Also, for example, it may be
        good to be able to automatically reserve the right amount of
        resources needed to deploy a Heat stack via Climate.

        --
        Julien Danjou
        // Free Software hacker / freelance consultant
        // http://julien.danjou.info

        _______________________________________________
        OpenStack-dev mailing list
        OpenStack-dev@lists.openstack.org
        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
    Best regards,

    Dina Belova

    Software Engineer

    Mirantis Inc.



--
    Patrick Petit
    Cloud Computing Principal Architect, Innovative Products
    Bull, Architect of an Open World TM
    Tél : +33 (0)4 76 29 70 31
    Mobile : +33 (0)6 85 22 06 39
    http://www.bull.com





