On 8/9/13 9:06 PM, Scott Devoid wrote:
Hi Nikolay and Patrick, thanks for your replies.

Virtual vs. Physical Resources
Ok, now I realize what you meant by "virtual resources," e.g. instances, volumes, networks... resources provided by existing OpenStack schedulers. In this case "physical resources" are actually one step more removed, since there are no interfaces to them in the user-level OpenStack APIs. If you make a physical reservation on "this rack of machines right here", how do you supply that reservation information to nova-scheduler? Probably via scheduler hints plus an availability zone or host aggregates. At which point you're really defining an instance reservation that includes explicit scheduler hints. Am I missing something?
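For illustration, a minimal sketch of what that would look like with python-novaclient; the zone name and the "reservation" hint key are made up, and the assumption is that an operator has already grouped the reserved rack into a host aggregate exposed as that availability zone:

    # Sketch only: "reserved-rack-42" and the "reservation" hint key are
    # hypothetical; a custom scheduler filter would have to consume the hint.
    from novaclient import client

    nova = client.Client("2", "user", "password", "tenant",
                         "http://keystone:5000/v2.0")

    server = nova.servers.create(
        name="reserved-instance",
        image="IMAGE_UUID",
        flavor="FLAVOR_ID",
        availability_zone="reserved-rack-42",        # pins the boot to the reserved hosts
        scheduler_hints={"reservation": "LEASE_ID"},  # extra hint for a custom filter
    )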

Hi Scott!
No, you're not missing anything. At least, that's how I see things working for host reservations. In fact, it is already partially addressed in Havana with https://wiki.openstack.org/wiki/WholeHostAllocation. What's missing is the ability to automate the creation and release of those pools based on a lease schedule.
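To make that concrete, here is a rough sketch of the missing automation, assuming pools are pclouds-style host groups; create_pcloud and release_pcloud are stand-ins for whatever API WholeHostAllocation ends up exposing:

    import time
    from datetime import datetime

    def run_lease_schedule(leases, create_pcloud, release_pcloud, poll=60):
        """Create a host pool when its lease starts, release it when it ends."""
        active = {}  # lease id -> pool handle
        while True:
            now = datetime.utcnow()
            for lease in leases:
                if lease["start"] <= now < lease["end"] and lease["id"] not in active:
                    active[lease["id"]] = create_pcloud(lease["hosts"])
                elif now >= lease["end"] and lease["id"] in active:
                    release_pcloud(active.pop(lease["id"]))
            time.sleep(poll)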
Thanks
Patrick
Eviction:
Nikolay, to your point that we might evict something that was already paid for: in the design I have in mind, this would only happen if the policies set up by the operator caused one reservation to be weighted higher than another reservation. Maybe because one client paid more? The point is that this would be configurable and the sensible default is to not evict anything.
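As a rough illustration of what I mean by configurable (none of this exists yet), eviction could be a pluggable weigher whose default never prefers one reservation over another:

    def default_weigher(reservation):
        # every reservation weighs the same, so nothing ever gets evicted
        return 0

    def pick_eviction_candidate(existing, incoming, weigher=default_weigher):
        """Return a reservation to evict for `incoming`, or None to refuse it."""
        cheapest = min(existing, key=weigher)
        if weigher(incoming) > weigher(cheapest):
            return cheapest   # e.g. an operator weigher that favours paying tenants
        return None           # sensible default: never evict anything already granted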


On Fri, Aug 9, 2013 at 8:05 AM, Nikolay Starodubtsev <nstarodubt...@mirantis.com> wrote:

    Hello, Patrick!

    We have several reasons to think this capability is valuable
    for virtual resources. Physical resources can be used in very
    different ways, which is why it is impossible to build base
    actions for them into the reservation service. But consider
    virtual reservations: imagine a user who wants to reserve a
    virtual machine. They already know everything about it - its
    parameters, its flavor, and how long it should be leased. In
    that case the user really wants the reserved virtual machine
    to be up and running (or at least booting) when the lease
    starts, and it would be great for the reservation service to
    offer that. We are thinking about base actions for virtual
    reservations supported by Climate, such as boot/delete for
    instances, create/delete for volumes, and create/delete for
    stacks; the same applies to IPs and other virtual resources.
    More complicated behaviour can be implemented in Heat. This
    will make reservations simpler for end users.

    Don't you think so?
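    To give an idea of the shape of these base actions (the names
    below are only illustrative, not the actual Climate plugin
    interface), we picture a per-resource-type mapping of
    lease-start and lease-end actions:

        BASE_ACTIONS = {
            "instance": {"on_start": "boot",   "on_end": "delete"},
            "volume":   {"on_start": "create", "on_end": "delete"},
            "stack":    {"on_start": "create", "on_end": "delete"},
        }

        def on_lease_event(resource_type, event, dispatch):
            """Run the base action for a lease 'start' or 'end' event."""
            action = BASE_ACTIONS[resource_type]["on_" + event]
            dispatch(resource_type, action)   # anything more complex goes to Heat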

    P.S. We also remember the problem you mentioned a few emails
    ago - how to guarantee that the user gets an already running
    and prepared host / VM / stack / etc. by the time the lease
    actually starts, not just "the lease begins and the
    preparation process begins too". We are working on it now.


    On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit
    <patrick.pe...@bull.net> wrote:

        Hi Nikolay,

        Relying on Heat for orchestration is obviously the right
        thing to do. But there is still something in your design
        approach that I have had difficulty comprehending from the
        beginning. Why do you keep thinking that orchestration and
        reservation should be treated together? That's adding
        unnecessary complexity IMHO. I just don't get it. Wouldn't
        it be much simpler, and sufficient, to say that there are
        pools of reserved resources that you create through the
        reservation service? Those pools could be of different
        types, i.e. host, instance, volume, network... whatever, if
        that's really needed. Those pools are identified by a
        unique id that you pass along when the resource is created.
        That's it. You know, the AWS reservation service doesn't
        even care about referencing a reservation when an instance
        is created. The association between the two just happens
        behind the scenes. That would work in all scenarios:
        manual, automatic, whatever... So why do you care so much
        about this in the first place?
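        Just to illustrate what I mean (this is not an existing
        API, and the hint key is made up): the client passes the id
        of a reserved pool along with the normal create call, and a
        scheduler filter does the association behind the scenes.

            from novaclient import client

            nova = client.Client("2", "user", "password", "tenant",
                                 "http://keystone:5000/v2.0")
            server = nova.servers.create(
                name="my-instance",
                image="IMAGE_UUID",
                flavor="FLAVOR_ID",
                scheduler_hints={"reservation_pool": "POOL_ID"},  # hypothetical key
            )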
        Thanks,
        Patrick

        On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:
        Patrick, responding to your comments:

        1) Dina mentioned "start automatically" and "start manually"
        only as examples of what these policies might look like. It
        doesn't seem correct to put orchestration functionality
        (which belongs to Heat) into Climate. That's why, for now,
        we can implement the basics, like starting a Heat stack,
        and for more complex actions we may later use something
        like the Convection (Task-as-a-Service) project.
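        As a minimal sketch of those basics (using python-heatclient;
        endpoint and token handling is elided), Climate would only
        kick off the stack at lease start and leave the actual
        orchestration to Heat:

            from heatclient.client import Client as HeatClient

            def start_reserved_stack(endpoint, token, lease_id,
                                     template, parameters=None):
                """Launch the stack attached to a lease; Heat orchestrates it."""
                heat = HeatClient("1", endpoint=endpoint, token=token)
                return heat.stacks.create(
                    stack_name="reserved-stack-%s" % lease_id,
                    template=template,
                    parameters=parameters or {},
                )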

        2) If we agree that Heat is the main consumer of
        Reservation-as-a-Service, we can agree that a lease may be
        created for one of the following scenarios (but not a
        combination of them); rough request sketches follow the
        list:
        - a Heat stack (with requirements on the stack's contents)
        as the resource to be reserved
        - a number of physical hosts (random ones, or filtered by
        certain characteristics)
        - a number of individual VMs, volumes, or IPs
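        Very roughly, the lease requests for the three scenarios
        might look like this (field names are only illustrative,
        not a settled Climate API):

            stack_lease = {"type": "stack",
                           "stack": {"template": "...", "parameters": {}},
                           "start": "2013-09-01T00:00Z", "end": "2013-09-08T00:00Z"}

            host_lease = {"type": "physical:host", "count": 4,
                          "filters": {"cpu_arch": "x86_64", "memory_mb": ">=65536"},
                          "start": "2013-09-01T00:00Z", "end": "2013-09-08T00:00Z"}

            vm_lease = {"type": "virtual:instance", "count": 10, "flavor": "m1.large",
                        "start": "2013-09-01T00:00Z", "end": "2013-09-08T00:00Z"}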

        3) Heat might be the main consumer of virtual reservations.
        If not, Heat will require development effort in order to
        support:
        - reservation of a stack
        - waking up a reserved stack
        - performing all the usual orchestration work

        We will support reservation of individual instances,
        volumes, IPs, etc., but the use case of giving the user an
        already working group of connected VMs, volumes, and
        networks seems to be the most interesting one.
        As for Heat autoscaling, reserving the maximum number of
        instances set in the Heat template (not the minimum value)
        has to be implemented in Heat. Some open questions remain,
        though - for example, updating a Heat stack when the user
        changes the template to allow a higher maximum number of
        running instances.

        4) As a user, I would of course want the configured
        hosts/stacks/etc. to be already up and running by the time
        the lease starts. But in reality we can't predict how long
        the preparation process will take for every single use
        case. So if you have an idea of how this should be
        implemented, it would be great if you shared your opinion.
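        One direction we are considering (only an idea at this
        point, not a design): record how long preparation took for
        past leases of the same resource type and start preparing
        that far ahead of the lease start, plus a safety margin:

            from datetime import timedelta

            def preparation_start(lease_start, past_prep_durations,
                                  margin=timedelta(minutes=10)):
                """When to begin preparing so resources are ready at lease_start."""
                if past_prep_durations:
                    worst_case = max(past_prep_durations)
                else:
                    worst_case = timedelta(minutes=30)  # arbitrary first-time default
                return lease_start - worst_case - margin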









--
Patrick Petit
Cloud Computing Principal Architect, Innovative Products
Bull, Architect of an Open World TM
Tél : +33 (0)4 76 29 70 31
Mobile : +33 (0)6 85 22 06 39
http://www.bull.com

