I'm totally on board with this as a future revision of the OS API. However, it
sounds like we need some sort of solution for 1.1.

> 1. We can't treat the InstanceID as a ReservationID since they do two different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the
> caller supposed to know if they're getting back a ReservationID or an
> InstanceID? How do they ask for updates for each (one returns a single value,
> one returns a list)?

Rather than overloading the two, could we just make the instance-id an
asynchronously assigned UUID and pare down the amount of info returned in the
server create response?
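Very roughly, and purely as a sketch (the field names here are made up, not the
actual OS API schema), I'm picturing something like this: the create call
assigns a UUID up front and returns only that, with everything else filled in
asynchronously and fetched later.

```python
import uuid

def create_server(request):
    """Hypothetical sketch: assign the instance a UUID immediately and
    return a pared-down response. The rest of the instance details are
    filled in asynchronously and retrieved with a later GET."""
    instance_id = str(uuid.uuid4())  # usable right away, no DB round-trip
    # ... enqueue the actual provisioning work for `request` here ...
    return {"server": {"id": instance_id, "status": "BUILD"}}
```

The caller gets a stable handle back instantly, and polling for details becomes
a separate, cheap operation.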

> 2. We need to handle "provision N instances" so the scheduler can effectively
> load balance the requests by looking at the current state of the system in a
> single view. Concurrent single-shot requests would be picked up by many
> different schedulers in many different zones and give an erratic distribution.

Are we worried about concurrent or rapid sequential requests?

Is there any way we could cut down on the erratic distribution by funneling
these types of requests through a smaller set of schedulers? I'm very
unfamiliar with the scheduler system, but it seems like routing choices at a
higher-level scheduler could help here.
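To make that concrete, here's a toy sketch (entirely hypothetical, I haven't
looked at how schedulers are actually selected) of hashing a reservation or
project key to pick one scheduler, so a burst of single-shot requests for the
same reservation all land on a single scheduler with one view of the system:

```python
import hashlib

def pick_scheduler(reservation_key, schedulers):
    """Hypothetical sketch: route every request carrying the same
    reservation/project key to the same scheduler, so rapid sequential
    or concurrent requests are placed with one consistent view rather
    than being scattered across many schedulers in many zones."""
    digest = hashlib.sha256(reservation_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(schedulers)
    return schedulers[index]
```

Requests with different keys still spread across the pool, so this wouldn't
turn the whole system into a single bottleneck.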

3. and 4. sound like great features, albeit ones that could wait for a future
revision of the API.
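On point 4, my rough understanding of the capability-filter idea (hypothetical
capability keys here; the real mechanism is whatever
nova/tests/scheduler/test_host_filter.py exercises) is something like matching
each host's advertised capabilities against the request:

```python
def filter_hosts(hosts, required_caps):
    """Hypothetical sketch of capability-based host filtering: keep only
    the hosts whose advertised capabilities satisfy every requested
    key/value, rather than cluttering the Flavor table with
    most-common-denominator fields."""
    return [
        host for host, caps in hosts.items()
        if all(caps.get(key) == value for key, value in required_caps.items())
    ]
```

The nice property is that new capabilities ("has a blue sticker on the box")
are just new key/value pairs, with no schema change anywhere.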

Apologies if I'm speaking out of turn and should just read up on scheduler code!


"Sandy Walsh" <sandy.wa...@rackspace.com> said:

> Cool, I think you all understand the concerns here:
> 
> 1. We can't treat the InstanceID as a ReservationID since they do two different
> things. InstanceID's are unique per instance and ReservationID's might span N
> instances. I don't like the idea of overloading these concepts. How is the
> caller supposed to know if they're getting back a ReservationID or an
> InstanceID? How do they ask for updates for each (one returns a single value,
> one returns a list)?
> 
> 2. We need to handle "provision N instances" so the scheduler can effectively
> load balance the requests by looking at the current state of the system in a
> single view. Concurrent single-shot requests would be picked up by many
> different schedulers in many different zones and give an erratic distribution.
> 
> 3. As Soren pointed out, we may want certain semantics around failure such as
> "all or nothing".
> 
> 4. Other Nova users have mentioned a desire for instance requests such as "has
> GPU, is in North America and has a blue sticker on the box". If we try to do
> that with Flavors we need to clutter the Flavor table with
> most-common-denominator fields. We can handle this now with Zone/Host
> Capabilities and not have to extend the table at all. If you look at
> nova/tests/scheduler/test_host_filter.py you'll see an example of this in
> action. To Soren's point about "losing the ability to rely on a fixed set of
> topics in the message queue for doing scheduling", this is not the case; there
> are no new topics introduced. Instead there are simply extra arguments passed
> into the run_instance() method of the scheduler that understands these more
> complex instance requests.
> 
> That said, I was thinking of adding a POST /zone/server command to support
> these extended operations. It wouldn't affect anything currently in place and
> makes it clear that this is a zone-specific operation. Existing EC2 and core
> OS API operations are performed as usual.
> 
> Likewise, we need a way to query the results of a Reservation ID request
> without busting GET /servers/detail ... perhaps GET /zones/servers could do
> that?
> 
> The downside is that now we have two ways to create an instance that need to
> be tested, etc.
> 
> -S
> 
> 
> 
> 



_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
