Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Jay Pipes

On 07/29/2016 04:45 PM, Chris Dent wrote:

On Fri, 29 Jul 2016, Jay Pipes wrote:

On 07/29/2016 02:31 PM, Chris Dent wrote:

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API.
All that matters is the UUID identifiers for aggregates and resource
providers.

So, add a new aggregates table in the placement DB that simply
contains an autoincrementing ID and a uuid column, and insert into
that table when the placement API receives a request to associate a
resource provider with an aggregate whose UUID the placement DB
doesn't yet have a record of.


Are you thinking that to mean:

1 Use a different name for the table than 'aggregates', also create
  it in the API db, and use the same code whether or not the system
  is configured to use a separate placement db.


No, such a table already exists in the API database and will continue to 
exist there.


We will want an aggregates table in the placement DB as well. For now, 
all it will store is the UUID identifier of the aggregate in the Nova 
API database.



or

2 Only add the table in the placement DB and conditionally modify
  the SQL

These both have their weaknesses. 1 duplicates some data, 2
complicates the code.

Given "All that matters is the UUID identifiers for aggregates and
resource providers" why not stick uuids in resource_provider_aggregates
(whichever database it is in) and have the same code and same
schema? The current resource_provider_aggregates won't have anything
in it, will it?


Because integer keys are a whole lot faster and more efficient than 
CHAR(36) keys. :)



Or do we need three tables (resource provider, resource provider
aggregates, something with a name close to aggregates) in order to
be able to clam shell? If that's the case I'd prefer option 1.


Well, the clam shell join actually doesn't come into play with this 
aggregates table in the placement DB. The aggregates table in the 
placement DB will do nothing other than look up the 
internal-to-the-placement-DB integer ID of the aggregate given a UUID value.
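
For reference, the clam shell join is resource_provider_aggregates
joined back to itself: all the providers sharing an aggregate with a
given provider. A sketch, with placeholder internal ids:

SELECT DISTINCT rpa2.resource_provider_id
FROM resource_provider_aggregates rpa1
JOIN resource_provider_aggregates rpa2
  ON rpa2.aggregate_id = rpa1.aggregate_id
WHERE rpa1.resource_provider_id = 42     -- placeholder id
  AND rpa2.resource_provider_id != 42;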


So, literally, all we need in the placement DB is this:

CREATE TABLE aggregates (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  uuid CHAR(36) NOT NULL,
  UNIQUE INDEX (uuid)
);
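
To make that concrete, a sketch of what the placement API would do on
an association request, assuming MySQL syntax, placeholder UUIDs, and
the existing resource_providers table with its uuid column:

-- Ensure the aggregate's UUID is known; the unique index on uuid
-- makes this safe to repeat.
INSERT IGNORE INTO aggregates (uuid)
  VALUES ('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa');

-- Resolve both UUIDs to internal integer ids and record the
-- association.
INSERT INTO resource_provider_aggregates
  (resource_provider_id, aggregate_id)
VALUES (
  (SELECT id FROM resource_providers
   WHERE uuid = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb'),
  (SELECT id FROM aggregates
   WHERE uuid = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa')
);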

Best,
-jay



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Chris Dent

On Fri, 29 Jul 2016, Jay Pipes wrote:

On 07/29/2016 02:31 PM, Chris Dent wrote:

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API. All that 
matters is the UUID identifiers for aggregates and resource providers.


So, add a new aggregates table in the placement DB that simply contains an
autoincrementing ID and a uuid column, and insert into that table when the
placement API receives a request to associate a resource provider with an
aggregate whose UUID the placement DB doesn't yet have a record of.


Are you thinking that to mean:

1 Use a different name for the table than 'aggregates', also create
  it in the API db, and use the same code whether or not the system
  is configured to use a separate placement db.

or

2 Only add the table in the placement DB and conditionally modify
  the SQL

These both have their weaknesses. 1 duplicates some data, 2
complicates the code.

Given "All that matters is the UUID identifiers for aggregates and
resource providers" why not stick uuids in resource_provider_aggregates
(whichever database it is in) and have the same code and same
schema? The current resource_provider_aggregates won't have anything
in it, will it?

Or do we need three tables (resource provider, resource provider
aggregates, something with a name close to aggregates) in order to
be able to clam shell? If that's the case I'd prefer option 1.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Jay Pipes

On 07/29/2016 02:31 PM, Chris Dent wrote:

On Thu, 28 Jul 2016, Jay Pipes wrote:

The decision at the mid-cycle was to add a new
placement_sql_connection configuration option to the nova.conf. The
default value would be None which would mean the code in
nova/objects/resource_provider.py would default to using the API
database setting.


I've been working on this with Roman Podoliaka. We've made some
reasonable progress but I'm hitting a bump in the road that we
may wish to make a decision about sooner rather than later. I
mentioned this before but forgot to flag it as actually important,
and it got lost in the Sturm und Drang.

When resource providers live in the api database they will be in
there with the aggregates table and the resource_provider_aggregates
table, which looks essentially like this:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id INTEGER NOT NULL,
PRIMARY KEY (resource_provider_id, aggregate_id)
);

This will make great sense: we can join across it to the aggregates
table to get the aggregates or aggregate uuids associated with a
resource provider.
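
That join, sketched with a placeholder internal id (assuming the
aggregates table keys on an integer id and carries a uuid column):

SELECT a.uuid
FROM resource_provider_aggregates rpa
JOIN aggregates a ON a.id = rpa.aggregate_id
WHERE rpa.resource_provider_id = 42;     -- placeholder internal id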

If we use a separate placement db for resource providers there's as
yet no aggregate table to join with across that
resource_provider_aggregates table.

To deal with this do we:

* Give up for now on the separate placement_sql_connection?


No.


* Change resource_provider_aggregates to:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id VARCHAR(36) NOT NULL, -- a uuid
PRIMARY KEY (resource_provider_id, aggregate_id)
);


Also no.


  in the migrations and models used by both the api and placement
  dbs?

  This could work because, as I recall, what we really care about is
  that some resource providers are aggregated with some other
  resource providers, not the details of the Aggregate object.

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API. All 
that matters is the UUID identifiers for aggregates and resource providers.


So, add a new aggregates table in the placement DB that simply contains 
an autoincrementing ID and a uuid column, and insert into that table when 
the placement API receives a request to associate a resource provider 
with an aggregate whose UUID the placement DB doesn't yet have a record of.


Best,
-jay


* Hoops I don't want to think about for aggregates in both tables?

* Some other solution I'm not thinking of.

* Actually you're wrong Chris, this isn't an issue because [please
  fill in the blank here].

A few of these seem rather less than great.








Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Chris Dent

On Thu, 28 Jul 2016, Jay Pipes wrote:
The decision at the mid-cycle was to add a new placement_sql_connection 
configuration option to the nova.conf. The default value would be None which 
would mean the code in nova/objects/resource_provider.py would default to 
using the API database setting.


I've been working on this with Roman Podoliaka. We've made some
reasonable progress but I'm hitting a bump in the road that we
may wish to make a decision about sooner rather than later. I
mentioned this before but forgot to flag it as actually important,
and it got lost in the Sturm und Drang.

When resource providers live in the api database they will be in
there with the aggregates table and the resource_provider_aggregates
table, which looks essentially like this:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id INTEGER NOT NULL,
PRIMARY KEY (resource_provider_id, aggregate_id)
);

This will make great sense: we can join across it to the aggregates
table to get the aggregates or aggregate uuids associated with a
resource provider.

If we use a separate placement db for resource providers there's as
yet no aggregate table to join with across that
resource_provider_aggregates table.

To deal with this do we:

* Give up for now on the separate placement_sql_connection?

* Change resource_provider_aggregates to:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id VARCHAR(36) NOT NULL, -- a uuid
PRIMARY KEY (resource_provider_id, aggregate_id)
);

  in the migrations and models used by both the api and placement
  dbs?

  This could work because, as I recall, what we really care about is
  that some resource providers are aggregated with some other
  resource providers, not the details of the Aggregate object.

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.

* Hoops I don't want to think about for aggregates in both tables?

* Some other solution I'm not thinking of.

* Actually you're wrong Chris, this isn't an issue because [please
  fill in the blank here].

A few of these seem rather less than great.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Jay Pipes

On 07/28/2016 02:10 PM, Chris Dent wrote:

On Thu, 28 Jul 2016, Jay Pipes wrote:


* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.


The decision at the mid-cycle was to add a new
placement_sql_connection configuration option to the nova.conf. The
default value would be None which would mean the code in
nova/objects/resource_provider.py would default to using the API
database setting.


Roger that. I was pretty sure that was what we decided but wanted to
confirm since, unless I'm mistaken, it is a considerable change.

As I understand things it means:

* integrating however much of Roman's WIP at
  https://review.openstack.org/#/c/342384/ is required (we need our
  own copies of the models and migrations and a manage script to do
  a db-sync, yes?)
* adding the config setting
* doing the creation of the correct transaction context dependent on
  that config
* adding the new db into the existing nova.fixtures so the tests can work
* reno note


The above matches my understanding and expectations, yes.


Do we want to test against both configurations?


Not sure. If you're asking whether we should have separate gate jobs 
that pass None and a not-None-not-same-as-API-DB value for 
placement_sql_connection, I don't think that's necessary. A single 
functional test that sets placement_sql_connection to a 
not-None-not-API-DB value and verifies that data is written to a 
database other than the API database would be acceptable to me.



# less straightforward and further out things


[snip]


This will be in Ocata.


Sorry if I wasn't clear about this. By "further out" I meant "not
newton". I'll spin off an adjacent thread to deal with any followups
on these parts. I think it is useful to keep the conversation
flowing on these topics, especially after all the input and
discussion at the mid-cycle.


Ack, and thanks :)

Best,
-jay



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Chris Dent

On Thu, 28 Jul 2016, Jay Pipes wrote:


* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.


The decision at the mid-cycle was to add a new placement_sql_connection 
configuration option to the nova.conf. The default value would be None which 
would mean the code in nova/objects/resource_provider.py would default to 
using the API database setting.


Roger that. I was pretty sure that was what we decided but wanted to
confirm since, unless I'm mistaken, it is a considerable change.

As I understand things it means:

* integrating however much of Roman's WIP at
  https://review.openstack.org/#/c/342384/ is required (we need our
  own copies of the models and migrations and a manage script to do
  a db-sync, yes?)
* adding the config setting
* doing the creation of the correct transaction context dependent on
  that config
* adding the new db into the existing nova.fixtures so the tests can work
* reno note

Do we want to test against both configurations?


# less straightforward and further out things


[snip]


This will be in Ocata.


Sorry if I wasn't clear about this. By "further out" I meant "not
newton". I'll spin off an adjacent thread to deal with any followups
on these parts. I think it is useful to keep the conversation
flowing on these topics, especially after all the input and
discussion at the mid-cycle.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Sylvain Bauza



On 28/07/2016 15:57, Chris Dent wrote:


I've been reviewing my notes from the mid-cycle and discussions
leading up to it and realized I have a few unresolved or open topics
that I hope discussion here can help resolve:

# fairly straightforward things

* At what stage in the game (of the placement api) do we need to
  implement oslo_policy handling and enforcement? Right now the auth
  model is simply all-admin-role-all-the-time.



LGTM for Newton to keep that simple logic, given that only Nova will call 
out to the placement API for the moment.

Once we begin opening doors, then yes, oslo.policy will become a thing.



* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.



I thought we agreed on that during the midcycle?



# less straightforward and further out things

There was some discussion that conflicted with reality a bit and I
think we need to resolve before too long, but shouldn't impact the
newton-based changes:

We bounced around two different HTTP resources for returning one or
several resource providers in response to a launch request:

* POST /allocations

  returns a representation of the one target for this launch
  request, already claimed



Please, why are you opening this thread now, given it's ABSOLUTELY not 
related to the placement API?
That confuses a lot of people here, and we basically had a consensus on 
the Newton target: allocations are made by the compute nodes. I don't 
see anything less than straightforward here.




* GET /resource_providers

  returns a list of candidate targets for a launch request, similar
  to what the existing select_destinations RPC call can do

The immediate problem here is that something else is already using
GET /resource_providers:

http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#get-resource-providers

Whatever the URI, it's not clear that GET would be correct here:

* We'll probably want to send a body so GET is not ideal.

* We could pass a reference to a persisted "request spec" as a query
  string item, thus maintaining a GET, but that seems to go against
  the grain of "give a thing the info it needs to get stuff done" that
  is elsewhere in the system.

  I'd personally be pretty okay with launch-info-by-reference mode as
  it allows the placement API to be in charge of requesting what version
  of a launch request it wants, rather than its clients needing to know
  what version the placement API might accept.

It's pretty clear that we're going to need at least an interim and
maybe permanent endpoint that returns a list of candidate target
resource providers. This is because, at least initially, the
placement engine will not be able to resolve all requirements down
to the one single result and additional filtering may be required in
the caller.



So we had a discussion in Hillsboro about that with no consensus yet, if 
you remember.
I heard different opinions on how nova-scheduler would integrate with 
the placement API in Ocata, and I was concerned about this service doing 
an HTTP call to an external API. My idea was rather to integrate the new 
placement tables into the existing HostManager, so that instead of 
getting a full list of compute nodes, we would provide the filters with 
a list of resource providers matching the query.




The question is: Will that need for additional filtering always be
present and if so do we:

* consider that a bad thing that we should strive to fix by
  expanding the powers and size of the placement engine
* consider that a good thing that allows the placement engine to be
  relatively simple and keeps edge-case behaviors being handled
  elsewhere

If the latter, then we'll have to consider how an allocation/claim
in a list of potential allocations can be essentially reserved,
verified, or rejected.

As an example of expanding the powers, there is the
ResourceProviderTags concept, described in:

https://review.openstack.org/#/c/345138/

This will expand the data model of resource providers and the surface
area of the HTTP API. This may very well be entirely warranted, but
there might be other options if we assume that returning a list is
"normal".

Sorry if this is unclear. I'm rather jet-lagged. Ask questions if
you have them. Thanks.






Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Dan Smith
> No. POST /allocations/{consumer_uuid} is the thing that the resource
> tracker calls for the claim on the compute node.
> 
> The POST /allocations is something we've been throwing around ideas on
> for an eventual call that the placement engine would expose for "claims
> in the scheduler".

Right, okay just wanted to make sure we weren't including /allocations/*
in the statement about Ocata.

So then yeah, I think I agree with all that.

--Dan



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Ed Leafe
On Jul 28, 2016, at 9:47 AM, Roman Podoliaka  wrote:

> How about we do a query in two steps:
> 
> 1) take a list of compute nodes (== resource providers) and apply all
> the filters which *can not* be (or at some point just *are not*)
> implemented in placement-api
> 
> 2) POST a launch request passing the *pre-filtered* list of resource
> providers.  placement api will pick one of those RP, *claim* its
> resources and return the claim info
> 
> A similar approach could probably be used for assigning weights to RPs
> when we pass the list of RPs to placement api.

That is very similar to the approach I proposed for the live migration API 
change to accommodate the needs of Watcher: provide a list of potential hosts, 
and have the scheduler select one from that list. Rather than muck with an 
existing API, we decided to postpone supporting that until the placement API 
was available. I think that providing a subset of all resource providers to the 
placement API to limit the potential results is an important addition, 
especially with the eventual goal of having this be a generic placement API 
for all sorts of resources.


-- Ed Leafe








Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Jay Pipes

On 07/28/2016 11:19 AM, Dan Smith wrote:

There was some discussion that conflicted with reality a bit and I
think we need to resolve before too long, but shouldn't impact the
newton-based changes:

We bounced around two different HTTP resources for returning one or
several resource providers in response to a launch request:

* POST /allocations

  returns a representation of the one target for this launch
  request, already claimed


This will be in Ocata.


We _do_ need the resource tracker to be reporting allocations to the
placement service in Newton in order to allow the following call (GET
/resource_providers) to work. Is this POST that thing or a different thing?


No. POST /allocations/{consumer_uuid} is the thing that the resource 
tracker calls for the claim on the compute node.


The POST /allocations is something we've been throwing around ideas on 
for an eventual call that the placement engine would expose for "claims 
in the scheduler".


Best,
-jay



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Dan Smith
>> There was some discussion that conflicted with reality a bit and I
>> think we need to resolve before too long, but shouldn't impact the
>> newton-based changes:
>>
>> We bounced around two different HTTP resources for returning one or
>> several resource providers in response to a launch request:
>>
>> * POST /allocations
>>
>>   returns a representation of the one target for this launch
>>   request, already claimed
> 
> This will be in Ocata.

We _do_ need the resource tracker to be reporting allocations to the
placement service in Newton in order to allow the following call (GET
/resource_providers) to work. Is this POST that thing or a different thing?

--Dan



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Roman Podoliaka
Hi Chris,

A really good summary, thank you!

On Thu, Jul 28, 2016 at 4:57 PM, Chris Dent  wrote:
> It's pretty clear that we're going to need at least an interim and
> maybe permanent endpoint that returns a list of candidate target
> resource providers. This is because, at least initially, the
> placement engine will not be able to resolve all requirements down
> to the one single result and additional filtering may be required in
> the caller.
>
> The question is: Will that need for additional filtering always be
> present and if so do we:
>
> * consider that a bad thing that we should strive to fix by
>   expanding the powers and size of the placement engine
> * consider that a good thing that allows the placement engine to be
>   relatively simple and keeps edge-case behaviors being handled
>   elsewhere
>
> If the latter, then we'll have to consider how an allocation/claim
> in a list of potential allocations can be essentially reserved,
> verified, or rejected.

I'd personally prefer the latter. I don't think the placement api will be
able to implement all the filters we currently have in
FilterScheduler.

How about we do a query in two steps:

1) take a list of compute nodes (== resource providers) and apply all
the filters which *can not* be (or at some point just *are not*)
implemented in placement-api

2) POST a launch request passing the *pre-filtered* list of resource
providers. The placement api will pick one of those RPs, *claim* its
resources and return the claim info

A similar approach could probably be used for assigning weights to RPs
when we pass the list of RPs to placement api.
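
The *claim* in step 2 would boil down to something like this inside the
placement DB (a sketch, assuming the allocations table from the
resource-providers work; the ids, consumer uuid and amount are
placeholders):

INSERT INTO allocations
  (resource_provider_id, consumer_id, resource_class_id, used)
VALUES
  (42, 'cccccccc-cccc-cccc-cccc-cccccccccccc', 0, 4);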

Thanks,
Roman



Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Jay Pipes
Chris, thank you so much for putting this email together. Really 
appreciate it. Comments inline. :)


On 07/28/2016 09:57 AM, Chris Dent wrote:


I've been reviewing my notes from the mid-cycle and discussions
leading up to it and realized I have a few unresolved or open topics
that I hope discussion here can help resolve:

# fairly straightforward things

* At what stage in the game (of the placement api) do we need to
  implement oslo_policy handling and enforcement? Right now the auth
  model is simply all-admin-role-all-the-time.


I think this is perfectly acceptable behaviour for Newton. In Ocata, we 
can add support for the new code-driven oslo.policy work from laski.



* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.


The decision at the mid-cycle was to add a new placement_sql_connection 
configuration option to the nova.conf. The default value would be None 
which would mean the code in nova/objects/resource_provider.py would 
default to using the API database setting.


Deployers who want to alleviate the need for a (potentially disruptive) 
data migration of tables from the API database to the new placement 
database would be able to set placement_sql_connection to a separate 
(from the API DB) URI that the placement service would begin writing 
records to in Newton. A reno note should accompany the patch that adds 
placement_sql_connection to inform deployers about their ability to 
proactively help future upgrades by setting placement_sql_connection to 
a different URI than the Nova API DB URI.



# less straightforward and further out things

There was some discussion that conflicted with reality a bit and I
think we need to resolve before too long, but shouldn't impact the
newton-based changes:

We bounced around two different HTTP resources for returning one or
several resource providers in response to a launch request:

* POST /allocations

  returns a representation of the one target for this launch
  request, already claimed


This will be in Ocata.

We should work on a spec that outlines the plan for this call and have 
it submitted and ready for discussion in Barcelona.



* GET /resource_providers

  returns a list of candidate targets for a launch request, similar
  to what the existing select_destinations RPC call can do


This will also be in Ocata. Any calls from the nova-scheduler to the new 
placement API are going into Ocata.


For Newton, we decided that the concrete goal was to have inventory and 
allocation records written *from the nova-compute workers* directly to 
the placement HTTP API.


As a stretch goal for Newton, we're going to try and get the dynamic 
resource classes CRUD operations added to the placement REST API as 
well. This will allow Ironic to participate in the brave new 
resource-providers world with the 'node resource class' that Ironic is 
adding to their API. [1]


[1] https://review.openstack.org/#/c/345080/


The immediate problem here is that something else is already using
GET /resource_providers:

http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#get-resource-providers

Whatever the URI, it's not clear that GET would be correct here:

* We'll probably want to send a body so GET is not ideal.

* We could pass a reference to a persisted "request spec" as a query
  string item, thus maintaining a GET, but that seems to go against
  the grain of "give a thing the info it needs to get stuff done" that
  is elsewhere in the system.

  I'd personally be pretty okay with launch-info-by-reference mode as
  it allows the placement API to be in charge of requesting what version
  of a launch request it wants, rather than its clients needing to know
  what version the placement API might accept.

It's pretty clear that we're going to need at least an interim and
maybe permanent endpoint that returns a list of candidate target
resource providers. This is because, at least initially, the
placement engine will not be able to resolve all requirements down
to the one single result and additional filtering may be required in
the caller.

The question is: Will that need for additional filtering always be
present and if so do we:

* consider that a bad thing that we should strive to fix by
  expanding the powers and size of the placement engine
* consider that a good thing that allows the placement engine to be
  relatively simple and keeps edge-case behaviors being handled
  elsewhere

If the latter, then we'll have to consider how an allocation/claim
in a list of potential allocations can be essentially reserved,
verified, or rejected.

As an example of expanding the powers, there is the
ResourceProviderTags concept, described in:

https://review.openstack.org/#/c/345138/

[openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Chris Dent


I've been reviewing my notes from the mid-cycle and discussions
leading up to it and realized I have a few unresolved or open topics
that I hope discussion here can help resolve:

# fairly straightforward things

* At what stage in the game (of the placement api) do we need to
  implement oslo_policy handling and enforcement? Right now the auth
  model is simply all-admin-role-all-the-time.

* There was some discussion of adding a configuration setting (e.g.
  'placement_connection') that if not None (the default) would be
  used as the connection for the placement database. If None, the
  API database would be used. I can't recall if we said 'yea' or
  'nay' to this idea. The current code uses the api database and its
  config.

# less straightforward and further out things

There was some discussion that conflicted with reality a bit and I
think we need to resolve before too long, but shouldn't impact the
newton-based changes:

We bounced around two different HTTP resources for returning one or
several resource providers in response to a launch request:

* POST /allocations

  returns a representation of the one target for this launch
  request, already claimed

* GET /resource_providers

  returns a list of candidate targets for a launch request, similar
  to what the existing select_destinations RPC call can do

The immediate problem here is that something else is already using
GET /resource_providers:

http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#get-resource-providers

Whatever the URI, it's not clear that GET would be correct here:

* We'll probably want to send a body so GET is not ideal.

* We could pass a reference to a persisted "request spec" as a query
  string item, thus maintaining a GET, but that seems to go against
  the grain of "give a thing the info it needs to get stuff done" that
  is elsewhere in the system.

  I'd personally be pretty okay with launch-info-by-reference mode as
  it allows the placement API to be in charge of requesting what version
  of a launch request it wants, rather than its clients needing to know
  what version the placement API might accept.

It's pretty clear that we're going to need at least an interim and
maybe permanent endpoint that returns a list of candidate target
resource providers. This is because, at least initially, the
placement engine will not be able to resolve all requirements down
to the one single result and additional filtering may be required in
the caller.

The question is: Will that need for additional filtering always be
present and if so do we:

* consider that a bad thing that we should strive to fix by
  expanding the powers and size of the placement engine
* consider that a good thing that allows the placement engine to be
  relatively simple and keeps edge-case behaviors being handled
  elsewhere

If the latter, then we'll have to consider how an allocation/claim
in a list of potential allocations can be essentially reserved,
verified, or rejected.

As an example of expanding the powers, there is the
ResourceProviderTags concept, described in:

https://review.openstack.org/#/c/345138/

This will expand the data model of resource providers and the surface
area of the HTTP API. This may very well be entirely warranted, but
there might be other options if we assume that returning a list is
"normal".

Sorry if this is unclear. I'm rather jet-lagged. Ask questions if
you have them. Thanks.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent