Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-04-03 Thread Chris Dent

On Mon, 13 Mar 2017, Chris Dent wrote:


* The scheduler report client in nova, and to a minor degree the
 filter scheduler, use some of the same exceptions and ovo.objects
 that placement uses, which presents a bit of blechiness with
 regard to code duplication. I suppose long term we could consider
 a placement-lib or something like that, except that the
 functionality provided by the same-named objects and exceptions
 is not entirely congruent. From the point of view of the external
 part of the placement API what matters are not objects, but JSON
 structures.


Reporting here for the sake of keeping track: I've made a patch to remove
the use of ResourceProvider from the filter_scheduler and
resource_tracker:

https://review.openstack.org/#/c/452569/

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-15 Thread John Garbutt
On 13 March 2017 at 15:17, Jay Pipes  wrote:
> On 03/13/2017 11:13 AM, Dan Smith wrote:
>> [...]
>
> ++

+1 from me, a bit late I know.

John



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-14 Thread Roman Podoliaka
Hi Matt,

On Tue, Mar 14, 2017 at 5:27 PM, Matt Riedemann  wrote:
> We did agree to provide an openstackclient plugin purely for CLI
> convenience. That would be in a separate repo, not part of nova or
> novaclient. I've started a blueprint [1] for tracking that work. *The
> placement osc plugin blueprint does not currently have an owner.* If this is
> something someone is interested in working on, please let me know.
>
> [1] https://blueprints.launchpad.net/nova/+spec/placement-osc-plugin

I'll be glad to help with this!

Thanks,
Roman



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-14 Thread Matt Riedemann

On 3/13/2017 9:14 AM, Sylvain Bauza wrote:


> To be honest, one of the things I think we're still missing is a separate
> client that deployers would package, so that Nova and other consumer
> projects would use it for calling the Placement API.
> At the moment, we have a huge amount of code in the nova.scheduler.report
> module that does smart things, and I'd love to see that in a
> separate python package (maybe in the novaclient repo, or something
> else) so we could ask deployers to package *that only*.
>
> The interest in that is that it wouldn't be a separate service project,
> just a pure client package as a first try, and we could see how to cut
> placement separately the cycle after that.
>
> -Sylvain


We talked about the need, or lack thereof, for a python API client in
the nova IRC channel today and decided that for now, services should
just use a minimal in-tree pattern built on keystoneauth to work with
the placement API. Nova and Neutron are already doing this today. There
might be common utility code that comes out of that at some point which
could justify a placement-lib, but let's determine that after more
projects are using the service, like Cinder and Ironic.
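
For illustration, a minimal sketch (assuming keystoneauth1; this is not
Nova's or Neutron's actual code, and the credentials, URLs, and
microversion are placeholders) of what that in-tree pattern looks like:

    # Sketch only: talk to the placement REST API through a
    # keystoneauth1 session, with no placement client library.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # The session resolves the endpoint from the service catalog,
    # and the response is plain JSON; no python binding objects.
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement',
                                     'interface': 'public'},
                    headers={'OpenStack-API-Version': 'placement 1.4'})
    providers = resp.json()['resource_providers']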


We also agreed to not create a python-placementclient type package that 
mimics novaclient and has a python API binding. We want API consumers to 
use the REST API directly which forces us to have a clean and 
well-documented API, rather than hiding warts within a python API 
binding client package.


We did agree to provide an openstackclient plugin purely for CLI 
convenience. That would be in a separate repo, not part of nova or 
novaclient. I've started a blueprint [1] for tracking that work. *The 
placement osc plugin blueprint does not currently have an owner.* If 
this is something someone is interested in working on, please let me know.


[1] https://blueprints.launchpad.net/nova/+spec/placement-osc-plugin

--

Thanks,

Matt



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 11:13 AM, Dan Smith wrote:

> [...]


++

-jay



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Dan Smith
Interestingly, we just had a meeting about cells and the scheduler,
which had quite a bit of overlap on this topic.

> That said, as mentioned in the previous email, the priorities for Pike
> (and likely Queens) will continue to be, in order: traits, ironic,
> shared resource pools, and nested providers.

Given that the CachingScheduler is still a thing until we get claims in
the scheduler, and given that CachingScheduler doesn't use placement
like the FilterScheduler does, I think we need to prioritize the claims
part of the above list.

Based on the discussion several of us just had, the priority list
actually needs to be this:

1. Traits
2. Ironic
3. Claims in the scheduler
4. Shared resources
5. Nested resources

Claims in the scheduler is not likely to be a thing for Pike, but should
be something we do as much prep for as possible, and land early in Queens.

Personally, I think getting to the point of claiming in the scheduler
will be easier if we have placement in tree, and anything we break in
that process will be easier to backport if the pieces are in the same
tree. However, I'd say that after that goal is met, splitting placement
should be good to go.

--Dan



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Chris Dent

On Mon, 13 Mar 2017, Sylvain Bauza wrote:


> That way, we could do the necessary quirks in the client in case the
> split goes bad.


I don't understand this statement. If the client is always using the
service catalog (which it should be) and the client is always only
aware of the HTTP interface (which it should be) what difference does
where the code lives make?
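
As a concrete sketch of that point (assuming keystoneauth1; the
credential values are placeholders), the client's only dependency is
the catalog lookup, which is identical no matter where the service
code lives:

    # Sketch only: discover placement via the service catalog.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone/v3', username='nova',
                       password='secret', project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Whichever codebase serves this endpoint is invisible here.
    endpoint = sess.get_endpoint(service_type='placement',
                                 interface='public')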

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 15:17, Jay Pipes wrote:
> On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
>> Please don't.
>> Having a separate repository would mean that deployers would need to
>> implement a separate package for placement plus discussing
>> how/when to use it.
> 
> Apparently, there already *are* separate packages for
> openstack-nova-api-placement...
> 

Good to know. That said, I'm not sure all deployers are packaging that
separately :-)

FWIW, I'm not against the split, I just think we should first have a
separate and clean client package for placement in a previous cycle.

My thoughts are:
 - in Pike/Queens (TBD), make placementclient optional, with a fallback
to scheduler.report
 - in Queens/R, make placementclient mandatory
 - in R/S, make Placement a separate service.

That way, we could do the necessary quirks in the client in case the
split goes bad.

-Sylvain


> Best,
> -jay



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 09:16 AM, Sylvain Bauza wrote:

> Please don't.
> Having a separate repository would mean that deployers would need to
> implement a separate package for placement plus discussing
> how/when to use it.


Apparently, there already *are* separate packages for 
openstack-nova-api-placement...


Best,
-jay



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 14:59, Jay Pipes wrote:
> On 03/13/2017 08:41 AM, Chris Dent wrote:
>> [...]
>
> [...]

Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 10:02 AM, Eoghan Glynn wrote:

>>> [...]
>>
>> [...]
>
> From a downstream perspective, I'd prefer to see a concentration on
> deriving *user-visible* benefits from placement before incurring more
> churn with an extraction (given the proximity to the churn on
> deployment tooling from the scheduler decision-making cutover to
> placement at the end of ocata).


The scheduler decision-making cutover *was* a user-visible benefit from 
the placement service. :)


Just because we could have done a better job with functional integration 
testing and documentation of the upgrade steps doesn't mean we should 
slow down progress here. We've learned lessons in Ocata around the need 
to be in a tighter feedback loop with the deployment teams.


Sean (and I) are merely suggesting to get the timeline for a split-out 
hammered out and ready for Queens so that we get ahead of the game and 
actually plan meetings with deployment folks and make sure docs and 
tests are proper ahead of the split-out.


That said, as mentioned in the previous email, the priorities for Pike 
(and likely Queens) will continue to be, in order: traits, ironic, 
shared resource pools, and nested providers.


Best,
-jay



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Eoghan Glynn
> > [...]
>
> [...]

From a downstream perspective, I'd prefer to see a concentration on
deriving *user-visible* benefits from placement before incurring more
churn with an extraction (given the proximity to the churn on
deployment tooling from the scheduler decision-making cutover to
placement at the end of ocata).

Just my $0.02 ...

Cheers,
Eoghan



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 08:41 AM, Chris Dent wrote:


> [...]


Chris, great work on this over the weekend. It gives us some valuable 
data points and information to consider about the split out of the 
placement API. Really appreciate the effort.


A few things:

1) Definitely agree on the need to have the Nova-side stuff *not* 
reference ovo objects for resource providers. We want the Nova side to 
use JSON/dict representations within the resource tracker and scheduler. 
This work can be done right now and isn't dependent on anything AFAIK.
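
As a hypothetical illustration of that direction (the field names
match the placement REST API's resource provider representation, but
the helper function is made up), the Nova side would work with plain
dicts:

    # Sketch only: handle the JSON payload directly instead of a
    # ResourceProvider ovo object.
    provider = {
        "uuid": "4e8e5957-649f-477b-9e5b-f1f75b21c03c",
        "name": "compute-node-1",
        "generation": 1,
    }

    def provider_summary(p):
        # Plain dict access over the REST payload; no ovo class.
        return "%s (%s) gen=%d" % (p["name"], p["uuid"], p["generation"])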


2) The FaultWrapper stuff can also be handled relatively free of
dependencies. In fact, there is a spec around error reporting using
codes in addition to messages [1] that we could tack the FaultWrapper
cleanup items onto. Basically, make that spec into a "fix up error
handling in placement API" general work item list...


3) While the split of the placement API is not the highest priority 
placement item in Pike (we are focused on traits, ironic integration, 
shared pools and then nested providers, in that order), I do think it's 
worthwhile splitting the placement service out from Nova in Queens. I 
don't believe that doing claims in the placement API is something that 
needs to be completed before splitting out. I'll respond to Sylvain's 
thread about this separately.


Thanks again for your efforts this weekend,
-jay

[1] https://review.openstack.org/#/c/418393/


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sean Dague
On 03/13/2017 09:33 AM, Sylvain Bauza wrote:

> 
> We are close to the first milestone in Pike, right? We also have
> priorities for Placement that we discussed at the PTG, and we never
> discussed how to cut placement during the PTG.
>
> Also, we haven't discussed yet with operators how they would like
> to see Placement being cut. At least, we should wait for the Forum for
> that.
>
> For the moment, only operators using Ocata are using the placement API
> and we know that most of them had problems when using it. Pushing to
> cut Placement in Queens would then mean that they would only have
> one stable cycle after Ocata for using it.
> Also, discussing the above would mean that we could punt other
> discussions. For example, I'd prefer to discuss how we could fix the
> main problem we have with the scheduler about scheduler claims *before*
> trying to think about how to cut Placement.

It's definitely good to figure out what challenges people were having in
rolling things out and to document them, to figure out if they've been
addressed or not. One key thing seemed to be not understanding that
services all need to be registered in the catalog before services beyond
keystone are launched. There is also probably a keystoneauth1 fix for
this to make it a softer fail.
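
To illustrate that ordering constraint (a sketch using
python-keystoneclient's v3 API; every name and URL is a placeholder),
the placement service and endpoints have to land in the catalog before
anything else tries to discover them:

    # Sketch only: register placement in the keystone catalog first.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone/v3', username='admin',
                       password='secret', project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    keystone = client.Client(session=session.Session(auth=auth))

    service = keystone.services.create(name='placement',
                                       type='placement')
    for interface in ('public', 'internal', 'admin'):
        keystone.endpoints.create(service=service.id,
                                  url='http://controller/placement',
                                  interface=interface,
                                  region='RegionOne')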

The cutover can be pretty seamless. Yes, upgrade scenarios need to be
looked at. But that's honestly not much different from deprecating
config options or making new aliases. It should be much less user
noticeable than the newly required cells v2 support.

The real question to ask, now that there is a well defined external
interface, is whether evolution of the Placement service stack, and
addressing bugs and shortcomings related to its usage, will work better
under a dedicated core team, or inside of Nova. My gut says Queens is
the right time to make that split, and to start planning for it now.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 14:21, Sean Dague wrote:
> On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
>>
>>
>> On 13/03/2017 13:41, Chris Dent wrote:
>>> [...]
>>
>> Please don't.
>> Having a separate repository would mean that deployers would need to
>> implement a separate package for placement plus discussing
>> how/when to use it.
>>
>> For the moment, I'd prefer to leave operators using the placement
>> API by using Nova first and then, after 3 or 4 cycles, possibly
>> discussing with them how to cut it.
>>
>> At the moment, I think that we already have a good priority for
>> placement in Nova, so I don't think it's a problem to still have it in Nova.
> 
> Given that the design was always to split (eventually), and part of that
> means that we get to start building up a dedicated core team, I'm not
> sure why waiting 3 or 4 additional cycles makes sense here.
> 
> I get that Pike is probably the wrong release to do this cut, given that
> it only *just* became mandatory. But It feels like saying this would be
> a Queens goal, and getting things structured in such a way that the
> split is easy (like any renaming of binaries, any 

Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sean Dague
On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
> 
> 
> On 13/03/2017 13:41, Chris Dent wrote:
>> [...]
> 
> Please don't.
> Having a separate repository would mean that deployers would need to
> implement a separate package for placement plus discussing
> how/when to use it.
> 
> For the moment, I'd prefer to leave operators using the placement
> API by using Nova first and then, after 3 or 4 cycles, possibly
> discussing with them how to cut it.
> 
> At the moment, I think that we already have a good priority for
> placement in Nova, so I don't think it's a problem to still have it in Nova.

Given that the design was always to split (eventually), and part of that
means that we get to start building up a dedicated core team, I'm not
sure why waiting 3 or 4 additional cycles makes sense here.

I get that Pike is probably the wrong release to do this cut, given that
it only *just* became mandatory. But it feels like saying this would be
a Queens goal, and getting things structured in such a way that the
split is easy (like any renaming of binaries, and anything that should
be deprecated), would seem to be good goals for Pike.

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 13:41, Chris Dent wrote:
> [...]

Please don't.
Having a separate repository would mean that deployers would need to
implement a separate package for placement plus discussing
how/when to use it.

For the moment, I'd prefer to leave operators using the placement
API by using Nova first and then, after 3 or 4 cycles, possibly
discussing with them how to cut it.

At the moment, I think that we already have a good priority for
placement in Nova, so I don't think it's a problem to still have it in Nova.

My .02,
-Sylvain




[openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Chris Dent



From the start we've been saying that it is probably right for the
placement service to have its own repository. This is aligned with
the long term goal of placement being useful to many services, not
just nova, and also helps to keep placement contained and
comprehensible and thus maintainable.

I've been worried for some time that the longer we put this off, the
more complicated an extraction becomes. Rather than carry on
worrying about it, I took some time over the weekend to experiment
with a slapdash extraction to see if I could identify what would be
the sticking points. The results are here

https://github.com/cdent/placement

My methodology was to lay in the basics for being able to run the
functional (gabbi) tests and then use the failures to fix the
code. If you read the commit log (there's only 16 commits) in
reverse it tells a little story of what was required.
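
For anyone curious how that wiring works, a rough sketch using gabbi's
documented test-loader API (the directory name and the app factory
module are hypothetical, not the extracted repo's actual layout):

    # Sketch only: feed a directory of gabbi YAML files to the test
    # runner, running requests against an in-process WSGI app.
    import os
    from gabbi import driver

    def app_factory():
        # Placeholder: would return the placement WSGI application.
        from placement import deploy  # hypothetical module path
        return deploy.loadapp()

    def load_tests(loader, tests, pattern):
        test_dir = os.path.join(os.path.dirname(__file__), 'gabbits')
        # "intercept" runs the YAML-defined HTTP tests without a real
        # server, so failures point straight at the new code.
        return driver.build_tests(test_dir, loader,
                                  intercept=app_factory)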

All the gabbi tests are now passing (without them being changed)
except for four that verify the response strings from exceptions. I
didn't copy in exceptions, I created them anew to avoid copying
unnecessary nova-isms, and didn't bother (for now) with replicating
keyword handling.

Unit tests and non-gabbi functional tests were not transferred over
(as that would have been something more than "slapdash").

Some observations or things to think about:

* Since there's only one database and all the db query code is in
  the objects, the database handling is simplified. oslo_db setup
  can be used more directly (see the sketch after this list).

* The objects being oslo versioned objects is kind of overkill in
  this context but doesn't get too much in the way.

* I collapsed the fields.ResourceClass and objects.ResourceClass
  into the same file so the latter was renamed. Doing this
  exploration made a lot of the ResourceClass handling look pretty
  complicated. Much of that complexity is because we had to deal
  with evolving through different functionality. If we built this
  functionality in a greenfield repo it could probably be simpler.

* The FaultWrapper middleware is turned off in the WSGI stack
  because copying it over from nova would require dealing with a
  hierarchy of classes. A simplified version of it would probably
  need to be stuck back in (and apparently a gabbi test to exercise
  it, as there's not one now).

* The number of requirements in the two requirements files is nicely
  small.

* The scheduler report client in nova, and to a minor degree the
  filter scheduler, use some of the same exceptions and ovo.objects
  that placement uses, which presents a bit of blechiness with
  regard to code duplication. I suppose long term we could consider
  a placement-lib or something like that, except that the
  functionality provided by the same-named objects and exceptions
  is not entirely congruent. From the point of view of the external
  part of the placement API what matters are not objects, but JSON
  structures.

* I've done nothing here with regard to how devstack would choose
  between the old and new placement code locations but that will be
  something to solve. It seems like it ought to be possible for two
  different sources of the placement-code to exist; just register
  one endpoint. Since we've declared that service discovery is the
  correct and only way to find placement, this ought to be okay.
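
On the oslo_db point above, a rough sketch of what "used more
directly" could look like with a single database, via oslo.db's
enginefacade (the connection URL, table, and function names are
placeholders):

    # Sketch only: one globally configured engine facade.
    from oslo_db.sqlalchemy import enginefacade

    enginefacade.configure(connection='sqlite:///placement.db')

    @enginefacade.reader
    def get_resource_provider_rows(context):
        # The facade decorator supplies context.session.
        return context.session.execute(
            'SELECT id, uuid, name FROM resource_providers').fetchall()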

I'm not sure how or if we want to proceed with this topic, but I
think this at least allows us to talk about it with less guessing.
My general summary is "yeah, this is doable, without huge amounts
of work."

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent