[openstack-dev] [nova][placement] No n-sch meeting next week

2018-11-07 Thread Eric Fried
...due to summit. -efried __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-06 Thread Eric Fried
Pipes wrote: > > On 11/02/2018 03:22 PM, Eric Fried wrote: > > All- > > > > Based on a (long) discussion yesterday [1] I have put up a patch [2] > > whereby you can set [compute]resource_provider_association

[openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-02 Thread Eric Fried
All- Based on a (long) discussion yesterday [1] I have put up a patch [2] whereby you can set [compute]resource_provider_association_refresh to zero and the resource tracker will never* refresh the report client's provider cache. Philosophically, we're removing the "healing" aspect of the
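The knob described above lives in nova.conf on the compute node; a minimal sketch of the configuration (assuming the patch merges with the option name as quoted) would be:

```ini
[compute]
# 0 disables periodic refresh of the report client's provider cache;
# the resource tracker will then never "heal" externally-made changes.
resource_provider_association_refresh = 0
```

The trade-off, per the thread, is fewer placement calls at the cost of the resource tracker no longer correcting out-of-band changes to providers.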

Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Eric Fried
Forwarding to openstack-operators per Jay. On 10/24/18 10:10, Jay Pipes wrote: > Nova's API has the ability to create "quota classes", which are > basically limits for a set of resource types. There is something called > the "default quota class" which corresponds to the limits in the >

Re: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-18 Thread Eric Fried
Sorry, I'm opposed to this idea. I admit I don't understand the political framework, nor have I read the governing documents beyond [1], but that document makes it clear that this is supposed to be a community-wide vote. Is it really legal for the TC (or whoever has merge rights on [2]) to merge

Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-10 Thread Eric Fried
On 10/10/2018 12:41 PM, Greg Hill wrote: > I've been out of the openstack loop for a few years, so I hope this > reaches the right folks. > > Josh Harlow (original author of taskflow and related libraries) and I > have been discussing the option of moving taskflow out of the openstack >

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Eric Fried
On 10/09/2018 02:20 PM, Jay Pipes wrote: > On 10/09/2018 11:04 AM, Balázs Gibizer wrote: >> If you do the force flag removal in a nw microversion that also means >> (at least to me) that you should not change the behavior of the force >> flag in the old microversions. > > Agreed. > > Keep the

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Eric Fried
IIUC, the primary thing the force flag was intended to do - allow an instance to land on the requested destination even if that means oversubscription of the host's resources - doesn't happen anymore since we started making the destination claim in placement. IOW, since pike, you don't actually

Re: [openstack-dev] [nova] Rocky RC time regression analysis

2018-10-08 Thread Eric Fried
Mel- I don't have much of anything useful to add here, but wanted to say thanks for this thorough analysis. It must have taken a lot of time and work. Musings inline. On 10/05/2018 06:59 PM, melanie witt wrote: > Hey everyone, > > During our Rocky retrospective discussion at the PTG [1], we

Re: [openstack-dev] [placement] update 18-40

2018-10-05 Thread Eric Fried
> * What should we do about nova calling the placement db, like in >   > [nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416) This should be purely a placement-side migration, nah? >   and >   >

Re: [openstack-dev] [neutron] Please opt-in for neutron-lib patches

2018-10-03 Thread Eric Fried
Hi Boden. Love this initiative. We would like networking-powervm to be included, and have proposed [5], but are wondering why we weren't picked up in [6]. Your email [1] says "If your project isn't in [3][4], but you think it should be; it may be missing a recent neutron-lib version in your

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-02 Thread Eric Fried
On 09/28/2018 07:23 PM, Mohammed Naser wrote: > On Fri, Sep 28, 2018 at 7:17 PM Chris Dent wrote: >> >> On Fri, 28 Sep 2018, melanie witt wrote: >> >>> I'm concerned about a lot of repetition here and maintenance headache for >>> operators. That's where the thoughts about whether we should

Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Eric Fried
On 10/02/2018 11:09 AM, Jim Rollenhagen wrote: > On Tue, Oct 2, 2018 at 11:40 AM Eric Fried wrote: > > > What Eric is proposing (and Julia and I seem to be in favor of), is > > nearly the same as your proposal. The single difference is that these > > co

Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Eric Fried
> What Eric is proposing (and Julia and I seem to be in favor of), is > nearly the same as your proposal. The single difference is that these > config templates or deploy templates or whatever could *also* require > certain traits, and the scheduler would use that information to pick a > node.

Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-01 Thread Eric Fried
> So say the user requests a node that supports UEFI because their image > needs UEFI. Which workflow would you want here? > > 1) The operator (or ironic?) has already configured the node to boot in > UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait. > > 2) Any node that

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Eric Fried
On 09/29/2018 10:40 AM, Jay Pipes wrote: > On 09/28/2018 04:36 PM, Eric Fried wrote: >> So here it is. Two of the top influencers in placement, one saying we >> shouldn't overload traits, the other saying we shouldn't add a primitive >> that would obviate the need f

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Eric Fried
Dan- On 10/01/2018 10:06 AM, Dan Smith wrote: > I was out when much of this conversation happened, so I'm going to > summarize my opinion here. > >> So from a code perspective _placement_ is completely agnostic to >> whether a trait is "PCI_ADDRESS_01_AB_23_CD", "STORAGE_DISK_SSD", or >>

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried
On 09/28/2018 09:41 AM, Balázs Gibizer wrote: > > > On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried wrote: >> It's time somebody said this. >> >> Every time we turn a corner or look under a rug, we find another use >> case for provider traits in placemen

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried
On 09/28/2018 12:19 PM, Chris Dent wrote: > On Fri, 28 Sep 2018, Jay Pipes wrote: > >> On 09/28/2018 09:25 AM, Eric Fried wrote: >>> It's time somebody said this. > > Yes, a useful topic, I think. > >>> Every time we turn a corner or look under a rug, we

[openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Eric Fried
It's time somebody said this. Every time we turn a corner or look under a rug, we find another use case for provider traits in placement. But every time we have to have the argument about whether that use case satisfies the original "intended purpose" of traits. That's the only reason I've ever been

Re: [openstack-dev] [nova] Stein PTG summary

2018-09-27 Thread Eric Fried
On 09/27/2018 07:37 AM, Matt Riedemann wrote: > On 9/27/2018 5:23 AM, Sylvain Bauza wrote: >> >> >> On Thu, Sep 27, 2018 at 2:46 AM Matt Riedemann > > wrote: >> >>     On 9/26/2018 5:30 PM, Sylvain Bauza wrote: >> > So, during this day, we also discussed about

Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Eric Fried
+1 On 09/19/2018 10:25 AM, Chris Dent wrote: > > > I'd like to nominate Tetsuro Nakamura for membership in the > placement-core team. Throughout placement's development Tetsuro has > provided quality reviews; done the hard work of creating rigorous > functional tests, making them fail, and

Re: [openstack-dev] About microversion setting to enable nested resource provider

2018-09-13 Thread Eric Fried
There's a patch series in progress for this: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates It needs some TLC. I'm sure gibi and tetsuro would welcome some help... efried On 09/13/2018 08:31 AM, Naichuan Sun wrote: > Thank you very much, Jay. > Is there somewhere I

[openstack-dev] [nova][placement] No NovaScheduler meeting during PTG

2018-09-07 Thread Eric Fried
Our regularly scheduled Monday nova-scheduler meeting will not take place next Monday, Sept 10th. We'll resume the following week. -efried

Re: [openstack-dev] Nominating Chris Dent for placement-core

2018-09-07 Thread Eric Fried
After a week with only positive responses, it is my pleasure to add Chris to the placement-core team. Welcome home, Chris. On 08/31/2018 10:45 AM, Eric Fried wrote: > The openstack/placement project [1] and its core team [2] have been > established in gerrit. > > I hereby nominat

Re: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests

2018-09-06 Thread Eric Fried
un Software Park, Haidian > District, Beijing 100193, PRC > > Eric Fried ---09/04/2018 09:35:09 PM---Folks- The other day, I posted an > experimental patch [1] with

Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-09-04 Thread Eric Fried
> 030 is okay as long as nothing goes wrong. If something does it > raises exceptions which would currently fail as the exceptions are > not there. See below for more about exceptions. Maybe I'm misunderstanding what these migration thingies are supposed to be doing, but 030 [1] seems like it's

[openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests

2018-09-04 Thread Eric Fried
Folks- The other day, I posted an experimental patch [1] with an effectively empty ComputeDriver (just enough to make n-cpu actually start) to see how much of our CI would pass. The theory being that any tests that still pass are tests that don't touch our compute driver, and are

[openstack-dev] Nominating Chris Dent for placement-core

2018-08-31 Thread Eric Fried
The openstack/placement project [1] and its core team [2] have been established in gerrit. I hereby nominate Chris Dent for membership in the placement-core team. He has been instrumental in the design, implementation, and stewardship of the placement API since its inception and has shown clear

[openstack-dev] [nova][placement] Freezing placement for extraction

2018-08-30 Thread Eric Fried
Greetings. The captains of placement extraction have declared readiness to begin the process of seeding the new repository (once [1] has finished merging). As such, we are freezing development in the affected portions of the openstack/nova repository until this process is completed. We're relying

Re: [openstack-dev] [nova] [placement] XenServer CI failed frequently because of placement update

2018-08-28 Thread Eric Fried
Naichuan- Are you running with [1]? If you are, the placement logs (at debug level) should be giving you some useful info. If you're not... perhaps you could pull that in :) Note that it refactors the _get_provider_ids_matching method completely, so it's possible your problem will

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-27 Thread Eric Fried
Thanks Doug. I restored [4] and moved the code to the fixture module. Enjoy. -efried On 08/27/2018 10:59 AM, Doug Hellmann wrote: > Excerpts from Eric Fried's message of 2018-08-22 09:13:25 -0500: >> For some time, nova has been using uuidsentinel [1] which conveniently >> allows you to get a

Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-27 Thread Eric Fried
Thanks Matt, you summed it up nicely. Just one thing to point out... > Option 1 would clearly be a drain on at least 2 nova cores to go through > the changes. I think Eric is on board for reviewing options 1 or 2 in > either case, but he prefers option 2. Since I'm throwing a wrench in the >

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Eric Fried
So... Restore the PS of the oslo_utils version that exposed the global [1]? Or use the forced-singleton pattern from nova [2] to put it in its own importable module, e.g. oslo_utils.uuidutils.uuidsentinel? (FTR, "import only modules" is a thing for me too, but I've noticed it doesn't seem to be

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Eric Fried
The compromise, using the patch as currently written [1], would entail adding one line at the top of each test file: uuids = uuidsentinel.UUIDSentinels() ...as seen (more or less) at [2]. The subtle difference being that this `uuids` wouldn't share a namespace across the whole process, only

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-23 Thread Eric Fried
Do you mean an actual fixture, that would be used like:

    class MyTestCase(testtools.TestCase):
        def setUp(self):
            self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids

        def test_foo(self):
            do_a_thing_with(self.uuids.foo)

? That's... okay I guess, but the

[openstack-dev] [oslo] UUID sentinel needs a home

2018-08-22 Thread Eric Fried
For some time, nova has been using uuidsentinel [1] which conveniently allows you to get a random UUID in a single LOC with a readable name that's the same every time you reference it within that process (but not across processes). Example usage: [2]. We would like other projects (notably the
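The behavior described here, one stable random UUID per attribute name for the life of a process, can be sketched in a few lines. The class and attribute names below are illustrative only, not nova's actual implementation:

```python
import uuid


class UUIDSentinels(object):
    """Sketch of a uuid-sentinel helper: each attribute name maps to a
    random UUID that stays stable for the life of the process."""

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so _sentinels
        # itself is found without recursing through here.
        if name.startswith('_'):
            raise AttributeError(name)
        # Generate on first access, then always return the same value.
        if name not in self._sentinels:
            self._sentinels[name] = str(uuid.uuid4())
        return self._sentinels[name]


uuids = UUIDSentinels()
```

With this, `uuids.instance1` reads like a named constant in test assertions but is a real (random) UUID, so tests never collide on hardcoded strings.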

Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-22 Thread Eric Fried
o at least initially we should leave that out. On 08/22/2018 07:55 AM, Balázs Gibizer wrote: > > > On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried wrote: >> gibi- >> >>>>  - On migration, when we transfer the allocations in either >>>> direction, a >&

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-21 Thread Eric Fried
> The reshaper code > is still going through code review, then next we have the integration to > do. To clarify: we're doing the integration in concert with the API side. Right now the API side patches [1][2] are in series underneath the nova side [3]. In a placement-in-its-own-repo world, the

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-20 Thread Eric Fried
This is great information, thanks Hongbin. If I'm understanding correctly, it sounds like Zun ultimately wants to be a peer of nova in terms of placement consumption. Using the resource information reported by nova, neutron, etc., you wish to be able to discover viable targets for a container

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-18 Thread Eric Fried
> So my hope is that (in no particular order) Jay Pipes, Eric Fried, > Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov, > Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to > placement whom I'm forgetting [1] would express their preference on > what the

Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-17 Thread Eric Fried
gibi- >> - On migration, when we transfer the allocations in either direction, a >> conflict means someone managed to resize (or otherwise change >> allocations?) since the last time we pulled data. Given the global lock >> in the report client, this should have been tough to do. If it does >>

Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Eric Fried
Thanks for this, gibi. TL;DR: a). I didn't look, but I'm pretty sure we're not caching allocations in the report client. Today, nobody outside of nova (specifically the resource tracker via the report client) is supposed to be mucking with instance allocations, right? And given the global lock

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-14 Thread Eric Fried
Folks- The patch mentioned below [1] has undergone several rounds of review and collaborative revision, and we'd really like to get your feedback on it. From the commit message: Here are some examples of the debug output: - A request for three resources with no aggregate or trait

Re: [openstack-dev] [cinder] Reminder about the weekly Cinder meeting ...

2018-08-13 Thread Eric Fried
Are you talking about the nastygram from "Sigyn" saying: "Your actions in # tripped automated anti-spam measures (nicks/hilight spam), but were ignored based on your time in channel; stop now, or automated action will still be taken. If you have any questions, please don't hesitate to contact a

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-03 Thread Eric Fried
> I'm of two minds here. > > On the one hand, you have the case where the end user has accidentally > requested some combination of things that isn't normally available, and > they need to be able to ask the provider what they did wrong.  I agree > that this case is not really an exception, those

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
> And we could do the same kind of approach with the non-granular request > groups by reducing the single large SQL statement that is used for all > resources and all traits (and all agg associations) into separate SELECT > statements. > > It could be slightly less performance-optimized but more

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
I should have made it clear that this is a tiny incremental improvement, to a code path that almost nobody is even going to see until Stein. In no way was it intended to close this topic. Thanks, efried On 08/02/2018 12:40 PM, Eric Fried wrote: > Jay et al- > >> And what I

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
Jay et al- > And what I'm referring to is doing a single query per "related > resource/trait placement request group" -- which is pretty much what > we're heading towards anyway. > > If we had a request for: > > GET /allocation_candidates? >  resources0=VCPU:1& >  
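The granular request quoted above is just a set of numbered query parameters. A hedged illustration of how a client might assemble such a GET /allocation_candidates query string (the trait name is a made-up example, not from the thread):

```python
from urllib.parse import urlencode

# Each numbered group asks placement to satisfy its resources (and any
# required traits) from a single resource provider.
params = {
    'resources0': 'VCPU:1',
    'resources1': 'VGPU:1',
    'required1': 'CUSTOM_EXAMPLE_TRAIT',  # hypothetical trait name
}
query = 'GET /allocation_candidates?' + urlencode(params)
```

Jay's suggestion amounts to turning each such numbered group into its own SELECT rather than one large joined statement.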

Re: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal

2018-08-01 Thread Eric Fried
Sundar- > On an unrelated note, thanks for the > pointer to the GPU spec > (https://review.openstack.org/#/c/579359/10/doc/source/specs/rocky/device-passthrough.rst). > I will review that. Thanks. Please note that this is for nova-powervm, PowerVM's *out-of-tree* compute driver. We hope to bring

Re: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal

2018-07-31 Thread Eric Fried
Sundar- > * Cyborg drivers deal with device-specific aspects, including > discovery/enumeration of devices and handling the Device Half of the > attach (preparing devices/accelerators for attach to an instance, > post-attach cleanup (if any) after successful attach, releasing >

Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-13 Thread Eric Fried
Ben- On 07/13/2018 10:12 AM, Ben Nemec wrote: > > > On 07/12/2018 04:29 PM, Eric Fried wrote: >> Here it is for nova. >> >> https://review.openstack.org/#/c/582392/ >> >>>> also don't love that immediately bumping the lower bound for tox is >>&

Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-12 Thread Eric Fried
rsion is 1.6, required is at least 3.1.1 $ sudo pip install --upgrade tox $ tox -e blah ? Thanks, efried On 07/09/2018 03:58 PM, Doug Hellmann wrote: > Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500: >> >> On 07/09/2018 11:16 AM, Eric Fried wrote: >>&g

[openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure

2018-07-09 Thread Eric Fried
In gabbi, there's a way [1] to mark a test as an expected failure, which makes it show up in your stestr run thusly: {0} nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request [0.710821s] ... ok == Totals == Ran:
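For reference, gabbi's expected-failure marker is the `xfail` key on a test; a minimal sketch (test name and URL invented for illustration) looks like:

```yaml
tests:
- name: demonstrate an expected failure
  GET: /nonexistent
  status: 200
  xfail: True
```

The complaint here is that when such a test unexpectedly *succeeds*, stestr reports it as "ok" in the totals rather than flagging the run as failed.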

Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-09 Thread Eric Fried
Doug- How long til we can start relying on the new behavior in the gate? I gots me some basepython to purge... -efried On 07/09/2018 11:03 AM, Doug Hellmann wrote: > Heads-up, there is a new tox release out. 3.1 includes some behavior > changes in the way basepython behaves (thanks,

Re: [openstack-dev] [keystone] Keystone Team Update - Week of 18 June 2018

2018-06-22 Thread Eric Fried
Also: keystoneauth1 3.9.0 was released. Its new feature is the ability to set raise_exc on the Adapter object so you don't have to do it per request. Here's a patch that makes use of the feature: https://review.openstack.org/#/c/577437/ -efried On 06/22/2018 06:53 AM, Colleen Murphy wrote: > #

[openstack-dev] [nova][oot drivers] Putting a contract out on ComputeDriver.get_traits()

2018-06-19 Thread Eric Fried
All (but especially out-of-tree compute driver maintainers)- ComputeDriver.get_traits() was introduced mere months ago [1] for initial implementation by Ironic [2] mainly because the whole update_provider_tree framework [3] wasn't fully baked yet. Now that update_provider_tree is a

Re: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt'

2018-06-15 Thread Eric Fried
Doug- > The lower constraints tests only look at files in the same repo. > The minimum versions of dependencies set in requirements.txt, > test-requirements.txt, etc. need to match the values in > lower-constraints.txt. > > In this case, the more detailed error message is a few lines above the >

Re: [openstack-dev] [nova] [placement] placement update 18-24

2018-06-15 Thread Eric Fried
Thank you as always for doing this, Chris. > Some of the older items in this list are not getting much attention. > That's a shame. The list is ordered (oldest first) the way it is on > purpose. > > * >   Purge comp_node and res_prvdr records during

Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad

2018-06-15 Thread Eric Fried
We just merged an initial pass at direct access to the placement service [1]. See the test_direct suite for simple usage examples. Note that this was written primarily to satisfy the FFU use case in blueprint reshape-provider-tree [2] and therefore likely won't have everything cinder needs. So

Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-11 Thread Eric Fried
oint saying that virtio-blk is no longer >> limited by the number of PCI slots available, at least with recent kernel and >> QEMU versions [0]. >> >> I could go along with what you are suggesting at the bottom and fix the limit >> to 256 disks. > > Right, that's for KVM-base

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-08 Thread Eric Fried
There is now a blueprint [1] and draft spec [2]. Reviews welcomed. [1] https://blueprints.launchpad.net/nova/+spec/reshape-provider-tree [2] https://review.openstack.org/#/c/572583/ On 06/04/2018 06:00 PM, Eric Fried wrote: > There has been much discussion. We've gotten to a po

Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Eric Fried
Alex- Allocations for an instance are pulled down by the compute manager and passed into the virt driver's spawn method since [1]. An allocation comprises a consumer, provider, resource class, and amount. Once we can schedule to trees, the allocations pulled down by the compute manager

Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Eric Fried
To summarize: cyborg could model things nested-wise, but there would be no way to schedule them yet. Couple of clarifications inline. On 06/05/2018 08:29 AM, Jay Pipes wrote: > On 06/05/2018 08:50 AM, Stephen Finucane wrote: >> I thought nested resource providers were already supported by >>

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-04 Thread Eric Fried
wrote: > On 05/31/2018 02:26 PM, Eric Fried wrote: >>> 1. Make everything perform the pivot on compute node start (which can be >>>     re-used by a CLI tool for the offline case) >>> 2. Make everything default to non-nested inventory at first, and provide >>>    

Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-04 Thread Eric Fried
Sundar- We've been discussing the upgrade path on another thread [1] and are working toward a solution [2][3] that would not require downtime or special scripts (other than whatever's normally required for an upgrade). We still hope to have all of that ready for Rocky, but if

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Eric Fried
Sylvain- On 05/31/2018 02:41 PM, Sylvain Bauza wrote: > > > On Thu, May 31, 2018 at 8:26 PM, Eric Fried wrote: > > > 1. Make everything perform the pivot on compute node start (which can be > >    re-used by a CLI tool for

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-31 Thread Eric Fried
Chris- >> virt driver isn't supposed to talk to placement directly, or know >> anything about allocations? > > For sake of discussion, how much (if any) easier would it be if we > got rid of this restriction? At this point, having implemented the update_[from_]provider_tree flow as we have, it

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-31 Thread Eric Fried
Rats, typo correction below. On 05/31/2018 01:26 PM, Eric Fried wrote: >> 1. Make everything perform the pivot on compute node start (which can be >>re-used by a CLI tool for the offline case) >> 2. Make everything default to non-nested inventory at first, and provide >

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-31 Thread Eric Fried
> 1. Make everything perform the pivot on compute node start (which can be >re-used by a CLI tool for the offline case) > 2. Make everything default to non-nested inventory at first, and provide >a way to migrate a compute node and its instances one at a time (in >place) to roll

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-31 Thread Eric Fried
This seems reasonable, but... On 05/31/2018 04:34 AM, Balázs Gibizer wrote: > > > On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza wrote: >>> >> >> After considering the whole approach, discussing with a couple of >> folks over IRC, here is what I feel the best approach for a seamless >>

Re: [openstack-dev] [Cyborg] [Nova] Cyborg traits

2018-05-31 Thread Eric Fried
, Nadathur, Sundar wrote: > On 5/30/2018 1:18 PM, Eric Fried wrote: >> This all sounds fully reasonable to me.  One thing, though... >> >>>>    * There is a resource class per device category e.g. >>>> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. >

Re: [openstack-dev] [Cyborg] [Nova] Cyborg traits

2018-05-30 Thread Eric Fried
This all sounds fully reasonable to me. One thing, though... >> * There is a resource class per device category e.g. >> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. Let's propose standard resource classes for these ASAP.

Re: [openstack-dev] [nova] Extra feature of vCPU allocation on demands

2018-05-07 Thread Eric Fried
I will be interested to watch this develop. In PowerVM we already have shared vs. dedicated processors [1] along with concepts like capped vs. uncapped, min/max proc units, weights, etc. But obviously it's all heavily customized to be PowerVM-specific. If these concepts made their way into

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-05-03 Thread Eric Fried
>> verify with placement >> whether the image traits requested are 1) supported by the compute >> host the instance is residing on and 2) coincide with the >> already-existing allocations. Note that #2 is a subset of #1. The only potential advantage of including #1 is efficiency: We can do #1 in

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-24 Thread Eric Fried
Alex- On 04/24/2018 09:21 AM, Alex Xu wrote: > > > 2018-04-24 20:53 GMT+08:00 Eric Fried: > > > The problem isn't just checking the traits in the nested resource > > provider. We also need to ens

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-24 Thread Eric Fried
> The problem isn't just checking the traits in the nested resource > provider. We also need to ensure the trait in the exactly same child > resource provider. No, we can't get "granular" with image traits. We accepted this as a limitation for the spawn aspect of this spec [1], for all the same

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-23 Thread Eric Fried
> for the GET /resource_providers?in_tree==, nested resource providers and allocations pose a problem (see #3 above). This *would* work as a quick up-front check as Jay described (if you get no results from this, you know that at least one of your image traits doesn't exist anywhere in the tree)

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-23 Thread Eric Fried
raits were in the image but not in the instance's RPs: %s") % ', '.join(missing_traits)) [1] https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L986 On 04/23/2018 03:47 PM, Matt Riedemann wrote: > On 4/23/2018 3:26 PM, Eric Fried wrote: >> No, the que

Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-23 Thread Eric Fried
Semantically, GET /allocation_candidates where we don't actually want to allocate anything (i.e. we don't want to use the returned candidates) is goofy, and talking about what the result would look like when there's no `resources` is going to spider into some weird questions. Like what does the

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
%2013:48:39.213790#a1c [2] https://review.openstack.org/#/c/562687/ On 04/19/2018 07:38 AM, Balázs Gibizer wrote: > > > On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried <openst...@fried.cc> wrote: >> gibi- >> >>>  Can the proximity param specify

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
Sylvain- > What's the default behaviour if we aren't providing the proximity qparam > ? Isolate or any ? What we've been talking about, per mriedem's suggestion, is that the qparam is required when you specify any numbered request groups. There is no default. If you don't provide the qparam,

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
Chris- Thanks for this perspective. I totally agree. > * the common behavior should require the least syntax. To that point, I had been assuming "any fit" was going to be more common than "explicit anti-affinity". But I think this is where we are having trouble agreeing. So since, as you

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Eric Fried
gibi- > Can the proximity param specify relationship between the un-numbered and > the numbered groups as well or only between numbered groups? > Besides that I'm +1 about proximity={isolate|any} Remembering that the resources in the un-numbered group can be spread around the tree and sharing

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
> I have a feeling we're just going to go back and forth on this, as we > have for weeks now, and not reach any conclusion that is satisfactory to > everyone. And we'll delay, yet again, getting functionality into this > release that serves 90% of use cases because we are obsessing over the >

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
Sorry, addressing gaffe, bringing this back on-list... On 04/18/2018 04:36 PM, Ed Leafe wrote: > On Apr 18, 2018, at 4:11 PM, Eric Fried <openst...@fried.cc> wrote: >>> That makes a lot of sense. Since we are already suffixing the query param >>> “resources” to ind

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
> Cool. So let's not use a GET for this and instead change it to a POST > with a request body that can more cleanly describe what the user is > requesting, which is something we talked about a long time ago. I kinda doubt we could agree on a format for this in the Rocky timeframe. But for the

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
Chris- Going to accumulate a couple of your emails and answer them. I could have answered them separately (anti-affinity). But in this case I felt it appropriate to provide responses in a single note (best fit). > I'm a bit conflicted.  On the one hand... > On the other hand, Right;

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
I can't tell if you're being facetious, but this seems sane, albeit complex. It's also extensible as we come up with new and wacky affinity semantics we want to support. I can't say I'm sold on requiring `proximity` qparams that cover every granular group - that seems like a pretty onerous

Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
9:06 AM, Jay Pipes wrote: > Stackers, > > Eric Fried and I are currently at an impasse regarding a decision that > will have far-reaching (and end-user facing) impacts to the placement > API and how nova interacts with the placement service from the nova > scheduler. > >

Re: [openstack-dev] [placement] Anchor/Relay Providers

2018-04-16 Thread Eric Fried
> I still don't see a use in returning the root providers in the > allocation requests -- since there is nothing consuming resources from > those providers. > > And we already return the root_provider_uuid for all providers involved > in allocation requests within the provider_summaries section.
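Jay's point is that the anchor/root is already recoverable from `provider_summaries`, since every provider entry carries a `root_provider_uuid`. A sketch of that response shape (UUIDs and resource figures are made up; the real payload comes from `GET /allocation_candidates`):

```python
# Simplified provider_summaries section: each provider, nested or not,
# reports the root of its tree, so the set of anchor providers can be
# derived without repeating them in the allocation requests.
provider_summaries = {
    "child-uuid": {
        "resources": {"SRIOV_NET_VF": {"capacity": 4, "used": 0}},
        "root_provider_uuid": "root-uuid",
    },
    "root-uuid": {
        "resources": {"VCPU": {"capacity": 8, "used": 2}},
        "root_provider_uuid": "root-uuid",
    },
}

# Recover the anchor(s) of the candidate tree(s).
roots = {s["root_provider_uuid"] for s in provider_summaries.values()}
print(sorted(roots))
# prints ['root-uuid']
```

This is why returning root providers that contribute no resources inside the allocation requests themselves adds nothing the client cannot already compute.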

Re: [openstack-dev] [placement] Anchor/Relay Providers

2018-04-16 Thread Eric Fried
ond. Comments inline. > > On 03/30/2018 08:34 PM, Eric Fried wrote: >> Folks who care about placement (but especially Jay and Tetsuro)- >> >> I was reviewing [1] and was at first very unsatisfied that we were not >> returning the anchor providers in the results.  But

Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt

2018-04-12 Thread Eric Fried
…three lines of code really worth making future cleanup > harder? Is a three line change really blocking "an approved blueprint > with ready code"? > > Michael > > > > On Thu, Apr 12, 2018 at 10:42 PM, Eric Fried <openst...@fried.cc > <mailto:opens

Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt

2018-04-12 Thread Eric Fried
+1 This sounds reasonable to me. I'm glad the issue was raised, but IMO it shouldn't derail progress on an approved blueprint with ready code. Jichen, would you please go ahead and file that blueprint template (no need to write a spec yet) and link it in a review comment on the bottom zvm patch

Re: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event

2018-04-11 Thread Eric Fried
Jichen was able to use this information immediately, to great benefit [1]. (If those paying attention could have a quick look at that to make sure he used it right, it would be appreciated; I'm not an expert here.) [1] https://review.openstack.org/#/c/527658/31..32/nova/virt/zvm/guest.py@192 On

Re: [openstack-dev] [nova] [placement] placement update 18-14

2018-04-06 Thread Eric Fried
>> it's really on nested allocation candidates. > > Yup. And that series is deadlocked on a disagreement about whether > granular request groups should be "separate by default" (meaning: if you > request multiple groups of resources, the expectation is that they will > be served by distinct

Re: [openstack-dev] [nova] Proposing Eric Fried for nova-core

2018-04-03 Thread Eric Fried
PM, melanie witt wrote: > On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote: >> Howdy everyone, >> >> I'd like to propose that we add Eric Fried to the nova-core team. >> >> Eric has been instrumental to the placement effort with his work on >> nested r

Re: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto

2018-03-31 Thread Eric Fried
Mr. Fire- > nova-powervm: no open reviews > - in test-requirements, but not actually used? > - made https://review.openstack.org/558091 for it Thanks for that. It passed all our tests; we should merge it early next week. -efried

Re: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests

2018-03-31 Thread Eric Fried
Hi Doug, I made this [2] for you. I tested it locally with oslo.config master, and whereas I started off with a slightly different set of errors than you show at [1], they were in the same suites. Since I didn't want to tox the world locally, I went ahead and added a Depends-On from [3]. Let's

Re: [openstack-dev] [placement] Anchor/Relay Providers

2018-03-31 Thread Eric Fried
…resource_provider.py@2658 > [6] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py@3303 > [7] https://review.openstack.org/#/c/558014/ [8] https://review.openstack.org/#/c/558045/ [9] https://review.openstack.org/#/c/558044/ On 03/30/2018 07:34 PM, Eric
