Re: [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful?

2018-11-30 Thread Mohammed Naser
On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote: > I have a request to do $SUBJECT in relation to a V2V workflow. The use > case here is conversion of a VM/Physical which was previously powered > off. We want to move its data, but we don't want to be powering on > stuff which wasn't

Re: [Openstack-operators] Nova hypervisor uuid

2018-11-27 Thread Matt Riedemann
On 11/27/2018 11:32 AM, Ignazio Cassano wrote: Hi All, does anyone know where the hypervisor uuid is retrieved? Sometimes, updating KVM nodes with yum update changes it, and in the nova database 2 uuids end up assigned to the same node. regards Ignazio

[Openstack-operators] Nova hypervisor uuid

2018-11-27 Thread Ignazio Cassano
Hi All, does anyone know where the hypervisor uuid is retrieved? Sometimes, updating KVM nodes with yum update changes it, and in the nova database 2 uuids end up assigned to the same node. regards Ignazio
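One hedged way to spot the duplicate records described here (assuming the default 'nova' cell database and the standard compute_nodes table layout) is to list the registered compute nodes and their uuids directly:

    mysql nova -e "SELECT id, hypervisor_hostname, uuid, deleted FROM compute_nodes ORDER BY hypervisor_hostname;"

Any hostname appearing twice with different uuids, both with deleted = 0, is the situation described in this thread.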

[Openstack-operators] [nova][placement] Placement requests and caching in the resource tracker

2018-11-02 Thread Eric Fried
All- Based on a (long) discussion yesterday [1] I have put up a patch [2] whereby you can set [compute]resource_provider_association_refresh to zero and the resource tracker will never* refresh the report client's provider cache. Philosophically, we're removing the "healing" aspect of the
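A minimal nova.conf sketch of the setting under discussion; the zero value assumes the patch referenced above has landed in your release:

    [compute]
    # 0 = never refresh the report client's provider cache
    resource_provider_association_refresh = 0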

[Openstack-operators] [nova] Is anyone running their own script to purge old instance_faults table entries?

2018-11-01 Thread Matt Riedemann
I came across this bug [1] in triage today and I thought this was fixed already [2] but either something regressed or there is more to do here. I'm mostly just wondering, are operators already running any kind of script which purges old instance_faults table records before an instance is
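For illustration only, a rough sketch of the kind of purge the thread asks about; the table name comes from the thread, the 90-day cutoff is an arbitrary example, and any real script should archive or back up rows first and be tested outside production:

    mysql nova -e "DELETE FROM instance_faults WHERE created_at < (NOW() - INTERVAL 90 DAY);"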

Re: [Openstack-operators] [nova] Removing the CachingScheduler

2018-10-24 Thread Matt Riedemann
On 10/18/2018 5:07 PM, Matt Riedemann wrote: It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work

[Openstack-operators] [nova] Removing the CachingScheduler

2018-10-18 Thread Matt Riedemann
It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work for migrating his deployment from the

[Openstack-operators] [nova] nova-xvpvncproxy CLI options

2018-10-01 Thread Stephen Finucane
tl;dr: Is anyone calling 'nova-novncproxy' or 'nova-serialproxy' with CLI arguments instead of a configuration file? I've been doing some untangling of the console proxy services that nova provides and trying to clean up the documentation for same [1]. As part of these fixes, I noted a couple of

[Openstack-operators] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-14 Thread Matt Riedemann
tl;dr: I'm proposing a new parameter to the server stop (and suspend?) APIs to control if nova shelve offloads the server. Long form: This came up during the public cloud WG session this week based on a couple of feature requests [1][2]. When a user stops/suspends a server, the hypervisor

Re: [Openstack-operators] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-09-06 15:58:41 -0500: > I wanted to recap some upgrade-specific stuff from today outside of the > other [1] technical extraction thread. > > Chris has a change up for review [2] which prompted the discussion. > > That change makes placement only

[Openstack-operators] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Matt Riedemann
I wanted to recap some upgrade-specific stuff from today outside of the other [1] technical extraction thread. Chris has a change up for review [2] which prompted the discussion. That change makes placement only work with placement.conf, not nova.conf, but does get a passing tempest run in

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Matt Riedemann
On 8/29/2018 3:21 PM, Tim Bell wrote: Sounds like a good topic for PTG/Forum? Yeah it's already on the PTG agenda [1][2]. I started the thread because I wanted to get the ball rolling as early as possible, and with people that won't attend the PTG and/or the Forum, to weigh in on not only

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell
such as preserving IP addresses etc. Sounds like a good topic for PTG/Forum? Tim -Original Message- From: Jay Pipes Date: Wednesday, 29 August 2018 at 22:12 To: Dan Smith , Tim Bell Cc: "openstack-operators@lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neut

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 04:04 PM, Dan Smith wrote: - The VMs to be migrated are generally not expensive configurations, just hardware lifecycles where boxes go out of warranty or computer centre rack/cooling needs re-organising. For CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
> - The VMs to be migrated are generally not expensive > configurations, just hardware lifecycles where boxes go out of > warranty or computer centre rack/cooling needs re-organising. For > CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a > ~30% pet share) > - We make a

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell
2018 at 18:47 To: Jay Pipes Cc: "openstack-operators@lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration > A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this s

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 02:26 PM, Chris Friesen wrote: On 08/29/2018 10:02 AM, Jay Pipes wrote: Also, I'd love to hear from anyone in the real world who has successfully migrated (live or otherwise) an instance that "owns" expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). I thought

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Chris Friesen
On 08/29/2018 10:02 AM, Jay Pipes wrote: Also, I'd love to hear from anyone in the real world who has successfully migrated (live or otherwise) an instance that "owns" expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). I thought cold migration of instances with such devices was

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 12:39 PM, Dan Smith wrote: If we're going to discuss removing move operations from Nova, we should do that in another thread. This one is about making existing operations work :) OK, understood. :) The admin only "owns" the instance because we have no ability to transfer

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
> A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this scenario as I can imagine. And > there's a reason release upgrades are not done entirely within Nova; > clearly an external upgrade tool or script is needed to orchestrate > the many steps and

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
I respect your opinion but respectfully disagree that this is something we need to spend our time on. Comments inline. On 08/29/2018 10:47 AM, Dan Smith wrote: * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
>> * Cells can shard across flavors (and hardware type) so operators >> would like to move users off the old flavors/hardware (old cell) to >> new flavors in a new cell. > > So cell migrations are kind of the new release upgrade dance. Got it. No, cell migrations are about moving instances

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
Sorry for delayed response. Was on PTO when this came out. Comments inline... On 08/22/2018 09:23 PM, Matt Riedemann wrote: Hi everyone, I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova.

[Openstack-operators] [nova] Deprecating Core/Disk/RamFilter

2018-08-24 Thread Matt Riedemann
This is just an FYI that I have proposed that we deprecate the core/ram/disk filters [1]. We should have probably done this back in Pike when we removed them from the default enabled_filters list and also deprecated the CachingScheduler, which is the only in-tree scheduler driver that benefits
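For context, a hedged nova.conf sketch of the post-Pike arrangement this deprecation assumes: capacity is enforced by placement using per-compute allocation ratios, and the core/ram/disk filters are simply absent from enabled_filters (the list below is roughly the Pike-era default):

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

    [DEFAULT]
    # overcommit is now expressed per compute node and reported to placement
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5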

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-22 Thread Sam Morrison
I think in our case we’d only migrate between cells if we know the network and storage are accessible, and would never do it if not. We're thinking of moving from old to new hardware at a cell level. If storage and network aren’t available, ideally it would fail at the API request. There is also ceph

[Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-22 Thread Matt Riedemann
Hi everyone, I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova. At a high level, I am going off these requirements: * Cells can shard across flavors (and hardware type) so operators would

Re: [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-21 Thread Lee Yarwood
On 20-08-18 16:29:52, Matthew Booth wrote: > For those who aren't familiar with it, nova's volume-update (also > called swap volume by nova devs) is the nova part of the > implementation of cinder's live migration (also called retype). > Volume-update is essentially an internal cinder<->nova api,

[Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-20 Thread Matthew Booth
For those who aren't familiar with it, nova's volume-update (also called swap volume by nova devs) is the nova part of the implementation of cinder's live migration (also called retype). Volume-update is essentially an internal cinder<->nova api, but as that's not a thing it's also unfortunately

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-18 Thread Matt Riedemann
On 8/11/2018 12:50 AM, Chris Apsey wrote: This sounds promising and there seems to be a feasible way to do this, but it also sounds like a decent amount of effort and would be a new feature in a future release rather than a bugfix - am I correct in that assessment? Yes I'd say it's a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-10 Thread Chris Apsey
This sounds promising and there seems to be a feasible way to do this, but it also sounds like a decent amount of effort and would be a new feature in a future release rather than a bugfix - am I correct in that assessment? On August 9, 2018 13:30:31 "Daniel P. Berrangé" wrote: On Thu,

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-09 Thread Chris Apsey
Exactly. And I agree, it seems like hw_architecture should dictate which emulator is chosen, but as you mentioned its currently not. I'm not sure if this is a bug and it's supposed to 'just work', or just something that was never fully implemented (intentionally) and would be more of a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Matt Riedemann
On 8/8/2018 2:42 PM, Chris Apsey wrote: qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 packages, but they perform system-mode emulation (via dynamic instruction translation) for those target environments.  So, you run qemu-system-ppc64 on an x86 host in order to get a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Chris Apsey
Matt, qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 packages, but they perform system-mode emulation (via dynamic instruction translation) for those target environments. So, you run qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM. Our use case

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Matt Riedemann
On 8/7/2018 8:54 AM, Chris Apsey wrote: We don't actually have any non-x86 hardware at the moment - we're just looking to run certain workloads in qemu full emulation mode sans KVM extensions (we know there is a huge performance hit - it's just for a few very specific things).  The hosts I'm

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-07 Thread Chris Apsey
Hey Matt, We don't actually have any non-x86 hardware at the moment - we're just looking to run certain workloads in qemu full emulation mode sans KVM extensions (we know there is a huge performance hit - it's just for a few very specific things). The hosts I'm talking about are normal

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-07 Thread Matt Riedemann
On 8/5/2018 1:43 PM, Chris Apsey wrote: Trying to enable some alternate (non-x86) architectures on xenial + queens.  I can load up images and set the property correctly according to the supported values (https://docs.openstack.org/nova/queens/configuration/config.html) in

Re: [Openstack-operators] [nova] StarlingX diff analysis

2018-08-07 Thread Matt Riedemann
On 8/7/2018 1:10 AM, Flint WALRUS wrote: I didn't have time to check StarlingX code quality; how did it feel while you were doing your analysis? I didn't dig into the test diffs themselves, but my impression from poking around in the local git repo was that there were

Re: [Openstack-operators] [nova] StarlingX diff analysis

2018-08-07 Thread Flint WALRUS
Hi Matt, everyone, I just read your analysis and would like to thank you for such work. I really think there are numerous features included/used in this Nova rework that would be highly beneficial for Nova and its users. I hope people will fairly appreciate your work. I didn't have time to

[Openstack-operators] [nova] StarlingX diff analysis

2018-08-06 Thread Matt Riedemann
In case you haven't heard, there was this StarlingX thing announced at the last summit. I have gone through the enormous nova diff in their repo and the results are in a spreadsheet [1]. Given the enormous spreadsheet (see a pattern?), I have further refined that into a set of high-level

[Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-05 Thread Chris Apsey
All, Trying to enable some alternate (non-x86) architectures on xenial + queens. I can load up images and set the property correctly according to the supported values (https://docs.openstack.org/nova/queens/configuration/config.html) in image_properties_default_architecture. From what I
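For reference, the image property discussed in this thread is set like this (ppc64 and armv7l are the example architectures from the thread; whether the libvirt driver then picks the matching qemu-system-* binary is exactly the open question here):

    openstack image set --property hw_architecture=ppc64 <image-uuid>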

Re: [Openstack-operators] [nova] Couple of CellsV2 questions

2018-07-24 Thread Jonathan Mills
Thanks, Matt. Those are all good suggestions, and we will incorporate your feedback into our plans. On 07/23/2018 05:57 PM, Matt Riedemann wrote: > I'll try to help a bit inline. Also cross-posting to openstack-dev and > tagging with [nova] to highlight it. > > On 7/23/2018 10:43 AM, Jonathan

Re: [Openstack-operators] [nova] Couple of CellsV2 questions

2018-07-23 Thread Matt Riedemann
I'll try to help a bit inline. Also cross-posting to openstack-dev and tagging with [nova] to highlight it. On 7/23/2018 10:43 AM, Jonathan Mills wrote: I am looking at implementing CellsV2 with multiple cells, and there's a few things I'm seeking clarification on: 1) How does a

Re: [Openstack-operators] [nova] Cinder cross_az_attach=False changes/fixes

2018-07-15 Thread Matt Riedemann
Just an update on an old thread, but I've been working on the cross_az_attach=False issues again this past week and I think I have a couple of decent fixes. On 5/31/2017 6:08 PM, Matt Riedemann wrote: This is a request for any operators out there that configure nova to set: [cinder]

[Openstack-operators] [nova] Denver Stein ptg planning

2018-07-11 Thread melanie witt
Hello Devs and Ops, I've created an etherpad where we can start collecting ideas for topics to cover at the Stein PTG. Please feel free to add your comments and topics with your IRC nick next to it to make it easier to discuss with you. https://etherpad.openstack.org/p/nova-ptg-stein

[Openstack-operators] [nova] increasing the number of allowed volumes attached per instance > 26

2018-06-07 Thread melanie witt
Hello Stackers, Recently, we've received interest about increasing the maximum number of allowed volumes to attach to a single instance > 26. The limit of 26 is because of a historical limitation in libvirt (if I remember correctly) and is no longer limited at the libvirt level in the present

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-06-07 Thread Matt Riedemann
On 2/6/2018 6:44 PM, Matt Riedemann wrote: On 2/6/2018 2:14 PM, Chris Apsey wrote: but we would rather have intermittent build failures rather than compute nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So

[Openstack-operators] [nova] Need feedback on spec for handling down cells in the API

2018-06-07 Thread Matt Riedemann
We have a nova spec [1] which is at the point that it needs some API user (and operator) feedback on what nova API should be doing when listing servers and there are down cells (unable to reach the cell DB or it times out). tl;dr: the spec proposes to return "shell" instances which have the

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Chris Friesen
On 06/04/2018 05:43 AM, Tobias Urdin wrote: Hello, I have received a question about a more specialized use case where we need to isolate several hypervisors to a specific project. My first thinking was using nova flavors for only that project and add extra specs properties to use a specific

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Tobias Urdin
I saw now in the docs that multiple aggregate_instance_extra_specs keys should be a comma-separated list. But other than that, would the below do what I'm looking for? It has a very high maintenance overhead when you have a lot of hypervisors and are steadily adding new ones, but I can't see any other way to
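A sketch of the approach being discussed, using illustrative names; it assumes AggregateInstanceExtraSpecsFilter is enabled for the flavor pinning and AggregateMultiTenancyIsolation for keeping other projects off the reserved hosts:

    openstack aggregate create project-x-hosts
    openstack aggregate add host project-x-hosts compute-17
    openstack aggregate set --property dedicated=project-x \
        --property filter_tenant_id=<project-x-uuid> project-x-hosts

    # every flavor offered to the project needs the matching scoped key
    openstack flavor set --property aggregate_instance_extra_specs:dedicated=project-x m1.projectx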

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Tobias Urdin
Hello, Thanks for the reply Matt. The hard thing here is that I have to ensure it the other way around as well, i.e. other instances cannot be allowed to land on those "reserved" hypervisors. I assume I could do something like in [1] and also set key-value metadata on all flavors to select a host

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Matt Riedemann
On 6/4/2018 6:43 AM, Tobias Urdin wrote: I have received a question about a more specialized use case where we need to isolate several hypervisors to a specific project. My first thinking was using nova flavors for only that project and add extra specs properties to use a specific host

[Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Tobias Urdin
Hello, I have received a question about a more specialized use case where we need to isolate several hypervisors to a specific project. My first thinking was using nova flavors for only that project and add extra specs properties to use a specific host aggregate but this means I need to assign

[Openstack-operators] [nova] proposal to postpone nova-network core functionality removal to Stein

2018-05-31 Thread melanie witt
Hello Operators and Devs, This cycle at the PTG, we had decided to start making some progress toward removing nova-network [1] (thanks to those who have helped!) and so far, we've landed some patches to extract common network utilities from nova-network core functionality into separate

Re: [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-29 Thread Matt Riedemann
On 5/28/2018 7:31 AM, Sylvain Bauza wrote: That said, given I'm now working on using Nested Resource Providers for VGPU inventories, I wonder about a possible upgrade problem with VGPU allocations. Given that :  - in Queens, VGPU inventories are for the root RP (ie. the compute node RP), but,

Re: [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-28 Thread Sylvain Bauza
On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann wrote: > I've written a nova-manage placement heal_allocations CLI [1] which was a > TODO from the PTG in Dublin as a step toward getting existing > CachingScheduler users to roll off that (which is deprecated). > > During the

[Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-24 Thread Matt Riedemann
I've written a nova-manage placement heal_allocations CLI [1] which was a TODO from the PTG in Dublin as a step toward getting existing CachingScheduler users to roll off that (which is deprecated). During the CERN cells v1 upgrade talk it was pointed out that CERN was able to go from
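A hedged usage sketch; the exact flags are assumptions about the interface as proposed, so check `nova-manage placement heal_allocations --help` on your release:

    nova-manage placement heal_allocations --verbose --max-count 50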

[Openstack-operators] [nova] FYI on changes that might impact out of tree scheduler filters

2018-05-17 Thread Matt Riedemann
CERN has upgraded to Cells v2 and is doing performance testing of the scheduler and were reporting some things today which got us back to this bug [1]. So I've started pushing some patches related to this but also related to an older blueprint I created [2]. In summary, we do quite a bit of

[Openstack-operators] [nova] [placement] placement extraction session at forum

2018-05-09 Thread Chris Dent
I've started an etherpad related to the Vancouver Forum session on extracting placement from nova. It's mostly just an outline for now but is evolving: https://etherpad.openstack.org/p/YVR-placement-extraction If we can get some real information in there before the session we are much more

[Openstack-operators] [nova][ironic] ironic_host_manager and baremetal scheduler options removal

2018-05-02 Thread Matt Riedemann
The baremetal scheduling options were deprecated in Pike [1] and the ironic_host_manager was deprecated in Queens [2] and is now being removed [3]. Deployments must use resource classes now for baremetal scheduling. [4] The large host subset size value is also no longer needed. [5] I've gone
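The resource-class-based scheduling referred to here looks roughly like the following (names are illustrative; the ironic node's resource_class is mapped to a CUSTOM_* placement class which the flavor requests instead of VCPU/RAM/disk):

    openstack baremetal node set <node-uuid> --resource-class baremetal.gold

    openstack flavor set \
        --property resources:CUSTOM_BAREMETAL_GOLD=1 \
        --property resources:VCPU=0 \
        --property resources:MEMORY_MB=0 \
        --property resources:DISK_GB=0 \
        bm.gold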

[Openstack-operators] [nova] Concern about trusted certificates API change

2018-04-18 Thread Matt Riedemann
There is a compute REST API change proposed [1] which will allow users to pass trusted certificate IDs to be used with validation of images when creating or rebuilding a server. The trusted cert IDs are based on certificates stored in some key manager, e.g. Barbican. The full nova spec is
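A rough sketch of what the proposed request could look like; the field name and its placement in the server-create body are assumptions based on this proposal, not a confirmed final API:

    POST /servers  (with a new enough compute API microversion)
    {
        "server": {
            "name": "signed-image-server",
            "imageRef": "<image-uuid>",
            "flavorRef": "<flavor-id>",
            "networks": "auto",
            "trusted_image_certificates": ["<cert-id-1>", "<cert-id-2>"]
        }
    }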

Re: [Openstack-operators] [nova] Rocky forum topics brainstorming

2018-04-18 Thread melanie witt
On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote: +openstack-operators (apologies that I forgot to add originally) On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: Hey everyone, Let's collect forum topic brainstorming ideas for the Forum sessions in Vancouver in this etherpad [0].

[Openstack-operators] [nova] Default scheduler filters survey

2018-04-18 Thread Artom Lifshitz
Hi all, A CI issue [1] caused by tempest thinking some filters are enabled when they're really not, and a proposed patch [2] to add (Same|Different)HostFilter to the default filters as a workaround, has led to a discussion about what filters should be enabled by default in nova. The default
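The operator-side equivalent of the default change proposed in [2] is something like the following in nova.conf (the list shown is roughly the default set plus the two hint filters):

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter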

Re: [Openstack-operators] [nova] Rocky forum topics brainstorming

2018-04-13 Thread melanie witt
+openstack-operators (apologies that I forgot to add originally) On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: Hey everyone, Let's collect forum topic brainstorming ideas for the Forum sessions in Vancouver in this etherpad [0]. Once we've brainstormed, we'll select and submit our

[Openstack-operators] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt

2018-04-11 Thread Michael Still
Hi, https://review.openstack.org/#/c/523387 proposes adding a z/VM specific dependency to nova's requirements.txt. When I objected, the counter-argument was that we already have examples of Windows-specific dependencies (os-win) and PowerVM-specific dependencies in that file. I think perhaps all

Re: [Openstack-operators] Nova resources are out of sync in ocata version

2018-04-09 Thread Saverio Proto
It works for me in Newton. Try it at your own risk :) Cheers, Saverio 2018-04-09 13:23 GMT+02:00 Anwar Durrani : > No this is different one. should i try this one ? if it works ? > > On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote: >> >> Hello

Re: [Openstack-operators] Nova resources are out of sync in ocata version

2018-04-09 Thread Anwar Durrani
No, this is a different one. Should I try this one? Will it work? On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote: > Hello Anwar, > > are you talking about this script? > https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py > > it

[Openstack-operators] Nova resources are out of sync in ocata version

2018-04-09 Thread Anwar Durrani
Hi All, Nova resources are out of sync in our Ocata deployment: the values shown on the dashboard do not match the actual running instances. I remember I had a script to auto-sync resources, but that script is failing in this case. Kindly help here. -- Thanks & regards, Anwar M. Durrani

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Alex Schultz
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote: > On Mon, 2 Apr 2018, Alex Schultz wrote: > >> So this is/was valid. A few years back there was some perf tests done >> with various combinations of process/threads and for Keystone it was >> determined that threads should

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Jay Pipes
On 04/03/2018 06:48 AM, Chris Dent wrote: On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there was some perf tests done with various combinations of process/threads and for Keystone it was determined that threads should be 1 while you should adjust the process

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Chris Dent
On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there was some perf tests done with various combinations of process/threads and for Keystone it was determined that threads should be 1 while you should adjust the process count (hence the bug). Now I guess the

Re: [Openstack-operators] nova-placement-api tuning

2018-04-02 Thread Alex Schultz
On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell wrote: > > > On 03/29/2018 02:13 AM, Belmiro Moreira wrote: >> >> Some lessons so far... >> - Scale keystone accordingly when enabling placement. > > > Speaking of which; I suppose I have the same question for keystone

Re: [Openstack-operators] [nova] about use spice console

2018-03-29 Thread 李杰
inal ------ From: "李杰"<li...@unitedstack.com>; Date: Thu, Mar 29, 2018 05:24 PM To: "openstack-operators"<openstack-operators@lists.openstack.org>; Subject: [Openstack-operators] [nova] about use spice console Hi,all Now I want to use

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: If I'm reading http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html right, it seems that the MPM is not pertinent when using WSGIDaemonProcess. It doesn't impact the number of wsgi processes that will exist or how they

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 04:24 AM, Chris Dent wrote: On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change apache mpm default from prefork to event/worker. - Increase the WSGI number of processes/threads considering where placement is running. If I'm reading
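For the mod_wsgi case, the knobs being discussed live on WSGIDaemonProcess rather than on the MPM; an illustrative vhost fragment (the process count, script path and the threads=1 choice echo the Keystone-derived advice elsewhere in this thread, and will vary by distro and host size):

    WSGIDaemonProcess nova-placement-api processes=8 threads=1 display-name=%{GROUP}
    WSGIProcessGroup nova-placement-api
    WSGIScriptAlias /placement /usr/bin/nova-placement-api
    WSGIApplicationGroup %{GLOBAL}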

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Matt Riedemann
On 3/29/2018 12:05 PM, Chris Dent wrote: Other suggestions? I'm looking at things like turning off scheduler_tracks_instance_changes, since affinity scheduling is not needed (at least so-far), but not sure that that will help with placement load (seems like it might, though?) This won't

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: placement python stack and kicks out the 401. So this mostly indicates that socket accept is taking forever. Well, this test connects and gets a 400 immediately: echo | nc -v apihost 8778 so I don't think it's at the socket level, but, I

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 01:19 AM, Chris Dent wrote: On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change apache mpm default from prefork to event/worker. - Increase the WSGI number of processes/threads considering where placement is running. Another option is to switch to nginx and uwsgi. In situations where the

[Openstack-operators] [nova] about use spice console

2018-03-29 Thread 李杰
Hi all, I now want to use the SPICE console to replace noVNC for instances, but the openstack documentation is a bit sparse on what configuration parameters to enable for SPICE console access. The result is that the nova-compute and nova-consoleauth services failed, and the log tells me the

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Belmiro Moreira
Hi, with the Ocata upgrade we decided to run local placements (one service per cellV1) because we were nervous about possible scalability issues and especially an increase in scheduling time. Fortunately, this has now been addressed with the placement-req-filter work. We started slowly to aggregate

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up. This should be well within the

[Openstack-operators] [nova] Hard fail if you try to rename an AZ with instances in it?

2018-03-27 Thread Matt Riedemann
Sylvain has had a spec up for awhile [1] about solving an old issue where admins can rename an AZ (via host aggregate metadata changes) while it has instances in it, which likely results in at least user confusion, but probably other issues later if you try to move those instances, e.g. the

Re: [Openstack-operators] nova 17.0.1 released (queens)

2018-03-07 Thread David Medberry
Thanks for the headsup Matt. On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann wrote: > I just wanted to give a heads up to anyone thinking about upgrading to > queens that nova has released a 17.0.1 patch release [1]. > > There are some pretty important fixes in there that

[Openstack-operators] nova 17.0.1 released (queens)

2018-03-07 Thread Matt Riedemann
I just wanted to give a heads up to anyone thinking about upgrading to queens that nova has released a 17.0.1 patch release [1]. There are some pretty important fixes in there that came up after the queens GA so if you haven't upgraded yet, I recommend going straight to that one instead of

Re: [Openstack-operators] [nova] [nova-lxd] Query regarding LXC instantiation using nova

2018-02-20 Thread James Page
Hi Amit (re-titled thread with scoped topics) As Matt has already referenced, [0] is a good starting place for using the nova-lxd driver. On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote: > Hello, > > I have a running OpenStack Ocata setup on which I am able to launch VMs. >

[Openstack-operators] [nova] Regression bug for boot from volume with IsolatedHostsFilter

2018-02-11 Thread Matt Riedemann
I triaged this bug a couple of weeks ago: https://bugs.launchpad.net/nova/+bug/1746483 It looks like it's been regressed since Mitaka when that filter started using the RequestSpec object rather than legacy filter_properties dict. Looking a bit deeper though, it looks like this filter never

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Matt Riedemann
On 2/6/2018 2:14 PM, Chris Apsey wrote: but we would rather have intermittent build failures rather than compute nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So if your limit is the default (10) and you

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Chris Apsey
All, This was the core issue - setting consecutive_build_service_disable_threshold = 0 in nova.conf (on controllers and compute nodes) solved this. It was being triggered by neutron dropping requests (and/or responses) for vif-plugging due to cpu usage on the neutron endpoints being pegged
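The fix described above, as a nova.conf fragment (applied to controllers and compute nodes; 0 disables the auto-disable behaviour entirely):

    [compute]
    consecutive_build_service_disable_threshold = 0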

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Chris Apsey
That looks promising. I'll report back to confirm the solution. Thanks! --- v/r Chris Apsey bitskr...@bitskrieg.net https://www.bitskrieg.net On 2018-01-31 04:40 PM, Matt Riedemann wrote: On 1/31/2018 3:16 PM, Chris Apsey wrote: All, Running in to a strange issue I haven't seen before.

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Eric Fried
There's [1], but I would have expected you to see error logs like [2] if that's what you're hitting. [1] https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L627-L645 [2] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1714-L1716 efried On 01/31/2018 03:16

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Matt Riedemann
On 1/31/2018 3:16 PM, Chris Apsey wrote: All, Running into a strange issue I haven't seen before. Randomly, the nova-compute services on compute nodes are disabling themselves (as if someone ran openstack compute service set --disable hostX nova-compute). When this happens, the node

[Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Chris Apsey
All, Running into a strange issue I haven't seen before. Randomly, the nova-compute services on compute nodes are disabling themselves (as if someone ran openstack compute service set --disable hostX nova-compute). When this happens, the node continues to report itself as 'up' - the service

[Openstack-operators] [nova][neutron] Extend instance IP filter for floating IP

2018-01-24 Thread Hongbin Lu
Hi all, Nova currently allows us to filter instances by fixed IP address(es). This feature is known to be useful in an operational scenario where cloud administrators detect abnormal traffic from an IP address and want to trace it back to the instance that the IP address belongs to. This feature
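The existing fixed-IP filtering referred to here is exposed through the servers API ip/ip6 query parameters, e.g.:

    openstack server list --all-projects --ip 10.0.0.5

This thread proposes extending that same filter to also match floating IPs.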

[Openstack-operators] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-01-16 Thread melanie witt
Hello Stackers, This is a heads up to any of you using the AggregateCoreFilter, AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. These filters have effectively allowed operators to set overcommit ratios per aggregate rather than per compute node in <= Newton.
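The per-aggregate overcommit these filters provided was typically expressed as aggregate metadata, for example:

    openstack aggregate set --property cpu_allocation_ratio=16.0 <aggregate>
    openstack aggregate set --property ram_allocation_ratio=1.5 <aggregate>

The heads-up here is about how that interacts with the Ocata-and-later placement behaviour.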

[Openstack-operators] [nova][cinder] nova support for volume multiattach

2018-01-10 Thread Matt Riedemann
Hi everyone, I wanted to point out that the nova API patch for volume mulitattach support is available for review: https://review.openstack.org/#/c/271047/ It's actually a series of changes, but that is the last one that enables the feature in nova. It relies on the 2.59 compute API

Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler

2018-01-10 Thread Matt Riedemann
On 1/10/2018 1:49 PM, Alec Hothan (ahothan) wrote: The main problem is that the nova API does not return sufficient detail on the reason for a NoValidHostFound and perhaps that should be fixed at that level. Extending the API to return a reason field which is a json dict that is returned by

Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler

2018-01-10 Thread Alec Hothan (ahothan)
the log to find out why). Regards, Alec From: Matt Riedemann <mriede...@gmail.com> Date: Wednesday, January 10, 2018 at 11:33 AM To: "openstack-operators@lists.openstack.org" <openstack-operators@lists.openstack.org> Subject: Re: [Openstack-operators] [openstack-oper

Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler

2018-01-10 Thread Matt Riedemann
On 1/10/2018 1:15 AM, Alec Hothan (ahothan) wrote: +1 on the “no valid host found”, this one should be at the very top of the to-be-fixed list. Very difficult to troubleshoot filters in lab testing (let alone in production) as there can be many of them. This will get worst with more NFV

Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler

2018-01-09 Thread Alec Hothan (ahothan)
enstack-operators@lists.openstack.org" <openstack-operators@lists.openstack.org> Subject: Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler On Tue, Jan 9, 2018 at 8:18 AM, Piotr Bielak <piotr.bie...@corp.ovh.com<mailto:piotr.bie...@corp.ovh.com>> w

Re: [Openstack-operators] [openstack-operators][nova] Verbosity of nova scheduler

2018-01-09 Thread Jeremy Stanley
On 2018-01-09 12:38:05 -0600 (-0600), Matt Riedemann wrote: [...] > Also, there is a noticeable impact to performance when running the > scheduler with debug logging enabled which is why it's not > recommended to run with debug enabled in production. Further, OpenStack considers security risks
