Re: [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful?

2018-11-30 Thread Mohammed Naser
On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote: > I have a request to do $SUBJECT in relation to a V2V workflow. The use > case here is conversion of a VM/Physical which was previously powered > off. We want to move its data, but we don't want to be powering on > stuff which wasn't

Re: [Openstack-operators] Nova hypervisor uuid

2018-11-27 Thread Matt Riedemann
On 11/27/2018 11:32 AM, Ignazio Cassano wrote: Hi All, Does anyone know where the hypervisor uuid is retrieved? Sometimes updating kvm nodes with yum update changes it, and in the nova database 2 uuids are assigned to the same node. regards Ignazio
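
A quick way to confirm the duplicate-uuid condition Ignazio describes is to query the nova database directly; a minimal sketch, assuming an Ocata-or-newer compute_nodes schema, a MySQL backend, and 'node1' as a placeholder hostname:

    # Two rows with different uuids for the same hypervisor_hostname
    # (including soft-deleted ones) indicates the duplicate problem.
    mysql nova -e "SELECT id, uuid, hypervisor_hostname, deleted
                   FROM compute_nodes
                   WHERE hypervisor_hostname = 'node1';"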

Re: [Openstack-operators] [nova] Removing the CachingScheduler

2018-10-24 Thread Matt Riedemann
On 10/18/2018 5:07 PM, Matt Riedemann wrote: It's been deprecated since Pike, and the time has come to remove it [1]. mgagne has been the most vocal CachingScheduler operator I know and he has tested out the "nova-manage placement heal_allocations" CLI, added in Rocky, and said it will work
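
For operators rolling off the CachingScheduler, the CLI mentioned above is run from a host with access to the nova API database; a minimal sketch, assuming the Rocky-era flags (later releases added more options):

    # Create missing placement allocations for existing instances,
    # processing at most 50 instances in this run.
    nova-manage placement heal_allocations --max-count 50 --verbose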

Re: [Openstack-operators] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-09-06 15:58:41 -0500: > I wanted to recap some upgrade-specific stuff from today outside of the > other [1] technical extraction thread. > > Chris has a change up for review [2] which prompted the discussion. > > That change makes placement only

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Matt Riedemann
On 8/29/2018 3:21 PM, Tim Bell wrote: Sounds like a good topic for PTG/Forum? Yeah it's already on the PTG agenda [1][2]. I started the thread because I wanted to get the ball rolling as early as possible, and with people that won't attend the PTG and/or the Forum, to weigh in on not only

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell
such as preserving IP addresses etc. Sounds like a good topic for PTG/Forum? Tim -Original Message- From: Jay Pipes Date: Wednesday, 29 August 2018 at 22:12 To: Dan Smith , Tim Bell Cc: "openstack-operators@lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neut

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 04:04 PM, Dan Smith wrote: - The VMs to be migrated are generally not expensive configurations, just hardware lifecycles where boxes go out of warranty or computer centre rack/cooling needs re-organising. For CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
> - The VMs to be migrated are generally not expensive > configurations, just hardware lifecycles where boxes go out of > warranty or computer centre rack/cooling needs re-organising. For > CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a > ~30% pet share) > - We make a

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell
2018 at 18:47 To: Jay Pipes Cc: "openstack-operators@lists.openstack.org" Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration > A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this s

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 02:26 PM, Chris Friesen wrote: On 08/29/2018 10:02 AM, Jay Pipes wrote: Also, I'd love to hear from anyone in the real world who has successfully migrated (live or otherwise) an instance that "owns" expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). I thought

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Chris Friesen
On 08/29/2018 10:02 AM, Jay Pipes wrote: Also, I'd love to hear from anyone in the real world who has successfully migrated (live or otherwise) an instance that "owns" expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise). I thought cold migration of instances with such devices was

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
On 08/29/2018 12:39 PM, Dan Smith wrote: If we're going to discuss removing move operations from Nova, we should do that in another thread. This one is about making existing operations work :) OK, understood. :) The admin only "owns" the instance because we have no ability to transfer

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
> A release upgrade dance involves coordination of multiple moving > parts. It's about as similar to this scenario as I can imagine. And > there's a reason release upgrades are not done entirely within Nova; > clearly an external upgrade tool or script is needed to orchestrate > the many steps and

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
I respect your opinion but respectfully disagree that this is something we need to spend our time on. Comments inline. On 08/29/2018 10:47 AM, Dan Smith wrote: * Cells can shard across flavors (and hardware type) so operators would like to move users off the old flavors/hardware (old cell) to

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Dan Smith
>> * Cells can shard across flavors (and hardware type) so operators >> would like to move users off the old flavors/hardware (old cell) to >> new flavors in a new cell. > > So cell migrations are kind of the new release upgrade dance. Got it. No, cell migrations are about moving instances

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Jay Pipes
Sorry for delayed response. Was on PTO when this came out. Comments inline... On 08/22/2018 09:23 PM, Matt Riedemann wrote: Hi everyone, I have started an etherpad for cells topics at the Stein PTG [1]. The main issue in there right now is dealing with cross-cell cold migration in nova.

Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-22 Thread Sam Morrison
I think in our case we’d only migrate between cells if we know the network and storage is accessible, and would never do it if not. We’re thinking of moving from old to new hardware at a cell level. If storage and network aren’t available, ideally it would fail at the API request. There is also ceph

Re: [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-21 Thread Lee Yarwood
On 20-08-18 16:29:52, Matthew Booth wrote: > For those who aren't familiar with it, nova's volume-update (also > called swap volume by nova devs) is the nova part of the > implementation of cinder's live migration (also called retype). > Volume-update is essentially an internal cinder<->nova api,
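
For context, the cinder side of this flow is normally driven by a retype with a migration policy, which invokes nova's volume-update internally when the volume is attached; a minimal sketch with hypothetical volume and type names:

    # Retype an in-use volume to a different backend; "on-demand" lets
    # cinder migrate the data, triggering nova's swap-volume internally.
    cinder retype --migration-policy on-demand my-volume new-volume-type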

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-18 Thread Matt Riedemann
On 8/11/2018 12:50 AM, Chris Apsey wrote: This sounds promising and there seems to be a feasible way to do this, but it also sounds like a decent amount of effort and would be a new feature in a future release rather than a bugfix - am I correct in that assessment? Yes I'd say it's a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-10 Thread Chris Apsey
This sounds promising and there seems to be a feasible way to do this, but it also sounds like a decent amount of effort and would be a new feature in a future release rather than a bugfix - am I correct in that assessment? On August 9, 2018 13:30:31 "Daniel P. Berrangé" wrote: On Thu,

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-09 Thread Chris Apsey
Exactly. And I agree, it seems like hw_architecture should dictate which emulator is chosen, but as you mentioned its currently not. I'm not sure if this is a bug and it's supposed to 'just work', or just something that was never fully implemented (intentionally) and would be more of a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Matt Riedemann
On 8/8/2018 2:42 PM, Chris Apsey wrote: qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 packages, but they perform system-mode emulation (via dynamic instruction translation) for those target environments.  So, you run qemu-system-ppc64 on an x86 host in order to get a

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Chris Apsey
Matt, qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86 packages, but they perform system-mode emulation (via dynamic instruction translation) for those target environments. So, you run qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM. Our use case

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-08 Thread Matt Riedemann
On 8/7/2018 8:54 AM, Chris Apsey wrote: We don't actually have any non-x86 hardware at the moment - we're just looking to run certain workloads in qemu full emulation mode sans KVM extensions (we know there is a huge performance hit - it's just for a few very specific things).  The hosts I'm

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-07 Thread Chris Apsey
Hey Matt, We don't actually have any non-x86 hardware at the moment - we're just looking to run certain workloads in qemu full emulation mode sans KVM extensions (we know there is a huge performance hit - it's just for a few very specific things). The hosts I'm talking about are normal

Re: [Openstack-operators] [nova][glance] nova-compute choosing incorrect qemu binary when scheduling 'alternate' (ppc64, armv7l) architectures?

2018-08-07 Thread Matt Riedemann
On 8/5/2018 1:43 PM, Chris Apsey wrote: Trying to enable some alternate (non-x86) architectures on xenial + queens.  I can load up images and set the property correctly according to the supported values (https://docs.openstack.org/nova/queens/configuration/config.html) in
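
The image property under discussion is set like this; a minimal sketch, assuming the hw_architecture property and a hypothetical image name (some tooling sets the core architecture property instead):

    # Tag the image with the guest architecture so nova can act on it;
    # supported values include ppc64, armv7l, etc.
    openstack image set --property hw_architecture=ppc64 my-ppc64-image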

Re: [Openstack-operators] [nova] StarlingX diff analysis

2018-08-07 Thread Matt Riedemann
On 8/7/2018 1:10 AM, Flint WALRUS wrote: I didn’t have time to check StarlingX code quality; how did it look to you while you were doing your analysis? I didn't dig into the test diffs themselves, but it was my impression that from what I was poking around in the local git repo, there were

Re: [Openstack-operators] [nova] StarlingX diff analysis

2018-08-07 Thread Flint WALRUS
Hi Matt, everyone, I just read your analysis and would like to thank you for such work. I really think there are numerous features included/used in this Nova rework that would be highly beneficial for Nova and its users. I hope people will fairly appreciate your work. I didn’t have time to

Re: [Openstack-operators] [nova] Couple of CellsV2 questions

2018-07-24 Thread Jonathan Mills
Thanks, Matt. Those are all good suggestions, and we will incorporate your feedback into our plans. On 07/23/2018 05:57 PM, Matt Riedemann wrote: > I'll try to help a bit inline. Also cross-posting to openstack-dev and > tagging with [nova] to highlight it. > > On 7/23/2018 10:43 AM, Jonathan

Re: [Openstack-operators] [nova] Couple of CellsV2 questions

2018-07-23 Thread Matt Riedemann
I'll try to help a bit inline. Also cross-posting to openstack-dev and tagging with [nova] to highlight it. On 7/23/2018 10:43 AM, Jonathan Mills wrote: I am looking at implementing CellsV2 with multiple cells, and there are a few things I'm seeking clarification on: 1) How does a

Re: [Openstack-operators] [nova] Cinder cross_az_attach=False changes/fixes

2018-07-15 Thread Matt Riedemann
Just an update on an old thread, but I've been working on the cross_az_attach=False issues again this past week and I think I have a couple of decent fixes. On 5/31/2017 6:08 PM, Matt Riedemann wrote: This is a request for any operators out there that configure nova to set: [cinder]
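
For reference, a sketch of the nova.conf stanza this thread is about (exact behavior differs across the releases discussed):

    [cinder]
    # Refuse to attach a volume whose AZ differs from the instance's AZ.
    cross_az_attach = False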

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-06-07 Thread Matt Riedemann
On 2/6/2018 6:44 PM, Matt Riedemann wrote: On 2/6/2018 2:14 PM, Chris Apsey wrote: but we would rather have intermittent build failures than compute nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Chris Friesen
On 06/04/2018 05:43 AM, Tobias Urdin wrote: Hello, I have received a question about a more specialized use case where we need to isolate several hypervisors to a specific project. My first thought was using nova flavors for only that project and adding extra specs properties to use a specific

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Tobias Urdin
I saw now in the docs that multiple aggregate_instance_extra_specs keys should be a comma-separated list. But other than that, would the below do what I'm looking for? It has a very high maintenance burden when you have a lot of hypervisors and are steadily adding new ones, but I can't see any other way to
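
A sketch of the aggregate-plus-flavor approach being described, with hypothetical names; note it only steers the reserved flavors onto the reserved hosts, and keeping other instances off those hosts still needs the reverse rule Tobias mentions:

    # Group the reserved hypervisors into an aggregate and tag it.
    openstack aggregate create reserved-agg
    openstack aggregate add host reserved-agg compute-01
    openstack aggregate set --property reserved=true reserved-agg

    # Only flavors carrying a matching extra spec land there
    # (requires the AggregateInstanceExtraSpecsFilter to be enabled).
    openstack flavor set \
        --property aggregate_instance_extra_specs:reserved=true project-flavor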

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Tobias Urdin
Hello, Thanks for the reply Matt. The hard thing here is that I have to ensure it the other way around as well, i.e. other instances cannot be allowed to land on those "reserved" hypervisors. I assume I could do something like in [1] and also set key-value metadata on all flavors to select a host

Re: [Openstack-operators] [nova] isolate hypervisor to project

2018-06-04 Thread Matt Riedemann
On 6/4/2018 6:43 AM, Tobias Urdin wrote: I have received a question about a more specialized use case where we need to isolate several hypervisors to a specific project. My first thought was using nova flavors for only that project and adding extra specs properties to use a specific host

Re: [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-29 Thread Matt Riedemann
On 5/28/2018 7:31 AM, Sylvain Bauza wrote: That said, given I'm now working on using Nested Resource Providers for VGPU inventories, I wonder about a possible upgrade problem with VGPU allocations. Given that: - in Queens, VGPU inventories are for the root RP (i.e. the compute node RP), but,

Re: [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-28 Thread Sylvain Bauza
On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann wrote: > I've written a nova-manage placement heal_allocations CLI [1] which was a > TODO from the PTG in Dublin as a step toward getting existing > CachingScheduler users to roll off that (which is deprecated). > > During the

Re: [Openstack-operators] [nova] Rocky forum topics brainstorming

2018-04-18 Thread melanie witt
On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote: +openstack-operators (apologies that I forgot to add originally) On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: Hey everyone, Let's collect forum topic brainstorming ideas for the Forum sessions in Vancouver in this etherpad [0].

Re: [Openstack-operators] [nova] Rocky forum topics brainstorming

2018-04-13 Thread melanie witt
+openstack-operators (apologies that I forgot to add originally) On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: Hey everyone, Let's collect forum topic brainstorming ideas for the Forum sessions in Vancouver in this etherpad [0]. Once we've brainstormed, we'll select and submit our

Re: [Openstack-operators] Nova resources are out of sync in ocata version

2018-04-09 Thread Saverio Proto
It works for me in Newton. Try it at your own risk :) Cheers, Saverio 2018-04-09 13:23 GMT+02:00 Anwar Durrani : > No, this is a different one. Should I try this one? Will it work? > > On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote: >> >> Hello

Re: [Openstack-operators] Nova resources are out of sync in ocata version

2018-04-09 Thread Anwar Durrani
No, this is a different one. Should I try this one? Will it work? On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote: > Hello Anwar, > > are you talking about this script? > https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py > > it

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Alex Schultz
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote: > On Mon, 2 Apr 2018, Alex Schultz wrote: > >> So this is/was valid. A few years back there were some perf tests done >> with various combinations of process/threads and for Keystone it was >> determined that threads should

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Jay Pipes
On 04/03/2018 06:48 AM, Chris Dent wrote: On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there were some perf tests done with various combinations of process/threads and for Keystone it was determined that threads should be 1 while you should adjust the process

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Chris Dent
On Mon, 2 Apr 2018, Alex Schultz wrote: So this is/was valid. A few years back there were some perf tests done with various combinations of process/threads and for Keystone it was determined that threads should be 1 while you should adjust the process count (hence the bug). Now I guess the

Re: [Openstack-operators] nova-placement-api tuning

2018-04-02 Thread Alex Schultz
On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell wrote: > > > On 03/29/2018 02:13 AM, Belmiro Moreira wrote: >> >> Some lessons so far... >> - Scale keystone accordingly when enabling placement. > > > Speaking of which; I suppose I have the same question for keystone

Re: [Openstack-operators] [nova] about use spice console

2018-03-29 Thread 李杰
The error info is: CRITICAL nova [None req-a84d278b-43db-4c94-864b-7a9733aa772c None None] Unhandled error: IOError: [Errno 13] Permission denied: '/etc/nova/policy.json' ERROR nova Traceback (most recent call last): ERROR nova File "/usr/bin/nova-compute", line 10, in <module> ERROR nova
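
The traceback above is a plain file-permission problem; a minimal sketch of a fix, assuming nova-compute runs as the nova user (ownership conventions vary by distro):

    # Let the nova service user read its policy file.
    chown root:nova /etc/nova/policy.json
    chmod 640 /etc/nova/policy.json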

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: If I'm reading http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html right, it seems that the MPM is not pertinent when using WSGIDaemonProcess. It doesn't impact the number of wsgi processes that will exist or how they

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 04:24 AM, Chris Dent wrote: On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change apache mpm default from prefork to event/worker. - Increase the WSGI number of processes/threads considering where placement is running. If I'm reading
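
For reference, the knobs in question live in the placement vhost config; a sketch of a mod_wsgi daemon-mode stanza with illustrative numbers only (tune processes to the host; threads are commonly kept at 1 for OpenStack services, per the discussion above):

    # e.g. /etc/httpd/conf.d/00-nova-placement-api.conf (path varies by distro)
    WSGIDaemonProcess nova-placement-api processes=8 threads=1 user=nova group=nova
    WSGIProcessGroup nova-placement-api
    WSGIApplicationGroup %{GLOBAL}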

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Matt Riedemann
On 3/29/2018 12:05 PM, Chris Dent wrote: Other suggestions? I'm looking at things like turning off scheduler_tracks_instance_changes, since affinity scheduling is not needed (at least so-far), but not sure that that will help with placement load (seems like it might, though?) This won't

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, iain MacDonnell wrote: placement python stack and kicks out the 401. So this mostly indicates that socket accept is taking forever. Well, this test connects and gets a 400 immediately: echo | nc -v apihost 8778 so I don't think it's at the socket level, but, I

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread iain MacDonnell
On 03/29/2018 01:19 AM, Chris Dent wrote: On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Thu, 29 Mar 2018, Belmiro Moreira wrote: [lots of great advice snipped] - Change apache mpm default from prefork to event/worker. - Increase the WSGI number of processes/threads considering where placement is running. Another option is to switch to nginx and uwsgi. In situations where the

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Belmiro Moreira
Hi, with the Ocata upgrade we decided to run local placements (one service per cellV1) because we were nervous about possible scalability issues, but especially the increase in scheduling time. Fortunately, this has now been addressed with the placement-req-filter work. We started slowly to aggregate

Re: [Openstack-operators] nova-placement-api tuning

2018-03-29 Thread Chris Dent
On Wed, 28 Mar 2018, iain MacDonnell wrote: Looking for recommendations on tuning of nova-placement-api. I have a few moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata, and instance creation is getting very slow as they fill up. This should be well within the

Re: [Openstack-operators] nova 17.0.1 released (queens)

2018-03-07 Thread David Medberry
Thanks for the heads-up Matt. On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann wrote: > I just wanted to give a heads up to anyone thinking about upgrading to > queens that nova has released a 17.0.1 patch release [1]. > > There are some pretty important fixes in there that

Re: [Openstack-operators] [nova] [nova-lxd] Query regarding LXC instantiation using nova

2018-02-20 Thread James Page
Hi Amit (re-titled thread with scoped topics) As Matt has already referenced, [0] is a good starting place for using the nova-lxd driver. On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote: > Hello, > > I have a running OpenStack Ocata setup on which I am able to launch VMs. >

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Matt Riedemann
On 2/6/2018 2:14 PM, Chris Apsey wrote: but we would rather have intermittent build failures than compute nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So if your limit is the default (10) and you

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Chris Apsey
All, This was the core issue - setting consecutive_build_service_disable_threshold = 0 in nova.conf (on controllers and compute nodes) solved this. It was being triggered by neutron dropping requests (and/or responses) for vif-plugging due to cpu usage on the neutron endpoints being pegged
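
A sketch of the setting Chris describes; in releases with this feature the option lives in the [compute] group of nova.conf, and 0 disables the auto-disable behavior entirely:

    [compute]
    # Never auto-disable nova-compute after consecutive build failures.
    consecutive_build_service_disable_threshold = 0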

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Chris Apsey
That looks promising. I'll report back to confirm the solution. Thanks! --- v/r Chris Apsey bitskr...@bitskrieg.net https://www.bitskrieg.net On 2018-01-31 04:40 PM, Matt Riedemann wrote: On 1/31/2018 3:16 PM, Chris Apsey wrote: All, Running in to a strange issue I haven't seen before.

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Eric Fried
There's [1], but I would have expected you to see error logs like [2] if that's what you're hitting. [1] https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L627-L645 [2] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1714-L1716 efried On 01/31/2018 03:16

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-01-31 Thread Matt Riedemann
On 1/31/2018 3:16 PM, Chris Apsey wrote: All, Running into a strange issue I haven't seen before. Randomly, the nova-compute services on compute nodes are disabling themselves (as if someone ran openstack compute service set --disable hostX nova-compute). When this happens, the node
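
When a node has been auto-disabled like this, it can be inspected and re-enabled manually; a minimal sketch, with hostX as a placeholder:

    # Look for disabled nova-compute entries.
    openstack compute service list --service nova-compute
    # Re-enable the affected node.
    openstack compute service set --enable hostX nova-compute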

Re: [Openstack-operators] [nova][neutron] How do you use the instance IP filter?

2017-11-07 Thread Matt Riedemann
On 10/27/2017 1:23 PM, Matt Riedemann wrote: Nova has had this long-standing known performance issue if you're filtering a large number of instances by IP. The instance IPs are stored in a JSON blob in the database so we don't do filtering in SQL. We pull the instances out of the database,
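
The filter in question is exposed via the servers API ip parameter, which is treated as a regex (part of why it cannot simply be pushed down into SQL); a minimal sketch:

    # List servers whose fixed IP matches the (regex) pattern.
    openstack server list --ip '10\.0\.0\.4'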

Re: [Openstack-operators] [nova] Looking for feedback on a spec to limit max_count in multi-create requests

2017-10-12 Thread Matt Riedemann
On 10/12/2017 4:09 AM, Saverio Proto wrote: Hello Matt, starting 1000 instances in production works for me already. We are on Openstack Newton. I described my configuration here: https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/ If things blow up for you with
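
For reference, a multi-create request like the ones being benchmarked; a minimal sketch with hypothetical flavor/image names (--min/--max map to the min_count/max_count body fields the spec proposes limiting):

    # Ask nova to boot 1000 instances in a single API request.
    openstack server create --flavor m1.small --image cirros \
        --min 1000 --max 1000 load-test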

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-06 Thread Matt Riedemann
On 10/6/2017 1:30 PM, Joshua Harlow wrote: +1 I am also personally frustrated by the same thing clint is, It seems that somewhere along the line we lost the direction of cloud vs VPS, and somewhere it was sold (or not sold) that openstack is good for both (when it really isn't imho),

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-06 Thread Joshua Harlow
To: openstack-operators Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild? No offense is intended, so please forgive me for the possibly incendiary nature of what I'm about to write: VPS is the predecessor of cloud (and something I love very much, and rely

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-06 Thread Clint Byrum
> same IPv6 through some Neutron port magic. > > BTW, you wouldn’t believe how often people use the Reinstall feature. > > Tomas from Homeatcloud > > > > From: Belmiro Moreira [mailto:moreira.belmiro.email.li...@gmail.com] > >

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-06 Thread Tomáš Vondra
does, e.g., WHMCS do it? That is stock software that you can use to provide VPS over OpenStack. Tomas from Homeatcloud -Original Message- From: Clint Byrum [mailto:cl...@fewbar.com] Sent: Thursday, October 05, 2017 6:50 PM To: openstack-operators Subject: Re: [Openstack-operators] [nova

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-05 Thread Clint Byrum
PM > To: Chris Friesen > Cc: openstack-operators@lists.openstack.org > Subject: Re: [Openstack-operators] [nova] Should we allow passing new > user_data during rebuild? > > > > In our cloud rebuild is the only way for a user to keep the same IP. > Unfortunately, we don't

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-05 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2017-10-04 09:15:28 -0600: > On 10/03/2017 11:12 AM, Clint Byrum wrote: > > > My personal opinion is that rebuild is an anti-pattern for cloud, and > > should be frozen and deprecated. It does nothing but complicate Nova > > and present challenges for

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-05 Thread Clint Byrum
Excerpts from Belmiro Moreira's message of 2017-10-04 17:33:40 +0200: > In our cloud rebuild is the only way for a user to keep the same IP. > Unfortunately, we don't offer floating IPs, yet. > Also, we use the user_data to bootstrap some actions in new instances > (puppet, ...). > Considering all

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-05 Thread Tomáš Vondra
Moreira [mailto:moreira.belmiro.email.li...@gmail.com] Sent: Wednesday, October 04, 2017 5:34 PM To: Chris Friesen Cc: openstack-operators@lists.openstack.org Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild? In our cloud rebuild is the only way

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-04 Thread Belmiro Moreira
In our cloud rebuild is the only way for a user to keep the same IP. Unfortunately, we don't offer floating IPs, yet. Also, we use the user_data to bootstrap some actions in new instances (puppet, ...). Considering all the use-cases for rebuild it would be great if the user_data can be updated at

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-04 Thread Chris Friesen
On 10/03/2017 11:12 AM, Clint Byrum wrote: My personal opinion is that rebuild is an anti-pattern for cloud, and should be frozen and deprecated. It does nothing but complicate Nova and present challenges for scaling. That said, if it must stay as a feature, I don't think updating the

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jonathan Proulx
On Tue, Oct 03, 2017 at 08:29:45PM +, Jeremy Stanley wrote: :On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote: :[...] :> This works in our OpenStack where it's our IP space so PTR record also :> matches, not so well in public cloud where we can reserve an IP and :> set forward DNS

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jeremy Stanley
On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote: [...] > This works in our OpenStack where it's our IP space so PTR record also > matches, not so well in public cloud where we can reserve an IP and > set forward DNS but not control its reverse mapping. [...] Not that it probably

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jonathan Proulx
7 at 19:17 :> To: openstack-operators <openstack-operators@lists.openstack.org> :> Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild? :> :> :> Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500: :>

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Clint Byrum
ck-operators <openstack-operators@lists.openstack.org> > Subject: Re: [Openstack-operators] [nova] Should we allow passing new > user_data during rebuild? > > > Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500: > > We plan on

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Tim Bell
operators@lists.openstack.org> Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild? Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500: > We plan on deprecating personality files from the compute API in a new > micr

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Clint Byrum
Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500: > We plan on deprecating personality files from the compute API in a new > microversion. The spec for that is here: > > https://review.openstack.org/#/c/509013/ > > Today you can pass new personality files to inject during

Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Matt Riedemann
On 10/3/2017 10:53 AM, Matt Riedemann wrote: However, if the only reason one would need to pass personality files during rebuild is because we don't persist them during the initial server create, do we really need to also allow passing user_data for rebuild? Given personality files were
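
For what it's worth, passing user_data on rebuild did later land in the compute API (microversion 2.57, which also deprecated personality files); a hedged sketch of the request shape, with $OS_COMPUTE_URL, $OS_TOKEN, and the IDs as placeholders:

    # Rebuild a server and replace its user_data (base64-encoded).
    curl -X POST "$OS_COMPUTE_URL/servers/<server-id>/action" \
        -H "X-Auth-Token: $OS_TOKEN" \
        -H "Content-Type: application/json" \
        -H "X-OpenStack-Nova-API-Version: 2.57" \
        -d '{"rebuild": {"imageRef": "<image-id>",
                         "user_data": "'"$(base64 -w0 new-user-data.yaml)"'"}}'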

Re: [Openstack-operators] [nova] Forum topics brainstorming

2017-09-28 Thread Matt Riedemann
On 9/21/2017 4:01 PM, Matt Riedemann wrote: So this shouldn't be news now that I've read back through a few emails in the mailing list (I've been distracted with the Pike release, PTG planning, etc) [1][2][3] but we have until Sept 29 to come up with whatever forum sessions we want to propose.

Re: [Openstack-operators] [nova] api.fault notification is never emitted

2017-09-26 Thread Matt Riedemann
Cross-posting to the operators list since they are the ones that would care about this. Basically, the "notify_on_api_faults" config option hasn't worked since probably Kilo when the 2.1 microversion wsgi stack code was added. Rackspace added it back in 2012:

Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-25 Thread Volodymyr Litovka
Hi Matt, On 9/22/17 7:10 PM, Matt Riedemann wrote: while this approach is ok in general, some comments from my side - 1. For a new instance, if the neutron network has a dns_domain set, use it. I'm not totally sure how we tell from the metadata API if it's a new instance or not, except when
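
The per-network knob referenced in point 1 comes from neutron's DNS integration extension; a minimal sketch, assuming the extension is enabled and using hypothetical names:

    # Attach a DNS domain to a network; instances on it can then get
    # FQDNs under this domain.
    openstack network set --dns-domain example.org. private-net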

Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-22 Thread Matt Riedemann
On 9/22/2017 10:02 AM, Volodymyr Litovka wrote: And another topic, in Neutron, regarding the domain name. Any DHCP server created by Neutron will return a "domain" derived from the system-wide "dns_domain" parameter (defined in neutron.conf and explicitly used in the "--domain" argument of dnsmasq). There is

Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-22 Thread Volodymyr Litovka
Hi Stephen, I think it's useful to have the hostname in Nova's metadata - this provides some initial information for cloud-init to configure the newly created VM, so I would not drop this method. A bit confusing is the domain part of the hostname (novalocal), which is derived from the OpenStack-wide

Re: [Openstack-operators] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-21 Thread Matt Riedemann
On 9/21/2017 6:17 AM, Saverio Proto wrote: Why the change is called: Ignore original retried hosts when live migrating ? Isn't it implementing the opposite ? Dont Ignore ? Heh, you're right. I'll fix. Beyond that, any feedback on the actual intent here? -- Thanks, Matt

Re: [Openstack-operators] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-21 Thread Saverio Proto
> The actual fix for this is trivial: > > https://review.openstack.org/#/c/505771/ Why is the change called "Ignore original retried hosts when live migrating"? Isn't it implementing the opposite, i.e. don't ignore? thanks Saverio

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-14 Thread Matt Riedemann
On 9/13/2017 10:31 AM, Morgenstern, Chad wrote: I have been studying how to perform failover operations with Cinder --failover. Nova is not aware of the failover event. Being able to refresh the connection state for Nova would come in very handy, especially in admin level dr

Re: [Openstack-operators] [nova] Cinder cross_az_attach=False changes/fixes

2017-09-14 Thread Sylvain Bauza
On Tue, Jun 6, 2017 at 9:45 PM, Sam Morrison wrote: > Hi Matt, > > Just looking into this, > > > On 1 Jun 2017, at 9:08 am, Matt Riedemann wrote: > > > > This is a request for any operators out there that configure nova to set: > > > > [cinder] > >

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-14 Thread Matt Riedemann
On 9/13/2017 9:52 AM, Arne Wiebalck wrote: On 13 Sep 2017, at 16:52, Matt Riedemann wrote: On 9/13/2017 3:24 AM, Arne Wiebalck wrote: I’m reviving this thread to check if the suggestion to address potentially stale connection data by an admin command (or a scheduled

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Morgenstern, Chad
nStack Development Mailing List <openstack-...@lists.openstack.org> Subject: Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info? > On 13 Sep 2017, at 16:52, Matt Riedemann <mriede...@gmail.com> wrote: > > On 9/13/2017 3

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck
> On 13 Sep 2017, at 16:52, Matt Riedemann wrote: > > On 9/13/2017 3:24 AM, Arne Wiebalck wrote: >> I’m reviving this thread to check if the suggestion to address potentially >> stale connection >> data by an admin command (or a scheduled task) made it to the planning for

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Matt Riedemann
On 9/13/2017 3:24 AM, Arne Wiebalck wrote: I’m reviving this thread to check if the suggestion to address potentially stale connection data by an admin command (or a scheduled task) made it to the planning for one of the upcoming releases? It hasn't, but we're at the PTG this week so I can

Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-09-13 Thread Arne Wiebalck
Matt, all, I’m reviving this thread to check if the suggestion to address potentially stale connection data by an admin command (or a scheduled task) made it to the planning for one of the upcoming releases? Thanks! Arne On 16 Jun 2017, at 09:37, Saverio Proto

Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-08 Thread James Penick
I rely on cloud-init to set my hostnames. I have a number of internal systems which rely on a machine knowing its own hostname. In particular, at least one of my configuration management systems requires that a host pass its fqdn to an API to fetch its CM data, so grabbing hostnames and

Re: [Openstack-operators] [nova]

2017-08-07 Thread Volodymyr Litovka
If you don't recreate Neutron ports (just destroying VM, creating it as new and attaching old ports), then you can distinguish between interfaces by MAC addresses and store this in udev rules. You can do this on first boot (e.g. in cloud-init's "startcmd" command), using information from
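
A sketch of the udev approach Volodymyr describes, pinning interface names to the Neutron port MACs (the addresses below are placeholders; fa:16:3e is the default Neutron OUI):

    # e.g. /etc/udev/rules.d/70-persistent-net.rules, written on first boot
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:aa:bb:cc", NAME="eth0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:dd:ee:ff", NAME="eth1"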

Re: [Openstack-operators] [nova]

2017-07-31 Thread Morgenstern, Chad
Hi, I am trying to programmatically rebuild a nova instance that has a persistent volume for its root device. I am specifically trying to rebuild an instance that has multiple network interfaces and a floating IP. I have observed that the order in which the network interfaces are attached

Re: [Openstack-operators] [nova] Obtaining nova settings at runtime

2017-06-14 Thread Chris Friesen
On 06/14/2017 10:31 AM, Matt Riedemann wrote: On 6/14/2017 10:57 AM, Carlos Konstanski wrote: Is there a way to obtain nova configuration settings at runtime without resorting to SSHing onto the compute host and grepping nova.conf? For instance a CLI call? At the moment I'm looking at

Re: [Openstack-operators] [nova] Obtaining nova settings at runtime

2017-06-14 Thread Matt Riedemann
On 6/14/2017 10:57 AM, Carlos Konstanski wrote: Is there a way to obtain nova configuration settings at runtime without resorting to SSHing onto the compute host and grepping nova.conf? For instance a CLI call? At the moment I'm looking at cpu_allocation_ratio and ram_allocation_ratio. There may
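
There is no official nova API for reading arbitrary config at runtime, but the allocation ratios specifically do surface in placement inventories; a hedged sketch using the separately installed osc-placement plugin, with a placeholder provider name and UUID:

    # Find the compute node's resource provider, then list its inventory;
    # rows include allocation_ratio for VCPU, MEMORY_MB and DISK_GB.
    openstack resource provider list --name compute-01.example.org
    openstack resource provider inventory list <resource-provider-uuid>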
