On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote:
> I have a request to do $SUBJECT in relation to a V2V workflow. The use
> case here is conversion of a VM/Physical which was previously powered
> off. We want to move its data, but we don't want to be powering on
> stuff which wasn't
On 11/27/2018 11:32 AM, Ignazio Cassano wrote:
Hi All,
Does anyone know where the hypervisor UUID is retrieved from?
Sometimes, when updating KVM nodes with yum update, it changes, and in the nova
database two UUIDs end up assigned to the same node.
regards
Ignazio
On 10/18/2018 5:07 PM, Matt Riedemann wrote:
It's been deprecated since Pike, and the time has come to remove it [1].
mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and said it will work
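For anyone who needs to do the same migration off the CachingScheduler, the
CLI mentioned above runs from a node with access to the API database; a
minimal sketch (the --max-count flag is what I recall from the nova-manage
docs, so verify against your release):

    # create placement allocations for existing instances that lack them
    nova-manage placement heal_allocations
    # or work through them in smaller batches
    nova-manage placement heal_allocations --max-count 50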
Excerpts from Matt Riedemann's message of 2018-09-06 15:58:41 -0500:
> I wanted to recap some upgrade-specific stuff from today outside of the
> other [1] technical extraction thread.
>
> Chris has a change up for review [2] which prompted the discussion.
>
> That change makes placement only
On 8/29/2018 3:21 PM, Tim Bell wrote:
Sounds like a good topic for PTG/Forum?
Yeah it's already on the PTG agenda [1][2]. I started the thread because
I wanted to get the ball rolling as early as possible, and with people
that won't attend the PTG and/or the Forum, to weigh in on not only
such as preserving IP
addresses etc.
Sounds like a good topic for PTG/Forum?
Tim
-Original Message-
From: Jay Pipes
Date: Wednesday, 29 August 2018 at 22:12
To: Dan Smith , Tim Bell
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration
On 08/29/2018 04:04 PM, Dan Smith wrote:
- The VMs to be migrated are generally not expensive
configurations, just hardware lifecycles where boxes go out of
warranty or computer centre rack/cooling needs re-organising. For
CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
> - The VMs to be migrated are generally not expensive
> configurations, just hardware lifecycles where boxes go out of
> warranty or computer centre rack/cooling needs re-organising. For
> CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
> ~30% pet share)
> - We make a
To: Jay Pipes
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold
migration
> A release upgrade dance involves coordination of multiple moving
> parts. It's about as similar to this s
On 08/29/2018 02:26 PM, Chris Friesen wrote:
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought cold migration of instances with such devices was
On 08/29/2018 12:39 PM, Dan Smith wrote:
If we're going to discuss removing move operations from Nova, we should
do that in another thread. This one is about making existing operations
work :)
OK, understood. :)
The admin only "owns" the instance because we have no ability to
transfer
> A release upgrade dance involves coordination of multiple moving
> parts. It's about as similar to this scenario as I can imagine. And
> there's a reason release upgrades are not done entirely within Nova;
> clearly an external upgrade tool or script is needed to orchestrate
> the many steps and
I respect your opinion but respectfully disagree that this is something
we need to spend our time on. Comments inline.
On 08/29/2018 10:47 AM, Dan Smith wrote:
* Cells can shard across flavors (and hardware type) so operators
would like to move users off the old flavors/hardware (old cell) to
>> * Cells can shard across flavors (and hardware type) so operators
>> would like to move users off the old flavors/hardware (old cell) to
>> new flavors in a new cell.
>
> So cell migrations are kind of the new release upgrade dance. Got it.
No, cell migrations are about moving instances
Sorry for the delayed response. Was on PTO when this came out. Comments
inline...
On 08/22/2018 09:23 PM, Matt Riedemann wrote:
Hi everyone,
I have started an etherpad for cells topics at the Stein PTG [1]. The
main issue in there right now is dealing with cross-cell cold migration
in nova.
I think in our case we’d only migrate between cells if we know the network and
storage are accessible and would never do it if not.
Thinking of moving from old to new hardware at a cell level.
If storage and network aren't available, ideally it would fail at the API request.
There is also ceph
On 20-08-18 16:29:52, Matthew Booth wrote:
> For those who aren't familiar with it, nova's volume-update (also
> called swap volume by nova devs) is the nova part of the
> implementation of cinder's live migration (also called retype).
> Volume-update is essentially an internal cinder<->nova api,
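For context, the operator-facing side of that flow is a cinder retype with a
migration policy; a hedged sketch (volume id and type name are placeholders),
after which cinder calls nova's volume-update/swap-volume API for in-use
volumes:

    cinder retype --migration-policy on-demand <volume-id> <new-volume-type>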
On 8/11/2018 12:50 AM, Chris Apsey wrote:
This sounds promising and there seems to be a feasible way to do this,
but it also sounds like a decent amount of effort and would be a new
feature in a future release rather than a bugfix - am I correct in that
assessment?
Yes I'd say it's a
This sounds promising and there seems to be a feasible way to do this, but
it also sounds like a decent amount of effort and would be a new feature in
a future release rather than a bugfix - am I correct in that assessment?
On August 9, 2018 13:30:31 "Daniel P. Berrangé" wrote:
On Thu,
Exactly. And I agree, it seems like hw_architecture should dictate
which emulator is chosen, but as you mentioned it's currently not. I'm
not sure if this is a bug and it's supposed to 'just work', or just
something that was never fully implemented (intentionally) and would be
more of a
On 8/8/2018 2:42 PM, Chris Apsey wrote:
qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86
packages, but they perform system-mode emulation (via dynamic
instruction translation) for those target environments. So, you run
qemu-system-ppc64 on an x86 host in order to get a
Matt,
qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86
packages, but they perform system-mode emulation (via dynamic
instruction translation) for those target environments. So, you run
qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM.
Our use case
On 8/7/2018 8:54 AM, Chris Apsey wrote:
We don't actually have any non-x86 hardware at the moment - we're just
looking to run certain workloads in qemu full emulation mode sans KVM
extensions (we know there is a huge performance hit - it's just for a
few very specific things). The hosts I'm
Hey Matt,
We don't actually have any non-x86 hardware at the moment - we're just
looking to run certain workloads in qemu full emulation mode sans KVM
extensions (we know there is a huge performance hit - it's just for a
few very specific things). The hosts I'm talking about are normal
On 8/5/2018 1:43 PM, Chris Apsey wrote:
Trying to enable some alternate (non-x86) architectures on xenial +
queens. I can load up images and set the property correctly according
to the supported values
(https://docs.openstack.org/nova/queens/configuration/config.html) in
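In case it helps others hitting the same thing, the image-side setup looks
roughly like this; the property names are an assumption based on the image
properties docs (check your release), and the image name/file are placeholders:

    openstack image create --disk-format qcow2 --container-format bare \
      --property architecture=ppc64 --property hypervisor_type=qemu \
      --file debian-ppc64.qcow2 debian-ppc64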
On 8/7/2018 1:10 AM, Flint WALRUS wrote:
I didn't have time to check StarlingX code quality; what was your impression
of it while you were doing your analysis?
I didn't dig into the test diffs themselves, but it was my impression
that from what I was poking around in the local git repo, there were
Hi matt, everyone,
I just read your analysis and would like to thank you for such work. I
really think there are numerous features included/used on this Nova rework
that would be highly beneficial for Nova and users of it.
I hope people will fairly appreciate your work.
I didn't have time to
Thanks, Matt. Those are all good suggestions, and we will incorporate
your feedback into our plans.
On 07/23/2018 05:57 PM, Matt Riedemann wrote:
> I'll try to help a bit inline. Also cross-posting to openstack-dev and
> tagging with [nova] to highlight it.
>
> On 7/23/2018 10:43 AM, Jonathan
I'll try to help a bit inline. Also cross-posting to openstack-dev and
tagging with [nova] to highlight it.
On 7/23/2018 10:43 AM, Jonathan Mills wrote:
I am looking at implementing CellsV2 with multiple cells, and there are a
few things I'm seeking clarification on:
1) How does a
Just an update on an old thread, but I've been working on the
cross_az_attach=False issues again this past week and I think I have a
couple of decent fixes.
On 5/31/2017 6:08 PM, Matt Riedemann wrote:
This is a request for any operators out there that configure nova to set:
[cinder]
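For anyone who missed the original thread, the setting in question is simply
this in nova.conf:

    [cinder]
    cross_az_attach = False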
On 2/6/2018 6:44 PM, Matt Riedemann wrote:
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than
compute nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So
On 06/04/2018 05:43 AM, Tobias Urdin wrote:
Hello,
I have received a question about a more specialized use case where we need to
isolate several hypervisors to a specific project. My first thinking was
using nova flavors for only that project and adding extra specs properties to
use a specific
I saw now in the docs that multiple aggregate_instance_extra_specs keys
should be a comma-separated list.
But other than that, would the below do what I'm looking for?
It has a very high maintenance burden when you have a lot of hypervisors and
are steadily adding new ones, but I can't see any other way to
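In case it helps the next person, a rough sketch of the aggregate + flavor
approach being discussed (aggregate, host, flavor and key names are made up,
and it assumes AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

    openstack aggregate create reserved-projectX
    openstack aggregate set --property reserved=projectX reserved-projectX
    openstack aggregate add host reserved-projectX compute-17
    openstack flavor set --property aggregate_instance_extra_specs:reserved=projectX projectX.large

This only steers the matching flavors onto those hosts; by itself it does not
keep every other flavor off them.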
Hello,
Thanks for the reply Matt.
The hard thing here is that I have to ensure it the other way around as
well, i.e. other instances cannot be allowed to land on those "reserved"
hypervisors.
I assume I could do something like in [1] and also set key-value
metadata on all flavors to select a host
On 6/4/2018 6:43 AM, Tobias Urdin wrote:
I have received a question about a more specialized use case where we
need to isolate several hypervisors
to a specific project. My first thinking was using nova flavors for only
that project and adding extra specs properties to use a specific host
On 5/28/2018 7:31 AM, Sylvain Bauza wrote:
That said, given I'm now working on using Nested Resource Providers for
VGPU inventories, I wonder about a possible upgrade problem with VGPU
allocations. Given that :
- in Queens, VGPU inventories are for the root RP (ie. the compute
node RP), but,
On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann
wrote:
> I've written a nova-manage placement heal_allocations CLI [1] which was a
> TODO from the PTG in Dublin as a step toward getting existing
> CachingScheduler users to roll off that (which is deprecated).
>
> During the
On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote:
+openstack-operators (apologies that I forgot to add originally)
On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
Hey everyone,
Let's collect forum topic brainstorming ideas for the Forum sessions in
Vancouver in this etherpad [0].
+openstack-operators (apologies that I forgot to add originally)
On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
Hey everyone,
Let's collect forum topic brainstorming ideas for the Forum sessions in
Vancouver in this etherpad [0]. Once we've brainstormed, we'll select
and submit our
It works for me in Newton.
Try it at your own risk :)
Cheers,
Saverio
2018-04-09 13:23 GMT+02:00 Anwar Durrani :
> No, this is a different one. Should I try this one? Will it work?
>
> On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote:
>>
>> Hello
No, this is a different one. Should I try this one? Will it work?
On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote:
> Hello Anwar,
>
> are you talking about this script ?
> https://github.com/openstack/osops-tools-contrib/blob/master/nova/nova-libvirt-compare.py
>
> it
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote:
> On Mon, 2 Apr 2018, Alex Schultz wrote:
>
>> So this is/was valid. A few years back there was some perf tests done
>> with various combinations of process/threads and for Keystone it was
>> determined that threads should
On 04/03/2018 06:48 AM, Chris Dent wrote:
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back there was some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back there was some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process count (hence the bug). Now I guess the
On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell
wrote:
>
>
> On 03/29/2018 02:13 AM, Belmiro Moreira wrote:
>>
>> Some lessons so far...
>> - Scale keystone accordingly when enabling placement.
>
>
> Speaking of which; I suppose I have the same question for keystone
The error info is :
CRITICAL nova [None req-a84d278b-43db-4c94-864b-7a9733aa772c None None]
Unhandled error: IOError: [Errno 13] Permission denied: '/etc/nova/policy.json'
ERROR nova Traceback (most recent call last):
ERROR nova File "/usr/bin/nova-compute", line 10, in
ERROR nova
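FWIW that traceback is just nova-compute being unable to read the policy
file; something along these lines usually clears it (ownership/mode are what a
typical packaged install expects, adjust to yours):

    chown root:nova /etc/nova/policy.json
    chmod 640 /etc/nova/policy.json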
On Thu, 29 Mar 2018, iain MacDonnell wrote:
If I'm reading
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
right, it seems that the MPM is not pertinent when using WSGIDaemonProcess.
It doesn't impact the number of wsgi processes that will exist or how
they
On 03/29/2018 04:24 AM, Chris Dent wrote:
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the WSGI number of processes/threads considering where
placement
is running.
If I'm reading
On 3/29/2018 12:05 PM, Chris Dent wrote:
Other suggestions? I'm looking at things like turning off
scheduler_tracks_instance_changes, since affinity scheduling is not
needed (at least so-far), but not sure that that will help with
placement load (seems like it might, though?)
This won't
On Thu, 29 Mar 2018, iain MacDonnell wrote:
placement python stack and kicks out the 401. So this mostly
indicates that socket accept is taking forever.
Well, this test connects and gets a 400 immediately:
echo | nc -v apihost 8778
so I don't think it's at the socket level, but I
On 03/29/2018 01:19 AM, Chris Dent wrote:
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a
few moderately-sized deployments (~200 nodes, ~4k instances),
currently on Ocata, and instance creation is getting very slow as they
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the WSGI number of processes/threads considering where placement
is running.
Another option is to switch to nginx and uwsgi. In situations where
the
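For the Apache/mod_wsgi route, the knobs being discussed live in a stanza
roughly like this (a sketch only; the daemon name and the process/thread
counts are illustrative, size them to your controllers):

    WSGIDaemonProcess nova-placement-api processes=8 threads=1 user=nova group=nova display-name=%{GROUP}
    WSGIProcessGroup nova-placement-api
    WSGIApplicationGroup %{GLOBAL}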
Hi,
with the Ocata upgrade we decided to run local placements (one service per
cellV1) because we were nervous about possible scalability issues, but
especially the increase in scheduling time. Fortunately, this has now been
addressed with the placement-req-filter work.
We started slowly to aggregate
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a few
moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata,
and instance creation is getting very slow as they fill up.
This should be well within the
Thanks for the heads-up, Matt.
On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann wrote:
> I just wanted to give a heads up to anyone thinking about upgrading to
> queens that nova has released a 17.0.1 patch release [1].
>
> There are some pretty important fixes in there that
Hi Amit
(re-titled thread with scoped topics)
As Matt has already referenced, [0] is a good starting place for using the
nova-lxd driver.
On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote:
> Hello,
>
> I have a running OpenStack Ocata setup on which I am able to launch VMs.
>
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than compute
nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So if your limit is the default (10) and you
All,
This was the core issue - setting
consecutive_build_service_disable_threshold = 0 in nova.conf (on
controllers and compute nodes) solved this. It was being triggered by
neutron dropping requests (and/or responses) for vif-plugging due to cpu
usage on the neutron endpoints being pegged
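For anyone finding this in the archives later, the workaround described above
is a nova.conf change on the affected nodes; the option group below is what I
recall from the config reference, so double-check for your release (0 disables
the auto-disable behaviour entirely):

    [compute]
    consecutive_build_service_disable_threshold = 0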
That looks promising. I'll report back to confirm the solution.
Thanks!
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-01-31 04:40 PM, Matt Riedemann wrote:
On 1/31/2018 3:16 PM, Chris Apsey wrote:
All,
Running in to a strange issue I haven't seen before.
There's [1], but I would have expected you to see error logs like [2] if
that's what you're hitting.
[1]
https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L627-L645
[2]
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1714-L1716
efried
On 01/31/2018 03:16
On 1/31/2018 3:16 PM, Chris Apsey wrote:
All,
Running in to a strange issue I haven't seen before.
Randomly, the nova-compute services on compute nodes are disabling
themselves (as if someone ran openstack compute service set --disable
hostX nova-compute). When this happens, the node
On 10/27/2017 1:23 PM, Matt Riedemann wrote:
Nova has had this long-standing known performance issue if you're
filtering a large number of instances by IP. The instance IPs are stored
in a JSON blob in the database so we don't do filtering in SQL. We pull
the instances out of the database,
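For anyone wondering what triggers that code path, it's the IP filter on a
server listing, e.g. (the filter value is treated as a regex and, as described
above, applied in python after the instances are pulled from the DB):

    openstack server list --all-projects --ip '10\.20\.30\.'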
On 10/12/2017 4:09 AM, Saverio Proto wrote:
Hello Matt,
starting 1000 instances in production works for me already. We are on
Openstack Newton.
I described my configuration here:
https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/
If things blow up for you with
On 10/6/2017 1:30 PM, Joshua Harlow wrote:
+1
I am also personally frustrated by the same thing clint is,
It seems that somewhere along the line we lost the direction of cloud vs
VPS, and somewhere it was sold (or not sold) that openstack is good for
both (when it really isn't imho),
To: openstack-operators
Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data
during rebuild?
No offense is intended, so please forgive me for the possibly incendiary nature
of what I'm about to write:
VPS is the predecessor of cloud (and something I love very much, and rely
> > same IPv6 through some Neutron port magic.
> >
> > BTW, you wouldn’t believe how often people use the Reinstall feature.
> >
> > Tomas from Homeatcloud
> >
> >
> >
> > From: Belmiro Moreira [mailto:moreira.belmiro.email.li...@gmail.com]
> >
does, e.g., WHMCS do it? That is stock software
that you can use to provide VPS over OpenStack.
Tomas from Homeatcloud
-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: Thursday, October 05, 2017 6:50 PM
To: openstack-operators
Subject: Re: [Openstack-operators] [nova
> To: Chris Friesen
> Cc: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [nova] Should we allow passing new
> user_data during rebuild?
>
>
>
> In our cloud rebuild is the only way for a user to keep the same IP.
> Unfortunately, we don't
Excerpts from Chris Friesen's message of 2017-10-04 09:15:28 -0600:
> On 10/03/2017 11:12 AM, Clint Byrum wrote:
>
> > My personal opinion is that rebuild is an anti-pattern for cloud, and
> > should be frozen and deprecated. It does nothing but complicate Nova
> > and present challenges for
Excerpts from Belmiro Moreira's message of 2017-10-04 17:33:40 +0200:
> In our cloud rebuild is the only way for a user to keep the same IP.
> Unfortunately, we don't offer floating IPs, yet.
> Also, we use the user_data to bootstrap some actions in new instances
> (puppet, ...).
> Considering all
Moreira [mailto:moreira.belmiro.email.li...@gmail.com]
Sent: Wednesday, October 04, 2017 5:34 PM
To: Chris Friesen
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [nova] Should we allow passing new user_data
during rebuild?
In our cloud rebuild is the only way
In our cloud rebuild is the only way for a user to keep the same IP.
Unfortunately, we don't offer floating IPs, yet.
Also, we use the user_data to bootstrap some actions in new instances
(puppet, ...).
Considering all the use-cases for rebuild it would be great if the
user_data can be updated at
On 10/03/2017 11:12 AM, Clint Byrum wrote:
My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.
That said, if it must stay as a feature, I don't think updating the
On Tue, Oct 03, 2017 at 08:29:45PM +, Jeremy Stanley wrote:
:On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote:
:[...]
:> This works in our OpenStack where it's our IP space so PTR record also
:> matches, not so well in public cloud where we can reserve an IP and
:> set forward DNS
On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote:
[...]
> This works in our OpenStack where it's our IP space so PTR record also
> matches, not so well in public cloud where we can reserve an IP and
> set forward DNS but not control its reverse mapping.
[...]
Not that it probably
:> To: openstack-operators <openstack-operators@lists.openstack.org>
:> Subject: Re: [Openstack-operators] [nova] Should we allow passing new
user_data during rebuild?
:>
:>
:> Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
:>
openstack-operators <openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] [nova] Should we allow passing new
> user_data during rebuild?
>
>
> Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> > We plan on
operators@lists.openstack.org>
Subject: Re: [Openstack-operators] [nova] Should we allow passing new
user_data during rebuild?
Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> We plan on deprecating personality files from the compute API in a new
> micr
Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> We plan on deprecating personality files from the compute API in a new
> microversion. The spec for that is here:
>
> https://review.openstack.org/#/c/509013/
>
> Today you can pass new personality files to inject during
On 10/3/2017 10:53 AM, Matt Riedemann wrote:
However, if the only reason one would need to pass personality files
during rebuild is because we don't persist them during the initial
server create, do we really need to also allow passing user_data for
rebuild?
Given personality files were
On 9/21/2017 4:01 PM, Matt Riedemann wrote:
So this shouldn't be news now that I've read back through a few emails
in the mailing list (I've been distracted with the Pike release, PTG
planning, etc) [1][2][3] but we have until Sept 29 to come up with
whatever forum sessions we want to propose.
Cross-posting to the operators list since they are the ones that would
care about this.
Basically, the "notify_on_api_faults" config option hasn't worked since
probably Kilo when the 2.1 microversion wsgi stack code was added.
Rackspace added it back in 2012:
Hi Matt,
On 9/22/17 7:10 PM, Matt Riedemann wrote:
while this approach is ok in general, some comments from my side -
1. For a new instance, if the neutron network has a dns_domain set,
use it. I'm not totally sure how we tell from the metadata API if it's
a new instance or not, except when
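For reference, the per-network dns_domain mentioned in point 1 is set like
this when neutron's DNS integration extension is enabled (domain and network
names are placeholders):

    openstack network set --dns-domain example.org. private-net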
On 9/22/2017 10:02 AM, Volodymyr Litovka wrote:
And another topic, in Neutron, regarding domainname. Any DHCP-server,
created by Neutron, will return "domain" derived from system-wide
"dns_name" parameter (defined in neutron.conf and explicitly used in
argument "--domain" of dnsmasq). There is
Hi Stephen,
I think, it's useful to have hostname in Nova's metadata - this provides
some initial information for cloud-init to configure newly created VM,
so I would not refuse this method. A bit confusing is the domain part of the
hostname (novalocal), which is derived from the OpenStack-wide
On 9/21/2017 6:17 AM, Saverio Proto wrote:
Why the change is called:
Ignore original retried hosts when live migrating
?
Isn't it implementing the opposite? Don't ignore?
Heh, you're right. I'll fix.
Beyond that, any feedback on the actual intent here?
--
Thanks,
Matt
> The actual fix for this is trivial:
>
> https://review.openstack.org/#/c/505771/
Why the change is called:
Ignore original retried hosts when live migrating
?
Isn't it implementing the opposite? Don't ignore?
thanks
Saverio
On 9/13/2017 10:31 AM, Morgenstern, Chad wrote:
I have been studying how to perform failover operations with Cinder --failover.
Nova is not aware of the failover event. Being able to refresh the connection
state especially for Nova would come in very handy, especially in admin level
dr
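For anyone else experimenting with this, the cinder side of that flow is the
failover-host admin call; a hedged sketch (host and backend id are
placeholders, and as noted nova does not refresh connection info on its own):

    cinder failover-host controller@backend-1 --backend_id secondary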
On Tue, Jun 6, 2017 at 9:45 PM, Sam Morrison wrote:
> Hi Matt,
>
> Just looking into this,
>
> > On 1 Jun 2017, at 9:08 am, Matt Riedemann wrote:
> >
> > This is a request for any operators out there that configure nova to set:
> >
> > [cinder]
> >
On 9/13/2017 9:52 AM, Arne Wiebalck wrote:
On 13 Sep 2017, at 16:52, Matt Riedemann wrote:
On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
I’m reviving this thread to check if the suggestion to address potentially
stale connection
data by an admin command (or a scheduled
OpenStack Development Mailing List
<openstack-...@lists.openstack.org>
Subject: Re: [Openstack-operators] [nova][cinder] Is there interest in an
admin-api to refresh volume connection info?
> On 13 Sep 2017, at 16:52, Matt Riedemann <mriede...@gmail.com> wrote:
>
> On 9/13/2017 3
> On 13 Sep 2017, at 16:52, Matt Riedemann wrote:
>
> On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
>> I’m reviving this thread to check if the suggestion to address potentially
>> stale connection
>> data by an admin command (or a scheduled task) made it to the planning for
On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
I’m reviving this thread to check if the suggestion to address
potentially stale connection
data by an admin command (or a scheduled task) made it to the planning
for one of the
upcoming releases?
It hasn't, but we're at the PTG this week so I can
Matt, all,
I’m reviving this thread to check if the suggestion to address potentially
stale connection
data by an admin command (or a scheduled task) made it to the planning for one
of the
upcoming releases?
Thanks!
Arne
On 16 Jun 2017, at 09:37, Saverio Proto
I rely on cloud-init to set my hostnames.
I have a number of internal systems which rely on a machine knowing its own
hostname. In particular, at least one of my configuration management
systems requires that a host pass its fqdn to an API to fetch its CM data,
so grabbing hostnames and
If you don't recreate Neutron ports (just destroying VM, creating it as
new and attaching old ports), then you can distinguish between
interfaces by MAC addresses and store this in udev rules. You can do
this on first boot (e.g. in cloud-init's "runcmd" section), using
information from
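To illustrate the udev idea, a rule along these lines (MAC address and
interface name are placeholders) written out by the first-boot script would
pin the naming across rebuilds:

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="fa:16:3e:12:34:56", NAME="eth1"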
Hi,
I am trying to programmatically rebuild a nova instance that has a persistent
volume for its root device.
I am specifically trying to rebuild an instance that has multiple network
interfaces and a floating ip.
I have observed that the order in which the network interfaces are attached
On 06/14/2017 10:31 AM, Matt Riedemann wrote:
On 6/14/2017 10:57 AM, Carlos Konstanski wrote:
Is there a way to obtain nova configuration settings at runtime without
resorting to SSHing onto the compute host and grepping nova.conf? For
instance a CLI call? At the moment I'm looking at
On 6/14/2017 10:57 AM, Carlos Konstanski wrote:
Is there a way to obtain nova configuration settings at runtime without
resorting to SSHing onto the compute host and grepping nova.conf? For
instance a CLI call? At the moment I'm looking at cpu_allocation_ratio
and ram_allocation_ratio. There may