://docs.openstack.org/python-openstackclient/latest/cli/command-objects/compute-service.html#compute-service-delete
--
Thanks,
Matt
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo
gh which depending on your release you might not have:
https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac80
On 11/28/2018 4:19 AM, Ignazio Cassano wrote:
Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me.
I am sure the kvm node names have not changed.
The tables where uuids are duplicated are:
resource_providers in the nova_api db
compute_nodes in the nova db
Regards
Ignazio
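A duplicate check like the one Ignazio describes can be sketched against a throwaway SQLite copy of the data (the table and column names below are simplified assumptions, not the real nova schema):

```python
import sqlite3

# Toy stand-in for the compute_nodes table: two rows sharing one
# hypervisor_hostname but carrying different uuids, as in the report.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE compute_nodes (uuid TEXT, hypervisor_hostname TEXT, deleted INTEGER)"
)
conn.executemany(
    "INSERT INTO compute_nodes VALUES (?, ?, ?)",
    [
        ("aaaa-1111", "podto1-kvm01", 0),
        ("bbbb-2222", "podto1-kvm01", 0),  # duplicate record for the same host
        ("cccc-3333", "podto1-kvm02", 0),
    ],
)

# Hostnames represented by more than one not-deleted record.
dupes = conn.execute(
    """SELECT hypervisor_hostname, COUNT(*)
       FROM compute_nodes WHERE deleted = 0
       GROUP BY hypervisor_hostname HAVING COUNT(*) > 1"""
).fetchall()
print(dupes)  # [('podto1-kvm01', 2)]
```

The same GROUP BY ... HAVING COUNT(*) > 1 pattern run against the real resource_providers and compute_nodes tables would show which hosts carry duplicated records.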
It would be
es during the
upgrade. Is the deleted value in the database the same (0) for both of
those records?
* The exception to this is for the ironic driver which re-uses the
ironic node uuid as of this change: https://review.openstack.org/#/c/571535/
hat, because I don't think you can do this:
GET /flavors?spec=hw%3Acpu_policy=dedicated
Maybe you'd do:
GET /flavors?hw%3Acpu_policy=dedicated
The problem with that is we wouldn't be able to perform any kind of
request schema validation of it, especially sin
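For what it's worth, the %3A in those example query strings is just the percent-encoded colon in the extra spec key; the round trip looks like this (the query parameter scheme itself is only the strawman from the discussion above, not an actual nova API):

```python
from urllib.parse import quote, parse_qs

key, value = "hw:cpu_policy", "dedicated"

# Encode the extra spec key for use as a query parameter name.
encoded = quote(key, safe="")
print(encoded)  # hw%3Acpu_policy

query = f"{encoded}={value}"
# The server side would decode it back before trying to match flavors.
decoded = parse_qs(query)
print(decoded)  # {'hw:cpu_policy': ['dedicated']}
```

Since the parameter names are arbitrary extra spec keys, there is no fixed set of names to write a request schema against, which is the validation problem noted above.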
After hacking on the PoC for awhile [1] I have finally pushed up a spec
[2]. Behold it in all its dark glory!
[1] https://review.openstack.org/#/c/603930/
[2] https://review.openstack.org/#/c/616037/
On 8/22/2018 8:23 PM, Matt Riedemann wrote:
Hi everyone,
I have started an etherpad for
reaks" wouldn't result in anything breaking.
that because of a critical bug, the lazy translation was
disabled in Havana to be fixed in Icehouse but I don't think that ever
happened before IBM developers dropped it upstream, which is further
justification for nuking this code from the various projects.
chpad.net/nova/+spec/user-locale-api
e
is deleted and archived/purged? Because if so, that might be something
we want to add as a nova-manage command.
[1] https://bugs.launchpad.net/nova/+bug/1800755
[2] https://review.openstack.org/#/c/409943/
On 10/18/2018 5:07 PM, Matt Riedemann wrote:
It's been deprecated since Pike, and the time has come to remove it [1].
mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and sa
let's just
wait for unified limits from keystone and let this rot on the vine".
I'd be happy to restore and update that spec.
oyment from the
CachingScheduler to the FilterScheduler + Placement.
If you are using the CachingScheduler and have a problem with its
removal, now is the time to speak up or forever hold your peace.
[1] https://review.openstack.org/#/c/611723/1
.org/p/compute-api-microversion-gap-in-osc
mpute_nodes tables for
"podto1-kvm01" and see if a record existed at one point, was deleted and
then archived to the shadow tables.
-10.log.html#t2018-10-10T15:17:17
[4]
http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html
't think we should attempt that. Just fail
if being forced and nested allocations exist on the source.
rcing
those types of servers.
ere schema could be registered on that thing, and then
you pass a handle (ID reference) to that to nova when creating the
(baremetal) server, nova pulls it down from glare and hands it off to
the virt driver. It's just that no one is doing that work.
s created (nova/cinder/glance/keystone). For
newer projects, like placement, it's not a problem because they never
created any other CLI outside of OSC.
[1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
[2] https://etherpad.openstack.org/p/nova-ptg-stein (~L721)
ies filter?
that should do it, thanks!
use those even if the filter is disabled? In the review I
had suggested that we add a pre-upgrade check which inspects the flavors
and if any of these are found, we report a warning meaning those flavors
need to be updated to use traits rather than capabilities
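A minimal sketch of the kind of pre-upgrade check suggested above (the flavor data is inlined here; a real check would read the flavors from the API database, and the "capabilities:" key prefix is the convention being deprecated):

```python
# Flavors keyed by name, each with its extra specs dict.
flavors = {
    "m1.ironic": {"capabilities:boot_mode": "uefi"},
    "m1.small": {"hw:cpu_policy": "dedicated"},
}

def flavors_using_capabilities(flavors):
    """Return flavor names still filtering on capabilities extra specs,
    which would need to be converted to traits before upgrading."""
    return sorted(
        name for name, specs in flavors.items()
        if any(key.startswith("capabilities:") for key in specs)
    )

offenders = flavors_using_capabilities(flavors)
if offenders:
    print(f"WARNING: convert these flavors to traits: {offenders}")
```

The check only warns; it is up to the operator to update the offending flavors before the filter is removed.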
ing because people assume someone else is going to cover it?
"Received" so does that
mean my non-Forum-at-all submission is now automatically a candidate for
the Forum because that would not be my intended audience (only suits and
big wigs please).
-stable-branch-eol.html#end-of-life
fore I go off and start
writing up a spec?
[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop
On 3/28/2018 4:35 PM, Jay Pipes wrote:
On 03/28/2018 03:35 PM, Matt Riedemann wrote:
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to
make availability zones a real thing that has a permanent identifier
(UUID) and store that permanent
UC is also expected to enlist others and hopefully through our
efforts others are encouraged participate and enlist others.
[1] https://etherpad.openstack.org/p/uc-stein-ptg
[2] https://etherpad.openstack.org/p/UC-Election-Qualifications
Awesome, thank you Melvin and others on the UC.
evelopers involved (of which the PTL certainly may be
one), it's the role of the TC to do the same across openstack as a
whole. If a PTL doesn't have the time or willingness to do that within
their project, they shouldn't be the PTL. The same goes for TC members IMO.
would rather he not
stop doing those things to spend all his time acting as a project
manager.
I specifically called out what Doug is doing as an example of things I
want to see the TC doing. I want more/all TC members doing that.
te to give the impression
that you must be on the TC to have such an impact.
See my reply to Thierry. This isn't what I'm saying. But I expect the
elected TC members to be *much* more *directly* involved in managing and
driving hard cross-project technical deliverables.
to prioritize driving technical deliverables to completion based
on ranked input from operators/users/SIGs over philosophical debates and
politics/bureaucracy and help to complete the technical tasks if possible.
al priority x?" Because if not, or it's so fuzzy in scope that no
one sees the way forward, document a decision and then drop it.
[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html
[2]
https://governance.open
t's OK to run "nova-status upgrade check" to verify a green
install, it's probably good to leave the old checks in place, i.e.
you're likely always going to want those cells v2 and placement checks
we added in ocata even long after ocata EOL.
le, it flips a value to hide it - you have to archive/purge those
records to get them out of the main table.
[1] https://bugs.launchpad.net/nova/+bug/1791824
[2] https://etherpad.openstack.org/p/upgrade-sig-ptg-stein
[3] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.
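The soft-delete behavior described above, in miniature (a SQLite stand-in; in nova the archive step is what "nova-manage db archive_deleted_rows" does, moving flagged rows into the shadow_* tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER, deleted INTEGER DEFAULT 0)")
conn.execute("CREATE TABLE shadow_instances (id INTEGER, deleted INTEGER)")
conn.execute("INSERT INTO instances (id) VALUES (1)")

# "Deleting" just flips a value; the row stays in the main table.
conn.execute("UPDATE instances SET deleted = id WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
print(remaining)  # 1 -- still there, only hidden from normal queries

# Archiving moves the flagged rows into the shadow table.
conn.execute(
    "INSERT INTO shadow_instances SELECT * FROM instances WHERE deleted != 0"
)
conn.execute("DELETE FROM instances WHERE deleted != 0")
print(conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0])         # 0
print(conn.execute("SELECT COUNT(*) FROM shadow_instances").fetchone()[0])  # 1
```

Purging would then be a plain DELETE against the shadow table; until archive/purge runs, the "deleted" rows still occupy the main table.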
priorities.
nt/db_api.py#L27
[6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade
On 9/6/2018 2:56 PM, Jeremy Stanley wrote:
On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote:
On 8/29/2018 1:08 PM, Jim Rollenhagen wrote:
On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur <ji...@openstack.org> wrote:
Examples of typical sessions that make for a
on in IRC: this is an example,
right? Not a new thing being announced?
// jim
FYI for those that didn't see this on the other ML:
http://lists.openstack.org/pipermail/foundation/2018-August/002617.html
On 9/5/2018 10:03 AM, Mohammed Naser wrote:
On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann wrote:
On 9/5/2018 8:47 AM, Mohammed Naser wrote:
Could placement not do what happened for a while when the nova_api
database was created?
Can you be more specific? I'm having a brain fart here an
interested in what you do plan to do for the database
migration to minimize downtime.
+openstack-operators ML since this is an operators discussion now.
n not only the
known issues with cross-cell migration but also the things I'm not
thinking about.
[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://etherpad.openstack.org/p/nova-ptg-stein-cells
are welcome here, the review, or in IRC.
[1] https://review.openstack.org/#/c/596502/
[2] https://bugs.launchpad.net/tripleo/+bug/1787910
icy to non-admins (or some other role of
user) to hit the API directly. I would ask why that is.
+operators
On 8/24/2018 4:08 PM, Matt Riedemann wrote:
On 8/23/2018 10:22 AM, Sean McGinnis wrote:
I haven't gone through the workflow, but I thought shelve/unshelve
could detach
the volume on shelving and reattach it on unshelve. In that workflow,
assuming
the networking is in pla
h feedback from operators, especially those
running with multiple cells today, as possible. Thanks in advance.
[1] https://etherpad.openstack.org/p/nova-ptg-stein-cells
left!
Please complete your User Survey by *tomorrow*, *Tuesday, August 21 at 11:59pm UTC.*
Get started now: https://www.openstack.org/user-survey
Let me know if you have any questions.
Thank you,
VW
--
Matt Van Winkle
Senior Manager, Software Engineering | Salesforce
+ops list
On 8/18/2018 10:20 PM, Matt Riedemann wrote:
On 8/13/2018 9:30 PM, Rambo wrote:
1. In a single-region situation, what will happen in the cloud as the
cluster size expands? How do we solve that? Is there a limit on the
number of physical nodes in a single region? How many nodes
t's a blueprint and not a bug fix - it's not something we'd
backport to stable branches upstream, for example.
CFP work is hard as hell. Much respect to the review panel members. It's
a thankless difficult job.
So, in lieu of being thankless, THANK YOU
-Matt
On Mon, Aug 13, 2018 at 9:59 AM, Allison Price
wrote:
> Hi everyone,
>
> One quick clarification. The speakers will be announced
e reasons why
it's never been done. I'm gonna have to rope in Kashyap and danpb since
they'd likely know more.
Dan/Kaskyap: tl;dr why doesn't the nova libvirt driver, configured for
qemu, set the guest.arch based on the hw_architecture image property so
that you can run p
L257
[4] https://libvirt.org/formatcaps.html#elementGuest
[5]
https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L5196
packages installed on an x86 system?
nly one qemu package should provide
the binary.
ave gaps in some of the things they were trying to
essentially disable in the API.
On the whole I think the quality is OK. It's not really possible to
accurately judge that when looking at a single diff this large.
rg/videos/search?search=live%20migration
gle.com/presentation/d/1P-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing
Thanks!
VW
rade it. I wanted to ask because "evacuate" as a server
operation is something else entirely (it's rebuild on another host which
is definitely disruptive to the workload on that server).
http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-u
ould be best to get in the mindset and discipline of not
relying on shared MQ between the controller services and the cells. In
other words, just do the right thing from the start rather than have to
worry about maybe changing the deployment / configuration for that one
cell down the road when
lance, not creating a
snapshot from a server. That would be 'nova image-create':
https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-image-create
What is the error message in the 400 response? It should be in the CLI
output but if not, what's in the nova-api
Just an update on an old thread, but I've been working on the
cross_az_attach=False issues again this past week and I think I have a
couple of decent fixes.
On 5/31/2017 6:08 PM, Matt Riedemann wrote:
This is a request for any operators out there that configure nova to set:
[c
On 6/7/2018 9:02 AM, Matt Riedemann wrote:
We have a nova spec [1] which is at the point that it needs some API
user (and operator) feedback on what nova API should be doing when
listing servers and there are down cells (unable to reach the cell DB or
it times out).
tl;dr: the spec proposes
uite sure.
Oh wow, great timing:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131308.html
I've also queued that up for the upcoming bug smash in China next week.
thread first.
/blob/343c2bee234568855fd9e6ba075a05c2e70f3388/nova/virt/libvirt/driver.py#L8136
However, StarlingX has a patch for that (pretty sure anyway, I know
WindRiver had one):
https://review.openstack.org/#/c/337334/
this can serve as a template for other deployment projects to
integrate similar checks into their upgrade (and install verification)
flows.
[1] https://bugs.launchpad.net/nova/+bug/1772973
[2] https://docs.openstack.org/nova/latest/cli/nova-status.html
[3] https://review.openstack.org/#/c/575125
+operators (I forgot)
On 6/7/2018 1:07 PM, Matt Riedemann wrote:
On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest about increasing the maximum number
of allowed volumes to attach to a single instance > 26. The limit of
26 is because of a historical limit
On 2/6/2018 6:44 PM, Matt Riedemann wrote:
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures rather than
compute nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So
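That counter behavior can be sketched like this (a toy model, not nova's actual code; the real knob being discussed is [compute]consecutive_build_service_disable_threshold):

```python
class BuildFailureTracker:
    """Toy model of the consecutive-build-failure counter: failures
    accumulate, but any successful build resets the count to zero."""

    def __init__(self, threshold=10):
        self.threshold = threshold
        self.failures = 0

    def record(self, success):
        if success:
            self.failures = 0  # reset on any successful build
        else:
            self.failures += 1

    @property
    def should_disable(self):
        # Only trip once the failures are consecutive and at threshold.
        return self.failures >= self.threshold

tracker = BuildFailureTracker(threshold=3)
for outcome in [False, False, True, False]:  # two failures, a success, a failure
    tracker.record(outcome)
print(tracker.failures, tracker.should_disable)  # 1 False
```

So a node that occasionally fails but keeps landing successful builds never trips the threshold; only an unbroken run of failures does.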
eview.openstack.org/#/c/557369/
+openstack-operators since we need to have more operator feedback in our
community-wide goals decisions.
+Melvin as my elected user committee person for the same reasons as
adding operators into the discussion.
On 6/4/2018 3:38 PM, Matt Riedemann wrote:
On 6/4/2018 1:07 PM, Sean McGinnis
d
with regard to contributing this change, like legal approval, license
agreements, etc? If so, please be up front about that.
that
aggregate unless you also assign those to their own aggregates.
It sounds like you're might be looking for a dedicated hosts feature?
There is an RFE from the public cloud work group for that:
https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771523
ng to privsep in the neutron agent would
eliminate the need for this option altogether and still gain the
performance benefits.
On 5/30/2018 9:30 AM, Matt Riedemann wrote:
I can start pushing some docs patches and report back here for review help.
Here are the docs patches in both nova and neutron:
https://review.openstack.org/#/q/topic:bug/1774217+(status:open+OR+status:merged)
+openstack-operators
On 5/31/2018 3:04 PM, Matt Riedemann wrote:
On 5/31/2018 1:35 PM, melanie witt wrote:
This cycle at the PTG, we had decided to start making some progress
toward removing nova-network [1] (thanks to those who have helped!)
and so far, we've landed some patches to ex
On 5/30/2018 9:41 AM, Matt Riedemann wrote:
Thanks for your patience in debugging this Massimo! I'll get a bug
reported and patch posted to fix it.
I'm tracking the problem with this bug:
https://bugs.launchpad.net/nova/+bug/1774205
I found that this has actually been fixed
ely where this breaks down and takes the project_id
from the current context (admin) rather than the instance:
https://github.com/openstack/nova/blob/stable/ocata/nova/objects/request_spec.py#L407
Thanks for your patience in debugging this Massimo! I'll get a bug
reported and patch po
/github.com/openstack/neutron/blob/f486f0/setup.cfg#L54
lters/aggregate_multitenancy_isolation.py#L50
And make sure when it fails, it matches what you'd expect. If it's None
or '' or something weird then we have a bug.
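The core of that filter check, roughly (a simplified sketch of what the linked code does around the filter_tenant_id aggregate metadata key, not the actual nova source):

```python
def host_passes(aggregate_metadata, project_id):
    """Simplified AggregateMultiTenancyIsolation check: if any of the
    host's aggregates carry filter_tenant_id metadata, only the listed
    tenants may land there; hosts with no such metadata accept anyone."""
    tenant_ids = set()
    for metadata in aggregate_metadata:
        value = metadata.get("filter_tenant_id")
        if value:  # guard against None or '' -- the suspect values above
            tenant_ids.update(t.strip() for t in value.split(","))
    return not tenant_ids or project_id in tenant_ids

print(host_passes([{"filter_tenant_id": "abc123"}], "abc123"))  # True
print(host_passes([{"filter_tenant_id": "abc123"}], "def456"))  # False
print(host_passes([{}], "def456"))                              # True
```

If the request's project_id arrives as None or '' at that comparison, the filter can't match anything sensibly, which is the bug scenario being probed above.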
if this is a new server created in your Ocata
test environment that you're trying to move? Or is this a server created
before Ocata?
e hitting this bug:
https://bugs.launchpad.net/nova/+bug/1739318
Although if this is a clean Ocata environment with new instances, you
shouldn't have that problem.
embedded in the instance and creates
allocations against the compute node provider using the flavor. It has
no explicit knowledge about granular request groups or more advanced
features like that.
's not going to be useful in this iteration, let me know what's
missing and I can add that in to the patch.
[1] https://review.openstack.org/#/c/565886/
/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
[3]
https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern
/openstack/nova/blob/de52fefa1fd52ccaac6807e5010c5f2a2dcbaab5/nova/objects/instance.py#L66
On 5/15/2018 3:48 AM, saga...@nttdata.co.jp wrote:
We store the service logs which are created by VM on that storage.
I don't mean to be glib, but have you considered maybe not doing that?
erver would
report as ACTIVE but the ports weren't wired up so ssh would fail.
Having an ACTIVE guest that you can't actually do anything with is kind
of pointless.
.
The neutron-server logs should log when external events are being sent
to nova for the given port, you probably need to trace the requests and
compare the nova-compute and neutron logs for a given server create request.
into your pike deployment, set
region_name to RegionOne and see if it makes a difference (although I
thought RegionOne was the default if not specified?).
function).
decide to cancel it manually.
What storage backend are you using? What are some reasons that it has
stalled in the past?
stone person, but I'd think this isn't a unique problem.
On 5/2/2018 12:39 PM, Matt Riedemann wrote:
FWIW, I think we can also backport the data migration CLI to stable
branches once we have it available so you can do your migration in let's
say Queens before g
FYI, here is the start on the data migration CLI:
https://review.openstack.or
ange
would be, let's say they change the hypervisor or something less drastic
but still image property invalidating.
ave it available so you can do your migration in let's
say Queens before getting to Rocky.
ional in Newton, required in Ocata).
[1] https://review.openstack.org/#/c/492210/
[2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement
R+status:merged)