On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth wrote:
> I have a request to do $SUBJECT in relation to a V2V workflow. The use
> case here is conversion of a VM/Physical which was previously powered
> off. We want to move its data, but we don't want to be powering on
> stuff which wasn't
On 11/27/2018 11:32 AM, Ignazio Cassano wrote:
Hi All,
Does anyone know where the hypervisor UUID is retrieved?
Sometimes when updating KVM nodes with yum update it changes, and in the nova
database two UUIDs are assigned to the same node.
regards
Ignazio
All-
Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing the "healing" aspect of
the
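For anyone wanting to try this, a minimal sketch of the nova.conf change being
discussed (assuming the patch lands as proposed; the option itself already
exists, with a default refresh interval of 300 seconds):

    [compute]
    # 0 would disable periodic refresh of the report client's provider cache
    resource_provider_association_refresh = 0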
I came across this bug [1] in triage today and I thought this was fixed
already [2] but either something regressed or there is more to do here.
I'm mostly just wondering, are operators already running any kind of
script which purges old instance_faults table records before an instance
is
On 10/18/2018 5:07 PM, Matt Riedemann wrote:
It's been deprecated since Pike, and the time has come to remove it [1].
mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and said it will work
It's been deprecated since Pike, and the time has come to remove it [1].
mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and said it will work for migrating his deployment from the
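For reference, a sketch of invoking the CLI mentioned above (available as of
Rocky; batching/verbosity flags vary by release, so only the bare command is
shown):

    # create missing placement allocations for existing instances
    nova-manage placement heal_allocations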
tl;dr: Is anyone calling 'nova-novncproxy' or 'nova-serialproxy' with
CLI arguments instead of a configuration file?
I've been doing some untangling of the console proxy services that nova
provides and trying to clean up the documentation for same [1]. As part
of these fixes, I noted a couple of
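For context, a typical invocation today uses a config file rather than CLI
arguments, e.g. (illustrative paths):

    nova-novncproxy --config-file /etc/nova/nova.conf
    nova-serialproxy --config-file /etc/nova/nova.conf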
tl;dr: I'm proposing a new parameter to the server stop (and suspend?)
APIs to control if nova shelve offloads the server.
Long form: This came up during the public cloud WG session this week
based on a couple of feature requests [1][2]. When a user stops/suspends
a server, the hypervisor
Excerpts from Matt Riedemann's message of 2018-09-06 15:58:41 -0500:
> I wanted to recap some upgrade-specific stuff from today outside of the
> other [1] technical extraction thread.
>
> Chris has a change up for review [2] which prompted the discussion.
>
> That change makes placement only
I wanted to recap some upgrade-specific stuff from today outside of the
other [1] technical extraction thread.
Chris has a change up for review [2] which prompted the discussion.
That change makes placement only work with placement.conf, not
nova.conf, but does get a passing tempest run in
On 8/29/2018 3:21 PM, Tim Bell wrote:
Sounds like a good topic for PTG/Forum?
Yeah it's already on the PTG agenda [1][2]. I started the thread because
I wanted to get the ball rolling as early as possible, and with people
that won't attend the PTG and/or the Forum, to weigh in on not only
such as preserving IP
addresses etc.
Sounds like a good topic for PTG/Forum?
Tim
-----Original Message-----
From: Jay Pipes
Date: Wednesday, 29 August 2018 at 22:12
To: Dan Smith , Tim Bell
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] [nova][cinder][neut
On 08/29/2018 04:04 PM, Dan Smith wrote:
- The VMs to be migrated are generally not expensive
configurations, just hardware lifecycles where boxes go out of
warranty or computer centre rack/cooling needs re-organising. For
CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
> - The VMs to be migrated are generally not expensive
> configurations, just hardware lifecycles where boxes go out of
> warranty or computer centre rack/cooling needs re-organising. For
> CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
> ~30% pet share)
> - We make a
2018 at 18:47
To: Jay Pipes
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold
migration
> A release upgrade dance involves coordination of multiple moving
> parts. It's about as similar to this s
On 08/29/2018 02:26 PM, Chris Friesen wrote:
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought cold migration of instances with such devices was
On 08/29/2018 12:39 PM, Dan Smith wrote:
If we're going to discuss removing move operations from Nova, we should
do that in another thread. This one is about making existing operations
work :)
OK, understood. :)
The admin only "owns" the instance because we have no ability to
transfer
> A release upgrade dance involves coordination of multiple moving
> parts. It's about as similar to this scenario as I can imagine. And
> there's a reason release upgrades are not done entirely within Nova;
> clearly an external upgrade tool or script is needed to orchestrate
> the many steps and
I respect your opinion but respectfully disagree that this is something
we need to spend our time on. Comments inline.
On 08/29/2018 10:47 AM, Dan Smith wrote:
* Cells can shard across flavors (and hardware type) so operators
would like to move users off the old flavors/hardware (old cell) to
>> * Cells can shard across flavors (and hardware type) so operators
>> would like to move users off the old flavors/hardware (old cell) to
>> new flavors in a new cell.
>
> So cell migrations are kind of the new release upgrade dance. Got it.
No, cell migrations are about moving instances
Sorry for delayed response. Was on PTO when this came out. Comments
inline...
On 08/22/2018 09:23 PM, Matt Riedemann wrote:
Hi everyone,
I have started an etherpad for cells topics at the Stein PTG [1]. The
main issue in there right now is dealing with cross-cell cold migration
in nova.
This is just an FYI that I have proposed that we deprecate the
core/ram/disk filters [1]. We should have probably done this back in
Pike when we removed them from the default enabled_filters list and also
deprecated the CachingScheduler, which is the only in-tree scheduler
driver that benefits
I think in our case we’d only migrate between cells if we know the network and
storage is accessible and would never do it if not.
Thinking of moving from old to new hardware at a cell level.
If storage and network aren't available, ideally it would fail at the API request.
There is also ceph
Hi everyone,
I have started an etherpad for cells topics at the Stein PTG [1]. The
main issue in there right now is dealing with cross-cell cold migration
in nova.
At a high level, I am going off these requirements:
* Cells can shard across flavors (and hardware type) so operators would
On 20-08-18 16:29:52, Matthew Booth wrote:
> For those who aren't familiar with it, nova's volume-update (also
> called swap volume by nova devs) is the nova part of the
> implementation of cinder's live migration (also called retype).
> Volume-update is essentially an internal cinder<->nova api,
For those who aren't familiar with it, nova's volume-update (also
called swap volume by nova devs) is the nova part of the
implementation of cinder's live migration (also called retype).
Volume-update is essentially an internal cinder<->nova api, but as
that's not a thing it's also unfortunately
On 8/11/2018 12:50 AM, Chris Apsey wrote:
This sounds promising and there seems to be a feasible way to do this,
but it also sounds like a decent amount of effort and would be a new
feature in a future release rather than a bugfix - am I correct in that
assessment?
Yes I'd say it's a
This sounds promising and there seems to be a feasible way to do this, but
it also sounds like a decent amount of effort and would be a new feature in
a future release rather than a bugfix - am I correct in that assessment?
On August 9, 2018 13:30:31 "Daniel P. Berrangé" wrote:
On Thu,
Exactly. And I agree, it seems like hw_architecture should dictate
which emulator is chosen, but as you mentioned it currently doesn't. I'm
not sure if this is a bug and it's supposed to 'just work', or just
something that was never fully implemented (intentionally) and would be
more of a
On 8/8/2018 2:42 PM, Chris Apsey wrote:
qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86
packages, but they perform system-mode emulation (via dynamic
instruction translation) for those target environments. So, you run
qemu-system-ppc64 on an x86 host in order to get a
Matt,
qemu-system-arm, qemu-system-ppc64, etc. in our environment are all x86
packages, but they perform system-mode emulation (via dynamic
instruction translation) for those target environments. So, you run
qemu-system-ppc64 on an x86 host in order to get a ppc64-emulated VM.
Our use case
On 8/7/2018 8:54 AM, Chris Apsey wrote:
We don't actually have any non-x86 hardware at the moment - we're just
looking to run certain workloads in qemu full emulation mode sans KVM
extensions (we know there is a huge performance hit - it's just for a
few very specific things). The hosts I'm
Hey Matt,
We don't actually have any non-x86 hardware at the moment - we're just
looking to run certain workloads in qemu full emulation mode sans KVM
extensions (we know there is a huge performance hit - it's just for a
few very specific things). The hosts I'm talking about are normal
On 8/5/2018 1:43 PM, Chris Apsey wrote:
Trying to enable some alternate (non-x86) architectures on xenial +
queens. I can load up images and set the property correctly according
to the supported values
(https://docs.openstack.org/nova/queens/configuration/config.html) in
On 8/7/2018 1:10 AM, Flint WALRUS wrote:
I didn't have time to check the StarlingX code quality. How did it feel
while you were doing your analysis?
I didn't dig into the test diffs themselves, but it was my impression
that from what I was poking around in the local git repo, there were
Hi matt, everyone,
I just read your analysis and would like to thank you for such work. I
really think there are numerous features included/used in this Nova rework
that would be highly beneficial for Nova and its users.
I hope people will fairly appreciate your work.
I didn't have time to
In case you haven't heard, there was this StarlingX thing announced at
the last summit. I have gone through the enormous nova diff in their
repo and the results are in a spreadsheet [1]. Given the enormous
spreadsheet (see a pattern?), I have further refined that into a set of
high-level
All,
Trying to enable some alternate (non-x86) architectures on xenial +
queens. I can load up images and set the property correctly according
to the supported values
(https://docs.openstack.org/nova/queens/configuration/config.html) in
image_properties_default_architecture. From what I
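For anyone following along, the image side of this looks roughly like the
following sketch (the exact property values depend on which architectures your
hosts can emulate):

    # tag the image so the ImagePropertiesFilter and libvirt driver know
    # it targets a non-x86 guest architecture
    openstack image set --property architecture=ppc64 my-ppc64-image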
Thanks, Matt. Those are all good suggestions, and we will incorporate
your feedback into our plans.
On 07/23/2018 05:57 PM, Matt Riedemann wrote:
> I'll try to help a bit inline. Also cross-posting to openstack-dev and
> tagging with [nova] to highlight it.
>
> On 7/23/2018 10:43 AM, Jonathan
I'll try to help a bit inline. Also cross-posting to openstack-dev and
tagging with [nova] to highlight it.
On 7/23/2018 10:43 AM, Jonathan Mills wrote:
I am looking at implementing CellsV2 with multiple cells, and there are a
few things I'm seeking clarification on:
1) How does a
Just an update on an old thread, but I've been working on the
cross_az_attach=False issues again this past week and I think I have a
couple of decent fixes.
On 5/31/2017 6:08 PM, Matt Riedemann wrote:
This is a request for any operators out there that configure nova to set:
[cinder]
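For context, the configuration in question looks like this (a sketch; the
option name is taken from the earlier messages in the thread):

    [cinder]
    cross_az_attach = False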
Hello Devs and Ops,
I've created an etherpad where we can start collecting ideas for topics
to cover at the Stein PTG. Please feel free to add your comments and
topics with your IRC nick next to it to make it easier to discuss with you.
https://etherpad.openstack.org/p/nova-ptg-stein
Hello Stackers,
Recently, we've received interest about increasing the maximum number of
allowed volumes to attach to a single instance > 26. The limit of 26 is
because of a historical limitation in libvirt (if I remember correctly)
and is no longer limited at the libvirt level in the present
On 2/6/2018 6:44 PM, Matt Riedemann wrote:
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than
compute nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So
We have a nova spec [1] which is at the point that it needs some API
user (and operator) feedback on what nova API should be doing when
listing servers and there are down cells (unable to reach the cell DB or
it times out).
tl;dr: the spec proposes to return "shell" instances which have the
On 06/04/2018 05:43 AM, Tobias Urdin wrote:
Hello,
I have received a question about a more specialized use case where we need to
isolate several hypervisors to a specific project. My first thought was
using nova flavors for only that project and adding extra spec properties to
use a specific
I saw now in the docs that multiple aggregate_instance_extra_specs keys
should be a comma-separated list.
But other than that, would the below do what I'm looking for?
It has a very high maintenance burden when you have a lot of hypervisors and
are steadily adding new ones, but I can't see any other way to
Hello,
Thanks for the reply Matt.
The hard thing here is that I have to ensure it the other way around as
well, i.e. other instances cannot be allowed to land on those "reserved"
hypervisors.
I assume I could do something like in [1] and also set key-value
metadata on all flavors to select a host
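A rough sketch of the aggregate-plus-extra-spec approach being described
(names are illustrative; as noted above, keeping other flavors off these hosts
still requires setting metadata on every other flavor):

    # group the reserved hypervisors into an aggregate and tag it
    openstack aggregate create project-x-hosts
    openstack aggregate add host project-x-hosts compute-01
    openstack aggregate set --property reserved_for=project-x project-x-hosts

    # only flavors carrying the matching extra spec will be scheduled there
    openstack flavor set --property aggregate_instance_extra_specs:reserved_for=project-x m1.project-x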
On 6/4/2018 6:43 AM, Tobias Urdin wrote:
I have received a question about a more specialized use case where we
need to isolate several hypervisors
to a specific project. My first thought was using nova flavors for only
that project and adding extra spec properties to use a specific host
Hello,
I have received a question about a more specialized use case where we
need to isolate several hypervisors
to a specific project. My first thought was using nova flavors for only
that project and adding extra spec properties to use a specific host
aggregate but this
means I need to assign
Hello Operators and Devs,
This cycle at the PTG, we had decided to start making some progress
toward removing nova-network [1] (thanks to those who have helped!) and
so far, we've landed some patches to extract common network utilities
from nova-network core functionality into separate
On 5/28/2018 7:31 AM, Sylvain Bauza wrote:
That said, given I'm now working on using Nested Resource Providers for
VGPU inventories, I wonder about a possible upgrade problem with VGPU
allocations. Given that :
- in Queens, VGPU inventories are for the root RP (ie. the compute
node RP), but,
On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann
wrote:
> I've written a nova-manage placement heal_allocations CLI [1] which was a
> TODO from the PTG in Dublin as a step toward getting existing
> CachingScheduler users to roll off that (which is deprecated).
>
> During the
I've written a nova-manage placement heal_allocations CLI [1] which was
a TODO from the PTG in Dublin as a step toward getting existing
CachingScheduler users to roll off that (which is deprecated).
During the CERN cells v1 upgrade talk it was pointed out that CERN was
able to go from
CERN has upgraded to Cells v2 and is doing performance testing of the
scheduler and were reporting some things today which got us back to this
bug [1]. So I've started pushing some patches related to this but also
related to an older blueprint I created [2]. In summary, we do quite a
bit of
I've started an etherpad related to the Vancouver Forum session on
extracting placement from nova. It's mostly just an outline for
now but is evolving:
https://etherpad.openstack.org/p/YVR-placement-extraction
If we can get some real information in there before the session we
are much more
The baremetal scheduling options were deprecated in Pike [1] and the
ironic_host_manager was deprecated in Queens [2] and is now being
removed [3]. Deployments must use resource classes now for baremetal
scheduling. [4]
The large host subset size value is also no longer needed. [5]
I've gone
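As a reminder, resource class based scheduling for ironic looks roughly like
this (a sketch following the ironic/nova docs; names are illustrative):

    # report a custom resource class on the ironic node
    openstack baremetal node set --resource-class baremetal-gold <node-uuid>

    # have the flavor request one unit of it and zero out the VCPU/RAM/disk amounts
    openstack flavor set bm-gold \
        --property resources:CUSTOM_BAREMETAL_GOLD=1 \
        --property resources:VCPU=0 \
        --property resources:MEMORY_MB=0 \
        --property resources:DISK_GB=0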
There is a compute REST API change proposed [1] which will allow users
to pass trusted certificate IDs to be used with validation of images
when creating or rebuilding a server. The trusted cert IDs are based on
certificates stored in some key manager, e.g. Barbican.
The full nova spec is
On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote:
+openstack-operators (apologies that I forgot to add originally)
On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
Hey everyone,
Let's collect forum topic brainstorming ideas for the Forum sessions in
Vancouver in this etherpad [0].
Hi all,
A CI issue [1] caused by tempest thinking some filters are enabled
when they're really not, and a proposed patch [2] to add
(Same|Different)HostFilter to the default filters as a workaround, has
led to a discussion about what filters should be enabled by default in
nova.
The default
+openstack-operators (apologies that I forgot to add originally)
On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
Hey everyone,
Let's collect forum topic brainstorming ideas for the Forum sessions in
Vancouver in this etherpad [0]. Once we've brainstormed, we'll select
and submit our
Hi,
https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
dependency to nova's requirements.txt. When I objected, the counter argument
was that we have examples of Windows-specific dependencies (os-win) and
PowerVM-specific dependencies in that file already.
I think perhaps all
It works for me in Newton.
Try it at your own risk :)
Cheers,
Saverio
2018-04-09 13:23 GMT+02:00 Anwar Durrani :
> No, this is a different one. Should I try this one? Will it work?
>
> On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote:
>>
>> Hello
No, this is a different one. Should I try this one? Will it work?
On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto wrote:
> Hello Anwar,
>
> are you talking about this script ?
> https://github.com/openstack/osops-tools-contrib/blob/
> master/nova/nova-libvirt-compare.py
>
> it
Hi All,
Nova resources are out of sync in our Ocata deployment: the values showing on
the dashboard don't match the actual running instances. I remember I had a
script to auto-sync resources, but that script is failing in this case.
Kindly help here.
--
Thanks & regards,
Anwar M. Durrani
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent wrote:
> On Mon, 2 Apr 2018, Alex Schultz wrote:
>
>> So this is/was valid. A few years back there were some perf tests done
>> with various combinations of process/threads and for Keystone it was
>> determined that threads should
On 04/03/2018 06:48 AM, Chris Dent wrote:
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back there were some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process
On Mon, 2 Apr 2018, Alex Schultz wrote:
So this is/was valid. A few years back there were some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process count (hence the bug). Now I guess the
On Fri, Mar 30, 2018 at 11:11 AM, iain MacDonnell
wrote:
>
>
> On 03/29/2018 02:13 AM, Belmiro Moreira wrote:
>>
>> Some lessons so far...
>> - Scale keystone accordingly when enabling placement.
>
>
> Speaking of which; I suppose I have the same question for keystone
------------------ Original ------------------
From: "李杰"<li...@unitedstack.com>;
Date: Thu, Mar 29, 2018 05:24 PM
To: "openstack-operators"<openstack-operators@lists.openstack.org>;
Subject: [Openstack-operators] [nova] about use spice console
Hi all,
Now I want to use
On Thu, 29 Mar 2018, iain MacDonnell wrote:
If I'm reading
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
right, it seems that the MPM is not pertinent when using WSGIDaemonProcess.
It doesn't impact the number of wsgi processes that will exist or how
they
On 03/29/2018 04:24 AM, Chris Dent wrote:
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the WSGI number of processes/threads considering where
placement
is running.
If I'm reading
On 3/29/2018 12:05 PM, Chris Dent wrote:
Other suggestions? I'm looking at things like turning off
scheduler_tracks_instance_changes, since affinity scheduling is not
needed (at least so-far), but not sure that that will help with
placement load (seems like it might, though?)
This won't
On Thu, 29 Mar 2018, iain MacDonnell wrote:
placement python stack and kicks out the 401. So this mostly
indicates that socket accept is taking forever.
Well, this test connects and gets a 400 immediately:
echo | nc -v apihost 8778
so I don't think it's at the socket level, but, I
On 03/29/2018 01:19 AM, Chris Dent wrote:
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a
few moderately-sized deployments (~200 nodes, ~4k instances),
currently on Ocata, and instance creation is getting very slow as they
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the WSGI number of processes/threads considering where placement
is running.
Another option is to switch to nginx and uwsgi. In situations where
the
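To make the mod_wsgi part of that advice concrete, a sketch of the kind of
directive being tuned (values are illustrative, not a recommendation):

    WSGIDaemonProcess nova-placement-api processes=8 threads=1 display-name=%{GROUP}
    WSGIProcessGroup nova-placement-api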
Hi all,
Now I want to use the SPICE console to replace noVNC for instances, but the
OpenStack documentation is a bit sparse on what configuration parameters to
enable for SPICE console access. My result is that the nova-compute and
nova-consoleauth services failed, and the log tells me the
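Not a fix for the failure itself, but for reference a minimal sketch of the
SPICE-related nova.conf sections (addresses are illustrative):

    [vnc]
    enabled = false

    [spice]
    enabled = true
    agent_enabled = true
    html5proxy_base_url = http://CONTROLLER_IP:6082/spice_auto.html
    server_listen = 0.0.0.0
    server_proxyclient_address = COMPUTE_IP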
Hi,
with Ocata upgrade we decided to run local placements (one service per
cellV1) because we were nervous about possible scalability issues but
especially the increase in scheduling time. Fortunately, this has now been
addressed with the placement-req-filter work.
We started slowly to aggregate
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a few
moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata,
and instance creation is getting very slow as they fill up.
This should be well within the
Sylvain has had a spec up for awhile [1] about solving an old issue
where admins can rename an AZ (via host aggregate metadata changes)
while it has instances in it, which likely results in at least user
confusion, but probably other issues later if you try to move those
instances, e.g. the
Thanks for the headsup Matt.
On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann wrote:
> I just wanted to give a heads up to anyone thinking about upgrading to
> queens that nova has released a 17.0.1 patch release [1].
>
> There are some pretty important fixes in there that
I just wanted to give a heads up to anyone thinking about upgrading to
queens that nova has released a 17.0.1 patch release [1].
There are some pretty important fixes in there that came up after the
queens GA so if you haven't upgraded yet, I recommend going straight to
that one instead of
Hi Amit
(re-titled thread with scoped topics)
As Matt has already referenced, [0] is a good starting place for using the
nova-lxd driver.
On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote:
> Hello,
>
> I have a running OpenStack Ocata setup on which I am able to launch VMs.
>
I triaged this bug a couple of weeks ago:
https://bugs.launchpad.net/nova/+bug/1746483
It looks like it's been regressed since Mitaka when that filter started
using the RequestSpec object rather than legacy filter_properties dict.
Looking a bit deeper though, it looks like this filter never
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures than compute
nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So if your limit is the default (10) and you
All,
This was the core issue - setting
consecutive_build_service_disable_threshold = 0 in nova.conf (on
controllers and compute nodes) solved this. It was being triggered by
neutron dropping requests (and/or responses) for vif-plugging due to cpu
usage on the neutron endpoints being pegged
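For anyone hitting the same thing, the setting described above is a nova.conf
option (a sketch; 0 disables the automatic disabling of the service):

    [compute]
    consecutive_build_service_disable_threshold = 0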
That looks promising. I'll report back to confirm the solution.
Thanks!
---
v/r
Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net
On 2018-01-31 04:40 PM, Matt Riedemann wrote:
On 1/31/2018 3:16 PM, Chris Apsey wrote:
All,
Running in to a strange issue I haven't seen before.
There's [1], but I would have expected you to see error logs like [2] if
that's what you're hitting.
[1]
https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L627-L645
[2]
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1714-L1716
efried
On 01/31/2018 03:16
On 1/31/2018 3:16 PM, Chris Apsey wrote:
All,
Running in to a strange issue I haven't seen before.
Randomly, the nova-compute services on compute nodes are disabling
themselves (as if someone ran openstack compute service set --disable
hostX nova-compute). When this happens, the node
All,
Running in to a strange issue I haven't seen before.
Randomly, the nova-compute services on compute nodes are disabling
themselves (as if someone ran openstack compute service set --disable
hostX nova-compute). When this happens, the node continues to report
itself as 'up' - the service
Hi all,
Nova currently allows us to filter instances by fixed IP address(es). This
feature is known to be useful in an operational scenario where cloud
administrators detect abnormal traffic from an IP address and want to trace it
down to the instance that this IP address belongs to. This feature
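For reference, the existing filter can be used from the CLI roughly like this
(a sketch; --ip matches against the servers' fixed IPs):

    openstack server list --all-projects --ip 10.0.0.5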
Hello Stackers,
This is a heads up to any of you using the AggregateCoreFilter,
AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
These filters have effectively allowed operators to set overcommit
ratios per aggregate rather than per compute node in <= Newton.
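As a reminder, the per-aggregate overrides these filters read are plain
aggregate metadata, e.g. (a sketch; key names follow the filter documentation):

    openstack aggregate set --property cpu_allocation_ratio=16.0 my-aggregate
    openstack aggregate set --property ram_allocation_ratio=1.5 my-aggregate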
Hi everyone,
I wanted to point out that the nova API patch for volume multiattach
support is available for review:
https://review.openstack.org/#/c/271047/
It's actually a series of changes, but that is the last one that enables
the feature in nova. It relies on the 2.59 compute API
On 1/10/2018 1:49 PM, Alec Hothan (ahothan) wrote:
The main problem is that the nova API does not return sufficient detail
on the reason for a NoValidHostFound and perhaps that should be fixed at
that level. Extending the API to return a reason field which is a json
dict that is returned by
the log to find out why).
Regards,
Alec
From: Matt Riedemann <mriede...@gmail.com>
Date: Wednesday, January 10, 2018 at 11:33 AM
To: "openstack-operators@lists.openstack.org"
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-oper
On 1/10/2018 1:15 AM, Alec Hothan (ahothan) wrote:
+1 on the “no valid host found”, this one should be at the very top of
the to-be-fixed list.
Very difficult to troubleshoot filters in lab testing (let alone in
production) as there can be many of them. This will get worse with more
NFV
To: "openstack-operators@lists.openstack.org"
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-operators][nova] Verbosity of
nova scheduler
On Tue, Jan 9, 2018 at 8:18 AM, Piotr Bielak
<piotr.bie...@corp.ovh.com<mailto:piotr.bie...@corp.ovh.com>> w
On 2018-01-09 12:38:05 -0600 (-0600), Matt Riedemann wrote:
[...]
> Also, there is a noticeable impact to performance when running the
> scheduler with debug logging enabled which is why it's not
> recommended to run with debug enabled in production.
Further, OpenStack considers security risks