On 8/2/2018 7:27 PM, Jay Pipes wrote:
It's not an exception. It's the normal course of events. NoValidHosts means
there were no compute nodes that met the requested resource amounts.
To clarify, I didn't mean a Python exception. I concede that I
should've chosen a better word for the type of
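For anyone hitting this: one way to see the same information the
scheduler acts on is to ask placement directly for allocation candidates
with the amounts the failed request asked for. A minimal sketch in
Python, assuming a reachable placement endpoint and a valid Keystone
token (both placeholders here):

    import requests

    # Placeholders; substitute your deployment's endpoint and token.
    PLACEMENT = "http://placement.example.com/placement"
    TOKEN = "<keystone-token>"

    # GET /allocation_candidates (placement microversion 1.10+) returns
    # the providers that could satisfy the requested resource amounts.
    resp = requests.get(
        PLACEMENT + "/allocation_candidates",
        params={"resources": "VCPU:4,MEMORY_MB:8192,DISK_GB:80"},
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "placement 1.10",
        },
    )
    resp.raise_for_status()
    candidates = resp.json()["allocation_requests"]
    # An empty list is the "no compute nodes met the requested resource
    # amounts" case described above.
    print("%d candidate allocation(s)" % len(candidates))

An empty result narrows it down to the resource amounts themselves; if
candidates do come back, the rejection likely happened later, in the
scheduler filters.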
On 8/3/2018 9:14 AM, Chris Friesen wrote:
I'm of two minds here.
On the one hand, you have the case where the end user has accidentally
requested some combination of things that isn't normally available, and
they need to be able to ask the provider what they did wrong. I agree
that this
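A concrete, hypothetical example of such a request: a flavor that
requires a CPU trait no compute node actually exposes will fail with
"No valid host was found", and nothing in the API response tells the
user which part of the request was unsatisfiable:

    # Create a flavor that requires AVX-512 support via a placement trait.
    openstack flavor create --vcpus 4 --ram 8192 --disk 80 avx512.large
    openstack flavor set avx512.large \
        --property trait:HW_CPU_X86_AVX512F=required

    # If no host reports HW_CPU_X86_AVX512F, every boot using this
    # flavor fails, and only the operator's logs say why.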
On 8/2/2018 10:07 AM, Ben Nemec wrote:
Now it seems like I need to do:
1) Change disk_allocation_ratio in nova.conf
2) Restart nova-scheduler, nova-compute, and nova-placement (or some
subset of those?)
Restarting the placement service wouldn't have any effect here.
Wouldn't I need to
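For context on why: placement never reads the allocation ratios from
nova.conf itself. nova-compute reports them as part of each resource
provider's inventory, and placement computes effective capacity from
that inventory record. A small illustrative sketch of the calculation
(the formula is placement's; the numbers are made up):

    def usable_capacity(total, reserved, allocation_ratio):
        # Effective capacity for one inventory record, e.g. DISK_GB.
        return (total - reserved) * allocation_ratio

    # A hypothetical 1000 GB disk with 50 GB reserved:
    print(usable_capacity(1000, 50, 1.0))  # 950.0
    print(usable_capacity(1000, 50, 1.5))  # 1425.0

So a changed disk_allocation_ratio only takes effect once nova-compute
pushes updated inventory to placement, which is why restarting
nova-compute (or waiting for its periodic resource update) is the step
that matters, and restarting placement is not.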
Excerpts from Matt Riedemann's message of 2018-08-04 16:44:26 -0500:
> I've reported a nova bug for this:
>
> https://bugs.launchpad.net/nova/+bug/1785425
>
> But I'm not sure what is the best way to fix it now with the zuul v3
> hotness. We had an irrelevant-files entry in project-config for the
> tempest-full job but we don't have that for tempest-full-py3, so should
> we
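For what it's worth, with zuul v3 an irrelevant-files entry can live on
the project's use of the job in the repo's own .zuul.yaml rather than in
project-config. A hypothetical sketch; the actual file patterns should
be copied from whatever the old tempest-full entry listed:

    - project:
        check:
          jobs:
            - tempest-full-py3:
                irrelevant-files:
                  - ^doc/.*$
                  - ^.*\.rst$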
On 8/2/2018 3:04 PM, Chris Friesen wrote:
At a previous Summit[1] there were some operators that said they just
always ran nova-scheduler with debug logging enabled in order to deal
with this issue, but that it was a pain to isolate the useful logs from
the not-useful ones.
Using CONF.trace
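For the record, the blunt version of that workaround is turning on debug
only in the config file read by nova-scheduler, so the rest of the
deployment keeps its normal log volume. A minimal sketch (oslo.log's
default_log_levels option can scope this per-logger if needed, but
that's untested here):

    [DEFAULT]
    # Debug logging for this service only; other services stay at INFO.
    debug = True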
We recently deployed Magnum and I've been making my way through getting
both Swarm and Kubernetes running. I also ran into some initial issues.
These notes may or may not help, but I thought I'd share them just in case:
* We're using Barbican for SSL. I have not tried with the internal
x509keypair.
* I
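One more note that may save someone time: the Barbican-vs-x509keypair
choice above is controlled in magnum.conf. A sketch of the relevant
section (barbican is the default; x509keypair stores the certificates
in the Magnum database instead):

    [certificates]
    cert_manager_type = barbican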