I want to echo the effectiveness of this change - we had vif failures when
launching more than 50 or so cirros instances simultaneously, but moving to
daemon mode made this issue disappear, and we have since tested at 5x that number.
This has been the single biggest scalability improvement to date.
On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote:
The VM that I am trying to migrate was created when the Cloud was
already running Ocata
OK, I'd added the tenant_id variable in scope to the log message here:
On 5/29/2018 12:44 PM, Jay Pipes wrote:
Either that, or the wrong project_id is being used when attempting to
migrate? Maybe the admin project_id is being used instead of the
original project_id who launched the instance?
Could be, but we should be pulling the request spec from the database
On 05/29/2018 01:06 PM, Matt Riedemann wrote:
I'm wondering if the RequestSpec.project_id is null? Like, I wonder if
you're hitting this bug:
https://bugs.launchpad.net/nova/+bug/1739318
Although if this is a clean Ocata environment with new instances, you
shouldn't have that problem.
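
In case it helps to rule that out quickly: below is a rough, illustrative Python sketch of how one could look at the stored RequestSpec for a single instance and see whether its project_id is null. It assumes direct read access to the nova_api database and that the request_specs table keeps the serialized spec as JSON in a spec column keyed by instance_uuid; the connection details and the instance UUID are placeholders, so adjust for your deployment.

# Rough check of the stored RequestSpec.project_id for one instance.
# Assumptions (adjust to your deployment): the nova_api MySQL database is
# reachable, and request_specs has instance_uuid and spec (JSON text) columns.
import json
import pymysql

INSTANCE_UUID = 'replace-with-the-instance-uuid'  # placeholder

conn = pymysql.connect(host='controller', user='nova', password='secret',
                       database='nova_api')
try:
    with conn.cursor() as cur:
        cur.execute('SELECT spec FROM request_specs WHERE instance_uuid = %s',
                    (INSTANCE_UUID,))
        row = cur.fetchone()
        if row is None:
            print('no request spec stored for this instance')
        else:
            # The spec is a serialized versioned object; its fields live
            # under the nova_object.data key.
            data = json.loads(row[0]).get('nova_object.data', {})
            print('project_id in stored RequestSpec:', data.get('project_id'))
finally:
    conn.close()

If that prints None, it would point at the bug above; if it shows the admin project rather than the project that launched the instance, that would match the other theory.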
Excerpts from Petr Kovar's message of 2018-05-28 16:03:41 +0200:
> On Thu, 24 May 2018 07:19:29 -0700
> "Jonathan D. Proulx" wrote:
>
> > My intention based on current understanding would be to create a git
> > repo called "osops-docs" as this fits current naming and the initial
> > document we
On 5/29/2018 11:10 AM, Jay Pipes wrote:
The hosts you are attempting to migrate *to* do not have the
filter_tenant_id property set to the same tenant ID as the compute host
2 that originally hosted the instance.
That is why you see this in the scheduler logs when evaluating the
fitness of compute host 1 and compute host 3:
"fails
Good plan. I'm just getting on email now and hadn't even considered IRC
yet. :^)
On Tue, May 29, 2018 at 5:53 AM, Erik McCormick wrote:
>
>
> On Tue, May 29, 2018, 7:15 AM Chris Morgan wrote:
>
>> Some of us will be only just returning to work today after being away all
>> week last week for
On 5/28/2018 7:31 AM, Sylvain Bauza wrote:
That said, given I'm now working on using Nested Resource Providers for
VGPU inventories, I wonder about a possible upgrade problem with VGPU
allocations. Given that:
- in Queens, VGPU inventories are for the root RP (i.e. the compute
node RP), but,
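
To make the upgrade concern a bit more concrete, here is a small illustrative sketch (made-up resource provider names, simplified structure, not actual placement payloads) of roughly what a VGPU allocation looks like when the inventory sits on the compute node root RP versus on a nested child RP:

# Illustrative only: placeholder resource provider identifiers and
# simplified allocation shapes.
compute_node_rp = 'compute-node-rp-uuid'   # root RP for the compute node
pgpu_child_rp = 'pgpu-child-rp-uuid'       # nested RP for one physical GPU

# Queens-style: the VGPU resource is part of the allocation on the root RP.
queens_allocations = {
    compute_node_rp: {'resources': {'VCPU': 2, 'MEMORY_MB': 4096, 'VGPU': 1}},
}

# Nested-RP-style: the VGPU piece moves to the child provider, so existing
# allocations written in the Queens form would need to be moved on upgrade,
# which is the concern raised above.
nested_allocations = {
    compute_node_rp: {'resources': {'VCPU': 2, 'MEMORY_MB': 4096}},
    pgpu_child_rp: {'resources': {'VGPU': 1}},
}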
On Tue, May 29, 2018, 7:15 AM Chris Morgan wrote:
> Some of us will be only just returning to work today after being away all
> week last week for the (successful) OpenStack Summit, therefore I propose
> we skip having a meeting today but regroup next week?
>
+1
> Chris
>
> --
> Chris Morgan
On 05/29/2018 06:14 AM, Chris Morgan wrote:
Some of us will be only just returning to work today after being away
all week last week for the (successful) OpenStack Summit, therefore I
propose we skip having a meeting today but regroup next week?
Chris
Makes sense to me. I know I have a lot
Hi,
I am the PTL of the OPNFV Doctor project.
I have been working for a couple of years on figuring out infrastructure
maintenance in interaction with the applications running on top of it. I have
looked into Nova and Craton, and held several Ops sessions. Over the past half
a year there have been a couple of different PoCs,
I have a small testbed OpenStack cloud (running Ocata) where I am trying to
debug a problem with Nova scheduling.
In short: I see different behaviors when I create a new VM and when I try
to migrate a VM
Since I want to partition the Cloud so that each project uses only certain
compute nodes,
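
For context on the kind of setup involved (this is just the common pattern, not necessarily exactly what is configured here): per-project partitioning is usually done with host aggregates tagged with filter_tenant_id metadata, plus the AggregateMultiTenancyIsolation filter enabled in nova.conf. A rough python-novaclient sketch, with placeholder names, hosts and project UUID:

# Illustrative sketch of per-project host partitioning with host aggregates
# and the AggregateMultiTenancyIsolation filter. Aggregate name, hostnames
# and project UUID are placeholders, not taken from this thread.
# The filter itself is enabled in nova.conf, e.g.:
#   [filter_scheduler]
#   enabled_filters = ..., AggregateMultiTenancyIsolation
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
nova = client.Client('2.1', session=session.Session(auth=auth))

# Create an aggregate for one project and tag it with that project's ID.
agg = nova.aggregates.create('project-a-hosts', None)
nova.aggregates.set_metadata(agg, {'filter_tenant_id': 'PROJECT_A_UUID'})

# Add the compute nodes reserved for that project.
for host in ('compute-1', 'compute-2'):
    nova.aggregates.add_host(agg, host)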