Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Matt Riedemann
On 2/6/2018 2:14 PM, Chris Apsey wrote: but we would rather have intermittent build failures than compute nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So if your limit is the default (10) and you
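The reset-on-success behavior Matt describes can be sketched as follows (a minimal illustration of the counter semantics only, not nova's actual implementation; the class and method names are hypothetical):

```python
# Sketch of the consecutive-build-failure semantics described above:
# failures increment a counter, a single success resets it, and the
# compute service disables itself once the threshold is reached.
# Illustrative only -- not nova's real code.

class BuildFailureTracker:
    def __init__(self, threshold: int = 10):  # nova's default limit is 10
        self.threshold = threshold
        self.consecutive_failures = 0
        self.disabled = False

    def record_build(self, success: bool) -> None:
        if success:
            # One successful build resets the counter entirely.
            self.consecutive_failures = 0
            return
        self.consecutive_failures += 1
        if self.threshold and self.consecutive_failures >= self.threshold:
            self.disabled = True  # compute disables itself

tracker = BuildFailureTracker(threshold=10)
for _ in range(9):
    tracker.record_build(success=False)
tracker.record_build(success=True)  # reset just before hitting the limit
assert not tracker.disabled
```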

[Openstack-operators] [openstack-community] Feb 8 CFP Deadline - OpenStack Summit Vancouver

2018-02-06 Thread Jimmy McArthur
Hi everyone, The Vancouver Summit CFP closes in two days: February 8 at 11:59pm Pacific Time (February 9 at 6:59am UTC). For Vancouver, the Summit Tracks have evolved to cover the entire open infrastructure landscape. Get your talks in

Re: [Openstack-operators] [nova] nova-compute automatically disabling itself?

2018-02-06 Thread Chris Apsey
All, This was the core issue - setting consecutive_build_service_disable_threshold = 0 in nova.conf (on controllers and compute nodes) solved this. It was being triggered by neutron dropping requests (and/or responses) for vif-plugging due to CPU usage on the neutron endpoints being pegged
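For reference, the fix Chris describes is a one-line nova.conf change (the option lives in the [compute] group per the nova configuration reference; setting it to 0 disables the auto-disable behavior entirely):

```ini
[compute]
# 0 turns off "disable this compute after N consecutive build failures";
# the default threshold is 10.
consecutive_build_service_disable_threshold = 0
```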

Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Michael Johnson
No issue with using an L2 network for the lb-mgmt-net. It only requires the following: Controllers can reach amphora-agent IPs on the TCP bind_port (default 9443) Amphora-agents can reach the controllers in the controller_ip_port_list via UDP (default ) This can be via an L2 lb-mgmt-net
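The two reachability requirements above can be spot-checked from a controller or an amphora with a plain socket probe. A sketch, with placeholder addresses; note the UDP probe can only confirm the datagram was sent, since UDP is connectionless, so delivery must be verified on the receiving side:

```python
import socket

def can_reach_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds
    (e.g. controller -> amphora-agent on bind_port, default 9443)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_udp_probe(host: str, port: int, payload: bytes = b"ping") -> None:
    """Fire a UDP datagram at host:port (e.g. amphora -> controller
    address from controller_ip_port_list). Delivery is not confirmed;
    check logs on the receiving side."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))

# Placeholder values -- substitute your real lb-mgmt-net addresses:
# can_reach_tcp("192.0.2.10", 9443)
```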

[Openstack-operators] Ops Meetups Team meeting minutes and so much more!

2018-02-06 Thread Chris Morgan
Good meeting today, here are the minutes: Meeting ended Tue Feb 6 15:47:56 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:47 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.html 10:48 AM Minutes

[Openstack-operators] Ops Meetups team meeting 10 minute warning!

2018-02-06 Thread Chris Morgan
Meeting starts in 10 minutes - new regular time. See you on #openstack-operators ! Chris -- Chris Morgan ___ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org

Re: [Openstack-operators] Is there an Ops Meetup today?

2018-02-06 Thread Erik McCormick
It was moved to 10am EST due to lots of conflicts. Need to update the wiki. On Feb 6, 2018 9:11 AM, "Jimmy McArthur" wrote: > Was it canceled? > > https://wiki.openstack.org/wiki/Ops_Meetups_Team

[Openstack-operators] Is there an Ops Meetup today?

2018-02-06 Thread Jimmy McArthur
Was it canceled? https://wiki.openstack.org/wiki/Ops_Meetups_Team

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Jay Pipes
On 02/06/2018 04:26 AM, Flint WALRUS wrote: Aren’t CellsV2 more adapted to what you’re trying to do? No, cells v2 are not user-facing, nor is there a way to segregate certain tenants onto certain cells. Host aggregates are the appropriate way to structure this grouping. Best, -jay Le mar.
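The aggregate-based approach Jay recommends relies on the AggregateMultiTenancyIsolation scheduler filter. Its documented semantics can be sketched as follows (an illustration only, not nova's code: a host in an aggregate carrying filter_tenant_id* metadata accepts only the listed tenants, while a host in no such aggregate accepts anyone; per the nova docs the prefix match is what lets keys like filter_tenant_id2 work around the 255-character metadata value limit when a grouping has many projects):

```python
def host_passes(host_aggregates, tenant_id):
    """Sketch of AggregateMultiTenancyIsolation semantics.

    host_aggregates: one metadata dict per aggregate the host belongs
    to. Any key starting with 'filter_tenant_id' holds a comma-separated
    list of allowed tenant IDs.
    """
    allowed = set()
    restricted = False
    for metadata in host_aggregates:
        for key, value in metadata.items():
            if key.startswith("filter_tenant_id"):
                restricted = True
                allowed.update(t.strip() for t in value.split(","))
    # Unrestricted hosts accept any tenant; restricted hosts accept
    # only tenants listed in their aggregates' metadata.
    return (not restricted) or (tenant_id in allowed)
```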

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
If you’re willing, I could share with you a way to get a FrankensteinCloud using a Docker-based method with Kolla, to get a Pike/Queens/whatever cloud at the same time as your Ocata one. Le mar. 6 févr. 2018 à 11:15, Massimo Sgaravatto < massimo.sgarava...@gmail.com> a écrit : > Thanks for your answer.

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Massimo Sgaravatto
Thanks for your answer. As far as I understand, CellsV2 is present in Pike and later; I need to implement such a use case in an Ocata-based OpenStack cloud. Thanks, Massimo 2018-02-06 10:26 GMT+01:00 Flint WALRUS : > Aren’t CellsV2 more adapted to what you’re trying to do?

Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
Ok, that’s what I was understanding from the documentation, but as I couldn’t find any information related to the L3 specifics I preferred to get another check beyond my own x) I’ll have to install and operate Octavia within an unusual L2-only network and I would like to be sure I’ll not push

Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Volodymyr Litovka
Hi Flint, I think Octavia expects reachability between components over the management network, regardless of the network's technology. On 2/6/18 11:41 AM, Flint WALRUS wrote: Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider network instead of a neutron L3 vxlan ? Is Octavia

[Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider network instead of a neutron L3 vxlan ? Is Octavia specifically relying on L3 networking, or can it operate without neutron L3 features ? I didn't find anything specifically related to the network requirements except for the

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
Aren’t CellsV2 more adapted to what you’re trying to do? Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto < massimo.sgarava...@gmail.com> a écrit : > Hi > > I want to partition my OpenStack cloud so that: > > - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx > - Projects pn+1..