[Openstack-operators] Openstack operational configuration improvement.

2017-12-18 Thread Flint WALRUS
Hi everyone, I don't really know if this list is the right one, but I'll place my bet on it :D Here I go. I've been managing OpenStack platforms for quite some time now, and one thing that has always amazed me is the lack of comprehensive configuration management for services within Horizon. Indeed, you

Re: [Openstack-operators] Openstack operational configuration improvement.

2017-12-18 Thread Flint WALRUS
e upgrade. > > Thanks, > > Arkady > > > > *From:* Flint WALRUS [mailto:gael.ther...@gmail.com] > *Sent:* Monday, December 18, 2017 7:29 AM > *To:* openstack-operators@lists.openstack.org > *Subject:* [Openstack-operators] Openstack operational configuration > improvement.

Re: [Openstack-operators] Openstack operational configuration improvement.

2017-12-18 Thread Flint WALRUS
a shared way. A common geometry to how we think of the stack. > > -Matt > > On Mon, Dec 18, 2017 at 10:39 AM, Flint WALRUS <gael.ther...@gmail.com> > wrote: > >> Hi Arkady, >> >> Ok, understood your point. >> >> However, as an operator and adminis

Re: [Openstack-operators] Openstack operational configuration improvement.

2017-12-18 Thread Flint WALRUS
Thank you very much @Akihiro and @Jeremy, these answers are really useful and constructive. Akihiro, regarding your two points: yes, they will certainly be challenging, and I really plan to work on this feature as a Horizon plugin at the beginning, as you mentioned. Here is what I'm thinking of doing
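
For context, shipping such a feature as a Horizon plugin mostly means a Django app plus an "enabled" snippet that Horizon picks up at startup. A minimal sketch; the panel, module, and file names below are hypothetical illustrations, not something proposed in the thread:

    # openstack_dashboard/local/enabled/_90_config_editor.py (hypothetical names)
    PANEL = 'config_editor'                         # slug of the new panel
    PANEL_DASHBOARD = 'admin'                       # attach it to the Admin dashboard
    PANEL_GROUP = 'admin'
    ADD_PANEL = 'config_editor.panel.ConfigEditor'  # dotted path to the Panel class
    ADD_INSTALLED_APPS = ['config_editor']          # make Django load the plugin app

Horizon scans local/enabled/ in lexical order, so the _90_ prefix only controls load ordering.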

Re: [Openstack-operators] [OpenStack-Operators][OpenStack] Regarding production grade OpenStack deployment

2018-05-18 Thread Flint WALRUS
Hi Amit, I'm using kolla-ansible as a solution on our own infrastructure; however, be aware that because of the nature of OpenStack you won't be able to achieve zero downtime if your hosted applications do not take advantage of the distributed nature of resources or if they're not basically Cloud

Re: [Openstack-operators] Need feedback for nova aborting cold migration function

2018-05-23 Thread Flint WALRUS
We are using multiple storage backends/topologies on our side, ranging from ScaleIO to CEPH, through local compute host storage (where we need cold storage) and VNX, and I have to say that CEPH is our best bet. Since we started using it we have clearly reduced our outages and allowed our users advanced features such as

[Openstack-operators] [openstack-client] - missing commands?

2018-06-13 Thread Flint WALRUS
Hi guys, I have used the «new» openstack-client command as much as possible for a couple of years now, but I recently had a hard time finding equivalent commands for the following: nova force-delete & the swift command that permits recursively uploading the content of a directory and
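
For readers hitting the same gap, these are the two legacy commands in question, sketched from memory; at the time of this thread python-openstackclient had no direct equivalents:

    # immediately delete a server, bypassing the soft-delete/restore window
    nova force-delete <server-id>

    # recursively upload the contents of a directory into a container
    swift upload <container> <directory>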

Re: [Openstack-operators] [openstack-client] - missing commands?

2018-06-14 Thread Flint WALRUS
PM, Flint WALRUS wrote: > > Hi guys, I use the «new» openstack-client command as much as possible > > since a couple of years now, but yet I had a hard time recently to find > > equivalent command of the following: > > > > nova force-delete > > & > > The c

Re: [Openstack-operators] [openstack-dev][publiccloud-wg][k8s][octavia] OpenStack Load Balancer APIs and K8s

2018-05-28 Thread Flint WALRUS
Hi everyone, I'm currently deploying Octavia as our global LBaaS for a lot of various workloads such as Kubernetes ingress LBs. We use Queens and plan to upgrade to Rocky as soon as it reaches the stable release, and we use the native Octavia API v2 (not a neutron redirect etc.). What do you need to

Re: [Openstack-operators] [openstack-dev][publiccloud-wg][k8s][octavia] OpenStack Load Balancer APIs and K8s

2018-05-28 Thread Flint WALRUS
> implementation: > https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/ > > Cheers, > > Saverio > > 2018-05-28 19:09 GMT+02:00 Flint WALRUS : > > Hi everyone, I’m currently deploying Octavia as our global LBaaS for a > lot > > of various workload such

Re: [Openstack-operators] Need feedback for nova aborting cold migration function

2018-05-02 Thread Flint WALRUS
As an operator dealing with platforms that do cold migration, I would like to be able to abort and roll back the process. That would give us better service quality and availability. We have no choice but to use cold migration on some of our remote sites, as they don't have unified storage

Re: [Openstack-operators] [publiccloud-wg]Public Cloud Feature List Hackathon Day 2

2018-01-11 Thread Flint WALRUS
Hi folks, I've just added an entry to the Google doc regarding the GraphQL API, as it struck me yesterday; if you need further information feel free to contact me. On Thu, Jan 11, 2018 at 08:32, Zhipeng Huang wrote: > Hi Folks, > > Today we are gonna continue to comb

Re: [Openstack-operators] [publiccloud-wg] Missing features work session

2018-01-05 Thread Flint WALRUS
I'm thrilled to see improvement within this field of concern and the way OpenStack matures by listening to users, whether they be architects, operators, end-users etc. I'm glad to see such an initiative and will be there for sure! See you on Wednesday! On Fri, Jan 5, 2018 at 15:02, Tobias Rydberg

Re: [Openstack-operators] neutron and dns_domain

2018-01-10 Thread Flint WALRUS
As you're using an L2 network topology, and as long as each of your projects uses a different network, you can add the following to the dnsmasq-neutron.conf file (see the sketch below): domain=domain1,10.10.10.0/24 domain=domain2,20.20.20.0/24 Of course, restart the neutron-server service once done. On Wed, Jan 10, 2018 at 22:40, Jonathan
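
A sketch of the suggested configuration, assuming packaged default paths; the domain names and CIDRs are placeholders:

    # /etc/neutron/dnsmasq-neutron.conf
    # hand a different search domain to each tenant network's DHCP range
    domain=domain1,10.10.10.0/24
    domain=domain2,20.20.20.0/24

    # /etc/neutron/dhcp_agent.ini -- point the DHCP agent at the extra dnsmasq options
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Since the file is consumed by dnsmasq through the DHCP agent, restarting neutron-dhcp-agent (in addition to, or instead of, neutron-server) is usually what makes the change take effect.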

Re: [Openstack-operators] Passing additional parameters to KVM for a single instance

2018-01-26 Thread Flint WALRUS
I would rather suggest you deal with flavor/image metadata and host aggregates for such segregation of CPU capacity and versioning (see the sketch below). If someone has another technique I'm pretty curious about it too. On Fri, Jan 26, 2018 at 17:00, Gary Molenkamp wrote: > I'm trying to import a
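
A minimal sketch of that approach, assuming the AggregateInstanceExtraSpecsFilter is enabled in the nova scheduler; the aggregate, host, property, and flavor names are illustrative:

    # group the hosts that share a CPU generation
    openstack aggregate create --property cpu_gen=haswell agg-haswell
    openstack aggregate add host agg-haswell compute-01

    # pin a flavor to that aggregate via a matching extra spec
    openstack flavor set \
        --property aggregate_instance_extra_specs:cpu_gen=haswell m1.large

Instances booted with m1.large will then only land on hosts in agg-haswell.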

Re: [Openstack-operators] Dedicated Network node ?

2018-02-01 Thread Flint WALRUS
I think that indeed makes sense, because nowadays most installations tend to either be within a dockerized solution or on a relatively fair amount (3/4) of large hw nodes. One situation that would require dedicated hw would be a very large installation requiring you to lower the network pressure

Re: [Openstack-operators] Dedicated Network node ?

2018-02-01 Thread Flint WALRUS
oud-solutions.de> wrote: > > > > On 1. Feb 2018, at 19:42, Flint WALRUS <gael.ther...@gmail.com> wrote: > > > > I think that indeed makes sense, because nowadays most > installations tend to either be within a dockerized solution or on a > relatively fair

Re: [Openstack-operators] Dedicated Network node ?

2018-02-01 Thread Flint WALRUS
Interesting, could you provide an example? On Thu, Feb 1, 2018 at 22:48, Christian Berendt <bere...@betacloud-solutions.de> wrote: > > > > On 1. Feb 2018, at 21:42, Flint WALRUS <gael.ther...@gmail.com> wrote: > > > > Don't get it, are you talking

Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
myself from the cliff :-) On Tue, Feb 6, 2018 at 10:53, Volodymyr Litovka <doka...@gmx.com> wrote: > Hi Flint, > > I think Octavia expects reachability between components over the management > network, regardless of the network's technology. > > > On 2/6/18 11:41 AM, Flint WAL

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
Aren't CellsV2 more adapted to what you're trying to do? On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto <massimo.sgarava...@gmail.com> wrote: > Hi > > I want to partition my OpenStack cloud so that: > > - Projects p1, p2, .., pn can use only compute nodes C1, C2, ... Cx > - Projects pn+1..
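
For reference, a minimal sketch of the AggregateMultiTenancyIsolation setup the thread is about, assuming that filter is enabled in the nova scheduler; aggregate names and project UUIDs are placeholders:

    # aggregate for the C1..Cx compute nodes
    openstack aggregate create agg-group1
    openstack aggregate add host agg-group1 C1

    # only the listed projects may land on hosts in this aggregate
    openstack aggregate set \
        --property filter_tenant_id=<p1-uuid>,<p2-uuid> agg-group1

The thread's pain point is that this metadata value has to enumerate every project, which gets awkward with many projects.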

[Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Flint WALRUS
Hi guys, I'm wondering if the Octavia lb-mgmt-net can be an L2 provider network instead of a neutron L3 VXLAN? Is Octavia specifically relying on L3 networking, or can it operate without neutron L3 features? I didn't find anything specifically related to the network requirements except for the
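
The replies in this thread indicate that Octavia only needs IP reachability between the control plane and the amphorae over lb-mgmt-net, so a routable L2 provider network works. A sketch of creating one; the physnet name, VLAN segment, and subnet range are placeholders:

    openstack network create lb-mgmt-net \
        --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 200

    openstack subnet create lb-mgmt-subnet \
        --network lb-mgmt-net --subnet-range 172.16.0.0/22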

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Flint WALRUS
ur answer. > As far as I understand CellsV2 are present in Pike and later. I need to > implement such a use case in an Ocata OpenStack-based cloud > > Thanks, Massimo > > 2018-02-06 10:26 GMT+01:00 Flint WALRUS <gael.ther...@gmail.com>: > >> Aren't CellsV2 more adapted

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-08-03 Thread Flint WALRUS
t > they are not configured for production use and are not always stable. > > If you are using RDO or Red Hat OpenStack Platform (OSP) those projects > do provide production images. > > Michael > > On Thu, Aug 2, 2018 at 12:32 AM Flint WALRUS > wrote: > > > > Ok ok, I'll

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-08-01 Thread Flint WALRUS
and a little bit of formatting (layout issue). Thanks for this awesome support, Michael! On Wed, Aug 1, 2018 at 07:57, Michael Johnson wrote: > No worries, happy to share. Answers below. > > Michael > > > On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS > wrote: > >

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-07-31 Thread Flint WALRUS
ns with CentOS 7 amphora has been passing. It should be in the > same /etc/octavia/amphora-agent.conf location as the Ubuntu-based > amphora. > > Michael > > > > On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS > wrote: > > > > Hi Michael, thanks a lot for that e

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-07-31 Thread Flint WALRUS
be routed (it does not require L2 connectivity). > > Michael > > On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS > wrote: > > > > Hi Folks, > > > > I'm currently deploying the Octavia component into our testing > environment which is based on KOLLA. >

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-08-02 Thread Flint WALRUS
> However we can also help with that during the review process. > > Michael > > On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS > wrote: > > > > Ok sweet! Many thanks! Awesome, I'll be able to continue our deployment > with peace in mind. > > > > Reg

[Openstack-operators] [OCTAVIA][KOLLA] - Amphora to control plan communication question.

2018-07-31 Thread Flint WALRUS
Hi Folks, I'm currently deploying the Octavia component into our testing environment, which is based on KOLLA. So far I'm quite enjoying it, as it is pretty much straightforward (except for some documentation pitfalls), but I'm now facing a weird and hard-to-debug situation. I actually have a

Re: [Openstack-operators] [nova] StarlingX diff analysis

2018-08-07 Thread Flint WALRUS
Hi Matt, everyone, I just read your analysis and would like to thank you for such work. I really think there are numerous features included/used in this Nova rework that would be highly beneficial for Nova and its users. I hope people will fairly appreciate your work. I didn't have time to

Re: [Openstack-operators] Live-migration experiences?

2018-08-07 Thread Flint WALRUS
Hi Clint, Matt. Note that post-copy and auto-convergence are mutually exclusive. The drawback that we experienced here is that live-migration using either post-copy or auto-convergence will likely fail for applications not able to handle throttling (the relevant nova options are sketched below). Although, post-copy is
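
For reference, these two behaviors are toggled by nova.conf options in the [libvirt] section; a minimal sketch with illustrative values, and per the note above only one of the two should effectively be relied on:

    [libvirt]
    # allow libvirt to flip a long-running live-migration into post-copy mode
    live_migration_permit_post_copy = true
    # alternatively, allow throttling of the guest CPU to force convergence
    live_migration_permit_auto_converge = false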

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS

2018-08-14 Thread Flint WALRUS
I'll try to check the certificate format and make the appropriate change if required, or let you know if I've got something specific regarding that topic. Kind regards, G. On Tue, Aug 14, 2018 at 19:52, Flint WALRUS wrote: > Hi Michael, thanks a lot for your quick response once again! >

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS

2018-08-14 Thread Flint WALRUS
our gate > tests to set up the TLS certificates: > > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305 > > Michael > On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS > wrote: > > > > > > Hi guys, > > > > I continue to work on m

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS

2018-08-16 Thread Flint WALRUS
to the gunicorn server using the lb-mgmt-net IP of the amphora. Are there any logs for the gunicorn server where I could check why the amphora is not able to find the API endpoint? On Tue, Aug 14, 2018 at 19:53, Flint WALRUS wrote: > I'll try to check the certificate format and m

[Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS

2018-08-14 Thread Flint WALRUS
Hi guys, I continue to work on my Octavia integration using Kolla-Ansible and I'm facing a strange behavior. As I'm currently working on a POC using restricted HW and SW capacities, I'm facing a strange issue when trying to launch a new load-balancer. When I create a new LB, whether using the CLI

Re: [Openstack-operators] [OCTAVIA][KOLLA] - Self signed CA/CERTS

2018-08-17 Thread Flint WALRUS
ar/log inside the amphora. > > Michael > On Thu, Aug 16, 2018 at 1:43 PM Flint WALRUS > wrote: > > > > Hi Michael, > > > > Ok, it was indeed an issue with the create_certificate.sh script for > centos that indeed improperly created the client.pem certificate.
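
For anyone hitting the same pitfall, a minimal sketch of assembling a valid controller-side client.pem, assuming (as the Octavia certificate documentation describes) that the file must contain the client certificate and its unencrypted private key concatenated; the file names are illustrative:

    # remove the passphrase from the client key, if it has one
    openssl rsa -in client.key -out client.key.insecure

    # octavia expects certificate + key together in a single file
    cat client.crt client.key.insecure > client.pem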

[Openstack-operators] [kolla-ansible][octavia-role]

2018-07-17 Thread Flint WALRUS
Hi guys, I'm trying to install Octavia as a new service on our cloud and facing a few issues that I've been able to manage so far, until this nova-api keypair-related issue. When creating a loadbalancer with the following command: openstack --os-cloud loadbalancer create --name lb1

Re: [Openstack-operators] [kolla-ansible][octavia-role]

2018-07-17 Thread Flint WALRUS
wrote: > Right. I am not familiar with the kolla role either, but you are > correct. The keypair created in nova needs to be "owned" by the > octavia service account. > > Michael > On Tue, Jul 17, 2018 at 9:07 AM iain MacDonnell > wrote: > > > >
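
A sketch of the fix being described, i.e. creating the amphora keypair while authenticated as the octavia service account; the user/project names and key path here are deployment-specific assumptions, and the key name must match amp_ssh_key_name in octavia.conf:

    # credentials are illustrative for a typical service-project setup
    openstack --os-username octavia --os-project-name service \
        keypair create --public-key ~/.ssh/octavia_amp.pub octavia_ssh_key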

Re: [Openstack-operators] How are you handling billing/chargeback?

2018-03-12 Thread Flint WALRUS
Hi Lars, we're personally using an internally crafted service. It's one of my main regrets with OpenStack: the lack of a decent billing system. On Mon, Mar 12, 2018 at 20:22, Lars Kellogg-Stedman wrote: > Hey folks, > > I'm curious what folks out there are using for chargeback/billing