Hi everyone, I don't really know if this is the right list, but I'll place
my bet on it :D
Here I go.
For quite some time now I've been managing OpenStack platforms, and one
thing that has always amazed me is the lack of comprehensive configuration
management for services within Horizon.
Indeed, you
e upgrade.
>
> Thanks,
>
> Arkady
>
>
>
> *From:* Flint WALRUS [mailto:gael.ther...@gmail.com]
> *Sent:* Monday, December 18, 2017 7:29 AM
> *To:* openstack-operators@lists.openstack.org
> *Subject:* [Openstack-operators] Openstack operational configuration
> improvement.
a shared way. A common geometry to how we think of the stack.
>
> -Matt
>
> On Mon, Dec 18, 2017 at 10:39 AM, Flint WALRUS <gael.ther...@gmail.com>
> wrote:
>
>> Hi Arkady,
>>
>> OK, I understood your point.
>>
>> However, as an operator and adminis
Thank you very much @Akihiro and @Jeremy,
these answers are really useful and constructive.
Akihiro, regarding your two points: yes, they will for sure be challenging,
and I really plan to work on this feature as a Horizon plugin at the
beginning, as you mentioned.
Here is what I'm thinking of doing
Hi Amit,
I'm using kolla-ansible as a solution on our own infrastructure; however,
be aware that because of the nature of OpenStack you won't be able to
achieve zero downtime if your hosted applications do not take advantage of
the distributed nature of resources or if they're not basically Cloud
We are using multiple storage backends/topologies on our side, ranging from
ScaleIO to Ceph, passing by local compute host storage (where we need cold
storage) and VNX. I have to say that Ceph is our best bet. Since we started
using it we have clearly reduced our outages and allowed our users advanced
features such as
Hi guys, I have been using the «new» openstack-client command as much as
possible for a couple of years now, but I recently had a hard time finding
equivalents of the following commands:
nova force-delete
&
the command on swift that permits recursively uploading the contents of a
directory and
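For reference, a hedged sketch of the likely equivalents. The `--force` flag on `openstack server delete` and the recursive behavior of `swift upload` are my assumptions about the clients' capabilities; verify them against the versions you have installed:

```shell
# nova force-delete <server>  ->  assumed openstack-client equivalent:
# --force bypasses the soft-delete/restore window and removes the
# instance immediately.
openstack server delete --force my-instance

# Recursive directory upload: the unified client's `openstack object
# create` takes individual files, so the legacy swift client is still
# the usual tool. `swift upload` walks the directory tree and creates
# one object per file.
swift upload my-container ./my-directory
```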
Hi everyone, I’m currently deploying Octavia as our global LBaaS for a lot
of various workloads such as Kubernetes ingress LBs.
We use Queens and plan to upgrade to Rocky as soon as it reaches the stable
release, and we use the native Octavia API v2 (not a Neutron redirect, etc.).
What do you need to
> implementation:
> https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/
>
> Cheers,
>
> Saverio
>
As an operator dealing with platforms that do cold migration, I would like
to be able to abort and roll back the process.
That would give us better service quality and availability.
We have no choice but to use cold migration on some of our remote sites,
as they don’t get a unified storage
Hi folks, I’ve just added an entry to the Google Doc regarding the GraphQL
API as it struck me yesterday; if you need further information feel free to
contact me.
On Thu, Jan 11, 2018 at 08:32, Zhipeng Huang wrote:
> Hi Folks,
>
> Today we are gonna continue to comb
I'm thrilled to see improvement within this field of concerns and the way
OpenStack matures by listening to users, be they architects, operators,
end-users, etc.
I'm glad to see such an initiative and will for sure be there!
See you on Wednesday!
On Fri, Jan 5, 2018 at 15:02, Tobias Rydberg
As you’re using an L2 network topology, and as long as each of your
projects uses a different network, you can do:
domain=domain1,10.10.10.0/24
domain=domain2,20.20.20.0/24
within the dnsmasq-neutron.conf file.
Of course, restart the neutron DHCP agent once done.
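For completeness, a custom dnsmasq options file only takes effect if the DHCP agent is pointed at it; a minimal sketch, assuming the stock file paths used by most distributions:

```ini
# /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Hand extra dnsmasq directives (such as per-subnet domain= lines)
# to every dnsmasq instance the agent spawns.
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
```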
On Wed, Jan 10, 2018 at 22:40, Jonathan
I would rather suggest you deal with flavor/image metadata and host
aggregates for such segregation of CPU capacity and versioning.
If someone has another technique, I’m pretty curious about it too.
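As a hedged illustration of that approach (names like `cpu-gen4` and the `cpu_gen` property are placeholders, and it assumes the `AggregateInstanceExtraSpecsFilter` is enabled in the Nova scheduler):

```shell
# Group the compute hosts that share a CPU generation into an aggregate
# and tag the aggregate with a custom property.
openstack aggregate create --property cpu_gen=gen4 cpu-gen4
openstack aggregate add host cpu-gen4 compute-01

# Flavors carrying the matching extra spec will only be scheduled onto
# hosts belonging to that aggregate.
openstack flavor set \
    --property aggregate_instance_extra_specs:cpu_gen=gen4 m1.large.gen4
```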
On Fri, Jan 26, 2018 at 17:00, Gary Molenkamp wrote:
> I'm trying to import a
I think that indeed makes sense, because nowadays most installations tend
to either be within a dockerized solution or a relatively fair amount (3/4)
of large HW nodes.
One situation that would require dedicated HW would be a very large
installation requiring you to lower the network pressure
oud-solutions.de> wrote:
>
>
> > On 1. Feb 2018, at 19:42, Flint WALRUS <gael.ther...@gmail.com> wrote:
> >
> > I think that indeed make sens as because now a day most the
> installations tend to either be within a dockerized solution or a
> relatively fair
Interesting, could you provide an example?
On Thu, Feb 1, 2018 at 22:48, Christian Berendt <
bere...@betacloud-solutions.de> wrote:
>
>
> > On 1. Feb 2018, at 21:42, Flint WALRUS <gael.ther...@gmail.com> wrote:
> >
> > Don’t get it, are you talking
myself from the cliff :-)
On Tue, Feb 6, 2018 at 10:53, Volodymyr Litovka <doka...@gmx.com> wrote:
> Hi Flint,
>
> I think Octavia expects reachability between components over the
> management network, regardless of the network's technology.
>
>
> On 2/6/18 11:41 AM, Flint WAL
Aren’t CellsV2 better adapted to what you’re trying to do?
On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> Hi
>
> I want to partition my OpenStack cloud so that:
>
> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
> - Projects pn+1..
Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider
network instead of a Neutron L3 VXLAN.
Is Octavia specifically relying on L3 networking, or can it operate without
Neutron L3 features?
I didn't find anything specifically related to the network requirements
except for the
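For context, Octavia only consumes the lb-mgmt-net as a Neutron network ID it plugs the amphorae into; a minimal sketch of the relevant octavia.conf wiring, assuming the default amphora-agent and health-manager ports (9443/TCP and 5555/UDP), which simply need IP reachability between controllers and amphorae (`<lb-mgmt-net-uuid>` is a placeholder):

```ini
# /etc/octavia/octavia.conf
[controller_worker]
# Neutron network the amphorae get their management port on; any
# network type works as long as it is reachable from the controllers.
amp_boot_network_list = <lb-mgmt-net-uuid>

[health_manager]
# The health manager listens here for amphora heartbeats (UDP).
bind_port = 5555
```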
ur answer.
> As far as I understand CellsV2 are present in Pike and later. I need to
> implement such use case in an Ocata Openstack based cloud
>
> Thanks, Massimo
>
> 2018-02-06 10:26 GMT+01:00 Flint WALRUS <gael.ther...@gmail.com>:
>
>> Aren’t CellsV2 more adapted
t
> they are not configured for production use and are not always stable.
>
> If you are using RDO or RedHat OpenStack Platform (OSP) those projects
> do provide production images.
>
> Michael
>
> On Thu, Aug 2, 2018 at 12:32 AM Flint WALRUS
> wrote:
> >
> > Ok ok, I’ll
and a little bit of
formatting (layout issue).
Thanks for this awesome support, Michael!
On Wed, Aug 1, 2018 at 07:57, Michael Johnson wrote:
> No worries, happy to share. Answers below.
>
> Michael
>
>
> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS
> wrote:
> >
> >
ns with centos 7 amphora has been passing. It should be in the
> same /etc/octavia/amphora-agent.conf location as the ubuntu based
> amphora.
>
> Michael
>
>
>
> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS
> wrote:
> >
> > Hi Michael, thanks a lot for that e
be routed (it does not require L2 connectivity).
>
> Michael
>
> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS
> wrote:
> >
> > Hi Folks,
> >
> > I'm currently deploying the Octavia component into our testing
> environment which is based on KOLLA.
> >
> However we can also help with that during the review process.
>
> Michael
>
> On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS
> wrote:
> >
> > Ok, sweet! Many thanks! Awesome, I’ll be able to continue our deployment
> with peace of mind.
> >
> > Reg
Hi Folks,
I'm currently deploying the Octavia component into our testing environment,
which is based on Kolla.
So far I'm quite enjoying it, as it is pretty much straightforward (except
for some documentation pitfalls), but I'm now facing a weird and
hard-to-debug situation.
I actually have a
Hi Matt, everyone,
I just read your analysis and would like to thank you for such work. I
really think there are numerous features included/used in this Nova rework
that would be highly beneficial for Nova and its users.
I hope people will fairly appreciate your work.
I didn’t have time to
Hi Clint, Matt.
To be noted: post-copy and auto-convergence are mutually exclusive.
The drawback we experienced here is that live migration using either
post-copy or auto-convergence will likely fail for applications that cannot
handle throttling. Although post-copy is
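For reference, both behaviors are toggled by Nova's libvirt options; a minimal sketch (the option names are from Nova's `[libvirt]` section, and whether they take effect depends on your QEMU/libvirt versions; Nova's documentation describes post-copy as preferred when both are permitted):

```ini
# nova.conf on the compute nodes
[libvirt]
# Let libvirt switch a stalling live migration into post-copy mode.
live_migration_permit_post_copy = True
# Alternatively, throttle vCPUs until the memory copy converges.
live_migration_permit_auto_converge = True
```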
I’ll try to check the certificate format and make the appropriate changes
if required, or let you know if I’ve got something specific regarding that
topic.
Kind regards,
G.
On Tue, Aug 14, 2018 at 19:52, Flint WALRUS wrote:
> Hi Michael, thanks a lot for your quick response once again!
>
our gate
> tests to setup the TLS certificates:
>
> https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L295-L305
>
> Michael
> On Tue, Aug 14, 2018 at 4:54 AM Flint WALRUS
> wrote:
> >
> >
> > Hi guys,
> >
> > I continue to work on m
to the gunicorn server
using the lb-mgmt-net IP of the amphora.
Are there any logs for the gunicorn server where I could check why the
amphora is not able to find the API endpoint?
On Tue, Aug 14, 2018 at 19:53, Flint WALRUS wrote:
> I’ll try to check the certificate format and m
Hi guys,
I continue to work on my Octavia integration using Kolla-Ansible and I'm
facing strange behavior.
As I'm currently working on a POC using restricted HW and SW capacities, I
hit a strange issue when trying to launch a new load balancer.
When I create a new LB, would it be using the CLI
ar/log inside the amphora.
>
> Michael
> On Thu, Aug 16, 2018 at 1:43 PM Flint WALRUS
> wrote:
> >
> > Hi Michael,
> >
> > Ok, it was indeed an issue with the create_certificate.sh script for
> CentOS that improperly created the client.pem certificate.
Hi guys, I'm trying to install Octavia as a new service on our cloud and
facing a few issues that I've been able to manage so far, until this
nova-api keypair-related issue.
When creating a load balancer with the following command:
openstack --os-cloud loadbalancer create --name lb1
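For reference, a typical complete invocation of that command; the `mycloud` cloud name and the subnet name are placeholders (`--os-cloud` takes the name of an entry in your clouds.yaml):

```shell
# Create a load balancer with its VIP on an existing subnet.
openstack --os-cloud mycloud loadbalancer create \
    --name lb1 \
    --vip-subnet-id public-subnet
```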
wrote:
> Right. I am not familiar with the kolla role either, but you are
> correct. The keypair created in nova needs to be "owned" by the
> octavia service account.
>
> Michael
> On Tue, Jul 17, 2018 at 9:07 AM iain MacDonnell
> wrote:
> >
> >
> &g
Hi Lars, I'm personally using an internally crafted service.
It’s one of my main regrets with OpenStack: the lack of a decent billing
system.
On Mon, Mar 12, 2018 at 20:22, Lars Kellogg-Stedman wrote:
> Hey folks,
>
> I'm curious what folks out there are using for chargeback/billing