At the moment that is all we have for a setup guide.
That said, all of the Octavia controller processes are fully HA
capable. The one setting I can think of is the controller_ip_port_list
setting mentioned above. It will need to contain an entry for each
health manager IP/port as Sa Pham
We decided to cancel the weekly Octavia IRC meeting next week due to
the OpenStack Summit in Berlin.
Some of the Octavia related sessions:
Octavia - Project Onboarding - Tue 13, 3:20pm - 4:00pm
Extending Your OpenStack Troubleshooting Tool Box - Digging deeper
into network operations - Wed 14,
This is awesome Ian. Thanks for all of the work on this!
Michael
On Tue, Oct 30, 2018 at 8:28 AM Frank Kloeker wrote:
>
> Hi Ian,
>
> thanks for sharing. What a great user story about community work and
> contributing to OpenStack. I think you did a great job as mentor and
> organizer. I want
, ruled out signing_digest, so I'm
> >>> back to something related
> >>> to the certificates or the communication between the endpoints, or what
> >>> actually responds inside the amphora (gunicorn IIUC?). Based on the
> >>> "verify" fu
I am still catching up on e-mail from the weekend.
There are a lot of different options for how to implement the
lb-mgmt-network for the controller<->amphora communication. I can't
talk to what options Kolla provides, but I can talk to how Octavia
works.
One thing to note on the lb-mgmt-net
Are the controller and the amphora using the same version of Octavia?
We had a python3 issue where we had to change the HMAC digest used. If
your controller is running an older version of Octavia than your
amphora images, it may not have the compatibility code to support the
new format. The
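That HMAC change can be sketched in a few lines (the key and payload here are hypothetical; the real heartbeat wire format lives in Octavia's health manager code):

```python
import hashlib
import hmac

KEY = b"heartbeat_key"            # hypothetical key; configured in octavia.conf
PAYLOAD = b'{"id": "amphora-1"}'  # hypothetical heartbeat body

# Older releases appended the ASCII hex digest (64 bytes for SHA-256)...
hex_sig = hmac.new(KEY, PAYLOAD, hashlib.sha256).hexdigest().encode()
# ...newer releases append the raw binary digest (32 bytes).
bin_sig = hmac.new(KEY, PAYLOAD, hashlib.sha256).digest()

def verify(packet: bytes) -> bool:
    # A controller with the compatibility code accepts either form.
    for sig_len in (64, 32):  # try the hex format first, then binary
        payload, sig = packet[:-sig_len], packet[-sig_len:]
        good = hmac.new(KEY, payload, hashlib.sha256)
        if hmac.compare_digest(sig, good.digest()) or \
                hmac.compare_digest(sig, good.hexdigest().encode()):
            return True
    return False
```

A controller without that two-format check rejects heartbeats signed in the other style, which is why mismatched controller and amphora versions show up as amphorae being failed over.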
Hi Erik,
Sorry to hear you are still having certificate issues.
Issue #2 is probably caused by issue #1. Since we hot-plug the tenant
network for the VIP, one of the first steps after the worker connects
to the amphora agent is finishing the required configuration of the
VIP interface inside the
Hi there.
I'm not sure what is happening there and I don't use kolla, so I need
to ask a few more questions.
Is that network ID being used for the VIP or the lb-mgmt-net?
Any chance you can provide a debug log paste from the API process for
this request?
Basically it is saying that network ID
Hi Ivan,
As Octavia PTL I have no issue with adding a tempest-plugin repository
for the octavia-dashboard. I think we have had examples with the main
tempest tests and plugins where trying to do a suite of tests in one
repository becomes messy.
We may also want to consider doing a
I am interested in participating in this discussion.
I think we have had a few goals that were selected before all of the
parts were in place. This leads to re-work and/or pushing goals work
into the already busy milestone 3 time frame.
Michael
On Mon, Oct 15, 2018 at 5:16 AM Chris Dent wrote:
er mailing list: openstack-dev@lists.openstack.org. Please prefix the
subject with '[openstack-dev][Octavia]'
Thank you for your support and patience during this transition,
Michael Johnson
Octavia PTL
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126836.html
am looking for the Project Navigator source code.
Michael
On Fri, Sep 21, 2018 at 4:22 PM Jimmy McArthur wrote:
>
> The TC tags are indeed in a different repo:
> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
>
> Let me know if this makes sense.
>
could fix it.
Michael
On Fri, Sep 21, 2018 at 1:05 PM Matt Riedemann wrote:
>
> On 9/21/2018 1:11 PM, Michael Johnson wrote:
> > Thank you Jimmy for making this available for updates.
> >
> > I was unable to find the code backing the project tags section of the
> > Pr
Thank you Jimmy for making this available for updates.
I was unable to find the code backing the project tags section of the
Project Navigator pages.
Our page is missing some upgrade tags and is showing duplicate "Stable
branch policy" tags.
Also not a docs core, but fully support this nomination!
Michael
On Wed, Sep 19, 2018 at 12:25 PM Jay S Bryant wrote:
>
>
>
> On 9/19/2018 1:50 PM, Petr Kovar wrote:
> > Hi all,
> >
> > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> > membership in the openstack-doc-core
>>> The bug was marked high level by our QA team. I need to fix it as soon
>>> as possible.
>>> Does Michael Johnson have any good suggestion? I am willing to
>>> complete the
>>> repair work of this bug. If your patch still takes a while to prepare.
mplemented/policy-in-code.html
Michael
On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad wrote:
>
>
>
> On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson wrote:
>>
>> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>> which maps to t
This is a known regression in the Octavia API performance. It has an
existing story[0] that is under development. You are correct, that
star join is the root of the problem.
Look for a patch soon.
[0] https://storyboard.openstack.org/#!/story/2002933
Michael
On Thu, Sep 13, 2018 at 10:32 AM Erik
In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
which maps to the "os_<service-type>_api:<resource>:<method>" format.
I selected it as it uses the service-type[1], references the API
resource, and then the method. So it maps well to the API reference[2]
for the service.
[0]
We do this in Octavia. The openstack-tox-cover calls the cover
environment in tox.ini, so you can add it there.
Check out the tox.ini in the openstack/octavia project.
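As a rough sketch (source and dependency names are illustrative; check the tox.ini in openstack/octavia for the authoritative version), such a cover environment looks something like:

```ini
[testenv:cover]
setenv =
    PYTHON=coverage run --source octavia --parallel-mode
commands =
    coverage erase
    stestr run {posargs}
    coverage combine
    coverage html -d cover
    coverage xml -o cover/coverage.xml
```

The openstack-tox-cover Zuul job simply invokes `tox -e cover`, so anything added to this environment runs in the gate.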
Michael
On Wed, Sep 12, 2018 at 7:01 AM reedip banerjee wrote:
>
> Hi All,
>
> Has anyone ever experimented with including
Octavia and Release teams,
I am adding Carlos Goncalves to the Octavia project release management
liaison list for Stein.
He will be assisting with regular stable branch release patches.
Let me know if you have any questions or concerns,
Michael
Corey,
Awesome! Excited to see Octavia included in the release.
Michael
On Fri, Sep 7, 2018 at 8:19 AM Corey Bryant wrote:
>
> The Ubuntu OpenStack team at Canonical is pleased to announce the general
> availability of OpenStack Rocky on Ubuntu 18.04 LTS via the Ubuntu Cloud
> Archive.
Hello Octavia community,
As many of us will be attending the OpenStack PTG next week, I am
cancelling the weekly Octavia IRC meeting on September 12th.
We will resume our normal schedule on September 19th.
Michael
>> On Fri, Aug 31, 2018 at 5:55 PM, Miguel Lavalle wrote:
>>>
>>> Well, I don't vote here but I still want to express my +1. I knew this was
>>> going to happen sooner rather than later
>>>
>>> On Thu, Aug 30, 2018 at 10:24 PM, Michael Johnson
Hello Octavia community,
I would like to propose Carlos Goncalves as a core reviewer on the
Octavia project.
Carlos has provided numerous enhancements to the Octavia project,
including setting up the grenade gate for Octavia upgrade testing.
Over the last few releases he has also been providing
hange if
>> required or let you know if I’ve got something specific regarding that topic.
>>
>> Kind regards,
>> G.
>> Le mar. 14 août 2018 à 19:52, Flint WALRUS a écrit :
>>>
>>> Hi Michael, thanks a lot for your quick response once again!
Hi there Flint.
Octavia fully supports using self-signed certificates and we use those
in our gate tests.
We do not allow non-TLS authenticated connections in the code, even
for lab setups.
This is a configuration issue or certificate file format issue. When
the controller is attempting to
; Thanks for the link.
> Le mer. 1 août 2018 à 17:57, Michael Johnson a écrit :
>>
>> Hi Flint,
>>
>> Yes, our documentation follows the OpenStack documentation rules. It
>> is in RestructuredText format.
>>
>> The documentation team has some gu
notations and push a patch
> for those points that would need clarification and a little bit of formatting
> (layout issue).
>
> Thanks for this awesome support Michael!
> Le mer. 1 août 2018 à 07:57, Michael Johnson a écrit :
>>
>> No worries, happy to share. Answers below
y understand the
> underlying mechanisms before we goes live with the solution.
>
> G.
>
> Le mer. 1 août 2018 à 02:36, Michael Johnson a écrit :
>>
>> Hi Flint,
>>
>> Happy to help.
>>
>> Right now the list of controller endpoints is pushed at boot time a
anks for your help!
> Le mar. 31 juil. 2018 à 18:15, Michael Johnson a écrit :
>>
>> Hi Flint,
>>
>> We don't have a logical network diagram at this time (it's still on
>> the to-do list), but I can talk you through it.
>>
>> The Octavia worker, health m
Hi Flint,
We don't have a logical network diagram at this time (it's still on
the to-do list), but I can talk you through it.
The Octavia worker, health manager, and housekeeping need to be able
to reach the amphora (service VM at this point) over the lb-mgmt-net
on TCP 9443. It knows the
nd flavors.
Thank you for your support of Octavia during Rocky and your consideration for
Stein,
Michael Johnson (johnsom)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.ope
Octavia is done. Thank you for the patch!
Michael
On Tue, Jul 24, 2018 at 8:35 AM Hongbin Lu wrote:
>
> Hi folks,
>
>
>
> Neutron has landed a patch to enable strict validation on query parameters
> when listing resources [1]. I tested Neutron’s change in your project’s
> gate and the
I saw your storyboard for this. Thank you for creating a story.
Since the controllers manage the certificates for the amphora (both
generation and rotation) the overhead to an operator should be
extremely low and limited to initial installation configuration.
Since we have automated the
Right. I am not familiar with the kolla role either, but you are
correct. The keypair created in nova needs to be "owned" by the
octavia service account.
Michael
On Tue, Jul 17, 2018 at 9:07 AM iain MacDonnell
wrote:
>
>
>
> On 07/17/2018 08:13 AM, Flint WALRUS wrote:
> > Hi guys, I'm a trying
Hi Octavia folks!
I have created an etherpad [1] for topics at the Stein PTG in Denver.
Please indicate if you will be attending or not and any topics you
think we should cover.
Michael
[1] https://etherpad.openstack.org/p/octavia-stein-ptg
Octavia passed tempest with this change and networkx 2.1.
Michael
On Tue, Jul 10, 2018 at 9:29 AM Doug Hellmann wrote:
>
> Excerpts from Matthew Thode's message of 2018-07-10 10:59:33 -0500:
> > On 18-07-09 15:15:23, Matthew Thode wrote:
> > > We have a patch that looks good, can we get it
Hi Jeff,
Thank you for your comments. I will reply on the story.
Michael
On Thu, Jul 5, 2018 at 8:02 PM Jeff Yang wrote:
>
> Recently, my team plans to provide load balancing services with Octavia. I
> recorded some of the needs and suggestions of our team members. The following
> suggestions
I think we should continue with option 1.
It is an indicator that a project is active in OpenStack and is
explicit about which code should be used together.
Both of those statements hold no technical water, but address the
"human" factor of "What is OpenStack?", "What do I need to deploy?",
Octavia also has an informal rule about two cores from the same
company merging patches. I support this because it makes sure we have
a diverse perspective on the patches. Specifically it has worked well
for us as all of the cores have different cloud designs, so it catches
anything that would
Hi Mihaela,
Backend re-encryption is on our roadmap[1], but not yet implemented.
We have all of the technical pieces to make this work, it's just
someone getting time to do the API additions and update the flows.
[1]
Yes, this just started occurring with the Thursday/Friday updates to the
Ubuntu cloud image upstream of us.
I have posted a patch for Queens here: https://review.openstack.org/#/c/569531
We will be back porting that as soon as we can to the other stable
releases. Please review the backports as they
Hi rezroo,
Yes, the recent release of pip 10 broke the disk image building.
There is a patch posted here: https://review.openstack.org/#/c/562850/
pending review that works around this issue for the ocata branch by
pinning the pip used for the image building to a version that does not
have this
The Octavia project had other breakage due to sphinx > 1.7 but we have
already resolved those issues.
Back story: the way arguments are handled for apidoc changed.
An example patch for the fix would be: https://review.openstack.org/#/c/568383/
Michael (johnsom)
On Wed, May 16, 2018 at 1:59 PM,
Some of the team will be attending the OpenStack summit in Vancouver,
so I am cancelling the weekly IRC meeting for the 23rd.
We will resume our normal schedule on the 30th.
Michael
threads
>
> [haproxy_amphora]
> build_rate_limit
> build_active_retries
>
> [controller_worker]
> workers
> amp_active_retries
> amp_active_wait_sec
>
> [task_flow]
> max_workers
>
> Thank you for your help,
> Mihaela Balas
>
> -Original Message-
> From:
Hi Artem,
You are correct that the API reference at
https://developer.openstack.org/api-ref/network/v2/index.html#pools is
incorrect. As you figured out, someone mistakenly merged the long
dead/removed LBaaS v1 API specification into the LBaaS v2 API
specification at that link.
The current, and
Hi Mihaela,
I am sorry to hear you are having trouble with the queens release of
Octavia. It is true that a lot of work has gone into the failover
capability, specifically working around a python threading issue and
making it more resistant to certain neutron failure situations
(missing ports,
I am willing to help with maintenance (patch reviews/gate fixes), but
I cannot commit time to development work on it.
Michael
On Wed, Apr 11, 2018 at 6:21 AM, Chris Dent wrote:
> On Wed, 11 Apr 2018, Dougal Matthews wrote:
>
>> I would like to see us move away from WSME.
I echo Ben's question about what is the recommended replacement.
Not long ago we were advised to use WSME over the alternatives which
is why Octavia is using the WSME types and pecan extension.
Thanks,
Michael
On Mon, Apr 9, 2018 at 10:16 AM, Ben Nemec wrote:
>
>
> On
Yeah, neutron-lbaas runs in the context of the neutron service (it is
a neutron extension), so would be covered by neutron completing the
goal.
Michael
On Fri, Apr 6, 2018 at 3:37 AM, Sławek Kapłoński wrote:
> Hi,
>
> Thanks Akihiro for help. I added
>> +1, definitely a good contributor! Thanks especially for your work on the
>> dashboard!
>>
>> On Tue, Mar 27, 2018 at 2:09 PM German Eichberger
>> <german.eichber...@rackspace.com> wrote:
>>>
>>> +1
>>>
>>> Really e
Does anyone know how this will work with services that are using
cotyledon instead of oslo.service (for eliminating eventlet)?
Michael
On Mon, Mar 26, 2018 at 5:35 AM, Sławomir Kapłoński wrote:
> Hi,
>
>
>> Wiadomość napisana przez ChangBo Guo w dniu
Hello Octavia community,
I would like to propose Jacky Hu (dayou) as a core reviewer on the
Octavia project.
Jacky has done amazing work on Octavia dashboard, specifically
updating the look and feel of our details pages to be more user
friendly. Recently he has contributed support for L7
Hi Vadim,
Yes, currently the only network driver available for Octavia (called
allowed-address-pairs) uses the allowed-address-pairs feature of
neutron. This allows active/standby and VIP migration during failover
situations.
If you need to run without that feature, a non-allowed-address-pairs
FYI, Octavia has started to use the new devstack-tempest parent here:
https://review.openstack.org/#/c/543034/17/zuul.d/jobs.yaml
There is a lot of work still left to do on our tempest-plugin but we
are making progress.
Thanks for the communication out!
Michael
On Tue, Feb 20, 2018 at 1:22 PM,
Hi Gary,
All of the answers to your questions are on the FAQ linked in the announcement.
1: If you are already using the Octavia driver or the neutron-lbaas
proxy driver, you are already migrated. We will provide a port
migration tool to migrate the neutron port ownership from
neutron-lbaas if
Hi Kendall,
Can you put Octavia down for 2:10 on Thursday after neutron?
Thanks,
Michael
On Wed, Feb 7, 2018 at 9:15 PM, Kendall Nelson wrote:
> Hello PTLs and SIG Chairs!
>
> So here's the deal, we have 50 spots that are first come, first served. We
> have slots
I am interested in contributing to this discussion.
Michael
On Wed, Feb 7, 2018 at 3:42 AM, Thierry Carrez wrote:
> Hi everyone,
>
> I was wondering if anyone would be interested in brainstorming the
> question of how to better align our release cycle and stable branch
>
No issue with using an L2 network for the lb-mgmt-net.
It only requires the following:
Controllers can reach amphora-agent IPs on the TCP bind_port (default 9443)
Amphora-agents can reach the controllers in the
controller_ip_port_list via UDP (default )
This can be via an L2 lb-mgmt-net
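As a configuration sketch, those two requirements map to settings like the following in octavia.conf (the addresses and the health-manager port are placeholders for your deployment):

```ini
[haproxy_amphora]
# TCP port the amphora agent listens on for controller connections.
bind_port = 9443

[health_manager]
# UDP endpoints the amphorae send heartbeats to; include one
# entry per health manager (placeholder addresses and port).
bind_ip = 192.0.2.10
controller_ip_port_list = 192.0.2.10:5555,192.0.2.11:5555
```

Whether those addresses sit on a flat L2 segment or are routed is up to the deployment, as long as both directions are reachable.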
and additional operator
tooling. I plan to continue working on improving our documentation,
specifically with detailed installation, high availability, and neutron-lbaas
migration guides.
Thank you for your support of Octavia during Queens and your consideration for
Rocky,
Michael Johnson (johnsom)
Hi Mihaela,
The polling logic that the neutron-lbaas octavia driver uses to update
the neutron database is as follows:
Once a Create/Update/Delete action is executed against a load balancer
using the Octavia driver a polling thread is created.
On every request_poll_interval the thread queries
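The polling described above amounts to a loop like this sketch (the function and parameter names are illustrative, not the driver's actual code):

```python
import time

def poll_provisioning_status(get_status, interval=1.0, max_polls=30):
    """Poll until the load balancer leaves a transitional state.

    get_status is any callable returning the current provisioning
    status string, e.g. a wrapper around the Octavia API client.
    """
    status = "UNKNOWN"
    for _ in range(max_polls):
        status = get_status()
        if status in ("ACTIVE", "ERROR", "DELETED"):
            return status  # terminal state; stop polling, update neutron DB
        time.sleep(interval)  # wait request_poll_interval, then retry
    raise TimeoutError("gave up while load balancer was in %s" % status)
```

Once a terminal state is returned, the thread writes the result back to the neutron-lbaas database and exits.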
you for your support and patience during this transition,
Michael Johnson
Octavia PTL
[1]
http://specs.openstack.org/openstack/neutron-specs/specs/newton/neutron-stadium.html
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[3] https
Should be no issues with python-octaviaclient, we do not use the short options.
Michael
On Fri, Jan 26, 2018 at 1:03 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2018-01-18 10:15:16 -0500:
>> We've been working this week to resolve an issue between
This sounds great Ihar!
Let us know when we should make the changes to the neutron-lbaas projects.
Michael
On Wed, Jan 17, 2018 at 11:26 AM, Ihar Hrachyshka wrote:
> Hi all,
>
> tl;dr I propose to switch to lib/neutron devstack library in Queens. I
> ask for buy-in to the
Hi everyone,
With many members of the Octavia team taking an end-of-year vacation we
are cancelling the next weekly meeting on 12/27/2017. We will resume
our regular weekly meetings on 1/3/18.
Michael
Hi Kim,
Sorry you are having trouble after your upgrade.
From the log it appears that the neutron-lbaas Octavia driver is
unable to reach keystone to request an auth token.
Please make sure there is a [service_auth] section configured in your
neutron_lbaas.conf/neutron.conf file(s).
An
I think the steps listed in that document seem reasonable. There are
a few typos here and there, but in general it looks ok.
Michael
On Mon, Dec 4, 2017 at 10:14 PM, Yipei Niu wrote:
> Hi, all
>
> Tricircle team has already enabled LBaaS with tricircle. Here is the guide
>
Hi Volodymyr,
This is a known issue with the neutron (neutron-lbaas) database
getting out of sync and/or not properly handling driver errors.
This is one reason we are moving to deprecate neutron-lbaas. If you
can, we recommend you move to exclusively using Octavia without
neutron-lbaas.
The
Hello Volodymyr,
You have two options:
1. When you create your VIP, simply put in your external network as
the vip-subnet-id or vip-network-id. This will allocate a public IP to
the VIP.
2. Use neutron to assign a floating IP to the VIP address of the load
balancer. From your example, let's say
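Option 2 can be sketched with the OpenStack client like this (the network name and IDs are placeholders):

```console
$ openstack floating ip create public
$ openstack floating ip set --port <vip-port-id> <floating-ip-address>
```

Here `<vip-port-id>` is the `vip_port_id` shown by `openstack loadbalancer show` for your load balancer.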
Is there a template for this? I wouldn't want to have 12 different
formatting styles for the page (Yes, I'm looking at Amrith and the
blinking red text. grin)
Michael
On Tue, Nov 21, 2017 at 6:28 AM, Amrith Kumar wrote:
> Very cool, thanks Sean!
>
>
> -amrith
>
>
> On
Yipei,
Yeah, we have clearly identified the problem. Those two default route
lines should not be different. See my devstack:
sudo: unable to resolve host amphora-20a717b4-eb97-4b5c-a11a-0633fe61f135
default via 10.0.0.1 dev eth1 table 1 onlink
default via 10.0.0.1 dev eth1 onlink
So the issue
Yipei,
I am struggling to follow some of the details as I see different information:
+-------+-------+
| Field | Value |
+-------+-------+
|
The Octavia project has a few graphviz diagrams in its documentation.
You can reference that project to see how it is done.
That said, we have seen a decline in the stability of the graphviz
code over the last few years (cylinder object disappeared, graphviz
dot crashes on Ubuntu, etc.) that we
The actual gateway address does not matter to Octavia/amphora. It
gets that value from DHCP or from neutron if a static address was
assigned from neutron.
My concern is that the subnet gateway 10.0.1.10 does not match the
gateway address DHCP gave us 10.0.1.1.
Technically, since the two addresses
Hi Yipei,
I see a few things that are odd:
stack@devstack-1:/opt/stack/octavia$ sudo ip netns exec
qdhcp-310fea4b-36ae-4617-b499-5936e8eda842 curl 10.0.1.4
curl: (7) Failed to connect to 10.0.1.4 port 80: Connection timed out
This means that the connection is not working between curl and the
Michael
On Wed, Nov 8, 2017 at 2:07 AM, <mihaela.ba...@orange.com> wrote:
> I am also interested how to fix this. If you can describe shortly the
> procedure.
>
> Thanks,
> Mihaela
>
> -Original Message-
> From: Michael Johnson [mailto:johnso...@gmail.com]
> Se
Hi Yipei,
I need some more information to help you out. Can you provide the following?
1. What version of Octavia you are using.
2. "openstack server list" output for the amphora.
3. "openstack loadbalancer show" for the load balancer.
4. "openstack loadbalancer listener show" for the listener.
I think we helped you get going again in the IRC channel. Please ping
us again in the IRC channel if you need more assistance.
Michael
On Thu, Nov 2, 2017 at 4:42 AM, Kim-Norman Sahm
wrote:
> Hi,
>
> after a rabbitmq problem octavia has removed all amphora instances.
Hi Mihaela,
Welcome to the Octavia club!
In an Ocata release you are correct that there is no API way to
identify amphora related to a given load balancer.
In the queens release we have introduced a new administrator API for
amphora that provides the functionality you are looking for:
>> congrats!
>>
>> On 10/05/2017 03:51 AM, German Eichberger wrote:
>> > +1
>> >
>> > Welcome Nir, well earned.
>> >
>> > German
>> >
>> > On 10/4/17, 4:28 PM, "Michael Johnson" <johnso...@
Hello OpenStack load balancing folks,
Summary: Octavia/neutron-lbaas weekly IRC meeting will be 20:00 UTC on
Wednesdays in the #openstack-lbaas channel.
As discussed at the last two weekly meetings[1], we are moving our
meeting time back to the previous time slot. Unfortunately the
earlier time
Hello OpenStack folks,
I would like to propose Nir Magnezi as a core reviewer on the Octavia project.
He has been an active contributor to the Octavia projects for a few
releases and has been providing solid patch review comments. His
review stats are also in line with other core reviewers.
Hi Mihaela,
The old neutron-lbaas haproxy namespace driver does not have L7
support. Only the Octavia driver and some vendor provider drivers have
L7 support.
Michael
On Tue, Oct 3, 2017 at 11:35 PM, Pawel Suder wrote:
> Hello,
>
>
> It seems that
Hi Yipei,
Even running through neutron-lbaas I get the same successful test.
Just to double check, you are using the Octavia driver?
stack@devstackpy27-2:~$ sudo ip netns exec
qdhcp-4bcefe3e-038f-4a77-af4f-a560b6316a7a curl 172.21.1.16
Welcome to 172.21.1.17 connection 3
Michael
On Thu, Sep
Hi Yipei,
I ran this scenario today using octavia and had success. I'm not sure
what could be different.
I see you are using neutron-lbaas. I will build a devstack with
neutron-lbaas enabled and try that, but I can't think of what would
impact this test case by going through the neutron-lbaas
Hi Yipei,
I just tried to reproduce this and was not successful.
I setup a tenant network, added a web server to it, created a
loadbalancer VIP on the tenant network, added the webserver as a
member on the load balancer. I can curl from the tenant network
qdhcp- netns without issue.
Are you
A recent extreme example:
https://review.openstack.org/#/c/494981/1/specs/version0.8/active_passive_loadbalancer.rst
I would love to have a boilerplate statement I can use as a template
for things like this. I feel bad -1/-2 these as I want to encourage
involvement, but they are a drain on the
+1 Miguel, thanks for putting this together!
Michael
On Wed, Sep 13, 2017 at 9:09 PM, Akihiro Motoki wrote:
> +1 thanks for organizing this
>
> 2017-09-12 17:23 GMT-06:00 Miguel Lavalle :
>> Dear Neutrinos,
>>
>> Our social event will take place on
Hi Liping,
FYI, Neutron LBaaS is no longer part of Neutron. Load balancing has
been consolidated under the Octavia project. I have added that tag to
the subject.
We currently do not have plans to add HA capabilities to the haproxy
namespace driver. The intention behind building the octavia
Yes, you can redirect to a pool. Multiple pools can be created under
the load balancer object and then referenced from the L7 Policy.
This example shows a load balancer with a redirect to pool L7 policy:
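A sketch of such a policy with the CLI (the listener, pool, and rule values are placeholders):

```console
$ openstack loadbalancer l7policy create --action REDIRECT_TO_POOL \
    --redirect-pool pool2 --name policy1 listener1
$ openstack loadbalancer l7rule create --compare-type STARTS_WITH \
    --type PATH --value /api policy1
```

Requests matching the rule are sent to pool2; everything else falls through to the listener's default pool.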
FYI, code in Octavia that checks for the extensions you could borrow:
https://github.com/openstack/octavia/blob/master/octavia/network/drivers/neutron/base.py#L49
On Mon, Sep 4, 2017 at 11:18 PM, Gary Kotton wrote:
>
>
> On 9/4/17, 3:47 PM, "Stephen Finucane"
Hi,
Flavors are intended to be setup by the operator/admin only. They
capture details of the load balancer offering and any local specific
configuration. The intent here is the flavor options and descriptions
will be visible to end users, but creation/modification would require
an admin role.
Awesome Monty. This is a great proposal. I have no preference on which way
these merge, but see huge value in straightening this out. Frankly I think
some of the tempest plugin work could benefit from having an official and well
maintained SDK as well.
So, I am in favor of getting the ball