Hello,
Is Octavia (Ocata version) supposed to work with a Heat deployment (tested with
the Newton version)? I launch a Heat stack that deploys a load balancer with a
single listener/pool and two members. While Heat shows the stack status as
COMPLETE and Neutron shows all objects as created, Octavia
Hello,
Are there any plans to fix this in Heat?
Thank you,
Mihaela Balas
From: Rabi Mishra [mailto:ramis...@redhat.com]
Sent: Wednesday, July 26, 2017 3:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia][heat] Octavia deployment with
Thanks a lot for the response.
Mihaela
-----Original Message-----
From: Michael Johnson [mailto:johnso...@gmail.com]
Sent: Wednesday, October 04, 2017 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Odp.: Odp.: [neutron][lbaasv2][agent
Hi,
We are about to deploy Octavia (Ocata) in a multi-tenant OpenStack environment.
All amphorae (for all tenants) will be spawned in a "service" tenant. What is
the easiest way to list the amphora instances of a given load balancer? As
far as I could see, there is no API call returning such
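Ocata's Octavia has no REST endpoint for amphorae (an admin amphora API only appeared in a later release), so one pragmatic workaround is to read the `amphora` table in the Octavia database directly. A minimal sketch, assuming the database is named `octavia` with the default schema; the load balancer UUID is a placeholder:

```shell
# Workaround sketch: no amphora API in Ocata, so query the Octavia DB.
# The "amphora" table maps amphorae to their load balancer; compute_id is
# the Nova server UUID of each amphora instance.
LB_ID="<load-balancer-uuid>"   # placeholder
mysql octavia -e "
  SELECT id, compute_id, role, lb_network_ip, status
  FROM   amphora
  WHERE  load_balancer_id = '${LB_ID}';"
```

The returned compute_id values can then be matched against the Nova instances in the "service" tenant.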
Hi German,
I just tested with the Newton version and I get the same error as with Mitaka,
“Not Implemented Error” (see below).
Mihaela
From: German Eichberger [mailto:german.eichber...@rackspace.com]
Sent: Tuesday, October 10, 2017 12:42 AM
To: OpenStack Development Mailing List (not for usage
Sorry, I forgot the link to this documentation:
https://docs.openstack.org/octavia/latest/user/guides/l7-cookbook.html
From: mihaela.ba...@orange.com [mailto:mihaela.ba...@orange.com]
Sent: Friday, September 08, 2017 10:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev]
Hello,
Is the redirect_to_pool policy currently supported with Octavia? Since a
listener can only have one pool (the default pool), I cannot see how this can
be configured. However, this documentation details many scenarios. I am testing
the Octavia Ocata version.
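For what it's worth, in neutron-lbaas the REDIRECT_TO_POOL action targets an additional, non-default pool created against the load balancer rather than against the listener. A hedged sketch with the neutron CLI (all names are placeholders; verify the flags against your client version):

```shell
# Create a second pool on the load balancer (not attached as any
# listener's default pool), then redirect matching requests to it.
neutron lbaas-pool-create --name static-pool --loadbalancer lb1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-l7policy-create --name static-policy --listener listener1 \
    --action REDIRECT_TO_POOL --redirect-pool static-pool
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH \
    --value /static static-policy
```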
Thank you,
Mihaela Balas
Hello,
Does the agent implementation of LBaaSv2 support L7 policies? I am testing with
Mitaka version and I get "Not Implemented Error".
{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname":
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id":
Hi,
I appreciate the help. In neutron-server I have the following service providers
enabled:
service_provider =
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider =
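For comparison, the Octavia provider entry in neutron-lbaas usually takes the form below (a sketch; note that only one provider may carry the `:default` suffix, so it must be dropped from either this line or the Haproxy one):

```ini
[service_providers]
# Octavia driver for LBaaSv2; remove ":default" here if Haproxy stays default.
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
```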
Hi Michael,
Thank you for the detailed explanation. I was in the worst scenario, where the
database entries were purged, and I had to manually re-create the DB entries
and the ports. I successfully inserted the rows into the database and the
amphorae were created.
Thanks a lot for the
Hello,
Is there any setting we can provide to nova-compute in nova.conf/placement so
that it uses the internal URL for the placement API? By default, I see that
(in Newton) it uses the public URL, and our compute nodes do not have access
to the public IP address.
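Later nova releases expose a `[placement] os_interface` option (renamed `valid_interfaces` in Rocky) that tells keystoneauth which catalog endpoint to use; if your build carries it, the setting would look like this (a sketch, not verified against Newton specifically):

```ini
# nova.conf on the compute nodes: select the internal endpoint from the
# service catalog when talking to the placement API.
[placement]
os_interface = internal
```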
Thank you,
Mihaela Balas
I am also interested in how to fix this. Could you briefly describe the procedure?
Thanks,
Mihaela
-----Original Message-----
From: Michael Johnson [mailto:johnso...@gmail.com]
Sent: Monday, November 06, 2017 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re:
Hello,
Is there any way to set up Octavia so that we can launch amphorae in different
AZs, connected to a different network in each AZ?
Thank you,
Mihaela Balas
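As far as I know, Octavia in these releases only supports a single, global availability zone for amphorae, set in octavia.conf; there is no per-load-balancer AZ or per-AZ network selection. The global knobs look like this (values are placeholders):

```ini
# octavia.conf: one AZ and one management network for all amphorae.
[nova]
availability_zone = az1

[controller_worker]
amp_boot_network_list = <lb-mgmt-net-id>
```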
Hello,
Is there any user story for the scenario below?
- Octavia is set to TERMINATED_HTTPS and also initiates SSL to backend
servers
After testing all the possible combinations and looking at the Octavia haproxy
templates in the Queens version, I understand that this kind of setup
Hi Michael,
I built a new amphora image with the latest patches and reproduced two
different bugs that I see in my environment. One of them is similar to the one
initially described in this thread. I opened two stories, as you advised:
https://storyboard.openstack.org/#!/story/2001960
Hello,
I have the following setup:
Neutron - Newton version
Octavia - Ocata version
Neutron LBaaS had the following configuration in services_lbaas.conf:
[octavia]
..
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
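For reference, the [octavia] section of services_lbaas.conf read by the neutron-lbaas Octavia driver typically carries the options below (values are illustrative, matching what I believe are the driver defaults, not recommendations):

```ini
[octavia]
# Octavia API endpoint the driver talks to.
base_url = http://127.0.0.1:9876
# Interval in seconds to poll octavia when an entity is created, updated,
# or deleted.
request_poll_interval = 3
# Seconds after which to stop polling when an entity's status does not change.
request_poll_timeout = 100
```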
Hello,
The Keystone listener outputs the error below, over and over again, when
processing a project-delete event. Do you have any idea why this happens?
The same happens with the Ocata and Pike versions.
Thank you,
Mihaela Balas
2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : for
Hello,
I am testing Octavia Queens and I see that the failover behavior is very
different from the one in Ocata (the version we are currently running in
production).
One example of such behavior:
I create 4 load balancers and, after the creation succeeds, I shut off all