[openstack-dev] [octavia] TERMINATED_HTTPS + SSL to backend server

2018-05-30 Thread mihaela.balas
Hello,

Is there any user story for the scenario below?


- Octavia is set to TERMINATED_HTTPS and also initiates SSL to the backend
servers

After testing all possible combinations and after looking at the Octavia
haproxy templates in the Queens version, I understand that this kind of setup
is currently not supported.
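For clarity, the haproxy configuration I was hoping the templates could render looks roughly like this (a hand-written sketch; addresses and certificate paths are made up, and the current templates do not emit the `ssl` server options):

```haproxy
frontend terminated-https-listener
    # TERMINATED_HTTPS: client TLS ends at the amphora
    bind 10.0.0.10:443 ssl crt /var/lib/octavia/certs/listener.pem
    default_backend reencrypt-pool

backend reencrypt-pool
    # Re-encryption: haproxy opens a fresh TLS connection to each member
    server member1 192.168.10.11:443 ssl verify required ca-file /etc/ssl/backend-ca.pem
    server member2 192.168.10.12:443 ssl verify required ca-file /etc/ssl/backend-ca.pem
```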

Thanks,
Mihaela

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] Multiple availability zone and network region support

2018-05-25 Thread mihaela.balas
Hello,

Is there any way to set up Octavia so that we can launch amphorae in
different AZs, connected to a different network in each AZ?

Thank you,
Mihaela Balas



Re: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout

2018-05-03 Thread mihaela.balas
Hi Michael,

I built a new amphora image with the latest patches and reproduced two
different bugs that I see in my environment. One of them is similar to the one
initially described in this thread. I opened two stories, as you advised:

https://storyboard.openstack.org/#!/story/2001960
https://storyboard.openstack.org/#!/story/2001955

Meanwhile, can you recommend values for the following parameters (perhaps in
relation to the number of workers, cores, computes, etc.)?

[health_manager]
failover_threads
status_update_threads

[haproxy_amphora]
build_rate_limit
build_active_retries

[controller_worker]
workers
amp_active_retries
amp_active_wait_sec

[task_flow]
max_workers
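For context, here is the shape of configuration I am experimenting with in the meantime (every value below is my own rough guess for a small deployment, not an upstream recommendation):

```ini
[health_manager]
failover_threads = 10        # guess: roughly one per controller CPU core
status_update_threads = 10   # guess: scale with amphora count and cores

[haproxy_amphora]
build_rate_limit = -1        # guess: unlimited; lower it if nova is overloaded
build_active_retries = 120

[controller_worker]
workers = 4                  # guess: a few per controller node
amp_active_retries = 30
amp_active_wait_sec = 10

[task_flow]
max_workers = 5
```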

Thank you for your help,
Mihaela Balas

-Original Message-
From: Michael Johnson [mailto:johnso...@gmail.com] 
Sent: Friday, April 27, 2018 8:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia] Sometimes amphoras are not re-created if 
they are not reached for more than heartbeat_timeout

Hi Mihaela,

I am sorry to hear you are having trouble with the Queens release of Octavia.
It is true that a lot of work has gone into the failover capability, 
specifically working around a python threading issue and making it more 
resistant to certain neutron failure situations (missing ports, etc.).

I know of one open bug against the failover flows, 
https://storyboard.openstack.org/#!/story/2001481, "failover breaks in 
Active/Standby mode if both amphroae are down".

Unfortunately the log snippet above does not give me enough information about 
the problem to help with this issue. From the snippet it looks like the 
failovers were initiated, but the controllers are unable to reach the 
amphora-agent on the replacement amphora. It will continue those retry 
attempts, but eventually will fail the amphora into ERROR if it doesn't succeed.

One thought I have: if you created your amphora image in the last two weeks,
you may have built an amphora using the master branch of octavia, which had a
bug that impacted active/standby images. It was introduced while working
around the new pip 10 issues. That patch has been fixed:
https://review.openstack.org/#/c/564371/

If neither of these situations matches your environment, please open a story 
(https://storyboard.openstack.org/#!/dashboard/stories) for us and include the 
health manager logs from the point you delete the amphora up until it starts 
these connection attempts.  We will dig through those logs to see what the 
issue might be.

Michael (johnsom)

On Wed, Apr 25, 2018 at 4:07 AM,   wrote:
> Hello,
>
>
>
> I am testing Octavia Queens and I see that the failover behavior is 
> very much different than the one in Ocata (this is the version we are 
> currently running in production).
>
> One example of such behavior is:
>
>
>
> I create 4 load balancers and, after the creation succeeds, I shut off
> all 8 amphorae. Sometimes the health-manager agent does not even reach
> the amphorae, and they are not deleted and re-created. The logs look
> like the ones below even when the heartbeat timeout has long passed.
> Sometimes the amphorae are deleted and re-created. Sometimes they are
> only partially re-created and some of them remain shut off.
>
> Heartbeat_timeout is set to 60 seconds.
>
>
>
>
>
>
>
> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 
> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-339b54a7-ab0c-422a-832f-a444cd710497 - 
> a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries 
> exceeded with url:
> /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octav
> iasrv2.orange.com.pem (Caused by 
> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
> object at 0x7f559862c710>: Failed to establish a new connection: 
> [Errno 113] No route to host',))
>
> [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 
> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - 
> a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries 
> exceeded with url:
> /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8
> -9d73-2397e281712c/haproxy (Caused by 
> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
> object at 0x7f8a0de95e10>: Failed to establish a new connection: 
> [Errno 113] No route to host',))
>
> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 
> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver
> [req-10febb10-85ea-4082-9df7-daa48894b004 - 
> a5f15235c0714365b98a50a11ec956e7
> - - -] Could not connect to instance. Retrying.: ConnectionError:
> HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries 
> exceeded with url:
> 

[openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout

2018-04-25 Thread mihaela.balas
Hello,

I am testing Octavia Queens and I see that the failover behavior is very much 
different than the one in Ocata (this is the version we are currently running 
in production).
One example of such behavior is:

I create 4 load balancers and, after the creation succeeds, I shut off all 8
amphorae. Sometimes the health-manager agent does not even reach the amphorae,
and they are not deleted and re-created; the logs look like the ones below
even when the heartbeat timeout has long passed. Sometimes the amphorae are
deleted and re-created. Sometimes they are only partially re-created and some
of them remain shut off.
Heartbeat_timeout is set to 60 seconds.
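My expectation of the behaviour is simple: an amphora whose last heartbeat is older than heartbeat_timeout should be failed over. A simplified model of that check (my own sketch, not the actual Octavia health-manager code):

```python
import time

HEARTBEAT_TIMEOUT = 60  # seconds, matching heartbeat_timeout above

def stale_amphorae(last_heartbeats, now=None, timeout=HEARTBEAT_TIMEOUT):
    """Return ids of amphorae whose last heartbeat is older than `timeout`."""
    now = time.time() if now is None else now
    return [amp_id for amp_id, last_seen in last_heartbeats.items()
            if now - last_seen > timeout]

# An amphora silent for 120 s with a 60 s timeout should be failed over:
heartbeats = {"amp-1": 1000.0, "amp-2": 1110.0}
print(stale_amphorae(heartbeats, now=1120.0))  # ['amp-1']
```

In my tests, amphorae well past this threshold are sometimes never failed over, which is what the logs below show.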



[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x7f559862c710>: Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x7f8a0de95e10>: Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x...>: Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x...>: Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x...>: Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy
 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection 
object at 0x...>: Failed to establish a new connection: [Errno 113] No 
route to host',))

Thank you,
Mihaela Balas


[openstack-dev] [Barbican] Keystone Listener error when processing delete project event

2018-02-16 Thread mihaela.balas
Hello,


The Keystone Listener outputs the error below, over and over again, when
processing a delete-project event. Do you have any idea why this happens? The
same happens with both the Ocata and Pike versions.

Thank you,
Mihaela Balas

2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : for connection 
ef42486446c34306bd10921b264da26b heartbeat_tick 
/opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678
2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 
111624/334860, now - 111625/334860, monotonic - 895085.445269, 
last_heartbeat_sent - 895085.445263, heartbeat int. - 60 for connection 
ef42486446c34306bd10921b264da26b heartbeat_tick 
/opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700
2018-02-16 15:36:02.675 1 DEBUG oslo_messaging._drivers.amqpdriver [-] received 
message with unique_id: 0a407a9a71b641c888c49c0d4674b607 __call__ 
/opt/barbican/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:257
2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : for connection 
ef42486446c34306bd10921b264da26b heartbeat_tick 
/opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678
2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 
111625/334860, now - 111625/334863, monotonic - 895085.447218, 
last_heartbeat_sent - 895085.445263, heartbeat int. - 60 for connection 
ef42486446c34306bd10921b264da26b heartbeat_tick 
/opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700
2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input 
keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 
process_event 
/opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:72
2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input 
keystone event payload = {u'resource_info': 
u'79d3491d58e542ada54776d2bd68ef7e'} process_event 
/opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:73
2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input 
keystone event type = identity.project.deleted process_event 
/opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:74
2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] Input 
keystone event metadata = {'timestamp': u'2018-02-16 15:35:48.506374', 
'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} process_event 
/opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:75
2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] Keystone 
Event: resource type=project, operation type=deleted, keystone 
id=79d3491d58e542ada54776d2bd68ef7e process_event 
/opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:80
2018-02-16 15:36:02.677 1 DEBUG barbican.tasks.keystone_consumer [-] Creating 
KeystoneEventConsumer task processor __init__ 
/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py:40
2018-02-16 15:36:02.677 1 DEBUG barbican.model.repositories [-] Getting 
session... get_session 
/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py:353
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources [-] Could not retrieve 
information needed to process task 'Project cleanup via Keystone 
notifications'.: TypeError: 'NoneType' object is not callable
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources Traceback (most recent 
call last):
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources   File 
"/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", line 
91, in process
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources entity = 
self.retrieve_entity(*args, **kwargs)
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources   File 
"/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py",
 line 67, in retrieve_entity
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources 
suppress_exception=True)
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources   File 
"/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", 
line 586, in find_by_external_project_id
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources session = 
self.get_session(session)
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources   File 
"/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", 
line 354, in get_session
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return session or 
get_session()
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources   File 
"/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", 
line 161, in get_session
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return 
_SESSION_FACTORY()
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources TypeError: 'NoneType' 
object is not callable
2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources
2018-02-16 15:36:02.678 1 DEBUG 
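From the traceback, the failure mode looks like a module-level session factory that is still None when the listener handles the event. A minimal Python repro of that error type (my own sketch, not Barbican code):

```python
_SESSION_FACTORY = None  # set by a database setup call that apparently never ran

def get_session(session=None):
    # With the factory still None, calling it raises
    # "TypeError: 'NoneType' object is not callable", as in the log above.
    return session or _SESSION_FACTORY()

try:
    get_session()
except TypeError as exc:
    print(exc)  # 'NoneType' object is not callable
```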

[openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread mihaela.balas
Hello,

I have the following setup:
Neutron - Newton version
Octavia - Ocata version

Neutron LBaaS had the following configuration in services_lbaas.conf:

[octavia]

..
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
request_poll_interval = 2

# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
request_poll_timeout = 300



However, neutron-lbaas does not seem to respect the request poll interval, and
it takes about 15 minutes to create a load balancer + listener + pool +
members + health monitor. Below are the timestamps of the API calls made by
neutron towards Octavia (captured with tcpdump while creating a load balancer
from the Horizon GUI):

10.100.0.14 - - [01/Feb/2018 12:11:53] "POST /v1/loadbalancers HTTP/1.1" 202 437
10.100.0.14 - - [01/Feb/2018 12:11:54] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 430
10.100.0.14 - - [01/Feb/2018 12:11:58] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:12:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:14:12] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:16:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/listeners HTTP/1.1" 202 
445
10.100.0.14 - - [01/Feb/2018 12:16:23] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:18:32] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:18:37] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools HTTP/1.1" 202 318
10.100.0.14 - - [01/Feb/2018 12:18:37] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:20:46] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:00] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 317
10.100.0.14 - - [01/Feb/2018 12:23:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:23:05] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:08] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 316
10.100.0.14 - - [01/Feb/2018 12:23:08] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:25:20] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:25:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/healthmonitor
 HTTP/1.1" 202 215
10.100.0.14 - - [01/Feb/2018 12:27:30] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 437

It seems that, after one or two polls, it waits more than two minutes before
the next poll. Is this normal? Has anyone seen this behavior?
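For reference, my understanding of the intended polling semantics is a fixed-interval loop bounded by request_poll_timeout, roughly like this (a simplified sketch, not the actual neutron-lbaas code):

```python
def poll_until_done(get_status, interval=2, timeout=300):
    """Poll get_status() every `interval` seconds until the entity leaves a
    PENDING_* state or `timeout` seconds of polling have elapsed.
    Returns (final_status, number_of_polls). Sleeping is elided so the
    sketch stays testable."""
    pending = ("PENDING_CREATE", "PENDING_UPDATE", "PENDING_DELETE")
    elapsed = 0
    polls = 0
    while True:
        status = get_status()
        polls += 1
        if status not in pending or elapsed >= timeout:
            return status, polls
        elapsed += interval  # a real implementation would sleep(interval) here

statuses = iter(["PENDING_CREATE", "PENDING_CREATE", "ACTIVE"])
print(poll_until_done(lambda: next(statuses)))  # ('ACTIVE', 3)
```

With interval=2 the GETs above should be roughly 2 seconds apart; a two-minute gap between polls suggests something else is blocking the loop.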

Thank you,
Mihaela Balas



[openstack-dev] [nova] [placement] [api] Use internal URL for placement-api

2017-11-22 Thread mihaela.balas
Hello,

Is there any setting that we can provide to nova-compute in the nova.conf
[placement] section so that it uses the internal URL for the placement API? By
default, I see that (in Newton) it uses the public URL, and our compute nodes
do not have access to the public IP address.
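What I am hoping for is something along these lines in nova.conf (the `os_interface` option name comes from my reading of newer releases and may not exist in Newton, so please correct me if I am wrong):

```ini
[placement]
# Ask the Keystone catalog for the internal endpoint instead of the
# public one. (Assumption: available in releases after Newton.)
os_interface = internal
os_region_name = RegionOne
```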

Thank you,
Mihaela Balas



Re: [openstack-dev] [octavia] how to recreate amphora instances

2017-11-13 Thread mihaela.balas
Hi Michael,

Thank you for the detailed explanation. I was in the worst-case scenario,
where the database entries had been purged, and I had to manually re-create
the DB records and the ports. I successfully inserted the rows into the
database and the amphorae were created.

Thanks a lot for the hints. I also increased the DB cleanup interval to 1 week.

Mihaela

-Original Message-
From: Michael Johnson [mailto:johnso...@gmail.com] 
Sent: Friday, November 10, 2017 3:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia] how to recreate amphora instances

I can give it a go; there are also logs of our conversation on eavesdrop here: 
http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2017-11-02.log.html#t2017-11-02T11:07:45

Short background: they had an infrastructure issue where networking was out.
This caused the load balancer amphorae to be detected as failed, which started
the failover process that rebuilds the amphora; however, this process also
failed due to the wider infrastructure issues (note: this is what I remember
from the conversation, so correct the background if I am wrong). At this point
the load balancers would be in a provisioning status of "ERROR", the amphora
instances were likely deleted (depending on where the infrastructure issue
impacted the failover process), and the amphora DB records would be in the
"DELETED" state. To make their situation worse, the DB cleanup cycle of the
housekeeping process was set low (the default is a week) and the amphora
records were purged. This meant the customer data for the VIP address/ports
was also purged.

Recovery:
If the database records were not purged, then after the infrastructure issues
are resolved you can simply go into the octavia database and issue:
update load_balancer set provisioning_status="ACTIVE" where
provisioning_status = "ERROR";
The health manager will then expect the amphorae to report health heartbeats,
and the failover process will start again, rebuilding the amphorae. We have
also recently added an admin API to trigger a failover manually
(https://developer.openstack.org/api-ref/load-balancer/v2/index.html#failover-a-load-balancer).
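In SQL form, the same statement as above (the `openstack loadbalancer failover` CLI mentioned in the usage note assumes a python-octaviaclient recent enough to expose the admin failover API):

```sql
-- Kick load balancers stuck in ERROR back into failover handling:
UPDATE load_balancer
   SET provisioning_status = 'ACTIVE'
 WHERE provisioning_status = 'ERROR';
```

Once the status is ACTIVE again, the health manager notices the missing heartbeats and re-runs the failover flow; `openstack loadbalancer failover <lb-id>` triggers the same thing explicitly for a single load balancer.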

If your amphora records have been purged, you are in some pain (don't do
this). This means that the VIP address and the neutron port information for
that VIP address are no longer available for the failover process to rebuild
from. In this case, before you start the above process, you will need to
recreate the amphora records from your logs, either adding the port
information if the ports are still live in neutron or creating replacement
ports.

Michael

On Wed, Nov 8, 2017 at 2:07 AM,   wrote:
> I am also interested in how to fix this. Could you briefly describe the
> procedure?
>
> Thanks,
> Mihaela
>
> -Original Message-
> From: Michael Johnson [mailto:johnso...@gmail.com]
> Sent: Monday, November 06, 2017 6:54 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [octavia] how to recreate amphora 
> instances
>
> I think we helped you get going again in the IRC channel.  Please ping us 
> again in the IRC channel if you need more assistance.
>
> Michael
>
> On Thu, Nov 2, 2017 at 4:42 AM, Kim-Norman Sahm  
> wrote:
>> Hi,
>>
>> after a rabbitmq problem octavia has removed all amphora instances.
>> the loadbalancers are in provisioning_status "ACTIVE"
>>
>> ~$ neutron lbaas-loadbalancer-list
>> neutron CLI is deprecated and will be removed in the future. Use 
>> openstack CLI instead.
>> | 07b41df6-bb75-4502-975a-20140b0832dd | Load Balancer
>> 4   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.18   | ACTIVE  | octavia  |
>> | 25664be7-15cb-426b-ad09-6102afb62b14 | Load Balancer
>> 2   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.7| ACTIVE  | octavia  |
>> | 927eb754-7c52-4060-b130-1f5e82d92555 | Load Balancer
>> 6   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.17   | ACTIVE  | octavia  |
>> | b4d93c68-89d6-4e4f-b06c-117d4ea933fa | Load Balancer
>> 5   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.24   | ACTIVE  | octavia  |
>> | d7699f8d-2106-42d6-8797-5feb72de6e2e | Load Balancer
>> 1   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.5| ACTIVE  | octavia  |
>> | dd6114ae-21e9-41bd-b155-325287aed420 | Load Balancer
>> 3   | 260b0c452e214accaf6cc0e98fb10fc0 |
>> 192.168.1.23   | ACTIVE  | octavia  |
>>
>> How can we trigger octavia to rebuild the amphore instances?
>> I've tried to restart the octavia services but it didn't solve the 
>> problem.
>>
>> Best regards
>> Kim
>>
>>
>> Kim-Norman Sahm
>> Cloud & Infrastructure(OCI)
>> noris network AG
>> Thomas-Mann-Straße 16-20
>> 90471 Nürnberg
>> Deutschland

Re: [openstack-dev] [octavia] how to recreate amphora instances

2017-11-08 Thread mihaela.balas
I am also interested in how to fix this. Could you briefly describe the procedure?

Thanks,
Mihaela

-Original Message-
From: Michael Johnson [mailto:johnso...@gmail.com] 
Sent: Monday, November 06, 2017 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia] how to recreate amphora instances

I think we helped you get going again in the IRC channel.  Please ping us again 
in the IRC channel if you need more assistance.

Michael

On Thu, Nov 2, 2017 at 4:42 AM, Kim-Norman Sahm  
wrote:
> Hi,
>
> after a rabbitmq problem octavia has removed all amphora instances.
> the loadbalancers are in provisioning_status "ACTIVE"
>
> ~$ neutron lbaas-loadbalancer-list
> neutron CLI is deprecated and will be removed in the future. Use 
> openstack CLI instead.
> | 07b41df6-bb75-4502-975a-20140b0832dd | Load Balancer
> 4   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.18   | ACTIVE  | octavia  |
> | 25664be7-15cb-426b-ad09-6102afb62b14 | Load Balancer
> 2   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.7| ACTIVE  | octavia  |
> | 927eb754-7c52-4060-b130-1f5e82d92555 | Load Balancer
> 6   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.17   | ACTIVE  | octavia  |
> | b4d93c68-89d6-4e4f-b06c-117d4ea933fa | Load Balancer
> 5   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.24   | ACTIVE  | octavia  |
> | d7699f8d-2106-42d6-8797-5feb72de6e2e | Load Balancer
> 1   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.5| ACTIVE  | octavia  |
> | dd6114ae-21e9-41bd-b155-325287aed420 | Load Balancer
> 3   | 260b0c452e214accaf6cc0e98fb10fc0 |
> 192.168.1.23   | ACTIVE  | octavia  |
>
> How can we trigger octavia to rebuild the amphore instances?
> I've tried to restart the octavia services but it didn't solve the 
> problem.
>
> Best regards
> Kim
>
>
> Kim-Norman Sahm
> Cloud & Infrastructure(OCI)
> noris network AG
> Thomas-Mann-Straße 16-20
> 90471 Nürnberg
> Deutschland
> Tel +49 911 9352 1433
> Fax +49 911 9352 100
>
> kim-norman.s...@noris.de
> https://www.noris.de - Mehr Leistung als Standard
> Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel Vorsitzender des 
> Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689
>
>
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



Re: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-13 Thread mihaela.balas
Hi German,

I just tested with the Newton version and I get the same error as with Mitaka,
“Not Implemented Error” (see below).

Mihaela

From: German Eichberger [mailto:german.eichber...@rackspace.com]
Sent: Tuesday, October 10, 2017 12:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy 
support

Mihaela,

The first version with L7 was Newton and beginning then the LBaaS V2 namespace 
driver would support it as well as Octavia.

German

From: "mihaela.ba...@orange.com" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, October 3, 2017 at 2:13 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy 
support

Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with 
Mitaka version and I get “Not Implemented Error”.

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py",
 line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >Traceback (most 
recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, 
in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >result = 
method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in 
create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 
self._create(request, body, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >ectxt.value 
= e.inner_exc
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
six.reraise(self.type_, self.value, self.tb)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 

[openstack-dev] [octavia] how to list/find the amphoras serving a load balancer

2017-10-13 Thread mihaela.balas
Hi,

We are about to deploy Octavia (Ocata) in a multi-tenant OpenStack environment.
All amphorae (for all tenants) will be spawned in a "service" tenant. What is
the easiest way to list the amphora instances of a given load balancer? As far
as I can see, there is no API call returning such a result. The best way I have
found is to check the security group associated with the amphora port: the
security group name includes the load balancer ID.
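
That lookup can be sketched as follows, assuming (as observed above, not from
any Octavia API contract) that the security group attached to the amphora port
carries the load balancer ID in its name; the `ports` structure and the
`amphora_ports_for_lb` helper are illustrative only:

```python
# Illustrative only: match amphora ports to a load balancer by security-group
# name. Field names here are hypothetical stand-ins for a Neutron port listing.

def amphora_ports_for_lb(ports, lb_id):
    """Return ports whose security-group names embed the load balancer ID."""
    return [p for p in ports
            if any(lb_id in sg for sg in p.get("security_group_names", []))]

ports = [
    {"id": "port-1",
     "security_group_names": ["lb-25664be7-15cb-426b-ad09-6102afb62b14"]},
    {"id": "port-2", "security_group_names": ["default"]},
]

matches = amphora_ports_for_lb(ports, "25664be7-15cb-426b-ad09-6102afb62b14")
print([p["id"] for p in matches])  # ['port-1']
```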

Thank you,
Mihaela

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



Re: [openstack-dev] Odp.: Odp.: [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-05 Thread mihaela.balas
Thanks a lot for the response.

Mihaela

-Original Message-
From: Michael Johnson [mailto:johnso...@gmail.com] 
Sent: Wednesday, October 04, 2017 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Odp.: Odp.: [neutron][lbaasv2][agent 
implementation] L7 policy support

Hi Mihaela,

The old neutron-lbaas haproxy namespace driver does not have L7 support. Only 
the Octavia driver and some vendor provider drivers have
L7 support.

Michael


On Tue, Oct 3, 2017 at 11:35 PM, Pawel Suder  wrote:
> Hello,
>
>
> It seems that HaproxyOnHostPluginDriver from
> https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lba
> as/drivers/haproxy/plugin_driver.py#L21
> extends AgentDriverBase
> https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lba
> as/drivers/common/agent_driver_base.py#L301
> where I could not located L7 things.
>
> L7 support might be Octavia-only. What I found is that HAProxy itself
> (https://www.haproxy.com/doc/aloha/7.0/haproxy/index.html) supports L7
> features.
>
>
> It seems that in the old days this was simply not taken into
> consideration.
>
>
> Cheers,
>
> Paweł
>
> 
> From: mihaela.ba...@orange.com 
> Sent: 3 October 2017 14:45:11
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Odp.: [neutron][lbaasv2][agent 
> implementation] L7 policy support
>
>
> Hi,
>
>
>
> I appreciate the help. In neutron-server I have the following service 
> providers enabled:
>
>
>
> service_provider =
> LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.Hap
> roxyOnHostPluginDriver:default
>
> service_provider =
> LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDri
> ver
>
>
>
> With Octavia provider L7 policy works fine. With haproxy (agent 
> provider) I receive the error below.
>
>
>
> On the haproxy agent I have the following setting (however, the 
> neutron-server throws that error and not even sends any request to agent):
>
>
>
> interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
>
> device_driver =
> neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
>
>
>
> Mihaela
>
>
>
> From: Pawel Suder [mailto:pawel.su...@corp.ovh.com]
> Sent: Tuesday, October 03, 2017 3:10 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Odp.: [neutron][lbaasv2][agent 
> implementation] L7 policy support
>
>
>
> Hello Mihaela,
>
>
>
> It seems that you are referring to that part of code
> https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lba
> as/drivers/driver_base.py#L36
>
>
>
> I found that document for Mitaka
> https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>
>
>
> It might be related to incorrectly configured driver for LBaaS (or 
> indeed not implemented driver for L7 policy for specific driver).
>
>
>
> Questions:
>
>
>
> * What do you have configured in neutron configuration in section 
> [service_providers]?
>
> * Which driver do you want to use?
>
>
>
> Example line
>
>
>
> service_provider =
> LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.Hap
> roxyOnHostPluginDriver:default
>
>
>
> Cheers,
>
> Paweł
>
> 
>
> From: mihaela.ba...@orange.com 
> Sent: 3 October 2017 11:13:34
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 
> policy support
>
>
>
> Hello,
>
>
>
> Does the agent implementation of LBaaSv2 support L7 policies? I am 
> testing with Mitaka version and I get “Not Implemented Error”.
>
>
>
> {"asctime": "2017-10-03 07:34:42.764","process": "18","levelname":
> "INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id":
> "req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id":
> "44364a07de754daa9ffeb2911fe3620a", "project_id":
> "a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", 
> "user_domain_id": "-",
> "project_domain_id": "-"},"instance": {},"message":"Calling driver 
> operation NotImplementedManager.create"}
>
> {"asctime": "2017-10-03 07:34:42.765","process": "18","levelname":
> "ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id":
> "req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id":
> "44364a07de754daa9ffeb2911fe3620a", "project_id":
> "a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", 
> "user_domain_id": "-",
> "project_domain_id": "-"},"instance": {},"message":"There was an error 
> in the driver"}
>
> 2017-10-03 07:34:42.765 18 TRACE 
> neutron_lbaas.services.loadbalancer.plugin
>>Traceback (most recent call last):
>
> 2017-10-03 07:34:42.765 18 TRACE 
> neutron_lbaas.services.loadbalancer.plugin
>>  File
> 

Re: [openstack-dev] Odp.: [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-03 Thread mihaela.balas
Hi,

I appreciate the help. In neutron-server I have the following service providers 
enabled:

service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = 
LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver

With Octavia provider L7 policy works fine. With haproxy (agent provider) I 
receive the error below.

On the haproxy agent I have the following setting (however, the neutron-server 
throws that error and not even sends any request to agent):

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Mihaela

From: Pawel Suder [mailto:pawel.su...@corp.ovh.com]
Sent: Tuesday, October 03, 2017 3:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Odp.: [neutron][lbaasv2][agent implementation] L7 
policy support


Hello Mihaela,



It seems that you are referring to that part of code 
https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lbaas/drivers/driver_base.py#L36

I found that document for Mitaka 
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

It might be related to incorrectly configured driver for LBaaS (or indeed not 
implemented driver for L7 policy for specific driver).

Questions:

* What do you have configured in neutron configuration in section 
[service_providers]?
* Which driver do you want to use?

Example line

service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Cheers,
Paweł

From: mihaela.ba...@orange.com 
>
Sent: 3 October 2017 11:13:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy 
support

Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with 
Mitaka version and I get "Not Implemented Error".

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py",
 line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >Traceback (most 
recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, 
in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >result = 
method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in 
create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 

[openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-03 Thread mihaela.balas
Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with 
Mitaka version and I get "Not Implemented Error".

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py",
 line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >Traceback (most 
recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, 
in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >result = 
method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in 
create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 
self._create(request, body, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >ectxt.value 
= e.inner_exc
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
six.reraise(self.type_, self.value, self.tb)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 
f(*args, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 521, in 
_create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >obj = 
do_create(body)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 503, in 
do_create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
request.context, reservation.reservation_id)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-03 

Re: [openstack-dev] [octavia][heat] Octavia deployment with Heat

2017-09-14 Thread mihaela.balas
Hello,

Are there any plans to fix this in Heat?

Thank you,
Mihaela Balas

From: Rabi Mishra [mailto:ramis...@redhat.com]
Sent: Wednesday, July 26, 2017 3:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [octavia][heat] Octavia deployment with Heat

On Wed, Jul 26, 2017 at 5:34 PM, 
> wrote:
Hello,

Is Octavia (Ocata version) supposed to work with a Heat (tested with Newton 
version) deployment? I launch a Heat stack that deploys a load balancer 
with a single listener/pool and two members. While Heat shows status 
COMPLETE and Neutron shows all objects as created, Octavia creates the 
listener and the pool, but with a single member (instead of two).
Another example: I launch a Heat stack that deploys a load balancer with 
multiple listeners/pools, each having two members. The result is that Heat 
shows status COMPLETE and Neutron shows all objects as created, but Octavia 
creates the listeners, only some of the pools, and for those pools only one 
member or none.
In the Octavia log I could see only these types of errors:

Sounds like https://bugs.launchpad.net/heat/+bug/1632054.
We just check the provisioning_status of the load balancer when adding members
and mark the resource as CREATE_COMPLETE. I think Octavia has added
provisioning_status to all top-level objects like listeners etc. [1], but I
don't think those attributes are available through the LBaaS v2 API for us to
check.

[1] https://review.openstack.org/#/c/372791/

2017-07-26 08:12:08.639 1 INFO octavia.api.v1.controllers.member 
[req-749be397-dd63-4fb6-9d86-b717f6d59e3d - 989bbadfe4134722b478ca799217833e - 
default default] Member cannot be created or modified because the Load Balancer 
is in an immutable state
2017-07-26 08:12:08.698 1 DEBUG wsme.api 
[req-749be397-dd63-4fb6-9d86-b717f6d59e3d - 989bbadfe4134722b478ca799217833e - 
default default] Client-side error: Load Balancer 
b12a29db-81d0-451a-af9c-d563b636bf01 is immutable and cannot be updated. 
format_exception /opt/octavia/lib/python2.7/site-packages/wsme/api.py:222

I think what happens is that it takes some time until the configuration is
updated on an amphora, and during that time the Load Balancer is in an UPDATE
state, so new configuration cannot be added.

Is this scenario validated, or is it still a work in progress?

Thanks,
Mihaela Balas



_



Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc

pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler

a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,

Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.



This message and its attachments may contain confidential or privileged 
information that may be protected by law;

they should not be distributed, used or copied without authorisation.

If you have received this email in error, please notify the sender and delete 
this message and its attachments.

As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.

Thank you.


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.


Re: [openstack-dev] [octavia][l7policy] Is redirect to pool supported?

2017-09-08 Thread mihaela.balas
Sorry, I forgot the link to this documentation: 
https://docs.openstack.org/octavia/latest/user/guides/l7-cookbook.html

From: mihaela.ba...@orange.com [mailto:mihaela.ba...@orange.com]
Sent: Friday, September 08, 2017 10:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [octavia][l7policy] Is redirect to pool supported?

Hello,

Is the redirect_to_pool policy currently supported with Octavia? Since a
listener can only have one pool (the default pool), I cannot see how this can
be configured. However, this documentation details a lot of scenarios. I am
testing the Octavia Ocata version.

Thank you,
Mihaela Balas

_



Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc

pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler

a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,

Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.



This message and its attachments may contain confidential or privileged 
information that may be protected by law;

they should not be distributed, used or copied without authorisation.

If you have received this email in error, please notify the sender and delete 
this message and its attachments.

As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.

Thank you.

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



[openstack-dev] [octavia][l7policy] Is redirect to pool supported?

2017-09-08 Thread mihaela.balas
Hello,

Is the redirect_to_pool policy currently supported with Octavia? Since a
listener can only have one pool (the default pool), I cannot see how this can
be configured. However, this documentation details a lot of scenarios. I am
testing the Octavia Ocata version.
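
For illustration only — this is not Octavia's implementation or API — the
intent of a REDIRECT_TO_POOL policy can be sketched as: each policy carries
match rules, the first policy whose rules all match wins, and unmatched traffic
falls through to the listener's default pool. The path-prefix rule model and
all names below are hypothetical simplifications:

```python
# Conceptual sketch of L7 REDIRECT_TO_POOL evaluation; the rule model
# (path-prefix matching) and pool names are hypothetical.

def choose_pool(policies, default_pool, request_path):
    """Return the pool a request should be routed to."""
    for policy in policies:
        # A policy matches only if every one of its rules matches.
        if all(request_path.startswith(prefix) for prefix in policy["rules"]):
            return policy["redirect_pool"]
    # No policy matched: fall back to the listener's default pool.
    return default_pool

policies = [{"rules": ["/api"], "redirect_pool": "api_pool"}]
print(choose_pool(policies, "web_pool", "/api/v2/lbaas"))  # api_pool
print(choose_pool(policies, "web_pool", "/index.html"))    # web_pool
```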

Thank you,
Mihaela Balas

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



[openstack-dev] [octavia][heat] Octavia deployment with Heat

2017-07-26 Thread mihaela.balas
Hello,

Is Octavia (Ocata version) supposed to work with a Heat (tested with Newton 
version) deployment? I launch a Heat stack that deploys a load balancer 
with a single listener/pool and two members. While Heat shows status 
COMPLETE and Neutron shows all objects as created, Octavia creates the 
listener and the pool, but with a single member (instead of two).
Another example: I launch a Heat stack that deploys a load balancer with 
multiple listeners/pools, each having two members. The result is that Heat 
shows status COMPLETE and Neutron shows all objects as created, but Octavia 
creates the listeners, only some of the pools, and for those pools only one 
member or none.
In the Octavia log I could see only these types of errors:

2017-07-26 08:12:08.639 1 INFO octavia.api.v1.controllers.member 
[req-749be397-dd63-4fb6-9d86-b717f6d59e3d - 989bbadfe4134722b478ca799217833e - 
default default] Member cannot be created or modified because the Load Balancer 
is in an immutable state
2017-07-26 08:12:08.698 1 DEBUG wsme.api 
[req-749be397-dd63-4fb6-9d86-b717f6d59e3d - 989bbadfe4134722b478ca799217833e - 
default default] Client-side error: Load Balancer 
b12a29db-81d0-451a-af9c-d563b636bf01 is immutable and cannot be updated. 
format_exception /opt/octavia/lib/python2.7/site-packages/wsme/api.py:222

I think what happens is that it takes some time until the configuration is
updated on an amphora, and during that time the Load Balancer is in an UPDATE
state, so new configuration cannot be added.

Is this scenario validated, or is it still a work in progress?
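
One client-side workaround for this immutable-state window is to poll the load
balancer's provisioning status and only issue the next call once it settles.
This is a sketch under that assumption; `wait_for_active`, `get_status`, and
the simulated statuses are illustrative, not a Heat or Octavia API:

```python
# Illustrative sketch: retry until the load balancer is mutable again before
# issuing the next create/update call. `get_status` stands in for a real
# API client; the status sequence is simulated below.

def wait_for_active(get_status, max_polls=10):
    """Poll until the LB reports ACTIVE; return False if it never settles."""
    for _ in range(max_polls):
        status = get_status()
        if status == "ACTIVE":
            return True
        if status == "ERROR":
            raise RuntimeError("load balancer entered ERROR state")
    return False

# Simulate a load balancer that is immutable for two polls, then settles.
statuses = iter(["PENDING_UPDATE", "PENDING_UPDATE", "ACTIVE"])
print(wait_for_active(lambda: next(statuses)))  # True
```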

Thanks,
Mihaela Balas



_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.
