Re: [openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread Michael Johnson
Hi Mihaela,

The polling logic that the neutron-lbaas Octavia driver uses to update
the neutron database is as follows:

Once a create/update/delete action is executed against a load balancer
using the Octavia driver, a polling thread is created.
On every request_poll_interval the thread queries the Octavia v1 API
to check the status of the modified object.
It saves the updated state in the neutron database and exits once the
object's provisioning status becomes one of "ACTIVE", "DELETED", or
"ERROR".
It repeats this polling until one of those provisioning statuses is
reached or the request_poll_timeout is exceeded.

My suspicion is that the GET requests you are seeing for those objects
are coming from another source.
You can test this by running neutron-lbaas in debug mode; it will then
log a debug message for every polling interval.

The code for this thread is located here:
https://github.com/openstack/neutron-lbaas/blob/stable/ocata/neutron_lbaas/drivers/octavia/driver.py#L66
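
In rough terms the loop looks like the sketch below (simplified and
hypothetical, not the actual driver code; get_loadbalancer and
update_status are placeholder names):

import time

TERMINAL_STATUSES = ('ACTIVE', 'DELETED', 'ERROR')

def poll_octavia(octavia_client, neutron_db, lb_id,
                 request_poll_interval, request_poll_timeout):
    """Poll the Octavia v1 API until the object reaches a terminal state."""
    deadline = time.time() + request_poll_timeout
    while time.time() < deadline:
        time.sleep(request_poll_interval)
        # GET /v1/loadbalancers/<lb_id> against the Octavia v1 API.
        lb = octavia_client.get_loadbalancer(lb_id)
        # Persist the latest provisioning status in the neutron database.
        neutron_db.update_status(lb_id, lb['provisioning_status'])
        if lb['provisioning_status'] in TERMINAL_STATUSES:
            return lb
    raise Exception('request_poll_timeout exceeded for %s' % lb_id)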

Michael



[openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread mihaela.balas
Hello,

I have the following setup:
Neutron - Newton version
Octavia - Ocata version

Neutron LBaaS has the following configuration in services_lbaas.conf:

[octavia]

..
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
request_poll_interval = 2

# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
request_poll_timeout = 300



However, neutron-lbaas does not seem to respect the request poll interval, and it 
takes about 15 minutes to create a load balancer + listener + pool + members + health monitor. 
Below are the timestamps of the API calls made by neutron towards 
Octavia (extracted with tcpdump while creating a load balancer from the Horizon GUI):

10.100.0.14 - - [01/Feb/2018 12:11:53] "POST /v1/loadbalancers HTTP/1.1" 202 437
10.100.0.14 - - [01/Feb/2018 12:11:54] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 430
10.100.0.14 - - [01/Feb/2018 12:11:58] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:12:00] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:14:12] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:16:23] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/listeners HTTP/1.1" 202 445
10.100.0.14 - - [01/Feb/2018 12:16:23] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:18:32] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:18:37] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools HTTP/1.1" 202 318
10.100.0.14 - - [01/Feb/2018 12:18:37] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:20:46] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:00] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members HTTP/1.1" 202 317
10.100.0.14 - - [01/Feb/2018 12:23:00] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:23:05] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:08] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members HTTP/1.1" 202 316
10.100.0.14 - - [01/Feb/2018 12:23:08] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:25:20] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:25:23] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/healthmonitor HTTP/1.1" 202 215
10.100.0.14 - - [01/Feb/2018 12:27:30] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 437

It seems that, after one or two polls, it waits more than two minutes until the 
next poll. Is this normal? Has anyone seen this behavior?
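
To quantify the gaps, a small standalone sketch like the following (not part of
neutron-lbaas, purely illustrative) can parse the timestamps above and print the
interval between consecutive requests:

import re
from datetime import datetime

# Paste the access-log style lines captured with tcpdump here.
LOG_LINES = [
    '10.100.0.14 - - [01/Feb/2018 12:11:54] "GET /v1/loadbalancers/8c734a97-... HTTP/1.1" 200 430',
    '10.100.0.14 - - [01/Feb/2018 12:11:58] "GET /v1/loadbalancers/8c734a97-... HTTP/1.1" 200 447',
    '10.100.0.14 - - [01/Feb/2018 12:12:00] "GET /v1/loadbalancers/8c734a97-... HTTP/1.1" 200 447',
    '10.100.0.14 - - [01/Feb/2018 12:14:12] "GET /v1/loadbalancers/8c734a97-... HTTP/1.1" 200 438',
]

previous = None
for line in LOG_LINES:
    match = re.search(r'\[([^\]]+)\] "(\w+)', line)
    if not match:
        continue
    stamp = datetime.strptime(match.group(1), '%d/%b/%Y %H:%M:%S')
    if previous is not None:
        gap = (stamp - previous).total_seconds()
        print('%s %-4s +%ds since previous request' % (stamp, match.group(2), gap))
    previous = stamp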

Thank you,
Mihaela Balas



Re: [openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Kosnik, Lubosz
In my opinion this patch should be changed. We should start using project_id 
instead of still keeping the tenant_id property.
All occurrences of project_id in [1] should be fixed.

Lubosz

[1] neutron_lbaas/tests/tempest/v2/scenario/base.py
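
As a purely hypothetical illustration of the kind of change being discussed (not
the actual contents of base.py; helper names and body keys are made up), switching
a request body from the deprecated key to the new one looks like this:

# Hypothetical before/after; the real helpers in
# neutron_lbaas/tests/tempest/v2/scenario/base.py may differ.

def build_member_body_old(subnet_id, address, tenant_id):
    # Legacy form keyed on tenant_id, which newer tempest no longer populates.
    return {'member': {'subnet_id': subnet_id, 'address': address,
                       'protocol_port': 80, 'tenant_id': tenant_id}}

def build_member_body_new(subnet_id, address, project_id):
    # Same body built with project_id instead.
    return {'member': {'subnet_id': subnet_id, 'address': address,
                       'protocol_port': 80, 'project_id': project_id}}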

From: Nir Magnezi <nmagn...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, January 3, 2017 at 3:37 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron-lbaas][octavia]

I would like to emphasize the importance of this issue.

Currently, all the LBaaS/Octavia gates are up and running (touch wood).
Nevertheless, this bug will become more apparent (aka broken gates) in the next 
release of tempest (if we don't merge this fix beforehand).

The reason is that the issue occurs when you use tempest master,
while our gates currently use tempest tag 13.0.0 (as expected).

Nir

On Tue, Jan 3, 2017 at 11:04 AM, Genadi Chereshnya 
<gcher...@redhat.com<mailto:gcher...@redhat.com>> wrote:
When running neutron_lbaas scenario tests with the latest tempest version we 
fail because of https://bugs.launchpad.net/octavia/+bug/1649083.
I would appreciate it if anyone could go over the patch that fixes the problem and merge 
it, so our automation will succeed.
The patch is https://review.openstack.org/#/c/411257/
Thanks in advance,
Genadi



Re: [openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Nir Magnezi
I would like to emphasize the importance of this issue.

Currently, all the LBaaS/Octavia gates are up and running (touch wood).
Nevertheless, this bug will become more apparent (aka broken gates) in the
next release of tempest (if we don't merge this fix beforehand).

The reason is that the issue occurs when you use tempest master,
while our gates currently use tempest tag 13.0.0 (as expected).

Nir

On Tue, Jan 3, 2017 at 11:04 AM, Genadi Chereshnya 
wrote:

> When running neutron_lbaas scenarios tests with the latest tempest version
> we fail because of https://bugs.launchpad.net/octavia/+bug/1649083.
>
> I would like if anyone can go over the patch that fixes the problem and
> merge it, so our automation will succeed.
> The patch is https://review.openstack.org/#/c/411257/
>
> Thanks in advance,
> Genadi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [neutron-lbaas][octavia]

2017-01-03 Thread Genadi Chereshnya
When running neutron_lbaas scenario tests with the latest tempest version
we fail because of https://bugs.launchpad.net/octavia/+bug/1649083.

I would appreciate it if anyone could go over the patch that fixes the problem and
merge it, so our automation will succeed.
The patch is https://review.openstack.org/#/c/411257/

Thanks in advance,
Genadi


Re: [openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-29 Thread Kosnik, Lubosz
Based on these logs, I can tell you that the problem is with plugging the VIP address. 
You need to show us the n-cpu logs as well. There should be some info about what happened, 
because we can see in the logs (line 22) that the client failed with error 500 when 
attaching the network adapter. Maybe you’re out of IPs in this subnet?
Without the rest of the logs there is no way to tell exactly what happened.
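
To rule out address exhaustion, something along these lines can count free addresses
in the VIP subnet (a rough sketch using python-neutronclient and the standard
ipaddress module; the credentials and subnet ID are placeholders to adjust for your
deployment):

import ipaddress

from neutronclient.v2_0 import client

# Placeholder credentials/endpoint; fill in for your cloud.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

subnet_id = 'REPLACE-WITH-VIP-SUBNET-ID'
subnet = neutron.show_subnet(subnet_id)['subnet']

# Total addresses available in the subnet's allocation pools.
total = 0
for pool in subnet['allocation_pools']:
    start = int(ipaddress.ip_address(pool['start']))
    end = int(ipaddress.ip_address(pool['end']))
    total += end - start + 1

# Addresses already consumed by ports with a fixed IP on this subnet.
used = sum(1 for port in neutron.list_ports()['ports']
           for fixed_ip in port['fixed_ips']
           if fixed_ip['subnet_id'] == subnet_id)

print('subnet %s: %d of %d allocation-pool addresses in use' %
      (subnet_id, used, total))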

Regards,
Lubosz.

From: Yipei Niu <newy...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, December 27, 2016 at 9:16 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron-lbaas][octavia] Error when creating load 
balancer

Hi, All,

I failed creating a load balancer on a subnet. The detailed info of o-cw.log is 
pasted in the link http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei


[openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-27 Thread Yipei Niu
Hi, All,

I failed creating a load balancer on a subnet. The detailed info of
o-cw.log is pasted in the link http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei


Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged in to the amphorae vm

2016-12-12 Thread Wanjing Xu (waxu)
Lubosz,

There are a lot of retries; I just omitted them in the email.  So how do I fix 
this VIP plug error?

Thanks
Wanjing
From: "Kosnik, Lubosz" <lubosz.kos...@intel.com>
Date: Friday, December 9, 2016 at 4:38 PM
To: "Wanjing Xu (waxu)" <w...@cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged 
in to the amphorae vm

Plugging the VIP worked without any problems.
The log is telling me that you have a very restrictive timeout configuration. 7 retries 
is a very low setting; please reconfigure this to a much bigger value.

Regards,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com<mailto:lubosz.kos...@intel.com>

On Dec 9, 2016, at 3:46 PM, Wanjing Xu (waxu) 
<w...@cisco.com<mailto:w...@cisco.com>> wrote:

I have stable/mitaka Octavia, which has been running OK until today. Whenever I 
create a load balancer, the amphora VM is created with the mgmt NIC, but it looks like 
the VIP plug failed.  I can ping the amphora mgmt NIC from the controller (where the 
Octavia processes are running), but it looks like some REST API call into the amphora to 
plug in the VIP failed:

Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C


o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG 
octavia.controller.worker.tasks.network_tasks [-] Retrieving network details 
for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute 
/opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
(76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': 
}' 
_task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' 
(3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from 
state 'PENDING' _task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
……
ransitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING 
octavia.controller.worker.controller_worker [-] Flow 
'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' 
(f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from 
state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR

Re: [openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged in to the amphorae vm

2016-12-09 Thread Kosnik, Lubosz
Plugging the VIP worked without any problems.
The log is telling me that you have a very restrictive timeout configuration. 7 retries 
is a very low setting; please reconfigure this to a much bigger value.
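
One quick check (a standalone sketch, not Octavia code) is whether the amphora REST
API port is reachable from the controller at all; ping succeeding only proves ICMP,
not that the agent on 9443 is up:

import socket

AMPHORA_MGMT_IP = '192.168.0.7'   # from the o-cw.log above
AMPHORA_API_PORT = 9443

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((AMPHORA_MGMT_IP, AMPHORA_API_PORT))
    print('TCP connect succeeded; the amphora agent is listening on %d'
          % AMPHORA_API_PORT)
except socket.error as exc:
    print('TCP connect failed (%s); check the amphora agent and the security '
          'group rules on the lb-mgmt-net' % exc)
finally:
    sock.close()

For the retry/timeout tuning, the relevant options should live in the
[haproxy_amphora] section of octavia.conf (option names as in the Octavia
configuration reference; please verify the exact names and defaults for your
release):

[haproxy_amphora]
# Number of times the controller retries connecting to the amphora REST API.
connection_max_retries = 300
# Seconds to wait between connection attempts.
connection_retry_interval = 5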

Regards,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Dec 9, 2016, at 3:46 PM, Wanjing Xu (waxu) 
> wrote:

I have stable/mitaka Octavia, which has been running OK until today. Whenever I 
create a load balancer, the amphora VM is created with the mgmt NIC, but it looks like 
the VIP plug failed.  I can ping the amphora mgmt NIC from the controller (where the 
Octavia processes are running), but it looks like some REST API call into the amphora to 
plug in the VIP failed:

Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C


o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG 
octavia.controller.worker.tasks.network_tasks [-] Retrieving network details 
for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute 
/opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
(76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': 
}' 
_task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' 
(3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from 
state 'PENDING' _task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
……
ransitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING 
octavia.controller.worker.controller_worker [-] Flow 
'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' 
(f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from 
state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/octavia/octavia/controller/queue/endpoint.py", line 45, in 
create_load_balancer
2016-12-09 

[openstack-dev] [neutron-lbaas] [octavia]vip failed to be plugged in to the amphorae vm

2016-12-09 Thread Wanjing Xu (waxu)
I have stable/mitaka Octavia, which has been running OK until today. Whenever I 
create a load balancer, the amphora VM is created with the mgmt NIC, but it looks like 
the VIP plug failed.  I can ping the amphora mgmt NIC from the controller (where the 
Octavia processes are running), but it looks like some REST API call into the amphora to 
plug in the VIP failed:

Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C


o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG 
octavia.controller.worker.tasks.network_tasks [-] Retrieving network details 
for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute 
/opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
(76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': 
}' 
_task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker 
[-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' 
(3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from 
state 'PENDING' _task_receiver 
/usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url 
https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request 
/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
……
ransitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING 
octavia.controller.worker.controller_worker [-] Flow 
'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' 
(f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from 
state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/octavia/octavia/controller/queue/endpoint.py", line 45, in 
create_load_balancer
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
self.worker.create_load_balancer(load_balancer_id)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/octavia/octavia/controller/worker/controller_worker.py", line 322, 
in create_load_balancer
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher 
post_lb_amp_assoc.run()
2016-12-09 

[openstack-dev] [neutron-lbaas][octavia] New time proposal for weekly meeting

2016-12-08 Thread Kobi Samoray
Hi,
As some project members are based outside of the US, I’d like to propose a time 
change for the weekly meeting that will be more friendly to non-US-based members.
Please post your preferences/info in the etherpad below.

https://etherpad.openstack.org/p/octavia-weekly-meeting-time


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-22 Thread Yipei Niu
Hi Michael,

Thanks a lot for your help. I am trying your solution.

Best regards,
Yipei

On Sun, Nov 20, 2016 at 1:46 PM, Yipei Niu  wrote:

> Hi, Micheal,
>
> Thanks a lot for your comments.
>
> Please find the errors of o-cw.log in link http://paste.openstack.
> org/show/589806/ . Hope it will
> help.
>
> About the lb-mgmt-net, I just follow the guide of running LBaaS. If I
> create a ordinary subnet with neutron for the two VMs, will it prevent the
> issue you mentioned happening?
>
> Best regards,
> Yipei
>


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-21 Thread Michael Johnson
Hi Yipei,

That error means the controller worker process was not able to reach
the amphora REST API.

I am guessing this is the issue with diskimage-builder which we have
patches up for, but not all of them have merged yet [1][2].

Try running my script:
https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9 to
rebuild the image and boot another amphora.

Also, could you provide a link to the docs you used that booted the
web servers on the lb-mgmt-lan?  I want to make sure we update that
and clarify for future users.

Michael

[1] https://review.openstack.org/399272
[2] https://review.openstack.org/399276

On Sat, Nov 19, 2016 at 9:46 PM, Yipei Niu  wrote:
> Hi, Micheal,
>
> Thanks a lot for your comments.
>
> Please find the errors of o-cw.log in link
> http://paste.openstack.org/show/589806/. Hope it will help.
>
> About the lb-mgmt-net, I just follow the guide of running LBaaS. If I create
> a ordinary subnet with neutron for the two VMs, will it prevent the issue
> you mentioned happening?
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-19 Thread Yipei Niu
Hi Michael,

Thanks a lot for your comments.

Please find the errors of o-cw.log at
http://paste.openstack.org/show/589806/. Hope it will help.

About the lb-mgmt-net, I just followed the guide for running LBaaS. If I
create an ordinary subnet with neutron for the two VMs, will it prevent the
issue you mentioned from happening?

Best regards,
Yipei


Re: [openstack-dev] [neutron-lbaas][octavia] About playing Neutron LBaaS

2016-11-18 Thread Michael Johnson
Hi Yipei,

A note: you probably want to use the tags [neutron-lbaas] and
[octavia] instead of [tricircle] to catch the LBaaS team's attention.

Since you are using the octavia driver, can you please include a link
to your o-cw.log?  This will tell us why the load balancer create
failed.

Also, I see that your two servers are on the lb-mgmt-net; this may
cause some problems with the load balancer when you add them as
members.  The lb-mgmt-net is intended to be used only for
communication between the octavia controller processes and the octavia
amphorae (service VMs).  Since you didn't get as far as adding members
I'm sure this is not the root cause of the problem you are seeing.
The o-cw log will help us determine the root cause.
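
As a rough sketch of what a separate tenant network for the member servers could
look like (python-neutronclient; the credentials, names and CIDR below are only
example placeholders):

from neutronclient.v2_0 import client

# Placeholder credentials; adjust for your devstack.
neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# A dedicated network and subnet for the web servers, separate from lb-mgmt-net.
network = neutron.create_network(
    {'network': {'name': 'member-net'}})['network']
subnet = neutron.create_subnet(
    {'subnet': {'network_id': network['id'],
                'name': 'member-subnet',
                'ip_version': 4,
                'cidr': '10.0.10.0/24'}})['subnet']

print('Boot the web servers on network %s and add them as pool members '
      'from subnet %s' % (network['id'], subnet['id']))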

Michael


On Thu, Nov 17, 2016 at 11:48 PM, Yipei Niu  wrote:
> Hi, all,
>
> Recently I try to configure and play Neutron LBaaS in one OpenStack instance
> and have some trouble when creating a load balancer.
>
> I install devstack with neutron networking as well as LBaaS in one VM. The
> detailed configuration of local.conf is pasted in the link
> http://paste.openstack.org/show/589669/.
>
> Then I boot two VMs in the OpenStack instance, which can be reached via ping
> command from the host VM. The detailed information of the two VMs are listed
> in the following table.
>
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
> | ID                                   | Name    | Status | Task State | Power State | Networks                 |
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
> | 4cf7527b-05cc-49b7-84f9-3cc0f061be4f | server1 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6  |
> | bc7384a0-62aa-4987-89b6-8b98a6c467a9 | server2 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.12 |
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
>
> After building up the environment, I try to create a load balancer based on
> the guide in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun. When
> executing the command "neutron lbaas-loadbalancer-create --name lb1
> private-subnet", the state of the load balancer remains "PENDING_CREATE" and
> finally becomes "ERROR". I checked q-agt.log and q-svc.log, the detailed
> info is pasted in http://paste.openstack.org/show/589676/.
>
> Look forward to your valuable comments. Thanks a lot!
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-10 Thread Kosnik, Lubosz
Octavia is using its own DB and LBaaS v2 has its own. Because of that, like Michael 
said, we’re working on aligning these DBs and we’re planning to provide a migration 
mechanism.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC

On Nov 10, 2016, at 1:13 AM, Gary Kotton 
<gkot...@vmware.com<mailto:gkot...@vmware.com>> wrote:

Will the same DB be maintained or will the LBaaS DB be moved to that of 
Octavia. I am really concerned about this and I feel that it will cause 
production problems.

From: Kevin Benton <ke...@benton.pub<mailto:ke...@benton.pub>>
Reply-To: OpenStack List 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 9, 2016 at 11:43 PM
To: OpenStack List 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap

The people working on the migration are ensuring API compatibility and are even 
leaving in a shim on the Neutron side for some time so you don't even have to 
change endpoints initially. It should be a seamless change.

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
Just please don't make this a lbv3 thing that completely breaks compatibility 
of existing lb's yet again. If its just an "point url endpoint from thing like 
x to thing like y" in one place, thats ok. I still have v1 lb's in existence 
though I have to deal with and a backwards incompatible v3 would just cause me 
to abandon lbaas all together I think as it would show the lbaas stuff is just 
not maintainable.

Thanks,
Kevin

From: Armando M. [arma...@gmail.com<mailto:arma...@gmail.com>]
Sent: Wednesday, November 09, 2016 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap


On 9 November 2016 at 05:50, Gary Kotton 
<gkot...@vmware.com<mailto:gkot...@vmware.com>> wrote:
Hi,
What about neutron-lbaas project? Is this project still alive and kicking to 
the merge is done or are we going to continue to maintain it? I feel like we 
are between a rock and a hard place here. LBaaS is in production and it is not 
clear the migration process. Will Octavia have the same DB models as LBaaS or 
will there be a migration?
Sorry for the pessimism but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


On 11/8/16, 1:36 AM, "Michael Johnson" 
<johnso...@gmail.com<mailto:johnso...@gmail.com>> wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [1].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neturon-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors fram

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-10 Thread Michael Johnson
Hi Gary,

The LBaaS DB table contents will be moved into the Octavia database as
part of the migration process/tool.

Michael

On Wed, Nov 9, 2016 at 11:13 PM, Gary Kotton <gkot...@vmware.com> wrote:
> Will the same DB be maintained or will the LBaaS DB be moved to that of
> Octavia. I am really concerned about this and I feel that it will cause
> production problems.
>
>
>
> From: Kevin Benton <ke...@benton.pub>
> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> Date: Wednesday, November 9, 2016 at 11:43 PM
> To: OpenStack List <openstack-dev@lists.openstack.org>
>
>
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
> The people working on the migration are ensuring API compatibility and are
> even leaving in a shim on the Neutron side for some time so you don't even
> have to change endpoints initially. It should be a seamless change.
>
>
>
> On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>
> Just please don't make this a lbv3 thing that completely breaks
> compatibility of existing lb's yet again. If its just an "point url endpoint
> from thing like x to thing like y" in one place, thats ok. I still have v1
> lb's in existence though I have to deal with and a backwards incompatible v3
> would just cause me to abandon lbaas all together I think as it would show
> the lbaas stuff is just not maintainable.
>
> Thanks,
> Kevin
>
> 
>
> From: Armando M. [arma...@gmail.com]
> Sent: Wednesday, November 09, 2016 8:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
>
>
> On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
>
> Hi,
> What about neutron-lbaas project? Is this project still alive and kicking to
> the merge is done or are we going to continue to maintain it? I feel like we
> are between a rock and a hard place here. LBaaS is in production and it is
> not clear the migration process. Will Octavia have the same DB models as
> LBaaS or will there be a migration?
> Sorry for the pessimism but I feel that things are very unclear and that we
> cannot even indicate to our community/consumers what to use/expect.
> Thanks
> Gary
>
>
>
> http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>
>
>
>
> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>
> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [1].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neturon-lbaas), neutron api proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full impleme

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Gary Kotton
Will the same DB be maintained, or will the LBaaS DB be moved to that of 
Octavia? I am really concerned about this and I feel that it will cause 
production problems.

From: Kevin Benton <ke...@benton.pub>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, November 9, 2016 at 11:43 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap

The people working on the migration are ensuring API compatibility and are even 
leaving in a shim on the Neutron side for some time so you don't even have to 
change endpoints initially. It should be a seamless change.

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
Just please don't make this a lbv3 thing that completely breaks compatibility 
of existing lb's yet again. If its just an "point url endpoint from thing like 
x to thing like y" in one place, thats ok. I still have v1 lb's in existence 
though I have to deal with and a backwards incompatible v3 would just cause me 
to abandon lbaas all together I think as it would show the lbaas stuff is just 
not maintainable.

Thanks,
Kevin

From: Armando M. [arma...@gmail.com<mailto:arma...@gmail.com>]
Sent: Wednesday, November 09, 2016 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap


On 9 November 2016 at 05:50, Gary Kotton 
<gkot...@vmware.com<mailto:gkot...@vmware.com>> wrote:
Hi,
What about neutron-lbaas project? Is this project still alive and kicking to 
the merge is done or are we going to continue to maintain it? I feel like we 
are between a rock and a hard place here. LBaaS is in production and it is not 
clear the migration process. Will Octavia have the same DB models as LBaaS or 
will there be a migration?
Sorry for the pessimism but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


On 11/8/16, 1:36 AM, "Michael Johnson" 
<johnso...@gmail.com<mailto:johnso...@gmail.com>> wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [1].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neturon-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Michael Johnson
Kevin,

Yep, totally understand.

This is not a V3, it is simply moving the API from running under
neutron to running under the octavia API process.  It will still be
the LBaaSv2 API, just a new endpoint (though the old endpoint will
work for some time into the future).

Michael

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> Just please don't make this a lbv3 thing that completely breaks
> compatibility of existing lb's yet again. If its just an "point url endpoint
> from thing like x to thing like y" in one place, thats ok. I still have v1
> lb's in existence though I have to deal with and a backwards incompatible v3
> would just cause me to abandon lbaas all together I think as it would show
> the lbaas stuff is just not maintainable.
>
> Thanks,
> Kevin
> 
> From: Armando M. [arma...@gmail.com]
> Sent: Wednesday, November 09, 2016 8:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
> On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
>>
>> Hi,
>> What about neutron-lbaas project? Is this project still alive and kicking
>> to the merge is done or are we going to continue to maintain it? I feel like
>> we are between a rock and a hard place here. LBaaS is in production and it
>> is not clear the migration process. Will Octavia have the same DB models as
>> LBaaS or will there be a migration?
>> Sorry for the pessimism but I feel that things are very unclear and that
>> we cannot even indicate to our community/consumers what to use/expect.
>> Thanks
>> Gary
>
>
> http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>
>>
>>
>> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>>
>> Ocata LBaaS retrospective and next steps recap
>> --
>>
>> This session lightly touched on the work in the newton cycle, but
>> primarily focused on planning for the Ocata release and the LBaaS spin
>> out of neutron and merge into the octavia project [1].  Notes were
>> captured on the etherpad [1].
>>
>> The focus of work for Ocata in neutron-lbaas and octavia will be on
>> the spin out/merge and not new features.
>>
>> Work has started on merging neutron-lbaas into the octavia project
>> with API sorting/pagination, quota support, keystone integration,
>> neutron-lbaas driver shim, and documentation updates.  Work is still
>> needed for policy support, the API shim to handle capability gaps
>> (example: stats are by listener in octavia, but by load balancer in
>> neturon-lbaas), neutron api proxy, a database migration script from
>> the neutron database to the octavia database for existing non-octavia
>> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
>> the octavia API server.
>>
>> The room agreed that since we will have a shim/proxy in neutron for
>> some time, updating the OpenStack client can be deferred to a future
>> cycle.
>>
>> There is a lot of concern about Ocata being a short cycle and the
>> amount of work to be done.  There is hope that additional resources
>> will help out with this task to allow us to complete the spin
>> out/merge for Ocata.
>>
>> We discussed the current state of the active/active topology patches
>> and agreed that it is unlikely this will merge in Ocata.  There are a
>> lot of open comments and work to do on the patches.  It appears that
>> these patches may have been created against an old release and require
>> significant updating.
>>
>> Finally there was a question about when octavia would implement
>> metadata tags.  When we dug into the need for the tags we found that
>> what was really wanted is a full implementation of the flavors
>> framework [3] [4].  Some vendors expressed interest in finishing the
>> flavors framework for Octavia.
>>
>> Thank you to everyone that participated in our design session and
>> etherpad.
>>
>> Michael
>>
>> [1]
>> https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>> [2]
>> https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
>> [3]
>>

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Fox, Kevin M
Ok. cool. thanks. :)

Kevin

From: Kevin Benton [ke...@benton.pub]
Sent: Wednesday, November 09, 2016 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap

The people working on the migration are ensuring API compatibility and are even 
leaving in a shim on the Neutron side for some time so you don't even have to 
change endpoints initially. It should be a seamless change.

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
Just please don't make this a lbv3 thing that completely breaks compatibility 
of existing lb's yet again. If its just an "point url endpoint from thing like 
x to thing like y" in one place, thats ok. I still have v1 lb's in existence 
though I have to deal with and a backwards incompatible v3 would just cause me 
to abandon lbaas all together I think as it would show the lbaas stuff is just 
not maintainable.

Thanks,
Kevin

From: Armando M. [arma...@gmail.com<mailto:arma...@gmail.com>]
Sent: Wednesday, November 09, 2016 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap



On 9 November 2016 at 05:50, Gary Kotton 
<gkot...@vmware.com<mailto:gkot...@vmware.com>> wrote:
Hi,
What about neutron-lbaas project? Is this project still alive and kicking to 
the merge is done or are we going to continue to maintain it? I feel like we 
are between a rock and a hard place here. LBaaS is in production and it is not 
clear the migration process. Will Octavia have the same DB models as LBaaS or 
will there be a migration?
Sorry for the pessimism but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


On 11/8/16, 1:36 AM, "Michael Johnson" 
<johnso...@gmail.com<mailto:johnso...@gmail.com>> wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [1].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neturon-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Kevin Benton
The people working on the migration are ensuring API compatibility and are
even leaving in a shim on the Neutron side for some time so you don't even
have to change endpoints initially. It should be a seamless change.

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:

> Just please don't make this a lbv3 thing that completely breaks
> compatibility of existing lb's yet again. If its just an "point url
> endpoint from thing like x to thing like y" in one place, thats ok. I still
> have v1 lb's in existence though I have to deal with and a backwards
> incompatible v3 would just cause me to abandon lbaas all together I think
> as it would show the lbaas stuff is just not maintainable.
>
> Thanks,
> Kevin
> --
> *From:* Armando M. [arma...@gmail.com]
> *Sent:* Wednesday, November 09, 2016 8:05 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
> On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
>
>> Hi,
>> What about the neutron-lbaas project? Is this project still alive and kicking
>> until the merge is done, or are we going to continue to maintain it? I feel
>> like we are between a rock and a hard place here. LBaaS is in production
>> and the migration process is not clear. Will Octavia have the same DB
>> models as LBaaS, or will there be a migration?
>> Sorry for the pessimism but I feel that things are very unclear and that
>> we cannot even indicate to our community/consumers what to use/expect.
>> Thanks
>> Gary
>>
>
> http://specs.openstack.org/openstack/neutron-specs/specs/
> newton/kill-neutron-lbaas.html
>
>
>>
>> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>>
>> Ocata LBaaS retrospective and next steps recap
>> 
>> --
>>
>> This session lightly touched on the work in the newton cycle, but
>> primarily focused on planning for the Ocata release and the LBaaS spin
>> out of neutron and merge into the octavia project [1].  Notes were
>> captured on the etherpad [2].
>>
>> The focus of work for Ocata in neutron-lbaas and octavia will be on
>> the spin out/merge and not new features.
>>
>> Work has started on merging neutron-lbaas into the octavia project
>> with API sorting/pagination, quota support, keystone integration,
>> neutron-lbaas driver shim, and documentation updates.  Work is still
>> needed for policy support, the API shim to handle capability gaps
>> (example: stats are by listener in octavia, but by load balancer in
>> neturon-lbaas), neutron api proxy, a database migration script from
>> the neutron database to the octavia database for existing non-octavia
>> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
>> the octavia API server.
>>
>> The room agreed that since we will have a shim/proxy in neutron for
>> some time, updating the OpenStack client can be deferred to a future
>> cycle.
>>
>> There is a lot of concern about Ocata being a short cycle and the
>> amount of work to be done.  There is hope that additional resources
>> will help out with this task to allow us to complete the spin
>> out/merge for Ocata.
>>
>> We discussed the current state of the active/active topology patches
>> and agreed that it is unlikely this will merge in Ocata.  There are a
>> lot of open comments and work to do on the patches.  It appears that
>> these patches may have been created against an old release and require
>> significant updating.
>>
>> Finally there was a question about when octavia would implement
>> metadata tags.  When we dug into the need for the tags we found that
>> what was really wanted is a full implementation of the flavors
>> framework [3] [4].  Some vendors expressed interest in finishing the
>> flavors framework for Octavia.
>>
>> Thank you to everyone that participated in our design session and
>> etherpad.
>>
>> Michael
>>
>> [1] https://specs.openstack.org/openstack/neutron-specs/specs/ne
>> wton/kill-neutron-lbaas.html
>> [2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas
>> -session
>> [3] https://sp

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Fox, Kevin M
Just please don't make this an LBv3 thing that completely breaks compatibility
of existing LBs yet again. If it's just a "point the URL endpoint from thing like
x to thing like y" in one place, that's ok. I still have v1 LBs in existence
that I have to deal with, and a backwards incompatible v3 would just cause me
to abandon LBaaS altogether, I think, as it would show the LBaaS stuff is just
not maintainable.

Thanks,
Kevin

From: Armando M. [arma...@gmail.com]
Sent: Wednesday, November 09, 2016 8:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS 
retrospective and next steps recap



On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
Hi,
What about the neutron-lbaas project? Is this project still alive and kicking until
the merge is done, or are we going to continue to maintain it? I feel like we
are between a rock and a hard place here. LBaaS is in production and the
migration process is not clear. Will Octavia have the same DB models as LBaaS, or
will there be a migration?
Sorry for the pessimism but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


On 11/8/16, 1:36 AM, "Michael Johnson" 
<johnso...@gmail.com<mailto:johnso...@gmail.com>> wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [2].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Michael Johnson
Hi Gary,

Our intent is to merge neutron-lbaas into the Octavia project.  When
this is complete, the neutron-lbaas project will remain for some time
as a lightweight shim/proxy that provides the legacy neutron endpoint
experience.
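
As an illustration of what such a shim/proxy could look like, here is a minimal
sketch of a pass-through that relays legacy /v2.0/lbaas/* requests to an Octavia
API endpoint. This is illustrative only, not the actual neutron-lbaas shim; the
endpoint URL, port, and header handling are assumptions.

    # Hypothetical illustration only -- NOT the neutron-lbaas shim implementation.
    # Relays legacy LBaaS v2 API calls to an Octavia API endpoint unchanged.
    import requests
    from wsgiref.simple_server import make_server

    OCTAVIA_ENDPOINT = "http://octavia-api:9876"  # assumed deployment-specific URL


    def lbaas_proxy(environ, start_response):
        """Forward /v2.0/lbaas/* requests to Octavia and return its response."""
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length) if length else None
        resp = requests.request(
            method=environ["REQUEST_METHOD"],
            url=OCTAVIA_ENDPOINT + environ.get("PATH_INFO", ""),
            data=body,
            headers={
                "X-Auth-Token": environ.get("HTTP_X_AUTH_TOKEN", ""),
                "Content-Type": environ.get("CONTENT_TYPE", "application/json"),
            },
        )
        start_response("%d %s" % (resp.status_code, resp.reason),
                       [("Content-Type",
                         resp.headers.get("Content-Type", "application/json"))])
        return [resp.content]


    if __name__ == "__main__":
        # Listen on the neutron API port and hand everything to Octavia.
        make_server("0.0.0.0", 9696, lbaas_proxy).serve_forever()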

The database models are already very similar to the existing
neutron-lbaas models (by design) and we will finish aligning these as
part of the merge work.  For example, the names that were added to
some objects will be added in the octavia database as well.

We are also planning a migration from the neutron LBaaSv2 database to
the octavia database.  This should not impact existing running load
balancers.

Michael



On Wed, Nov 9, 2016 at 5:50 AM, Gary Kotton  wrote:
> Hi,
> What about the neutron-lbaas project? Is this project still alive and kicking until
> the merge is done, or are we going to continue to maintain it? I feel like we
> are between a rock and a hard place here. LBaaS is in production and the
> migration process is not clear. Will Octavia have the same DB models as
> LBaaS, or will there be a migration?
> Sorry for the pessimism but I feel that things are very unclear and that we 
> cannot even indicate to our community/consumers what to use/expect.
> Thanks
> Gary
>
> On 11/8/16, 1:36 AM, "Michael Johnson"  wrote:
>
> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [2].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neutron-lbaas), neutron api proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full implementation of the flavors
> framework [3] [4].  Some vendors expressed interest in finishing the
> flavors framework for Octavia.
>
> Thank you to everyone that participated in our design session and 
> etherpad.
>
> Michael
>
> [1] 
> https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
> [2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
> [3] 
> https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
> [4] 
> https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Armando M.
On 9 November 2016 at 05:50, Gary Kotton  wrote:

> Hi,
> What about the neutron-lbaas project? Is this project still alive and kicking
> until the merge is done, or are we going to continue to maintain it? I feel
> like we are between a rock and a hard place here. LBaaS is in production
> and the migration process is not clear. Will Octavia have the same DB
> models as LBaaS, or will there be a migration?
> Sorry for the pessimism but I feel that things are very unclear and that
> we cannot even indicate to our community/consumers what to use/expect.
> Thanks
> Gary
>

http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html


>
> On 11/8/16, 1:36 AM, "Michael Johnson"  wrote:
>
> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [2].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neutron-lbaas), neutron api proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full implementation of the flavors
> framework [3] [4].  Some vendors expressed interest in finishing the
> flavors framework for Octavia.
>
> Thank you to everyone that participated in our design session and
> etherpad.
>
> Michael
>
> [1] https://specs.openstack.org/openstack/neutron-specs/specs/
> newton/kill-neutron-lbaas.html
> [2] https://etherpad.openstack.org/p/ocata-neutron-octavia-
> lbaas-session
> [3] https://specs.openstack.org/openstack/neutron-specs/specs/
> mitaka/neutron-flavor-framework-templates.html
> [4] https://specs.openstack.org/openstack/neutron-specs/specs/
> liberty/neutron-flavor-framework.html
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Gary Kotton
Hi,
What about the neutron-lbaas project? Is this project still alive and kicking until
the merge is done, or are we going to continue to maintain it? I feel like we
are between a rock and a hard place here. LBaaS is in production and the
migration process is not clear. Will Octavia have the same DB models as LBaaS, or
will there be a migration?
Sorry for the pessimism but I feel that things are very unclear and that we 
cannot even indicate to our community/consumers what to use/expect.
Thanks
Gary

On 11/8/16, 1:36 AM, "Michael Johnson"  wrote:

Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [2].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-08 Thread Lingxian Kong
thanks very much for the update!


Cheers,
Lingxian Kong (Larry)

On Tue, Nov 8, 2016 at 12:36 PM, Michael Johnson 
wrote:

> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [2].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neutron-lbaas), neutron api proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full implementation of the flavors
> framework [3] [4].  Some vendors expressed interest in finishing the
> flavors framework for Octavia.
>
> Thank you to everyone that participated in our design session and etherpad.
>
> Michael
>
> [1] https://specs.openstack.org/openstack/neutron-specs/specs/
> newton/kill-neutron-lbaas.html
> [2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
> [3] https://specs.openstack.org/openstack/neutron-specs/specs/
> mitaka/neutron-flavor-framework-templates.html
> [4] https://specs.openstack.org/openstack/neutron-specs/specs/
> liberty/neutron-flavor-framework.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-07 Thread Michael Johnson
Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [2].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron api proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.
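
To make the listener-versus-load-balancer stats gap concrete, a shim could sum
Octavia's per-listener counters into a single per-load-balancer result before
handing it back through the neutron-lbaas API. A minimal sketch follows; the
counter names mirror the usual stats keys but are assumptions, and this is not
the actual shim code.

    # Hypothetical sketch of bridging the stats capability gap -- not project code.
    # Octavia reports stats per listener; neutron-lbaas expects them per load
    # balancer, so a shim could sum the listener counters into one total.

    LB_STATS_KEYS = ("bytes_in", "bytes_out", "active_connections", "total_connections")


    def aggregate_listener_stats(listener_stats):
        """Collapse a list of per-listener stats dicts into load balancer totals."""
        totals = {key: 0 for key in LB_STATS_KEYS}
        for stats in listener_stats:
            for key in LB_STATS_KEYS:
                totals[key] += stats.get(key, 0)
        return totals


    # Example: two listeners on the same load balancer.
    print(aggregate_listener_stats([
        {"bytes_in": 1200, "bytes_out": 5400, "active_connections": 3, "total_connections": 40},
        {"bytes_in": 800, "bytes_out": 2100, "active_connections": 1, "total_connections": 12},
    ]))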

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely this will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia]In amphora plugin_vip(), why cidr and gateway are required but not used?

2016-06-21 Thread Jiahao Liang
Thank you for the info, Lubosz.

On Fri, Jun 17, 2016 at 5:01 PM, Kosnik, Lubosz 
wrote:

> Here is a bug for that - https://bugs.launchpad.net/octavia/+bug/1585804
> You’re more than welcome to fix this issue.
>
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
> On Jun 17, 2016, at 6:37 PM, Jiahao Liang 
> wrote:
>
> Added more related topics to the original email.
>
> -- Forwarded message --
> From: Jiahao Liang (Frankie) 
> Date: Fri, Jun 17, 2016 at 4:30 PM
> Subject: [openstack-dev][Octavia]In amphora plugin_vip(), why cidr and
> gateway are required but not used?
> To: openstack-dev@lists.openstack.org
>
>
> Hi community,
>
> I am going over the Octavia amphora backend code. There is one thing that
> really confuses me. In
> https://github.com/openstack/octavia/blob/stable/mitaka/octavia/amphorae/backends/agent/api_server/plug.py#L45,
> the plug_vip() method doesn't use the cidr and gateway from the REST request.
> But in the haproxy amphora API, those two fields are required values (an
> assert is performed on the server).
>
> What are the design considerations for this API? Could we safely remove
> these two values to avoid ambiguity?
>
> Thank you,
> Jiahao Liang
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia]In amphora plugin_vip(), why cidr and gateway are required but not used?

2016-06-17 Thread Kosnik, Lubosz
Here is a bug for that - https://bugs.launchpad.net/octavia/+bug/1585804
You’re more than welcome to fix this issue.

Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Jun 17, 2016, at 6:37 PM, Jiahao Liang 
> wrote:

Added more related topics to the original email.

-- Forwarded message --
From: Jiahao Liang (Frankie) 
>
Date: Fri, Jun 17, 2016 at 4:30 PM
Subject: [openstack-dev][Octavia]In amphora plugin_vip(), why cidr and gateway 
are required but not used?
To: openstack-dev@lists.openstack.org


Hi community,

I am going over the Octavia amphora backend code. There is one thing that really
confuses me. In
https://github.com/openstack/octavia/blob/stable/mitaka/octavia/amphorae/backends/agent/api_server/plug.py#L45,
the plug_vip() method doesn't use the cidr and gateway from the REST request. But
in the haproxy amphora API, those two fields are required values (an
assert is performed on the server).

What are the design considerations for this API? Could we safely remove these
two values to avoid ambiguity?

Thank you,
Jiahao Liang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia]In amphora plugin_vip(), why cidr and gateway are required but not used?

2016-06-17 Thread Jiahao Liang
Added more related topics to the original email.

-- Forwarded message --
From: Jiahao Liang (Frankie) 
Date: Fri, Jun 17, 2016 at 4:30 PM
Subject: [openstack-dev][Octavia]In amphora plugin_vip(), why cidr and
gateway are required but not used?
To: openstack-dev@lists.openstack.org


Hi community,

I am going over the Octavia amphora backend code. There is one thing that really
confuses me. In
https://github.com/openstack/octavia/blob/stable/mitaka/octavia/amphorae/backends/agent/api_server/plug.py#L45,
the plug_vip() method doesn't use the cidr and gateway from the REST request.
But in the haproxy amphora API, those two fields are required values (an
assert is performed on the server).

What are the design considerations for this API? Could we safely remove
these two values to avoid ambiguity?

Thank you,
Jiahao Liang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
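
To make the ambiguity discussed in this thread concrete, here is a minimal,
purely illustrative sketch of the pattern being questioned: a REST handler that
asserts on fields it never uses. It is not the amphora agent code, and the route
and field names are assumptions for illustration only.

    # Illustrative sketch of "required but unused" request fields -- not Octavia code.
    from flask import Flask, jsonify, request

    app = Flask(__name__)


    @app.route("/plug/vip/<vip>", methods=["POST"])
    def plug_vip(vip):
        body = request.get_json(force=True) or {}
        # The request schema insists that these fields are present...
        assert "subnet_cidr" in body and "gateway" in body
        # ...but the handler only ever acts on the VIP address itself, which is
        # exactly the mismatch raised in this thread.
        return jsonify({"message": "VIP %s plugged" % vip}), 202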


Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Armando M.
On 3 March 2016 at 18:35, Stephen Balukoff  wrote:

> Hi Armando,
>
> Please rest assured that I really am a fan of requiring. I realize that
> sarcasm doesn't translate to text, so you'll have to trust me when I say
> that I am not being sarcastic by saying that.
>
> However, I am not a fan of being given nebulous requirements and then
> being accused of laziness or neglect when I ask for help. More on that
> later.
>

> Also, the intent of my e-mail wasn't to call you out and I think you are
> right to require that new features be documented. I would do the same in
> your position.
>
> To start off, my humble suggestion would be to be kind and provide a TL;DR
>> before going in such depth, otherwise there's a danger of missing the
>> opportunity to reach out the right audience. I read this far because I felt
>> I was called into question (after all I am the one 'imposing' the
>> documentation requirement on the features we are delivering in Mitaka)!
>>
>
>> That said, If you are a seasoned LBaaS developer and you don't know where
>> LBaaS doc is located or how to contribute, that tells me that LBaaS docs
>> are chronically neglected, and the links below are a proof of my fear.
>>
>
> Yes, obviously. I am not interested in shoveling out blame. I'm interested
> in solutions to the problem.
>
> Also, how is telling us "wow, your documentation sucks" in any way helpful
> in an e-mail thread where I'm asking, essentially, "How do I go about
> fixing the documentation?"  If nothing else, it should provide evidence
> that there is a problem (which I am trying to point out in this e-mail
> thread!)
>
>> In a nutshell, this is rather disastrous. Other Neutron developers
>> already contribute successfully to user docs [5]. Most of it is already
>> converted to rst and the tools you're familiar with are the ones used to
>> produce content (like specs).
>>
>
> Really? Which tools? tox? Are there templates somewhere? (I know there are
> spec templates... but what about openstack manual templates?)  If there are
> templates, where are they? Also, evidence that others are making
> contributions to the manual is not necessarily evidence that they're doing
> it correctly or consistently.
>
> You're referring to what is essentially tribal knowledge. This is not a
> good way to proceed if you want things to be consistent and done the best
> way.
>
> I have been doing this for a while (obviously not as long as some), and
> I've seen it done in many different ways in different projects. Where are
> the usable best practices guides?
>
>
>> My suggestion would be to forge your own path, and identify a place in
>> the networking-guide where to locate some relevant content that describe
>> LBaaS/Octavia: deployment architecture, features, etc. This is a long
>> journey, that requires help from all parties, but I believe that the
>> initiative needs to be driven from the LBaaS team as they are the custodian
>> of the knowledge.
>>
>>
> Again, the "figure it out" approach means you are going to get
> inconsistent results (like the current poor documentation that you linked).
> What I'm asking for in this e-mail is a guide on how it *should* be done
> that is consistent across OpenStack, that is actually consumable without
> having to read the whole of the OpenStack manual cover-to-cover. This needs
> to not be tribal knowledge if we are going to hold people accountable for
> not complying with an unwritten standard.
>
> You're not seeing a lack of initiative. Heck, the Neutron-LBaaS and
> Octavia projects have some of the most productive people working on them
> that I've seen anywhere. You're seeing lack of meaningful guidance, and
> lack of standards.
>
> I fear that if none of the LBaaS core members steps up and figure this
>> out, LBaaS will continue to be something everyone would like to adopt but
>> no-one knows how to, at least not by tapping directly at the open source
>> faucet.
>>
>
> Exactly what I fear as well. Please note that it is offensive to accuse a
> team of not stepping up when what I am doing in this very e-mail should be
> pretty good evidence that we are trying to step up.
>

There's no reason to be offended.

Rest assured that I have no interest in laying blame on anyone; that's not
how one gets positive results. I commend your desire to see this done
consistently, and I agree that we lack exhaustive documentation on how to produce
documentation! I was simply expressing the fear that the lack of guidance
and standards you point out may end up deterring people from covering an
area (LBaaS documentation) that is in desperate need of attention, today.
At the risk of leading to the same ill effects, I'd rather have inconsistent
documentation than no documentation at all, but that's just my opinion, with
which you don't have to agree.


>
> Stephen
>
> --
> Stephen Balukoff
> Principal Technologist
> Blue Box, An IBM Company
> www.blueboxcloud.com
> sbaluk...@blueboxcloud.com
> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Stephen Balukoff
Hi Armando,

Please rest assured that I really am a fan of requiring. I realize that
sarcasm doesn't translate to text, so you'll have to trust me when I say
that I am not being sarcastic by saying that.

However, I am not a fan of being given nebulous requirements and then being
accused of laziness or neglect when I ask for help. More on that later.

Also, the intent of my e-mail wasn't to call you out and I think you are
right to require that new features be documented. I would do the same in
your position.

To start off, my humble suggestion would be to be kind and provide a TL;DR
> before going in such depth, otherwise there's a danger of missing the
> opportunity to reach out the right audience. I read this far because I felt
> I was called into question (after all I am the one 'imposing' the
> documentation requirement on the features we are delivering in Mitaka)!
>

> That said, If you are a seasoned LBaaS developer and you don't know where
> LBaaS doc is located or how to contribute, that tells me that LBaaS docs
> are chronically neglected, and the links below are a proof of my fear.
>

Yes, obviously. I am not interested in shoveling out blame. I'm interested
in solutions to the problem.

Also, how is telling us "wow, your documentation sucks" in any way helpful
in an e-mail thread where I'm asking, essentially, "How do I go about
fixing the documentation?"  If nothing else, it should provide evidence
that there is a problem (which I am trying to point out in this e-mail
thread!)

In a nutshell, this is rather disastrous. Other Neutron developers
> already contribute successfully to user docs [5]. Most of it is already
> converted to rst and the tools you're familiar with are the ones used to
> produce content (like specs).
>

Really? Which tools? tox? Are there templates somewhere? (I know there are
spec templates... but what about openstack manual templates?)  If there are
templates, where are they? Also, evidence that others are making
contributions to the manual is not necessarily evidence that they're doing
it correctly or consistently.

You're referring to what is essentially tribal knowledge. This is not a
good way to proceed if you want things to be consistent and done the best
way.

I have been doing this for a while (obviously not as long as some), and
I've seen it done in many different ways in different projects. Where are
the usable best practices guides?


> My suggestion would be to forge your own path, and identify a place in the
> networking-guide where to locate some relevant content that describe
> LBaaS/Octavia: deployment architecture, features, etc. This is a long
> journey, that requires help from all parties, but I believe that the
> initiative needs to be driven from the LBaaS team as they are the custodian
> of the knowledge.
>
>
Again, the "figure it out" approach means you are going to get inconsistent
results (like the current poor documentation that you linked). What I'm
asking for in this e-mail is a guide on how it *should* be done that is
consistent across OpenStack, that is actually consumable without having to
read the whole of the OpenStack manual cover-to-cover. This needs to not be
tribal knowledge if we are going to hold people accountable for not
complying with an unwritten standard.

You're not seeing a lack of initiative. Heck, the Neutron-LBaaS and Octavia
projects have some of the most productive people working on them that I've
seen anywhere. You're seeing lack of meaningful guidance, and lack of
standards.

I fear that if none of the LBaaS core members steps up and figure this out,
> LBaaS will continue to be something everyone would like to adopt but no-one
> knows how to, at least not by tapping directly at the open source faucet.
>

Exactly what I fear as well. Please note that it is offensive to accuse a
team of not stepping up when what I am doing in this very e-mail should be
pretty good evidence that we are trying to step up.

Stephen

-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Armando M.
On 3 March 2016 at 16:56, Stephen Balukoff  wrote:

> Hello!
>
> I have a problem I'm hoping someone can help with: I have gone through the
> task of completing a shiny new feature for an openstack project, and now
> I'm trying to figure out how to get that last all-important documentation
> step done so that people will know about this new feature and use it. But
> I'm having no luck figuring out how I actually go about doing this...
>
> This started when I was told that in order to consider the feature
> "complete," I needed to make sure that it was documented in the openstack
> official documentation. I wholeheartedly agree with this: If it's not
> documented, very few people will know about it, let alone use it. And few
> things make an open-source contributor more sad than the idea that the work
> they've spent months or years completing isn't getting used.
>
> So... No problem! I'm an experienced OpenStack developer, and I just spent
> months getting this major new feature through my project's gauntlet of an
> approval process. How hard could documenting it be, right?
>
> So in the intervening days I've been going through the openstack-manuals,
> openstack-doc-tools, and other repositories, trying to figure out where I
> make my edits. I found both the CLI and API reference in the
> openstack-manuals repository... but when I went to edit these files, I
> noticed that there's a comment at the top stating they are auto-generated
> and shouldn't be edited? It seemed odd to me that the results of something
> auto-generated should be checked into a git repository instead of the
> configuration which creates the auto-generated output... but it's not my
> project, right?
>
> Anyway, so then I went to try to figure out how I get this auto-generated
> output updated, and haven't found much (ha!) documented on the process...
> when I sought help from Sam-I-Am, I was told that these essentially get
> generated once per release by "somebody." So...  I'm done, right?
>
> Well... I'm not so sure. Yes, if the CLI and API documentation gets
> auto-generated from the right sources, we should be good to go on that
> front, but how can I be sure the automated process is pulling this
> information from the right place? Shouldn't there be some kind of
> continuous integration or jenkins check which tests this that I can look
> at? (And if such a thing exists, how am I supposed to find out about it?)
>
> Also, the new feature I've added is somewhat involved, and it could
> probably use another document describing its intended use beyond the CLI /
> API ref. Heck, we already created one in the OpenStack wiki... but I'm also
> being told that we're trying to not rely on the wiki as much, per se, and
> that anything in the wiki really ought to be moved into the "official"
> documentation canon.
>
> So I'm at a loss. I'm a big fan of documentation as a communication
> tool, and I'm an experienced OpenStack developer, but when I look in the
> manual for how to contribute to the OpenStack documentation, I find a guide
> that wants to walk me through setting up gerrit... and very little targeted
> toward someone who already knows that, but just needs to know the actual
> process for updating the manual (and which part of the manual should be
> updated).
>
> When I went back to Sam-I-Am about this, this spawned a much larger
> discussion and he suggested I bring this up on the mailing list because
> there might be some "big picture" issues at play that should get a larger
> discussion. So... here I am.
>
> Here's what I think the problem is:
>
> * We want developers to document the features they add or modify
> * We want developers to provide good user, operator, etc. documentation
> that actual users, operators, etc. can use to understand and use the
> software we're writing.
> * We even go so far as to say that a feature is not complete unless it has
> this documentation (which I agree with)
>

If you agree with this, why do you bring it up twice? :)


> * With a rather small openstack-docs contributor team, we want to automate
> as much as possible, and rely on the docs team to *edit* documentation
> written by developers instead of writing the docs themselves (which is more
> time consuming for the docs team to do, and may miss important things only
> the developers know about.)
>
> But:
>
> * We don't actually provide much help to the developers to know how to do
> this. We have plenty for people who are new to OpenStack to get started
> with gerrit--  but there doesn't seem to be much practical help on where to
> get started, as an experienced contributor to other projects, on the actual
> task of updating the manual.
>
> And I would wager:
>
> * We don't seem to have many automated tools that tie into the jenkins
> gate checks to make sure that new features are properly documented.
> * We need something better than the 'APIImpact' and 'DocImpact' flags you
> can add to a commit message which 

[openstack-dev] [Neutron][LBaaS][Octavia][Docs] Need experienced contributor documentation best-practices and how-tos

2016-03-03 Thread Stephen Balukoff
Hello!

I have a problem I'm hoping someone can help with: I have gone through the
task of completing a shiny new feature for an openstack project, and now
I'm trying to figure out how to get that last all-important documentation
step done so that people will know about this new feature and use it. But
I'm having no luck figuring out how I actually go about doing this...

This started when I was told that in order to consider the feature
"complete," I needed to make sure that it was documented in the openstack
official documentation. I wholeheartedly agree with this: If it's not
documented, very few people will know about it, let alone use it. And few
things make an open-source contributor more sad than the idea that the work
they've spent months or years completing isn't getting used.

So... No problem! I'm an experienced OpenStack developer, and I just spent
months getting this major new feature through my project's gauntlet of an
approval process. How hard could documenting it be, right?

So in the intervening days I've been going through the openstack-manuals,
openstack-doc-tools, and other repositories, trying to figure out where I
make my edits. I found both the CLI and API reference in the
openstack-manuals repository... but when I went to edit these files, I
noticed that there's a comment at the top stating they are auto-generated
and shouldn't be edited? It seemed odd to me that the results of something
auto-generated should be checked into a git repository instead of the
configuration which creates the auto-generated output... but it's not my
project, right?

Anyway, so then I went to try to figure out how I get this auto-generated
output updated, and haven't found much (ha!) documented on the process...
when I sought help from Sam-I-Am, I was told that these essentially get
generated once per release by "somebody." So...  I'm done, right?

Well... I'm not so sure. Yes, if the CLI and API documentation gets
auto-generated from the right sources, we should be good to go on that
front, but how can I be sure the automated process is pulling this
information from the right place? Shouldn't there be some kind of
continuous integration or jenkins check which tests this that I can look
at? (And if such a thing exists, how am I supposed to find out about it?)

Also, the new feature I've added is somewhat involved, and it could
probably use another document describing its intended use beyond the CLI /
API ref. Heck, we already created one in the OpenStack wiki... but I'm also
being told that we're trying to not rely on the wiki as much, per se, and
that anything in the wiki really ought to be moved into the "official"
documentation canon.

So I'm at a loss. I'm a big fan of documentation as a communication
tool, and I'm an experienced OpenStack developer, but when I look in the
manual for how to contribute to the OpenStack documentation, I find a guide
that wants to walk me through setting up gerrit... and very little targeted
toward someone who already knows that, but just needs to know the actual
process for updating the manual (and which part of the manual should be
updated).

When I went back to Sam-I-Am about this, this spawned a much larger
discussion and he suggested I bring this up on the mailing list because
there might be some "big picture" issues at play that should get a larger
discussion. So... here I am.

Here's what I think the problem is:

* We want developers to document the features they add or modify
* We want developers to provide good user, operator, etc. documentation
that actual users, operators, etc. can use to understand and use the
software we're writing.
* We even go so far as to say that a feature is not complete unless it has
this documentation (which I agree with)
* With a rather small openstack-docs contributor team, we want to automate
as much as possible, and rely on the docs team to *edit* documentation
written by developers instead of writing the docs themselves (which is more
time consuming for the docs team to do, and may miss important things only
the developers know about.)

But:

* We don't actually provide much help to the developers to know how to do
this. We have plenty for people who are new to OpenStack to get started
with gerrit--  but there doesn't seem to be much practical help on where to
get started, as an experienced contributor to other projects, on the actual
task of updating the manual.

And I would wager:

* We don't seem to have many automated tools that tie into the jenkins gate
checks to make sure that new features are properly documented.
* We need something better than the 'APIImpact' and 'DocImpact' flags you
can add to a commit message, which generate docs project bug reports. These
are post-hoc back-filling at best and, as I understand it, often mean that
some poor schmuck on the docs team will probably be the one who ends up
writing the docs for the feature the developer added, probably without the
developer's help.

Please understand: I 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Using nova interface extension instead of networks extension

2016-01-30 Thread Brandon Logan
Yeah our public cloud does not support that call.  We actually have a
different endpoint that is almost just like the os-interfaces one!
Except the openstack nova client doesn't know about it, of course.  If
for the time being we can temporarily support the os-networks way as a
fall back method if the os-interfaces one fails, then I think that'd be
best.

Thanks,
Brandon
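
A rough sketch of the fallback Brandon describes: try the os-interface call
first and only fall back to a deployment-specific lookup if it is unavailable.
The novaclient interface_list() call is standard; the fallback helper is purely
hypothetical and would depend on the cloud in question.

    # Sketch of the suggested fallback -- the fallback helper is an assumption.
    from novaclient import client as nova_client
    from novaclient import exceptions as nova_exc


    def get_server_ports(session, server_id):
        """Prefer the os-interface extension; fall back if the cloud lacks it."""
        nova = nova_client.Client("2", session=session)
        try:
            # Standard interface attachments call (os-interface extension).
            return [iface.port_id for iface in nova.servers.interface_list(server_id)]
        except nova_exc.ClientException:
            # Hypothetical fallback: whatever non-standard endpoint the cloud
            # exposes instead (e.g. an os-networks style lookup).
            return legacy_network_lookup(nova, server_id)


    def legacy_network_lookup(nova, server_id):
        # Placeholder only: details depend on the deployment-specific endpoint.
        raise NotImplementedError("deployment-specific fallback goes here")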

On Fri, 2016-01-29 at 23:37 +, Eichberger, German wrote:
> All,
> 
> In a recent patch [1] Bharath and I proposed to replace the call to the nova
> os-networks extension with a call to the nova-interface extension. Apparently
> os-networks is geared towards nova networks and, us being neutron, I see no
> reason to continue to support it. I have taken to the ML to gather feedback
> on whether there are cloud operators that don't have (or won't enable) the nova
> interface extension and need us to support os-networks in Mitaka and beyond.
> 
> Thanks,
> German
> 
> [1] https://review.openstack.org/#/c/273733/4
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Using nova interface extension instead of networks extension

2016-01-29 Thread Eichberger, German
All,

In a recent patch [1] Bharath and I proposed to replace the call to the nova
os-networks extension with a call to the nova-interface extension. Apparently
os-networks is geared towards nova networks and, us being neutron, I see no
reason to continue to support it. I have taken to the ML to gather feedback on
whether there are cloud operators that don't have (or won't enable) the nova
interface extension and need us to support os-networks in Mitaka and beyond.

Thanks,
German

[1] https://review.openstack.org/#/c/273733/4
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-27 Thread Brandon Logan
I could see it being interesting, but that would have to be something
vetted by other drivers and appliances because they may not support
that.

On Mon, 2016-01-25 at 21:37 +, Fox, Kevin M wrote:
> We are using a neutron v1 LB that has members external to the cloud, in an LB
> used by a particular tenant in production. It is working well. Hoping to do
> the same thing once we get to Octavia+LBaaSv2.
> 
> Being able to tweak the routes of the load balancer would be an interesting 
> feature, though I don't think I'd ever need to. Maybe that should be an 
> extension? I'm guessing a lot of lb plugins won't be able to support it at 
> all.
> 
> Thanks,
> Kevin
> 
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Monday, January 25, 2016 1:03 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> Any additional thoughts and opinions people want to share on this.  I
> don't have a horse in this race as long as we don't make dangerous
> assumptions about what the user wants.  So I am fine with making
> subnet_id optional.
> 
> Michael, how strong would your opposition for this be?
> 
> Thanks,
> Brandon
> 
> On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> > Michael-- I think you're assuming that adding an external subnet ID
> > means that the load balancing service will route requests out an
> > interface with a route to said external subnet. However, the model we
> > have is actually too simple to convey this information to the load
> > balancing service. This is because while we know the member's IP and a
> > subnet to which the load balancing service should connect to
> > theoretically talk to said IP, we don't have any kind of actual
> > routing information for the IP address (like, say a default route for
> > the subnet).
> >
> >
> > Consider this not far-fetched example: Suppose a tenant wants to add a
> > back-end member which is reachable only over a VPN, the gateway for
> > which lives on a tenant internal subnet. If we had a more feature-rich
> > model to work with here, the tenant could specify the member IP, the
> > subnet containing the VPN gateway and the gateway's IP address. In
> > theory the load balancing service could add local routing rules to
> > make sure that communication to that member happens on the tenant
> > subnet and gets routed to the VPN gateway.
> >
> >
> > If we want to support this use case, then we'd probably need to add an
> > optional gateway IP parameter to the member object. (And I'd still be
> > in favor of assuming the subnet_id on the member is optional, and that
> > default routing should be used if not specified.)
> >
> >
> > Let me see if I can break down several use cases we could support with
> > this model. Let's assume the member model contains (among other
> > things) the following attributes:
> >
> >
> > ip_address (member IP, required)
> > subnet_id (member or gateway subnet, optional)
> > gateway_ip (VPN or other layer-3 gateway that should be used to access
> > the member_ip. optional)
> >
> >
> > Expected behaviors:
> >
> >
> > Scenario 1:
> > ip_address specified, subnet_id and gateway_ip are None:  Load
> > balancing service assumes member IP address is reachable through
> > default routing. Appropriate for members that are not part of the
> > local cloud that are accessible from the internet.
> >
> >
> >
> > Scenario 2:
> > ip_address and subnet_id specified, gateway_ip is None: Load balancing
> > service assumes it needs an interface on subnet_id to talk directly to
> > the member IP address. Appropriate for members that live on tenant
> > networks. member_ip should exist within the subnet specified by
> > subnet_id. This is the only scenario supported under the current model
> > if we make subnet_id a required field and don't add a gateway_ip.
> >
> >
> > Scenario 3:
> > ip_address, subnet_id and gateway_ip are all specified:  Load
> > balancing service assumes it needs an interface on subnet_id to talk
> > to the gateway_ip. Load balancing service should add local routing
> > rule (ie. to the host and / or local network namespace context of the
> > load balancing service itself, not necessarily to Neutron or anything)
> > to route any packets destined for member_ip to the gateway_ip.
> > gateway_ip should exist within the subnet specified by subnet_id.
> > Appropriate for members that are on the other

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-27 Thread Samuel Bercovici
If we take the approach of "download the configuration for all v1 out of OpenStack, 
delete all v1 configuration and then, after lbaas v1 is removed and lbaas v2 is 
installed, use the data to recreate the items", this should be compatible with all 
drivers.
Not sure if such a procedure will be accepted, though.
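
For what it's worth, the export half of such a procedure could be scripted roughly as
below (a sketch only, using python-neutronclient's v1 LBaaS list calls; the credentials
are placeholders and the recreate step is left out):

import json
from neutronclient.v2_0 import client

# Placeholder credentials -- use whatever auth your deployment actually requires.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

dump = {
    'vips': neutron.list_vips()['vips'],
    'pools': neutron.list_pools()['pools'],
    'members': neutron.list_members()['members'],
    'health_monitors': neutron.list_health_monitors()['health_monitors'],
}

with open('lbaas_v1_dump.json', 'w') as f:
    json.dump(dump, f, indent=2)

# After LBaaS v1 is removed and v2 is installed, walk this file and recreate
# the equivalent v2 objects (loadbalancer, listener, pool, members, monitor).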


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Thursday, January 28, 2016 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

I could see it being interesting, but that would have to be something vetted by 
the other driver and appliance vendors, because they may not support it.

On Mon, 2016-01-25 at 21:37 +, Fox, Kevin M wrote:
> We are using a neutron v1 LB that has members external to the cloud, in an LB 
> used by a particular tenant in production. It is working well. Hoping to do 
> the same thing once we get to Octavia+LBaaSv2.
> 
> Being able to tweak the routes of the load balancer would be an interesting 
> feature, though I don't think I'd ever need to. Maybe that should be an 
> extension? I'm guessing a lot of lb plugins won't be able to support it at 
> all.
> 
> Thanks,
> Kevin
> 
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Monday, January 25, 2016 1:03 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> Any additional thoughts and opinions people want to share on this?  I 
> don't have a horse in this race as long as we don't make dangerous 
> assumptions about what the user wants.  So I am fine with making 
> subnet_id optional.
> 
> Michael, how strong would your opposition for this be?
> 
> Thanks,
> Brandon
> 
> On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> > Michael-- I think you're assuming that adding an external subnet ID 
> > means that the load balancing service will route requests out an 
> > interface with a route to said external subnet. However, the model 
> > we have is actually too simple to convey this information to the 
> > load balancing service. This is because while we know the member's 
> > IP and a subnet to which the load balancing service should connect 
> > to theoretically talk to said IP, we don't have any kind of actual 
> > routing information for the IP address (like, say a default route 
> > for the subnet).
> >
> >
> > Consider this not far-fetched example: Suppose a tenant wants to add 
> > a back-end member which is reachable only over a VPN, the gateway 
> > for which lives on a tenant internal subnet. If we had a more 
> > feature-rich model to work with here, the tenant could specify the 
> > member IP, the subnet containing the VPN gateway and the gateway's 
> > IP address. In theory the load balancing service could add local 
> > routing rules to make sure that communication to that member happens 
> > on the tenant subnet and gets routed to the VPN gateway.
> >
> >
> > If we want to support this use case, then we'd probably need to add 
> > an optional gateway IP parameter to the member object. (And I'd 
> > still be in favor of assuming the subnet_id on the member is 
> > optional, and that default routing should be used if not specified.)
> >
> >
> > Let me see if I can break down several use cases we could support 
> > with this model. Let's assume the member model contains (among other
> > things) the following attributes:
> >
> >
> > ip_address (member IP, required)
> > subnet_id (member or gateway subnet, optional)
> > gateway_ip (VPN or other layer-3 gateway that should be used to access the
> > member_ip. optional)
> >
> >
> > Expected behaviors:
> >
> >
> > Scenario 1:
> > ip_address specified, subnet_id and gateway_ip are None:  Load 
> > balancing service assumes member IP address is reachable through 
> > default routing. Appropriate for members that are not part of the 
> > local cloud that are accessible from the internet.
> >
> >
> >
> > Scenario 2:
> > ip_address and subnet_id specified, gateway_ip is None: Load 
> > balancing service assumes it needs an interface on subnet_id to talk 
> > directly to the member IP address. Appropriate for members that live 
> > on tenant networks. member_ip should exist within the subnet 
> > specified by subnet_id. This is the only scenario supported under 
> > the current model if we make subnet_id a required field and don't add a 
> > gateway_ip.
> >

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Brandon Logan
I think you'll like that there will soon be a single create call for the
entire graph/tree of a load balancer, so you can get those subnets up
front.  However, the API will still allow creating each entity
individually, which you don't like. I have a feeling most clients and UIs
will use the single create call once it's available rather than creating each
individual entity independently.  That should help for the most part.

Thanks,
Brandon
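
As a rough illustration of what that single create call could accept (the nesting and
field names below are assumptions based on the existing v2 objects, not a final schema):

# Hypothetical single-call body -- every object in the tree, subnets included, up front.
load_balancer_graph = {
    "loadbalancer": {
        "name": "web-lb",
        "vip_subnet_id": "<vip-subnet-uuid>",
        "listeners": [{
            "protocol": "HTTP",
            "protocol_port": 80,
            "default_pool": {
                "protocol": "HTTP",
                "lb_algorithm": "ROUND_ROBIN",
                "healthmonitor": {"type": "HTTP", "delay": 5,
                                  "timeout": 3, "max_retries": 3},
                "members": [
                    {"address": "10.0.0.5", "protocol_port": 80,
                     "subnet_id": "<member-subnet-uuid>"},
                    {"address": "10.0.0.6", "protocol_port": 80,
                     "subnet_id": "<member-subnet-uuid>"},
                ],
            },
        }],
    },
}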

On Sun, 2016-01-17 at 09:05 +, Samuel Bercovici wrote:
> Btw.
> 
> I am still in favor of associating the subnets with the LB and then not specifying 
> them per node at all.
> 
> -Sam.
> 
> 
> -Original Message-
> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Sunday, January 17, 2016 10:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> +1
> Subnet should be mandatory
> 
> The only thing is that this makes it more challenging to support load balancing 
> of servers which are not running in the cloud.
> But I do not see this as a huge user story (lb in cloud load balancing IPs 
> outside the cloud)
> 
> -Sam.
> 
> -Original Message-
> From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
> Sent: Saturday, January 16, 2016 6:56 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional 
> on member create?
> 
> I filed a bug [1] a while ago that subnet_id should be an optional parameter 
> for member creation.  Currently it is required.  Review [2] makes it 
> optional.
> 
> The original thinking was that if the load balancer is ever connected to that 
> same subnet, be it by another member on that subnet or the vip on that 
> subnet, then the user does not need to specify the subnet for new member if 
> that new member is on one of those subnets.
> 
> At the midcycle we discussed it and we had an informal agreement that it 
> required too many assumptions on the part of the end user, neutron lbaas, and 
> driver.
> 
> If anyone wants to voice their opinion on this matter, do so on the bug 
> report, review, or in response to this thread.  Otherwise, it'll probably be 
> abandoned and not done at some point.
> 
> Thanks,
> Brandon
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1426248
> [2] https://review.openstack.org/#/c/267935/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Brandon Logan
being asked for by
> > >> tenants. Therefore, I'm in favor of making member subnet
> optional.
> > >>
> > >> Stephen
> > >>
> > >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek
> <vivekj...@ebay.com> wrote:
> > >>>
>     > >>> If member port (IP address) is allocated by neutron,
> then why do we need
> > >>> to specify it explicitly? It can be derived by LBaaS
> driver implicitly.
> > >>>
> > >>> Thanks,
> > >>> Vivek
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 1/17/16, 1:05 AM, "Samuel Bercovici"
> <samu...@radware.com> wrote:
> > >>>
> > >>>> Btw.
> > >>>>
> > >>>> I am still in favor of associating the subnets with the
>     LB and then not
> > >>>> specifying them per node at all.
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Samuel Bercovici [mailto:samu...@radware.com]
> > >>>> Sent: Sunday, January 17, 2016 10:14 AM
> > >>>> To: OpenStack Development Mailing List (not for usage
> questions)
> > >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia]
> Should subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> +1
> > >>>> Subnet should be mandatory
> > >>>>
> > >>>> The only thing is that this makes it more challenging to support
> load balancing of servers which are not
> > >>>> running in the cloud.
> > >>>> But I do not see this as a huge user story (lb in cloud
> load balancing
> > >>>> IPs outside the cloud)
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Brandon Logan
> [mailto:brandon.lo...@rackspace.com]
> > >>>> Sent: Saturday, January 16, 2016 6:56 AM
> > >>>> To: openstack-dev@lists.openstack.org
> > >>>> Subject: [openstack-dev] [Neutron][LBaaS][Octavia]
> Should subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> I filed a bug [1] a while ago that subnet_id should be
> an optional
> > >>>> parameter for member creation.  Currently it is
> required.  Review [2]
> > >>>> makes it optional.
> > >>>>
> > >>>> The original thinking was that if the load balancer is
> ever connected to
> > >>>> that same subnet, be it by another member on that
> subnet or the vip on that
> > >>>> subnet, then the user does not need to specify the
> subnet for new member if
> > >>>> that new member is on one of those subnets.
> > >>>>
> > >>>> At the midcycle we discussed it and we had an informal
> agreement that it
> > >>>> required too many assumptions on the part of the end
> user, neutron lbaas,
> > >>>> and driver.
> > >>>>
> > >>>> If anyone wants to voice their opinion on this matter,
> do so on the bug
> > >>>> report, review, or in response to this thread.
> Otherwise, it'll probably be
> > >>>> abandoned and not done at some point.
> > >>>>
> > >>>> Thanks,
> > >>>> Brandon
> > >>>>
> > >>>> [1] https://bugs.launchpad.net/neutron/+bug/1426248
> > >>>> [2] https://review.openstack.org/#/c/267935/
> > >>>
> > >>>>>
> 
> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-25 Thread Fox, Kevin M
We are using a neutron v1 LB that has members external to the cloud, in an LB 
used by a particular tenant in production. It is working well. Hoping to do the 
same thing once we get to Octavia+LBaaSv2.

Being able to tweak the routes of the load balancer would be an interesting 
feature, though I don't think I'd ever need to. Maybe that should be an 
extension? I'm guessing a lot of lb plugins won't be able to support it at all.

Thanks,
Kevin


From: Brandon Logan [brandon.lo...@rackspace.com]
Sent: Monday, January 25, 2016 1:03 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

Any additional thoughts and opinions people want to share on this?  I
don't have a horse in this race as long as we don't make dangerous
assumptions about what the user wants.  So I am fine with making
subnet_id optional.

Michael, how strong would your opposition for this be?

Thanks,
Brandon

On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> Michael-- I think you're assuming that adding an external subnet ID
> means that the load balancing service will route requests out an
> interface with a route to said external subnet. However, the model we
> have is actually too simple to convey this information to the load
> balancing service. This is because while we know the member's IP and a
> subnet to which the load balancing service should connect to
> theoretically talk to said IP, we don't have any kind of actual
> routing information for the IP address (like, say a default route for
> the subnet).
>
>
> Consider this not far-fetched example: Suppose a tenant wants to add a
> back-end member which is reachable only over a VPN, the gateway for
> which lives on a tenant internal subnet. If we had a more feature-rich
> model to work with here, the tenant could specify the member IP, the
> subnet containing the VPN gateway and the gateway's IP address. In
> theory the load balancing service could add local routing rules to
> make sure that communication to that member happens on the tenant
> subnet and gets routed to the VPN gateway.
>
>
> If we want to support this use case, then we'd probably need to add an
> optional gateway IP parameter to the member object. (And I'd still be
> in favor of assuming the subnet_id on the member is optional, and that
> default routing should be used if not specified.)
>
>
> Let me see if I can break down several use cases we could support with
> this model. Let's assume the member model contains (among other
> things) the following attributes:
>
>
> ip_address (member IP, required)
> subnet_id (member or gateway subnet, optional)
> gateway_ip (VPN or other layer-3 gateway that should be used to access
> the member_ip. optional)
>
>
> Expected behaviors:
>
>
> Scenario 1:
> ip_address specified, subnet_id and gateway_ip are None:  Load
> balancing service assumes member IP address is reachable through
> default routing. Appropriate for members that are not part of the
> local cloud that are accessible from the internet.
>
>
>
> Scenario 2:
> ip_address and subnet_id specified, gateway_ip is None: Load balancing
> service assumes it needs an interface on subnet_id to talk directly to
> the member IP address. Appropriate for members that live on tenant
> networks. member_ip should exist within the subnet specified by
> subnet_id. This is the only scenario supported under the current model
> if we make subnet_id a required field and don't add a gateway_ip.
>
>
> Scenario 3:
> ip_address, subnet_id and gateway_ip are all specified:  Load
> balancing service assumes it needs an interface on subnet_id to talk
> to the gateway_ip. Load balancing service should add local routing
> rule (ie. to the host and / or local network namespace context of the
> load balancing service itself, not necessarily to Neutron or anything)
> to route any packets destined for member_ip to the gateway_ip.
> gateway_ip should exist within the subnet specified by subnet_id.
> Appropriate for members that are on the other side of a VPN links, or
> reachable via other local routing within a tenant network or local
> cloud.
>
>
> Scenario 4:
> ip_address and gateway_ip are specified, subnet_id is None: This is an
> invalid configuration.
>
>
> So what do y'all think of this? Am I smoking crack with how this
> should work?
>
>
> For what it's worth, I think the "member is on the other side of a
> VPN" scenario is not one our customers are champing at the bit to
> have, so I'm fine with not supporting that kind of topology if nobody
> else wants it. I'm still in favor of making subnet_id optional, as
> this supports b
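
To make the four scenarios above concrete, the validation they imply might look like the
following sketch (illustrative only, not actual neutron-lbaas code):

# Sketch only: the member-create validation implied by scenarios 1-4 above.
def classify_member(ip_address, subnet_id=None, gateway_ip=None):
    if not ip_address:
        raise ValueError("ip_address is required")
    if subnet_id is None and gateway_ip is not None:
        # Scenario 4: a gateway with no subnet to reach it through is invalid.
        raise ValueError("gateway_ip requires subnet_id")
    if subnet_id is None:
        return "use default routing"                      # Scenario 1
    if gateway_ip is None:
        return "plug into subnet, talk directly"          # Scenario 2
    return "plug into subnet, route member via gateway"   # Scenario 3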

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Doug Wiegley
But, by requiring an external subnet, you are assuming that the packets always 
originate from inside a neutron network. That is not necessarily the case with 
a physical device.

doug


> On Jan 19, 2016, at 11:55 AM, Michael Johnson <johnso...@gmail.com> wrote:
> 
> I feel that the subnet should be mandatory as there are too many
> ambiguity issues due to overlapping subnets and multiple routes.
> In the case of an IP being outside of the tenant networks, the user
> would specify an external network that has the appropriate routes.  We
> cannot always assume which tenant network with an external (or VPN)
> route is the appropriate one to use.
> 
> Michael
> 
> On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff <sbaluk...@bluebox.net> 
> wrote:
>> Vivek--
>> 
>> "Member" in this case refers to an IP address that (probably) lives on a
>> tenant back-end network. We can't specify just the IP address when talking
>> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
>> this case, subnet is required). In the case of the namespace driver and
>> Octavia, we use the subnet parameter for all members to determine which
>> back-end networks the load balancing software needs a port on.
>> 
>> I think the original use case for making subnet optional was the idea that
>> sometimes a tenant would like to add a "member" IP that is not part of their
>> tenant networks at all--  this is more than likely an IP address that lives
>> outside the local cloud. The assumption, then, would be that this IP address
>> should be reachable through standard routing from wherever the load balancer
>> happens to live on the network. That is to say, the load balancer will try
>> to get to such an IP address via its default gateway, unless it has a more
>> specific route.
>> 
>> As far as I'm aware, this use case is still valid and being asked for by
>> tenants. Therefore, I'm in favor of making member subnet optional.
>> 
>> Stephen
>> 
>> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek <vivekj...@ebay.com> wrote:
>>> 
>>> If member port (IP address) is allocated by neutron, then why do we need
>>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>> 
>>> Thanks,
>>> Vivek
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
>>> 
>>>> Btw.
>>>> 
>>>> I am still in favor of associating the subnets with the LB and then not
>>>> specifying them per node at all.
>>>> 
>>>> -Sam.
>>>> 
>>>> 
>>>> -Original Message-
>>>> From: Samuel Bercovici [mailto:samu...@radware.com]
>>>> Sent: Sunday, January 17, 2016 10:14 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>>>> optional on member create?
>>>> 
>>>> +1
>>>> Subnet should be mandatory
>>>> 
>>>> The only thing is that this makes it more challenging to support load balancing
>>>> of servers which are not running in the cloud.
>>>> But I do not see this as a huge user story (lb in cloud load balancing
>>>> IPs outside the cloud)
>>>> 
>>>> -Sam.
>>>> 
>>>> -Original Message-
>>>> From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>>>> Sent: Saturday, January 16, 2016 6:56 AM
>>>> To: openstack-dev@lists.openstack.org
>>>> Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>>>> optional on member create?
>>>> 
>>>> I filed a bug [1] a while ago that subnet_id should be an optional
>>>> parameter for member creation.  Currently it is required.  Review [2]
>>>> makes it optional.
>>>> 
>>>> The original thinking was that if the load balancer is ever connected to
>>>> that same subnet, be it by another member on that subnet or the vip on that
>>>> subnet, then the user does not need to specify the subnet for new member if
>>>> that new member is on one of those subnets.
>>>> 
>>>> At the midcycle we discussed it and we had an informal agreement that it
>>>> required too many assumptions on the part of the end user, neutron lbaas,
>>>> and driver.
>>>> 
>>>> If anyone wants

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Michael Johnson
I feel that the subnet should be mandatory as there are too many
ambiguity issues due to overlapping subnets and multiple routes.
In the case of an IP being outside of the tenant networks, the user
would specify an external network that has the appropriate routes.  We
cannot always assume which tenant network with an external (or VPN)
route is the appropriate one to use.

Michael
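
As a rough illustration of that proposal, a member whose IP lives outside the cloud would
name a subnet on the external network explicitly; the request body would look something
like the sketch below (field values are placeholders, and the exact schema may differ):

# Member on an IP outside the cloud, reached via a subnet on the external network.
member_body = {
    "member": {
        "address": "203.0.113.10",                   # IP that lives outside the cloud
        "protocol_port": 443,
        "subnet_id": "<external-net-subnet-uuid>",   # carries the routes to get there
    }
}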

On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:
> Vivek--
>
> "Member" in this case refers to an IP address that (probably) lives on a
> tenant back-end network. We can't specify just the IP address when talking
> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> this case, subnet is required). In the case of the namespace driver and
> Octavia, we use the subnet parameter for all members to determine which
> back-end networks the load balancing software needs a port on.
>
> I think the original use case for making subnet optional was the idea that
> sometimes a tenant would like to add a "member" IP that is not part of their
> tenant networks at all--  this is more than likely an IP address that lives
> outside the local cloud. The assumption, then, would be that this IP address
> should be reachable through standard routing from wherever the load balancer
> happens to live on the network. That is to say, the load balancer will try
> to get to such an IP address via its default gateway, unless it has a more
> specific route.
>
> As far as I'm aware, this use case is still valid and being asked for by
> tenants. Therefore, I'm in favor of making member subnet optional.
>
> Stephen
>
> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek <vivekj...@ebay.com> wrote:
>>
>> If member port (IP address) is allocated by neutron, then why do we need
>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>
>> Thanks,
>> Vivek
>>
>>
>>
>>
>>
>>
>> On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
>>
>> >Btw.
>> >
>> >I am still in favor of associating the subnets with the LB and then not
>> > specifying them per node at all.
>> >
>> >-Sam.
>> >
>> >
>> >-Original Message-
>> >From: Samuel Bercovici [mailto:samu...@radware.com]
>> >Sent: Sunday, January 17, 2016 10:14 AM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >+1
>> >Subnet should be mandatory
>> >
>> >The only thing is that this makes it more challenging to support load balancing
>> > of servers which are not running in the cloud.
>> >But I do not see this as a huge user story (lb in cloud load balancing
>> > IPs outside the cloud)
>> >
>> >-Sam.
>> >
>> >-Original Message-
>> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>> >Sent: Saturday, January 16, 2016 6:56 AM
>> >To: openstack-dev@lists.openstack.org
>> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >I filed a bug [1] a while ago that subnet_id should be an optional
>> > parameter for member creation.  Currently it is required.  Review [2]
>> > makes it optional.
>> >
>> >The original thinking was that if the load balancer is ever connected to
>> > that same subnet, be it by another member on that subnet or the vip on that
>> > subnet, then the user does not need to specify the subnet for new member if
>> > that new member is on one of those subnets.
>> >
>> >At the midcycle we discussed it and we had an informal agreement that it
>> > required too many assumptions on the part of the end user, neutron lbaas,
>> > and driver.
>> >
>> >If anyone wants to voice their opinion on this matter, do so on the bug
>> > report, review, or in response to this thread.  Otherwise, it'll probably 
>> > be
>> > abandoned and not done at some point.
>> >
>> >Thanks,
>> >Brandon
>> >
>> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
>> >[2] https://review.openstack.org/#/c/267935/
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > ope

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Brandon Logan
So it really comes down to driver (or driver's appliance)
implementation.  Here's some scenarios to consider:

1) vip on tenant network, members on tenant network
- if a user wants to add an external IP to this configuration, how do we
handle that?  If the subnet is optional and the member just uses default
routing, then it won't ever reach the external IP unless the backend
implementation sets up routing to external networks from the load balancer.  I
think this is a bad idea because the tenant would probably want these
networks isolated.  But if the backend puts a load balancer on it with
external connectivity, it's not as isolated as it was.  So to me, if
subnet is optional the best choice is to use default routing, which
*SHOULD* fail in this case.   This of course is something a tenant
will have to realize.  The good thing about a required subnet_id is that
the tenant has explicitly stated they want external connectivity, and
the backend is not making assumptions as to whether they want it or
not.

2) vip on public network, members on tenant network
- the default route should be able to reach external IPs now, so if
subnet_id is optional it works.  If subnet_id is required then the
tenant would have to specify the public network again, which is less
than ideal and also has the other issues brought up in this thread.

All other scenario permutations are similar to the above ones so I don't
think i need to go through them.

Basically, I'm waffling on this and am currently on the optional
subnet_id side but as the builders of octavia, I don't think we should
allow a load balancer external access unless the tenant has in a way
given permission by the configuration they've explicitly set.  Though,
that too should be defined.

Thanks,
Brandon
On Tue, 2016-01-19 at 12:07 -0700, Doug Wiegley wrote:
> But, by requiring an external subnet, you are assuming that the packets 
> always originate from inside a neutron network. That is not necessarily the 
> case with a physical device.
> 
> doug
> 
> 
> > On Jan 19, 2016, at 11:55 AM, Michael Johnson <johnso...@gmail.com> wrote:
> > 
> > I feel that the subnet should be mandatory as there are too many
> > ambiguity issues due to overlapping subnets and multiple routes.
> > In the case of an IP being outside of the tenant networks, the user
> > would specify an external network that has the appropriate routes.  We
> > cannot always assume which tenant network with an external (or VPN)
> > route is the appropriate one to use.
> > 
> > Michael
> > 
> > On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff <sbaluk...@bluebox.net> 
> > wrote:
> >> Vivek--
> >> 
> >> "Member" in this case refers to an IP address that (probably) lives on a
> >> tenant back-end network. We can't specify just the IP address when talking
> >> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> >> this case, subnet is required). In the case of the namespace driver and
> >> Octavia, we use the subnet parameter for all members to determine which
> >> back-end networks the load balancing software needs a port on.
> >> 
> >> I think the original use case for making subnet optional was the idea that
> >> sometimes a tenant would like to add a "member" IP that is not part of 
> >> their
> >> tenant networks at all--  this is more than likely an IP address that lives
> >> outside the local cloud. The assumption, then, would be that this IP 
> >> address
> >> should be reachable through standard routing from wherever the load 
> >> balancer
> >> happens to live on the network. That is to say, the load balancer will try
> >> to get to such an IP address via its default gateway, unless it has a more
> >> specific route.
> >> 
> >> As far as I'm aware, this use case is still valid and being asked for by
> >> tenants. Therefore, I'm in favor of making member subnet optional.
> >> 
> >> Stephen
> >> 
> >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek <vivekj...@ebay.com> wrote:
> >>> 
> >>> If member port (IP address) is allocated by neutron, then why do we need
> >>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
> >>> 
> >>> Thanks,
> >>> Vivek
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
> >>> 
> >>>> Btw.
> >>>> 
> >>>> I am still in favor of associating the subnets with the LB and then not

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Stephen Balukoff
ss
> than ideal and also has other issues brought up in this thread.
>
> All other scenario permutations are similar to the above ones so I don't
> think i need to go through them.
>
> Basically, I'm waffling on this and am currently on the optional
> subnet_id side but as the builders of octavia, I don't think we should
> allow a load balancer external access unless the tenant has in a way
> given permission by the configuration they've explicitly set.  Though,
> that too should be defined.
>
> Thanks,
> Brandon
> On Tue, 2016-01-19 at 12:07 -0700, Doug Wiegley wrote:
> > But, by requiring an external subnet, you are assuming that the packets
> always originate from inside a neutron network. That is not necessarily the
> case with a physical device.
> >
> > doug
> >
> >
> > > On Jan 19, 2016, at 11:55 AM, Michael Johnson <johnso...@gmail.com>
> wrote:
> > >
> > > I feel that the subnet should be mandatory as there are too many
> > > ambiguity issues due to overlapping subnets and multiple routes.
> > > In the case of an IP being outside of the tenant networks, the user
> > > would specify an external network that has the appropriate routes.  We
> > > cannot always assume which tenant network with an external (or VPN)
> > > route is the appropriate one to use.
> > >
> > > Michael
> > >
> > > On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff <
> sbaluk...@bluebox.net> wrote:
> > >> Vivek--
> > >>
> > >> "Member" in this case refers to an IP address that (probably) lives
> on a
> > >> tenant back-end network. We can't specify just the IP address when
> talking
> > >> to such an IP since tenant subnets may use overlapping IP ranges (ie.
> in
> > >> this case, subnet is required). In the case of the namespace driver
> and
> > >> Octavia, we use the subnet parameter for all members to determine
> which
> > >> back-end networks the load balancing software needs a port on.
> > >>
> > >> I think the original use case for making subnet optional was the idea
> that
> > >> sometimes a tenant would like to add a "member" IP that is not part
> of their
> > >> tenant networks at all--  this is more than likely an IP address that
> lives
> > >> outside the local cloud. The assumption, then, would be that this IP
> address
> > >> should be reachable through standard routing from wherever the load
> balancer
> > >> happens to live on the network. That is to say, the load balancer
> will try
> > >> to get to such an IP address via its default gateway, unless it has a
> more
> > >> specific route.
> > >>
> > >> As far as I'm aware, this use case is still valid and being asked for
> by
> > >> tenants. Therefore, I'm in favor of making member subnet optional.
> > >>
> > >> Stephen
> > >>
> > >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek <vivekj...@ebay.com>
> wrote:
> > >>>
> > >>> If member port (IP address) is allocated by neutron, then why do we
> need
> > >>> to specify it explicitly? It can be derived by LBaaS driver
> implicitly.
> > >>>
> > >>> Thanks,
> > >>> Vivek
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
> > >>>
> > >>>> Btw.
> > >>>>
> > >>>> I am still in favor of associating the subnets with the LB and then
> not
> > >>>> specifying them per node at all.
> > >>>>
> > >>>> -Sam.
> > >>>>
> > >>>>
> > >>>> -Original Message-
> > >>>> From: Samuel Bercovici [mailto:samu...@radware.com]
> > >>>> Sent: Sunday, January 17, 2016 10:14 AM
> > >>>> To: OpenStack Development Mailing List (not for usage questions)
> > >>>> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should
> subnet be
> > >>>> optional on member create?
> > >>>>
> > >>>> +1
> > >>>> Subnet should be mandatory
> > >>>>
> > >>>> The only thing this makes supporting load balancing servers which
> are not
> > >>>> running i

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Jain, Vivek
If member port (IP address) is allocated by neutron, then why do we need to 
specify it explicitly? It can be derived by LBaaS driver implicitly.

Thanks,
Vivek






On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:

>Btw.
>
>I am still in favor of associating the subnets with the LB and then not specifying 
>them per node at all.
>
>-Sam.
>
>
>-Original Message-
>From: Samuel Bercovici [mailto:samu...@radware.com] 
>Sent: Sunday, January 17, 2016 10:14 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
>optional on member create?
>
>+1
>Subnet should be mandatory
>
>The only thing is that this makes it more challenging to support load balancing 
>of servers which are not running in the cloud.
>But I do not see this as a huge user story (lb in cloud load balancing IPs 
>outside the cloud)
>
>-Sam.
>
>-Original Message-
>From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
>Sent: Saturday, January 16, 2016 6:56 AM
>To: openstack-dev@lists.openstack.org
>Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional 
>on member create?
>
>I filed a bug [1] a while ago that subnet_id should be an optional parameter 
>for member creation.  Currently it is required.  Review [2] makes it 
>optional.
>
>The original thinking was that if the load balancer is ever connected to that 
>same subnet, be it by another member on that subnet or the vip on that subnet, 
>then the user does not need to specify the subnet for new member if that new 
>member is on one of those subnets.
>
>At the midcycle we discussed it and we had an informal agreement that it 
>required too many assumptions on the part of the end user, neutron lbaas, and 
>driver.
>
>If anyone wants to voice their opinion on this matter, do so on the bug 
>report, review, or in response to this thread.  Otherwise, it'll probably be 
>abandoned and not done at some point.
>
>Thanks,
>Brandon
>
>[1] https://bugs.launchpad.net/neutron/+bug/1426248
>[2] https://review.openstack.org/#/c/267935/
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Stephen Balukoff
Vivek--

"Member" in this case refers to an IP address that (probably) lives on a
tenant back-end network. We can't specify just the IP address when talking
to such an IP since tenant subnets may use overlapping IP ranges (ie. in
this case, subnet is required). In the case of the namespace driver and
Octavia, we use the subnet parameter for all members to determine which
back-end networks the load balancing software needs a port on.

I think the original use case for making subnet optional was the idea that
sometimes a tenant would like to add a "member" IP that is not part of
their tenant networks at all--  this is more than likely an IP address that
lives outside the local cloud. The assumption, then, would be that this IP
address should be reachable through standard routing from wherever the load
balancer happens to live on the network. That is to say, the load balancer
will try to get to such an IP address via its default gateway, unless it
has a more specific route.

As far as I'm aware, this use case is still valid and being asked for by
tenants. Therefore, I'm in favor of making member subnet optional.

Stephen

On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek <vivekj...@ebay.com> wrote:

> If member port (IP address) is allocated by neutron, then why do we need
> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>
> Thanks,
> Vivek
>
>
>
>
>
>
> On 1/17/16, 1:05 AM, "Samuel Bercovici" <samu...@radware.com> wrote:
>
> >Btw.
> >
> >I am still in favor of associating the subnets with the LB and then not
> > specifying them per node at all.
> >
> >-Sam.
> >
> >
> >-Original Message-
> >From: Samuel Bercovici [mailto:samu...@radware.com]
> >Sent: Sunday, January 17, 2016 10:14 AM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >+1
> >Subnet should be mandatory
> >
> >The only thing is that this makes it more challenging to support load balancing
> of servers which are not running in the cloud.
> >But I do not see this as a huge user story (lb in cloud load balancing
> IPs outside the cloud)
> >
> >-Sam.
> >
> >-Original Message-
> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
> >Sent: Saturday, January 16, 2016 6:56 AM
> >To: openstack-dev@lists.openstack.org
> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >I filed a bug [1] a while ago that subnet_id should be an optional
> parameter for member creation.  Currently it is required.  Review [2]
> makes it optional.
> >
> >The original thinking was that if the load balancer is ever connected to
> that same subnet, be it by another member on that subnet or the vip on that
> subnet, then the user does not need to specify the subnet for new member if
> that new member is on one of those subnets.
> >
> >At the midcycle we discussed it and we had an informal agreement that it
> required too many assumptions on the part of the end user, neutron lbaas,
> and driver.
> >
> >If anyone wants to voice their opinion on this matter, do so on the bug
> report, review, or in response to this thread.  Otherwise, it'll probably
> be abandoned and not done at some point.
> >
> >Thanks,
> >Brandon
> >
> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
> >[2] https://review.openstack.org/#/c/267935/
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-17 Thread Samuel Bercovici
Btw.

I am still in favor of associating the subnets with the LB and then not specifying 
them per node at all.

-Sam.


-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Sunday, January 17, 2016 10:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

+1
Subnet should be mandatory

The only thing is that this makes it more challenging to support load balancing 
of servers which are not running in the cloud.
But I do not see this as a huge user story (lb in cloud load balancing IPs 
outside the cloud)

-Sam.

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Saturday, January 16, 2016 6:56 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on 
member create?

I filed a bug [1] a while ago that subnet_id should be an optional parameter 
for member creation.  Currently it is required.  Review [2] makes it 
optional.

The original thinking was that if the load balancer is ever connected to that 
same subnet, be it by another member on that subnet or the vip on that subnet, 
then the user does not need to specify the subnet for new member if that new 
member is on one of those subnets.

At the midcycle we discussed it and we had an informal agreement that it 
required too many assumptions on the part of the end user, neutron lbaas, and 
driver.

If anyone wants to voice their opinion on this matter, do so on the bug report, 
review, or in response to this thread.  Otherwise, it'll probably be abandoned 
and not done at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-17 Thread Samuel Bercovici
+1
Subnet should be mandatory

The only thing is that this makes it more challenging to support load balancing 
of servers which are not running in the cloud.
But I do not see this as a huge user story (lb in cloud load balancing IPs 
outside the cloud)

-Sam.

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Saturday, January 16, 2016 6:56 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on 
member create?

I filed a bug [1] a while ago that subnet_id should be an optional parameter 
for member creation.  Currently it is required.  Review [2] makes it 
optional.

The original thinking was that if the load balancer is ever connected to that 
same subnet, be it by another member on that subnet or the vip on that subnet, 
then the user does not need to specify the subnet for new member if that new 
member is on one of those subnets.

At the midcycle we discussed it and we had an informal agreement that it 
required too many assumptions on the part of the end user, neutron lbaas, and 
driver.

If anyone wants to voice their opinion on this matter, do so on the bug report, 
review, or in response to this thread.  Otherwise, it'll probably be abandoned 
and not done at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-15 Thread Brandon Logan
I filed a bug [1] a while ago that subnet_id should be an optional
parameter for member creation.  Currently it is required.  Review [2]
makes it optional.

The original thinking was that if the load balancer is ever connected to
that same subnet, be it by another member on that subnet or the vip on
that subnet, then the user does not need to specify the subnet for new
member if that new member is on one of those subnets.
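
For illustration, that inference might have looked something like the sketch below (not
merged code; get_subnet_cidr is a hypothetical lookup helper, and the object shapes are
assumptions):

import ipaddress

def infer_member_subnet(member_ip, loadbalancer, get_subnet_cidr):
    # Subnets the LB is already plugged into: the VIP subnet plus every
    # subnet an existing member was created with.
    known = {loadbalancer["vip_subnet_id"]}
    known.update(m["subnet_id"] for m in loadbalancer["members"])

    for subnet_id in known:
        cidr = get_subnet_cidr(subnet_id)   # hypothetical neutron lookup
        if ipaddress.ip_address(member_ip) in ipaddress.ip_network(cidr):
            return subnet_id
    return None  # no match: fall back to requiring an explicit subnet_id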

At the midcycle we discussed it and we had an informal agreement that it
required too many assumptions on the part of the end user, neutron
lbaas, and driver.

If anyone wants to voice their opinion on this matter, do so on the bug
report, review, or in response to this thread.  Otherwise, it'll
probably be abandoned and not done at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting 5/20/15

2015-05-14 Thread Eichberger, German
All,

We won't have an Octavia meeting next week due to the OpenStack summit but
we will have a few sessions there -- so please make sure to say hi...

German


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] No Octavia meeting today

2015-05-06 Thread Eichberger, German
All,

In order to work on the demo for Vancouver we will be skipping today's (5/6/15) 
meeting. We will have another meeting on 5/13 to finalize for the summit --

If you have questions you can find us in the channel — and again please keep up 
the good work with reviews!

Thanks,
German


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-05 Thread Doug Wiegley
You’re definitely stuck on lbaas v1 until you upgrade to Kilo, but…

But, it would be possible to write an lbaasv1 driver for octavia, though 
octavia likely won’t be mature enough to be useful for that until the end of 
Liberty or so. Also, though “vendor” is a bad word in openstack (and that’s 
ok), there are a few vendor offerings, some of which will make v1 more usable 
and/or extend it with other features.

Can you describe what you’re trying to do, and we can make some suggestions?  
Worst case, we’re always looking to hear more use cases as we build things.

Thanks,
doug



 On May 4, 2015, at 10:14 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
 Thanks a bunch Doug, very clear & helpful info.
 
 so with that said those who run IceHouse or Juno are (more or less :) ) dead 
 in the water as the only option is v1 ...hmm
 
 Dani
 
 On Mon, May 4, 2015 at 10:21 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
 lbaas v1:
 
 This is the original Neutron LBaaS, and what you see in Horizon or in the 
 neutron CLI as “lb-*”.  It has an haproxy backend, and a few vendors 
 supporting it. Feature-wise, it’s basically a byte pump.
 
 lbaas v2:
 
 This is the “new” Neutron LBaaS, and is in the neutron CLI as “lbaas-*” (it’s 
 not yet in Horizon.)  It first shipped in Kilo. It re-organizes the objects, 
 and adds TLS termination support, and has L7 plus other new goodies planned 
 in Liberty. It similarly has an haproxy reference backend with a few vendors 
 supporting it.
 
 octavia:
 
 Think of this as a service vm framework that is specific to lbaas, to 
 implement lbaas via nova VMs instead of “lbaas agents. It is expected to be 
 the reference backend implementation for neutron lbaasv2 in liberty. It could 
 also be used as its own front-end, and/or given drivers to be a load 
 balancing framework completely outside neutron/nova, though that is not the 
 present direction of development.
 
 Thanks,
 doug
 
 
 
 
  On May 4, 2015, at 1:57 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm trying to gather more info about the differences between
 
  Neutron LBaaS v1
  Neutron LBaaS v2
  Octavia
 
  I know Octavia is still not marked production but on the other hand i keep 
  hearing inside my organization that Neutron LBaaS is missing few critical 
  pieces so i'd very much appreciate if anyone can provide detailed info 
  about the differences above.
 
  Thanks,
  Dani
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Thanks a bunch Doug, very clear & helpful info.

so with that said those who run IceHouse or Juno are (more or less :) )
dead in the water as the only option is v1 ...hmm

Dani

On Mon, May 4, 2015 at 10:21 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 lbaas v1:

 This is the original Neutron LBaaS, and what you see in Horizon or in the
 neutron CLI as “lb-*”.  It has an haproxy backend, and a few vendors
 supporting it. Feature-wise, it’s basically a byte pump.

 lbaas v2:

 This is the “new” Neutron LBaaS, and is in the neutron CLI as “lbaas-*”
 (it’s not yet in Horizon.)  It first shipped in Kilo. It re-organizes the
 objects, and adds TLS termination support, and has L7 plus other new
 goodies planned in Liberty. It similarly has an haproxy reference backend
 with a few vendors supporting it.

 octavia:

 Think of this as a service vm framework that is specific to lbaas, to
 implement lbaas via nova VMs instead of “lbaas agents. It is expected to
 be the reference backend implementation for neutron lbaasv2 in liberty. It
 could also be used as its own front-end, and/or given drivers to be a load
 balancing framework completely outside neutron/nova, though that is not the
 present direction of development.

 Thanks,
 doug




  On May 4, 2015, at 1:57 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
  Hi all,
 
  I'm trying to gather more info about the differences between
 
  Neutron LBaaS v1
  Neutron LBaaS v2
  Octavia
 
  I know Octavia is still not marked production but on the other hand i
 keep hearing inside my organization that Neutron LBaaS is missing few
 critical pieces so i'd very much appreciate if anyone can provide
 detailed info about the differences above.
 
  Thanks,
  Dani
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Daniel Comnea
Hi all,

I'm trying to gather more info about the differences between

Neutron LBaaS v1
Neutron LBaaS v2
Octavia

I know Octavia is still not marked production, but on the other hand I keep
hearing inside my organization that Neutron LBaaS is missing a few critical
pieces, so I'd very much appreciate it if anyone can provide detailed info
about the differences above.

Thanks,
Dani
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia] what are the main differences between the two

2015-05-04 Thread Doug Wiegley
lbaas v1:

This is the original Neutron LBaaS, and what you see in Horizon or in the 
neutron CLI as “lb-*”.  It has an haproxy backend, and a few vendors supporting 
it. Feature-wise, it’s basically a byte pump.

lbaas v2:

This is the “new” Neutron LBaaS, and is in the neutron CLI as “lbaas-*” (it’s 
not yet in Horizon.)  It first shipped in Kilo. It re-organizes the objects, 
and adds TLS termination support, and has L7 plus other new goodies planned in 
Liberty. It similarly has an haproxy reference backend with a few vendors 
supporting it.

octavia:

Think of this as a service vm framework that is specific to lbaas, to implement 
lbaas via nova VMs instead of "lbaas agents". It is expected to be the 
reference backend implementation for neutron lbaasv2 in liberty. It could also 
be used as its own front-end, and/or given drivers to be a load balancing 
framework completely outside neutron/nova, though that is not the present 
direction of development.

Thanks,
doug




 On May 4, 2015, at 1:57 PM, Daniel Comnea comnea.d...@gmail.com wrote:
 
 Hi all,
 
 I'm trying to gather more info about the differences between 
 
 Neutron LBaaS v1
 Neutron LBaaS v2
 Octavia
 
 I know Octavia is still not marked production but on the other hand i keep 
 hearing inside my organization that Neutron LBaaS is missing few critical 
 pieces so i'd very much appreciate if anyone can provide detailed info 
 about the differences above.
 
 Thanks,
 Dani
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-15 Thread Trevor Vardeman
I have a couple of proposals done up on paper that I'll have available
shortly; I'll reply with a link.

 - Trevor J. Vardeman
 - trevor.varde...@rackspace.com
 - (210) 312 - 4606




On 4/14/15, 5:34 PM, Eichberger, German german.eichber...@hp.com wrote:

All,

Let's decide on a logo tomorrow so we can print stickers in time for
Vancouver. Here are some designs to consider:
http://bit.ly/Octavia_logo_vote

We will discuss more at tomorrow's meeting - Agenda:
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015-04-15
- but please come prepared with one of your favorite designs...

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Logo for Octavia project

2015-04-14 Thread Eichberger, German
All,

Let's decide on a logo tomorrow so we can print stickers in time for Vancouver. 
Here are some designs to consider: http://bit.ly/Octavia_logo_vote

We will discuss more at tomorrow's meeting - Agenda: 
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2015-04-15
 - but please come prepared with one of your favorite designs...

Thanks,
German

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-06 Thread Stephen Balukoff
Hi Jorge,

So, one can query a pre-defined UNIX stats socket or stats HTTP service (which
can be an in-band service, by the way) and HAProxy will give all kinds of
useful stats on the current listener, its pools, its members, etc. We will
probably be querying this service in any case to detect things like members
going down, etc. for sending notifications upstream. The problem is this
interface presently resets state whenever haproxy is reloaded, which needs
to happen whenever there's a configuration change. I was able to meet with
the HAProxy team (including Willy Tarreau), and they're interested in
making improvements to HAProxy that we would find useful. Foremost on their
list was the ability to preserve this state information between restarts.
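
For anyone who has not poked at that stats interface, it is just a line-oriented
command channel. Below is a minimal sketch, assuming a standard "stats socket"
line in haproxy.cfg (the socket path here is a hypothetical example), of pulling
the per-listener counters in Python:

import csv
import io
import socket

STATS_SOCKET = "/var/lib/haproxy/stats"  # assumed path, set by "stats socket" in haproxy.cfg

def show_stat(path=STATS_SOCKET):
    """Return HAProxy's per-frontend/backend/server counters as a list of dicts."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    sock.sendall(b"show stat\n")
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    # The reply is CSV; the header row starts with "# ".
    text = b"".join(chunks).decode("ascii", "replace").lstrip("# ")
    return list(csv.DictReader(io.StringIO(text)))

for row in show_stat():
    # scur = current sessions, stot = total sessions, bin/bout = bytes in/out
    print(row["pxname"], row["svname"], row["scur"], row["stot"], row["bin"], row["bout"])

Every counter in that reply resets when haproxy is reloaded, which is exactly
the state-loss problem described above.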

Until that's ready and in a stable release of haproxy, it's also pretty
trivial to parse out IP addresses and listening ports from the haproxy
config, and use these to populate a series of IPtables chains whose entire
purpose is to gather bandwidth I/O data. These tables won't give you things
like max connection counts, etc., but if you're billing on raw bandwidth
usage, these stats are guaranteed to be accurate and survive through
haproxy restarts. It also does not require one to scan logs, and is
available cheaply in real time. (This is how we bill for bandwidth on our
current software load balancer product.)
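
A sketch of the iptables side, purely for illustration: the chain name and the
RETURN target below are choices made for this example rather than anything the
project defines, and the assumption is that the counting chain is jumped to
from INPUT or FORWARD for the VIP traffic.

import subprocess

CHAIN = "LB_ACCT"  # hypothetical accounting-only chain

def add_vip_rule(vip_ip, vip_port):
    # RETURN just bounces back to the calling chain, so the rule only counts
    # matching packets; it never changes how the traffic is handled.
    subprocess.run(
        ["iptables", "-A", CHAIN, "-d", vip_ip, "-p", "tcp",
         "--dport", str(vip_port), "-j", "RETURN"],
        check=True)

def read_byte_counters():
    """Return {rule description: bytes} parsed from the chain's packet/byte counters."""
    out = subprocess.run(
        ["iptables", "-nvx", "-L", CHAIN],
        check=True, capture_output=True, text=True).stdout
    counters = {}
    for line in out.splitlines():
        fields = line.split()
        if not fields or not fields[0].isdigit():
            continue  # skip the chain header and column header lines
        counters[" ".join(fields[2:])] = int(fields[1])
    return counters

Because these counters live in the kernel rather than in haproxy, they keep
accumulating across haproxy reloads, which is the whole point of the approach.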

My vote would be to use the IPTables approach for now until HAProxy is able
to retain state between restarts. For other stats data (eg. max connection
counts, total number of requests), I would recommend gathering this data
from the haproxy daemon, and keeping an external state file that we update
immediately before restarting haproxy. (Yes, this means we lose some
information on connections that are still open when haproxy restarts, but
it gives us a good approximate value since we anticipate haproxy
restarts being relatively rare in comparison to serving actual requests).

Logs are still very handy, and I agree that if extreme accuracy in billing
is required, this is the way to get that data. Logs are also very handy for
users to have for troubleshooting purposes. But I think logs are not well
suited to providing data which will be consumed in real time (eg. stuff
which will populate a dashboard.)

What do y'all think of this?

Stephen

On Wed, Nov 5, 2014 at 10:25 AM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

   Thanks German,

  It looks like the conversation is going towards using the HAProxy stats
 interface and/or iptables. I just wanted to explore logging a bit. That
 said, can you and Stephen share your thoughts on how we might implement
 that approach? I'd like to get a spec out soon because I believe metric
 gathering can be worked on in parallel with the rest of the project. In
 fact, I was hoping to get my hands dirty on this one and contribute some
 code, but a strategy and spec are needed first before I can start that ;)

  Cheers,
 --Jorge

   From: Eichberger, German german.eichber...@hp.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, November 5, 2014 3:50 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge,



 I am still not convinced that we need to use logging for usage metrics. We
 can also use the haproxy stats interface (which the haproxy team is willing
 to improve based on our input) and/or iptables as Stephen suggested. That
 said this probably needs more exploration.



 From an HP perspective the full logs on the load balancer are mostly
 interesting for the user of the loadbalancer – we only care about
 aggregates for our metering. That said we would be happy to just move them
 on demand to a place the user can access.



 Thanks,

 German





 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Tuesday, November 04, 2014 8:20 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage
 Requirements



 Hi Susanne,



 Thanks for the reply. As Angus pointed out, the one big item that needs to
 be addressed with this method is network I/O of raw logs. One idea to
 mitigate this concern is to store the data locally for the
 operator-configured granularity, process it and THEN send it to ceilometer,
 etc. If we can't engineer a way to deal with the high network I/O that will
 inevitably occur we may have to move towards a polling approach. Thoughts?



 Cheers,

 --Jorge



 From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Tuesday, November 4, 2014 11:10 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Eichberger, German
Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables as Stephen suggested. That said 
this probably needs more exploration.

From an HP perspective the full logs on the load balancer are mostly 
interesting for the user of the loadbalancer - we only care about aggregates 
for our metering. That said we would be happy to just move them on demand to a 
place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle sleipnir...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, November 4, 2014 11:10 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows for the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as by default (but with the option to disable it) Ceilometer. Ceilometer is the 
de facto metering service for OpenStack so we need to support it. We would like the 
integration with Ceilometer to be based on Notifications. I believe German sent 
a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).

 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to ceilometer or whatever service an operator
 will be using for billing purposes.

Keep the logs: This is where we would use log forwarding to either Swift or 
Elasticsearch, etc.

- Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop cluster 
or any backend in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-05 Thread Jorge Miramontes
Thanks German,

It looks like the conversation is going towards using the HAProxy stats 
interface and/or iptables. I just wanted to explore logging a bit. That said, 
can you and Stephen share your thoughts on how we might implement that 
approach? I'd like to get a spec out soon because I believe metric gathering 
can be worked on in parallel with the rest of the project. In fact, I was 
hoping to get my hands dirty on this one and contribute some code, but a 
strategy and spec are needed first before I can start that ;)

Cheers,
--Jorge

From: Eichberger, German 
german.eichber...@hp.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, November 5, 2014 3:50 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge,

I am still not convinced that we need to use logging for usage metrics. We can 
also use the haproxy stats interface (which the haproxy team is willing to 
improve based on our input) and/or iptables as Stephen suggested. That said 
this probably needs more exploration.

From an HP perspective the full logs on the load balancer are mostly 
interesting for the user of the loadbalancer – we only care about aggregates 
for our metering. That said we would be happy to just move them on demand to a 
place the user can access.

Thanks,
German


From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Tuesday, November 04, 2014 8:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Susanne,

Thanks for the reply. As Angus pointed out, the one big item that needs to be 
addressed with this method is network I/O of raw logs. One idea to mitigate 
this concern is to store the data locally for the operator-configured 
granularity, process it and THEN send it to ceilometer, etc. If we can't 
engineer a way to deal with the high network I/O that will inevitably occur we 
may have to move towards a polling approach. Thoughts?

Cheers,
--Jorge

From: Susanne Balle sleipnir...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, November 4, 2014 11:10 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the 
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows for the logs to be moved 
to various backends such as Elasticsearch, Hadoop HDFS, Swift, etc., as well 
as by default (but with the option to disable it) Ceilometer. Ceilometer is the 
de facto metering service for OpenStack so we need to support it. We would like the 
integration with Ceilometer to be based on Notifications. I believe German sent 
a reference to that in another email. The pre-processing will need to be 
optional and the amount of data aggregation configurable.

What you describe below to me is usage gathering/metering. The billing is 
independent since companies with private clouds might not want to bill but 
still need usage reports for capacity planning etc. Billing/Charging is just 
putting a monetary value on the various forms of usage.

I agree with all points.

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).

 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to ceilometer or whatever service an operator
 will be using for billing purposes.

Keep the logs: This is where we would use log forwarding to either Swift or 
Elasticsearch, etc.

- Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we were 
in disagreement on the IRC. I am not sure why but it sounded like you were 
talking about something else when you were talking about the real time 
processing. If we are just talking about moving the logs to your Hadoop cluster 
or any backend in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey German/Susanne,

To continue our conversation from our IRC

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-11-04 Thread Susanne Balle
Jorge

I understand your use cases around capturing of metrics, etc.

Today we mine the logs for usage information on our Hadoop cluster. In the
future we'll capture all the metrics via ceilometer.

IMHO the amphorae should have an interface that allows for the logs to be
moved to various backends such as Elasticsearch, Hadoop HDFS, Swift,
etc., as well as by default (but with the option to disable it) Ceilometer.
Ceilometer is the de facto metering service for OpenStack so we need to support it.
We would like the integration with Ceilometer to be based on Notifications.
I believe German sent a reference to that in another email. The
pre-processing will need to be optional and the amount of data aggregation
configurable.
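
As a purely illustrative sketch of such an interface (the backend names and
config keys are invented for this example, not an existing amphora API), both
the forwarding targets and the aggregation level would be operator-configurable:

import abc

class LogSink(abc.ABC):
    """One implementation per backend (Swift, Elasticsearch, HDFS, Ceilometer, ...)."""

    @abc.abstractmethod
    def forward(self, records):
        """Ship a batch of (optionally pre-aggregated) log records."""

class SwiftSink(LogSink):
    def __init__(self, container):
        self.container = container

    def forward(self, records):
        # Placeholder: a real implementation would upload the batch as an object.
        print("would upload %d records to container %s" % (len(records), self.container))

class CeilometerSink(LogSink):
    def forward(self, records):
        # Placeholder: a real implementation would emit notifications.
        for rec in records:
            print("would emit notification:", rec)

def build_sinks(conf):
    sinks = [SwiftSink(conf.get("swift_container", "lb-logs"))]
    if conf.get("ceilometer_enabled", True):
        sinks.append(CeilometerSink())
    return sinks

for sink in build_sinks({"ceilometer_enabled": True}):
    sink.forward([{"lb_id": "example", "bytes_out": 1234}])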

What you describe below to me is usage gathering/metering. The billing is
independent since companies with private clouds might not want to bill but
still need usage reports for capacity planning etc. Billing/Charging is
just putting a monetary value on the various forms of usage.

I agree with all points.

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).

 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to ceilometer or whatever service an operator
 will be using for billing purposes.

Keep the logs: This is where we would use log forwarding to either Swift
or Elasticsearch, etc.

- Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

It looks like we are in agreement so I am not sure why it sounded like we
were in disagreement on the IRC. I am not sure why but it sounded like you
were talking about something else when you were talking about the real time
processing. If we are just talking about moving the logs to your Hadoop
cluster or any backend in a scalable way, we agree.

Susanne


On Thu, Oct 23, 2014 at 6:30 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

 Hey German/Susanne,

 To continue our conversation from our IRC meeting could you all provide
 more insight into you usage requirements? Also, I'd like to clarify a few
 points related to using logging.

 I am advocating that logs be used for multiple purposes, including
 billing. Billing requirements are different from connection logging
 requirements. However, connection logging is a very accurate mechanism to
 capture billable metrics and thus, is related. My vision for this is
 something like the following:

 - Capture logs in a scalable way (i.e. capture logs and put them on a
 separate scalable store somewhere so that it doesn't affect the amphora).
 - Every X amount of time (every hour, for example) process the logs and
 send them on their merry way to ceilometer or whatever service an operator
 will be using for billing purposes.
 - Keep logs for some configurable amount of time. This could be anything
 from indefinitely to not at all. Rackspace is planning on keeping them for
 a certain period of time for the following reasons:

 A) We have connection logging as a planned feature. If a customer
 turns
 on the connection logging feature for their load balancer it will already
 have a history. One important aspect of this is that customers (at least
 ours) tend to turn on logging after they realize they need it (usually
 after a tragic lb event). By already capturing the logs I'm sure customers
 will be extremely happy to see that there are already X days worth of logs
 they can immediately sift through.
 B) Operators and their support teams can leverage logs when
 providing
 service to their customers. This is huge for finding issues and resolving
 them quickly.
 C) Albeit a minor point, building support for logs from the get-go
 mitigates capacity management uncertainty. My example earlier was the
 extreme case of every customer turning on logging at the same time. While
 unlikely, I would hate to manage that!

 I agree that there are other ways to capture billing metrics but, from my
 experience, those tend to be more complex than what I am advocating and
 without the added benefits listed above. An understanding of HP's desires
 on this matter will hopefully get this to a point where we can start
 working on a spec.

 Cheers,
 --Jorge

 P.S. Real-time stats is a different beast and I envision there being an
 API call that returns real-time data such as this ==
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


 From:  Eichberger, German german.eichber...@hp.com
 Reply-To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date:  Wednesday, October 22, 2014 2:41 PM
 To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Jorge Miramontes
Thanks for the reply Angus,

DDoS attacks are definitely a concern we are trying to address here. My
assumptions are based on a solution that is engineered for this type of
thing. Are you more concerned with network I/O during a DoS attack or
storing the logs? Under the idea I had, I wanted to make the amount of
time logs are stored for configurable so that the operator can choose
whether they want the logs after processing or not. The network I/O of
pumping logs out is a concern of mine, however.

Sampling seems like the go-to solution for gathering usage but I was
looking for something different as sampling can get messy and can be
inaccurate for certain metrics. Depending on the sampling rate, this
solution has the potential to miss spikes in traffic if you are gathering
gauge metrics such as active connections/sessions. Using logs would be
100% accurate in this case. Also, I'm assuming LBaaS will have events so
combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
gets complicated. Combining logs with events is arguably less complicated
as the granularity of logs is high. Due to this granularity, one can split
the logs based on the event times cleanly. Since sampling will have a
fixed cadence you will have to perform a manual sample at the time of
the event (i.e. add complexity).
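
To make the "split the logs on the event times" point concrete, here is a small
sketch; the record layout and the event list are made up for the example and are
not an agreed schema:

from datetime import datetime

# Hypothetical inputs: timestamped connection log records and lifecycle events.
logs = [
    {"ts": datetime(2014, 10, 28, 10, 5), "bytes_out": 1200},
    {"ts": datetime(2014, 10, 28, 10, 35), "bytes_out": 800},
    {"ts": datetime(2014, 10, 28, 11, 10), "bytes_out": 500},
]
events = [
    {"ts": datetime(2014, 10, 28, 10, 0), "type": "CREATE"},
    {"ts": datetime(2014, 10, 28, 11, 0), "type": "UPDATE"},
]

def usage_per_event_window(logs, events):
    """Sum billable bytes between consecutive lifecycle events.

    Every log record carries its own timestamp, so no extra sample has to be
    taken at the moment an event happens; the split falls out naturally.
    """
    events = sorted(events, key=lambda e: e["ts"])
    windows = []
    for i, ev in enumerate(events):
        end = events[i + 1]["ts"] if i + 1 < len(events) else None
        total = sum(r["bytes_out"] for r in logs
                    if r["ts"] >= ev["ts"] and (end is None or r["ts"] < end))
        windows.append({"event": ev["type"], "start": ev["ts"], "bytes_out": total})
    return windows

print(usage_per_event_window(logs, events))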

At the end of the day there is no free lunch so more insight is
appreciated. Thanks for the feedback.

Cheers,
--Jorge




On 10/27/14 6:55 PM, Angus Lees g...@inodes.org wrote:

On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
  I,d like to start a conversation on usage requirements and have a few
  suggestions. I advocate that, since we will be using TCP and
HTTP/HTTPS
  based protocols, we inherently enable connection logging for load
 
  balancers for several reasons:
 Just request from the operator side of things:
 Please think about the scalability when storing all logs.
 
 e.g. we are currently logging http requests to one load balanced
application
 (that would be a fit for LBAAS) It is about 500 requests per second,
which
 adds up to 40GB per day (in elasticsearch.) Please make sure whatever
 solution is chosen it can cope with machines doing 1000s of requests per
 second...

And to take this further, what happens during DoS attack (either syn
flood or 
full connections)?  How do we ensure that we don't lose our logging
system 
and/or amplify the DoS attack?

One solution is sampling, with a tunable knob for the sampling rate -
perhaps 
tunable per-vip.  This still increases linearly with attack traffic,
unless you 
use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).

One of the advantages of (eg) polling the number of current sessions is
that 
the cost of that monitoring is essentially fixed regardless of the number
of 
connections passing through.  Numerous other metrics (rate of new
connections, 
etc) also have this property and could presumably be used for accurate
billing 
- without amplifying attacks.

I think we should be careful about whether we want logging or metrics for
more 
accurate billing.  Both are useful, but full logging is only really
required 
for ad-hoc debugging (important! but different).

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-28 Thread Angus Lees
On Tue, 28 Oct 2014 04:42:27 PM Jorge Miramontes wrote:
 Thanks for the reply Angus,
 
 DDoS attacks are definitely a concern we are trying to address here. My
 assumptions are based on a solution that is engineered for this type of
 thing. Are you more concerned with network I/O during a DoS attack or
 storing the logs? Under the idea I had, I wanted to make the amount of
 time logs are stored for configurable so that the operator can choose
 whether they want the logs after processing or not. The network I/O of
 pumping logs out is a concern of mine, however.

My primary concern was the generated network I/O, and the write bandwidth to 
storage media implied by that (not so much the accumulated volume of data).

We're in an era where 10Gb/s networking is now common for serving/loadbalancer 
infrastructure and as far as I can see the trend for networking is climbing 
more steeply than storage I/O, so it's only going to get worse.   10Gb/s of 
short-lived connections is a *lot* to try to write to reliable storage 
somewhere and later analyse.
It's a useful option for some users, but it would be a shame to have to limit 
loadbalancer throughput by the logging infrastructure just because we didn't 
have an alternative available.

I think you're right, that we don't have an obviously-correct choice here.  I 
think we need to expose both cheap sampling/polling of counters and more 
detailed logging of connections matching patterns (and indeed actual packet 
capture would be nice too).  Someone could then choose to base their billing 
on either datasource depending on their own accuracy-vs-cost-of-collection 
tradeoffs.  I don't see that either approach is going to be sufficiently 
universal to obsolete the other :(
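
For the "cheap polling of counters" half, the per-poll cost really is fixed; a
small illustrative sketch (the interval and the gauge being read are assumptions
for the example) of turning a concurrent-sessions gauge into billable
gauge-seconds:

import time

POLL_INTERVAL = 10  # seconds; a tunable knob, independent of connection rate

def poll_current_sessions():
    # Stand-in for reading a gauge such as haproxy's scur; replace with a real reader.
    return 42

def run_sampler(duration=60):
    """Accumulate session-seconds at a fixed cadence.

    The work done per interval does not grow with attack traffic, which is the
    property that makes this safe under DoS.
    """
    session_seconds = 0
    start = time.time()
    while time.time() - start < duration:
        session_seconds += poll_current_sessions() * POLL_INTERVAL
        time.sleep(POLL_INTERVAL)
    return session_seconds

print("approx session-seconds:", run_sampler())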

Also: UDP.   Most providers are all about HTTP now, but there are still some 
people that need to bill for UDP, SIP, VPN, etc. traffic.

 - Gus

 Sampling seems like the go-to solution for gathering usage but I was
 looking for something different as sampling can get messy and can be
 inaccurate for certain metrics. Depending on the sampling rate, this
 solution has the potential to miss spikes in traffic if you are gathering
 gauge metrics such as active connections/sessions. Using logs would be
 100% accurate in this case. Also, I'm assuming LBaaS will have events so
 combining sampling with events (CREATE, UPDATE, SUSPEND, DELETE, etc.)
 gets complicated. Combining logs with events is arguably less complicated
 as the granularity of logs is high. Due to this granularity, one can split
 the logs based on the event times cleanly. Since sampling will have a
 fixed cadence you will have to perform a manual sample at the time of
 the event (i.e. add complexity).
 
 At the end of the day there is no free lunch so more insight is
 appreciated. Thanks for the feedback.
 
 Cheers,
 --Jorge
 
 On 10/27/14 6:55 PM, Angus Lees g...@inodes.org wrote:
 On Wed, 22 Oct 2014 11:29:27 AM Robert van Leeuwen wrote:
   I'd like to start a conversation on usage requirements and have a few
   suggestions. I advocate that, since we will be using TCP and
 
 HTTP/HTTPS
 
   based protocols, we inherently enable connection logging for load
  
   balancers for several reasons:
  Just request from the operator side of things:
  Please think about the scalability when storing all logs.
  
  e.g. we are currently logging http requests to one load balanced
 
 application
 
  (that would be a fit for LBAAS) It is about 500 requests per second,
 
 which
 
  adds up to 40GB per day (in elasticsearch.) Please make sure whatever
  solution is chosen it can cope with machines doing 1000s of requests per
  second...
 
 And to take this further, what happens during DoS attack (either syn
 flood or
 full connections)?  How do we ensure that we don't lose our logging
 system
 and/or amplify the DoS attack?
 
 One solution is sampling, with a tunable knob for the sampling rate -
 perhaps
 tunable per-vip.  This still increases linearly with attack traffic,
 unless you
 use time-based sampling (1-every-N-seconds rather than 1-every-N-packets).
 
 One of the advantages of (eg) polling the number of current sessions is
 that
 the cost of that monitoring is essentially fixed regardless of the number
 of
 connections passing through.  Numerous other metrics (rate of new
 connections,
 etc) also have this property and could presumably be used for accurate
 billing
 - without amplifying attacks.
 
 I think we should be careful about whether we want logging or metrics for
 more
 accurate billing.  Both are useful, but full logging is only really
 required
 for ad-hoc debugging (important! but different).
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:
 Several people have been requesting that we resume the Advanced
 Services' meetings [1] to discuss some of the topics being mentioned
 in this thread. Perhaps it might help people to have a focussed
 discussion on the topic of advanced services' spin-out prior to the
 design summit session [2] in Paris. So I propose that we resume our
 weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
 #openstack-meeting-3.

Given how important this is to Neutron in general, I would prefer NOT
to see this discussed in the Advanced Services meeting, but rather in
the regular Neutron meeting. These are the types of things which need
broader oversight and involvement. Let's please discuss this in the
regular Neutron meeting, which is an on-demand meeting format, rather
than in a sub-team meeting.

 Thanks,
 ~Sumit.

 [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
 [2] 
 http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y

 On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
 skand...@cisco.com wrote:
 Hi Doug:

 On 10/26/14, 6:01 PM, Doug Wiegley do...@a10networks.com wrote:

Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin-out someday.  Check.  Now, let’s jump to the interesting part.

 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that

There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron -
Services - LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.

 In total agreement and I have heard these sentiments in multiple
 conversations across multiple players.
 It would be really fruitful to have a constructive conversation on this
 across the services, and there are
 enough similar issues to make this worthwhile.

 Thanks

 Sridar


Thanks,
Doug




On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,

 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:

 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)

 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.

 For the record, my answers are:

 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.

 Thanks,
 doug

 On 10/24/14, 3:47 PM, Brandon Logan 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley do...@a10networks.com wrote:
 Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

 I agree with this sentiment.  I’d just like to pull-up to the decision
 level, and if we can get some consensus on how we move forward, we can
 bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
 love each other.  Check.  Things are going to change sometime.  Check.  We
 might spin-out someday.  Check.  Now, let’s jump to the interesting part.

I think we all know we want to spin these out, as Doug says we just
need to have a plan around how we make that happen. I'm in agreement
with Doug's sentiment above.

 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that

 There is merit here, but consider the sorts of things that an advanced
 services framework should be doing:

 - plugging into neutron ports, with all manner of topologies
 - service VM handling
 - plugging into nova-network
 - service chaining
 - applying things like security groups to services

 … this is all stuff that Octavia is talking about implementing itself in a
 basically defensive manner, instead of leveraging other work.  And there
 are specific reasons for that.  But, maybe we can at least take steps to
 not be incompatible about it.  Or maybe there is a hierarchy of Neutron -
 Services - LB, where we’re still spun out, but not doing it in a way that
 we have to re-implement the world all the time.  It’s at least worth a
 conversation or three.

Doug, can you document this on the etherpad for the services spinout
[1]? I've added some brief text at the top on what the objective for
this session is, but documenting more along the lines of what you have
here would be good.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/neutron-services

 Thanks,
 Doug




 On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,

 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:

 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)

 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.

 For the record, my answers are:

 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.

 Thanks,
 doug

 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out
of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.
 
 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
 exact duplicate.  Octavia will be growing more mature in stackforge at
a
 higher velocity than an Openstack project, so I expect by the time Kilo
 comes around it's API will be very mature.
 
 Octavia's API doesn't have to be called Octavia either.  It can be
 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-27 Thread Jorge Miramontes
Hey German,

I totally agree on the security/privacy aspect of logs, especially due to
the SSL/TLS Termination feature.

After looking at BP [1] and the spec [2] for metering, it looks like it is
proposing to send more than just billable usage to ceilometer. From my
previous email I considered this tracking usage (billable usage can be
a subset of tracking usage). It also appears to me that there is an
implied interface for ceilometer as we need to be able to capture metrics
from various lb devices (HAProxy, Nginx, Netscaler, etc.), standardize
them, and then send them off. That said, what type of implementation was
HP thinking of to gather these metrics? Instead of focusing on my idea of
using logging I'd like to change the discussion and get a picture as to
what you all are envisioning for a possible implementation direction.
Important items for Rackspace include accuracy of data, no lost data (i.e.
when sending to upstream system ensure it gets there), reliability of
cadence when sending usage to upstream system, and the ability to
backtrack and audit data whenever there seems to be a discrepancy in a
customer's monthly statement. Keep in mind that we need to integrate with
our current billing pipeline so we are not planning on using ceilometer at
the moment. Thus, we need to make this somewhat configurable for those not
using ceilometer.
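
As a purely illustrative sketch of that implied interface (the class and metric
names are invented for this example), each device driver would return the same
standardized sample and the sender would be pluggable, so it could target
ceilometer or an in-house billing pipeline:

import abc

STANDARD_FIELDS = ("bytes_in", "bytes_out", "active_connections", "total_connections")

class LBStatsDriver(abc.ABC):
    """One implementation per device type (HAProxy, Nginx, Netscaler, ...)."""

    @abc.abstractmethod
    def collect(self, loadbalancer_id):
        """Return a dict containing exactly the STANDARD_FIELDS keys."""

class HAProxyStatsDriver(LBStatsDriver):
    def collect(self, loadbalancer_id):
        # Stand-in values; a real driver would read the haproxy stats interface.
        return {"bytes_in": 0, "bytes_out": 0,
                "active_connections": 0, "total_connections": 0}

def publish(sample, sender):
    """Validate the standardized sample, then hand it to a pluggable sender."""
    missing = [f for f in STANDARD_FIELDS if f not in sample]
    if missing:
        raise ValueError("sample missing fields: %s" % missing)
    sender(sample)

# Example: send to stdout instead of ceilometer or a billing pipeline.
publish(HAProxyStatsDriver().collect("lb-1234"), sender=print)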

Cheers,
--Jorge

[1] 
https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-meter-lbaas

[2] https://review.openstack.org/#/c/94958/12/specs/juno/lbaas_metering.rst


On 10/24/14 5:19 PM, Eichberger, German german.eichber...@hp.com wrote:

Hi Jorge,

I agree completely with the points you make about the logs. We still feel
that metering and logging are two different problems. The ceilometer
community has a proposal on how to meter lbaas (see
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_metering.html)
and we at HP think that those values are sufficient for us
for the time being.

I think our discussion is mostly about connection logs which are emitted
some way from amphora (e.g. haproxy logs). Since they are customer's logs
we need to explore on our end the privacy implications (I assume at RAX
you have controls in place to make sure that there is no violation :-).
Also I need to check if our central logging system is scalable enough and
we can send logs there without creating security holes.

Another possibility is to forward our amphora agent logs, syslog-style, to a
central system to help with troubleshooting and debugging. Those could be
sufficiently anonymized to avoid privacy issues. What are your thoughts on
logging those?
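
A small sketch of what that anonymization could look like before forwarding; the
regex and the masking policy are only an example, not an agreed approach:

import re

IPV4 = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.\d{1,3}\b")

def anonymize(line):
    """Mask the host octet of any IPv4 address so client identity is not forwarded."""
    return IPV4.sub(lambda m: "%s.%s.%s.x" % (m.group(1), m.group(2), m.group(3)), line)

# Example haproxy-style log line (made up for illustration):
print(anonymize("Oct 24 17:19:01 amphora haproxy[123]: 203.0.113.42:51820 http-in~ 200"))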

Thanks,
German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Thursday, October 23, 2014 3:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different from connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an
operator will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:
   
   A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure
customers will be extremely happy to see that there are already X days
worth of logs they can immediately sift through.
   B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
   C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Mandeep Dhami
Hi Kyle:

Are you scheduling an on-demand meeting, or are you proposing that the
agenda for next neutron meeting include this as an on-demand item?

Regards,
Mandeep


On Mon, Oct 27, 2014 at 6:56 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
 sumitnaiksa...@gmail.com wrote:
  Several people have been requesting that we resume the Advanced
  Services' meetings [1] to discuss some of the topics being mentioned
  in this thread. Perhaps it might help people to have a focussed
  discussion on the topic of advanced services' spin-out prior to the
  design summit session [2] in Paris. So I propose that we resume our
  weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
  #openstack-meeting-3.
 
 Given how important this is to Neutron in general, I would prefer NOT
 to see this discussed in the Advanced Services meeting, but rather in
 the regular Neutron meeting. These are the types of things which need
 broader oversight and involvement. Let's please discuss this in the
 regular Neutron meeting, which is an on-demand meeting format, rather
 than in a sub-team meeting.

  Thanks,
  ~Sumit.
 
  [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
  [2]
 http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
 
  On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
  skand...@cisco.com wrote:
  Hi Doug:
 
  On 10/26/14, 6:01 PM, Doug Wiegley do...@a10networks.com wrote:
 
 Hi Brandon,
 
  4. I brought this up now so that we can decide whether we want to
  discuss it at the advanced services spin out session.  I don't see the
  harm in opinions being discussed before the summit, during the summit,
  and more thoroughly after the summit.
 
  I agree with this sentiment.  I’d just like to pull-up to the decision
 level, and if we can get some consensus on how we move forward, we can
 bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
 We
 love each other.  Check.  Things are going to change sometime.  Check.
 We
  might spin-out someday.  Check.  Now, let’s jump to the interesting
 part.
 
  3. The main reason a spin out makes sense from Neutron is that the
 scope
  for Neutron is too large for the attention advances services needs
 from
  the Neutron Core.  If all of advanced services spins out, I see that
 
 There is merit here, but consider the sorts of things that an advanced
 services framework should be doing:
 
 - plugging into neutron ports, with all manner of topologies
 - service VM handling
 - plugging into nova-network
 - service chaining
 - applying things like security groups to services
 
  … this is all stuff that Octavia is talking about implementing itself
 in a
 basically defensive manner, instead of leveraging other work.  And there
 are specific reasons for that.  But, maybe we can at least take steps to
 not be incompatible about it.  Or maybe there is a hierarchy of Neutron
 -
  Services - LB, where we’re still spun out, but not doing it in a way
 that
  we have to re-implement the world all the time.  It’s at least worth a
 conversation or three.
 
  In total agreement and I have heard these sentiments in multiple
  conversations across multiple players.
  It would be really fruitful to have a constructive conversation on this
  across the services, and there are
  enough similar issues to make this worthwhile.
 
  Thanks
 
  Sridar
 
 
 Thanks,
 Doug
 
 
 
 
 On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:
 
 Good questions Doug.  My answers are as follows:
 
 1. Yes
 2. Some time after Kilo (same as I don't know when)
 3. The main reason a spin out makes sense from Neutron is that the
 scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that
 repeating itself within an advanced services project.  More and more
 advanced services will get added in and the scope will become too
 large.  There would definitely be benefits to it though, but I think we
 would end up being right where we are today.
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.
 
 Yes the brunt of the time will not be spent on the API, but since it
 seemed like an opportunity to kill two birds with one stone, I figured
 it warranted a discussion.
 
 Thanks,
 Brandon
 
 On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
  Hi all,
 
  Before we get into the details of which API goes where, I’d like to
 see
 us
  answer the questions of:
 
  1. Are we spinning out?
  2. When?
  3. With or without the rest of advanced services?
  4. Do we want to wait until we (the royal “we” of “the Neutron team”)
 have
  had the Paris summit discussions on vendor split-out and adv.
 services
  spinout 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Kyle Mestery
On Mon, Oct 27, 2014 at 11:48 AM, Mandeep Dhami dh...@noironetworks.com wrote:
 Hi Kyle:

 Are you scheduling an on-demand meeting, or are you proposing that the
 agenda for next neutron meeting include this as an on-demand item?

Per my email to the list recently [1], the weekly rotating Neutron
meeting is now an on-demand agenda, rather than a rollup of sub-team
status. I'm saying this particular topic (advanced services spinout)
will be discussed in Paris, and it's worth adding it to the weekly
Neutron meeting [2] agenda in the on-demand section. This is a pretty
large topic with many interested parties, thus the attention in the
broader neutron meeting.

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/048328.html
[2] https://wiki.openstack.org/wiki/Network/Meetings

 Regards,
 Mandeep


 On Mon, Oct 27, 2014 at 6:56 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Oct 27, 2014 at 12:15 AM, Sumit Naiksatam
 sumitnaiksa...@gmail.com wrote:
  Several people have been requesting that we resume the Advanced
  Services' meetings [1] to discuss some of the topics being mentioned
  in this thread. Perhaps it might help people to have a focussed
  discussion on the topic of advanced services' spin-out prior to the
  design summit session [2] in Paris. So I propose that we resume our
  weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
  #openstack-meeting-3.
 
 Given how important this is to Neutron in general, I would prefer NOT
 to see this discussed in the Advanced Services meeting, but rather in
 the regular Neutron meeting. These are the types of things which need
 broader oversight and involvement. Let's please discuss this in the
 regular Neutron meeting, which is an on-demand meeting format, rather
 than in a sub-team meeting.

  Thanks,
  ~Sumit.
 
  [1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
  [2]
  http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y
 
  On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
  skand...@cisco.com wrote:
  Hi Doug:
 
  On 10/26/14, 6:01 PM, Doug Wiegley do...@a10networks.com wrote:
 
 Hi Brandon,
 
  4. I brought this up now so that we can decide whether we want to
  discuss it at the advanced services spin out session.  I don't see
  the
  harm in opinions being discussed before the summit, during the
  summit,
  and more thoroughly after the summit.
 
  I agree with this sentiment.  I’d just like to pull-up to the decision
 level, and if we can get some consensus on how we move forward, we can
 bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
  We
 love each other.  Check.  Things are going to change sometime.  Check.
  We
  might spin-out someday.  Check.  Now, let’s jump to the interesting
  part.
 
  3. The main reason a spin out makes sense from Neutron is that the
  scope
  for Neutron is too large for the attention advances services needs
  from
  the Neutron Core.  If all of advanced services spins out, I see that
 
 There is merit here, but consider the sorts of things that an advanced
 services framework should be doing:
 
 - plugging into neutron ports, with all manner of topologies
 - service VM handling
 - plugging into nova-network
 - service chaining
 - applying things like security groups to services
 
  … this is all stuff that Octavia is talking about implementing itself
  in a
 basically defensive manner, instead of leveraging other work.  And
  there
 are specific reasons for that.  But, maybe we can at least take steps
  to
 not be incompatible about it.  Or maybe there is a hierarchy of Neutron
  -
  Services - LB, where we’re still spun out, but not doing it in a way
  that
  we have to re-implement the world all the time.  It’s at least worth a
 conversation or three.
 
  In total agreement and I have heard these sentiments in multiple
  conversations across multiple players.
  It would be really fruitful to have a constructive conversation on this
  across the services, and there are
  enough similar issues to make this worthwhile.
 
  Thanks
 
  Sridar
 
 
 Thanks,
 Doug
 
 
 
 
 On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com
  wrote:
 
 Good questions Doug.  My answers are as follows:
 
 1. Yes
 2. Some time after Kilo (same as I don't know when)
 3. The main reason a spin out makes sense from Neutron is that the
  scope
 for Neutron is too large for the attention advances services needs
  from
 the Neutron Core.  If all of advanced services spins out, I see that
 repeating itself within an advanced services project.  More and more
 advanced services will get added in and the scope will become too
 large.  There would definitely be benefits to it though, but I think
  we
 would end up being right where we are today.
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Doug Wiegley
Hi Jay,

Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
this timeslot?

https://wiki.openstack.org/wiki/Octavia#Meetings


Thanks,
Doug


On 10/27/14, 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:

Sorry for top-posting, but where can the API working group see the
proposed Octavia API specification or documentation? I'd love it if the
API WG could be involved in reviewing the public REST API.

Best,
-jay

On 10/27/2014 10:01 AM, Kyle Mestery wrote:
 On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley do...@a10networks.com
wrote:
 Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

 I agree with this sentiment.  I’d just like to pull-up to the decision
 level, and if we can get some consensus on how we move forward, we can
 bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
We
 love each other.  Check.  Things are going to change sometime.  Check.
 We
 might spin-out someday.  Check.  Now, let’s jump to the interesting
part.

 I think we all know we want to spin these out, as Doug says we just
 need to have a plan around how we make that happen. I'm in agreement
 with Doug's sentiment above.

 3. The main reason a spin out makes sense from Neutron is that the
scope
 for Neutron is too large for the attention advances services needs
from
 the Neutron Core.  If all of advanced services spins out, I see that

 There is merit here, but consider the sorts of things that an advanced
 services framework should be doing:

 - plugging into neutron ports, with all manner of topologies
 - service VM handling
 - plugging into nova-network
 - service chaining
 - applying things like security groups to services

 … this is all stuff that Octavia is talking about implementing itself
in a
 basically defensive manner, instead of leveraging other work.  And
there
 are specific reasons for that.  But, maybe we can at least take steps
to
 not be incompatible about it.  Or maybe there is a hierarchy of
Neutron -
 Services - LB, where we’re still spun out, but not doing it in a way
that
 we have to re-implement the world all the time.  It’s at least worth a
 conversation or three.

 Doug, can you document this on the etherpad for the services spinout
 [1]? I've added some brief text at the top on what the objective for
 this session is, but documenting more along the lines of what you have
 here would be good.

 Thanks,
 Kyle

 [1] https://etherpad.openstack.org/p/neutron-services

 Thanks,
 Doug




 On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 Good questions Doug.  My answers are as follows:

 1. Yes
 2. Some time after Kilo (same as I don't know when)
 3. The main reason a spin out makes sense from Neutron is that the
scope
 for Neutron is too large for the attention advances services needs
from
 the Neutron Core.  If all of advanced services spins out, I see that
 repeating itself within an advanced services project.  More and more
 advanced services will get added in and the scope will become too
 large.  There would definitely be benefits to it though, but I think
we
 would end up being right where we are today.
 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

 Yes the brunt of the time will not be spent on the API, but since it
 seemed like an opportunity to kill two birds with one stone, I figured
 it warranted a discussion.

 Thanks,
 Brandon

 On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,

 Before we get into the details of which API goes where, I’d like to
see
 us
 answer the questions of:

 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
 have
 had the Paris summit discussions on vendor split-out and adv.
services
 spinout before we answer those questions?  (Yes, that question is
 leading.)

 To me, the “where does the API live” is an implementation detail, and
 not
 where the time will need to be spent.

 For the record, my answers are:

 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.

 Thanks,
 doug

 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 With the recent talk about advanced services spinning out of
Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin
out
 of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.

 Octavia is going to (and has) an API.  The current thinking is that
an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Jay Pipes

Yup, can do! :)

-jay

On 10/27/2014 01:55 PM, Doug Wiegley wrote:

Hi Jay,

Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
this timeslot?

https://wiki.openstack.org/wiki/Octavia#Meetings


Thanks,
Doug


On 10/27/14, 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:


Sorry for top-posting, but where can the API working group see the
proposed Octavia API specification or documentation? I'd love it if the
API WG could be involved in reviewing the public REST API.

Best,
-jay

On 10/27/2014 10:01 AM, Kyle Mestery wrote:

On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley do...@a10networks.com
wrote:

Hi Brandon,


4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.


I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
We
love each other.  Check.  Things are going to change sometime.  Check.
We
might spin-out someday.  Check.  Now, let’s jump to the interesting
part.


I think we all know we want to spin these out, as Doug says we just
need to have a plan around how we make that happen. I'm in agreement
with Doug's sentiment above.


3. The main reason a spin out makes sense from Neutron is that the
scope
for Neutron is too large for the attention advances services needs
from
the Neutron Core.  If all of advanced services spins out, I see that


There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself
in a
basically defensive manner, instead of leveraging other work.  And
there
are specific reasons for that.  But, maybe we can at least take steps
to
not be incompatible about it.  Or maybe there is a hierarchy of
Neutron -
Services - LB, where we’re still spun out, but not doing it in a way
that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.


Doug, can you document this on the etherpad for the services spinout
[1]? I've added some brief text at the top on what the objective for
this session is, but documenting more along the lines of what you have
here would be good.

Thanks,
Kyle

[1] https://etherpad.openstack.org/p/neutron-services


Thanks,
Doug




On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:


Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the
scope
for Neutron is too large for the attention advances services needs
from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think
we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:

Hi all,

Before we get into the details of which API goes where, I’d like to
see
us
answer the questions of:

1. Are we spinning out?
2. When?
3. With or without the rest of advanced services?
4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
had the Paris summit discussions on vendor split-out and adv.
services
spinout before we answer those questions?  (Yes, that question is
leading.)

To me, the “where does the API live” is an implementation detail, and
not
where the time will need to be spent.

For the record, my answers are:

1. Yes.
2. I don’t know.
3. I don’t know; this needs some serious discussion.
4. Yes.

Thanks,
doug

On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:


With the recent talk about advanced services spinning out of
Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin
out

of

Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to (and has) an API.  The current thinking is that
an
Octavia driver will be created in Neutron LBaaS that will make a
requests to the Octavia API.  When 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-27 Thread Brandon Logan
Hi Jay,
Just so you have some information on the API before the meeting here is
the spec for it:

https://review.openstack.org/#/c/122338/

I'm sure there are a lot of details that might be missing, but it should
give you a decent idea.  Sorry for the markup/markdown being dumb if you
try to build with Sphinx.  It's probably easier to just read the raw .rst
file.

Thanks,
Brandon

On Mon, 2014-10-27 at 14:05 -0400, Jay Pipes wrote:
 Yup, can do! :)
 
 -jay
 
 On 10/27/2014 01:55 PM, Doug Wiegley wrote:
  Hi Jay,
 
  Let’s add that as an agenda item at our Weekly IRC meeting.  Can you make
  this timeslot?
 
  https://wiki.openstack.org/wiki/Octavia#Meetings
 
 
  Thanks,
  Doug
 
 
  On 10/27/14, 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  Sorry for top-posting, but where can the API working group see the
  proposed Octavia API specification or documentation? I'd love it if the
  API WG could be involved in reviewing the public REST API.
 
  Best,
  -jay
 
  On 10/27/2014 10:01 AM, Kyle Mestery wrote:
  On Sun, Oct 26, 2014 at 8:01 PM, Doug Wiegley do...@a10networks.com
  wrote:
  Hi Brandon,
 
  4. I brought this up now so that we can decide whether we want to
  discuss it at the advanced services spin out session.  I don't see the
  harm in opinions being discussed before the summit, during the summit,
  and more thoroughly after the summit.
 
  I agree with this sentiment.  I’d just like to pull-up to the decision
  level, and if we can get some consensus on how we move forward, we can
  bring a concrete plan to the summit instead of 40 minutes of Kumbaya.
  We
  love each other.  Check.  Things are going to change sometime.  Check.
  We
  might spin-out someday.  Check.  Now, let’s jump to the interesting
  part.
 
  I think we all know we want to spin these out, as Doug says we just
  need to have a plan around how we make that happen. I'm in agreement
  with Doug's sentiment above.
 
  3. The main reason a spin out makes sense from Neutron is that the
  scope
  for Neutron is too large for the attention advances services needs
  from
  the Neutron Core.  If all of advanced services spins out, I see that
 
  There is merit here, but consider the sorts of things that an advanced
  services framework should be doing:
 
  - plugging into neutron ports, with all manner of topologies
  - service VM handling
  - plugging into nova-network
  - service chaining
  - applying things like security groups to services
 
  … this is all stuff that Octavia is talking about implementing itself
  in a
  basically defensive manner, instead of leveraging other work.  And
  there
  are specific reasons for that.  But, maybe we can at least take steps
  to
  not be incompatible about it.  Or maybe there is a hierarchy of
  Neutron -
  Services - LB, where we’re still spun out, but not doing it in a way
  that
  we have to re-implement the world all the time.  It’s at least worth a
  conversation or three.
 
  Doug, can you document this on the etherpad for the services spinout
  [1]? I've added some brief text at the top on what the objective for
  this session is, but documenting more along the lines of what you have
  here would be good.
 
  Thanks,
  Kyle
 
  [1] https://etherpad.openstack.org/p/neutron-services
 
  Thanks,
  Doug
 
 
 
 
  On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com
  wrote:
 
  Good questions Doug.  My answers are as follows:
 
  1. Yes
  2. Some time after Kilo (same as I don't know when)
  3. The main reason a spin out makes sense from Neutron is that the
  scope
  for Neutron is too large for the attention advances services needs
  from
  the Neutron Core.  If all of advanced services spins out, I see that
  repeating itself within an advanced services project.  More and more
  advanced services will get added in and the scope will become too
  large.  There would definitely be benefits to it though, but I think
  we
  would end up being right where we are today.
  4. I brought this up now so that we can decide whether we want to
  discuss it at the advanced services spin out session.  I don't see the
  harm in opinions being discussed before the summit, during the summit,
  and more thoroughly after the summit.
 
  Yes the brunt of the time will not be spent on the API, but since it
  seemed like an opportunity to kill two birds with one stone, I figured
  it warranted a discussion.
 
  Thanks,
  Brandon
 
  On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
  Hi all,
 
  Before we get into the details of which API goes where, I’d like to
  see
  us
  answer the questions of:
 
  1. Are we spinning out?
  2. When?
  3. With or without the rest of advanced services?
  4. Do we want to wait until we (the royal “we” of “the Neutron team”)
  have
  had the Paris summit discussions on vendor split-out and adv.
  services
  spinout before we answer those questions?  (Yes, that question is
  leading.)
 
  To me, the “where does the API live” is an implementation detail, and
  not
 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Doug Wiegley
Hi all,

Before we get into the details of which API goes where, I’d like to see us
answer the questions of:

1. Are we spinning out?
2. When?
3. With or without the rest of advanced services?
4. Do we want to wait until we (the royal “we” of “the Neutron team”) have
had the Paris summit discussions on vendor split-out and adv. services
spinout before we answer those questions?  (Yes, that question is leading.)

To me, the “where does the API live” is an implementation detail, and not
where the time will need to be spent.

For the record, my answers are:

1. Yes.
2. I don’t know.
3. I don’t know; this needs some serious discussion.
4. Yes.

Thanks,
doug

On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

With the recent talk about advanced services spinning out of Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin out of
Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to (and has) an API.  The current thinking is that an
Octavia driver will be created in Neutron LBaaS that will make a
requests to the Octavia API.  When LBaaS spins out of Neutron, it will
need a standalone API.  Octavia's API seems to be a good solution to
this.  It will support vendor drivers much like the current Neutron
LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
exact duplicate.  Octavia will be growing more mature in stackforge at a
higher velocity than an Openstack project, so I expect by the time Kilo
comes around it's API will be very mature.

Octavia's API doesn't have to be called Octavia either.  It can be
separated out and it can be called Openstack LBaaS, and the rest of
Octavia (the actual brains of it) will just be another driver to
Openstack LBaaS, which would retain the Octavia name.

This is my PROS and CONS list to using Octavia's API as the spun out
LBaaS:

PROS
1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
will already have this done.
2. Most of the same people working on Octavia have worked on Neutron
LBaaS v2.
3. It's out of Neutron faster, which is good for Neutron and LBaaS.

CONS
1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
another version of an LBaaS API.
2. The Octavia API will also have a separate Operator API which will
most likely only work with Octavia, not any vendors.

The CONS are easily solvable, and IMHO the PROS greatly outweigh the
CONS.

This is just my opinion though and I'd like to hear back from as many as
possible.  Add on to the PROS and CONS if wanted.

If it is direction we can agree on going then we can add as a talking
point in the advanced services spin out meeting:

http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.
VEq66HWx3UY

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Brandon Logan
Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advanced services need from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,
 
 Before we get into the details of which API goes where, I’d like to see us
 answer the questions of:
 
 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”) have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is leading.)
 
 To me, the “where does the API live” is an implementation detail, and not
 where the time will need to be spent.
 
 For the record, my answers are:
 
 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.
 
 Thanks,
 doug
 
 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com wrote:
 
 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.
 
 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
 exact duplicate.  Octavia will be growing more mature in stackforge at a
 higher velocity than an Openstack project, so I expect by the time Kilo
 comes around it's API will be very mature.
 
 Octavia's API doesn't have to be called Octavia either.  It can be
 separated out and it can be called Openstack LBaaS, and the rest of
 Octavia (the actual brains of it) will just be another driver to
 Openstack LBaaS, which would retain the Octavia name.
 
 This is my PROS and CONS list to using Octavia's API as the spun out
 LBaaS:
 
 PROS
 1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
 will already have this done.
 2. Most of the same people working on Octavia have worked on Neutron
 LBaaS v2.
 3. It's out of Neutron faster, which is good for Neutron and LBaaS.
 
 CONS
 1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
 another version of an LBaaS API.
 2. The Octavia API will also have a separate Operator API which will
 most likely only work with Octavia, not any vendors.
 
 The CONS are easily solvable, and IMHO the PROS greatly outweigh the
 CONS.
 
 This is just my opinion though and I'd like to hear back from as many as
 possible.  Add on to the PROS and CONS if wanted.
 
 If it is direction we can agree on going then we can add as a talking
 point in the advanced services spin out meeting:
 
 http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.
 VEq66HWx3UY
 
 Thanks,
 Brandon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Doug Wiegley
Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin-out someday.  Check.  Now, let’s jump to the interesting part.

 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that

There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron -
Services - LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.

Thanks,
Doug




On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,
 
 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:
 
 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)
 
 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.
 
 For the record, my answers are:
 
 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.
 
 Thanks,
 doug
 
 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:
 
 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out
of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.
 
 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
 exact duplicate.  Octavia will be growing more mature in stackforge at
a
 higher velocity than an Openstack project, so I expect by the time Kilo
 comes around it's API will be very mature.
 
 Octavia's API doesn't have to be called Octavia either.  It can be
 separated out and it can be called Openstack LBaaS, and the rest of
 Octavia (the actual brains of it) will just be another driver to
 Openstack LBaaS, which would retain the Octavia name.
 
 This is my PROS and CONS list to using Octavia's API as the spun out
 LBaaS:
 
 PROS
 1. Time will need to be spent on a spun out LBaaS's API anyway.
Octavia
 will already have this done.
 2. Most of the same people working on Octavia have worked on Neutron
 LBaaS v2.
 3. It's out of Neutron faster, which is good for Neutron and LBaaS.
 
 CONS
 1. The Octavia 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Sridar Kandaswamy (skandasw)
Hi Doug:

On 10/26/14, 6:01 PM, Doug Wiegley do...@a10networks.com wrote:

Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin-out someday.  Check.  Now, let’s jump to the interesting part.

 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that

There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron -
Services - LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.

In total agreement and I have heard these sentiments in multiple
conversations across multiple players.
It would be really fruitful to have a constructive conversation on this
across the services, and there are
enough similar issues to make this worthwhile.

Thanks

Sridar


Thanks,
Doug




On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,
 
 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:
 
 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)
 
 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.
 
 For the record, my answers are:
 
 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.
 
 Thanks,
 doug
 
 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:
 
 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out
of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.
 
 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
 exact duplicate.  Octavia will be growing more mature in stackforge at
a
 higher velocity than an Openstack project, so I expect by the time
Kilo
 comes around it's API will be very mature.
 
 Octavia's API doesn't have to be called Octavia either.  It can be
 separated out and it can be called Openstack LBaaS, and the rest of
 Octavia (the actual brains of it) will just be another driver to
 Openstack LBaaS, which would retain the Octavia name.
 
 This is my 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-26 Thread Sumit Naiksatam
Several people have been requesting that we resume the Advanced
Services' meetings [1] to discuss some of the topics being mentioned
in this thread. Perhaps it might help people to have a focussed
discussion on the topic of advanced services' spin-out prior to the
design summit session [2] in Paris. So I propose that we resume our
weekly IRC meetings starting this Wednesday (Oct 29th), 17.30 UTC on
#openstack-meeting-3.

Thanks,
~Sumit.

[1] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
[2] 
http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VE3Ukot4r4y

On Sun, Oct 26, 2014 at 7:55 PM, Sridar Kandaswamy (skandasw)
skand...@cisco.com wrote:
 Hi Doug:

 On 10/26/14, 6:01 PM, Doug Wiegley do...@a10networks.com wrote:

Hi Brandon,

 4. I brought this up now so that we can decide whether we want to
 discuss it at the advanced services spin out session.  I don't see the
 harm in opinions being discussed before the summit, during the summit,
 and more thoroughly after the summit.

I agree with this sentiment.  I’d just like to pull-up to the decision
level, and if we can get some consensus on how we move forward, we can
bring a concrete plan to the summit instead of 40 minutes of Kumbaya.  We
love each other.  Check.  Things are going to change sometime.  Check.  We
might spin-out someday.  Check.  Now, let’s jump to the interesting part.

 3. The main reason a spin out makes sense from Neutron is that the scope
 for Neutron is too large for the attention advances services needs from
 the Neutron Core.  If all of advanced services spins out, I see that

There is merit here, but consider the sorts of things that an advanced
services framework should be doing:

- plugging into neutron ports, with all manner of topologies
- service VM handling
- plugging into nova-network
- service chaining
- applying things like security groups to services

… this is all stuff that Octavia is talking about implementing itself in a
basically defensive manner, instead of leveraging other work.  And there
are specific reasons for that.  But, maybe we can at least take steps to
not be incompatible about it.  Or maybe there is a hierarchy of Neutron -
Services - LB, where we’re still spun out, but not doing it in a way that
we have to re-implement the world all the time.  It’s at least worth a
conversation or three.

 In total agreement and I have heard these sentiments in multiple
 conversations across multiple players.
 It would be really fruitful to have a constructive conversation on this
 across the services, and there are
 enough similar issues to make this worthwhile.

 Thanks

 Sridar


Thanks,
Doug




On 10/26/14, 6:35 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Good questions Doug.  My answers are as follows:

1. Yes
2. Some time after Kilo (same as I don't know when)
3. The main reason a spin out makes sense from Neutron is that the scope
for Neutron is too large for the attention advances services needs from
the Neutron Core.  If all of advanced services spins out, I see that
repeating itself within an advanced services project.  More and more
advanced services will get added in and the scope will become too
large.  There would definitely be benefits to it though, but I think we
would end up being right where we are today.
4. I brought this up now so that we can decide whether we want to
discuss it at the advanced services spin out session.  I don't see the
harm in opinions being discussed before the summit, during the summit,
and more thoroughly after the summit.

Yes the brunt of the time will not be spent on the API, but since it
seemed like an opportunity to kill two birds with one stone, I figured
it warranted a discussion.

Thanks,
Brandon

On Sun, 2014-10-26 at 23:15 +, Doug Wiegley wrote:
 Hi all,

 Before we get into the details of which API goes where, I’d like to see
us
 answer the questions of:

 1. Are we spinning out?
 2. When?
 3. With or without the rest of advanced services?
 4. Do we want to wait until we (the royal “we” of “the Neutron team”)
have
 had the Paris summit discussions on vendor split-out and adv. services
 spinout before we answer those questions?  (Yes, that question is
leading.)

 To me, the “where does the API live” is an implementation detail, and
not
 where the time will need to be spent.

 For the record, my answers are:

 1. Yes.
 2. I don’t know.
 3. I don’t know; this needs some serious discussion.
 4. Yes.

 Thanks,
 doug

 On 10/24/14, 3:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out
of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.
 
 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, 

[openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Brandon Logan
With the recent talk about advanced services spinning out of Neutron,
and the fact most of the LBaaS community has wanted LBaaS to spin out of
Neutron, I wanted to bring up a possibility and gauge interest and
opinion on this possibility.

Octavia is going to (and has) an API.  The current thinking is that an
Octavia driver will be created in Neutron LBaaS that will make
requests to the Octavia API.  When LBaaS spins out of Neutron, it will
need a standalone API.  Octavia's API seems to be a good solution to
this.  It will support vendor drivers much like the current Neutron
LBaaS does.  It has a similar API to Neutron LBaaS v2, but it's not an
exact duplicate.  Octavia will be growing more mature in stackforge at a
higher velocity than an Openstack project, so I expect by the time Kilo
comes around its API will be very mature.

Octavia's API doesn't have to be called Octavia either.  It can be
separated out and it can be called Openstack LBaaS, and the rest of
Octavia (the actual brains of it) will just be another driver to
Openstack LBaaS, which would retain the Octavia name.

This is my PROS and CONS list to using Octavia's API as the spun out
LBaaS:

PROS
1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
will already have this done.
2. Most of the same people working on Octavia have worked on Neutron
LBaaS v2.
3. It's out of Neutron faster, which is good for Neutron and LBaaS.

CONS
1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
another version of an LBaaS API.
2. The Octavia API will also have a separate Operator API which will
most likely only work with Octavia, not any vendors.

The CONS are easily solvable, and IMHO the PROS greatly outweigh the
CONS.

This is just my opinion though and I'd like to hear back from as many as
possible.  Add on to the PROS and CONS if wanted.

If it is a direction we can agree on, then we can add it as a talking
point in the advanced services spin out meeting:

http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY

Thanks,
Brandon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Octavia's API becoming spun out LBaaS

2014-10-24 Thread Stephen Balukoff
+1 to this, eh!

Though it sounds more like you're talking about spinning the Octavia user
API out of Octavia to become its own thing (i.e. Openstack LBaaS), and
then ensuring a standardized driver interface that vendors (including
Octavia) will interface with. It's sort of a six of one, half a dozen of
the other kind of deal.

To the pros, I would add:  Spin out from Neutron ensures that LBaaS uses
clean interfaces to the networking layer, and separation of concerns here
means that Neutron and LBaaS can evolve independently. (And testing and
failure modes, etc. all become easier with separation of concerns.)

One other thing to consider (not sure if pro or con): I know at Atlanta
there was a lot of talk around using the Neutron flavor framework to allow
for multiple vendors in a single installation as well as differentiated
product offerings for Operators. If / when LBaaS is spun out of Neutron,
LBaaS will still probably have need for something like Neutron flavors,
even if it isn't an equivalent implementation. (Noting of course, that no
implementation of Neutron flavors actually presently exists. XD )

Stephen


On Fri, Oct 24, 2014 at 2:47 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

 With the recent talk about advanced services spinning out of Neutron,
 and the fact most of the LBaaS community has wanted LBaaS to spin out of
 Neutron, I wanted to bring up a possibility and gauge interest and
 opinion on this possibility.

 Octavia is going to (and has) an API.  The current thinking is that an
 Octavia driver will be created in Neutron LBaaS that will make a
 requests to the Octavia API.  When LBaaS spins out of Neutron, it will
 need a standalone API.  Octavia's API seems to be a good solution to
 this.  It will support vendor drivers much like the current Neutron
 LBaaS does.  It has a similar API as Neutron LBaaS v2, but its not an
 exact duplicate.  Octavia will be growing more mature in stackforge at a
 higher velocity than an Openstack project, so I expect by the time Kilo
 comes around it's API will be very mature.

 Octavia's API doesn't have to be called Octavia either.  It can be
 separated out and it can be called Openstack LBaaS, and the rest of
 Octavia (the actual brains of it) will just be another driver to
 Openstack LBaaS, which would retain the Octavia name.

 This is my PROS and CONS list to using Octavia's API as the spun out
 LBaaS:

 PROS
 1. Time will need to be spent on a spun out LBaaS's API anyway.  Octavia
 will already have this done.
 2. Most of the same people working on Octavia have worked on Neutron
 LBaaS v2.
 3. It's out of Neutron faster, which is good for Neutron and LBaaS.

 CONS
 1. The Octavia API is dissimilar enough from Neutron LBaaS v2 to be yet
 another version of an LBaaS API.
 2. The Octavia API will also have a separate Operator API which will
 most likely only work with Octavia, not any vendors.

 The CONS are easily solvable, and IMHO the PROS greatly outweigh the
 CONS.

 This is just my opinion though and I'd like to hear back from as many as
 possible.  Add on to the PROS and CONS if wanted.

 If it is direction we can agree on going then we can add as a talking
 point in the advanced services spin out meeting:


 http://kilodesignsummit.sched.org/event/8a0b7c1d64883c08286e4446e163f1a6#.VEq66HWx3UY

 Thanks,
 Brandon
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-24 Thread Eichberger, German
Hi Jorge,

I agree completely with the points you make about the logs. We still feel that 
metering and logging are two different problems. The ceilometer community has 
a proposal on how to meter LBaaS (see 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/lbaas_metering.html)
 and we at HP think that those values are sufficient for us for the time 
being. 

I think our discussion is mostly about connection logs which are emitted in some 
way from the amphorae (e.g. haproxy logs). Since they are customers' logs we need to 
explore on our end the privacy implications (I assume at RAX you have controls 
in place to make sure that there is no violation :-). Also I need to check whether 
our central logging system is scalable enough and whether we can send logs there 
without creating security holes.

Another possibility is to ship our amphora agent logs, syslog-style, to a central 
system to help with troubleshooting and debugging. Those could be sufficiently 
anonymized to avoid privacy issues. What are your thoughts on logging those?
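
For illustration only, a minimal sketch of the kind of anonymization being
suggested: zero the client address in haproxy-style log lines before they
leave the amphora. The log path and the assumption that the first IP on the
line is the client address are placeholders, not anything decided in this
thread:

    import re

    CLIENT_IP = re.compile(r'\b(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}\b')

    def anonymize(line):
        # Zero the last octet of the first IP on the line (assumed to be the
        # client address) so the entry is still usable for debugging but no
        # longer identifies an individual client.
        return CLIENT_IP.sub(r'\1.0', line, count=1)

    with open('/var/log/haproxy.log') as src:      # path is an assumption
        for line in src:
            print(anonymize(line), end='')         # stand-in for the real log shipper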

Thanks,
German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Thursday, October 23, 2014 3:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide more 
insight into you usage requirements? Also, I'd like to clarify a few points 
related to using logging.

I am advocating that logs be used for multiple purposes, including billing. 
Billing requirements are different that connection logging requirements. 
However, connection logging is a very accurate mechanism to capture billable 
metrics and thus, is related. My vision for this is something like the 
following:

- Capture logs in a scalable way (i.e. capture logs and put them on a separate 
scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and send 
them on their merry way to cielometer or whatever service an operator will be 
using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything from 
indefinitely to not at all. Rackspace is planing on keeping them for a certain 
period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns 
on the connection logging feature for their load balancer it will already have 
a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually after a 
tragic lb event). By already capturing the logs I'm sure customers will be 
extremely happy to see that there are already X days worth of logs they can 
immediately sift through.
B) Operators and their support teams can leverage logs when providing 
service to their customers. This is huge for finding issues and resolving them 
quickly.
C) Albeit a minor point, building support for logs from the get-go 
mitigates capacity management uncertainty. My example earlier was the extreme 
case of every customer turning on logging at the same time. While unlikely, I 
would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my 
experience, those tend to be more complex than what I am advocating and without 
the added benefits listed above. An understanding of HP's desires on this 
matter will hopefully get this to a point where we can start working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an API 
call that returns real-time data such as this == 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From:  Eichberger, German german.eichber...@hp.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Wednesday, October 22, 2014 2:41 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


Hi Jorge,
 
Good discussion so far + glad to have you back :)
 
I am not a big fan of using logs for billing information since 
ultimately (at least at HP) we need to pump it into ceilometer. So I am 
envisioning either the  amphora (via a proxy) to pump it straight into 
that system or we collect it on the controller and pump it from there.
 
Allowing/enabling logging creates some requirements on the hardware, 
mainly, that they can handle the IO coming from logging. Some operators 
might choose to  hook up very cheap and non performing disks which 
might not be able to deal with the log traffic. So I would suggest that 
there is some rate limiting on the log output to help with that.
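
Purely as an illustration of the rate limiting suggested here (the numbers
and names are invented; in practice this might simply be rsyslog's built-in
rate limiting on the amphora), a token-bucket style limiter could look like:

    import time

    class LogRateLimiter:
        # Simple token bucket: allow up to `burst` lines at once and
        # `lines_per_sec` sustained; anything beyond that gets dropped
        # (or just counted) instead of hitting the disk.
        def __init__(self, lines_per_sec=1000, burst=5000):
            self.rate, self.capacity = lines_per_sec, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False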

 
Thanks,
German
 
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]

Sent: Wednesday

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-23 Thread Jorge Miramontes
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different than connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and is thus related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to ceilometer or whatever service an operator
will be using for billing purposes (a rough sketch of such a pass is below,
after this list).
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planning on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!
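
To make the processing step above concrete, here is a rough sketch of an
hourly pass that turns raw haproxy logs into per-listener byte counts. The
field positions and the idea of handing the result to a publish() call are
assumptions for illustration only, not a proposed implementation:

    import collections

    def usage_from_logs(path):
        # Field positions assume haproxy's default HTTP log format behind a
        # standard syslog prefix; adjust them to whatever format is configured.
        usage = collections.Counter()
        with open(path) as logs:
            for line in logs:
                fields = line.split()
                try:
                    frontend, octets = fields[7], int(fields[11])
                except (IndexError, ValueError):
                    continue            # skip lines that don't parse
                usage[frontend] += octets
        return usage                    # {listener: bytes served this period}

    # Hourly cron/worker: publish(usage_from_logs('/var/log/haproxy.log.1')),
    # where publish() is whatever billing/metering hand-off an operator uses.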

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex than what I am advocating and
without the added benefits listed above. An understanding of HP's desires
on this matter will hopefully get this to a point where we can start
working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an
API call that returns real-time data such as this ==
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From:  Eichberger, German german.eichber...@hp.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Wednesday, October 22, 2014 2:41 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


Hi Jorge,
 
Good discussion so far + glad to have you back :)
 
I am not a big fan of using logs for billing information since ultimately
(at least at HP) we need to pump it into ceilometer. So I am envisioning
either the
 amphora (via a proxy) to pump it straight into that system or we collect
it on the controller and pump it from there.
 
Allowing/enabling logging creates some requirements on the hardware,
mainly, that they can handle the IO coming from logging. Some operators
might choose to
 hook up very cheap and non performing disks which might not be able to
deal with the log traffic. So I would suggest that there is some rate
limiting on the log output to help with that.

 
Thanks,
German
 
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]

Sent: Wednesday, October 22, 2014 6:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


 
Hey Stephen (and Robert),

 

For real-time usage I was thinking something similar to what you are
proposing. Using logs for this would be overkill IMO so your suggestions
were what I was
 thinking of starting with.

 

As far as storing logs is concerned I was definitely thinking of
offloading these onto separate storage devices. Robert, I totally hear
you on the scalability
 part as our current LBaaS setup generates TB of request logs. I'll start
planning out a spec and then I'll let everyone chime in there. I just
wanted to get a general feel for the ideas I had mentioned. I'll also
bring it up in today's meeting.

 

Cheers,

--Jorge




 

From:
Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

 

Hi Jorge!

 

Welcome back, eh! You've been missed.

 

Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Stephen Balukoff
Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having concrete requirements for logging,
eh. Once this discussion is nearing a conclusion, could you write up the
specifics of logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as
there doesn't seem to be high demand for it, and it certainly won't be
supported in v 0.5 of Octavia (and maybe not in v1 or v2 either, unless we
see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding
getting this from a combination of iptables and / or the haproxy stats
interface. Were you thinking something different that involves on-the-fly
analysis of the logs or something?  (I tend to find that logs are great for
non-real time data, but can often be lacking if you need, say, a gauge like
'currently open connections' or something.)
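
For what it's worth, a gauge like "currently open connections" can be read
straight from haproxy's stats socket; a rough sketch follows (the socket path
is an assumption, and the stats socket has to be enabled in the haproxy
configuration for this to work):

    import csv, io, socket

    def current_sessions(sock_path='/var/run/haproxy.sock'):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b'show stat\n')
        raw = b''
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            raw += chunk
        s.close()
        # The reply is CSV with a leading '# ' on the header row; 'scur' is
        # the current session count for each frontend/backend row.
        rows = csv.DictReader(io.StringIO(raw.decode().lstrip('# ')))
        return {(r['pxname'], r['svname']): int(r['scur'])
                for r in rows if r.get('scur')}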

One other thing: If there's a chance we'll be storing logs on the amphorae
themselves, then we need to have log rotation as part of the configuration
here. It would be silly to have an amphora failure just because its
ephemeral disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:

 Hey Octavia folks!


 First off, yes, I'm still alive and kicking. :)

 I'd like to start a conversation on usage requirements and have a few
 suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
 based protocols, we inherently enable connection logging for load
 balancers for several reasons:

 1) We can use these logs as the raw and granular data needed to track
 usage. With logs, the operator has flexibility as to what usage metrics
 they want to bill against. For example, bandwidth is easy to track and can
 even be split into header and body data so that the provider can choose if
 they want to bill on header data or not. Also, the provider can determine
 if they will bill their customers for failed requests that were the fault
 of the provider themselves. These are just a few examples; the point is
 the flexible nature of logs.

 2) Creating billable usage from logs is easy compared to other options
 like polling. For example, in our current LBaaS iteration at Rackspace we
 bill partly on average concurrent connections. This is based on polling
 and is not as accurate as it possibly can be. It's very close, but it
 doesn't get more accurate that the logs themselves. Furthermore, polling
 is more complex and uses up resources on the polling cadence.

 3) Enabling logs for all load balancers can be used for debugging, support
 and audit purposes. While the customer may or may not want their logs
 uploaded to swift, operators and their support teams can still use this
 data to help customers out with billing and setup issues. Auditing will
 also be easier with raw logs.

 4) Enabling logs for all load balancers will help mitigate uncertainty in
 terms of capacity planning. Imagine if every customer suddenly enabled
 logs without it ever being turned on. This could produce a spike in
 resource utilization that will be hard to manage. Enabling logs from the
 start means we are certain as to what to plan for other than the nature of
 the customer's traffic pattern.

 Some Cons I can think of (please add more as I think the pros outweigh the
 cons):

 1) If we ever add UDP based protocols then this model won't work.  1% of
 our load balancers at Rackspace are UDP based so we are not looking at
 using this protocol for Octavia. I'm more of a fan of building a really
 good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
 a different problem. For me different problem == different product.

 2) I'm assuming HA Proxy. Thus, if we choose another technology for the
 amphora then this model may break.


 Also, and more generally speaking, I have categorized usage into three
 categories:

 1) Tracking usage - this is usage that will be used by operators and
 support teams to gain insight into what load balancers are doing in an
 attempt to monitor potential issues.
 2) Billable usage - this is usage that is a subset of tracking usage used
 to bill customers.
 3) Real-time usage - this is usage that should be exposed via the API so
 that customers can make decisions that affect their configuration (ex.
 Based off of the number of connections my web heads can handle when
 should I add another node to my pool?).

 These are my preliminary thoughts, and I'd love to gain insight into what
 the community thinks. I have built about 3 usage collection systems thus
 far (1 with Brandon) and have learned a lot. Some basic rules I have
 discovered with collecting usage are:

 1) Always collect granular usage as it paints a picture of what actually
 happened. Massaged/un-granular usage == lost information.
 2) Never imply, always be explicit. Implications usually stem from bad
 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Robert van Leeuwen
 I'd like to start a conversation on usage requirements and have a few
 suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
 based protocols, we inherently enable connection logging for load
 balancers for several reasons:

Just a request from the operator side of things:
Please think about the scalability when storing all logs.

e.g. we are currently logging HTTP requests to one load-balanced application
(that would be a fit for LBaaS).
It is about 500 requests per second, which adds up to 40GB per day (in
Elasticsearch).
Please make sure whatever solution is chosen can cope with machines doing
1000s of requests per second...
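
As a rough back-of-envelope on those numbers (assuming they hold steady over a
day, purely for scale):

    # 500 requests/second sustained for a day
    requests_per_day = 500 * 86400                 # 43,200,000 requests
    stored_per_request = 40e9 / requests_per_day   # roughly 925 bytes per request
    # At several thousand requests/second the same per-request footprint means
    # hundreds of GB per day, per application, before any replication.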

Cheers,
Robert van Leeuwen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Jorge Miramontes
Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP based protocols then this model won't work.  1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-22 Thread Eichberger, German
Hi Jorge,

Good discussion so far + glad to have you back :)

I am not a big fan of using logs for billing information since ultimately (at 
least at HP) we need to pump it into ceilometer. So I am envisioning either the 
amphora (via a proxy) to pump it straight into that system or we collect it on 
the controller and pump it from there.

Allowing/enabling logging creates some requirements on the hardware, mainly
that it can handle the IO coming from logging. Some operators might choose to
hook up very cheap and non-performing disks which might not be able to deal
with the log traffic. So I would suggest that there is some rate limiting on
the log output to help with that.
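
As a sketch of what such rate limiting could look like (a simple token bucket
in front of whatever ships the log lines off the amphora; the numbers are
illustrative, not a recommendation):

    import time

    class LogRateLimiter(object):
        """Allow at most `lines_per_second` log lines through, with some burst."""

        def __init__(self, lines_per_second=1000, burst=5000):
            self.rate = float(lines_per_second)
            self.burst = float(burst)
            self.tokens = self.burst
            self.last = time.time()

        def allow(self):
            now = time.time()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # caller drops (or at least counts) the line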

Thanks,
German

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Wednesday, October 22, 2014 6:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hey Stephen (and Robert),

For real-time usage I was thinking something similar to what you are proposing. 
Using logs for this would be overkill IMO so your suggestions were what I was 
thinking of starting with.

As far as storing logs is concerned I was definitely thinking of offloading 
these onto separate storage devices. Robert, I totally hear you on the 
scalability part as our current LBaaS setup generates TB of request logs. I'll 
start planning out a spec and then I'll let everyone chime in there. I just 
wanted to get a general feel for the ideas I had mentioned. I'll also bring it 
up in today's meeting.

Cheers,
--Jorge

From: Stephen Balukoff sbaluk...@bluebox.net
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 22, 2014 4:04 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

Hi Jorge!

Welcome back, eh! You've been missed.

Anyway, I just wanted to say that your proposal sounds great to me, and it's 
good to finally be closer to having concrete requirements for logging, eh. Once 
this discussion is nearing a conclusion, could you write up the specifics of 
logging into a specification proposal document?

Regarding the discussion itself: I think we can ignore UDP for now, as there 
doesn't seem to be high demand for it, and it certainly won't be supported in v 
0.5 of Octavia (and maybe not in v1 or v2 either, unless we see real demand).

Regarding the 'real-time usage' information: I have some ideas regarding 
getting this from a combination of iptables and / or the haproxy stats 
interface. Were you thinking something different that involves on-the-fly 
analysis of the logs or something?  (I tend to find that logs are great for 
non-real time data, but can often be lacking if you need, say, a gauge like 
'currently open connections' or something.)

One other thing: If there's a chance we'll be storing logs on the amphorae 
themselves, then we need to have log rotation as part of the configuration 
here. It would be silly to have an amphora failure just because its ephemeral 
disk fills up, eh.

Stephen

On Wed, Oct 15, 2014 at 4:03 PM, Jorge Miramontes 
jorge.miramon...@rackspace.com wrote:
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
I felt guilty after reading Vijay B.'s reply ☺.
My apologies for having replied in brief; here are my thoughts in detail.

Currently, LB configuration exposed via a floating IP is a 2-step operation.
The user has to first “create a VIP with a private IP” and then “create a FLIP and 
assign the FLIP to the private VIP”, which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make the DNATing operation 
part of the VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.
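
As a rough sketch of that split (the class and method names below are purely
illustrative, not the actual LBaaS driver interface):

    class AbstractLBDriver(object):
        """Illustrative base driver: default FLIP handling via gateway DNAT."""

        def implement_flip(self, vip, flip):
            # Step (1): keep the VIP on a private IP and ask Neutron to DNAT
            # the FLIP to it (hypothetical plugin helper, shown for shape only).
            self.plugin.associate_floating_ip(flip_id=flip['id'],
                                              port_id=vip['port_id'])

    class EdgeApplianceDriver(AbstractLBDriver):
        """Illustrative appliance driver: the appliance hosts the FLIP itself."""

        def implement_flip(self, vip, flip):
            # Step (2): configure the FLIP as the virtual server address; the
            # appliance answers ARP for it, so no private VIP or DNAT is needed.
            self.appliance.create_virtual_server(
                address=flip['floating_ip_address'],
                port=vip['protocol_port'])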

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All,

 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association.

 There is a need to utilize floating IPs(FLIP) and its API calls to
 associate a FLIP to the neutron port that we currently spin up.

 See DOCS here:

 
http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html

 Currently, LBaaS will make internal service calls (clean interface :/)
to create and attach a Neutron port.
 The VIP from this port is added to the Loadbalancer object of the Load
balancer

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

I'm currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port; I'm unsure of the meaning of 
hosting the FLIP directly on the LB.

Thank you for responses! There is definitely a more thought out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. ’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.   Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.   Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Vijay Venkatachalam
 I'm unsure the meaning of hosting FLIP directly on the LB.

There can be LB appliances (usually physical appliances) that sit at the edge 
and are connected to receive floating IP traffic.

In such a case, the VIP/virtual server with the FLIP can be configured in the LB 
appliance.
Meaning, the LB appliance is now the owner of the FLIP and will be responding to 
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

Im currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port, I'm unsure the meaning of 
hosting FLIP directly on the LB.

Thank you for responses! There is definitely a more thought out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. 's reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first create a VIP with a private IP and then creates a FLIP and 
assigns FLIP to private VIP which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-15 Thread Phillip Toohill
Ah, this makes sense. I guess I'm wondering more how that’s configured and whether 
it utilizes Neutron at all, and if it does, how it configures that.

I have some more research to do it seems ;)

Thanks for the clarification

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 1:33 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

 I'm unsure the meaning of hosting FLIP directly on the LB.

There can be LB appliances (usually physical appliance) that sit at the edge 
and is connected to receive floating IP traffic .

In such a case, the VIP/Virtual Server with FLIP  can be configured in the LB 
appliance.
Meaning, LB appliance is now the “owner” of the FLIP and will be responding to 
ARPs.


Thanks,
Vijay V.

From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
Sent: 15 October 2014 23:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

No worries :)

Could you possibly clarify what you mean by the FLIP hosted directly on the LB?

Im currently assuming the FLIP pools are designated in Neutron at this point 
and we would simply be associating with the VIP port, I'm unsure the meaning of 
hosting FLIP directly on the LB.

Thank you for responses! There is definitely a more thought out discussion to 
be had, and glad these ideas are being brought up now rather than later.

From: Vijay Venkatachalam vijay.venkatacha...@citrix.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, October 15, 2014 9:38 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

I felt guilty after reading Vijay B. ’s reply :).
My apologies to have replied in brief, here are my thoughts in detail.

Currently LB configuration exposed via floating IP is a 2 step operation.
User has to first “create a VIP with a private IP” and then “creates a FLIP and 
assigns FLIP to private VIP” which would result in a DNAT in the gateway.
The proposal is to combine these 2 steps, meaning make DNATing operation as 
part of VIP creation process.

While this seems to be a simple proposal I think we should consider finer 
details.
The proposal assumes that the FLIP is implemented by the gateway through a DNAT.
We should also be open to creating a VIP with a FLIP rather than a private IP.
Meaning FLIP will be hosted directly by the LB appliance.

In essence, LBaaS plugin will create the FLIP for the VIP and let the driver 
know about the FLIP that should get implemented.
The drivers should have a choice of implementing in its own way.
It could choose to

1.  Implement via the traditional route. Driver creates a private IP, 
implements VIP as a private IP in the lb appliance and calls Neutron to 
implement DNAT (FLIP to private VIP )

2.  Implement VIP as a FLIP directly on the LB appliance. Here there is no 
private IP involved.

Pros:
Not much changes in the LBaaS API, there is only one IP as part of the VIP.
We can ensure backward compatibility with the driver as well by having Step (1) 
implemented in abstract driver.

Thanks,
Vjay

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: 15 October 2014 05:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown

[openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-15 Thread Jorge Miramontes
Hey Octavia folks!


First off, yes, I'm still alive and kicking. :)

I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:

1) We can use these logs as the raw and granular data needed to track
usage. With logs, the operator has flexibility as to what usage metrics
they want to bill against. For example, bandwidth is easy to track and can
even be split into header and body data so that the provider can choose if
they want to bill on header data or not. Also, the provider can determine
if they will bill their customers for failed requests that were the fault
of the provider themselves. These are just a few examples; the point is
the flexible nature of logs.

2) Creating billable usage from logs is easy compared to other options
like polling. For example, in our current LBaaS iteration at Rackspace we
bill partly on average concurrent connections. This is based on polling
and is not as accurate as it possibly can be. It's very close, but it
doesn't get more accurate than the logs themselves. Furthermore, polling
is more complex and uses up resources on the polling cadence.

3) Enabling logs for all load balancers can be used for debugging, support
and audit purposes. While the customer may or may not want their logs
uploaded to swift, operators and their support teams can still use this
data to help customers out with billing and setup issues. Auditing will
also be easier with raw logs.

4) Enabling logs for all load balancers will help mitigate uncertainty in
terms of capacity planning. Imagine if every customer suddenly enabled
logs without it ever being turned on. This could produce a spike in
resource utilization that will be hard to manage. Enabling logs from the
start means we are certain as to what to plan for other than the nature of
the customer's traffic pattern.

Some Cons I can think of (please add more as I think the pros outweigh the
cons):

1) If we ever add UDP based protocols then this model won't work.  1% of
our load balancers at Rackspace are UDP based so we are not looking at
using this protocol for Octavia. I'm more of a fan of building a really
good TCP/HTTP/HTTPS based load balancer because UDP load balancing solves
a different problem. For me different problem == different product.

2) I'm assuming HA Proxy. Thus, if we choose another technology for the
amphora then this model may break.


Also, and more generally speaking, I have categorized usage into three
categories:

1) Tracking usage - this is usage that will be used by operators and
support teams to gain insight into what load balancers are doing in an
attempt to monitor potential issues.
2) Billable usage - this is usage that is a subset of tracking usage used
to bill customers.
3) Real-time usage - this is usage that should be exposed via the API so
that customers can make decisions that affect their configuration (ex.
Based off of the number of connections my web heads can handle when
should I add another node to my pool?).

These are my preliminary thoughts, and I'd love to gain insight into what
the community thinks. I have built about 3 usage collection systems thus
far (1 with Brandon) and have learned a lot. Some basic rules I have
discovered with collecting usage are:

1) Always collect granular usage as it paints a picture of what actually
happened. Massaged/un-granular usage == lost information.
2) Never imply, always be explicit. Implications usually stem from bad
assumptions.


Last but not least, we need to store every user and system load balancer
event such as creation, updates, suspension and deletion so that we may
bill on things like uptime and serve our customers better by knowing what
happened and when.
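
To make points 1) and 2) above concrete, here is a minimal sketch of turning
connection logs into billable bandwidth, assuming each shipped log record
already carries a listener id and separate header/body byte counters (the
field names are illustrative):

    import collections
    import json

    def billable_bandwidth(log_lines):
        """Sum header and body bytes per listener from JSON access-log records."""
        usage = collections.defaultdict(
            lambda: {'header_bytes': 0, 'body_bytes': 0})
        for line in log_lines:
            record = json.loads(line)
            usage[record['listener_id']]['header_bytes'] += record['bytes_header']
            usage[record['listener_id']]['body_bytes'] += record['bytes_body']
        return dict(usage)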


Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Susanne Balle
Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:

 Diagrams in jpeg format..

 On 10/12/14 10:06 PM, Phillip Toohill phillip.tooh...@rackspace.com
 wrote:

 Hello all,
 
 Heres some additional diagrams and docs. Not incredibly detailed, but
 should get the point across.
 
 Feel free to edit if needed.
 
 Once we come to some kind of agreement and understanding I can rewrite
 these more to be thorough and get them in a more official place. Also, I
 understand theres other use cases not shown in the initial docs, so this
 is a good time to collaborate to make this more thought out.
 
 Please feel free to ping me with any questions,
 
 Thank you
 
 
 Google DOCS link for FLIP folder:
 
  https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing
 
 -diagrams are draw.io based and can be opened from within Drive by
 selecting the appropriate application.
 
 On 10/7/14 2:25 PM, Brandon Logan brandon.lo...@rackspace.com wrote:
 
 I'll add some more info to this as well:
 
 Neutron LBaaS creates the neutron port for the VIP in the plugin layer
 before drivers ever have any control.  In the case of an async driver,
 it will then call the driver's create method, and then return to the
 user the vip info.  This means the user will know the VIP before the
 driver even finishes creating the load balancer.
 
 So if Octavia is just going to create a floating IP and then associate
 that floating IP to the neutron port, there is the problem of the user
 not ever seeing the correct VIP (which would be the floating iP).
 
 So really, we need to have a very detailed discussion on what the
 options are for us to get this to work for those of us intending to use
 floating ips as VIPs while also working for those only requiring a
 neutron port.  I'm pretty sure this will require changing the way V2
 behaves, but there's more discussion points needed on that.  Luckily, V2
 is in a feature branch and not merged into Neutron master, so we can
 change it pretty easily.  Phil and I will bring this up in the meeting
 tomorrow, which may lead to a meeting topic in the neutron lbaas
 meeting.
 
 Thanks,
 Brandon
 
 
 On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
  Hello All,
 
  I wanted to start a discussion on floating IP management and ultimately
  decide how the LBaaS group wants to handle the association.
 
  There is a need to utilize floating IPs(FLIP) and its API calls to
  associate a FLIP to the neutron port that we currently spin up.
 
  See DOCS here:
 
  
 
  http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
 
  Currently, LBaaS will make internal service calls (clean interface :/)
 to create and attach a Neutron port.
  The VIP from this port is added to the Loadbalancer object of the Load
 balancer configuration and returned to the user.
 
  This creates a bit of a problem if we want to associate a FLIP with the
 port and display the FLIP to the user instead of
  the port's VIP because the port is currently created and attached in the
 plugin and there is no code anywhere to handle the FLIP
  association.
 
  To keep this short and to the point:
 
  We need to discuss where and how we want to handle this association. I
 have a few questions to start it off.
 
  Do we want to add logic in the plugin to call the FLIP association API?
 
  If we have logic in the plugin should we have configuration that
 identifies whether to use/return the FLIP instead of the port VIP?
 
  Would we rather have logic for FLIP association in the drivers?
 
  If logic is in the drivers would we still return the port VIP to the
 user then later overwrite it with the FLIP?
  Or would we have configuration to not return the port VIP initially,
 but an additional query would show the associated FLIP.
 
 
  Is there an internal service call for this, and if so would we use it
 instead of API calls?
 
 
  There's plenty of other thoughts and questions to be asked and discussed
 in regards to FLIP handling;
  hopefully this will get us going. I'm certain I may not be completely
 understanding this, and
  it is the hope of this email to clarify any uncertainties.
 
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay B
Hi Phillip,


Adding my thoughts below. I’ll first answer the questions you raised with
what I think should be done, and then give my explanations to reason
through with those views.



1. Do we want to add logic in the plugin to call the FLIP association API?


  We should implement the logic in the new v2 extension and the plugin
layer as a single API call. We would need to add to the existing v2 API to
be able to do this. The best place to add this option of passing the FLIP
info/request to the VIP is in the VIP create and update API calls via new
parameters.


2. If we have logic in the plugin should we have configuration that
identifies whether to use/return the FLIP instead of the port VIP?


  Yes and no, in that we should return the complete result of the VIP
create/update/list/show API calls, in which we show the VIP internal IP,
but we also show the FLIP either as empty or having a FLIP uuid. External
users will anyway use only the FLIP, else they wouldn’t be able to reach
the LB and the VIP IP, but the APIs need to show both fields.
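
For instance, a VIP show result under that approach might look roughly like
the following, with the FLIP fields empty until one is attached (illustrative
only, not the actual v2 schema):

    vip = {
        'id': '<vip uuid>',          # placeholder
        'address': '10.0.0.12',      # internal IP on the VIP's neutron port
        'floating_ip_id': None,      # stays empty until a FLIP is associated
        'floating_ip_address': None,
        'status': 'ACTIVE',
    }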


3. Would we rather have logic for FLIP association in the drivers?


  This is the hardest part to decide. To understand this, we need to look
at two important drivers of LBaaS design:


 I)  The Neutron core plugin we’re using.

II) The different types of LB devices - physical, virtual standalone, and
virtual controlled by a management plane. This leads to different kinds of
LBaaS drivers and different kinds of interaction or the lack of it between
them and the core neutron plugin.


The reason we need to take into account both these is that port
provisioning as well as NATing for the FLIP to internal VIP IP will be
configured differently by the different network management/backend planes
that the plugins use, and the way drivers configure LBs can be highly
impacted by this.


For example, we can have an NSX infrastructure that will implement the FLIP
to internal IP conversion in the logical router module which sits pretty
much outside of Openstack’s realm, using openflow. Or we can use lighter
solutions directly on the hypervisor that still employ open flow entries
without actually having a dedicated logical routing module. Neither will
matter much if we are in a position to have them deploy our networking for
us, i.e., in the cases of us using virtual LBs sitting on compute nodes.
But if we have a physical LB, the neutron plugins cannot do much of the
network provisioning work for us, typically because physical LBs usually
sit outside of the cloud, and are among the earliest points of contact from
the external world.


This already nudges us to consider putting the FLIP provisioning
functionality in the driver. However, consider again more closely the major
ways in which LBaaS drivers talk to LB solutions today depending on II) :


 a) LBaaS drivers that talk to a virtual LB device on a compute node,
directly.

b) LBaaS drivers that talk to a physical LB device (or a virtual LB sitting
outside the cloud) directly.

c) LBaaS drivers that talk to a management plane like F5’s BigIQ, or
Netscaler’s NCC, or as in our case, Octavia, that try to provide tenant
based provisioning of virtual LBs.

d) The HAProxy reference namespace driver.


d) is really a PoC use case, and we can forget it. Let’s consider a), b)
and c).


If we use a) or b), we must assume that the required routing for the
virtual LB has been setup correctly, either already through nova or
manually. So we can afford to do our FLIP plumbing in the neutron plugin
layer, but, driven by the driver - how? - typically, after the VIP is
successfully created on the LB, and just before the driver updates the
VIP’s status as ACTIVE, it can create the FLIP. Of course, if the FLIP
provisioning fails for any reason, the VIP still stands. It’ll be empty in
the result, and the API will error out saying “VIP created but FLIP
creation failed”. It must be manually deleted by another delete VIP call.
We can’t afford to provision a FLIP before a VIP is active, for external
traffic shouldn’t be taken while the VIP isn’t up yet. If the lines are
getting hazy right now because of this callback model, let’s just focus on
the point that we’re initiating FLIP creation in the driver layer while the
code sits in the plugin layer because it will need to update the database.
But in absolute terms, we’re doing it in the driver.
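
A rough sketch of that ordering (all method names here are hypothetical; the
point is only the sequencing and the failure mode described above):

    class DriverCompletionHook(object):
        """Illustrative only: FLIP plumbing happens after the VIP is ACTIVE."""

        def __init__(self, plugin):
            self.plugin = plugin  # the LBaaS plugin/db layer (placeholder)

        def on_vip_provisioned(self, context, vip):
            self.plugin.update_vip_status(context, vip['id'], 'ACTIVE')
            try:
                flip = self.plugin.create_floating_ip_for_vip(context, vip)
                self.plugin.associate_flip_with_vip(context, vip['id'],
                                                    flip['id'])
            except Exception:
                # The VIP stands; report "VIP created but FLIP creation failed"
                # and leave cleanup to an explicit delete VIP call.
                self.plugin.record_flip_failure(context, vip['id'])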


It is use case c) that is interesting. In this case, we should do all
neutron based provisioning neither in the driver, nor in the plugin in
neutron, rather, we should do this in Octavia, and in the Octavia
controller to be specific. This is very important to note, because if
customers are using this deployment (which today has the potential to be
way greater in the near future than any other model simply because of the
sheer existing customer base), we can’t be creating the FLIP in the plugin
layer and have the controller reattempt it. Indeed, the controllers can
change their code to not attempt this, but 

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-14 Thread Vijay Venkatachalam
Thanks for the doc.

The floating IP could be hosted directly by the lb backend/lb appliance as well?
It depends on the appliance deployment.

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: 14 October 2014 21:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

Nice diagrams. :-) Thanks. Susanne

On Mon, Oct 13, 2014 at 4:18 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
Diagrams in jpeg format..

On 10/12/14 10:06 PM, Phillip Toohill 
phillip.tooh...@rackspace.com
wrote:

Hello all,

Heres some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand theres other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan 
brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All,

 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association.

 There is a need to utilize floating IPs(FLIP) and its API calls to
 associate a FLIP to the neutron port that we currently spin up.

 See DOCS here:

 
http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html

 Currently, LBaaS will make internal service calls (clean interface :/)
to create and attach a Neutron port.
 The VIP from this port is added to the Loadbalancer object of the Load
balancer configuration and returned to the user.

 This creates a bit of a problem if we want to associate a FLIP with the
port and display the FLIP to the user instead of
 the ports VIP because the port is currently created and attached in the
plugin and there is no code anywhere to handle the FLIP
 association.

 To keep this short and to the point:

 We need to discuss where and how we want to handle this association. I
have a few questions to start it off.

 Do we want to add logic in the plugin to call the FLIP association API?

 If we have logic in the plugin should we have configuration that
identifies weather to use/return the FLIP instead the port VIP?

 Would we rather have logic for FLIP association in the drivers?

 If logic is in the drivers would we still return the port VIP to the
user then later overwrite it with the FLIP?
 Or would we have configuration to not return the port VIP initially,
but an additional query would show the associated FLIP.


 Is there an internal service call for this, and if so would we use it
instead of API calls?


 Theres plenty of other thoughts and questions to be asked and discussed
in regards to FLIP handling,
 hopefully this will get us going. I'm certain I may not be completely
understanding this and
 is the hopes of this email to clarify any uncertainties.





 ___
 OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http

Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-12 Thread Phillip Toohill
Hello all, 

Here's some additional diagrams and docs. Not incredibly detailed, but
should get the point across.

Feel free to edit if needed.

Once we come to some kind of agreement and understanding I can rewrite
these more to be thorough and get them in a more official place. Also, I
understand there's other use cases not shown in the initial docs, so this
is a good time to collaborate to make this more thought out.

Please feel free to ping me with any questions,

Thank you


Google DOCS link for FLIP folder:
https://drive.google.com/folderview?id=0B2r4apUP7uPwU1FWUjJBN0NMbWMusp=sharing

-diagrams are draw.io based and can be opened from within Drive by
selecting the appropriate application.

On 10/7/14 2:25 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating iP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating ips as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there's more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All, 
 
 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association.
 
 There is a need to utilize floating IPs(FLIP) and its API calls to
 associate a FLIP to the neutron port that we currently spin up.
 
 See DOCS here:
 
  
http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
 
 Currently, LBaaS will make internal service calls (clean interface :/)
to create and attach a Neutron port.
 The VIP from this port is added to the Loadbalancer object of the Load
balancer configuration and returned to the user.
 
 This creates a bit of a problem if we want to associate a FLIP with the
port and display the FLIP to the user instead of
 the ports VIP because the port is currently created and attached in the
plugin and there is no code anywhere to handle the FLIP
 association. 
 
 To keep this short and to the point:
 
 We need to discuss where and how we want to handle this association. I
have a few questions to start it off.
 
 Do we want to add logic in the plugin to call the FLIP association API?
 
 If we have logic in the plugin should we have configuration that
identifies weather to use/return the FLIP instead the port VIP?
 
 Would we rather have logic for FLIP association in the drivers?
 
 If logic is in the drivers would we still return the port VIP to the
user then later overwrite it with the FLIP?
 Or would we have configuration to not return the port VIP initially,
but an additional query would show the associated FLIP.
 
 
 Is there an internal service call for this, and if so would we use it
instead of API calls?
 
 
 Theres plenty of other thoughts and questions to be asked and discussed
in regards to FLIP handling,
 hopefully this will get us going. I'm certain I may not be completely
understanding this and
 is the hopes of this email to clarify any uncertainties.
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-07 Thread Brandon Logan
I'll add some more info to this as well:

Neutron LBaaS creates the neutron port for the VIP in the plugin layer
before drivers ever have any control.  In the case of an async driver,
it will then call the driver's create method, and then return to the
user the vip info.  This means the user will know the VIP before the
driver even finishes creating the load balancer.
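
(As a rough illustration of that ordering, here is a minimal, self-contained
Python sketch; the function names, addresses, and the threaded "driver" are
invented for illustration and are not the actual neutron-lbaas code.)

    import threading
    import time

    def fake_async_driver_create(lb):
        # Stand-in for an asynchronous driver: only once it finishes would it
        # know about (or create) a floating IP for the load balancer.
        time.sleep(2)
        lb['floating_ip'] = '203.0.113.10'  # never reported back to the caller

    def create_loadbalancer():
        # 1. The "plugin layer" allocates the neutron port and records its
        #    fixed IP as the VIP.
        lb = {'vip_address': '10.0.0.5'}

        # 2. The driver create is kicked off asynchronously and returns
        #    immediately.
        threading.Thread(target=fake_async_driver_create, args=(lb,)).start()

        # 3. The caller already gets the port's fixed IP here, so a floating
        #    IP attached later by the driver is never the address the user sees.
        return {'vip_address': lb['vip_address']}

    print(create_loadbalancer())  # {'vip_address': '10.0.0.5'}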

So if Octavia is just going to create a floating IP and then associate
that floating IP to the neutron port, there is the problem of the user
not ever seeing the correct VIP (which would be the floating IP).

So really, we need to have a very detailed discussion on what the
options are for us to get this to work for those of us intending to use
floating IPs as VIPs while also working for those only requiring a
neutron port.  I'm pretty sure this will require changing the way V2
behaves, but there are more discussion points needed on that.  Luckily, V2
is in a feature branch and not merged into Neutron master, so we can
change it pretty easily.  Phil and I will bring this up in the meeting
tomorrow, which may lead to a meeting topic in the neutron lbaas
meeting.

Thanks,
Brandon


On Mon, 2014-10-06 at 17:40 +, Phillip Toohill wrote:
 Hello All, 
 
 I wanted to start a discussion on floating IP management and ultimately
 decide how the LBaaS group wants to handle the association. 
 
 There is a need to utilize floating IPs(FLIP) and its API calls to
 associate a FLIP to the neutron port that we currently spin up. 
 
 See DOCS here:
 
  http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
 
 Currently, LBaaS will make internal service calls (clean interface :/) to 
 create and attach a Neutron port. 
 The VIP from this port is added to the Loadbalancer object of the Load 
 balancer configuration and returned to the user.
 
 This creates a bit of a problem if we want to associate a FLIP with the port 
 and display the FLIP to the user instead of
 the port's VIP because the port is currently created and attached in the 
 plugin and there is no code anywhere to handle the FLIP
 association. 
 
 To keep this short and to the point:
 
 We need to discuss where and how we want to handle this association. I have a 
 few questions to start it off. 
 
 Do we want to add logic in the plugin to call the FLIP association API?
 
 If we have logic in the plugin should we have configuration that identifies 
 whether to use/return the FLIP instead of the port VIP?
 
 Would we rather have logic for FLIP association in the drivers?
 
 If logic is in the drivers would we still return the port VIP to the user 
 then later overwrite it with the FLIP? 
 Or would we have configuration to not return the port VIP initially, but an 
 additional query would show the associated FLIP.
 
 
 Is there an internal service call for this, and if so would we use it instead 
 of API calls? 
 
 
 There's plenty of other thoughts and questions to be asked and discussed in 
 regards to FLIP handling; hopefully this will get us going. I'm certain I may 
 not be completely understanding this, and it is the hope of this email to 
 clarify any uncertainties.
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Floating IP management

2014-10-06 Thread Phillip Toohill
Hello All, 

I wanted to start a discussion on floating IP management and ultimately
decide how the LBaaS group wants to handle the association. 

There is a need to utilize floating IPs(FLIP) and its API calls to
associate a FLIP to the neutron port that we currently spin up. 

See DOCS here:

 http://docs.openstack.org/api/openstack-network/2.0/content/floatingip_create.html
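
(For reference, the FLIP association the API document above describes can be
driven with python-neutronclient roughly as follows; this is a hedged sketch,
and the network/port IDs and credentials are placeholders, not values from
this thread.)

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Create a floating IP on the external network and bind it to the load
    # balancer's VIP port in a single call.
    flip = neutron.create_floatingip({
        'floatingip': {
            'floating_network_id': 'EXTERNAL_NET_ID',  # placeholder
            'port_id': 'VIP_PORT_ID',                  # placeholder
        }
    })['floatingip']

    print(flip['floating_ip_address'])  # the address a user would be shown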

Currently, LBaaS will make internal service calls (clean interface :/) to 
create and attach a Neutron port. 
The VIP from this port is added to the Loadbalancer object of the Load balancer 
configuration and returned to the user.

This creates a bit of a problem if we want to associate a FLIP with the port 
and display the FLIP to the user instead of
the port's VIP because the port is currently created and attached in the plugin 
and there is no code anywhere to handle the FLIP
association. 

To keep this short and to the point:

We need to discuss where and how we want to handle this association. I have a 
few questions to start it off. 

Do we want to add logic in the plugin to call the FLIP association API?

If we have logic in the plugin should we have configuration that identifies 
whether to use/return the FLIP instead of the port VIP?

Would we rather have logic for FLIP association in the drivers?

If logic is in the drivers would we still return the port VIP to the user then 
later overwrite it with the FLIP? 
Or would we have configuration to not return the port VIP initially, but an 
additional query would show the associated FLIP.


Is there an internal service call for this, and if so would we use it instead 
of API calls? 
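
(Touching on the configuration question above: if the decision is to gate this
in the plugin, the knob might look something like the oslo.config sketch below;
the option name and group are invented here, not an existing setting.)

    from oslo_config import cfg

    flip_opts = [
        cfg.BoolOpt('use_floating_ip_as_vip',
                    default=False,
                    help='If True, associate a floating IP with the VIP port '
                         'and return the floating IP to the user instead of '
                         'the port fixed IP.'),
    ]

    # Hypothetical group name; registration would happen wherever the plugin
    # loads its service configuration.
    cfg.CONF.register_opts(flip_opts, group='lbaas_flip')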


There's plenty of other thoughts and questions to be asked and discussed in 
regards to FLIP handling; hopefully this will get us going. I'm certain I may 
not be completely understanding this, and it is the hope of this email to 
clarify any uncertainties.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Avishay Balderman
+1

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Tuesday, September 02, 2014 8:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][lbaas][octavia]

Hi Susanne and everyone,

My opinions are that keeping it in stackforge until it gets mature is the best 
solution.  I'm pretty sure we can all agree on that.  Whenever it is mature 
then, and only then, we should try to get it into openstack one way or another. 
 If Neutron LBaaS v2 is still incubated then it should be relatively easy to 
get it in that codebase.  If Neutron LBaaS has already spun out, even easier 
for us.  If we want Octavia to just become an openstack project all its own 
then that will be the difficult part.

I think the best course of action is to get Octavia itself into the same 
codebase as LBaaS (Neutron or spun out).  They do go together, and the 
maintainers will almost always be the same for both.  This makes even more 
sense when LBaaS is spun out into its own project.

I really think all of the answers to these questions will fall into place when 
we actually deliver a product that we are all wanting and talking about 
delivering with Octavia.  Once we prove that we can all come together as a 
community and manage a product from inception to maturity, we will then have 
the respect and trust to do what is best for an Openstack LBaaS product.

Thanks,
Brandon

On Mon, 2014-09-01 at 10:18 -0400, Susanne Balle wrote:
 Kyle, Adam,
 
  
 
 Based on this thread Kyle is suggesting the follow moving forward
 plan: 
 
  
 
 1) We incubate Neutron LBaaS V2 in the “Neutron” incubator “and freeze 
 LBaaS V1.0”
 2) “Eventually” It graduates into a project under the networking 
 program.
 3) “At that point” We deprecate Neutron LBaaS v1.
 
  
 
 The words in “xx” are words I added to make sure I/We understand the 
 whole picture.
 
  
 
 And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 / 
 Radware / A10 / etc appliances which is a definition I agree with BTW.
 
  
 
 What I am trying to now understand is how we will move Octavia into 
 the new LBaaS project?
 
  
 
 If we do it later rather than develop Octavia in tree under the new 
 incubated LBaaS project when do we plan to bring it in-tree from 
 Stackforge? Kilo? Later? When LBaaS is a separate project under the 
 Networking program?

  
 
 What are the criteria to bring a driver into the LBaaS project and 
 what do we need to do to replace the existing reference driver? Maybe 
 adding a software driver to LBaaS source tree is less of a problem 
 than converting a whole project to an OpenStack project.

  
 
 Again I am open to both directions I just want to make sure we 
 understand why we are choosing to do one or the other and that our  
 decision is based on data and not emotions.
 
  
 
 I am assuming that keeping Octavia in Stackforge will increase the 
 velocity of the project and allow us more freedom which is goodness.
 We just need to have a plan to make it part of the Openstack LBaaS 
 project.
 
  
 
 Regards Susanne
 
 
 
 
 On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell 
 adam.harw...@rackspace.com wrote:
 Only really have comments on two of your related points:
 
 
 [Susanne] To me Octavia is a driver so it is very hard to me
 to think of it as a standalone project. It needs the new
 Neutron LBaaS v2 to function which is why I think of them
 together. This of course can change since we can add whatever
 layers we want to Octavia.
 
 
 [Adam] I guess I've always shared Stephen's
 viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
 Radware / A10 / etc. appliances, not to an OpenStack API layer
 like Neutron-LBaaS. It's a little tricky to clearly define
 this difference in conversation, and I have noticed that quite
 a few people are having the same issue differentiating. In a
 small group, having quite a few people not on the same page is
 a bit scary, so maybe we need to really sit down and map this
 out so everyone is together one way or the other.
 
 
 [Susanne] Ok now I am confused… But I agree with you that it
 needs to focus on our use cases. I remember us discussing
 Octavia being the reference implementation for OpenStack LBaaS
 (whatever that is). Has that changed while I was on vacation?
 
 
 [Adam] I believe that having the Octavia driver (not the
 Octavia codebase itself, technically) become the reference
 implementation for Neutron-LBaaS is still the plan in my eyes.
 The Octavia Driver in Neutron-LBaaS is a separate bit of code
 from the actual Octavia project, similar to the way the A10
 driver is a separate bit of code from the A10 appliance. To do
 that though, we need Octavia to be fairly close

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Adam Harwell
 hard to me
 to think of it as a standalone project. It needs the new
 Neutron LBaaS v2 to function which is why I think of them
 together. This of course can change since we can add whatever
 layers we want to Octavia.
 
 
 [Adam] I guess I've always shared Stephen's
 viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
 Radware / A10 / etc. appliances, not to an OpenStack API layer
 like Neutron-LBaaS. It's a little tricky to clearly define
 this difference in conversation, and I have noticed that quite
 a few people are having the same issue differentiating. In a
 small group, having quite a few people not on the same page is
 a bit scary, so maybe we need to really sit down and map this
 out so everyone is together one way or the other.
 
 
 [Susanne] Ok now I am confused… But I agree with you that it
 needs to focus on our use cases. I remember us discussing
 Octavia being the reference implementation for OpenStack LBaaS
 (whatever that is). Has that changed while I was on vacation?
 
 
 [Adam] I believe that having the Octavia driver (not the
 Octavia codebase itself, technically) become the reference
 implementation for Neutron-LBaaS is still the plan in my eyes.
 The Octavia Driver in Neutron-LBaaS is a separate bit of code
 from the actual Octavia project, similar to the way the A10
 driver is a separate bit of code from the A10 appliance. To do
 that though, we need Octavia to be fairly close to fully
 functional. I believe we can do this because even though the
 reference driver would then require an additional service to
 run, what it requires is still fully-open-source and (by way
 of our plan) available as part of OpenStack core.
 
 
 --Adam
 
 
 https://keybase.io/rm_you
 
 
 
 
 From: Susanne Balle sleipnir...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Friday, August 29, 2014 9:19 AM
 To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
 
 
 
 Stephen
 
 
 
 See inline comments.
 
 
 
 Susanne
 
 
 
 -
 
 
 
 Susanne--
 
 
 
 I think you are conflating the difference between
 OpenStack incubation and Neutron incubator. These
 are two very different matters and should be treated
 separately. So, addressing each one individually:
 
 
 
 OpenStack Incubation
 
 I think this has been the end-goal of Octavia all
 along and continues to be the end-goal. Under this
 scenario, Octavia is its own stand-alone project with
 its own PTL and core developer team, its own
 governance, and should eventually become part of the
 integrated OpenStack release. No project ever starts
 out as OpenStack incubated.
 
 
 
 [Susanne] I totally agree that the end goal is for
 Neutron LBaaS to become its own incubated project. I
 did miss the nuance that was pointed out by Mestery in
 an earlier email that if a Neutron incubator project
 wants to become a separate project it will have to
 apply for incubation again or at that time. It was my
 understanding that such a Neutron incubated project
 would be grandfathered in but again we do not have
 much details on the process yet.
 
 
 
 To me Octavia is a driver so it is very hard to me to
 think of it as a standalone project. It needs the new
 Neutron LBaaS v2 to function which is why I think of
 them together. This of course can change since we can
 add whatever layers we want to Octavia.
 
 
 
 Neutron Incubator
 
 This has only become

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Salvatore Orlando
 Neutron LBaaS V2 in the “Neutron” incubator “and freeze
  LBaaS V1.0”
  2) “Eventually” It graduates into a project under the networking
  program.
  3) “At that point” We deprecate Neutron LBaaS v1.
 
 
 
  The words in “xx” are words I added to make sure I/We understand the
  whole picture.
 
 
 
  And as Adam mentions: Octavia != LBaaS-v2. Octavia is a peer to F5 /
  Radware / A10 / etc appliances which is a definition I agree with BTW.
 
 
 
  What I am trying to now understand is how we will move Octavia into
  the new LBaaS project?
 
 
 
  If we do it later rather than develop Octavia in tree under the new
  incubated LBaaS project when do we plan to bring it in-tree from
  Stackforge? Kilo? Later? When LBaaS is a separate project under the
  Networking program?
 
 
 
  What are the criteria to bring a driver into the LBaaS project and
  what do we need to do to replace the existing reference driver? Maybe
  adding a software driver to LBaaS source tree is less of a problem
  than converting a whole project to an OpenStack project.
 
 
 
  Again I am open to both directions I just want to make sure we
  understand why we are choosing to do one or the other and that our
   decision is based on data and not emotions.
 
 
 
  I am assuming that keeping Octavia in Stackforge will increase the
  velocity of the project and allow us more freedom which is goodness.
  We just need to have a plan to make it part of the Openstack LBaaS
  project.
 
 
 
  Regards Susanne
 
 
 
 
  On Sat, Aug 30, 2014 at 2:09 PM, Adam Harwell
  adam.harw...@rackspace.com wrote:
  Only really have comments on two of your related points:
 
 
  [Susanne] To me Octavia is a driver so it is very hard to me
  to think of it as a standalone project. It needs the new
  Neutron LBaaS v2 to function which is why I think of them
  together. This of course can change since we can add whatever
  layers we want to Octavia.
 
 
  [Adam] I guess I've always shared Stephen's
  viewpoint — Octavia != LBaaS-v2. Octavia is a peer to F5 /
  Radware / A10 / etc. appliances, not to an OpenStack API layer
  like Neutron-LBaaS. It's a little tricky to clearly define
  this difference in conversation, and I have noticed that quite
  a few people are having the same issue differentiating. In a
  small group, having quite a few people not on the same page is
  a bit scary, so maybe we need to really sit down and map this
  out so everyone is together one way or the other.
 
 
  [Susanne] Ok now I am confused… But I agree with you that it
  needs to focus on our use cases. I remember us discussing
  Octavia being the reference implementation for OpenStack LBaaS
  (whatever that is). Has that changed while I was on vacation?
 
 
  [Adam] I believe that having the Octavia driver (not the
  Octavia codebase itself, technically) become the reference
  implementation for Neutron-LBaaS is still the plan in my eyes.
  The Octavia Driver in Neutron-LBaaS is a separate bit of code
  from the actual Octavia project, similar to the way the A10
  driver is a separate bit of code from the A10 appliance. To do
  that though, we need Octavia to be fairly close to fully
  functional. I believe we can do this because even though the
  reference driver would then require an additional service to
  run, what it requires is still fully-open-source and (by way
  of our plan) available as part of OpenStack core.
 
 
  --Adam
 
 
  https://keybase.io/rm_you
 
 
 
 
  From: Susanne Balle sleipnir...@gmail.com
  Reply-To: OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
  Date: Friday, August 29, 2014 9:19 AM
  To: OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
 
  Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
 
 
 
  Stephen
 
 
 
  See inline comments.
 
 
 
  Susanne
 
 
 
  -
 
 
 
  Susanne--
 
 
 
  I think you are conflating the difference between
  OpenStack incubation and Neutron incubator. These
  are two very different matters and should be treated
  separately. So, addressing each one individually:
 
 
 
  OpenStack Incubation
 
  I think this has been the end-goal of Octavia all
  along and continues to be the end-goal. Under this
  scenario, Octavia is its own stand-alone project with
  its own PTL and core developer team, its own
  governance, and should eventually become part

Re: [openstack-dev] [neutron][lbaas][octavia]

2014-09-02 Thread Susanne Balle
 for Neutron-LBaaS is still the plan in my eyes.
  The Octavia Driver in Neutron-LBaaS is a separate bit of code
  from the actual Octavia project, similar to the way the A10
  driver is a separate bit of code from the A10 appliance. To do
  that though, we need Octavia to be fairly close to fully
  functional. I believe we can do this because even though the
  reference driver would then require an additional service to
  run, what it requires is still fully-open-source and (by way
  of our plan) available as part of OpenStack core.
 
 
  --Adam
 
 
  https://keybase.io/rm_you
 
 
 
 
  From: Susanne Balle sleipnir...@gmail.com
  Reply-To: OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
  Date: Friday, August 29, 2014 9:19 AM
  To: OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
 
  Subject: Re: [openstack-dev] [neutron][lbaas][octavia]
 
 
 
  Stephen
 
 
 
  See inline comments.
 
 
 
  Susanne
 
 
 
  -
 
 
 
  Susanne--
 
 
 
  I think you are conflating the difference between
  OpenStack incubation and Neutron incubator. These
  are two very different matters and should be treated
  separately. So, addressing each one individually:
 
 
 
  OpenStack Incubation
 
  I think this has been the end-goal of Octavia all
  along and continues to be the end-goal. Under this
  scenario, Octavia is its own stand-alone project with
  its own PTL and core developer team, its own
  governance, and should eventually become part of the
  integrated OpenStack release. No project ever starts
  out as OpenStack incubated.
 
 
 
  [Susanne] I totally agree that the end goal is for
  Neutron LBaaS to become its own incubated project. I
  did miss the nuance that was pointed out by Mestery in
  an earlier email that if a Neutron incubator project
  wants to become a separate project it will have to
  apply for incubation again or at that time. It was my
  understanding that such a Neutron incubated project
  would be grandfathered in but again we do not have
  much details on the process yet.
 
 
 
  To me Octavia is a driver so it is very hard to me to
  think of it as a standalone project. It needs the new
  Neutron LBaaS v2 to function which is why I think of
  them together. This of course can change since we can
  add whatever layers we want to Octavia.
 
 
 
  Neutron Incubator
 
  This has only become a serious discussion in the last
  few weeks and has yet to land, so there are many
  assumptions about this which don't pan out (either
  because of purposeful design and governance decisions,
  or because of how this project actually ends up being
  implemented from a practical standpoint). But given
  the inherent limitations about making statements with
  so many unknowns, the following seem fairly clear from
  what has been shared so far:
 
  · Neutron incubator is the on-ramp for projects which
  should eventually become a part of Neutron itself.
 
  · Projects which enter the Neutron incubator on-ramp
  should be fairly close to maturity in their final
  form. I think the intent here is for them to live in
  incubator for 1 or 2 cycles before either being merged
  into Neutron core, or being ejected (as abandoned, or
  as a separate project).
 
  · Neutron incubator projects effectively do not have
  their own PTL and core developer team, and do not have
  their own governance.
 
  [Susanne] Ok I missed the last point. In an earlier
  discussion Mestery implied that an incubated project
  would have at least one or two of its own cores. Maybe
  that changed between now and then.
 
  In addition we know the following about Neutron LBaaS
  and Octavia:
 
  · It's already (informally?) agreed that the ultimate
  long-term place for a LBaaS solution is probably to be
  spun out into its own
