Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-13 Thread Sangho Shin
Andreas and Neil,

Thank you so much for your help.
I was able to fix the issues thanks to your help.

Sangho 

Sent from my iPhone

On 11 May 2018, at 7:19 PM, Neil Jerram wrote:

>> On Fri, May 11, 2018 at 10:09 AM Andreas Scheuring wrote:
>> So what you need to do first is to make a patch for networking-onos that 
>> does ONLY the following
>> 
>> 
>> Replace all occurrences of:
>> 
>> * neutron.callbacks with neutron_lib.callbacks
>> * neutron.plugins.ml2.driver_api with neutron_lib.plugins.ml2.api
> 
> FYI here's what networking-calico has for the second of these points:
> 
> try:
> from neutron_lib.plugins.ml2 import api
> except ImportError:
> # Neutron code prior to a2c36d7e (10th November 2017).
> from neutron.plugins.ml2 import driver_api as api
> 
> (http://git.openstack.org/cgit/openstack/networking-calico/tree/networking_calico/plugins/ml2/drivers/calico/mech_calico.py#n49)
> 
> However, we do it like this because we want the master networking-calico code 
> to work with many past Neutron releases, and I understand that that is not a 
> common approach; so for networking-onos you may only want the "from 
> neutron_lib.plugins.ml2 import api" line.
> 
> Regards - Neil
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-11 Thread Neil Jerram
On Fri, May 11, 2018 at 10:09 AM Andreas Scheuring <scheu...@linux.vnet.ibm.com> wrote:

> So what you need to do first is to make a patch for networking-onos that
> does ONLY the following
>
>
> Replace all occurrences of:
>
> * neutron.callbacks with neutron_lib.callbacks
> * neutron.plugins.ml2.driver_api with neutron_lib.plugins.ml2.api
>

FYI here's what networking-calico has for the second of these points:

try:
from neutron_lib.plugins.ml2 import api
except ImportError:
# Neutron code prior to a2c36d7e (10th November 2017).
from neutron.plugins.ml2 import driver_api as api

(http://git.openstack.org/cgit/openstack/networking-calico/tree/networking_calico/plugins/ml2/drivers/calico/mech_calico.py#n49)

However, we do it like this because we want the master networking-calico
code to work with many past Neutron releases, and I understand that that is
not a common approach; so for networking-onos you may only want the "from
neutron_lib.plugins.ml2 import api" line.
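
For completeness, a minimal sketch of how the rest of a module then proceeds identically whichever branch the import took; the driver class name here is hypothetical, but api.MechanismDriver exists under both import paths:

try:
    from neutron_lib.plugins.ml2 import api
except ImportError:
    # Neutron code prior to a2c36d7e (10th November 2017).
    from neutron.plugins.ml2 import driver_api as api

# From here on, the code is identical whichever import succeeded.
class ExampleMechanismDriver(api.MechanismDriver):
    def initialize(self):
        pass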

Regards - Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-11 Thread Andreas Scheuring
So what you need to do first is to make a patch for networking-onos that does 
ONLY the following


Replace all occurrences of:

* neutron.callbacks with neutron_lib.callbacks
* neutron.plugins.ml2.driver_api with neutron_lib.plugins.ml2.api
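
For example, the import in networking_onos/extensions/callback.py that the test errors point at would change roughly like this (a sketch, not the exact networking-onos diff; the fallback branch is only needed if you want to keep supporting older neutron releases, as Neil describes in his reply):

try:
    from neutron_lib.callbacks import events
except ImportError:
    # Older neutron releases, before the callbacks moved to neutron-lib.
    from neutron.callbacks import events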


Push this patch for review. After that, the tests should succeed again in the
check queue; then merge it.

Then you can put your new great custom code on top of this patch.

---
Andreas Scheuring (andreas_s)



On 9 May 2018, at 10:04, Andreas Scheuring wrote:

neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to 
update the networking-onos code and fix all imports there and push the 
changes...


---
Andreas Scheuring (andreas_s)



On 9 May 2018, at 10:00, Sangho Shin wrote:

Hello, 

I am getting the following unit test error in the Zuul test; see below.
The error occurs only in the pike version; in the stable/ocata version I do
not get the error.
(If you can give me any clue, it would be very helpful.)

BTW, with nosetests there is no error.
However, with tox -e py27 I am getting different errors, shown below.
It happens because the tests are somehow using a different version of the
neutron library. The actual neutron is installed under /opt/stack/neutron, and
it has the correct Python files, such as the callbacks and driver api modules
that the errors below complain about.

So, I would like to know how to specify the correct neutron location in tox
tests.

Thank you,

Sangho


tox -e py27 errors.

-


=
Failures during discovery
=
--- import errors ---
Failed to import test module: networking_onos.tests.unit.extensions.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in <module>
    import networking_onos.extensions.securitygroup as onos_sg_driver
  File "networking_onos/extensions/securitygroup.py", line 21, in <module>
    from networking_onos.extensions import callback
  File "networking_onos/extensions/callback.py", line 15, in <module>
    from neutron.callbacks import events
ImportError: No module named callbacks

Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in <module>
    from neutron.plugins.ml2 import driver_api as api
ImportError: cannot import name driver_api






Zuul errors.

---

Traceback (most recent call last):
2018-05-09 05:12:30.077594 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
2018-05-09 05:12:30.077653 | ubuntu-xenial |     context)
2018-05-09 05:12:30.077964 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
2018-05-09 05:12:30.078065 | ubuntu-xenial |     cursor.execute(statement, parameters)
2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type.
2018-05-09 05:12:30.078282 | ubuntu-xenial |

Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-09 Thread Sangho Shin
Andreas,

Thank you for your answer. Actually, I was able to make it use the correct 
neutron API in my local tox tests, and all tests passed.
However, only in Zuul, I am still getting the following errors. :-(

Thank you,

Sangho


> On 9 May 2018, at 4:04 PM, Andreas Scheuring wrote:
> 
> neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to 
> update the networking-onos code and fix all imports there and push the 
> changes...
> 
> 
> ---
> Andreas Scheuring (andreas_s)
> 
> 
> 
> On 9 May 2018, at 10:00, Sangho Shin wrote:
> 
> Hello, 
> 
> I am getting the following unit test error in the Zuul test; see below.
> The error occurs only in the pike version; in the stable/ocata version I do
> not get the error.
> (If you can give me any clue, it would be very helpful.)
> 
> BTW, with nosetests there is no error.
> However, with tox -e py27 I am getting different errors, shown below.
> It happens because the tests are somehow using a different version of the
> neutron library. The actual neutron is installed under /opt/stack/neutron,
> and it has the correct Python files, such as the callbacks and driver api
> modules that the errors below complain about.
> 
> So, I would like to know how to specify the correct neutron location in tox
> tests.
> 
> Thank you,
> 
> Sangho
> 
> 
> tox -e py27 errors.
> 
> -
> 
> 
> =
> Failures during discovery
> =
> --- import errors ---
> Failed to import test module: networking_onos.tests.unit.extensions.test_driver
> Traceback (most recent call last):
>   File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
>     module = self._get_module_from_name(name)
>   File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
>     __import__(name)
>   File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in <module>
>     import networking_onos.extensions.securitygroup as onos_sg_driver
>   File "networking_onos/extensions/securitygroup.py", line 21, in <module>
>     from networking_onos.extensions import callback
>   File "networking_onos/extensions/callback.py", line 15, in <module>
>     from neutron.callbacks import events
> ImportError: No module named callbacks
> 
> Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver
> Traceback (most recent call last):
>   File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
>     module = self._get_module_from_name(name)
>   File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
>     __import__(name)
>   File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in <module>
>     from neutron.plugins.ml2 import driver_api as api
> ImportError: cannot import name driver_api
> 
> 
> 
> 
> 
> 
> Zuul errors.
> 
> ---
> 
> Traceback (most recent call last):
> 2018-05-09 05:12:30.077594 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
> 2018-05-09 05:12:30.077653 | ubuntu-xenial |     context)
> 2018-05-09 05:12:30.077964 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
> 2018-05-09 05:12:30.078065 | ubuntu-xenial |     cursor.execute(statement, parameters)
> 2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type.
> 2018-05-09 05:12:30.078282 | ubuntu-xenial |

Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-09 Thread Sangho Shin

I just manually installed neutron into the .tox folder (I am not sure if this
is the correct way to fix the problem) and ran the tox tests again.
And... all tests passed, as below.

But I am not sure why the Zuul tests fail as in my first email. :-(
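
For reference, the manual step amounts to installing the neutron checkout into the tox virtualenv, roughly as follows (assuming neutron is checked out at /opt/stack/neutron):

.tox/py27/bin/pip install -e /opt/stack/neutron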

Thank you,

Sangho

Tox tests success log in my local environment…
---

ubuntu@sangho-sona-pike-1:~/networking-onos$ tox -epy27 -vv
  removing /home/ubuntu/networking-onos/.tox/log
using tox.ini: /home/ubuntu/networking-onos/tox.ini
using tox-2.3.1 from /usr/lib/python3/dist-packages/tox/__init__.py
skipping sdist step
py27 start: getenv /home/ubuntu/networking-onos/.tox/py27
py27 reusing: /home/ubuntu/networking-onos/.tox/py27
py27 finish: getenv after 0.09 seconds
py27 start: developpkg /home/ubuntu/networking-onos
  /home/ubuntu/networking-onos$ 
/home/ubuntu/networking-onos/.tox/py27/bin/python 
/home/ubuntu/networking-onos/setup.py --name
py27 develop-inst-nodeps: /home/ubuntu/networking-onos
setting 
PATH=/home/ubuntu/networking-onos/.tox/py27/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
  /home/ubuntu/networking-onos$ 
/home/ubuntu/networking-onos/tools/tox_install.sh 
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
 --no-deps -e /home/ubuntu/networking-onos 
>/home/ubuntu/networking-onos/.tox/py27/log/py27-4.log
py27 finish: developpkg after 10.28 seconds
py27 start: envreport
setting 
PATH=/home/ubuntu/networking-onos/.tox/py27/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
  /home/ubuntu/networking-onos$ /home/ubuntu/networking-onos/.tox/py27/bin/pip 
freeze >/home/ubuntu/networking-onos/.tox/py27/log/py27-5.log
py27 installed: 
alabaster==0.7.10,alembic==0.9.9,amqp==2.2.2,appdirs==1.4.3,asn1crypto==0.24.0,Babel==2.5.3,bcrypt==3.1.4,beautifulsoup4==4.6.0,cachetools==2.0.1,certifi==2018.4.16,cffi==1.11.5,chardet==3.0.4,cliff==2.11.0,cmd2==0.8.5,contextlib2==0.5.5,coverage==4.5.1,cryptography==2.2.2,debtcollector==1.19.0,decorator==4.3.0,deprecation==2.0.2,doc8==0.8.0,docutils==0.14,dogpile.cache==0.6.5,dulwich==0.19.2,enum-compat==0.0.2,enum34==1.1.6,eventlet==0.20.0,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.16.0,futures==3.2.0,futurist==1.7.0,greenlet==0.4.13,hacking==0.12.0,httplib2==0.11.3,idna==2.6,imagesize==1.0.0,ipaddress==1.0.22,iso8601==0.1.12,Jinja2==2.10,jmespath==0.9.3,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.5.0,keystonemiddleware==5.0.0,kombu==4.1.0,linecache2==1.0.0,logutils==0.3.5,Mako==1.0.7,MarkupSafe==1.0,mccabe==0.2.1,mock==2.0.0,monotonic==1.4,mox3==0.25.0,msgpack==0.5.6,munch==2.3.1,netaddr==0.7.19,netifaces==0.10.6,-e
 
git+ssh://sanghos...@review.openstack.org:29418/openstack/networking-onos@678eaaf9c917b7037a426eaadecc252a07fdd47b#egg=networking_onos,-e
 
git+https://git.openstack.org/openstack/networking-sfc@379fcd5cfcb7a71e7dbbe969da0255bc3ff09a33#egg=networking_sfc,-e
 

Re: [openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-09 Thread Andreas Scheuring
neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to 
update the networking-onos code and fix all imports there and push the 
changes...


---
Andreas Scheuring (andreas_s)



On 9 May 2018, at 10:00, Sangho Shin wrote:

Hello, 

I am getting the following unit test error in the Zuul test; see below.
The error occurs only in the pike version; in the stable/ocata version I do
not get the error.
(If you can give me any clue, it would be very helpful.)

BTW, with nosetests there is no error.
However, with tox -e py27 I am getting different errors, shown below.
It happens because the tests are somehow using a different version of the
neutron library. The actual neutron is installed under /opt/stack/neutron, and
it has the correct Python files, such as the callbacks and driver api modules
that the errors below complain about.

So, I would like to know how to specify the correct neutron location in tox
tests.

Thank you,

Sangho


tox -e py27 errors.

-


=
Failures during discovery
=
--- import errors ---
Failed to import test module: networking_onos.tests.unit.extensions.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in <module>
    import networking_onos.extensions.securitygroup as onos_sg_driver
  File "networking_onos/extensions/securitygroup.py", line 21, in <module>
    from networking_onos.extensions import callback
  File "networking_onos/extensions/callback.py", line 15, in <module>
    from neutron.callbacks import events
ImportError: No module named callbacks

Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in <module>
    from neutron.plugins.ml2 import driver_api as api
ImportError: cannot import name driver_api






Zuul errors.

---

Traceback (most recent call last):
2018-05-09 05:12:30.077594 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
2018-05-09 05:12:30.077653 | ubuntu-xenial |     context)
2018-05-09 05:12:30.077964 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
2018-05-09 05:12:30.078065 | ubuntu-xenial |     cursor.execute(statement, parameters)
2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type.
2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details.
2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last):
2018-05-09 05:12:30.078683 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py

[openstack-dev] [neutron][ml2 plugin] unit test errors

2018-05-09 Thread Sangho Shin
Hello, 

I am getting the following unit test error in the Zuul test; see below.
The error occurs only in the pike version; in the stable/ocata version I do
not get the error.
(If you can give me any clue, it would be very helpful.)

BTW, with nosetests there is no error.
However, with tox -e py27 I am getting different errors, shown below.
It happens because the tests are somehow using a different version of the
neutron library. The actual neutron is installed under /opt/stack/neutron, and
it has the correct Python files, such as the callbacks and driver api modules
that the errors below complain about.

So, I would like to know how to specify the correct neutron location in tox
tests.
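
One way to do that (a sketch only; the exact mechanics vary by project and tox version) is to install the local neutron checkout into the test virtualenv through a deps entry in tox.ini, assuming the checkout lives at /opt/stack/neutron:

[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
       -e/opt/stack/neutron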

Thank you,

Sangho


tox -e py27 errors.

-


=
Failures during discovery
=
--- import errors ---
Failed to import test module: networking_onos.tests.unit.extensions.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in <module>
    import networking_onos.extensions.securitygroup as onos_sg_driver
  File "networking_onos/extensions/securitygroup.py", line 21, in <module>
    from networking_onos.extensions import callback
  File "networking_onos/extensions/callback.py", line 15, in <module>
    from neutron.callbacks import events
ImportError: No module named callbacks

Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in <module>
    from neutron.plugins.ml2 import driver_api as api
ImportError: cannot import name driver_api






Zuul errors.

---

Traceback (most recent call last):
2018-05-09 05:12:30.077594 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
2018-05-09 05:12:30.077653 | ubuntu-xenial |     context)
2018-05-09 05:12:30.077964 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
2018-05-09 05:12:30.078065 | ubuntu-xenial |     cursor.execute(statement, parameters)
2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type.
2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details.
2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last):
2018-05-09 05:12:30.078683 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 98, in resource
2018-05-09 05:12:30.078791 | ubuntu-xenial |     result = method(request=request, **args)
2018-05-09 05:12:30.079085 | ubuntu-xenial |   File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py", line 615, in

Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-23 Thread Kevin Benton
Yes, let's move discussion to bug report.

On Fri, Jun 23, 2017 at 5:01 AM, Margin Hu  wrote:

> Hi Kevin,
>
> [ovs]
> bridge_mappings = physnet1:br-ex,physnet2:provision,physnet3:provider
> ovsdb_connection = tcp:10.53.16.12:6640
> local_ip = 10.53.32.12
> You can check the attachment, and more logs can be found at
> https://bugs.launchpad.net/neutron/+bug/1697243
>
>
> On 6/23 16:43, Kevin Benton wrote:
>
> Can you provide your ml2_conf.ini values you are using?
>
> On Thu, Jun 22, 2017 at 7:06 AM, Margin Hu  wrote:
>
>> thanks.
>>
>> I met an issue: I configured three OVS bridges (br-ex, provision,
>> provider) in ml2_conf.ini, but after I rebooted the node I found that only two
>> bridges' flow tables were normal; the other bridge's flow table was empty.
>>
>> The affected bridge is sometimes "provision" and sometimes "provider". What
>> possibilities are there for this issue?
>> [root@cloud]# ovs-ofctl show provision
>> OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
>> n_tables:254, n_buffers:256
>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
>> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>>  1(bond0): addr:24:8a:07:55:41:e8
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>>  2(phy-provision): addr:2e:7c:ba:fe:91:72
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>>  LOCAL(provision): addr:24:8a:07:55:41:e8
>>  config: 0
>>  state:  0
>>  speed: 0 Mbps now, 0 Mbps max
>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>> [root@cloud]# ovs-ofctl dump-flows  provision
>> NXST_FLOW reply (xid=0x4):
>>
>> [root@cloud]# ip r
>> default via 192.168.60.247 dev br-ex
>> 10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
>> 10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
>> 10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
>> 10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
>> 10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
>> 10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
>> 169.254.0.0/16 dev vlan16  scope link  metric 1012
>> 169.254.0.0/16 dev vlan22  scope link  metric 1014
>> 169.254.0.0/16 dev vlan32  scope link  metric 1015
>> 169.254.0.0/16 dev br-ex  scope link  metric 1032
>> 169.254.0.0/16 dev provision  scope link  metric 1033
>> 169.254.0.0/16 dev provider  scope link  metric 1034
>> 192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111
>>
>> What's the root cause?
>>
>>  rpm -qa | grep openvswitch
>> openvswitch-2.6.1-4.1.git20161206.el7.x86_64
>> python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
>> openstack-neutron-openvswitch-10.0.1-1.el7.noarch
>>
>>
>>
>> On 6/22 9:53, Kevin Benton wrote:
>>
>> Rules to allow aren't setup until the port is wired and it calls the
>> functions like this:
>> https://github.com/openstack/neutron/blob/master/neutron/plu
>> gins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606
>>
>> On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu  wrote:
>>
>>> Hi Guys,
>>>
>>> I have a question in the setup_physical_bridges function of
>>> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>>>
>>>  # block all untranslated traffic between bridges
>>> self.int_br.drop_port(in_port=int_ofport)
>>> br.drop_port(in_port=phys_ofport)
>>>
>>> [refer](https://github.com/openstack/neutron/blob/master/neu
>>> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>>>
>>> When is traffic permitted between bridges? When is the flow table of an OVS
>>> bridge modified?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-23 Thread Margin Hu

Hi Kevin,

[ovs]
bridge_mappings = physnet1:br-ex,physnet2:provision,physnet3:provider
ovsdb_connection = tcp:10.53.16.12:6640
local_ip = 10.53.32.12

You can check the attachment, and more logs can be found at
https://bugs.launchpad.net/neutron/+bug/1697243

On 6/23 16:43, Kevin Benton wrote:

Can you provide your ml2_conf.ini values you are using?

On Thu, Jun 22, 2017 at 7:06 AM, Margin Hu wrote:


thanks.

I met an issue: I configured three OVS bridges (br-ex, provision,
provider) in ml2_conf.ini, but after I rebooted the node I found
that only two bridges' flow tables were normal; the other
bridge's flow table was empty.

The affected bridge is sometimes "provision" and sometimes "provider".
What possibilities are there for this issue?


[root@cloud]# ovs-ofctl show provision
OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS
ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan
mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src
mod_tp_dst
 1(bond0): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(phy-provision): addr:2e:7c:ba:fe:91:72
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(provision): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@cloud]# ovs-ofctl dump-flows  provision
NXST_FLOW reply (xid=0x4):

[root@cloud]# ip r
default via 192.168.60.247 dev br-ex
10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
169.254.0.0/16 dev vlan16  scope link  metric 1012
169.254.0.0/16 dev vlan22  scope link  metric 1014
169.254.0.0/16 dev vlan32  scope link  metric 1015
169.254.0.0/16 dev br-ex  scope link  metric 1032
169.254.0.0/16 dev provision  scope link  metric 1033
169.254.0.0/16 dev provider  scope link  metric 1034
192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111


What's the root cause?

 rpm -qa | grep openvswitch
openvswitch-2.6.1-4.1.git20161206.el7.x86_64
python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
openstack-neutron-openvswitch-10.0.1-1.el7.noarch



On 6/22 9:53, Kevin Benton wrote:

Rules to allow aren't setup until the port is wired and it calls
the functions like this:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606



On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu wrote:

Hi Guys,

I have a question in the setup_physical_bridges function of
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

 # block all untranslated traffic between bridges
self.int_br.drop_port(in_port=int_ofport)
br.drop_port(in_port=phys_ofport)


[refer](https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)

When is traffic permitted between bridges? When is the flow table
of an OVS bridge modified?










__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack 

Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-23 Thread Kevin Benton
Can you provide your ml2_conf.ini values you are using?

On Thu, Jun 22, 2017 at 7:06 AM, Margin Hu  wrote:

> thanks.
>
> I met an issue: I configured three OVS bridges (br-ex, provision,
> provider) in ml2_conf.ini, but after I rebooted the node I found that only two
> bridges' flow tables were normal; the other bridge's flow table was empty.
>
> The affected bridge is sometimes "provision" and sometimes "provider". What
> possibilities are there for this issue?
> [root@cloud]# ovs-ofctl show provision
> OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
> n_tables:254, n_buffers:256
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>  1(bond0): addr:24:8a:07:55:41:e8
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
>  2(phy-provision): addr:2e:7c:ba:fe:91:72
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
>  LOCAL(provision): addr:24:8a:07:55:41:e8
>  config: 0
>  state:  0
>  speed: 0 Mbps now, 0 Mbps max
> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
> [root@cloud]# ovs-ofctl dump-flows  provision
> NXST_FLOW reply (xid=0x4):
>
> [root@cloud]# ip r
> default via 192.168.60.247 dev br-ex
> 10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
> 10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
> 10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
> 10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
> 10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
> 10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
> 169.254.0.0/16 dev vlan16  scope link  metric 1012
> 169.254.0.0/16 dev vlan22  scope link  metric 1014
> 169.254.0.0/16 dev vlan32  scope link  metric 1015
> 169.254.0.0/16 dev br-ex  scope link  metric 1032
> 169.254.0.0/16 dev provision  scope link  metric 1033
> 169.254.0.0/16 dev provider  scope link  metric 1034
> 192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111
>
> What's the root cause?
>
>  rpm -qa | grep openvswitch
> openvswitch-2.6.1-4.1.git20161206.el7.x86_64
> python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
> openstack-neutron-openvswitch-10.0.1-1.el7.noarch
>
>
>
> On 6/22 9:53, Kevin Benton wrote:
>
> Rules to allow aren't setup until the port is wired and it calls the
> functions like this:
> https://github.com/openstack/neutron/blob/master/neutron/
> plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606
>
> On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu  wrote:
>
>> Hi Guys,
>>
>> I have a question in the setup_physical_bridges function of
>> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>>
>>  # block all untranslated traffic between bridges
>> self.int_br.drop_port(in_port=int_ofport)
>> br.drop_port(in_port=phys_ofport)
>>
>> [refer](https://github.com/openstack/neutron/blob/master/neu
>> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>>
>> When is traffic permitted between bridges? When is the flow table of an OVS
>> bridge modified?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-22 Thread Margin Hu

thanks.

I met an issue: I configured three OVS bridges (br-ex, provision,
provider) in ml2_conf.ini, but after I rebooted the node I found that only
two bridges' flow tables were normal; the other bridge's flow table was empty.

The affected bridge is sometimes "provision" and sometimes "provider". What
possibilities are there for this issue?


[root@cloud]# ovs-ofctl show provision
OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst

 1(bond0): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(phy-provision): addr:2e:7c:ba:fe:91:72
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(provision): addr:24:8a:07:55:41:e8
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@cloud]# ovs-ofctl dump-flows  provision
NXST_FLOW reply (xid=0x4):

[root@cloud]# ip r
default via 192.168.60.247 dev br-ex
10.53.16.0/24 dev vlan16  proto kernel  scope link  src 10.53.16.11
10.53.17.0/24 dev provider  proto kernel  scope link  src 10.53.17.11
10.53.22.0/24 dev vlan22  proto kernel  scope link  src 10.53.22.111
10.53.32.0/24 dev vlan32  proto kernel  scope link  src 10.53.32.11
10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11
10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1
169.254.0.0/16 dev vlan16  scope link  metric 1012
169.254.0.0/16 dev vlan22  scope link  metric 1014
169.254.0.0/16 dev vlan32  scope link  metric 1015
169.254.0.0/16 dev br-ex  scope link  metric 1032
169.254.0.0/16 dev provision  scope link  metric 1033
169.254.0.0/16 dev provider  scope link  metric 1034
192.168.60.0/24 dev br-ex  proto kernel  scope link  src 192.168.60.111

What's the root cause?

 rpm -qa | grep openvswitch
openvswitch-2.6.1-4.1.git20161206.el7.x86_64
python-openvswitch-2.6.1-4.1.git20161206.el7.noarch
openstack-neutron-openvswitch-10.0.1-1.el7.noarch


On 6/22 9:53, Kevin Benton wrote:
Rules to allow aren't setup until the port is wired and it calls the 
functions like this:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606

On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu wrote:


Hi Guys,

I have a question in the setup_physical_bridges function of
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

 # block all untranslated traffic between bridges
self.int_br.drop_port(in_port=int_ofport)
br.drop_port(in_port=phys_ofport)


[refer](https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)

When is traffic permitted between bridges? When is the flow table of
an OVS bridge modified?









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-21 Thread Kevin Benton
Rules to allow aren't set up until the port is wired and it calls
functions like this:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L602-L606

On Wed, Jun 21, 2017 at 4:49 PM, Margin Hu  wrote:

> Hi Guys,
>
> I have a question in the setup_physical_bridges function of
> neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
>
>  # block all untranslated traffic between bridges
> self.int_br.drop_port(in_port=int_ofport)
> br.drop_port(in_port=phys_ofport)
>
> [refer](https://github.com/openstack/neutron/blob/master/neu
> tron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)
>
> When is traffic permitted between bridges? When is the flow table of an OVS
> bridge modified?
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2][drivers][openvswitch] Question

2017-06-21 Thread Margin Hu

Hi Guys,

I have a question in the setup_physical_bridges function of
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py


 # block all untranslated traffic between bridges
self.int_br.drop_port(in_port=int_ofport)
br.drop_port(in_port=phys_ofport)

[refer](https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1159)

When is traffic permitted between bridges? When is the flow table of an OVS
bridge modified?










__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - heads up for mechanism drivers that don't use in tree DHCP agent

2017-04-03 Thread Kevin Benton
Yes. The code will still require something to acknowledge that DHCP has
been wired for a port whether or not the agent extension is present.

On Fri, Mar 31, 2017 at 3:39 AM, Neil Jerram  wrote:

> Thanks for the heads up, Kevin!
>
> Is this still necessary if a deployment disables the Neutron server's DHCP
> scheduling, with
>
> self._supported_extension_aliases.remove("dhcp_agent_scheduler")
>
> ?
>
> Thanks,
>   Neil
>
>
> On Fri, Mar 31, 2017 at 12:52 AM Kevin Benton  wrote:
>
>> Hi,
>>
>> Once [1] merges, a port will not transition to ACTIVE on a subnet with
>> enable_dhcp=True unless something clears the DHCP provisioning block.
>>
>> If your mechanism driver uses the in-tree DHCP agent, there is nothing
>> you need to do. However, if you do not utilize the DHCP agent in your
>> deployment scenarios and you offload DHCP to something else, your mechanism
>> driver must now explicitly acknowledge that DHCP has been provisioned for
>> that port.
>>
>> Acknowledging that DHCP is ready for a port is a one-line call to the
>> provisioning_blocks module[2]. For more information on provisioning blocks,
>> see [3].
>>
>> 1. https://review.openstack.org/452009
>> 2. https://github.com/openstack/neutron/blob/4ed53a880714fd33280064c58e6f91b9ecd3823e/neutron/api/rpc/handlers/dhcp_rpc.py#L292-L294
>> 3. https://docs.openstack.org/developer/neutron/devref/provisioning_blocks.html
>>
>> Cheers,
>> Kevin Benton
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - heads up for mechanism drivers that don't use in tree DHCP agent

2017-03-31 Thread Neil Jerram
Thanks for the heads up, Kevin!

Is this still necessary if a deployment disables the Neutron server's DHCP
scheduling, with

self._supported_extension_aliases.remove("dhcp_agent_scheduler")

?

Thanks,
  Neil


On Fri, Mar 31, 2017 at 12:52 AM Kevin Benton  wrote:

> Hi,
>
> Once [1] merges, a port will not transition to ACTIVE on a subnet with
> enable_dhcp=True unless something clears the DHCP provisioning block.
>
> If your mechanism driver uses the in-tree DHCP agent, there is nothing you
> need to do. However, if you do not utilize the DHCP agent in your
> deployment scenarios and you offload DHCP to something else, your mechanism
> driver must now explicitly acknowledge that DHCP has been provisioned for
> that port.
>
> Acknowledging that DHCP is ready for a port is a one-line call to the
> provisioning_blocks module[2]. For more information on provisioning blocks,
> see [3].
>
> 1. https://review.openstack.org/452009
> 2. https://github.com/openstack/neutron/blob/4ed53a880714fd33280064c58e6f91b9ecd3823e/neutron/api/rpc/handlers/dhcp_rpc.py#L292-L294
> 3. https://docs.openstack.org/developer/neutron/devref/provisioning_blocks.html
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] - heads up for mechanism drivers that don't use in tree DHCP agent

2017-03-30 Thread Kevin Benton
Hi,

Once [1] merges, a port will not transition to ACTIVE on a subnet with
enable_dhcp=True unless something clears the DHCP provisioning block.

If your mechanism driver uses the in-tree DHCP agent, there is nothing you
need to do. However, if you do not utilize the DHCP agent in your
deployment scenarios and you offload DHCP to something else, your mechanism
driver must now explicitly acknowledge that DHCP has been provisioned for
that port.

Acknowledging that DHCP is ready for a port is a one-line call to the
provisioning_blocks module[2]. For more information on provisioning blocks,
see [3].

1. https://review.openstack.org/452009
2. https://github.com/openstack/neutron/blob/4ed53a880714fd33280064c58e6f91b9ecd3823e/neutron/api/rpc/handlers/dhcp_rpc.py#L292-L294
3. https://docs.openstack.org/developer/neutron/devref/provisioning_blocks.html
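
In code, that one-line acknowledgement looks roughly like the following sketch, mirroring the dhcp_rpc.py lines in [2]; the surrounding function is illustrative, and on newer releases the callbacks import comes from neutron_lib.callbacks instead:

from neutron.callbacks import resources
from neutron.db import provisioning_blocks

def _ack_dhcp_ready(plugin_context, port):
    # Clear the DHCP provisioning block so ML2 can transition the port
    # to ACTIVE once all remaining provisioning blocks are gone.
    provisioning_blocks.provisioning_complete(
        plugin_context, port['id'], resources.PORT,
        provisioning_blocks.DHCP_ENTITY)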

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-15 Thread Robert Kukura

RFE is at https://bugs.launchpad.net/neutron/+bug/1673142.

-Bob


On 3/13/17 2:37 PM, Robert Kukura wrote:


Hi Kevin,

I will file the RFE this week.

-Bob


On 3/13/17 2:05 PM, Kevin Benton wrote:

Hi,

At the PTG we briefly discussed a generic system for verifying that 
the appropriate drivers are enforcing a particular user-requested 
feature in ML2 (e.g. security groups, qos, etc).


Is someone planning on working on this for Pike? If so, can you 
please file an RFE so we can prioritize it appropriately? We have to 
decide if we are going to block features based on the enforcement by 
this framework.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-13 Thread Robert Kukura

Hi Kevin,

I will file the RFE this week.

-Bob


On 3/13/17 2:05 PM, Kevin Benton wrote:

Hi,

At the PTG we briefly discussed a generic system for verifying that 
the appropriate drivers are enforcing a particular user-requested 
feature in ML2 (e.g. security groups, qos, etc).


Is someone planning on working on this for Pike? If so, can you please 
file an RFE so we can prioritize it appropriately? We have to decide 
if we are going to block features based on the enforcement by this 
framework.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-13 Thread Kevin Benton
Hi,

At the PTG we briefly discussed a generic system for verifying that the
appropriate drivers are enforcing a particular user-requested feature in
ML2 (e.g. security groups, qos, etc).

Is someone planning on working on this for Pike? If so, can you please file
an RFE so we can prioritize it appropriately? We have to decide if we are
going to block features based on the enforcement by this framework.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

2017-01-05 Thread slawek

Hello,

In a case like You described, ports will be bound by the openvswitch
mechanism driver, because that agent will be found alive on the host. So the
linuxbridge mechanism driver will do nothing to bind such ports.


--
Slawek Kaplonski
sla...@kaplonski.pl

On 05.01.2017 04:51, zhi wrote:

Hi, Kevin. If I load the openvswitch and linuxbridge mechanism drivers in the
neutron server, and run the ovs-agent on compute nodes, what does the
openvswitch mechanism driver do? What does the linuxbridge mechanism driver
do? I think there must be some differences between the openvswitch and the
linuxbridge mechanism drivers, but I can't get the exact point about the two
mechanism drivers when running the ovs-agent on compute nodes.


2017-01-04 16:16 GMT+08:00 Kevin Benton :

Note that with the openvswitch and linuxbridge mechanism drivers, it 
will be safe to have both loaded on the Neutron server at the same time 
since each driver will only bind a port if it has an agent of that type 
running on the host.


On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński  
wrote:

Hello,

I don't know what is hierarchical port binding but about mechanism
drivers, You should use this mechanism driver which L2 agent You are
using on compute/network nodes. If You have OVS L2 agent then You should
have enabled openvswitch mechanism driver.
In general both of those drivers are doing similar work on
neutron-server side because they are checking if proper agent type is
working on host and if other conditions required to bind port are valid.

Mechanism drivers can have also some additional informations about
backend driver, e.g. there is info about supported QoS rule types for
each backend driver (OVS, Linuxbridge and SR-IOV).

BTW. IMHO You should send such questions to openst...@lists.openstack.org


--
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 30 Dec 2016, zhi wrote:


Hi, all

First of all, Happy New Year to everyone!

I have a question about mechanism drivers when using the ML2 driver.

When should I use the openvswitch mechanism driver?

When should I use the linuxbridge mechanism driver?

And when should I use both the openvswitch and linuxbridge mechanism drivers?

In my opinion, the ML2 driver supports hierarchical port binding. By using
hierarchical port binding, neutron will know every binding in the network
topology, won't it? If yes, where can I find all the binding info? And what is
the relationship between hierarchical port binding and mechanism drivers?

Hope for your reply.

Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

2017-01-05 Thread Kevin Benton
The mechanism drivers populate the VIF details that tell nova how it is
supposed to set up the VM port. So the linux bridge driver tells it the port
type is linux bridge[1], and the OVS driver tells it that the type is OVS.

So if you have both loaded and ovs is running on the compute node. The
following steps will happen:

* nova sends a port update populating the host_id of the compute node the
port will be on
* ML2 processes the update and starts the port binding operation and calls
each driver
* The linux bridge mech driver will see that it has no active agents on
that host so it will not bind the port
* The openvswitch mech driver will see that it does have an active agent,
so it will bind the port and populate the details indicating it's an OVS
port
* The updated port with the vif details indicating that it's an OVS port
will be returned to Nova and nova will wire up the port for OVS
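
To make that decision concrete, here is a small self-contained Python model of the flow above; it is purely illustrative (not the actual ML2 code), and the host name and agent inventory are made up:

# Hypothetical inventory of alive agents per host.
AGENTS_ALIVE = {'compute-1': {'Open vSwitch agent'}}

class AgentMechDriver(object):
    def __init__(self, agent_type, vif_type):
        self.agent_type = agent_type
        self.vif_type = vif_type

    def bind(self, host):
        # Bind only if an alive agent of our type runs on the target host.
        if self.agent_type in AGENTS_ALIVE.get(host, set()):
            return {'vif_type': self.vif_type}
        return None

def bind_port(host, drivers):
    # ML2 asks each registered driver in turn; the first successful
    # bind wins and its VIF details are returned to Nova.
    for driver in drivers:
        details = driver.bind(host)
        if details is not None:
            return details
    return {'vif_type': 'binding_failed'}

drivers = [AgentMechDriver('Linux bridge agent', 'bridge'),
           AgentMechDriver('Open vSwitch agent', 'ovs')]
print(bind_port('compute-1', drivers))  # -> {'vif_type': 'ovs'}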




1.
https://github.com/openstack/neutron/blob/bcd6fddb127f4fe3f7ce3415f5b5e0da910e0e0b/neutron/plugins/ml2/drivers/linuxbridge/mech_driver/mech_linuxbridge.py#L40-L43

On Wed, Jan 4, 2017 at 7:51 PM, zhi  wrote:

> Hi, Kevin. If I load the openvswitch and linuxbridge mechanism drivers in the
> neutron server, and run the ovs-agent on compute nodes, what does the
> openvswitch mechanism driver do? What does the linuxbridge mechanism driver
> do? I think there must be some differences between the openvswitch and the
> linuxbridge mechanism drivers, but I can't get the exact point about the two
> mechanism drivers when running the ovs-agent on compute nodes.
>
> 2017-01-04 16:16 GMT+08:00 Kevin Benton :
>
>> Note that with the openvswitch and linuxbridge mechanism drivers, it will
>> be safe to have both loaded on the Neutron server at the same time since
>> each driver will only bind a port if it has an agent of that type running
>> on the host.
>>
>> On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński 
>> wrote:
>>
>>> Hello,
>>>
>>> I don't know what is hierarchical port binding but about mechanism
>>> drivers, You should use this mechanism driver which L2 agent You are
>>> using on compute/network nodes. If You have OVS L2 agent then You should
>>> have enabled openvswitch mechanism driver.
>>> In general both of those drivers are doing similar work on
>>> neutron-server side because they are checking if proper agent type is
>>> working on host and if other conditions required to bind port are valid.
>>> Mechanism drivers can have also some additional informations about
>>> backend driver, e.g. there is info about supported QoS rule types for
>>> each backend driver (OVS, Linuxbridge and SR-IOV).
>>>
>>> BTW. IMHO You should send such questions to
>>> openst...@lists.openstack.org
>>>
>>> --
>>> Best regards / Pozdrawiam
>>> Sławek Kapłoński
>>> sla...@kaplonski.pl
>>>
>>> On Fri, 30 Dec 2016, zhi wrote:
>>>
>>> > Hi, all
>>> >
>>> > First of all. Happy New year for everyone!
>>> >
>>> > I have a question about mechanism drivers when using ML2 driver.
>>> >
>>> > When should I use openvswitch mechanism driver ?
>>> >
>>> > When should I use linuxbridge mechanism driver ?
>>> >
>>> > And, when should I use openvswitch and linuxbridge mechanism drivers ?
>>> >
>>> > In my opinion, ML2 driver has supported hierarchical port binding. By using
>>> > hierarchical port binding, neutron will know every binding info in network
>>> > topology, isn't it? If yes, where I can found the every binding info. And
>>> > what the relationship between hierarchical port binding and mechanism
>>> > drivers?
>>> >
>>> >
>>> > Hope for your reply.
>>> >
>>> > Thanks
>>> > Zhi Chang
>>>
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List 

Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

2017-01-04 Thread zhi
Hi, Kevin. If I load the openvswitch and linuxbridge mechanism drivers in the
neutron server, and run the ovs-agent on compute nodes, what does the
openvswitch mechanism driver do? What does the linuxbridge mechanism driver
do? I think there must be some differences between the openvswitch and the
linuxbridge mechanism drivers, but I can't get the exact point about the two
mechanism drivers when running the ovs-agent on compute nodes.

2017-01-04 16:16 GMT+08:00 Kevin Benton :

> Note that with the openvswitch and linuxbridge mechanism drivers, it will
> be safe to have both loaded on the Neutron server at the same time since
> each driver will only bind a port if it has an agent of that type running
> on the host.
>
> On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński 
> wrote:
>
>> Hello,
>>
>> I don't know what is hierarchical port binding but about mechanism
>> drivers, You should use this mechanism driver which L2 agent You are
>> using on compute/network nodes. If You have OVS L2 agent then You should
>> have enabled openvswitch mechanism driver.
>> In general both of those drivers are doing similar work on
>> neutron-server side because they are checking if proper agent type is
>> working on host and if other conditions required to bind port are valid.
>> Mechanism drivers can have also some additional informations about
>> backend driver, e.g. there is info about supported QoS rule types for
>> each backend driver (OVS, Linuxbridge and SR-IOV).
>>
>> BTW. IMHO You should send such questions to openst...@lists.openstack.org
>>
>> --
>> Best regards / Pozdrawiam
>> Sławek Kapłoński
>> sla...@kaplonski.pl
>>
>> On Fri, 30 Dec 2016, zhi wrote:
>>
>> > Hi, all
>> >
>> > First of all. Happy New year for everyone!
>> >
>> > I have a question about mechanism drivers when using ML2 driver.
>> >
>> > When should I use openvswitch mechanism driver ?
>> >
>> > When should I use linuxbridge mechanism driver ?
>> >
>> > And, when should I use openvswitch and linuxbridge mechanism drivers ?
>> >
>> > In my opinion, ML2 driver has supported hierarchical port binding. By
>> using
>> > hierarchical port binding,
>> > neutron will know every binding info in network topology, isn't it? If
>> yes,
>> > where I can found the every binding info. And what the relationship
>> between
>> > hierarchical port binding and mechanism drivers?
>> >
>> >
>> > Hope for your reply.
>> >
>> > Thanks
>> > Zhi Chang
>>
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwitch or Linuxbridge or both of them?

2017-01-04 Thread Kevin Benton
Note that with the openvswitch and linuxbridge mechanism drivers, it will
be safe to have both loaded on the Neutron server at the same time since
each driver will only bind a port if it has an agent of that type running
on the host.
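
For illustration, loading both side by side is just a matter of the
ml2_conf.ini mechanism driver list - a minimal sketch, assuming the stock
in-tree driver aliases (option names may differ per release):

    [ml2]
    # Each driver only binds ports on hosts where its own L2 agent
    # is reported alive, so the two coexist safely.
    mechanism_drivers = openvswitch,linuxbridge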

On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> I don't know what is hierarchical port binding but about mechanism
> drivers, You should use this mechanism driver which L2 agent You are
> using on compute/network nodes. If You have OVS L2 agent then You should
> have enabled openvswitch mechanism driver.
> In general both of those drivers are doing similar work on
> neutron-server side because they are checking if proper agent type is
> working on host and if other conditions required to bind port are valid.
> Mechanism drivers can have also some additional informations about
> backend driver, e.g. there is info about supported QoS rule types for
> each backend driver (OVS, Linuxbridge and SR-IOV).
>
> BTW. IMHO You should send such questions to openst...@lists.openstack.org
>
> --
> Best regards / Pozdrawiam
> Sławek Kapłoński
> sla...@kaplonski.pl
>
> On Fri, 30 Dec 2016, zhi wrote:
>
> > Hi, all
> >
> > First of all. Happy New year for everyone!
> >
> > I have a question about mechanism drivers when using ML2 driver.
> >
> > When should I use openvswitch mechanism driver ?
> >
> > When should I use linuxbridge mechanism driver ?
> >
> > And, when should I use openvswitch and linuxbridge mechanism drivers ?
> >
> > In my opinion, ML2 driver has supported hierarchical port binding. By
> using
> > hierarchical port binding,
> > neutron will know every binding info in network topology, isn't it? If
> yes,
> > where I can found the every binding info. And what the relationship
> between
> > hierarchical port binding and mechanism drivers?
> >
> >
> > Hope for your reply.
> >
> > Thanks
> > Zhi Chang
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwitch or Linuxbridge or both of them?

2017-01-03 Thread Sukhdev Kapur
Zhi,

Selection of a driver is deployment dependent. You could run one or more ML2
drivers simultaneously, depending upon your deployment.

Hierarchical Port Binding (HPB) facilitates multi-segmented L2 networks
where the scope of the Segmentation ID is local to a given segment.
For example - if you want to inter-connect two VLAN based network segments
with an overlay network of VXLAN, you would use HPB. With HPB, each VLAN
segment could use the same or different VLAN ID. Therefore, HPB facilitates
the deployments with greater than 4K VLANs.
Without HPB, L2 networks in Neutron are limited to 4K VLANS.

As to the binding information, it is a bit tricky in the case of HPB. There is
no generic CLI in neutron which lists the binding information. However, this
information is available in the driver. Drivers bind the ports dynamically
(segment by segment).
You can refer to the Cisco or Arista ML2 drivers to see how this information is
used/retrieved.
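
For reference, a multi-segment network of the kind HPB builds on can be
created through the multi-provider extension with a request body roughly like
this (the segment values are made-up examples):

    POST /v2.0/networks
    {
        "network": {
            "name": "multisegment-net",
            "segments": [
                {"provider:network_type": "vxlan",
                 "provider:segmentation_id": 5000},
                {"provider:network_type": "vlan",
                 "provider:physical_network": "physnet1",
                 "provider:segmentation_id": 100}
            ]
        }
    }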

regards..
-Sukhdev


On Fri, Dec 30, 2016 at 5:49 AM, zhi  wrote:

> Hi, all
>
> First of all. Happy New year for everyone!
>
> I have a question about mechanism drivers when using ML2 driver.
>
> When should I use openvswitch mechanism driver ?
>
> When should I use linuxbridge mechanism driver ?
>
> And, when should I use openvswitch and linuxbridge mechanism drivers ?
>
> In my opinion, ML2 driver has supported hierarchical port binding. By
> using hierarchical port binding,
> neutron will know every binding info in network topology, isn't it? If
> yes, where I can found the every binding info. And what the relationship
> between hierarchical port binding and mechanism drivers?
>
>
> Hope for your reply.
>
> Thanks
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwitch or Linuxbridge or both of them?

2016-12-30 Thread Sławek Kapłoński
Hello,

I don't know what hierarchical port binding is, but regarding mechanism
drivers: You should use the mechanism driver that matches the L2 agent You are
using on compute/network nodes. If You have the OVS L2 agent, then You should
have the openvswitch mechanism driver enabled.
In general, both of those drivers do similar work on the
neutron-server side: they check whether the proper agent type is
running on the host and whether the other conditions required to bind the
port are met.
Mechanism drivers can also carry some additional information about the
backend driver, e.g. there is info about the supported QoS rule types for
each backend driver (OVS, Linuxbridge and SR-IOV).
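
As a rough illustration of that agent check, the in-tree agent-based drivers
are built on a base class along these lines (a sketch only - the exact
base-class API and constants vary across Neutron releases, and real drivers
also implement a couple of small abstract hooks):

    from neutron.plugins.ml2.drivers import mech_agent

    class SketchLinuxbridgeDriver(mech_agent.SimpleAgentMechanismDriver):
        # Binds a port only if a live agent of the given type is reported
        # on the port's host; otherwise the driver simply declines to bind.
        def __init__(self):
            super(SketchLinuxbridgeDriver, self).__init__(
                agent_type='Linux bridge agent',  # must match the L2 agent
                vif_type='bridge',
                vif_details={'port_filter': True})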

BTW. IMHO You should send such questions to openst...@lists.openstack.org

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 30 Dec 2016, zhi wrote:

> Hi, all
> 
> First of all. Happy New year for everyone!
> 
> I have a question about mechanism drivers when using ML2 driver.
> 
> When should I use openvswitch mechanism driver ?
> 
> When should I use linuxbridge mechanism driver ?
> 
> And, when should I use openvswitch and linuxbridge mechanism drivers ?
> 
> In my opinion, ML2 driver has supported hierarchical port binding. By using
> hierarchical port binding,
> neutron will know every binding info in network topology, isn't it? If yes,
> where I can found the every binding info. And what the relationship between
> hierarchical port binding and mechanism drivers?
> 
> 
> Hope for your reply.
> 
> Thanks
> Zhi Chang

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwitch or Linuxbridge or both of them?

2016-12-30 Thread zhi
Hi, all

First of all, Happy New Year to everyone!

I have a question about mechanism drivers when using the ML2 plugin.

When should I use the openvswitch mechanism driver?

When should I use the linuxbridge mechanism driver?

And when should I use both the openvswitch and linuxbridge mechanism drivers?

In my opinion, the ML2 plugin has supported hierarchical port binding. By using
hierarchical port binding, neutron will know all the binding info in the
network topology, won't it? If yes, where can I find all the binding info? And
what is the relationship between hierarchical port binding and mechanism
drivers?


Hope for your reply.

Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] ml2 dns extension

2016-09-24 Thread Mārtiņš Jakubovičs

Hello all,

I faced an issue: when booting an instance from a shared external network,
designate did not create DNS records. So I looked deeper and found that the
issue is with networks that have router:external set to True.


But from the DNS extension code I did not get why such a feature is restricted:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/extensions/dns_integration.py#L296

Can someone with deeper knowledge describe why allowing DNS record creation in
a network with router:external is bad behavior? I don't see a point in
creating a DNS record with router:external if shared is set to False, but if
it is True, I don't see any particular issues.
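
For reference, the restriction in question boils down to a guard of roughly
this shape (paraphrased for illustration, not the verbatim Neutron code):

    # DNS records are only published for internal networks; networks with
    # router:external=True are skipped regardless of the 'shared' flag.
    if network.get('router:external'):
        return False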


Best regards,

Martins


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-29 Thread Hong Hui Xiao
Hi ML2 team.

I created this patch [1] based on the discussion on the mailing list. Since
it touches the code in ml2 (especially the segment part), could you review it
and give some advice on it?

[1] https://review.openstack.org/#/c/317358/

HongHui Xiao(肖宏辉)



From:   Carl Baldwin <c...@ecbaldwin.net>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date:   05/19/2016 05:34
Subject:        Re: [openstack-dev] [Neutron][ML2][Routed Networks]



On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao <xiaoh...@cn.ibm.com> 
wrote:
> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider bind dhcp port to another segment when
> deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-20 Thread Brandon Logan
On Wed, 2016-05-18 at 15:29 -0600, Carl Baldwin wrote:
> On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  wrote:
> > I update [1] to auto delete dhcp port if there is no other ports. But
> > after the dhcp port is deleted, the dhcp service is not usable. I can
> 
> I think this is what I expect.
> 
> > resume the dhcp service by adding another subnet, but I don't think it is
> > a good way. Do we need to consider bind dhcp port to another segment when
> > deleting the existing one?
> 
> Where would you bind the port?  DHCP requires L2 connectivity to the
> segment which it serves.  But, you deleted the segment.  So, it makes
> sense that it wouldn't work.
> 
> Brandon is working on DHCP scheduling which should take care of this.
> DHCP should be scheduled to all of the segments with DHCP enabled
> subnets.  It should have a port for each of these segments.  So, if a
> segment (and its ports) are deleted, I think the right thing to do is
> to make sure that DHCP scheduling removes DHCP from that segment.  I
> would expect this to happen automatically when the subnet is deleted.
> We should check with Brandon to make sure this works (or will work
> when his work merges).

This is definitely something I've thought about. Basically, I'm treating
each segment as its own network, so in this case the rules that apply to
the network will be carried over for each segment with DHCP-enabled
subnets.

> 
> Carl
> 
> > [1] https://review.openstack.org/#/c/317358


Thanks,
Brandon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Hong Hui Xiao
Thanks for the clarification. If we are going to have DHCP service in
every segment separately, then I think the current behavior is reasonable.
The remaining segments can use DHCP via the DHCP ports in their own
segments.

HongHui Xiao(肖宏辉)




From:   Carl Baldwin <c...@ecbaldwin.net>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date:   05/19/2016 05:34
Subject:        Re: [openstack-dev] [Neutron][ML2][Routed Networks]



On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao <xiaoh...@cn.ibm.com> 
wrote:
> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider bind dhcp port to another segment when
> deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Carl Baldwin
On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  wrote:
> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider bind dhcp port to another segment when
> deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Kevin Benton
>I update [1] to auto delete dhcp port if there is no other ports. But after
the dhcp port is deleted, the dhcp service is not usable.

You mean in the case where the segments are in the same L2 domain, right?
If not, I don't understand why we wouldn't expect a segment that was
deleted to stop working.

Have the DHCP agent scheduler subscribe to segment delete and it can
determine if the network needs to be hosted on any more agents.

On Wed, May 18, 2016 at 4:24 AM, Hong Hui Xiao <xiaoh...@cn.ibm.com> wrote:

> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can
> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider bind dhcp port to another segment when
> deleting the existing one?
>
> [1] https://review.openstack.org/#/c/317358
>
> HongHui Xiao(肖宏辉) PMP®
>
>
> From:   Carl Baldwin <c...@ecbaldwin.net>
> To: OpenStack Development Mailing List
> <openstack-dev@lists.openstack.org>
> Date:   05/18/2016 11:50
> Subject:Re: [openstack-dev] [Neutron][ML2][Routed Networks]
>
>
>
>
On May 17, 2016 2:18 PM, "Kevin Benton" <ke...@benton.pub> wrote:
> >
> > >I kind of think it makes sense to require evacuating a segment of
> its ports before deleting it.
> >
> > Ah, I left out an important assumption I was making. We also need to
> auto delete the DHCP port as the segment is deleted. I was thinking this
> will be basically be like the delete_network case where we will auto
> remove the network owned ports.
> I can go along with that. Thanks for the clarification.
> Carl
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Hong Hui Xiao
I updated [1] to auto-delete the DHCP port if there are no other ports. But
after the DHCP port is deleted, the DHCP service is not usable. I can
resume the DHCP service by adding another subnet, but I don't think that is
a good way. Do we need to consider binding the DHCP port to another segment
when deleting the existing one?

[1] https://review.openstack.org/#/c/317358

HongHui Xiao(肖宏辉) PMP®


From:   Carl Baldwin <c...@ecbaldwin.net>
To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Date:   05/18/2016 11:50
Subject:    Re: [openstack-dev] [Neutron][ML2][Routed Networks]




On May 17, 2016 2:18 PM, "Kevin Benton" <ke...@benton.pub> wrote:
>
> >I kind of think it makes sense to require evacuating a segment of its ports before deleting it.
>
> Ah, I left out an important assumption I was making. We also need to
> auto delete the DHCP port as the segment is deleted. I was thinking this
> will be basically be like the delete_network case where we will auto
> remove the network owned ports.
I can go along with that. Thanks for the clarification.
Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Carl Baldwin
On May 17, 2016 2:18 PM, "Kevin Benton"  wrote:
>
> >I kind of think it makes sense to require evacuating a segment of
its ports before deleting it.
>
> Ah, I left out an important assumption I was making. We also need to auto
delete the DHCP port as the segment is deleted. I was thinking this will be
basically be like the delete_network case where we will auto remove the
network owned ports.

I can go along with that. Thanks for the clarification.

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Kevin Benton
>I kind of think it makes sense to require evacuating a segment of its ports
before deleting it.

Ah, I left out an important assumption I was making. We also need to
auto-delete the DHCP port as the segment is deleted. I was thinking this
will basically be like the delete_network case where we will auto-remove
the network-owned ports.

On Tue, May 17, 2016 at 12:29 PM, Carl Baldwin  wrote:

> On Tue, May 17, 2016 at 10:56 AM, Kevin Benton  wrote:
> >>a) Deleting network's last segment will be prevented. Every network
> should
> >> have at least one segment to let the port to bind.
> >
> > This seems a bit arbitrary to me. If a segment is limited to a small
> part of
> > the datacenter, it being able to bind for one section of the datacenter
> and
> > not the rest is not much different than being able to bind from no
> sections.
> > Just allow it to be deleted because we need to have logic to deal with
> the
> > unbindable port case anyway. Especially since it's a racy check that is
> hard
> > to get correct for little gain.
>
> I agree with Kevin here.
>
> >>b) Deleting the segment that has been associated with subnet will be
> >> prevented.
> >
> > +1
>
> ++
>
> >>c) Deleting the segment that has been bound to port will be prevented.
> >
> > +1.
>
> ++
>
> >>d) Based on c), we need to query ml2_port_binding_levels, I think
> >> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2.
> This
> >> is also because port and segment are both neutron server resources, no
> need
> >> to keep PortBindingLevel at ml2.
> >
> > There are things in this model that make sense only to ML2 (level and
> > driver), especially since ML2 allows a single port_id to appear multiple
> > times in the table (primary key is port_id + level).  To achieve your
> goals
> > in 'C' above, just emit a BEFORE_DELETE event in the callback registry
> for
> > segments. Then ML2 can query this table with a registered callback and
> other
> > plugins can register a callback to prevent this however they want.
>
> Sounds reasonable.
>
> > However, be sure to ignore the DHCP port when preventing segment deletion
> > otherwise having DHCP enabled will make it difficult to get rid of a
> > segment.
>
> They will be left somewhat defunct, won't they?  I think a foreign key
> constraint would be violated if you tried to delete a segment with
> even a DHCP port on it.
>
>   port <- ipallocations (FK) -> subnets (FK) -> networksegments
>
> I guess there is no foreign key constraint holding the ipallocations
> to the port.  So, the ipallocations could be deleted.  But, that is
> effectively stripping an existing port of its IP addresses which would
> be weird.
>
> I kind of think it makes sense to require evacuating a segment of its
> ports before deleting it.
>
> >>e) Is it possible to update a segment(physical_network, segmentation_id,
> or
> >> even network_type), when the segment is being used?
> >
> > I would defer this for future work and not allow it for now. If the
> segment
> > details change, we need to ask the drivers responsible for every bound
> port
> > to make they can support it under the new conditions. It will be quite a
> bit
> > of logic to deal with that I don't think we need to support up front.
>
> ++ Simplify!  We don't have a use case for this now.
>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Carl Baldwin
On Tue, May 17, 2016 at 10:56 AM, Kevin Benton  wrote:
>>a) Deleting network's last segment will be prevented. Every network should
>> have at least one segment to let the port to bind.
>
> This seems a bit arbitrary to me. If a segment is limited to a small part of
> the datacenter, it being able to bind for one section of the datacenter and
> not the rest is not much different than being able to bind from no sections.
> Just allow it to be deleted because we need to have logic to deal with the
> unbindable port case anyway. Especially since it's a racy check that is hard
> to get correct for little gain.

I agree with Kevin here.

>>b) Deleting the segment that has been associated with subnet will be
>> prevented.
>
> +1

++

>>c) Deleting the segment that has been bound to port will be prevented.
>
> +1.

++

>>d) Based on c), we need to query ml2_port_binding_levels, I think
>> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2. This
>> is also because port and segment are both neutron server resources, no need
>> to keep PortBindingLevel at ml2.
>
> There are things in this model that make sense only to ML2 (level and
> driver), especially since ML2 allows a single port_id to appear multiple
> times in the table (primary key is port_id + level).  To achieve your goals
> in 'C' above, just emit a BEFORE_DELETE event in the callback registry for
> segments. Then ML2 can query this table with a registered callback and other
> plugins can register a callback to prevent this however they want.

Sounds reasonable.

> However, be sure to ignore the DHCP port when preventing segment deletion
> otherwise having DHCP enabled will make it difficult to get rid of a
> segment.

They will be left somewhat defunct, won't they?  I think a foreign key
constraint would be violated if you tried to delete a segment with
even a DHCP port on it.

  port <- ipallocations (FK) -> subnets (FK) -> networksegments

I guess there is no foreign key constraint holding the ipallocations
to the port.  So, the ipallocations could be deleted.  But, that is
effectively stripping an existing port of its IP addresses which would
be weird.

I kind of think it makes sense to require evacuating a segment of its
ports before deleting it.

>>e) Is it possible to update a segment(physical_network, segmentation_id, or
>> even network_type), when the segment is being used?
>
> I would defer this for future work and not allow it for now. If the segment
> details change, we need to ask the drivers responsible for every bound port
> to make they can support it under the new conditions. It will be quite a bit
> of logic to deal with that I don't think we need to support up front.

++ Simplify!  We don't have a use case for this now.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Kevin Benton
>a) Deleting network's last segment will be prevented. Every network should have
at least one segment to let the port to bind.

This seems a bit arbitrary to me. If a segment is limited to a small part
of the datacenter, being able to bind in one section of the datacenter and
not the rest is not much different from not being able to bind in any
section. Just allow it to be deleted, because we need to have logic to deal
with the unbindable port case anyway - especially since it's a racy check
that is hard to get correct for little gain.

>b) Deleting the segment that has been associated with subnet will be
prevented.

+1

>c) Deleting the segment that has been bound to port will be prevented.

+1.

>d) Based on c), we need to query ml2_port_binding_levels, I think
neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2. This
is also because port and segment are both neutron server resources, no need
to keep PortBindingLevel at ml2.

There are things in this model that make sense only to ML2 (level and
driver), especially since ML2 allows a single port_id to appear multiple
times in the table (primary key is port_id + level).  To achieve your goals
in 'C' above, just emit a BEFORE_DELETE event in the callback registry for
segments. Then ML2 can query this table with a registered callback and
other plugins can register a callback to prevent this however they want.

However, be sure to ignore the DHCP port when preventing segment deletion
otherwise having DHCP enabled will make it difficult to get rid of a
segment.
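
A sketch of that callback wiring, using the neutron_lib callback registry
paths (older releases had the same API under neutron.callbacks; the payload
key below is an assumption):

    from neutron_lib.callbacks import events, registry, resources

    def prevent_segment_delete(resource, event, trigger, **kwargs):
        segment = kwargs.get('segment')  # assumed payload key
        # Here ML2 would look up ml2_port_binding_levels entries for this
        # segment, skip DHCP-owned ports, and raise to veto the delete if
        # any other bound ports remain.
        pass

    registry.subscribe(prevent_segment_delete,
                       resources.SEGMENT, events.BEFORE_DELETE)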

>e) Is it possible to update a segment(physical_network, segmentation_id, or
even network_type), when the segment is being used?

I would defer this to future work and not allow it for now. If the segment
details change, we need to ask the drivers responsible for every bound port
to make sure they can support it under the new conditions. It will be quite a
bit of logic to deal with, and I don't think we need to support it up front.

On Tue, May 17, 2016 at 8:39 AM, Hong Hui Xiao <xiaoh...@cn.ibm.com> wrote:

> Hi,
>
> I create this patch [1] to allow multi-segmented routed provider networks
> to grow and shrink over time, reviews are welcomed. I found these points
> during working on the patch, and I think it is good to bring them out for
> discussion.
>
> a) Deleting network's last segment will be prevented. Every network should
> have at least one segment to let the port to bind.
>
> b) Deleting the segment that has been associated with subnet will be
> prevented.
>
> c) Deleting the segment that has been bound to port will be prevented.
>
> d) Based on c), we need to query ml2_port_binding_levels, I think
> neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2.
> This is also because port and segment are both neutron server resources,
> no need to keep PortBindingLevel at ml2.
>
> e) Is it possible to update a segment(physical_network, segmentation_id,
> or even network_type), when the segment is being used?
>
> [1] https://review.openstack.org/#/c/317358
>
> HongHui Xiao(肖宏辉)
>
>
>
> From:   Carl Baldwin <c...@ecbaldwin.net>
> To: OpenStack Development Mailing List
> <openstack-dev@lists.openstack.org>
> Date:   05/12/2016 23:36
> Subject:[openstack-dev] [Neutron][ML2][Routed Networks]
>
>
>
> Hi,
>
> Segments are now a first class thing in Neutron with the merge of this
> patch [1].  It exposes API for segments directly.  With ML2, it is
> currently only possible to view segments that have been created
> through the provider net or multi-provider net extensions.  This can
> only be done at network creation time.
>
> In order to allow multi-segmented routed provider networks to grow and
> shrink over time, it is necessary to allow creation and deletion of
> segments through the new segment endpoint.  Hong Hui Xiao has offered
> to help with this.
>
> We need to provide the integration between the service plugin that
> provides the segments endpoint with ML2 to allow the creates and
> deletes to work properly.  We'd like to here from ML2 experts out
> there on how this integration can proceed.  Is there any caution that
> we need to take?  What are the non-obvious aspects of this that we're
> not thinking about?
>
> Carl Baldwin
>
> [1] https://review.openstack.org/#/c/296603/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsub

Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-17 Thread Hong Hui Xiao
Hi,

I created this patch [1] to allow multi-segmented routed provider networks
to grow and shrink over time; reviews are welcome. I found these points
while working on the patch, and I think it is good to bring them out for
discussion.

a) Deleting a network's last segment will be prevented. Every network should
have at least one segment to let ports bind.

b) Deleting a segment that has been associated with a subnet will be
prevented.

c) Deleting a segment that has been bound to a port will be prevented.

d) Based on c), we need to query ml2_port_binding_levels; I think
neutron.plugins.ml2.models.PortBindingLevel should be moved out of ml2.
This is also because port and segment are both neutron server resources,
so there is no need to keep PortBindingLevel at ml2.

e) Is it possible to update a segment (physical_network, segmentation_id,
or even network_type) when the segment is being used?

[1] https://review.openstack.org/#/c/317358

HongHui Xiao(肖宏辉)



From:   Carl Baldwin <c...@ecbaldwin.net>
To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Date:   05/12/2016 23:36
Subject:    [openstack-dev] [Neutron][ML2][Routed Networks]



Hi,

Segments are now a first class thing in Neutron with the merge of this
patch [1].  It exposes an API for segments directly.  With ML2, it is
currently only possible to view segments that have been created
through the provider net or multi-provider net extensions.  This can
only be done at network creation time.

In order to allow multi-segmented routed provider networks to grow and
shrink over time, it is necessary to allow creation and deletion of
segments through the new segment endpoint.  Hong Hui Xiao has offered
to help with this.

We need to provide the integration between the service plugin that
provides the segments endpoint and ML2 to allow the creates and
deletes to work properly.  We'd like to hear from ML2 experts out
there on how this integration can proceed.  Is there any caution that
we need to take?  What are the non-obvious aspects of this that we're
not thinking about?

Carl Baldwin

[1] https://review.openstack.org/#/c/296603/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-12 Thread Carl Baldwin
Hi,

Segments are now a first class thing in Neutron with the merge of this
patch [1].  It exposes an API for segments directly.  With ML2, it is
currently only possible to view segments that have been created
through the provider net or multi-provider net extensions.  This can
only be done at network creation time.

In order to allow multi-segmented routed provider networks to grow and
shrink over time, it is necessary to allow creation and deletion of
segments through the new segment endpoint.  Hong Hui Xiao has offered
to help with this.

We need to provide the integration between the service plugin that
provides the segments endpoint and ML2 to allow the creates and
deletes to work properly.  We'd like to hear from ML2 experts out
there on how this integration can proceed.  Is there any caution that
we need to take?  What are the non-obvious aspects of this that we're
not thinking about?
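
Roughly, the new endpoint accepts requests of this shape (field values are
made-up examples):

    POST /v2.0/segments
    {
        "segment": {
            "network_id": "<network uuid>",
            "network_type": "vlan",
            "physical_network": "physnet1",
            "segmentation_id": 2016
        }
    }

    DELETE /v2.0/segments/<segment uuid>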

Carl Baldwin

[1] https://review.openstack.org/#/c/296603/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2][API] Binding a port to multiple hosts

2016-04-22 Thread Andreas Scheuring
After some discussion about how port binding handling could be improved
during live migration, the following idea came up:

Why not bind a port to both the source and the target host during
migration?

This would allow us to set the port binding for the target already in
pre_live_migration. Doing so, we could verify that port binding works and
get around issues where the instance is stuck in error state after a
migration [2]. Also, things like migration between two L2 agents would work
[2], and the restrictions on live migration with the macvtap agent could be
lifted [3]. Of course some changes to nova are also required -
but first we need to prepare neutron for this.

I have written down a first spec with some information. Looking
forward to discussing this at the summit with a few folks.

https://review.openstack.org/#/c/309416/

Andreas



[1] https://review.openstack.org/#/c/309416/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092073.html
[3] https://bugs.launchpad.net/neutron/+bug/1550400,

-- 
-
Andreas (IRC: scheuran) 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-03-10 Thread Koderer, Marc
Hi folks,

I had a deep dive session with Bob (thx Bob).

We have a plan to solve the issue without any API changes or
Manila driver rework.

The process will look like the following:

1.) In the case of multi-segment/HPB, Manila creates a port like Ironic ML2
would do it [1]:
  vif_type = baremetal
  binding_profile = list of sw ports
  device_owner = manila:ID

2.) Manila waits until all ports are actively bound
2.a) Neutron binds the port through the segments
2.b) A manila-neutron mech driver (a really simple one) fulfils the binding
   and sets:
 vif_details = {“share_segmentation_id" = XX,
   “share_network_type" = YY}

3.) When the port becomes active Manila has all it needs to proceed

In 2.b the Manila MD will only search for device_owner == “manila:”,
set the needed details and fulfil the binding (~10 LOC); see the sketch below.
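
A sketch of such a driver - method and constant names follow the ML2 driver
API, but treat the body as illustrative rather than the final code:

    from neutron.plugins.ml2 import driver_api as api

    class ManilaMechanismDriver(api.MechanismDriver):
        def initialize(self):
            pass

        def bind_port(self, context):
            if not context.current['device_owner'].startswith('manila:'):
                return
            # Bind on the first candidate segment and hand the share
            # driver what it needs through vif_details.
            segment = context.segments_to_bind[0]
            context.set_binding(
                segment[api.ID],
                'other',  # vif_type; no agent plumbing is needed
                {'share_segmentation_id': segment[api.SEGMENTATION_ID],
                 'share_network_type': segment[api.NETWORK_TYPE]})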

We also discussed using ML2 GW [2] or trunk ports [3].
I consider the design above simple enough to get merged smoothly
in Newton.

@Ben: Will be back for bug hunting now.

Regards
Marc

[1]: 
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
 

[2]: https://wiki.openstack.org/wiki/Neutron/L2-GW
[3]: https://wiki.openstack.org/wiki/Neutron/TrunkPort


> On 09 Mar 2016, at 09:25, Koderer, Marc  wrote:
> 
>> 
>> On 09 Mar 2016, at 08:43, Sukhdev Kapur wrote:
>> 
>> Hi Marc, 
>> 
>> I am driving the ironic-ml2 integration and introduced baremetal type for 
>> vmic_type. 
> 
> Basically that’s my plan. So in my current implementation
> I use the baremetal vnic_type [1] and add a binding profile [2].
> 
>> You can very much use the same integration here - however, I am not 
>> completely clear about your use case. 
>> Do you want neutron/ML2 to plumb the network for Manila or do you want to 
>> find out what VLAN (segmentation id) is used on the segment which connects 
>> TOR to the storage device? 
> 
> Generally I want to find the best architecture for this feature :)
> Introducing a neutron-agent that does the plumbing will mean this agent needs
> to have a connection to the storage node itself. So we will end-up in a
> storage agent with a driver model (or an agent for each storage device). On
> the other side it follows the idea that neutron takes care about networking.
> 
> From a Manila perspective the easiest solution would be to have an interface 
> to
> get the segmentation id of the lowest bound segment.
> 
>> 
>> You had this on the agenda of ML2 meeting for tomorrow and I was going to 
>> discuss this with you in the meeting. But, I noticed that you removed it 
>> from the agenda. Do you have what you need? If not, you may want to join us 
>> in the ML2 meeting tomorrow and we can discuss this use case there. 
> 
> Yeah I am sorry - I have to move the topic +1 week due to an internal meeting 
> :(
> But we can have a chat on IRC (mkoderer on freenode).
> 
> Regards
> Marc
> 
> [1]: https://review.openstack.org/#/c/283494/ 
> 
> [2]: https://review.openstack.org/#/c/284034/ 
> 
> 
> 
>> 
>> -Sukhdev
>> 
>> 
>> On Tue, Mar 1, 2016 at 1:08 AM, Koderer, Marc wrote:
>> 
>>> On 01 Mar 2016, at 06:22, Kevin Benton wrote:
>>> 
>>> >This seems gross and backwards. It makes sense as a short term hack but 
>>> >given that we have time to design this correctly I'd prefer to get this 
>>> >information in a more straighforward way.
>>> 
>>> Well it depends on what is happening here. If Manilla is wiring up a 
>>> specific VLAN for a port, that makes it part of the port binding process, 
>>> in which case it should be an ML2 driver. Can you provide some more details 
>>> about what Manilla is doing with this info?
>> 
>> The VLAN segment ID and IP address is used in the share driver to configure 
>> the
>> corresponding interface resources within the storage. Just to give some
>> examples:
>> 
>>  - NetApp driver uses it to create a logical interface and assign it to a
>>“storage virtual machine” [1]
>>  - EMC driver does it in similar manner [2]
>> 
>> My idea was to use the same principle as ironic ml2 intregration is doing [3]
>> by setting the vnic_type to “baremetal”.
>> 
>> In Manila's current implementation storage drivers are also responsible to
>> setup the right networking setup. Would you suggest to move this part into 
>> the
>> port binding phase?
>> 
>> Regards
>> Marc
>> 
>> 
>> [1]: 
>> https://github.com/openstack/manila/blob/master/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L272
>>  
>> 

Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-03-09 Thread Koderer, Marc

> On 09 Mar 2016, at 08:43, Sukhdev Kapur  wrote:
> 
> Hi Marc, 
> 
> I am driving the ironic-ml2 integration and introduced baremetal type for 
> vmic_type. 

Basically that’s my plan. So in my current implementation
I use the baremetal vnic_type [1] and add a binding profile [2].

> You can very much use the same integration here - however, I am not 
> completely clear about your use case. 
> Do you want neutron/ML2 to plumb the network for Manila or do you want to 
> find out what VLAN (segmentation id) is used on the segment which connects 
> TOR to the storage device? 

Generally I want to find the best architecture for this feature :)
Introducing a neutron agent that does the plumbing would mean this agent needs
to have a connection to the storage node itself. So we would end up with a
storage agent with a driver model (or an agent for each storage device). On
the other hand, it follows the idea that neutron takes care of networking.

From a Manila perspective, the easiest solution would be to have an interface
to get the segmentation id of the lowest bound segment.
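
For instance, the bottom-most bound segment of a port can be read straight
from the DB with something like the following (table and column names as in
the ml2 models around that time; verify against the release in use):

    SELECT ns.network_type, ns.segmentation_id
      FROM ml2_port_binding_levels bl
      JOIN ml2_network_segments ns ON ns.id = bl.segment_id
     WHERE bl.port_id = :port_id AND bl.host = :host
     ORDER BY bl.level DESC
     LIMIT 1;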

> 
> You had this on the agenda of ML2 meeting for tomorrow and I was going to 
> discuss this with you in the meeting. But, I noticed that you removed it from 
> the agenda. Do you have what you need? If not, you may want to join us in the 
> ML2 meeting tomorrow and we can discuss this use case there. 

Yeah I am sorry - I have to move the topic +1 week due to an internal meeting :(
But we can have a chat on IRC (mkoderer on freenode).

Regards
Marc

[1]: https://review.openstack.org/#/c/283494/ 

[2]: https://review.openstack.org/#/c/284034/ 



> 
> -Sukhdev
> 
> 
> On Tue, Mar 1, 2016 at 1:08 AM, Koderer, Marc wrote:
> 
>> On 01 Mar 2016, at 06:22, Kevin Benton wrote:
>> 
>> >This seems gross and backwards. It makes sense as a short term hack but 
>> >given that we have time to design this correctly I'd prefer to get this 
>> >information in a more straighforward way.
>> 
>> Well it depends on what is happening here. If Manilla is wiring up a 
>> specific VLAN for a port, that makes it part of the port binding process, in 
>> which case it should be an ML2 driver. Can you provide some more details 
>> about what Manilla is doing with this info?
> 
> The VLAN segment ID and IP address is used in the share driver to configure 
> the
> corresponding interface resources within the storage. Just to give some
> examples:
> 
>  - NetApp driver uses it to create a logical interface and assign it to a
>“storage virtual machine” [1]
>  - EMC driver does it in similar manner [2]
> 
> My idea was to use the same principle as ironic ml2 intregration is doing [3]
> by setting the vnic_type to “baremetal”.
> 
> In Manila's current implementation storage drivers are also responsible to
> setup the right networking setup. Would you suggest to move this part into the
> port binding phase?
> 
> Regards
> Marc
> 
> 
> [1]: 
> https://github.com/openstack/manila/blob/master/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L272
>  
> 
> [2]: 
> https://github.com/openstack/manila/blob/master/manila/share/drivers/emc/plugins/vnx/connection.py#L609
>  
> 
> [3]: 
> https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
>  
> 
> 
> 
>> 
>> On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander wrote:
>> On 02/29/2016 04:38 PM, Kevin Benton wrote:
>> You're correct. Right now there is no way via the HTTP API to find which
>> segments a port is bound to.
>> This is something we can certainly consider adding, but it will need an
>> RFE so it wouldn't land until Newton at the earliest.
>> 
>> I believe Newton is the target for this work. This is feature freeze week 
>> after all.
>> 
>> Have you considered writing an ML2 driver that just notifies Manilla of
>> the port's segment info? All of this information is available to ML2
>> drivers in the PortContext object that is passed to them.
>> 
>> This seems gross and backwards. It makes sense as a short term hack but 
>> given that we have time to design this correctly I'd prefer to get this 
>> information in a more straighforward way.
>> 
>> -Ben Swartzlander
>> 
>> 
>> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:
>> 
>> Fixed neutron tag 

Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-03-08 Thread Sukhdev Kapur
Hi Marc,

I am driving the ironic-ml2 integration and introduced the baremetal type for
vnic_type.
You can very much use the same integration here - however, I am not
completely clear about your use case.
Do you want neutron/ML2 to plumb the network for Manila, or do you want to
find out which VLAN (segmentation id) is used on the segment that connects
the ToR switch to the storage device?

You had this on the agenda of ML2 meeting for tomorrow and I was going to
discuss this with you in the meeting. But, I noticed that you removed it
from the agenda. Do you have what you need? If not, you may want to join us
in the ML2 meeting tomorrow and we can discuss this use case there.

-Sukhdev


On Tue, Mar 1, 2016 at 1:08 AM, Koderer, Marc  wrote:

>
> On 01 Mar 2016, at 06:22, Kevin Benton  wrote:
>
> >This seems gross and backwards. It makes sense as a short term hack but
> given that we have time to design this correctly I'd prefer to get this
> information in a more straighforward way.
>
> Well it depends on what is happening here. If Manilla is wiring up a
> specific VLAN for a port, that makes it part of the port binding process,
> in which case it should be an ML2 driver. Can you provide some more details
> about what Manilla is doing with this info?
>
>
> The VLAN segment ID and IP address is used in the share driver to
> configure the
> corresponding interface resources within the storage. Just to give some
> examples:
>
>  - NetApp driver uses it to create a logical interface and assign it to a
>“storage virtual machine” [1]
>  - EMC driver does it in similar manner [2]
>
> My idea was to use the same principle as ironic ml2 intregration is doing
> [3]
> by setting the vnic_type to “baremetal”.
>
> In Manila's current implementation storage drivers are also responsible to
> setup the right networking setup. Would you suggest to move this part into
> the
> port binding phase?
>
> Regards
> Marc
>
>
> [1]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L272
> [2]:
> https://github.com/openstack/manila/blob/master/manila/share/drivers/emc/plugins/vnx/connection.py#L609
> [3]:
> https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
>
>
>
> On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander 
> wrote:
>
>> On 02/29/2016 04:38 PM, Kevin Benton wrote:
>>
>>> You're correct. Right now there is no way via the HTTP API to find which
>>> segments a port is bound to.
>>> This is something we can certainly consider adding, but it will need an
>>> RFE so it wouldn't land until Newton at the earliest.
>>>
>>
>> I believe Newton is the target for this work. This is feature freeze week
>> after all.
>>
>> Have you considered writing an ML2 driver that just notifies Manilla of
>>> the port's segment info? All of this information is available to ML2
>>> drivers in the PortContext object that is passed to them.
>>>
>>
>> This seems gross and backwards. It makes sense as a short term hack but
>> given that we have time to design this correctly I'd prefer to get this
>> information in a more straighforward way.
>>
>> -Ben Swartzlander
>>
>>
>>> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:
>>>
>>> Fixed neutron tag in the subject.
>>>
>>> Marc wrote:
>>>
>>> Hi Neutron team,
>>>
>>> I am currently working on a feature for hierarchical port
>>> binding support in
>>> Manila [1] [2]. Just to give some context: In the current
>>> implementation Manila
>>> creates a neutron port but let it unbound (state DOWN).
>>> Therefore Manila uses
>>> the port create only retrieve an IP address and segmentation ID
>>> (some drivers
>>> only support VLAN here).
>>>
>>> My idea is to change this behavior and do an actual port binding
>>> action so that
>>> the configuration of VLAN isn’t a manual job any longer. And
>>> that multi-segment
>>> and HPB is supported on the long-run.
>>>
>>> My current issue is: How can Manila retrieve the segment
>>> information for a
>>> bound port? Manila only is interested in the last (bottom)
>>> segmentation ID
>>> since I assume the storage is connected to a ToR switch.
>>>
>>> Database-wise it’s possible to query it using
>>> ml2_port_binding_levels table.
>>> But AFAIK there is no API to query this. The only information
>>> that is exposed
>>> are all segments of a network. But this is not sufficient to
>>> identify which
>>> segments actually used for a port binding.
>>>
>>> Regards
>>> Marc
>>> SAP SE
>>>
>>> [1]:
>>>
>>> 

Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-03-01 Thread Koderer, Marc

> On 01 Mar 2016, at 06:22, Kevin Benton  wrote:
> 
> >This seems gross and backwards. It makes sense as a short term hack but 
> >given that we have time to design this correctly I'd prefer to get this 
> >information in a more straighforward way.
> 
> Well it depends on what is happening here. If Manilla is wiring up a specific 
> VLAN for a port, that makes it part of the port binding process, in which 
> case it should be an ML2 driver. Can you provide some more details about what 
> Manilla is doing with this info?

The VLAN segment ID and IP address are used in the share driver to configure
the corresponding interface resources within the storage. Just to give some
examples:

 - NetApp driver uses it to create a logical interface and assign it to a
   “storage virtual machine” [1]
 - EMC driver does it in similar manner [2]

My idea was to use the same principle as the ironic ml2 integration does [3],
by setting the vnic_type to “baremetal”.
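
Concretely, that would mean creating the port with a body along these lines
(the binding profile contents are an example of the ironic-style
local_link_information, not values Manila has settled on):

    POST /v2.0/ports
    {
        "port": {
            "network_id": "<share network uuid>",
            "binding:vnic_type": "baremetal",
            "binding:profile": {
                "local_link_information": [
                    {"switch_id": "aa:bb:cc:dd:ee:ff",
                     "port_id": "Eth1/5"}
                ]
            }
        }
    }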

In Manila's current implementation, storage drivers are also responsible for
setting up the right networking. Would you suggest moving this part into the
port binding phase?

Regards
Marc


[1]: 
https://github.com/openstack/manila/blob/master/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L272
 

[2]: 
https://github.com/openstack/manila/blob/master/manila/share/drivers/emc/plugins/vnx/connection.py#L609
 

[3]: 
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
 



> 
> On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander wrote:
> On 02/29/2016 04:38 PM, Kevin Benton wrote:
> You're correct. Right now there is no way via the HTTP API to find which
> segments a port is bound to.
> This is something we can certainly consider adding, but it will need an
> RFE so it wouldn't land until Newton at the earliest.
> 
> I believe Newton is the target for this work. This is feature freeze week 
> after all.
> 
> Have you considered writing an ML2 driver that just notifies Manilla of
> the port's segment info? All of this information is available to ML2
> drivers in the PortContext object that is passed to them.
> 
> This seems gross and backwards. It makes sense as a short term hack but given 
> that we have time to design this correctly I'd prefer to get this information 
> in a more straighforward way.
> 
> -Ben Swartzlander
> 
> 
> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:
> 
> Fixed neutron tag in the subject.
> 
> Marc wrote:
> 
> Hi Neutron team,
> 
> I am currently working on a feature for hierarchical port
> binding support in
> Manila [1] [2]. Just to give some context: In the current
> implementation Manila
> creates a neutron port but leaves it unbound (state DOWN).
> Therefore Manila uses
> the port create only to retrieve an IP address and segmentation ID
> (some drivers
> only support VLAN here).
> 
> My idea is to change this behavior and do an actual port binding
> action so that
> the configuration of VLANs is no longer a manual job, and so that
> multi-segment
> and HPB are supported in the long run.
> 
> My current issue is: How can Manila retrieve the segment
> information for a
> bound port? Manila is only interested in the last (bottom)
> segmentation ID
> since I assume the storage is connected to a ToR switch.
> 
> Database-wise it’s possible to query it using
> ml2_port_binding_levels table.
> But AFAIK there is no API to query this. The only information
> that is exposed
> is the full list of segments of a network. But this is not sufficient to
> identify which
> segments are actually used for a port binding.
> 
> Regards
> Marc
> SAP SE
> 
> [1]: https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
> [2]: https://review.openstack.org/#/c/277731/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 

Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Valeriy Ponomaryov
For the moment Manila does the following:
- creates a Neutron port, which effectively only "reserves" network info;
- then calls its own share driver that supports network handling, and that
driver does the binding on its backend;
- Neutron has no info about the real usage of the port.

So, it is correct to say that the "bind" operation is performed by each Manila
share driver in its own way. Manila's common code (not share-driver-specific)
only does the "reservation" for now.
It would be good to have a common way to write the "bind" info that Manila
performed back to Neutron. It is very undesirable to have to write a separate
Neutron driver for each Manila backend that supports networking.
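
As a rough illustration of that idea (hypothetical flow; the binding
attributes below come from the standard portbindings extension, but nothing
here is existing Manila code):

    # Hypothetical: after the share driver finishes its backend setup,
    # hand the binding details for an existing port back to Neutron.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://controller:5000/v2.0')  # placeholder credentials

    neutron.update_port('PORT_UUID', {'port': {
        'binding:host_id': 'storage-backend-01',  # placeholder host
        'binding:vnic_type': 'baremetal',          # bind without an L2 agent
        'binding:profile': {'backend': 'netapp'},  # free-form driver hints
    }})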

Valeriy

On Tue, Mar 1, 2016 at 7:22 AM, Kevin Benton  wrote:

> >This seems gross and backwards. It makes sense as a short term hack but
> given that we have time to design this correctly I'd prefer to get this
> information in a more straightforward way.
>
> Well it depends on what is happening here. If Manila is wiring up a
> specific VLAN for a port, that makes it part of the port binding process,
> in which case it should be an ML2 driver. Can you provide some more details
> about what Manila is doing with this info?
>
> On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander 
> wrote:
>
>> On 02/29/2016 04:38 PM, Kevin Benton wrote:
>>
>>> You're correct. Right now there is no way via the HTTP API to find which
>>> segments a port is bound to.
>>> This is something we can certainly consider adding, but it will need an
>>> RFE so it wouldn't land until Newton at the earliest.
>>>
>>
>> I believe Newton is the target for this work. This is feature freeze week
>> after all.
>>
>> Have you considered writing an ML2 driver that just notifies Manila of
>>> the port's segment info? All of this information is available to ML2
>>> drivers in the PortContext object that is passed to them.
>>>
>>
>> This seems gross and backwards. It makes sense as a short term hack but
>> given that we have time to design this correctly I'd prefer to get this
>> information in a more straightforward way.
>>
>> -Ben Swartzlander
>>
>>
>>> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:
>>>
>>> Fixed neutron tag in the subject.
>>>
>>> Marc wrote:
>>>
>>> Hi Neutron team,
>>>
>>> I am currently working on a feature for hierarchical port
>>> binding support in
>>> Manila [1] [2]. Just to give some context: In the current
>>> implementation Manila
>>> creates a neutron port but leaves it unbound (state DOWN).
>>> Therefore Manila uses
>>> the port create only to retrieve an IP address and segmentation ID
>>> (some drivers
>>> only support VLAN here).
>>>
>>> My idea is to change this behavior and do an actual port binding
>>> action so that
>>> the configuration of VLANs is no longer a manual job, and so that
>>> multi-segment
>>> and HPB are supported in the long run.
>>>
>>> My current issue is: How can Manila retrieve the segment
>>> information for a
>>> bound port? Manila is only interested in the last (bottom)
>>> segmentation ID
>>> since I assume the storage is connected to a ToR switch.
>>>
>>> Database-wise it’s possible to query it using
>>> ml2_port_binding_levels table.
>>> But AFAIK there is no API to query this. The only information
>>> that is exposed
>>> is the full list of segments of a network. But this is not sufficient to
>>> identify which
>>> segments are actually used for a port binding.
>>>
>>> Regards
>>> Marc
>>> SAP SE
>>>
>>> [1]: https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
>>> [2]: https://review.openstack.org/#/c/277731/
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>>

Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Kevin Benton
>This seems gross and backwards. It makes sense as a short term hack but
given that we have time to design this correctly I'd prefer to get this
information in a more straightforward way.

Well it depends on what is happening here. If Manila is wiring up a
specific VLAN for a port, that makes it part of the port binding process,
in which case it should be an ML2 driver. Can you provide some more details
about what Manila is doing with this info?

On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander 
wrote:

> On 02/29/2016 04:38 PM, Kevin Benton wrote:
>
>> You're correct. Right now there is no way via the HTTP API to find which
>> segments a port is bound to.
>> This is something we can certainly consider adding, but it will need an
>> RFE so it wouldn't land until Newton at the earliest.
>>
>
> I believe Newton is the target for this work. This is feature freeze week
> after all.
>
>> Have you considered writing an ML2 driver that just notifies Manila of
>> the port's segment info? All of this information is available to ML2
>> drivers in the PortContext object that is passed to them.
>>
>
> This seems gross and backwards. It makes sense as a short term hack but
> given that we have time to design this correctly I'd prefer to get this
> information in a more straightforward way.
>
> -Ben Swartzlander
>
>
> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:
>>
>> Fixed neutron tag in the subject.
>>
>> Marc wrote:
>>
>> Hi Neutron team,
>>
>> I am currently working on a feature for hierarchical port
>> binding support in
>> Manila [1] [2]. Just to give some context: In the current
>> implementation Manila
>> creates a neutron port but leaves it unbound (state DOWN).
>> Therefore Manila uses
>> the port create only to retrieve an IP address and segmentation ID
>> (some drivers
>> only support VLAN here).
>>
>> My idea is to change this behavior and do an actual port binding
>> action so that
>> the configuration of VLANs is no longer a manual job, and so that
>> multi-segment
>> and HPB are supported in the long run.
>>
>> My current issue is: How can Manila retrieve the segment
>> information for a
>> bound port? Manila is only interested in the last (bottom)
>> segmentation ID
>> since I assume the storage is connected to a ToR switch.
>>
>> Database-wise it’s possible to query it using
>> ml2_port_binding_levels table.
>> But AFAIK there is no API to query this. The only information
>> that is exposed
>> is the full list of segments of a network. But this is not sufficient to
>> identify which
>> segments are actually used for a port binding.
>>
>> Regards
>> Marc
>> SAP SE
>>
>> [1]:
>>
>> https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
>> [2]: https://review.openstack.org/#/c/277731/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Ben Swartzlander

On 02/29/2016 04:38 PM, Kevin Benton wrote:

You're correct. Right now there is no way via the HTTP API to find which
segments a port is bound to.
This is something we can certainly consider adding, but it will need an
RFE so it wouldn't land until Newton at the earliest.


I believe Newton is the target for this work. This is feature freeze 
week after all.



Have you considered writing an ML2 driver that just notifies Manila of
the port's segment info? All of this information is available to ML2
drivers in the PortContext object that is passed to them.


This seems gross and backwards. It makes sense as a short term hack but 
given that we have time to design this correctly I'd prefer to get this 
information in a more straightforward way.


-Ben Swartzlander



On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:

Fixed neutron tag in the subject.

Marc wrote:

Hi Neutron team,

I am currently working on a feature for hierarchical port
binding support in
Manila [1] [2]. Just to give some context: In the current
implementation Manila
creates a neutron port but leaves it unbound (state DOWN).
Therefore Manila uses
the port create only to retrieve an IP address and segmentation ID
(some drivers
only support VLAN here).

My idea is to change this behavior and do an actual port binding
action so that
the configuration of VLANs is no longer a manual job, and so that
multi-segment
and HPB are supported in the long run.

My current issue is: How can Manila retrieve the segment
information for a
bound port? Manila is only interested in the last (bottom)
segmentation ID
since I assume the storage is connected to a ToR switch.

Database-wise it’s possible to query it using
ml2_port_binding_levels table.
But AFAIK there is no API to query this. The only information
that is exposed
is the full list of segments of a network. But this is not sufficient to
identify which
segments are actually used for a port binding.

Regards
Marc
SAP SE

[1]:
https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
[2]: https://review.openstack.org/#/c/277731/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Robert Kukura
Is Manila actually connecting (i.e. binding) something it controls to a 
Neutron port, similar to how a Neutron L3 or DHCP agent connects a 
network namespace to a port? Or does it just need to know the details 
about a port bound for a VM (or a service)?


If the former, it should probably be using something similar to 
Neutron's interface drivers (or maybe Nova's new VIF library) so it can 
work with arbitrary core plugins or ML2 mechanism drivers, and any 
corresponding L2 agent. If it absolutely requires a VLAN on a node's 
physical network, then Kevin's idea of a Manila-specific mechanism 
driver that does the binding (without involving an L2 agent) may be the 
way to go.
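
A rough sketch of the interface-driver style for the former case
(illustrative only; the UUIDs, device name, and MAC are placeholders, and
loading the agent configuration is elided):

    # Illustrative: plug a device for an already-created Neutron port via
    # an interface driver, rather than talking to one specific backend.
    from oslo_config import cfg
    from neutron.agent.linux import interface

    driver = interface.OVSInterfaceDriver(cfg.CONF)  # config setup elided

    driver.plug(network_id='NET_UUID',
                port_id='PORT_UUID',
                device_name='tap-manila-0',
                mac_address='fa:16:3e:00:00:01',
                bridge='br-int',
                namespace=None)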


-Bob

On 2/29/16 4:38 PM, Kevin Benton wrote:
You're correct. Right now there is no way via the HTTP API to find 
which segments a port is bound to.
This is something we can certainly consider adding, but it will need 
an RFE so it wouldn't land until Newton at the earliest.


Have you considered writing an ML2 driver that just notifies Manila 
of the port's segment info? All of this information is available to 
ML2 drivers in the PortContext object that is passed to them.


On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka wrote:


Fixed neutron tag in the subject.

Marc wrote:

Hi Neutron team,

I am currently working on a feature for hierarchical port
binding support in
Manila [1] [2]. Just to give some context: In the current
implementation Manila
creates a neutron port but leaves it unbound (state DOWN).
Therefore Manila uses
the port create only to retrieve an IP address and segmentation
ID (some drivers
only support VLAN here).

My idea is to change this behavior and do an actual port
binding action so that
the configuration of VLANs is no longer a manual job, and so that
multi-segment
and HPB are supported in the long run.

My current issue is: How can Manila retrieve the segment
information for a
bound port? Manila is only interested in the last (bottom)
segmentation ID
since I assume the storage is connected to a ToR switch.

Database-wise it’s possible to query it using
ml2_port_binding_levels table.
But AFAIK there is no API to query this. The only information
that is exposed
is the full list of segments of a network. But this is not sufficient to
identify which
segments are actually used for a port binding.

Regards
Marc
SAP SE

[1]:
https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
[2]: https://review.openstack.org/#/c/277731/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Kevin Benton
You're correct. Right now there is no way via the HTTP API to find which
segments a port is bound to.
This is something we can certainly consider adding, but it will need an RFE
so it wouldn't land until Newton at the earliest.

Have you considered writing an ML2 driver that just notifies Manila of the
port's segment info? All of this information is available to ML2 drivers in
the PortContext object that is passed to them.
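
For example, a skeletal driver along these lines (a sketch only;
notify_manila stands in for whatever RPC or HTTP callout Manila would
expose, and error handling is omitted):

    # Notification-only mechanism driver. The PortContext attributes used
    # here are part of the ML2 driver API; notify_manila is hypothetical.
    from neutron.plugins.ml2 import driver_api as api

    class ManilaNotifierDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            segment = context.bottom_bound_segment  # last binding level
            if segment:
                notify_manila(port_id=context.current['id'],
                              network_type=segment[api.NETWORK_TYPE],
                              segmentation_id=segment[api.SEGMENTATION_ID])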

On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka 
wrote:

> Fixed neutron tag in the subject.
>
> Marc  wrote:
>
> Hi Neutron team,
>>
>> I am currently working on a feature for hierarchical port binding support
>> in
>> Manila [1] [2]. Just to give some context: In the current implementation
>> Manila
>> creates a neutron port but leaves it unbound (state DOWN). Therefore Manila
>> uses
>> the port create only to retrieve an IP address and segmentation ID (some
>> drivers
>> only support VLAN here).
>>
>> My idea is to change this behavior and do an actual port binding action
>> so that
>> the configuration of VLANs is no longer a manual job, and so that
>> multi-segment
>> and HPB are supported in the long run.
>>
>> My current issue is: How can Manila retrieve the segment information for a
>> bound port? Manila is only interested in the last (bottom) segmentation ID
>> since I assume the storage is connected to a ToR switch.
>>
>> Database-wise it’s possible to query it using ml2_port_binding_levels
>> table.
>> But AFAIK there is no API to query this. The only information that is
>> exposed
>> is the full list of segments of a network. But this is not sufficient to identify
>> which
>> segments are actually used for a port binding.
>>
>> Regards
>> Marc
>> SAP SE
>>
>> [1]:
>> https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
>> [2]: https://review.openstack.org/#/c/277731/
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

2016-02-29 Thread Ihar Hrachyshka

Fixed neutron tag in the subject.

Marc  wrote:


Hi Neutron team,

I am currently working on a feature for hierarchical port binding support in
Manila [1] [2]. Just to give some context: In the current implementation Manila
creates a neutron port but leaves it unbound (state DOWN). Therefore Manila uses
the port create only to retrieve an IP address and segmentation ID (some drivers
only support VLAN here).

My idea is to change this behavior and do an actual port binding action so that
the configuration of VLANs is no longer a manual job, and so that multi-segment
and HPB are supported in the long run.

My current issue is: How can Manila retrieve the segment information for a
bound port? Manila is only interested in the last (bottom) segmentation ID
since I assume the storage is connected to a ToR switch.

Database-wise it's possible to query it using the ml2_port_binding_levels table.
But AFAIK there is no API to query this. The only information that is exposed
is the full list of segments of a network. But this is not sufficient to
identify which segments are actually used for a port binding.

Regards
Marc
SAP SE

[1]: https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support

[2]: https://review.openstack.org/#/c/277731/
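
For reference, the direct database lookup described above could look
roughly like this (illustrative only; the table names match the Mitaka-era
ML2 schema, and the connection URL is a placeholder):

    # Illustrative: fetch the bottom (last) binding level for one port.
    from sqlalchemy import create_engine

    engine = create_engine('mysql://neutron:secret@controller/neutron')
    row = engine.execute(
        "SELECT seg.network_type, seg.segmentation_id "
        "FROM ml2_port_binding_levels bl "
        "JOIN ml2_network_segments seg ON seg.id = bl.segment_id "
        "WHERE bl.port_id = %s "
        "ORDER BY bl.level DESC LIMIT 1", ('PORT_UUID',)).fetchone()
    print(row)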
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-10 Thread Sukhdev Kapur
Hi Gal,

I was hoping you would join us in yesterday's ML2 meeting to discuss this
further. Anyhow, we would love your participation in this activity. Please
check the etherpad and see if you could join us in this sprint. If yes,
please sign up so that the host can make appropriate arrangements.
Regardless, please review the document and provide feedback. Also, see if
you can join next week's meeting.

Thanks
Sukhdev
On Sep 9, 2015 4:50 AM, "Gal Sagie"  wrote:

> Hi Sukhdev,
>
> The common sync framework is something I was also thinking about for some
> time now.
> I think it's a very good idea and would love it if I could participate in the
> talks (and hopefully the implementation as well).
>
> Thanks
> Gal.
>
> On Wed, Sep 9, 2015 at 9:46 AM, Sukhdev Kapur 
> wrote:
>
>> Folks,
>>
>> We are planning on having ML2 coding sprint on October 6 through 8, 2015.
>> Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
>> early-cycle sprint.
>>
>> ML2 team has been discussing the issues related to synchronization of the
>> Neutron DB resources with the back-end drivers. Several issues have been
>> reported when multiple ML2 drivers are deployed in scaled HA deployments.
>> The issues surface when either side (Neutron or back-end HW/drivers)
>> restart and resource view gets out of sync. There is no mechanism in
>> Neutron or ML2 plugin which ensures the synchronization of the state
>> between the front-end and back-end. The drivers either end up implementing
>> their own solutions or they dump the issue on the operators to intervene
>> and correct it manually.
>>
>> We plan on utilizing Task Flow to implement the framework in ML2 plugin
>> which can be leveraged by ML2 drivers to achieve synchronization in a
>> simplified manner.
>>
>> There are a couple of additional items on the Sprint agenda, which are
>> listed on the etherpad [1]. The details of venue and schedule are listed on
>> the etherpad as well. The sprint is hosted by Yahoo Inc.
>> Whoever is interested in the topics listed on the etherpad, is welcome to
>> sign up for the sprint and join us in making this reality.
>>
>> Additionally, we will utilize this sprint to formalize the design
>> proposal(s) for the fish bowl session at Tokyo summit [2]
>>
>> Any questions/clarifications, please join us in our weekly ML2 meeting on
>> Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt
>>
>> Thanks
>> -Sukhdev
>>
>> [1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
>> [2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-09 Thread Sukhdev Kapur
Folks,

We are planning on having ML2 coding sprint on October 6 through 8, 2015.
Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
early-cycle sprint.

ML2 team has been discussing the issues related to synchronization of the
Neutron DB resources with the back-end drivers. Several issues have been
reported when multiple ML2 drivers are deployed in scaled HA deployments.
The issues surface when either side (Neutron or back-end HW/drivers)
restart and resource view gets out of sync. There is no mechanism in
Neutron or ML2 plugin which ensures the synchronization of the state
between the front-end and back-end. The drivers either end up implementing
their own solutions or they dump the issue on the operators to intervene
and correct it manually.

We plan on utilizing TaskFlow to implement a framework in the ML2 plugin
which can be leveraged by ML2 drivers to achieve synchronization in a
simplified manner.
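
As a very rough sketch of the direction (hypothetical tasks only; this is
not the actual proposal):

    # Hypothetical resync flow built with TaskFlow; the task bodies would
    # be filled in by ML2 and the individual drivers.
    from taskflow import engines, task
    from taskflow.patterns import linear_flow

    class ReadNeutronState(task.Task):
        def execute(self):
            pass  # read resource state from the Neutron DB

    class ReconcileBackend(task.Task):
        def execute(self):
            pass  # push differences to the back-end driver

    flow = linear_flow.Flow('ml2-resync')
    flow.add(ReadNeutronState(), ReconcileBackend())
    engines.run(flow)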

There are a couple of additional items on the Sprint agenda, which are listed
on the etherpad [1]. The details of venue and schedule are listed on the
etherpad as well. The sprint is hosted by Yahoo Inc.
Whoever is interested in the topics listed on the etherpad is welcome to
sign up for the sprint and join us in making this a reality.

Additionally, we will utilize this sprint to formalize the design
proposal(s) for the fish bowl session at Tokyo summit [2]

For any questions or clarifications, please join us in our weekly ML2 meeting on
Wednesdays at 1600 UTC (9 AM Pacific) in #openstack-meeting-alt.

Thanks
-Sukhdev

[1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
[2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-09 Thread Gal Sagie
Hi Sukhdev,

The common sync framework is something I was also thinking about for some
time now.
I think it's a very good idea and would love it if I could participate in the
talks (and hopefully the implementation as well).

Thanks
Gal.

On Wed, Sep 9, 2015 at 9:46 AM, Sukhdev Kapur 
wrote:

> Folks,
>
> We are planning on having ML2 coding sprint on October 6 through 8, 2015.
> Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
> early-cycle sprint.
>
> ML2 team has been discussing the issues related to synchronization of the
> Neutron DB resources with the back-end drivers. Several issues have been
> reported when multiple ML2 drivers are deployed in scaled HA deployments.
> The issues surface when either side (Neutron or back-end HW/drivers)
> restart and resource view gets out of sync. There is no mechanism in
> Neutron or ML2 plugin which ensures the synchronization of the state
> between the front-end and back-end. The drivers either end up implementing
> their own solutions or they dump the issue on the operators to intervene
> and correct it manually.
>
> We plan on utilizing Task Flow to implement the framework in ML2 plugin
> which can be leveraged by ML2 drivers to achieve synchronization in a
> simplified manner.
>
> There are a couple of additional items on the Sprint agenda, which are
> listed on the etherpad [1]. The details of venue and schedule are listed on
> the etherpad as well. The sprint is hosted by Yahoo Inc.
> Whoever is interested in the topics listed on the etherpad, is welcome to
> sign up for the sprint and join us in making this reality.
>
> Additionally, we will utilize this sprint to formalize the design
> proposal(s) for the fish bowl session at Tokyo summit [2]
>
> Any questions/clarifications, please join us in our weekly ML2 meeting on
> Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt
>
> Thanks
> -Sukhdev
>
> [1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
> [2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ml2] Any other patch needed for hierarchical port binding?

2015-07-14 Thread loy wolfe
We want to try hierarchical port binding for some ToR VTEPs. However, it
seems that the patch [1] cannot be applied directly onto the Juno
release. So are there any other patches needed when merging this patch?

Thanks a lot!

[1] https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] - generic port binding for ml2 and dvr

2015-06-19 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Rob Kukura,
We are seeing an issue in the ML2 device_context that was modified by you in a
recent patch to get everything from the PortContext instead of the DVR context.
Right now the PortContext does not seem to return the DVR binding status; the
code errors out when accessing it, since the PortContext does not
have a 'status' attribute.

https://bugs.launchpad.net/neutron/+bug/1465434

The error is seen when you try to update a distributed virtual router 
interface.

It errors out in port_update_postcommit.

You mentioned in your TODO comments that it would be addressed by
bug 1367391, but I still see the patch in WIP, waiting for review.

Can you take a look at this bug, please?

Thanks.


Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread Kevin Benton
Coordinating communication between various backends for encapsulation
termination is something that would be really nice to address in Liberty.
I've added it to the etherpad to bring it up at the summit.[1]


1. http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
wrote:

 Hello,

 I think that the easiest way could be to have your own mech_driver (AFAIK such
 drivers are meant for such usage) to talk with external devices and tell them
 what tunnels they should establish.
 With the change to tun_ip that Henry proposes, the l2_pop agent will be able to
 establish tunnels with external devices.

 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think about
  how external gateway could leverage the l2pop framework.
 
  Currently l2pop sends its fdb messages once the status of the port is
  modified. AFAIK, this status is only modified by agents which send
  update_device_up/down().
  This issue also has to be addressed if we want agentless equipment to
 be
  announced through l2pop.
 
  Another way to do it is to introduce some bgp speakers with e-vpn
  capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
  [1] is an opensource bgp speaker which is able to do that.
  BGP is standardized so equipments might already have it embedded.
 
  last summit, we talked about this kind of idea [2]. We were going further
  by introducing the bgp speaker on each compute node, in use case B of
 [2].
 
  [1]https://github.com/Orange-OpenSource/bagpipe-bgp
  [2]
 http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
   1) It can only work with backends with agents;
   2) Only one fixed IP is supported per agent;
   3) Difficult to interact with other backend and world outside of
 Openstack.
  
   L2pop is already widely accepted and deployed in host based overlay,
   however because it uses agent_ip to populate tunnel endpoints, it's very
   hard to co-exist and inter-operating with other vxlan backend,
   especially agentless MD.
  
   A small change is suggested that the tunnel endpoint should not be the
   attribute of *agent*, but be the attribute of *port*, so if we store
   it in something like *binding:tun_ip*, it is much easier for different
   backend to co-exists. Existing ovs agent and bridge need a small
   patch, to put the local agent_ip into the port context binding fields
   when doing port_up rpc.
  
   Several extra benefits may also be obtained by this way:
  
   1) we can easily and naturally create *external vxlan/gre port* which
   is not attached by an Nova booted VM, with the binding:tun_ip set when
   creating;
   2) we can develop some *proxy agent* which manage a bunch of remote
   external backend, without restriction of its agent_ip.
  
   Best Regards,
   Henry
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread Kevin Benton
Whoops, wrong link in last email.

https://etherpad.openstack.org/p/liberty-neutron-summit-topics

On Thu, Apr 2, 2015 at 12:50 AM, Kevin Benton blak...@gmail.com wrote:

 Coordinating communication between various backends for encapsulation
 termination is something that would be really nice to address in Liberty.
 I've added it to the etherpad to bring it up at the summit.[1]


 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

 On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
 wrote:

 Hello,

 I think that the easiest way could be to have your own mech_driver (AFAIK such
 drivers are meant for such usage) to talk with external devices and tell them
 what tunnels they should establish.
 With the change to tun_ip that Henry proposes, the l2_pop agent will be able to
 establish tunnels with external devices.

 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think about
  how external gateway could leverage the l2pop framework.
 
  Currently l2pop sends its fdb messages once the status of the port is
  modified. AFAIK, this status is only modified by agents which send
  update_device_up/down().
  This issue also has to be addressed if we want agentless equipment to
 be
  announced through l2pop.
 
  Another way to do it is to introduce some bgp speakers with e-vpn
  capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
  [1] is an opensource bgp speaker which is able to do that.
  BGP is standardized so equipments might already have it embedded.
 
  last summit, we talked about this kind of idea [2]. We were going
 further
  by introducing the bgp speaker on each compute node, in use case B of
 [2].
 
  [1]https://github.com/Orange-OpenSource/bagpipe-bgp
  [2]
 http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
   1) It can only work with backends with agents;
   2) Only one fixed IP is supported per agent;
   3) Difficult to interact with other backend and world outside of
 Openstack.
  
   L2pop is already widely accepted and deployed in host based overlay,
   however because it uses agent_ip to populate tunnel endpoints, it's very
   hard to co-exist and inter-operating with other vxlan backend,
   especially agentless MD.
  
   A small change is suggested that the tunnel endpoint should not be the
   attribute of *agent*, but be the attribute of *port*, so if we store
   it in something like *binding:tun_ip*, it is much easier for different
   backend to co-exists. Existing ovs agent and bridge need a small
   patch, to put the local agent_ip into the port context binding fields
   when doing port_up rpc.
  
   Several extra benefits may also be obtained by this way:
  
   1) we can easily and naturally create *external vxlan/gre port* which
   is not attached by an Nova booted VM, with the binding:tun_ip set when
   creating;
   2) we can develop some *proxy agent* which manage a bunch of remote
   external backend, without restriction of its agent_ip.
  
   Best Regards,
   Henry
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread henry hly
On Thu, Apr 2, 2015 at 3:51 PM, Kevin Benton blak...@gmail.com wrote:
 Whoops, wrong link in last email.

 https://etherpad.openstack.org/p/liberty-neutron-summit-topics

 On Thu, Apr 2, 2015 at 12:50 AM, Kevin Benton blak...@gmail.com wrote:

 Coordinating communication between various backends for encapsulation
 termination is something that would be really nice to address in Liberty.
 I've added it to the etherpad to bring it up at the summit.[1]


Thanks a lot, Kevin.
I think it's really important, as more customers are asking about
coordinating various backends.


 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

 On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
 wrote:

 Hello,

 I think that the easiest way could be to have your own mech_driver (AFAIK such
 drivers are meant for such usage) to talk with external devices and tell them
 what tunnels they should establish.

Sure, I agree.

 With the change to tun_ip that Henry proposes, the l2_pop agent will be able to
 establish tunnels with external devices.

Maybe not necessary here; the key point is that interaction between l2pop
and the external device MD is needed. Below are just some very basic
ideas:

1) MD as the plugin-side agent?
*  each MD registers a hook in l2pop, then l2pop calls the hook list as
well as notifying the agent;
*  the MD simulates an update_device_up/down, however with binding:tun_ip
because it has no agent_ip;
* how the MD gets port status updates remains unsolved.

2) Things may be much easier in the case of hierarchical port binding
(merged in Kilo):
* an ovs/linuxbridge agent still exists to produce the update_device_up/down message;
* the external device MD gets the port status update, then adds tun_ip to the port
context, then triggers the l2pop MD?


 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think
  about
  how external gateway could leverage the l2pop framework.
 
  Currently l2pop sends its fdb messages once the status of the port is
  modified. AFAIK, this status is only modified by agents which send
  update_device_up/down().
  This issue also has to be addressed if we want agentless equipment to
  be
  announced through l2pop.
 
  Another way to do it is to introduce some bgp speakers with e-vpn
  capabilities at the control plane of ML2 (as a MD for instance).
  Bagpipe

Hi Mathieu,

Thanks for your idea; the interaction between l2pop and other MDs is
really the key point, and removing agent_ip is just the first step.
BGP speakers are interesting; however, I think the goal is not quite the
same, because I want to keep compatibility with existing deployed l2pop
solutions, and to extend and enhance l2pop rather than replacing it
entirely.

  [1] is an opensource bgp speaker which is able to do that.
  BGP is standardized so equipments might already have it embedded.
 
  last summit, we talked about this kind of idea [2]. We were going
  further
  by introducing the bgp speaker on each compute node, in use case B of
  [2].
 
  [1]https://github.com/Orange-OpenSource/bagpipe-bgp
 
  [2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
   1) It can only work with backends with agents;
   2) Only one fixed IP is supported per agent;
   3) Difficult to interact with other backend and world outside of
   Openstack.
  
   L2pop is already widely accepted and deployed in host based overlay,
   however because it uses agent_ip to populate tunnel endpoints, it's
   very
   hard to co-exist and inter-operating with other vxlan backend,
   especially agentless MD.
  
   A small change is suggested that the tunnel endpoint should not be
   the
   attribute of *agent*, but be the attribute of *port*, so if we store
   it in something like *binding:tun_ip*, it is much easier for
   different
   backend to co-exists. Existing ovs agent and bridge need a small
   patch, to put the local agent_ip into the port context binding fields
   when doing port_up rpc.
  
   Several extra benefits may also be obtained by this way:
  
   1) we can easily and naturally create *external vxlan/gre port* which
   is not attached by an Nova booted VM, with the binding:tun_ip set
   when
   creating;
   2) we can develop some *proxy agent* which manage a bunch of remote
   external backend, without restriction of its agent_ip.
  
   Best Regards,
   Henry
  
  
   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
  __
  OpenStack 

Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-31 Thread Sławek Kapłoński
Hello,

I think that the easiest way could be to have your own mech_driver (AFAIK such
drivers are meant for such usage) to talk with external devices and tell them
what tunnels they should establish.
With the change to tun_ip that Henry proposes, the l2_pop agent will be able to
establish tunnels with external devices.

On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
 hi henry,
 
 thanks for this interesting idea. It would be interesting to think about
 how external gateway could leverage the l2pop framework.
 
 Currently l2pop sends its fdb messages once the status of the port is
 modified. AFAIK, this status is only modified by agents which send
 update_device_up/down().
 This issue also has to be addressed if we want agentless equipment to be
 announced through l2pop.
 
 Another way to do it is to introduce some bgp speakers with e-vpn
 capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
 [1] is an opensource bgp speaker which is able to do that.
 BGP is standardized so equipments might already have it embedded.
 
 last summit, we talked about this kind of idea [2]. We were going further
 by introducing the bgp speaker on each compute node, in use case B of [2].
 
 [1]https://github.com/Orange-OpenSource/bagpipe-bgp
 [2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
 On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
  Hi ML2er,
 
  Today we use agent_ip in L2pop to store endpoints for ports on a
  tunnel type network, such as vxlan or gre. However this has some
  drawbacks:
 
  1) It can only work with backends with agents;
  2) Only one fixed IP is supported per agent;
  3) Difficult to interact with other backend and world outside of Openstack.
 
  L2pop is already widely accepted and deployed in host based overlay,
  however because it uses agent_ip to populate tunnel endpoints, it's very
  hard to co-exist and inter-operating with other vxlan backend,
  especially agentless MD.
 
  A small change is suggested that the tunnel endpoint should not be the
  attribute of *agent*, but be the attribute of *port*, so if we store
  it in something like *binding:tun_ip*, it is much easier for different
  backend to co-exists. Existing ovs agent and bridge need a small
  patch, to put the local agent_ip into the port context binding fields
  when doing port_up rpc.
 
  Several extra benefits may also be obtained by this way:
 
  1) we can easily and naturally create *external vxlan/gre port* which
  is not attached by an Nova booted VM, with the binding:tun_ip set when
  creating;
  2) we can develop some *proxy agent* which manage a bunch of remote
  external backend, without restriction of its agent_ip.
 
  Best Regards,
  Henry
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-30 Thread Mathieu Rohon
hi henry,

thanks for this interesting idea. It would be interesting to think about
how external gateway could leverage the l2pop framework.

Currently l2pop sends its fdb messages once the status of the port is
modified. AFAIK, this status is only modified by agents which send
update_device_up/down().
This issue also has to be addressed if we want agentless equipment to be
announced through l2pop.

Another way to do it is to introduce some bgp speakers with e-vpn
capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
[1] is an opensource bgp speaker which is able to do that.
BGP is standardized, so equipment might already have it embedded.

last summit, we talked about this kind of idea [2]. We were going further
by introducing the bgp speaker on each compute node, in use case B of [2].

[1]https://github.com/Orange-OpenSource/bagpipe-bgp
[2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe

On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:

 Hi ML2er,

 Today we use agent_ip in L2pop to store endpoints for ports on a
 tunnel type network, such as vxlan or gre. However this has some
 drawbacks:

 1) It can only work with backends with agents;
 2) Only one fixed IP is supported per agent;
 3) Difficult to interact with other backend and world outside of Openstack.

 L2pop is already widely accepted and deployed in host based overlay,
 however because it uses agent_ip to populate tunnel endpoints, it's very
 hard to co-exist and inter-operating with other vxlan backend,
 especially agentless MD.

 A small change is suggested that the tunnel endpoint should not be the
 attribute of *agent*, but be the attribute of *port*, so if we store
 it in something like *binding:tun_ip*, it is much easier for different
 backend to co-exists. Existing ovs agent and bridge need a small
 patch, to put the local agent_ip into the port context binding fields
 when doing port_up rpc.

 Several extra benefits may also be obtained by this way:

 1) we can easily and naturally create *external vxlan/gre port* which
 is not attached by an Nova booted VM, with the binding:tun_ip set when
 creating;
 2) we can develop some *proxy agent* which manage a bunch of remote
 external backend, without restriction of its agent_ip.

 Best Regards,
 Henry

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-26 Thread henry hly
Hi ML2er,

Today we use agent_ip in L2pop to store endpoints for ports on a
tunnel type network, such as vxlan or gre. However this has some
drawbacks:

1) It can only work with backends that have agents;
2) Only one fixed IP is supported per agent;
3) It is difficult to interact with other backends and the world outside of OpenStack.

L2pop is already widely accepted and deployed in host-based overlays;
however, because it uses agent_ip to populate tunnel endpoints, it's very
hard to co-exist and inter-operate with other vxlan backends,
especially agentless MDs.

A small change is suggested: the tunnel endpoint should not be an
attribute of the *agent*, but an attribute of the *port*; if we store
it in something like *binding:tun_ip*, it is much easier for different
backends to co-exist. The existing ovs and bridge agents need a small
patch to put the local agent_ip into the port context binding fields
when doing the port_up RPC.
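
To illustrate the suggested change (data shapes only, with a hypothetical
binding attribute; this is not a patch):

    # Today: l2pop fdb entries are keyed by the agent's tunneling IP.
    fdb_today = {'NET_UUID': {
        'network_type': 'vxlan',
        'segment_id': 1001,
        'ports': {'10.0.0.5': [('fa:16:3e:aa:bb:cc', '192.168.0.10')]},
    }}

    # Proposed: the key comes from the port's own binding, so a ToR VTEP
    # or any agentless backend is just another endpoint.
    port = {'binding:tun_ip': '10.0.0.42'}  # hypothetical attribute
    fdb_proposed = {'NET_UUID': {
        'network_type': 'vxlan',
        'segment_id': 1001,
        'ports': {port['binding:tun_ip']: [('fa:16:3e:dd:ee:ff',
                                            '192.168.0.11')]},
    }}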

Several extra benefits may also be obtained this way:

1) we can easily and naturally create an *external vxlan/gre port* which
is not attached to a Nova-booted VM, with binding:tun_ip set at
creation time;
2) we can develop a *proxy agent* which manages a bunch of remote
external backends, without the restriction of a single agent_ip.

Best Regards,
Henry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-26 Thread Kevin Benton
That's just what I mean about horizontal, which is limited for some
features. For example, ports belonging to BSN driver and OVS driver can't
communicate with each other in the same tunnel network, nor do
security groups work across both sides.

There is no tunnel network in this case, just VLAN networks. Security
groups work fine and they can communicate with each other over the network.
The IVS agent wires its ports' security groups and OVS wires its own.
Security group filtering is local to a port. Why didn't you think that
would work?

Those agent notifications are handled by other common code in ML2, so thin
MDs can seamlessly be integrated with each other horizontally for all
features, like tunnel l2pop.

That's just the tunnel coordination issue that has already been brought up.
That's orthogonal to whether or not a mechanism driver is 'thin' or 'fat'.
Someone could implement another 'fat' driver that doesn't communicate with
a backend and it could still be incompatible with the OVS driver if it sets
up tunnels in its own way.


To bring this back to the relevant topic: OVN can have an ML2 driver that
calls a backend without having neutron agents (agents != ML2).
Interoperability with other vxlan drivers will be an issue because there
isn't a general solution for that yet. That's still better (from an
interoperability perspective) than being a monolithic plugin that doesn't
allow anything else to run.

On Wed, Feb 25, 2015 at 10:04 PM, loy wolfe loywo...@gmail.com wrote:


 On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton blak...@gmail.com wrote:

 You can horizontally split as well (if I understand what axis definitions
 you are using). The Big Switch driver for example will bind ports that
 belong to hypervisors running IVS while leaving the OVS driver to bind
 ports attached to hypervisors running OVS.


 That's just what I mean about horizontal, which is limited for some
 features. For example, ports belonging to BSN driver and OVS driver can't
 communicate with each other in the same tunnel network, nor do
 security groups work across both sides.


  I don't fully understand your comments about  the architecture of
 neutron. Most work is delegated to either agents or a backend server.
 Basically every ML2 driver pushes the work via an agent notification or
 an HTTP call of some sort


 Here is the key difference: a thin MD such as ovs or bridge never pushes any
 work to the agent itself; it only handles port binding, acting as a scheduler
 that selects the backend vif type. The agent notifications are handled by other
 common code in ML2, so thin MDs can seamlessly be integrated with each other
 horizontally for all features, like tunnel l2pop. On the other hand, a fat MD
 pushes all work to the backend through HTTP calls, which partly blocks
 horizontal inter-operation with other backends.

 Then I'm thinking about this pattern: ML2 with a thin MD -> agent -> HTTP call
 to the backend, which should be much easier for horizontal inter-operation.


 On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

  I'm a little confused about why Neutron is designed so differently from
  Nova and Cinder. In fact an MD could be very simple, delegating nearly all
  work out to the agent. Remember the Cinder volume manager? The real storage
  backend could also be deployed outside the server farm as dedicated
  hardware, not necessarily a local host-based resource. The agent could act
  as a proxy to an outside module, instead of putting a heavy burden on central
  plugin servers; also, all backends could inter-operate and co-exist
  seamlessly (like a single vxlan across ovs and a ToR in a hybrid deployment).


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com
 wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with 
 OVS
 just fine when using VLAN encapsulation even though some are agent-less.

  So how about security groups, and all the other things which need
  coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
The fact that a system doesn't use a neutron agent is not a good
justification for monolithic vs driver. The VLAN drivers co-exist with OVS
just fine when using VLAN encapsulation even though some are agent-less.

There is a missing way to coordinate connectivity with tunnel networks
across drivers, but that doesn't mean you can't run multiple drivers to
handle different types or just to provide additional features (auditing,
more access control, etc).
On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
Yeah, it seems ML2 at the least should save you a lot of boilerplate.
On Feb 25, 2015 2:32 AM, Russell Bryant rbry...@redhat.com wrote:

 On 02/24/2015 05:38 PM, Kevin Benton wrote:
  OVN implementing its own control plane isn't a good reason to make it a
  monolithic plugin. Many of the ML2 drivers are for technologies with
  their own control plane.
 
  Going with the monolithic plugin only makes sense if you are certain
  that you never want interoperability with other technologies at the
  Neutron level. Instead of ruling that out this early, why not implement
  it as an ML2 driver and then change to a monolithic plugin if you run into
  some fundamental issue with ML2?

 That was my original thinking.  I figure the important code of the ML2
 driver could be reused if/when the switch is needed.  I'd really just
 take the quicker path to making something work unless it's obvious that
 ML2 isn't the right path.  As this thread is still ongoing, it certainly
 doesn't seem obvious.

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
You can horizontally split as well (if I understand what axis definitions
you are using). The Big Switch driver for example will bind ports that
belong to hypervisors running IVS while leaving the OVS driver to bind
ports attached to hypervisors running OVS.

I don't fully understand your comments about the architecture of neutron.
Most work is delegated to either agents or a backend server. Basically
every ML2 driver pushes the work via an agent notification or an HTTP call
of some sort. If you do want to have a discussion about the architecture of
neutron, please start a new thread. This one is related to developing an
OVN plugin/driver and we have already diverged too far.
On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

 I'm a little confused about why Neutron is designed so differently from
 Nova and Cinder. In fact an MD could be very simple, delegating nearly
 everything to the agent. Remember the Cinder volume manager? The real
 storage backend could also be deployed outside the server farm as
 dedicated hardware, not necessarily a local host-based resource. The
 agent could act as the proxy to an outside module instead of putting a
 heavy burden on central plugin servers, and all backends could
 inter-operate and co-exist seamlessly (like a single vxlan across ovs and
 tor in a hybrid deployment)


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for
 “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
In the cases I'm referring to, OVS handles the security groups and
vswitch.  The other drivers handle fabric configuration for VLAN tagging to
the host and whatever other plumbing they want to do.
On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic”
 ☺



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
Oh, what you mean is vertical splitting, while I'm talking about horizontal
splitting.

I'm a little confused about why Neutron is designed so differently with
Nova and Cinder. In fact MD could be very simple, delegating nearly all
things out to agent. Remember Cinder volume manager? The real storage
backend could also be deployed outside the server farm as the dedicated
hardware, not necessary the local host based resource. The agent could act
as the proxy to an outside module, instead of heavy burden on central
plugin servers, and also, all backend can inter-operate and co-exist
seamlessly (like a single vxlan across ovs and tor in hybrid deployment)


On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic”
 ☺



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

so how about security groups, and all the other things which need
coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton blak...@gmail.com wrote:

 You can horizontally split as well (if I understand what axis definitions
 you are using). The Big Switch driver for example will bind ports that
 belong to hypervisors running IVS while leaving the OVS driver to bind
 ports attached to hypervisors running OVS.


That's just what I mean by horizontal, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other in the same tunnel network, nor do
security groups work across both sides.


  I don't fully understand your comments about the architecture of
 neutron. Most work is delegated to either agents or a backend server.
 Basically every ML2 driver pushes the work via an agent notification or
 an HTTP call of some sort


Here is the key difference: a thin MD such as ovs or bridge never pushes
any work to the agent; it only handles port binding, acting like a
scheduler that selects the backend vif type. The agent notifications are
handled by other common code in ML2, so thin MDs can be seamlessly
integrated with each other horizontally for all features, like tunnel
l2pop. On the other hand, a fat MD pushes all of its work to the backend
through HTTP calls, which partly blocks horizontal inter-operation with
other backends.

Then I'm thinking about this pattern: ML2 w/ thin MD - agent - HTTP call
to backend? That should make horizontal inter-operation much easier.


On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

 I'm a little confused about why Neutron is designed so differently from
 Nova and Cinder. In fact an MD could be very simple, delegating nearly
 everything to the agent. Remember the Cinder volume manager? The real
 storage backend could also be deployed outside the server farm as
 dedicated hardware, not necessarily a local host-based resource. The
 agent could act as the proxy to an outside module instead of putting a
 heavy burden on central plugin servers, and all backends could
 inter-operate and co-exist seamlessly (like a single vxlan across ovs and
 tor in a hybrid deployment)


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com
 wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security groups, and all the other things which need
 coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 framework has been designed for the co-existence of multiple
 heterogeneous backends; it works well for all agent solutions: OVS, Linux
 Bridge, and even ofagent.

 However, when it comes to all kinds of agentless solutions, especially
 all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
 Driver becomes the new monolithic place despite the benefits of code
 reduction: MDs can inter-operate neither with each other nor with the
 ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep the thin MDs (with agents) in the ML2
 framework (also inter-operating with the native Neutron L3/service
 plugins), while all the other fat MDs (agentless) go with the old style of
 monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for
 “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Fawad Khaliq
On Wed, Feb 25, 2015 at 5:34 AM, Sukhdev Kapur sukhdevka...@gmail.com
wrote:

 Folks,

 A great discussion. I am not an expert at OVN, hence I want to ask a
 question. The answer may make a case that it should probably be an ML2
 driver as opposed to a monolithic plugin.

 Say a customer wants to deploy an OVN-based solution and use HW devices
 from one vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use
 another vendor for services (e.g. F5 or A10) - how can that be supported?

 If OVN goes in as an ML2 driver, I can then run ML2 and a service plugin
 to achieve the above solution. For a monolithic plugin, don't I have an
 issue?

On the specifics of service plugins: service plugins and standalone plugins
can co-exist to provide a solution with advanced services from different
vendors. Some existing monolithic plugins (e.g. PLUMgrid) have been
deployed using this approach.


 regards..
 -Sukhdev


 On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 I think we're speculating a lot about what would be best for OVN whereas
 we should probably just expose the pros and cons of ML2 drivers vs a
 standalone plugin (as I said earlier, indeed it does not necessarily imply
 monolithic *).

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is
 no longer a plugin but the interface with the API layer, then any choice
 which is not an ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 voice and hardly representative of the developer/operator communities.

 Salvatore


 * In particular, with the advanced services split out, the term monolithic
 simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't
 think ML2 really limits what drivers can do, as long as a virtual network
 can be described as a set of static and possibly dynamic network segments.
 ML2 is intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com
 wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will
 be, and that was my initial concern with doing ML2 vs. full plugin. With
 the HW VTEP support in OVN+OVS, you can tie in physical devices this way.
 Anyways, this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
+1 to separate monolithic OVN plugin

The ML2 framework has been designed for the co-existence of multiple
heterogeneous backends; it works well for all agent solutions: OVS, Linux
Bridge, and even ofagent.

However, when it comes to all kinds of agentless solutions, especially
all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
Driver becomes the new monolithic place despite the benefits of code
reduction: MDs can inter-operate neither with each other nor with the
ovs/bridge agent L2pop; each MD has its own exclusive vxlan
mapping/broadcasting solution.

So my suggestion is to keep the thin MDs (with agents) in the ML2
framework (also inter-operating with the native Neutron L3/service
plugins), while all the other fat MDs (agentless) go with the old style of
monolithic plugin, with all L2-L7 features tightly integrated.

On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” ☺



 Regards,

 Amit Saha

 Cisco, Bangalore





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kyle Mestery
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russell and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


To be honest, after thinking about this last night, I'm now leaning towards
doing this as a full plugin. I don't really envision OVN running with other
plugins, as OVN is implementing its own control plane, as you say. So the
value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does no harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
ML2 MechanismDriver.

Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.
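
As a concrete illustration of such a heterogeneous deployment, the
configuration could look roughly like the sketch below - one mechanism
driver for the vswitch, another for the fabric, and routing from a
separate service plugin. The driver pairing is only an example, not a
recommendation.

# ml2_conf.ini - OVS binds the vswitch ports, a ToR driver the fabric
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,arista

# neutron.conf - L3 comes from a separate service plugin
[DEFAULT]
service_plugins = router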

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long as they
 live in their own tree) will be supported in the foreseeable future. If an
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developers can leverage.
 Among those:
 - The ability to leverage type drivers, which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for performing
 operations on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 driver such as those operating on physical appliances on the data center
 - add your benefit here

 In my opinion OVN developers should look at the ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, it may be worth looking at developing an ML2
 mechanism driver, and perhaps an L3 service plugin.
 It is worth noting that ML2, thanks to its type and mechanism drivers,
 also provides some control plane capabilities. If those capabilities are
 however on OVN's roadmap, it might instead be worth looking at a
 monolithic plugin, which can also be easily implemented by inheriting
 from neutron.db.db_base_plugin_v2.NeutronDbPluginV2 and then adding all
 the python mixins for the extensions the plugin needs to support.
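
As a hedged illustration of that last route (not OVN's actual code), a
minimal monolithic plugin is essentially the core DB plugin plus the
mixins for the extensions it supports:

from neutron.db import db_base_plugin_v2
from neutron.db import external_net_db
from neutron.db import l3_db


class ExamplePlugin(db_base_plugin_v2.NeutronDbPluginV2,
                    external_net_db.External_net_db_mixin,
                    l3_db.L3_NAT_db_mixin):
    """Illustrative only: core API from the DB plugin, L3 from a mixin."""

    supported_extension_aliases = ['external-net', 'router']

    def create_network(self, context, network):
        net = super(ExamplePlugin, self).create_network(context, network)
        # A real plugin would push the change to its backend here.
        return net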

 Salvatore


 On 23 February 2015 at 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Kevin Benton
OVN implementing its own control plane isn't a good reason to make it a
monolithic plugin. Many of the ML2 drivers are for technologies with their
own control plane.

Going with the monolithic plugin only makes sense if you are certain that
you never want interoperability with other technologies at the Neutron
level. Instead of ruling that out this early, why not implement it as an ML2
driver and then change to a monolithic plugin if you run into some
fundamental issue with ML2?
On Feb 24, 2015 8:16 AM, Kyle Mestery mest...@mestery.com wrote:

 On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

 Russell and I have already merged the initial ML2 skeleton driver [1].

 The thinking is that we can always revert to a non-ML2 driver if needed.


 If nothing else an authoritative decision on a design direction saves us
 the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
 However, since the same kind of approach has been adopted for ODL I guess
 this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be, and
 that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


 That was also kind of my point regarding the control plane bits provided
 by ML2 which OVN does not need. Still, the fact that we do not use a
 function does no harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

 See above. I'd like to propose we move OVN to a full plugin instead of an
 ML2 MechanismDriver.

 Kyle


 Salvatore



 Thanks,
 Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

 One example of this that is common now is having the current OVS driver
 responsible for setting up the vswitch and then having a ToR driver (e.g.
 Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

 I suppose with an overlay it's easier to take the route that you don't
 want to be compatible with other networking stuff at the Neutron layer
 (e.g. integration with the 3rd parties is orchestrated somewhere else). In
 that case, the above scenario wouldn't make much sense to support, but it's
 worth keeping in mind.

 On Mon, Feb 23, 2015 at 10:28 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 I think there are a few factors which influence the ML2 driver vs
 monolithic plugin debate, and they mostly depend on OVN rather than
 Neutron.
 From a Neutron perspective both plugins and drivers (as long as they
 live in their own tree) will be supported in the foreseeable future. If an
 ML2 mech driver is not the best option for OVN that would be ok - I don't
 think the Neutron community advises development of an ML2 driver as the
 preferred way for integrating with new backends.

 The ML2 framework provides a long list of benefits that mechanism
 driver developers can leverage.
 Among those:
 - The ability to leverage type drivers, which relieves driver
 developers from dealing with network segment allocation
 - Post-commit and (for most operations) pre-commit hooks for
 performing operations on the backend
 - The ability to leverage some of the features offered by Neutron's
 built-in control-plane such as L2-population
 - A flexible mechanism for enabling driver-specific API extensions
 - Promotes modular development and integration with higher-layer
 services, such as L3. For instance OVN could provide the L2 support for
 Neutron's built-in L3 control plane
 - The (potential afaict) ability of interacting with other mechanism
 driver such as those operating on physical appliances on the data center
 - add your benefit here

 In my opinion OVN developers should look at ML2 benefits, and check
 which ones apply to this specific platform. I'd say that if there are 1 or
 2 checks in the above list, maybe it 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Salvatore Orlando
I think we're speculating a lot about what would be best for OVN whereas we
should probably just expose the pros and cons of ML2 drivers vs a standalone
plugin (as I said earlier, indeed it does not necessarily imply
monolithic *).

I reckon the job of the Neutron community is to provide a full picture to
OVN developers - so that they could make a call on the integration strategy
that best suits them.
On the other hand, if we're planning to commit to a model where ML2 is no
longer a plugin but the interface with the API layer, then any choice
which is not an ML2 driver does not make any sense. Personally I'm not sure
we ever want to do that, at least not in the near/medium term, but I'm one
voice and hardly representative of the developer/operator communities.

Salvatore


* In particular, with the advanced services split out, the term monolithic
simply does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't
 think ML2 really limits what drivers can do, as long as a virtual network
 can be described as a set of static and possibly dynamic network segments.
 ML2 is intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't sound
 immediately useful, there are several potential use cases to consider. One
 is that it allows new technology to be introduced into an existing cloud
 alongside what previously existed. Migration from one ML2 driver to another
 may be a lot simpler (and/or flexible) than migration from one plugin to
 another. Another is that additional drivers can support special cases, such
 as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does no harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.

  Kyle


   Salvatore



  Thanks,
  Kyle

 [1] https://github.com/stackforge/networking-ovn

 On Mon, Feb 23, 2015 at 4:09 PM, Kevin Benton blak...@gmail.com wrote:

 I want to emphasize Salvatore's last two points a bit more. If you go
 with a monolithic plugin, you eliminate the possibility of heterogeneous
 deployments.

  One example of this that is common now is having the current OVS
 driver responsible for setting up the vswitch and then having a ToR driver
 (e.g. Big Switch, Arista, etc) responsible for setting up the fabric.
 Additionally, there is a separate L3 plugin (e.g. the reference one,
 Vyatta, etc) for providing routing.

  I suppose with 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Amit Kumar Saha (amisaha)
Hi,

I am new to OpenStack (and am particularly interested in networking). I am
getting a bit confused by this discussion. Aren’t there already a few
monolithic plugins (that is what I could understand from reading the Networking
chapter of the OpenStack Cloud Administrator Guide, Table 7.3, Available
networking plug-ins)? So how do we have interoperability between those (or do
we not intend to)?

BTW, it is funny that the acronym ML can also be used for “monolithic” ☺

Regards,
Amit Saha
Cisco, Bangalore



From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Wednesday, February 25, 2015 6:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

Folks,

A great discussion. I am not an expert at OVN, hence I want to ask a question.
The answer may make a case that it should probably be an ML2 driver as opposed
to a monolithic plugin.

Say a customer wants to deploy an OVN-based solution and use HW devices from
one vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use another
vendor for services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as an ML2 driver, I can then run ML2 and a service plugin to
achieve the above solution. For a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com wrote:
I think we're speculating a lot about what would be best for OVN whereas we
should probably just expose the pros and cons of ML2 drivers vs a standalone
plugin (as I said earlier, indeed it does not necessarily imply monolithic *).

I reckon the job of the Neutron community is to provide a full picture to OVN
developers - so that they could make a call on the integration strategy that
best suits them.
On the other hand, if we're planning to commit to a model where ML2 is no
longer a plugin but the interface with the API layer, then any choice which is
not an ML2 driver does not make any sense. Personally I'm not sure we ever want
to do that, at least not in the near/medium term, but I'm one voice and hardly
representative of the developer/operator communities.

Salvatore


* In particular, with the advanced services split out, the term monolithic
simply does not mean anything anymore.

On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com wrote:
Kyle, What happened to the long-term potential goal of ML2 driver APIs becoming 
neutron's core APIs? Do we really want to encourage new monolithic plugins?

ML2 is not a control plane - it's really just an integration point for control
planes. Although co-existence of multiple mechanism drivers is possible, and
sometimes very useful, the single-driver case is fully supported. Even with
hierarchical bindings, it's not really ML2 that controls what happens - it's
the drivers within the framework. I don't think ML2 really limits what drivers
can do, as long as a virtual network can be described as a set of static and
possibly dynamic network segments. ML2 is intended to impose as few constraints
on drivers as possible.

My recommendation would be to implement an ML2 mechanism driver for OVN, along 
with any needed new type drivers or extension drivers. I believe this will 
result in a lot less new code to write and maintain.

Also, keep in mind that even if multiple driver co-existence doesn't sound 
immediately useful, there are several potential use cases to consider. One is 
that it allows new technology to be introduced into an existing cloud alongside 
what previously existed. Migration from one ML2 driver to another may be a lot 
simpler (and/or flexible) than migration from one plugin to another. Another is 
that additional drivers can support special cases, such as bare metal, 
appliances, etc..

-Bob

On 2/24/15 11:11 AM, Kyle Mestery wrote:
On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com wrote:
On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:
Russell and I have already merged the initial ML2 skeleton driver [1].
The thinking is that we can always revert to a non-ML2 driver if needed.

If nothing else an authoritative decision on a design direction saves us the 
hassle of going through iterations and discussions.
The integration through ML2 is definitely viable. My opinion however is that 
since OVN implements a full control plane, the control plane bits provided by 
ML2 are not necessary, and a plugin which provides only management layer 
capabilities might be the best solution. Note: this does not mean it has to be 
monolithic. We can still do L3 with a service plugin.
However, since the same kind of approach has been adopted for ODL I guess this 
provides some sort of validation.

To be honest, after thinking about this last night, I'm now leaning towards 
doing this as a full plugin. I don't really envision OVN

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-24 Thread Sukhdev Kapur
Folks,

A great discussion. I am not an expert at OVN, hence I want to ask a question.
The answer may make a case that it should probably be an ML2 driver as
opposed to a monolithic plugin.

Say a customer wants to deploy an OVN-based solution and use HW devices from
one vendor for L2 and L3 (e.g. Arista or Cisco), and wants to use another
vendor for services (e.g. F5 or A10) - how can that be supported?

If OVN goes in as an ML2 driver, I can then run ML2 and a service plugin to
achieve the above solution. For a monolithic plugin, don't I have an issue?

regards..
-Sukhdev


On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 I think we're speculating a lot about what would be best for OVN whereas
 we should probably just expose the pros and cons of ML2 drivers vs a
 standalone plugin (as I said earlier, indeed it does not necessarily imply
 monolithic *).

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is no
 longer a plugin but the interface with the API layer, then any choice
 which is not an ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 voice and hardly representative of the developer/operator communities.

 Salvatore


 * In particular, with the advanced services split out, the term monolithic
 simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - it's really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, it's not really ML2 that
 controls what happens - it's the drivers within the framework. I don't
 think ML2 really limits what drivers can do, as long as a virtual network
 can be described as a set of static and possibly dynamic network segments.
 ML2 is intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com wrote:

  Russell and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing its own control plane, as you
 say. So the value of using ML2 is questionable.


 I'm not sure how useful using OVN with other drivers will be,
 and that was my initial concern with doing ML2 vs. full plugin. With the HW
 VTEP support in OVN+OVS, you can tie in physical devices this way. Anyways,
 this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control plane bits
 provided by ML2 which OVN does not need. Still, the fact that we do not use
 a function does no harm.
 Also I'm not sure if OVN needs a type manager at all. If not, we can
 always implement a no-op type manager, I guess.

See above. I'd like to propose we move OVN to a full plugin instead
 of an ML2 MechanismDriver.

  

[openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-23 Thread Ben Pfaff
[branching off a discussion on ovs-dev at this point:
http://openvswitch.org/pipermail/dev/2015-February/051609.html]

On Fri, Feb 20, 2015 at 6:56 PM, Kyle Mestery mest...@mestery.com wrote:
 One thing to keep in mind, this ties somewhat into my response to Russell
 earlier on the decision around ML2 vs. core plugin. If we do ML2, there are
 type drivers for VLAN, VXLAN, and GRE tunnels. There is no TypeDriver for
 STT tunnels upstream now. It's just an item we need on the TODO list if we
 go down the STT tunnel path.

It was suggested to me off-list that this part of the discussion should be on
openstack-dev, so here it is ;-)
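
For reference, a very rough sketch of the shape a new tunnel type driver
takes, modelled on the existing VLAN/GRE/VXLAN type drivers; the 'stt' name
and the method bodies are illustrative assumptions only:

    from neutron.plugins.ml2 import driver_api as api

    class SttTypeDriver(api.TypeDriver):

        def get_type(self):
            # The network_type string this driver handles.
            return 'stt'

        def initialize(self):
            # Load the configured STT tunnel ID ranges here.
            pass

        def validate_provider_segment(self, segment):
            # Reject provider segments whose segmentation_id falls
            # outside the configured ranges.
            pass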

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] [arp] [l2pop] arp responding for vlan network

2015-02-04 Thread Mathieu Rohon
Hi henry,

It looks great and quite simple, thanks to the work done by the ofagent team.

This kind of work might also be used for DVR, which now supports VLAN
networks [3].

I have some concerns about the patch submitted in [1], so let's review!

[3]https://review.openstack.org/#/c/129884/

On Wed, Feb 4, 2015 at 8:06 AM, henry hly henry4...@gmail.com wrote:

 Hi ML2'ers,

 We have a use case with a large number of VLAN network deployments, and
 want to reduce ARP storms by responding locally.

 Luckily, local ARP response has been implemented since Icehouse; however,
 VLAN is missing from l2pop. Then came this BP [1], which implements plugin
 support of l2pop for configurable network types, and VLAN l2pop for
 ofagent.

 Now I have found a proposal for OVS VLAN support in l2pop [2]; it's very
 small and was submitted as a bugfix, so I want to know: is it possible
 for it to be merged in the K cycle?

 Best regards
 Henry

 [1] https://review.openstack.org/#/c/112947/
 [2] https://bugs.launchpad.net/neutron/+bug/1413056

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] [arp] [l2pop] arp responding for vlan network

2015-02-03 Thread henry hly
Hi ML2'ers,

We have a use case with a large number of VLAN network deployments, and
want to reduce ARP storms by responding locally.

Luckily, local ARP response has been implemented since Icehouse; however,
VLAN is missing from l2pop. Then came this BP [1], which implements plugin
support of l2pop for configurable network types, and VLAN l2pop for
ofagent.

Now I have found a proposal for OVS VLAN support in l2pop [2]; it's very
small and was submitted as a bugfix, so I want to know: is it possible
for it to be merged in the K cycle?

Best regards
Henry

[1] https://review.openstack.org/#/c/112947/
[2] https://bugs.launchpad.net/neutron/+bug/1413056

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Kevin Benton
Your VM must be launched on the controller node then. In a multi-node setup,
the controller will also act as a compute node unless you have disabled the
n-cpu service. The 'host' attribute is specifically there to indicate where a
port is being used. It's not for anything else.

On Mon, Feb 2, 2015 at 1:15 AM, Harshada Kakad harshada.ka...@izeltech.com
wrote:

 Thanks, Kevin, for the reply.
 But the 'host' attribute returns the controller hostname and not the compute
 host name. I have a multi-node setup, and I want to know the compute host
 where the VM gets launched.

 On Mon, Feb 2, 2015 at 2:19 PM, Kevin Benton blak...@gmail.com wrote:

 ML2 makes the hostname available in the context it passes to the drivers
 via the 'host' attribute.[1] This is the only thing Neutron knows about the
 compute node using the port.

 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776

 On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad 
 harshada.ka...@izeltech.com wrote:

 Hi All,

 I am developing an ML2 driver and I want compute host details during
 creation of ports. That is, I have a multi-node setup and when I
 launch a VM I want to get details on which compute node this VM got
 launched on during creation of ports. Can anyone please help me with this.

 Thanks in Advance.

 --
 Regards,
 Harshada Kakad
 Sr. Software Engineer
 C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
 Mobile: 9689187388
 Email: harshada.ka...@izeltech.com
 Website: www.izeltech.com



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,
 Harshada Kakad
 Sr. Software Engineer
 C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
 Mobile: 9689187388
 Email: harshada.ka...@izeltech.com
 Website: www.izeltech.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Kevin Benton
ML2 makes the hostname available in the context it passes to the drivers
via the 'host' attribute.[1] This is the only thing Neutron knows about the
compute node using the port.

1.
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776
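
As a concrete illustration, a minimal sketch of a mechanism driver method
reading that attribute (the method body is hypothetical):

    def create_port_precommit(self, context):
        # 'context' is the ML2 PortContext; 'host' is the node the
        # port is being bound to.
        host = context.host
        # ... driver-specific handling of 'host' ...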

On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad harshada.ka...@izeltech.com
 wrote:

 Hi All,

 I am developing an ML2 driver and I want compute host details during creation
 of ports. That is, I have a multi-node setup and when I launch a VM I
 want to get details on which compute node this VM got launched on during
 creation of ports. Can anyone please help me with this.

 Thanks in Advance.

 --
 Regards,
 Harshada Kakad
 Sr. Software Engineer
 C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
 Mobile: 9689187388
 Email: harshada.ka...@izeltech.com
 Website: www.izeltech.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Harshada Kakad
Thanks, Kevin, for the reply.
But the 'host' attribute returns the controller hostname and not the compute
host name. I have a multi-node setup, and I want to know the compute host
where the VM gets launched.

On Mon, Feb 2, 2015 at 2:19 PM, Kevin Benton blak...@gmail.com wrote:

 ML2 makes the hostname available in the context it passes to the drivers
 via the 'host' attribute.[1] This is the only thing Neutron knows about the
 compute node using the port.

 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776

 On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad 
 harshada.ka...@izeltech.com wrote:

 Hi All,

 I am developing an ML2 driver and I want compute host details during creation
 of ports. That is, I have a multi-node setup and when I launch a VM I
 want to get details on which compute node this VM got launched on during
 creation of ports. Can anyone please help me with this.

 Thanks in Advance.

 --
 Regards,
 Harshada Kakad
 Sr. Software Engineer
 C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
 Mobile: 9689187388
 Email: harshada.ka...@izeltech.com
 Website: www.izeltech.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
Mobile: 9689187388
Email: harshada.ka...@izeltech.com
Website: www.izeltech.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] How to get compute host details

2015-02-01 Thread Harshada Kakad
Hi All,

I am developing an ML2 driver and I want compute host details during creation
of ports. That is, I have a multi-node setup and when I launch a VM I
want to get details on which compute node this VM got launched on during
creation of ports. Can anyone please help me with this.

Thanks in Advance.

-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013, India
Mobile: 9689187388
Email: harshada.ka...@izeltech.com
Website: www.izeltech.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][sqlalchemy] Database table does not exist error

2015-01-22 Thread Jakub Libosvar
On 01/22/2015 01:00 PM, Ettore zugliani wrote:
 I am implementing the precommit part of an ML2 mechanism driver, and right
 now I'm having problems with sqlalchemy.
 I made the class that uses the tables, but when the precommit is called
 an error pops up telling me that the tables don't exist.
 To create the tables, should I use a create_all on initialize? Or is
 there a proper way of doing it?
Hi Ettore,

you need to make a migration script for your class. More info can be
found here: https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

After autogenerating it you can fine-tune it. It's going to be placed at
neutron/db/migration/alembic_migrations/versions/
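
For illustration, a minimal sketch of what such a migration looks like once
autogenerated and fine-tuned; the table, columns, and revision IDs below are
hypothetical placeholders:

    # neutron/db/migration/alembic_migrations/versions/xxx_my_driver_tables.py
    revision = '1234abcd5678'       # placeholder
    down_revision = '8765dcba4321'  # placeholder

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.create_table(
            'my_driver_ports',  # hypothetical table name
            sa.Column('port_id', sa.String(36), primary_key=True),
            sa.Column('backend_id', sa.String(255), nullable=True),
        )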

Kuba

 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2][sqlalchemy] Database table does not exist error

2015-01-22 Thread Ettore zugliani
I am implementing the precommit part of an ML2 mechanism driver, and right now
I'm having problems with sqlalchemy.
I made the class that uses the tables, but when the precommit is called an
error pops up telling me that the tables don't exist.
To create the tables, should I use a create_all on initialize? Or is there a
proper way of doing it?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2][sqlalchemy] Database table does not exist error

2015-01-22 Thread Ettore zugliani
Thank you! I managed to do it following your tip and the guide you sent.

2015-01-22 10:54 GMT-02:00 Jakub Libosvar libos...@redhat.com:

 On 01/22/2015 01:00 PM, Ettore zugliani wrote:
  I am implementing the precommit part of an ML2 mechanism driver, and right
  now I'm having problems with sqlalchemy.
  I made the class that uses the tables, but when the precommit is called
  an error pops up telling me that the tables don't exist.
  To create the tables, should I use a create_all on initialize? Or is
  there a proper way of doing it?
 Hi Ettore,

 you need to make a migration script for your class. More info can be
 found here: https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

 After autogenerating it you can fine-tune it. It's going to be placed at
 neutron/db/migration/alembic_migrations/versions/

 Kuba

 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] - canceling this week's ML2 meeting

2014-12-29 Thread Sukhdev Kapur
Dear fellow ML2'ers,

In the spirit of the holidays, Bob and I decided to cancel this week's ML2 meeting.
We will resume our meetings from January 7th onwards.

Happy New Year to you and your loved ones.

-Sukhdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] no ML2 IRC meeting today

2014-11-12 Thread Robert Kukura
Let's skip the ML2 IRC meeting this week, while some people are still 
traveling and/or recovering. Next week I hope to have good discussions 
regarding a common ML2 driver repo vs. separate repos per vendor, as 
well as the ML2 BPs for Kilo.


-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ml2] How ML2 reflects on the topology?

2014-10-16 Thread Mathieu Rohon
Hi,

If you use the VLAN type driver, TOR switches should be configured in
trunk mode to allow the VLANs specified in the [ml2_type_vlan] section of
ml2_conf.ini.
The VLAN ID range is defined in this section. Any tenant network will use
an ID from this range, and it is totally independent of the tenant ID.
Some mechanism drivers allow you to automatically configure the
TOR switch with the correct VLAN ID on the trunk port connected to the
compute nodes.
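
For example, a minimal sketch of that section (the physical network name and
ID range are illustrative):

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:1000:2999

Any tenant network then gets a segmentation ID from 1000-2999 on physnet1.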

When you create a port, traffic from this port will use the VLAN tag
of the network which owns this port.

Hope this helps

On Wed, Oct 15, 2014 at 7:18 PM, Ettore zugliani
ettorezugli...@gmail.com wrote:
 Hi, I've got a few questions that have been left unanswered on Ask.Openstack
 and on the IRC channel.

 How may the topology be affected by the ML2 API calls? In other words, how
 would a Create Network call affect the actual topology? How is it
 controlled?

 An example: once we receive a Create Network ML2 API call, we don't know
 how exactly it reflects on ANY switch configuration. Supposing that we
 received a create_network with tenant_id = tid and we are using the
 VLAN TypeDriver, should we create a VLAN on the switch with vid = tid?

 On a create_port API call, should we add a specific port -manually- to this
 VLAN? Another thing that comes to mind: is there a default port, or
 do we get the correct port from the Neutron context?

 Thank you

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promisc mode adapters

2014-08-28 Thread Andreas Scheuring
Hi Mathieu, 
please see my comments below.


On Wed, 2014-08-27 at 16:13 +0200, Mathieu Rohon wrote:
 hi irena,
 
 in the proposal of andreas you want to enforce the non-promisc mode
 per l2-agent? 
Yes, kind of. We're currently not sure how to figure out the right
interface for the registration. In the worst case it would be another
config parameter saying "register MACs on ethX". But maybe there's a
better, automated way. We need to have a closer look at this.

 so every port managed by this agent will have to be in a
 non-promisc state?
 at a first read of the mail, I understood that you want to manage that
 per port with an extension.
I guess you came to that conclusion as I mentioned it only for vlan
and flat networking, right? This applies at least to my use case, but
I do not yet have full insight into Irena's SR-IOV use case.
But if it's also only for vlan and flat networking, an extension might
be a good option. We need a better understanding of the problem and
of the extension framework first, though! Thanks for your hint!

 By using an extension, an agent could host promisc and non-promisc
 net-adapters, and other MD could potentially leverage this info (at
 least LB MD).
 On Wed, Aug 27, 2014 at 3:45 PM, Irena Berezovsky ire...@mellanox.com wrote:
  Hi Mathieu,
  We had a short discussion with Andreas about the use case stated below and 
  also considered the SR-IOV related use case.
  It seems that all required changes can be encapsulated in the L2 OVS agent,
  since it only requires adding FDB MAC registration on the adapted interface.
  What was your idea related to extension manager in ML2?
 
  Thanks,
  Irena
 
  -Original Message-
  From: Mathieu Rohon [mailto:]
  Sent: Wednesday, August 27, 2014 3:11 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for 
  non promisc mode adapters
 
  you probably should consider using the future extension manager in ML2:
 
  https://review.openstack.org/#/c/89211/
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Kevin Benton
Ports are bound in order of configured drivers so as long as the
OpenVswitch driver is put first in the list, it will bind the ports it can
and then ODL would bind the leftovers. [1][2] The only missing component is
that ODL doesn't look like it uses l2pop so establishing tunnels between
the OVS agents and the ODL-managed vswitches would be an issue that would
have to be handled via another process.
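
For instance, a minimal sketch of that ordering in ml2_conf.ini, using the
usual driver aliases (illustrative only):

    [ml2]
    mechanism_drivers = openvswitch,opendaylight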

Regardless, my original point is that the driver keeps the neutron
semantics and DB intact. In my opinion, the lack of compatibility with
l2pop isn't an issue with the driver, but more of an issue with how l2pop
was designed. It's very tightly coupled to having agents managed by Neutron
via RPC, which shouldn't be necessary when its primary purpose is to
establish endpoints for overlay tunnels.


1.
https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
2.
https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 built-in, but a shim RESTful proxy for all kinds of SDN controllers should be
 3rd-party, for they keep all the virtual networking data model and service
 logic in their own places, using the Neutron API just as the NB shell (they
 can't even co-work with the built-in l2pop driver for vxlan/gre network
 types today).


 I understand the point you are trying to make, but this blanket statement
 about the data model of drivers/plugins with REST backends is wrong. Look
 at the ODL mechanism driver for a counter-example.[1] The data is still
 stored in Neutron and all of the semantics of the API are maintained. The
 l2pop driver is to deal with decentralized overlays, so I'm not sure how
 its interoperability with the ODL driver is relevant.


 If we create a vxlan network, then can we bind some ports to the built-in
 ovs driver, and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network, under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can just be
 added to them in a heterogeneous multi-backend architecture, or works
 exclusively and has to take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


  I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 built-in, but a shim RESTful proxy for all kinds of SDN controllers should be
 3rd-party, for they keep all the virtual networking data model and service
 logic in their own places, using the Neutron API just as the NB shell (they
 can't even co-work with the built-in l2pop driver for vxlan/gre network
 types today).

 As for Snabb or DPDK OVS (they also plan to support the official qemu
 vhost-user), or some other similar contributions: if one or two of them win
 the war of the high-performance userspace vswitch and receive large
 common interest, then they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam, but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. 
 incubation.

 Cheers,
 -Luke

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread loy wolfe
On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between
 the OVS agents and the ODL-managed vswitches would be an issue that would
 have to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB intact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when its primary purpose is to
 establish endpoints for overlay tunnels.


So why not agent-based? Neutron shouldn't be treated as just a resource
store; built-in backends naturally need things like l2pop and DVR for
distributed, dynamic topology control, so we can't say that something that
is a built-in part is tightly coupled.

On the contrary, 3rd-party backends should adapt themselves to be integrated
into Neutron as thinly as they can, focusing on backend device control rather
than re-implementing core service logic that duplicates Neutron's. BTW,
ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 built-in, but a shim RESTful proxy for all kinds of SDN controllers should be
 3rd-party, for they keep all the virtual networking data model and service
 logic in their own places, using the Neutron API just as the NB shell (they
 can't even co-work with the built-in l2pop driver for vxlan/gre network
 types today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network, then can we bind some ports to the built-in
 ovs driver, and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network, under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can just be
 added to them in a heterogeneous multi-backend architecture, or works
 exclusively and has to take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 built-in, but a shim RESTful proxy for all kinds of SDN controllers should be
 3rd-party, for they keep all the virtual networking data model and service
 logic in their own places, using the Neutron API just as the NB shell (they
 can't even co-work with the built-in l2pop driver for vxlan/gre network
 types today).

 As for Snabb or DPDK OVS (they also plan to support the official qemu
 vhost-user), or some other similar contributions: if one or two of them win
 the war of the high-performance userspace vswitch and receive large
 common interest, then they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community.
  In this cycle we have found kindred spirits in the NFV subteam, but we
  did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. 
 

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Kevin Benton
So why not agent-based?

Maybe I have an experimental operating system that can't run python. Maybe
the RPC channel between compute nodes and Neutron doesn't satisfy certain
security criteria. Regardless of the reason, it doesn't matter because that
is an implementation detail that should be irrelevant to separate ML2
drivers.

l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
Whether or not you're running a chunk of code responding to messages on an
RPC bus and sending heartbeats should not be Neutron's concern. It defeats
the purpose of ML2 if everything that can bind a port has to be running a
neutron RPC-compatible agent.

The l2pop functionality should become part of the tunnel type drivers and
the mechanism drivers should be able to provide the termination endpoints
for the tunnels using whatever mechanism they choose. Agent-based drivers
can use the agent DB to do this and then the REST drivers can provide
whatever termination point they want. This solves the interoperability
problem and relaxes this tight coupling between vxlan and agents.


On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between
 the OVS agents and the ODL-managed vswitches would be an issue that would
 have to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB intact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when its primary purpose is to
 establish endpoints for overlay tunnels.


 So why not agent-based? Neutron shouldn't be treated as just a resource
 store; built-in backends naturally need things like l2pop and DVR for
 distributed, dynamic topology control, so we can't say that something that
 is a built-in part is tightly coupled.

 On the contrary, 3rd-party backends should adapt themselves to be integrated
 into Neutron as thinly as they can, focusing on backend device control
 rather than re-implementing core service logic that duplicates Neutron's.
 BTW, ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com
 wrote:

 I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 built-in, but a shim RESTful proxy for all kinds of SDN controllers should be
 3rd-party, for they keep all the virtual networking data model and service
 logic in their own places, using the Neutron API just as the NB shell (they
 can't even co-work with the built-in l2pop driver for vxlan/gre network
 types today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network, then can we bind some ports to the built-in
 ovs driver, and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network, under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can just be
 added to them in a heterogeneous multi-backend architecture, or works
 exclusively and has to take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

  Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor; it's about built-in
 vs. 3rd-party backend. Built-in must be opensource, but opensource is not
 necessarily built-in. To my mind, the current OVS and linuxbridge are
 

Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promisc mode adapters

2014-08-27 Thread Mathieu Rohon
you probably should consider using the future extension manager in ML2:

https://review.openstack.org/#/c/89211/
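
For context, a rough sketch of the ExtensionDriver shape that review
introduces; the alias, attribute name, and method bodies below are
illustrative assumptions:

    from neutron.plugins.ml2 import driver_api as api

    class PromiscModeExtensionDriver(api.ExtensionDriver):

        def initialize(self):
            pass

        @property
        def extension_alias(self):
            # Hypothetical API extension exposing a per-port flag.
            return 'promisc-mode'

        def process_create_port(self, context, data, result):
            # Copy the per-port flag from the request into the result,
            # so mechanism drivers can act on it.
            result['promisc_mode'] = data.get('promisc_mode', True)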

On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky ire...@mellanox.com wrote:
 Hi Andreas,
 We can definitely set some time to discuss this.
 I am usually available from 5 to 14:00 UTC.
 Let's follow up on IRC (irenab).

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Monday, August 25, 2014 11:00 AM
 To: Irena Berezovsky
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
 promisc mode adapters

 Hi Irena,
 thanks for your reply. Yes, sure, collaboration would be great.
 Do you already have a blueprint out there? Maybe we can sync up this week to
 discuss more details? Because I would like to understand what exactly you're
 looking for. Normally I'm available from 7 UTC to 16 UTC (today only until 13
 UTC). My IRC name is scheuran. Maybe we can get in contact this week!

 You were also talking about SR-IOV. I saw some blueprint mentioning SR-IOV &
 macvtap. Do you have any insights into this one, too? What we would also like
 to do is to introduce macvtap as a network virtualization option. Macvtap also
 registers mac addresses to network adapters...


 Thanks,
 Andreas


 On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
 Hi Andreas,
 Thank you for this initiative.
 We were looking at a similar problem for mixing OVS and SR-IOV on the same
 network adapter, which also requires MAC address registration for OVS ports.
 Please let me know if you would like to collaborate on this effort.

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Friday, August 22, 2014 11:16 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promisc mode adapters

 Thanks for your feedback.

 No, I do not yet have code for it. I just wanted to get a feeling for whether
 such a feature would find acceptance in the community.
 But if that helps I can sit down and start some prototyping while I'm 
 preparing a blueprint spec in parallel.

 The main part of the implementation I wanted to do on my own to get more 
 familiar with the code base and to get more in touch with the community.
 But of course advice and feedback of experienced neutron developers is 
 essential!

 So I will proceed like this
 - Create a blueprint
 - Commit first pieces of code to get early feedback (e.g. ask via the
 mailing list or irc)
 - Upload a spec (as soon as the repo is available for K)

 Does that make sense for you?

 Thanks,
 Andreas



 On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
  I think this sounds reasonable. Do you have code for this already,
  or are you looking for a developer to help implement it?
 
 
  On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
  scheu...@linux.vnet.ibm.com wrote:
  Hi,
  last week I started discussing an extension to the existing neutron
  openvswitch agent to support network adapters that are not in
  promiscuous mode. Now I would like to widen the round to get feedback
  from a broader audience via the mailing list.
 
 
  The Problem
  When driving vlan or flat networking, openvswitch requires a
  network adapter in promiscuous mode.
 
 
  Why not having promiscuous mode in your adapter?
  - Admins like to have full control over their environment and
  which
  network packets enter the system.
  - The network adapter just does not have support for it.
 
 
  What to do?
  Linux net-dev drivers offer an interface to manually register additional
  mac addresses (also called secondary unicast addresses). Using this,
  one can register additional mac addresses to the network adapter. This
  also works via a well-known userspace tool:
 
  `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
 
 
  What to do in openstack?
  As neutron is aware of all the mac addresses that are in use, it's the
  perfect candidate for doing the mac registrations. The idea is to modify
  the neutron openvswitch agent so that it does the registration on port
  add and port remove via the bridge command.
  There would be a new optional configuration parameter, something like
  'non-promisc-mode', that is by default set to false. Only when set to
  true do macs get manually registered. Otherwise the agent behaves like it
  does today. So I guess only very little changes to the agent code are
  required. From my current point of view we do not need any changes to
  the ml2 plug
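
To make the idea concrete, a minimal sketch of the agent-side call wrapping
the `bridge fdb add` command quoted above (the function name and call site
are hypothetical; root-privilege plumbing via rootwrap is omitted):

    from neutron.agent.linux import utils

    def register_mac(device, mac):
        # Equivalent of: bridge fdb add <mac> dev <device>
        utils.execute(['bridge', 'fdb', 'add', mac, 'dev', device])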

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Mathieu Rohon
l2pop is about L2 network optimization with tunnel creation and ARP
responder population (so this is
not only an overlay network optimization; for example, ofagent now uses
l2pop info for flat and vlan optimization [1]).
This optimization is orthogonal to several agent-based mechanism
drivers (lb, ovs, ofagent).
I agree that this optimization should be accessible to every MD, by
providing access to the fdb dict directly from the ML2 DB.
A controller-based MD like ODL could use those fdb entries the same way
agents use them, by optimizing the datapath under its control.

[1]https://review.openstack.org/#/c/114119/

On Wed, Aug 27, 2014 at 10:30 AM, Kevin Benton blak...@gmail.com wrote:
So why not agent-based?

 Maybe I have an experimental operating system that can't run python. Maybe
 the RPC channel between compute nodes and Neutron doesn't satisfy certain
 security criteria. Regardless of the reason, it doesn't matter because that
 is an implementation detail that should be irrelevant to separate ML2
 drivers.

 l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
 Whether or not you're running a chunk of code responding to messages on an
 RPC bus and sending heartbeats should not be Neutron's concern. It defeats
 the purpose of ML2 if everything that can bind a port has to be running a
 neutron RPC-compatible agent.

 The l2pop functionality should become part of the tunnel type drivers and
 the mechanism drivers should be able to provide the termination endpoints
 for the tunnels using whatever mechanism they choose. Agent-based drivers can
 use the agent DB to do this and then the REST drivers can provide whatever
 termination point they want. This solves the interoperability problem and
 relaxes this tight coupling between vxlan and agents.


 On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between the
 OVS agents and the ODL-managed vswitches would be an issue that would have
 to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB intact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when its primary purpose is to
 establish endpoints for overlay tunnels.


 So why not agent-based? Neutron shouldn't be treated as just a resource
 store; built-in backends naturally need things like l2pop and DVR for
 distributed, dynamic topology control, so we can't say that something that
 is a built-in part is tightly coupled.

 On the contrary, 3rd-party backends should adapt themselves to be integrated
 into Neutron as thinly as they can, focusing on backend device control
 rather than re-implementing core service logic that duplicates Neutron's.
 BTW, ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com
 wrote:

  I think that opensource is not the only factor; it's about built-in
  vs. 3rd-party backend. Built-in must be opensource, but opensource is not
  necessarily built-in. To my mind, the current OVS and linuxbridge are
  built-in, but a shim RESTful proxy for all kinds of SDN controllers should
  be 3rd-party, for they keep all the virtual networking data model and
  service logic in their own places, using the Neutron API just as the NB
  shell (they can't even co-work with the built-in l2pop driver for vxlan/gre
  network types today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so 
 I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network, then can we bind some ports to the built-in
 ovs driver, and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network, under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can just be
 added to them in
 a heterogeneous multi-backend architecture, or

Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promisc mode adapters

2014-08-27 Thread Mathieu Rohon
hi irena,

In Andreas's proposal, do you want to enforce the non-promisc mode
per l2-agent? So every port managed by this agent will have to be in a
non-promisc state?
At a first read of the mail, I understood that you wanted to manage that
per port, with an extension.
By using an extension, an agent could host promisc and non-promisc
net-adapters, and other MDs could potentially leverage this info (at
least the LB MD).

On Wed, Aug 27, 2014 at 3:45 PM, Irena Berezovsky ire...@mellanox.com wrote:
 Hi Mathieu,
 We had a short discussion with Andreas about the use case stated below and 
 also considered the SR-IOV related use case.
 It seems that all required changes can be encapsulated in the L2 OVS agent,
 since it only requires adding FDB MAC registration on the adapted interface.
 What was your idea related to extension manager in ML2?

 Thanks,
 Irena

 -Original Message-
 From: Mathieu Rohon [mailto:]
 Sent: Wednesday, August 27, 2014 3:11 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
 promisc mode adapters

 you probably should consider using the future extension manager in ML2:

 https://review.openstack.org/#/c/89211/

 On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky ire...@mellanox.com 
 wrote:
 Hi Andreas,
 We can definitely set some time to discuss this.
 I am usually available from 5 to 14:00 UTC.
 Let's follow up on IRC (irenab).

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Monday, August 25, 2014 11:00 AM
 To: Irena Berezovsky
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promisc mode adapters

 Hi Irena,
 thanks for your reply. Yes, sure, collaboration would be great.
 Do you already have a blueprint out there? Maybe we can sync up this week
 to discuss more details? Because I would like to understand what exactly
 you're looking for. Normally I'm available from 7 UTC to 16 UTC (today only
 until 13 UTC). My IRC name is scheuran. Maybe we can get in contact this
 week!

 You were also talking about SR-IOV. I saw some blueprint mentioning SR-IOV &
 macvtap. Do you have any insights into this one, too? What we would also
 like to do is to introduce macvtap as a network virtualization option.
 Macvtap also registers mac addresses to network adapters...


 Thanks,
 Andreas


 On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
 Hi Andreas,
 Thank you for this initiative.
 We were looking at a similar problem for mixing OVS and SR-IOV on the same
 network adapter, which also requires MAC address registration for OVS
 ports.
 Please let me know if you would like to collaborate on this effort.

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Friday, August 22, 2014 11:16 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promisc mode adapters

 Thanks for your feedback.

 No, I do not yet have code for it. I just wanted to get a feeling for whether
 such a feature would find acceptance in the community.
 But if that helps I can sit down and start some prototyping while I'm 
 preparing a blueprint spec in parallel.

 The main part of the implementation I wanted to do on my own to get more 
 familiar with the code base and to get more in touch with the community.
 But of course advice and feedback of experienced neutron developers is 
 essential!

 So I will proceed like this
 - Create a blueprint
 - Commit first pieces of code to get early feedback (e.g. ask via the
 mailing list or irc)
 - Upload a spec (as soon as the repo is available for K)

 Does that make sense for you?

 Thanks,
 Andreas



 On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
  I think this sounds reasonable. Do you have code for this already,
  or are you looking for a developer to help implement it?
 
 
  On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
  scheu...@linux.vnet.ibm.com wrote:
  Hi,
  last week I started discussing an extension to the existing neutron
  openvswitch agent to support network adapters that are not in
  promiscuous mode. Now I would like to widen the round to get feedback
  from a broader audience via the mailing list.
 
 
  The Problem
  When driving vlan or flat networking, openvswitch requires a
  network adapter in promiscuous mode.
 
 
  Why not having promiscuous mode in your adapter?
  - Admins like to have full control over their environment and
  which
  network packets enter the system.
  - The network adapter just does not have support for it.
 
 
  What to do?
  Linux net-dev drivers offer an interface to manually register
  additional

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Kevin Benton
It's more than just an optimization when it comes to overlay networks
though. It's the only way for agents to establish segment connectivity when
something like vxlan multicast discovery isn't possible. Something as basic
as connectivity shouldn't be l2pop's responsibility; that should
be handled by the tunnel type drivers.

I'm fine with l2pop optimizing things like ARP responses and tunnel
pruning, but a network should still be able to function without l2pop.
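
For reference, a minimal sketch of how l2pop is switched on today, which
shows the coupling being discussed (values are illustrative):

    # ml2_conf.ini
    [ml2]
    mechanism_drivers = openvswitch,l2population

    # OVS agent configuration
    [agent]
    l2_population = True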


On Wed, Aug 27, 2014 at 6:36 AM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:

 l2pop is about L2 network optimization with tunnel creation and ARP
 responder population (so this is
 not only an overlay network optimization; for example, ofagent now uses
 l2pop info for flat and vlan optimization [1]).
 This optimization is orthogonal to several agent-based mechanism
 drivers (lb, ovs, ofagent).
 I agree that this optimization should be accessible to every MD, by
 providing access to the fdb dict directly from the ML2 DB.
 A controller-based MD like ODL could use those fdb entries the same way
 agents use them, by optimizing the datapath under its control.

 [1]https://review.openstack.org/#/c/114119/

 On Wed, Aug 27, 2014 at 10:30 AM, Kevin Benton blak...@gmail.com wrote:
 So why not agent-based?
 
  Maybe I have an experimental operating system that can't run python.
 Maybe
  the RPC channel between compute nodes and Neutron doesn't satisfy certain
  security criteria. Regardless of the reason, it doesn't matter because
 that
  is an implementation detail that should be irrelevant to separate ML2
  drivers.
 
  l2pop should be concerned with tunnel endpoints and tunnel endpoints
 only.
  Whether or not you're running a chunk of code responding to messages on
 an
  RPC bus and sending heartbeats should not be Neutron's concern. It
 defeats
  the purpose of ML2 if everything that can bind a port has to be running a
  neutron RPC-compatible agent.
 
  The l2pop functionality should become part of the tunnel type drivers and
  the mechanism drivers should be able to provide the termination endpoints
  for the tunnels using whatever mechanism they choose. Agent-based drivers can
  use the agent DB to do this and then the REST drivers can provide whatever
  termination point they want. This solves the interoperability problem and
  relaxes this tight coupling between vxlan and agents.
 
 
  On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe loywo...@gmail.com wrote:

  On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

  Ports are bound in order of configured drivers, so as long as the
  OpenVswitch driver is put first in the list, it will bind the ports it can
  and then ODL would bind the leftovers. [1][2] The only missing component
  is that ODL doesn't look like it uses l2pop, so establishing tunnels
  between the OVS agents and the ODL-managed vswitches would be an issue
  that would have to be handled via another process.

  Regardless, my original point is that the driver keeps the neutron
  semantics and DB intact. In my opinion, the lack of compatibility with
  l2pop isn't an issue with the driver, but more of an issue with how l2pop
  was designed. It's very tightly coupled to having agents managed by
  Neutron via RPC, which shouldn't be necessary when its primary purpose is
  to establish endpoints for overlay tunnels.
 
 
   So why not agent based? Neutron shouldn't be treated as just a resource
   store; built-in backends naturally need things like l2pop and dvr for
   distributed, dynamic topology control, so we can't say that such a part
   is tightly coupled.

   On the contrary, 3rd-party backends should adapt themselves to be
   integrated into Neutron as thinly as they can, focusing on backend device
   control rather than re-implementing core service logic that duplicates
   Neutron's. BTW, ofagent is a good example of this style.
 
 
 
 
   1. https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
   2. https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326
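
For reference, the binding order described above comes straight from the
ml2 configuration; a minimal, illustrative ml2_conf.ini snippet (values are
examples only, not a recommendation):

[ml2]
# Drivers attempt port binding in list order: openvswitch binds the
# ports its agents can handle, opendaylight picks up the leftovers.
mechanism_drivers = openvswitch,opendaylight
tenant_network_types = vxlan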
 
 
  On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:
 
 
 
 
  On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:
 
   I think that open source is not the only factor; it's about built-in
   vs. 3rd-party backends. Built-in must be open source, but open source is
   not necessarily built-in. To my mind, the current OVS and linuxbridge
   drivers are built-in, but the shim RESTful proxies for all kinds of sdn
   controllers should be 3rd-party, for they keep all the virtual networking
   data model and service logic in their own places, using the Neutron API
   just as the NB shell (they can't even co-work with the built-in l2pop
   driver for vxlan/gre network types today).
 
 
   I understand the point you are trying to make, but this blanket
   statement about the data model of drivers/plugins with REST backends is
   wrong.

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-26 Thread loy wolfe
Forwarded from another thread discussing the incubator:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction between
 a vendor plugin and an open source plugin though?


I think that open source is not the only factor; it's about built-in vs.
3rd-party backends. Built-in must be open source, but open source is not
necessarily built-in. To my mind, the current OVS and linuxbridge drivers are
built-in, but the shim RESTful proxies for all kinds of sdn controllers
should be 3rd-party, for they keep all the virtual networking data model and
service logic in their own places, using the Neutron API just as the NB shell
(they can't even co-work with the built-in l2pop driver for vxlan/gre network
types today).

As for Snabb or DPDKOVS (they also plan to support the official qemu
vhost-user), or other similar contributions: if one or two of them win the
war of the high-performance userspace vswitch and attract broad common
interest, they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like
 a vendor plugin but is actually completely open source. The development is
 driven by end-user organisations who want to make the standard upstream
 Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam, but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if there
 is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke



Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-26 Thread Kevin Benton
I think that open source is not the only factor; it's about built-in vs.
3rd-party backends. Built-in must be open source, but open source is not
necessarily built-in. To my mind, the current OVS and linuxbridge drivers are
built-in, but the shim RESTful proxies for all kinds of sdn controllers
should be 3rd-party, for they keep all the virtual networking data model and
service logic in their own places, using the Neutron API just as the NB shell
(they can't even co-work with the built-in l2pop driver for vxlan/gre network
types today).


I understand the point you are trying to make, but this blanket statement
about the data model of drivers/plugins with REST backends is wrong. Look
at the ODL mechanism driver for a counter-example.[1] The data is still
stored in Neutron and all of the semantics of the API are maintained. The
l2pop driver is there to deal with decentralized overlays, so I'm not sure how
its interoperability with the ODL driver is relevant.

1. https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py
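
For readers unfamiliar with that driver, the pattern boils down to
something like the sketch below (the class, method body, and controller
URL are illustrative only, not the actual mechanism_odl.py code): Neutron
commits the resource to its own DB first, and the driver merely mirrors it
to the backend controller in the postcommit hook.

import json
import requests

CONTROLLER_URL = 'http://odl.example.com:8080/controller/nb/v2/neutron'


class RestProxyDriver(object):
    """Toy REST-proxy mechanism driver: Neutron stays the source of
    truth; committed state is mirrored to an external controller."""

    def create_network_postcommit(self, context):
        # context.current is the network dict already persisted in the
        # Neutron DB by the time any postcommit hook runs.
        self._push('networks', {'network': context.current})

    def _push(self, resource, body):
        requests.post('%s/%s' % (CONTROLLER_URL, resource),
                      data=json.dumps(body),
                      headers={'Content-Type': 'application/json'})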



On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that open source is not the only factor; it's about built-in vs.
 3rd-party backends. Built-in must be open source, but open source is not
 necessarily built-in. To my mind, the current OVS and linuxbridge drivers
 are built-in, but the shim RESTful proxies for all kinds of sdn controllers
 should be 3rd-party, for they keep all the virtual networking data model
 and service logic in their own places, using the Neutron API just as the NB
 shell (they can't even co-work with the built-in l2pop driver for vxlan/gre
 network types today).

 As for Snabb or DPDKOVS (they also plan to support the official qemu
 vhost-user), or other similar contributions: if one or two of them win the
 war of the high-performance userspace vswitch and attract broad common
 interest, they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks like
 a vendor plugin but is actually completely open source. The development is
 driven by end-user organisations who want to make the standard upstream
 Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam, but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke





-- 
Kevin Benton


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-26 Thread loy wolfe
On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

  I think that open source is not the only factor; it's about built-in
  vs. 3rd-party backends. Built-in must be open source, but open source is
  not necessarily built-in. To my mind, the current OVS and linuxbridge
  drivers are built-in, but the shim RESTful proxies for all kinds of sdn
  controllers should be 3rd-party, for they keep all the virtual networking
  data model and service logic in their own places, using the Neutron API
  just as the NB shell (they can't even co-work with the built-in l2pop
  driver for vxlan/gre network types today).


 I understand the point you are trying to make, but this blanket statement
 about the data model of drivers/plugins with REST backends is wrong. Look
 at the ODL mechanism driver for a counter-example.[1] The data is still
 stored in Neutron and all of the semantics of the API are maintained. The
  l2pop driver is there to deal with decentralized overlays, so I'm not sure how
 its interoperability with the ODL driver is relevant.


If we create a vxlan network, can we then bind some ports to the built-in ovs
driver and other ports to the ODL driver? The linux bridge agent, ovs agent,
and ofagent can co-exist in the same vxlan network under the common l2pop
mechanism. Given that scenario, I'm not sure whether ODL can simply join them
in a heterogeneous multi-backend architecture, or has to work exclusively and
take over all the functionality.
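
For concreteness, heterogeneous binding on one network would look something
like this at the API level (all values invented; the ODL driver historically
also reported vif_type 'ovs'):

# Illustrative only: two ports on the same vxlan network, bound by
# different mechanism drivers on different hosts.
network = {'id': 'net-1',
           'provider:network_type': 'vxlan',
           'provider:segmentation_id': 1001}

ports = [
    {'id': 'port-a', 'network_id': 'net-1',
     'binding:host_id': 'compute-1',   # host running the OVS agent
     'binding:vif_type': 'ovs'},       # bound by the openvswitch MD
    {'id': 'port-b', 'network_id': 'net-1',
     'binding:host_id': 'compute-2',   # host managed by the controller
     'binding:vif_type': 'ovs'},       # bound by the ODL MD
]

# ML2 is happy with both bindings; the open question in this thread is at
# the dataplane level: without a shared l2pop-like mechanism, nothing tells
# each backend about the other's VTEPs for segment 1001.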



  1. https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that open source is not the only factor; it's about built-in vs.
 3rd-party backends. Built-in must be open source, but open source is not
 necessarily built-in. To my mind, the current OVS and linuxbridge drivers
 are built-in, but the shim RESTful proxies for all kinds of sdn controllers
 should be 3rd-party, for they keep all the virtual networking data model
 and service logic in their own places, using the Neutron API just as the NB
 shell (they can't even co-work with the built-in l2pop driver for vxlan/gre
 network types today).

 As for Snabb or DPDKOVS (they also plan to support the official qemu
 vhost-user), or other similar contributions: if one or two of them win the
 war of the high-performance userspace vswitch and attract broad common
 interest, they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam, but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. incubation.

 Cheers,
 -Luke





 --
 Kevin Benton




