Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Mathieu Rohon
Hi

On Thu, Jan 16, 2014 at 11:27 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi Bob, Kyle

 I pushed (A) https://review.openstack.org/#/c/67281/.
 So could you review it?

 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 03:13 PM, Kyle Mestery wrote:

 On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Amir

 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,

 I just want to make sure I understand the plan and its consequences. I’m 
 on board with the YAGNI principle of hardwiring mechanism drivers to 
 return their firewall_driver types for now.

 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it 
 correct to say: we’ll need to implement a method such that the ML2 
 mechanism driver is aware of its agents and each of the agents' 
 configured firewall_driver? i.e. additional RPC communication?

 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the 
 vif_security for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to 
 server and store it in the agents_db table

 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent 
 firewall_driver instances in a single agent. i.e. don’t statically 
 configure firewall_driver in the agent, but let the MD choose the 
 firewall_driver for the port based on what firewall_drivers the agent 
 supports.

 I don't see the need for anything that complex, although it could
 certainly be done in any MD+agent that needed it.

 I personally feel statically configuring a firewall driver for an L2
 agent is sufficient right now, and all ports handled by that agent will
 use that firewall driver.

 Clearly, different kinds of L2 agents that coexist within a deployment
 may use different firewall drivers. For example, linuxbridge-agent might
 use iptables-firewall-driver, openvswitch-agent might use
 ovs-firewall-driver, and hyperv-agent might use something else.

 I can also imagine cases where different instances of the same kind of
 L2 agent on different nodes might use different firewall drivers. Just
 as a hypothetical example, let's say that the ovs-firewall-driver
 requires new OVS features (maybe connection tracking). A deployment
 might have this new OVS feature available on some of its nodes, but not
 on others. It could be useful to configure openvswitch-agent on the
 nodes with the new OVS version to use ovs-firewall-driver, and configure
 openvswitch-agent on the nodes without the new OVS version to use
 iptables-firewall-driver. That kind of flexibility seems best supported
 by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
 each node, which is what we currently do.


 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host.

 I suggested a hypothetical use case above. Not sure how important it is,
 but I'm hesitant to rule it out without good reason.

 Our community resources are limited, so we should focus on specific use cases
 and functionality.
 If there is no strong supporter for this use case, we shouldn't do it.
 We should take the simplest implementation for our focused use case.

 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.

 This is certainly one way the MD+agent combination could do it. It would
 require some RPC to transmit the choice of driver or mode to the agent.
 But I really don't think the MD and server have any business worrying
 about which firewall driver class runs in the L2 agent. Theoretically,
 the agent could be written in java;-). And don't forget that users may
 want to plug in a custom firewall driver class instead.

 I think these are the options, in my descending order of current preference:

 1) Configure firewall_driver only in the agent and pass vif_security
 from the agent to the server. Each L2 agent gets the vif_security value
 from its configured driver and includes it in the agents_db RPC data.
 The MD copies the vif_security value from the agents_db to the port
 dictionary.

 2) Configure firewall_driver only in the agent but hardwire the
 vif_security value for each MD. This is a reasonable short-term solution
 until we actually have multiple firewall drivers that can work with a
 single MD+agent.

 3) Configure firewall_driver only in the agent and configure the
 vif_security value for each MD in the server. This is a slight
 improvement on #2 but doesn't handle the use case above. It seems more
 complicated and error prone for the user than #1.

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Ian Wells
On 20 January 2014 10:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 With such an architecture, we wouldn't have to tell neutron about
 vif_security or vif_type when it creates a port. When Neutron gets
 called with port_create, it should only return the tap it created.


Not entirely true.  Not every libvirt port is a tap; if you're doing things
with PCI passthrough attachment you want different libvirt configuration
(and, in this instance, also different Xen and everything else
configuration), and you still need vif_type to distinguish.  You just don't
need 101 values for 'this is a *special and unique* sort of software
bridge'.
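
A rough Python sketch of the point (purely illustrative; the vif_type values
and the mapping below are assumptions, not Nova's actual code):

    def libvirt_config_style(vif_type):
        # software switch attachments all reduce to "plug a tap-like device"
        if vif_type in ('ovs', 'bridge'):
            return 'tap-device'
        # PCI passthrough / SR-IOV needs a completely different libvirt
        # interface configuration, so vif_type still has to distinguish it
        if vif_type in ('hostdev', 'direct'):
            return 'pci-passthrough'
        raise ValueError('unhandled vif_type: %s' % vif_type)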

 I don't know if such a proposal is reasonable since I can't find good
 information about the ability of libvirt to use an already created
 tap when it creates a VM. It seems to be usable with KVM.
 But I would love to have feedback from the community on this
 architecture. Maybe it has already been discussed on the ML, so
 please give me a pointer.


libvirt will attach to many things, but I'm damned if I can work out if it
will attach to a tap, either.

To my mind, it would make that much more sense if Neutron created,
networked and firewalled a tap and returned it completely set up (versus
now, where the VM can start with a half-configured set of separation and
firewall rules that get patched up asynchronously).
-- 
Ian.


Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Jay Pipes
On Mon, 2014-01-20 at 20:43 +0100, Ian Wells wrote:
 To my mind, it would make that much more sense if Neutron created,
 networked and firewalled a tap and returned it completely set up
 (versus now, where the VM can start with a half-configured set of
 separation and firewall rules that get patched up asynchronously).

Amen.

-jay





Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Robert Kukura
On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,
 
 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.

Agreed. I fully support proposed fix 1, adding enable_security_group
config, at least for ml2. I'm not sure whether making this sort of
change to the openvswitch or linuxbridge plugins at this stage is needed.


 Enabling security groups should be a plugin/MD decision, not a driver decision.

I'm not so sure I support proposed fix 2, removing firewall_driver
configuration. I think with proposed fix 1, firewall_driver becomes an
agent-only configuration variable, which seems fine to me, at least for
now. The people working on ovs-firewall-driver need something like this
to choose between their new driver and the iptables driver. Each L2
agent could obviously revisit this later if needed.
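
A minimal oslo.config sketch of proposed fix 1 plus an agent-only
firewall_driver option, as discussed here (option names and defaults are
assumptions from this thread, not merged code):

    from oslo.config import cfg

    security_group_opts = [
        # proposed fix 1: a plugin/MD-level switch for security groups
        cfg.BoolOpt('enable_security_group', default=True,
                    help='Whether the plugin/MD applies security groups.'),
        # firewall_driver stays an agent-only option under this plan
        cfg.StrOpt('firewall_driver',
                   help='Firewall driver class loaded by the L2 agent.'),
    ]
    cfg.CONF.register_opts(security_group_opts, 'securitygroup')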

 
 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.

I'm not convinced to support proposed fix 3, basing ml2's vif_security
on the value of vif_type. It seems to me that if vif_type was all that
determines how nova handles security groups, there would be no need for
either the old capabilities or new vif_security port attribute.

I think each ML2 bound MechanismDriver should be able to supply whatever
vif_security (or capabilities) value it needs. It should be free to
determine that however it wants. It could be made configurable on the
server-side as Mathieu suggests below, or could be kept configurable in
the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
the server as I have previously suggested.

As an initial step, until we really have multiple firewall drivers to
choose from, I think we can just hardwire each agent-based
MechanismDriver to return the correct vif_security value for its normal
firewall driver, as we currently do for the capabilities attribute.

Also note that I really like the extend_port_dict() MechanismDriver
methods in Nachi's current patch set. This is a much nicer way for the
bound MechanismDriver to return binding-specific attributes than what
ml2 currently does for vif_type and capabilities. I'm working on a patch
taking that part of Nachi's code, fixing a few things, and extending it
to handle the vif_type attribute as well as the current capabilities
attribute. I'm hoping to post at least a WIP version of this today.

I do support hardwiring the other plugins to return specific
vif_security values, but those values may need to depend on the value of
enable_security_group from proposal 1.

-Bob

 Once OVSFirewallDriver is available, the firewall drivers that
 the operator wants to use should be in an MD config file/section, and the
 ovs MD could bind one of the firewall drivers during
 port_create/update/get.
 
 Best,
 Mathieu
 
 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
 so we need the vif_security port binding to fix this
 (https://review.openstack.org/#/c/21946/)

 We discussed the architecture for ML2 at the ML2 weekly meetings, and
 I want to continue the discussion here.

 Here is my proposal for how to fix it.

 https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p

 Best
 Nachi





Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Nachi Ueno
Hi Mathieu, Bob

Thank you for your reply
OK let's do (A) - (C) for now.

(A) Remove firewall_driver from server side
 Remove Noop -- I'll write patch for this

(B) update ML2 with extend_port_dict -- Bob will push new review for this

(C) Fix vif_security patch using (1) and (2). -- I'll update the
patch after (A) and (B) merged
 # config is hardwired for each mech drivers for now

(Optional D) Rethink firewall_driver config in the agent





2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,

 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.

 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
 change to the openvswitch or linuxbridge plugins at this stage is needed.


 Enabling security groups should be a plugin/MD decision, not a driver 
 decision.

 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.


 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.

 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.

 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggests below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.

 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.

 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.

 I do support hardwiring the other plugins to return specific
 vif_security values, but those values may need to depend on the value of
 enable_security_group from proposal 1.

 -Bob

 Once OVSFirewallDriver is available, the firewall drivers that
 the operator wants to use should be in an MD config file/section, and the
 ovs MD could bind one of the firewall drivers during
 port_create/update/get.

 Best,
 Mathieu

 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
 so we need the vif_security port binding to fix this
 (https://review.openstack.org/#/c/21946/)

 We discussed the architecture for ML2 at the ML2 weekly meetings, and
 I want to continue the discussion here.

 Here is my proposal for how to fix it.

 https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p

 Best
 Nachi





Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Amir Sadoughi
Hi all,

I just want to make sure I understand the plan and its consequences. I’m on 
board with the YAGNI principle of hardwiring mechanism drivers to return their 
firewall_driver types for now. 

However, after (A), (B), and (C) are completed, to allow for Open vSwitch-based 
security groups (blueprint ovs-firewall-driver) is it correct to say: we’ll 
need to implement a method such that the ML2 mechanism driver is aware of its 
agents and each of the agents' configured firewall_driver? i.e. additional RPC 
communication?

From yesterday’s meeting: 
http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

16:44:17 rkukura I've suggested that the L2 agent could get the vif_security 
info from its firewall_driver, and include this in its agents_db info
16:44:39 rkukura then the bound MD would return this as the vif_security for 
the port
16:45:47 rkukura existing agents_db RPC would send it from agent to server 
and store it in the agents_db table
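
A minimal sketch of the agent-side half of that suggestion, with assumed
names (the real wiring would be defined by whatever patch implements it):

    def build_agent_state(firewall_driver):
        # the L2 agent already reports a state dict via the agents_db RPC;
        # the idea is to fold the driver's vif_security into it
        return {
            'binary': 'neutron-openvswitch-agent',
            'agent_type': 'Open vSwitch agent',
            'configurations': {
                'vif_security': getattr(firewall_driver, 'vif_security',
                                        {'port_filter': True}),
            },
        }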

Does the above suggestion change with the plan as-is now? From Nachi’s 
response, it seemed like maybe we should support concurrent firewall_driver 
instances in a single agent. i.e. don’t statically configure firewall_driver in 
the agent, but let the MD choose the firewall_driver for the port based on what 
firewall_drivers the agent supports. 

Thanks,

Amir


On Jan 16, 2014, at 11:42 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mathieu, Bob
 
 Thank you for your reply
 OK let's do (A) - (C) for now.
 
 (A) Remove firewall_driver from server side
 Remove Noop -- I'll write patch for this
 
 (B) update ML2 with extend_port_dict -- Bob will push new review for this
 
 (C) Fix vif_security patch using (1) and (2). -- I'll update the
 patch after (A) and (B) merged
 # config is hardwired for each mech drivers for now
 
 (Optional D) Rethink firewall_driver config in the agent
 
 
 
 
 
 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,
 
 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.
 
 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
 change to the openvswitch or linuxbridge plugins at this stage is needed.
 
 
 Enabling security groups should be a plugin/MD decision, not a driver 
 decision.
 
 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.
 
 
 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.
 
 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.
 
 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggests below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.
 
 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.
 
 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.
 
 I do support hardwiring the other plugins to return specific
 vif_security values, but those values may need to depend on the value of
 enable_security_group from proposal 1.
 
 -Bob
 
 Once OVSFirewallDriver is available, the firewall drivers that
 the operator wants to use should be in an MD config file/section, and the
 ovs MD could bind one of the firewall drivers during
 port_create/update/get.
 
 Best,
 Mathieu
 
 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks
 
 Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
 so we need vif_security port 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Nachi Ueno
Hi Amir

2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,

 I just want to make sure I understand the plan and its consequences. I’m on 
 board with the YAGNI principle of hardwiring mechanism drivers to return 
 their firewall_driver types for now.

 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it correct 
 to say: we’ll need to implement a method such that the ML2 mechanism driver 
 is aware of its agents and each of the agents' configured firewall_driver? 
 i.e. additional RPC communication?

 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its agents_db 
 info
 16:44:39 rkukura then the bound MD would return this as the vif_security 
 for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to server 
 and store it in the agents_db table

 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent firewall_driver 
 instances in a single agent. i.e. don’t statically configure firewall_driver 
 in the agent, but let the MD choose the firewall_driver for the port based on 
 what firewall_drivers the agent supports.

Let's say we have an OpenFlowBasedFirewallDriver and an
IptablesBasedFirewallDriver in the future.
I believe there is no use case for letting the user select such an
implementation detail per host,
so it is enough if we have a config security_group_mode=(openflow or
iptables) in the OVS MD configuration, and then update vif_security based on
this value.
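
A minimal sketch of that idea (the config section, option, and the
vif_security keys below are assumptions for illustration, not from any
merged change):

    from oslo.config import cfg

    cfg.CONF.register_opts(
        [cfg.StrOpt('security_group_mode', default='iptables',
                    help='Firewall implementation the OVS MD assumes its '
                         'agents run: iptables or openflow.')],
        'ml2_ovs')

    def vif_security_for(mode):
        # the MD would translate the configured mode into the vif_security
        # value it returns for bound ports
        if mode == 'openflow':
            return {'port_filter': True}
        return {'port_filter': True, 'hybrid_plug': True}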


 Thanks,

 Amir


 On Jan 16, 2014, at 11:42 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mathieu, Bob

 Thank you for your reply
 OK let's do (A) - (C) for now.

 (A) Remove firewall_driver from server side
 Remove Noop -- I'll write patch for this

 (B) update ML2 with extend_port_dict -- Bob will push new review for this

 (C) Fix vif_security patch using (1) and (2). -- I'll update the
 patch after (A) and (B) merged
 # config is hardwired for each mech drivers for now

 (Optional D) Rethink firewall_driver config in the agent





 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,

 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.

 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
 change to the openvswitch or linuxbridge plugins at this stage is needed.


 Enabling security groups should be a plugin/MD decision, not a driver 
 decision.

 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.


 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.

 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.

 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggests below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.

 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.

 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.

 I do support hardwiring the other plugins to return specific
 vif_security values, but those values may need to depend on the value of
 enable_security_group from proposal 1.

 -Bob

 Once OVSfirewallDriver 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Kyle Mestery

On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Amir
 
 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,
 
 I just want to make sure I understand the plan and its consequences. I’m on 
 board with the YAGNI principle of hardwiring mechanism drivers to return 
 their firewall_driver types for now.
 
 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it correct 
 to say: we’ll need to implement a method such that the ML2 mechanism driver 
 is aware of its agents and each of the agents' configured firewall_driver? 
 i.e. additional RPC communication?
 
 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html
 
 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the vif_security 
 for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to server 
 and store it in the agents_db table
 
 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent firewall_driver 
 instances in a single agent. i.e. don’t statically configure firewall_driver 
 in the agent, but let the MD choose the firewall_driver for the port based 
 on what firewall_drivers the agent supports.
 
 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host,
 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.
 
I agree with your thinking here Nachi. Leaving this as a global
configuration makes the most sense.

 
 Thanks,
 
 Amir
 
 
 On Jan 16, 2014, at 11:42 AM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Mathieu, Bob
 
 Thank you for your reply
 OK let's do (A) - (C) for now.
 
 (A) Remove firewall_driver from server side
Remove Noop -- I'll write patch for this
 
 (B) update ML2 with extend_port_dict -- Bob will push new review for this
 
 (C) Fix vif_security patch using (1) and (2). -- I'll update the
 patch after (A) and (B) merged
# config is hardwired for each mech drivers for now
 
 (Optional D) Rethink firewall_driver config in the agent
 
 
 
 
 
 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,
 
 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.
 
 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
 change to the openvswitch or linuxbridge plugins at this stage is needed.
 
 
 Enabling security groups should be a plugin/MD decision, not a driver 
 decision.
 
 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.
 
 
 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.
 
 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.
 
 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggests below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.
 
 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.
 
 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.
 
 I 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Amir Sadoughi
That also makes sense to me as the simplest option. Looking forward to all of 
your patches.

Thanks,

Amir

On Jan 16, 2014, at 2:13 PM, Kyle Mestery mest...@siliconloons.com wrote:


On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:

Hi Amir

2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
Hi all,

I just want to make sure I understand the plan and its consequences. I’m on 
board with the YAGNI principle of hardwiring mechanism drivers to return their 
firewall_driver types for now.

However, after (A), (B), and (C) are completed, to allow for Open vSwitch-based 
security groups (blueprint ovs-firewall-driver) is it correct to say: we’ll 
need to implement a method such that the ML2 mechanism driver is aware of its 
agents and each of the agents' configured firewall_driver? i.e. additional RPC 
communication?

From yesterday’s meeting: 
http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

16:44:17 rkukura I've suggested that the L2 agent could get the vif_security 
info from its firewall_driver, and include this in its agents_db info
16:44:39 rkukura then the bound MD would return this as the vif_security for 
the port
16:45:47 rkukura existing agents_db RPC would send it from agent to server 
and store it in the agents_db table

Does the above suggestion change with the plan as-is now? From Nachi’s 
response, it seemed like maybe we should support concurrent firewall_driver 
instances in a single agent. i.e. don’t statically configure firewall_driver in 
the agent, but let the MD choose the firewall_driver for the port based on what 
firewall_drivers the agent supports.

Let's say we have an OpenFlowBasedFirewallDriver and an
IptablesBasedFirewallDriver in the future.
I believe there is no use case for letting the user select such an
implementation detail per host,
so it is enough if we have a config security_group_mode=(openflow or
iptables) in the OVS MD configuration, and then update vif_security based on
this value.

I agree with your thinking here Nachi. Leaving this as a global
configuration makes the most sense.


Thanks,

Amir


On Jan 16, 2014, at 11:42 AM, Nachi Ueno na...@ntti3.com wrote:

Hi Mathieu, Bob

Thank you for your reply
OK let's do (A) - (C) for now.

(A) Remove firewall_driver from server side
  Remove Noop -- I'll write patch for this

(B) update ML2 with extend_port_dict -- Bob will push new review for this

(C) Fix vif_security patch using (1) and (2). -- I'll update the
patch after (A) and (B) merged
  # config is hardwired for each mech drivers for now

(Optional D) Rethink firewall_driver config in the agent





2014/1/16 Robert Kukura rkuk...@redhat.com:
On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
Hi,

your proposals make sense. Having the firewall driver configuring so
many things looks pretty strange.

Agreed. I fully support proposed fix 1, adding enable_security_group
config, at least for ml2. I'm not sure whether making this sort of
change to the openvswitch or linuxbridge plugins at this stage is needed.


Enabling security groups should be a plugin/MD decision, not a driver decision.

I'm not so sure I support proposed fix 2, removing firewall_driver
configuration. I think with proposed fix 1, firewall_driver becomes an
agent-only configuration variable, which seems fine to me, at least for
now. The people working on ovs-firewall-driver need something like this
to choose between their new driver and the iptables driver. Each L2
agent could obviously revisit this later if needed.


For ML2, in a first implementation, having vif security based on
vif_type looks good too.

I'm not convinced to support proposed fix 3, basing ml2's vif_security
on the value of vif_type. It seems to me that if vif_type was all that
determines how nova handles security groups, there would be no need for
either the old capabilities or new vif_security port attribute.

I think each ML2 bound MechanismDriver should be able to supply whatever
vif_security (or capabilities) value it needs. It should be free to
determine that however it wants. It could be made configurable on the
server-side as Mathieu suggests below, or could be kept configurable in
the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
the server as I have previously suggested.

As an initial step, until we really have multiple firewall drivers to
choose from, I think we can just hardwire each agent-based
MechanismDriver to return the correct vif_security value for its normal
firewall driver, as we currently do for the capabilities attribute.

Also note that I really like the extend_port_dict() MechanismDriver
methods in Nachi's current patch set. This is a much nicer way for the
bound MechanismDriver to return binding-specific attributes than what
ml2 currently does for vif_type and capabilities. I'm working on a patch
taking that 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Robert Kukura
On 01/16/2014 03:13 PM, Kyle Mestery wrote:
 
 On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Amir

 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,

 I just want to make sure I understand the plan and its consequences. I’m on 
 board with the YAGNI principle of hardwiring mechanism drivers to return 
 their firewall_driver types for now.

 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it correct 
 to say: we’ll need to implement a method such that the ML2 mechanism driver 
 is aware of its agents and each of the agents' configured firewall_driver? 
 i.e. additional RPC communication?

 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the vif_security 
 for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to 
 server and store it in the agents_db table

 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent firewall_driver 
 instances in a single agent. i.e. don’t statically configure 
 firewall_driver in the agent, but let the MD choose the firewall_driver for 
 the port based on what firewall_drivers the agent supports.

I don't see the need for anything that complex, although it could
certainly be done in any MD+agent that needed it.

I personally feel statically configuring a firewall driver for an L2
agent is sufficient right now, and all ports handled by that agent will
use that firewall driver.

Clearly, different kinds of L2 agents that coexist within a deployment
may use different firewall drivers. For example, linuxbridge-agent might
use iptables-firewall-driver, openvswitch-agent might use
ovs-firewall-driver, and hyperv-agent might use something else.

I can also imagine cases where different instances of the same kind of
L2 agent on different nodes might use different firewall drivers. Just
as a hypothetical example, let's say that the ovs-firewall-driver
requires new OVS features (maybe connection tracking). A deployment
might have this new OVS feature available on some of its nodes, but not
on others. It could be useful to configure openvswitch-agent on the
nodes with the new OVS version to use ovs-firewall-driver, and configure
openvswitch-agent on the nodes without the new OVS version to use
iptables-firewall-driver. That kind of flexibility seems best supported
by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
each node, which is what we currently do.
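
For illustration, per-node ovs_neutron_plugin.ini along those lines (the
iptables hybrid driver class path is the existing one; the OVS firewall
class name is hypothetical, since that driver doesn't exist yet):

    # node with the newer OVS version
    [securitygroup]
    firewall_driver = neutron.agent.linux.ovs_firewall.OVSFirewallDriver

    # node without it
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver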


 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host.

I suggested a hypothetical use case above. Not sure how important it is,
but I'm hesitant to rule it out without good reason.

 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.

This is certainly one way the MD+agent combination could do it. It would
require some RPC to transmit the choice of driver or mode to the agent.
But I really don't think the MD and server have any business worrying
about which firewall driver class runs in the L2 agent. Theoretically,
the agent could be written in java;-). And don't forget that users may
want to plug in a custom firewall driver class instead.

I think these are the options, in my descending order of current preference:

1) Configure firewall_driver only in the agent and pass vif_security
from the agent to the server. Each L2 agent gets the vif_security value
from its configured driver and includes it in the agents_db RPC data.
The MD copies the vif_security value from the agents_db to the port
dictionary.

2) Configure firewall_driver only in the agent but hardwire the
vif_security value for each MD. This is a reasonable short-term solution
until we actually have multiple firewall drivers that can work with a
single MD+agent.

3) Configure firewall_driver only in the agent and configure the
vif_security value for each MD in the server. This is a slight
improvement on #2 but doesn't handle the use case above. It seems more
complicated and error prone for the user than #1.

4) Configure the firewall_driver or security_group_mode for each MD in
the server. This would mean some new RPC is needed for the agent to
fetch this from the server at startup. This could be problematic if
the server isn't running when the L2 agent starts.
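
A minimal sketch of the server-side half of option 1, with assumed method
and field names rather than actual ml2 code:

    def copy_vif_security_to_port(port, agent_db_entry):
        # the bound MD copies whatever the binding agent reported via the
        # agents_db RPC into the port dictionary it returns
        configurations = agent_db_entry.get('configurations', {})
        port['binding:vif_security'] = configurations.get(
            'vif_security', {'port_filter': False})
        return port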


 I agree with your thinking here Nachi. Leaving this as a global
 configuration makes the most sense.
 

 Thanks,

 Amir


 On Jan 16, 2014, at 11:42 AM, Nachi Ueno 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Nachi Ueno
Hi Bob, Kyle

I pushed (A) https://review.openstack.org/#/c/67281/.
So could you review it?

2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 03:13 PM, Kyle Mestery wrote:

 On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Amir

 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,

 I just want to make sure I understand the plan and its consequences. I’m 
 on board with the YAGNI principle of hardwiring mechanism drivers to 
 return their firewall_driver types for now.

 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it 
 correct to say: we’ll need to implement a method such that the ML2 
 mechanism driver is aware of its agents and each of the agents' configured 
 firewall_driver? i.e. additional RPC communication?

 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the vif_security 
 for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to 
 server and store it in the agents_db table

 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent 
 firewall_driver instances in a single agent. i.e. don’t statically 
 configure firewall_driver in the agent, but let the MD choose the 
 firewall_driver for the port based on what firewall_drivers the agent 
 supports.

 I don't see the need for anything that complex, although it could
 certainly be done in any MD+agent that needed it.

 I personally feel statically configuring a firewall driver for an L2
 agent is sufficient right now, and all ports handled by that agent will
 use that firewall driver.

 Clearly, different kinds of L2 agents that coexist within a deployment
 may use different firewall drivers. For example, linuxbridge-agent might
 use iptables-firewall-driver, openvswitch-agent might use
 ovs-firewall-driver, and hyperv-agent might use something else.

 I can also imagine cases where different instances of the same kind of
 L2 agent on different nodes might use different firewall drivers. Just
 as a hypothetical example, let's say that the ovs-firewall-driver
 requires new OVS features (maybe connection tracking). A deployment
 might have this new OVS feature available on some of its nodes, but not
 on others. It could be useful to configure openvswitch-agent on the
 nodes with the new OVS version to use ovs-firewall-driver, and configure
 openvswitch-agent on the nodes without the new OVS version to use
 iptables-firewall-driver. That kind of flexibility seems best supported
 by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
 each node, which is what we currently do.


 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host.

 I suggested a hypothetical use case above. Not sure how important it is,
 but I'm hesitant to rule it out without good reason.

Our community resources are limited, so we should focus on specific use cases
and functionality.
If there is no strong supporter for this use case, we shouldn't do it.
We should take the simplest implementation for our focused use case.

 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.

 This is certainly one way the MD+agent combination could do it. It would
 require some RPC to transmit the choice of driver or mode to the agent.
 But I really don't think the MD and server have any business worrying
 about which firewall driver class runs in the L2 agent. Theoretically,
 the agent could be written in java;-). And don't forget that users may
 want to plug in a custom firewall driver class instead.

 I think these are the options, in my descending order of current preference:

 1) Configure firewall_driver only in the agent and pass vif_security
 from the agent to the server. Each L2 agent gets the vif_security value
 from its configured driver and includes it in the agents_db RPC data.
 The MD copies the vif_security value from the agents_db to the port
 dictionary.

 2) Configure firewall_driver only in the agent but hardwire the
 vif_security value for each MD. This is a reasonable short-term solution
 until we actually have multiple firewall drivers that can work with a
 single MD+agent.

 3) Configure firewall_driver only in the agent and configure the
 vif_security value for each MD in the server. This is a slight
 improvement on #2 but doesn't handle the use case above. It seems more
 complicated and error prone for the user than #1.

 4) Configure the 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Kyle Mestery
On Jan 16, 2014, at 4:27 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Bob, Kyle
 
 I pushed (A) https://review.openstack.org/#/c/67281/.
 So could you review it?
 
Just did, looks good Nachi, thanks!

 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 03:13 PM, Kyle Mestery wrote:
 
 On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Amir
 
 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,
 
 I just want to make sure I understand the plan and its consequences. I’m 
 on board with the YAGNI principle of hardwiring mechanism drivers to 
 return their firewall_driver types for now.
 
 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it 
 correct to say: we’ll need to implement a method such that the ML2 
 mechanism driver is aware of its agents and each of the agents' 
 configured firewall_driver? i.e. additional RPC communication?
 
 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html
 
 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the 
 vif_security for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to 
 server and store it in the agents_db table
 
 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent 
 firewall_driver instances in a single agent. i.e. don’t statically 
 configure firewall_driver in the agent, but let the MD choose the 
 firewall_driver for the port based on what firewall_drivers the agent 
 supports.
 
 I don't see the need for anything that complex, although it could
 certainly be done in any MD+agent that needed it.
 
 I personally feel statically configuring a firewall driver for an L2
 agent is sufficient right now, and all ports handled by that agent will
 use that firewall driver.
 
 Clearly, different kinds of L2 agents that coexist within a deployment
 may use different firewall drivers. For example, linuxbridge-agent might
 use iptables-firewall-driver, openvswitch-agent might use
 ovs-firewall-driver, and hyperv-agent might use something else.
 
 I can also imagine cases where different instances of the same kind of
 L2 agent on different nodes might use different firewall drivers. Just
 as a hypothetical example, let's say that the ovs-firewall-driver
 requires new OVS features (maybe connection tracking). A deployment
 might have this new OVS feature available on some of its nodes, but not
 on others. It could be useful to configure openvswitch-agent on the
 nodes with the new OVS version to use ovs-firewall-driver, and configure
 openvswitch-agent on the nodes without the new OVS version to use
 iptables-firewall-driver. That kind of flexibility seems best supported
 by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
 each node, which is what we currently do.
 
 
 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host.
 
 I suggested a hypothetical use case above. Not sure how important it is,
 but I'm hesitant to rule it out without good reason.
 
 Our community resources are limited, so we should focus on specific use cases
 and functionality.
 If there is no strong supporter for this use case, we shouldn't do it.
 We should take the simplest implementation for our focused use case.
 
 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.
 
 This is certainly one way the MD+agent combination could do it. It would
 require some RPC to transmit the choice of driver or mode to the agent.
 But I really don't think the MD and server have any business worrying
 about which firewall driver class runs in the L2 agent. Theoretically,
 the agent could be written in java;-). And don't forget that users may
 want to plug in a custom firewall driver class instead.
 
 I think these are the options, in my descending order of current preference:
 
 1) Configure firewall_driver only in the agent and pass vif_security
 from the agent to the server. Each L2 agent gets the vif_security value
 from its configured driver and includes it in the agents_db RPC data.
 The MD copies the vif_security value from the agents_db to the port
 dictionary.
 
 2) Configure firewall_driver only in the agent but hardwire the
 vif_security value for each MD. This is a reasonable short-term solution
 until we actually have multiple firewall drivers that can work with a
 single MD+agent.
 
 3) Configure firewall_driver only in the agent and configure the
 vif_security value for each MD in the server. This is a slight
 improvement on 

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Nachi Ueno
Thanks! Kyle

2014/1/16 Kyle Mestery mest...@siliconloons.com:
 On Jan 16, 2014, at 4:27 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Bob, Kyle

 I pushed (A) https://review.openstack.org/#/c/67281/.
 So could you review it?

 Just did, looks good Nachi, thanks!

 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 03:13 PM, Kyle Mestery wrote:

 On Jan 16, 2014, at 1:37 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Amir

 2014/1/16 Amir Sadoughi amir.sadou...@rackspace.com:
 Hi all,

 I just want to make sure I understand the plan and its consequences. I’m 
 on board with the YAGNI principle of hardwiring mechanism drivers to 
 return their firewall_driver types for now.

 However, after (A), (B), and (C) are completed, to allow for Open 
 vSwitch-based security groups (blueprint ovs-firewall-driver) is it 
 correct to say: we’ll need to implement a method such that the ML2 
 mechanism driver is aware of its agents and each of the agents' 
 configured firewall_driver? i.e. additional RPC communication?

 From yesterday’s meeting: 
 http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

 16:44:17 rkukura I've suggested that the L2 agent could get the 
 vif_security info from its firewall_driver, and include this in its 
 agents_db info
 16:44:39 rkukura then the bound MD would return this as the 
 vif_security for the port
 16:45:47 rkukura existing agents_db RPC would send it from agent to 
 server and store it in the agents_db table

 Does the above suggestion change with the plan as-is now? From Nachi’s 
 response, it seemed like maybe we should support concurrent 
 firewall_driver instances in a single agent. i.e. don’t statically 
 configure firewall_driver in the agent, but let the MD choose the 
 firewall_driver for the port based on what firewall_drivers the agent 
 supports.

 I don't see the need for anything that complex, although it could
 certainly be done in any MD+agent that needed it.

 I personally feel statically configuring a firewall driver for an L2
 agent is sufficient right now, and all ports handled by that agent will
 use that firewall driver.

 Clearly, different kinds of L2 agents that coexist within a deployment
 may use different firewall drivers. For example, linuxbridge-agent might
 use iptables-firewall-driver, openvswitch-agent might use
 ovs-firewall-driver, and hyperv-agent might use something else.

 I can also imagine cases where different instances of the same kind of
 L2 agent on different nodes might use different firewall drivers. Just
 as a hypothetical example, let's say that the ovs-firewall-driver
 requires new OVS features (maybe connection tracking). A deployment
 might have this new OVS feature available on some of its nodes, but not
 on others. It could be useful to configure openvswitch-agent on the
 nodes with the new OVS version to use ovs-firewall-driver, and configure
 openvswitch-agent on the nodes without the new OVS version to use
 iptables-firewall-driver. That kind of flexibility seems best supported
 by simply configuring the firewall driver in /ovs_neutron_plugin.ini on
 each node, which is what we currently do.


 Let's say we have an OpenFlowBasedFirewallDriver and an
 IptablesBasedFirewallDriver in the future.
 I believe there is no use case for letting the user select such an
 implementation detail per host.

 I suggested a hypothetical use case above. Not sure how important it is,
 but I'm hesitant to rule it out without good reason.

 Our community resources are limited, so we should focus on specific use cases
 and functionality.
 If there is no strong supporter for this use case, we shouldn't do it.
 We should take the simplest implementation for our focused use case.

 so it is enough if we have a config security_group_mode=(openflow or
 iptables) in the OVS MD configuration, and then update vif_security based on
 this value.

 This is certainly one way the MD+agent combination could do it. It would
 require some RPC to transmit the choice of driver or mode to the agent.
 But I really don't think the MD and server have any business worrying
 about which firewall driver class runs in the L2 agent. Theoretically,
 the agent could be written in java;-). And don't forget that users may
 want to plug in a custom firewall driver class instead.

 I think these are the options, in my descending order of current preference:

 1) Configure firewall_driver only in the agent and pass vif_security
 from the agent to the server. Each L2 agent gets the vif_security value
 from its configured driver and includes it in the agents_db RPC data.
 The MD copies the vif_security value from the agents_db to the port
 dictionary.

 2) Configure firewall_driver only in the agent but hardwire the
 vif_security value for each MD. This is a reasonable short-term solution
 until we actually have multiple firewall drivers that can work with a
 single MD+agent.

 3) Configure firewall_driver only in the agent and configure the
 vif_security value for each MD in the 

[openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-15 Thread Nachi Ueno
Hi folks

Security groups for the OVS agent (ovs plugin or ML2) are currently broken,
so we need the vif_security port binding to fix this
(https://review.openstack.org/#/c/21946/)

We discussed the architecture for ML2 at the ML2 weekly meetings, and
I want to continue the discussion here.

Here is my proposal for how to fix it.

https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p

Best
Nachi
