Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-03-03 Thread Ben Pfaff
On Tue, Mar 03, 2015 at 09:53:23AM +0100, Miguel Ángel Ajo wrote:
 https://review.openstack.org/#/c/159840/1/doc/source/testing/openflow-firewall.rst
   
 
 I may need some help from the OVS experts to answer the questions from  
 henry.hly.
 
 Ben, Thomas, could you please? (let me know if you are not registered to
 the openstack review system, I could answer in your name).

I added a comment.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-03-03 Thread Miguel Ángel Ajo
https://review.openstack.org/#/c/159840/1/doc/source/testing/openflow-firewall.rst
  

I may need some help from the OVS experts to answer the questions from  
henry.hly.

Ben, Thomas, could you please? (let me know if you are not registered to
the openstack review system, I could answer in your name).


Best,
Miguel Ángel Ajo


On Friday, 27 February 2015 at 14:50, Miguel Ángel Ajo wrote:

 Ok, I moved the document here [1], and I will eventually submit another patch
 with the testing scripts when those are ready.
  
 Let’s move the discussion to the review!
  
  
 Best,
 Miguel Ángel Ajo
 [1] https://review.openstack.org/#/c/159840/
  
  
 On Friday, 27 February 2015 at 7:03, Kevin Benton wrote:
  
  Sounds promising. We'll have to evaluate it for feature parity when the 
  time comes.
   
  On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com 
  (mailto:b...@nicira.com) wrote:
   This sounds quite similar to the planned support in OVN to gateway a
   logical network to a particular VLAN on a physical port, so perhaps it
   will be sufficient.

   On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
If a port is bound with a VLAN segmentation type, it will get a VLAN id
and the name of the physical network that it corresponds to. In the
current plugin, each agent is configured with a mapping between physical
networks and OVS bridges. The agent takes the bound port information and
sets up rules to forward traffic from the VM port to the OVS bridge
corresponding to the physical network. The bridge usually then has a
physical interface added to it for the tagged traffic to use to reach
the rest of the network.
   
On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com 
(mailto:b...@nicira.com) wrote:
   
 What kind of VLAN support would you need?

 On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
  If OVN chooses not to support VLANs, we will still need the current 
  OVS
  reference anyway so it definitely won't be wasted work.
 
  On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
  majop...@redhat.com (mailto:majop...@redhat.com) wrote:
 
  
   Sharing thoughts that I was having:
  
    Maybe during the next summit it’s worth discussing the future of the
    reference agent(s); I feel we’ll be replicating a lot of work across
    OVN/OVS/RYU (ofagent) and maybe other plugins.

    I guess until OVN and its integration are ready we can’t stop, so it
    makes sense to keep development on our side. Also, having an
    independent plugin can help us iterate faster on new features, yet I
    expect that OVN will be more fluent at working with OVS and OpenFlow,
    as their designers have a very deep knowledge of OVS under the hood,
    and it’s C. ;)
  
   Best regards,
  
    On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com
    (mailto:majop...@redhat.com) wrote:

    On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
  
    Inline comments follow after this, but I wanted to respond to Brian’s
    question, which has been cut out:
    We’re talking here of doing a preliminary analysis of the networking
    performance, before writing any real code at the neutron level.

    If that looks right, then we should go into a preliminary (and
    orthogonal to iptables/LB) implementation. At that moment we will be
    able to examine the scalability of the solution with regard to
    switching OpenFlow rules, which is going to be severely affected by
    the way we handle OF rules in the bridge:
       * via OpenFlow, making the agent a “real” OF controller, with the
         current effort to use the ryu framework plugin to do that.
       * via cmdline (would be alleviated with the current rootwrap work,
         but the former would be preferred).
    Also, ipset groups can be moved into conjunctive groups in OF (thanks
    Ben Pfaff for the explanation, if you’re reading this ;-))
    Best,
    Miguel Ángel
  
  
    On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
  
   Hi,
  
   The RFC2544 with near zero packet loss is a pretty standard 
   performance
   benchmark. It is also used in the OPNFV project (
  
 https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
   ).
  
   Does this mean that OpenStack will have stateful firewalls (or 
   security
   groups)? Any other ideas planned, like ebtables type filtering?
  
    What I am proposing is to maintain the statefulness we have now
    regarding security groups 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-27 Thread Miguel Ángel Ajo
Ok, I moved the document here [1], and I will eventually submit another patch
with the testing scripts when those are ready.

Let’s move the discussion to the review!


Best,
Miguel Ángel Ajo
[1] https://review.openstack.org/#/c/159840/


On Friday, 27 February 2015 at 7:03, Kevin Benton wrote:

 Sounds promising. We'll have to evaluate it for feature parity when the time 
 comes.
  
 On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com 
 (mailto:b...@nicira.com) wrote:
  This sounds quite similar to the planned support in OVN to gateway a
  logical network to a particular VLAN on a physical port, so perhaps it
  will be sufficient.
   
  On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
    If a port is bound with a VLAN segmentation type, it will get a VLAN id
    and the name of the physical network that it corresponds to. In the
    current plugin, each agent is configured with a mapping between physical
    networks and OVS bridges. The agent takes the bound port information and
    sets up rules to forward traffic from the VM port to the OVS bridge
    corresponding to the physical network. The bridge usually then has a
    physical interface added to it for the tagged traffic to use to reach
    the rest of the network.
  
   On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com 
   (mailto:b...@nicira.com) wrote:
  
What kind of VLAN support would you need?
   
On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
 If OVN chooses not to support VLANs, we will still need the current 
 OVS
 reference anyway so it definitely won't be wasted work.

 On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
 majop...@redhat.com (mailto:majop...@redhat.com) wrote:

 
  Sharing thoughts that I was having:
 
   Maybe during the next summit it’s worth discussing the future of the
   reference agent(s); I feel we’ll be replicating a lot of work across
   OVN/OVS/RYU (ofagent) and maybe other plugins.

   I guess until OVN and its integration are ready we can’t stop, so it
   makes sense to keep development on our side. Also, having an
   independent plugin can help us iterate faster on new features, yet I
   expect that OVN will be more fluent at working with OVS and OpenFlow,
   as their designers have a very deep knowledge of OVS under the hood,
   and it’s C. ;)
 
  Best regards,
 
   On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com
   (mailto:majop...@redhat.com) wrote:

   On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
 
   Inline comments follow after this, but I wanted to respond to Brian’s
   question, which has been cut out:
   We’re talking here of doing a preliminary analysis of the networking
   performance, before writing any real code at the neutron level.

   If that looks right, then we should go into a preliminary (and
   orthogonal to iptables/LB) implementation. At that moment we will be
   able to examine the scalability of the solution with regard to
   switching OpenFlow rules, which is going to be severely affected by
   the way we handle OF rules in the bridge:
      * via OpenFlow, making the agent a “real” OF controller, with the
        current effort to use the ryu framework plugin to do that.
      * via cmdline (would be alleviated with the current rootwrap work,
        but the former would be preferred).
   Also, ipset groups can be moved into conjunctive groups in OF (thanks
   Ben Pfaff for the explanation, if you’re reading this ;-))
   Best,
   Miguel Ángel
 
 
   On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
 
  Hi,
 
  The RFC2544 with near zero packet loss is a pretty standard 
  performance
  benchmark. It is also used in the OPNFV project (
 
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
  ).
 
  Does this mean that OpenStack will have stateful firewalls (or 
  security
  groups)? Any other ideas planned, like ebtables type filtering?
 
   What I am proposing is to maintain the statefulness we have now
   regarding security groups (RELATED/ESTABLISHED connections are allowed
   back on open ports) while adding a new firewall driver working only
   with OVS+OF (no iptables or linux bridge).

   That will be possible (without auto-populating OF rules in opposite
   directions) due to the new connection tracker functionality to be
   eventually merged into OVS.
 
 
  -Tapio
 
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com)
wrote:
 
  On 02/25/2015 05:52 AM, Miguel 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Kevin Benton
Sounds promising. We'll have to evaluate it for feature parity when the
time comes.

On Thu, Feb 26, 2015 at 8:21 PM, Ben Pfaff b...@nicira.com wrote:

 This sounds quite similar to the planned support in OVN to gateway a
 logical network to a particular VLAN on a physical port, so perhaps it
 will be sufficient.

 On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
  If a port is bound with a VLAN segmentation type, it will get a VLAN id
  and the name of the physical network that it corresponds to. In the
  current plugin, each agent is configured with a mapping between physical
  networks and OVS bridges. The agent takes the bound port information and
  sets up rules to forward traffic from the VM port to the OVS bridge
  corresponding to the physical network. The bridge usually then has a
  physical interface added to it for the tagged traffic to use to reach
  the rest of the network.
 
  On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com wrote:
 
   What kind of VLAN support would you need?
  
   On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
 If OVN chooses not to support VLANs, we will still need the current
 OVS reference anyway so it definitely won't be wasted work.
   
On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
majop...@redhat.com wrote:
   

 Sharing thoughts that I was having:

  Maybe during the next summit it’s worth discussing the future of the
  reference agent(s); I feel we’ll be replicating a lot of work across
  OVN/OVS/RYU (ofagent) and maybe other plugins.

  I guess until OVN and its integration are ready we can’t stop, so it
  makes sense to keep development on our side. Also, having an
  independent plugin can help us iterate faster on new features, yet I
  expect that OVN will be more fluent at working with OVS and OpenFlow,
  as their designers have a very deep knowledge of OVS under the hood,
  and it’s C. ;)

 Best regards,

  On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:

  On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:

  Inline comments follow after this, but I wanted to respond to Brian’s
  question, which has been cut out:
  We’re talking here of doing a preliminary analysis of the networking
  performance, before writing any real code at the neutron level.

  If that looks right, then we should go into a preliminary (and
  orthogonal to iptables/LB) implementation. At that moment we will be
  able to examine the scalability of the solution with regard to
  switching OpenFlow rules, which is going to be severely affected by
  the way we handle OF rules in the bridge:
     * via OpenFlow, making the agent a “real” OF controller, with the
       current effort to use the ryu framework plugin to do that.
     * via cmdline (would be alleviated with the current rootwrap work,
       but the former would be preferred).
  Also, ipset groups can be moved into conjunctive groups in OF (thanks
  Ben Pfaff for the explanation, if you’re reading this ;-))
  Best,
  Miguel Ángel


  On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:

 Hi,

  The RFC2544 with near zero packet loss is a pretty standard
  performance benchmark. It is also used in the OPNFV project
  (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).

 Does this mean that OpenStack will have stateful firewalls (or
 security
 groups)? Any other ideas planned, like ebtables type filtering?

  What I am proposing is to maintain the statefulness we have now
  regarding security groups (RELATED/ESTABLISHED connections are
  allowed back on open ports) while adding a new firewall driver
  working only with OVS+OF (no iptables or linux bridge).

  That will be possible (without auto-populating OF rules in opposite
  directions) due to the new connection tracker functionality to be
  eventually merged into OVS.


 -Tapio

 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com
   wrote:

  On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

  I’m writing a plan/script to benchmark OVS+OF(CT) vs
  OVS+LB+iptables+ipsets, so we can make sure there’s a real difference
  before jumping into any OpenFlow security group filters when we have
  connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.

  Suggestions or ideas on what to measure are welcome; there’s an
  initial draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


 Conditions to be 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Miguel Angel Ajo Pelayo

Sharing thoughts that I was having:

Maybe during the next summit it’s worth discussing the future of the
reference agent(s); I feel we’ll be replicating a lot of work across
OVN/OVS/RYU (ofagent) and maybe other plugins.

I guess until OVN and its integration are ready we can’t stop, so it makes
sense to keep development on our side. Also, having an independent plugin
can help us iterate faster on new features, yet I expect that OVN will be
more fluent at working with OVS and OpenFlow, as their designers have
a very deep knowledge of OVS under the hood, and it’s C. ;)

Best regards,

 On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:
 
 On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
 Inline comments follow after this, but I wanted to respond to Brian’s
 question, which has been cut out:
 We’re talking here of doing a preliminary analysis of the networking
 performance, before writing any real code at the neutron level.

 If that looks right, then we should go into a preliminary (and orthogonal to
 iptables/LB) implementation. At that moment we will be able to examine the
 scalability of the solution with regard to switching OpenFlow rules, which is
 going to be severely affected by the way we handle OF rules in the
 bridge:
    * via OpenFlow, making the agent a “real” OF controller, with the current
      effort to use the ryu framework plugin to do that.
    * via cmdline (would be alleviated with the current rootwrap work, but the
      former one would be preferred).
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
 Pfaff for the explanation, if you’re reading this ;-))
 Best,
 Miguel Ángel
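 [For readers unfamiliar with the conjunctive groups mentioned above: OVS's
 conjunctive match lets an N-sources × M-ports cross product collapse into
 N+M flows plus one conjunction flow. A minimal sketch that only builds the
 flow strings; the generator function is hypothetical, though the
 conjunction(id, k/n) / conj_id notation is standard ovs-ofctl syntax.]

```python
# Sketch: how an ipset-style "allow {set of source IPs} to {set of TCP
# ports}" rule can become OVS conjunctive flows instead of an N*M cross
# product. The helper is illustrative, not neutron code; the flow syntax
# follows ovs-ofctl's conjunction(id, k/n) / conj_id notation.

def conjunctive_flows(conj_id, src_ips, tcp_ports):
    flows = []
    for ip in src_ips:            # dimension 1 of 2: allowed sources
        flows.append(f"ip,nw_src={ip},actions=conjunction({conj_id},1/2)")
    for port in tcp_ports:        # dimension 2 of 2: allowed dest ports
        flows.append(f"tcp,tp_dst={port},actions=conjunction({conj_id},2/2)")
    # A packet matching one flow from each dimension matches conj_id.
    flows.append(f"conj_id={conj_id},actions=normal")
    return flows

flows = conjunctive_flows(10, ["10.0.0.1", "10.0.0.2"], [22, 80])
# 2 + 2 + 1 flows instead of 2 * 2 match combinations.
```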
 
 
 On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
 Hi,
 
 The RFC2544 with near zero packet loss is a pretty standard performance
 benchmark. It is also used in the OPNFV project
 (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
 
 Does this mean that OpenStack will have stateful firewalls (or security 
 groups)? Any other ideas planned, like ebtables type filtering?
 
 What I am proposing is to maintain the statefulness we have now regarding
 security groups (RELATED/ESTABLISHED connections are allowed back on open
 ports) while adding a new firewall driver working only with OVS+OF
 (no iptables or linux bridge).
 That will be possible (without auto-populating OF rules in opposite
 directions) due to
 the new connection tracker functionality to be eventually merged into OVS.
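 [The connection-tracker integration anticipated here eventually landed in
 OVS as the ct() action and ct_state match. A rough sketch of the kind of
 flows a stateful driver could emit; the table numbers, helper function,
 and rule layout are illustrative assumptions, not the actual neutron
 driver.]

```python
# Rough sketch of stateful security-group flows using the OVS conntrack
# action/match (ct(), ct_state) that this thread anticipates. Table
# numbers and the helper are illustrative, not neutron's real driver.

def stateful_sg_flows(vm_port, allowed_tcp_port):
    return [
        # Send untracked traffic through conntrack first.
        "table=0,ip,ct_state=-trk,actions=ct(table=1)",
        # Replies to established/related connections are allowed back
        # without a mirrored rule in the opposite direction.
        f"table=1,ip,ct_state=+trk+est,actions=output:{vm_port}",
        f"table=1,ip,ct_state=+trk+rel,actions=output:{vm_port}",
        # New connections only on explicitly opened ports; commit them.
        f"table=1,tcp,tp_dst={allowed_tcp_port},ct_state=+trk+new,"
        f"actions=ct(commit),output:{vm_port}",
    ]

for flow in stateful_sg_flows(vm_port=5, allowed_tcp_port=22):
    print(flow)
```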
  
 -Tapio
 
 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com
 (mailto:rick.jon...@hp.com) wrote:
 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
 I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.
 
 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.
 
 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:
 
 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
 
 Conditions to be benchmarked
 
 Initial connection establishment time
 Max throughput on the same CPU
 
 Large MTUs and stateless offloads can mask a multitude of path-length 
 sins.  And there is a great deal more to performance than Mbit/s. While 
 some of that may be covered by the first item via the likes of say netperf 
 TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on 
 Mbit/s (which I assume is the focus of the second item) there is something 
 for packet per second performance.  Something like netperf TCP_RR and 
 perhaps aggregate TCP_RR or UDP_RR testing.
 
 Doesn't have to be netperf, that is simply the hammer I wield :)
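 [Rick's suggestion of measuring transactions/sec alongside throughput is
 easy to script; a tiny helper that only assembles standard netperf
 invocations (-H, -t, -l are netperf's host, test-name, and test-length
 options). The host name is a placeholder, and the exact test mix is an
 assumption based on the tests named above.]

```python
# Helper assembling the netperf runs suggested in this thread: bulk
# throughput (TCP_STREAM) plus transaction-rate tests (TCP_RR, TCP_CRR,
# UDP_RR) for packets-per-second and connection-setup behaviour.
# It only builds argv lists; "vm-under-test" is a placeholder host.

def netperf_cmds(host, seconds=30,
                 tests=("TCP_STREAM", "TCP_RR", "TCP_CRR", "UDP_RR")):
    return [["netperf", "-H", host, "-t", t, "-l", str(seconds)]
            for t in tests]

for cmd in netperf_cmds("vm-under-test"):
    print(" ".join(cmd))
```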
 
 What follows may be a bit of perfect being the enemy of the good, or 
 mission creep...
 
 On the same CPU would certainly simplify things, but it will almost 
 certainly exhibit different processor data cache behaviour than actually 
 going through a physical network with a multi-core system.  Physical NICs 
 will possibly (probably?) have RSS going, which may cause cache lines to 
 be pulled around.  The way packets will be buffered will differ as well.  
 Etc etc.  How well the different solutions scale with cores is definitely 
 a difference of interest between the two software layers.
 
 
 
 Hi Rick, thanks for your feedback here, I’ll take it into consideration,
 especially about the small-packet pps measurements, and
 really using physical hosts.
 
 Although I may start with an AIO setup for simplicity, we should
 get more conclusive results from at least two hosts and decent NICs.
 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Kevin Benton
If OVN chooses not to support VLANs, we will still need the current OVS
reference anyway so it definitely won't be wasted work.

On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
majop...@redhat.com wrote:


 Sharing thoughts that I was having:

 Maybe during the next summit it’s worth discussing the future of the
 reference agent(s); I feel we’ll be replicating a lot of work across
 OVN/OVS/RYU (ofagent) and maybe other plugins.

 I guess until OVN and its integration are ready we can’t stop, so it makes
 sense to keep development on our side. Also, having an independent plugin
 can help us iterate faster on new features, yet I expect that OVN will be
 more fluent at working with OVS and OpenFlow, as their designers have
 a very deep knowledge of OVS under the hood, and it’s C. ;)

 Best regards,

 On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:

 On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:

 Inline comments follow after this, but I wanted to respond to Brian’s
 question, which has been cut out:
 We’re talking here of doing a preliminary analysis of the networking
 performance, before writing any real code at the neutron level.

 If that looks right, then we should go into a preliminary (and orthogonal
 to iptables/LB) implementation. At that moment we will be able to examine
 the scalability of the solution with regard to switching OpenFlow rules,
 which is going to be severely affected by the way we handle OF rules
 in the bridge:
    * via OpenFlow, making the agent a “real” OF controller, with the
      current effort to use the ryu framework plugin to do that.
    * via cmdline (would be alleviated with the current rootwrap work, but
      the former one would be preferred).
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
 Pfaff for the explanation, if you’re reading this ;-))
 Best,
 Miguel Ángel


 On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:

 Hi,

 The RFC2544 with near zero packet loss is a pretty standard performance
 benchmark. It is also used in the OPNFV project (
 https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
 ).

 Does this mean that OpenStack will have stateful firewalls (or security
 groups)? Any other ideas planned, like ebtables type filtering?

 What I am proposing is to maintain the statefulness we have now regarding
 security groups (RELATED/ESTABLISHED connections are allowed back on open
 ports) while adding a new firewall driver working only with OVS+OF
 (no iptables or linux bridge).

 That will be possible (without auto-populating OF rules in opposite
 directions) due to
 the new connection tracker functionality to be eventually merged into OVS.


 -Tapio

 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com wrote:

 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

 I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


 Conditions to be benchmarked

 Initial connection establishment time
 Max throughput on the same CPU

 Large MTUs and stateless offloads can mask a multitude of path-length
 sins.  And there is a great deal more to performance than Mbit/s. While
 some of that may be covered by the first item via the likes of say netperf
 TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on
 Mbit/s (which I assume is the focus of the second item) there is something
 for packet per second performance.  Something like netperf TCP_RR and
 perhaps aggregate TCP_RR or UDP_RR testing.

 Doesn't have to be netperf, that is simply the hammer I wield :)

 What follows may be a bit of perfect being the enemy of the good, or
 mission creep...

 On the same CPU would certainly simplify things, but it will almost
 certainly exhibit different processor data cache behaviour than actually
 going through a physical network with a multi-core system.  Physical NICs
 will possibly (probably?) have RSS going, which may cause cache lines to be
 pulled around.  The way packets will be buffered will differ as well.  Etc
 etc.  How well the different solutions scale with cores is definitely a
 difference of interest between the two software layers.



 Hi rick, thanks for your feedback here, I’ll take it into consideration,
 especially about the small-packet pps measurements, and
 really using physical hosts.

 Although I may start with an AIO setup for simplicity, we should
 get more conclusive results from at least two hosts and decent NICs.

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Ben Pfaff
What kind of VLAN support would you need?

On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
 If OVN chooses not to support VLANs, we will still need the current OVS
 reference anyway so it definitely won't be wasted work.
 
 On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
 majop...@redhat.com wrote:
 
 
  Sharing thoughts that I was having:
 
  Maybe during the next summit it’s worth discussing the future of the
  reference agent(s); I feel we’ll be replicating a lot of work across
  OVN/OVS/RYU (ofagent) and maybe other plugins.

  I guess until OVN and its integration are ready we can’t stop, so it
  makes sense to keep development on our side. Also, having an independent
  plugin can help us iterate faster on new features, yet I expect that OVN
  will be more fluent at working with OVS and OpenFlow, as their designers
  have a very deep knowledge of OVS under the hood, and it’s C. ;)
 
  Best regards,
 
  On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:

  On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
 
  Inline comments follow after this, but I wanted to respond to Brian’s
  question, which has been cut out:
  We’re talking here of doing a preliminary analysis of the networking
  performance, before writing any real code at the neutron level.

  If that looks right, then we should go into a preliminary (and orthogonal
  to iptables/LB) implementation. At that moment we will be able to examine
  the scalability of the solution with regard to switching OpenFlow rules,
  which is going to be severely affected by the way we handle OF rules
  in the bridge:
     * via OpenFlow, making the agent a “real” OF controller, with the
       current effort to use the ryu framework plugin to do that.
     * via cmdline (would be alleviated with the current rootwrap work, but
       the former one would be preferred).
  Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
  Pfaff for the explanation, if you’re reading this ;-))
  Best,
  Miguel Ángel
 
 
  On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
 
  Hi,
 
  The RFC2544 with near zero packet loss is a pretty standard performance
  benchmark. It is also used in the OPNFV project (
  https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
  ).
 
  Does this mean that OpenStack will have stateful firewalls (or security
  groups)? Any other ideas planned, like ebtables type filtering?
 
  What I am proposing is to maintain the statefulness we have now
  regarding security groups (RELATED/ESTABLISHED connections are
  allowed back on open ports) while adding a new firewall driver working
  only with OVS+OF (no iptables or linux bridge).

  That will be possible (without auto-populating OF rules in opposite
  directions) due to
  the new connection tracker functionality to be eventually merged into OVS.
 
 
  -Tapio
 
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com wrote:
 
  On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
 
  I’m writing a plan/script to benchmark OVS+OF(CT) vs
  OVS+LB+iptables+ipsets,
  so we can make sure there’s a real difference before jumping into any
  OpenFlow security group filters when we have connection tracking in OVS.
 
  The plan is to keep all of it in a single multicore host, and make
  all the measures within it, to make sure we just measure the
  difference due to the software layers.
 
  Suggestions or ideas on what to measure are welcome; there’s an initial
  draft here:
 
  https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
 
 
  Conditions to be benchmarked
 
  Initial connection establishment time
  Max throughput on the same CPU
 
  Large MTUs and stateless offloads can mask a multitude of path-length
  sins.  And there is a great deal more to performance than Mbit/s. While
  some of that may be covered by the first item via the likes of say netperf
  TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on
  Mbit/s (which I assume is the focus of the second item) there is something
  for packet per second performance.  Something like netperf TCP_RR and
  perhaps aggregate TCP_RR or UDP_RR testing.
 
  Doesn't have to be netperf, that is simply the hammer I wield :)
 
  What follows may be a bit of perfect being the enemy of the good, or
  mission creep...
 
  On the same CPU would certainly simplify things, but it will almost
  certainly exhibit different processor data cache behaviour than actually
  going through a physical network with a multi-core system.  Physical NICs
  will possibly (probably?) have RSS going, which may cause cache lines to be
  pulled around.  The way packets will be buffered will differ as well.  Etc
  etc.  How well the different solutions scale with cores is definitely a
  difference of interest between the two software layers.
 
 
 
  Hi Rick, thanks for your feedback here, 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Ben Pfaff
This sounds quite similar to the planned support in OVN to gateway a
logical network to a particular VLAN on a physical port, so perhaps it
will be sufficient.

On Thu, Feb 26, 2015 at 05:58:40PM -0800, Kevin Benton wrote:
 If a port is bound with a VLAN segmentation type, it will get a VLAN id and
 the name of the physical network that it corresponds to. In the current plugin,
 each agent is configured with a mapping between physical networks and OVS
 bridges. The agent takes the bound port information and sets up rules to
 forward traffic from the VM port to the OVS bridge corresponding to the
 physical network. The bridge usually then has a physical interface added to
 it for the tagged traffic to use to reach the rest of the network.
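 [The physnet-to-bridge mapping Kevin describes can be sketched roughly as
 follows; the dict shapes and helper name here are illustrative
 assumptions, not the actual neutron agent data structures.]

```python
# Illustrative sketch of the agent-side lookup described above: a bound
# port carries a segmentation type, VLAN id, and physical network name,
# and the agent's per-host configuration maps that physical network to a
# local OVS bridge. Dict shapes are assumptions, not neutron's real API.

# Per-agent configuration: physical network name -> OVS bridge.
bridge_mappings = {"physnet1": "br-eth1"}

def resolve_port_wiring(port, bridge_mappings):
    """Return the (bridge, vlan_id) the agent should wire this port to."""
    if port["segmentation_type"] != "vlan":
        raise ValueError("only VLAN-bound ports handled in this sketch")
    bridge = bridge_mappings[port["physical_network"]]
    return bridge, port["segmentation_id"]

bound_port = {
    "segmentation_type": "vlan",
    "segmentation_id": 101,
    "physical_network": "physnet1",
}
print(resolve_port_wiring(bound_port, bridge_mappings))
```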
 
 On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com wrote:
 
  What kind of VLAN support would you need?
 
  On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
   If OVN chooses not to support VLANs, we will still need the current OVS
   reference anyway so it definitely won't be wasted work.
  
   On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
   majop...@redhat.com wrote:
  
   
Sharing thoughts that I was having:
   
 Maybe during the next summit it’s worth discussing the future of the
 reference agent(s); I feel we’ll be replicating a lot of work across
 OVN/OVS/RYU (ofagent) and maybe other plugins.

 I guess until OVN and its integration are ready we can’t stop, so it makes
 sense to keep development on our side. Also, having an independent plugin
 can help us iterate faster on new features, yet I expect that OVN will be
 more fluent at working with OVS and OpenFlow, as their designers have
 a very deep knowledge of OVS under the hood, and it’s C. ;)
   
Best regards,
   
 On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:
    
 On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
   
 Inline comments follow after this, but I wanted to respond to Brian's
 question, which has been cut out:

 We're talking here of doing a preliminary analysis of the networking
 performance, before writing any real code at neutron level.

 If that looks right, then we should go into a preliminary (and orthogonal
 to iptables/LB) implementation. At that moment we will be able to examine
 the scalability of the solution with regard to switching OpenFlow rules,
 which is going to be severely affected by how we choose to handle OF rules
 in the bridge:
    * via OpenFlow, making the agent a "real" OF controller, with the
      current effort to use the ryu framework plugin to do that.
    * via cmdline (would be alleviated with the current rootwrap work, but
      the former one would be preferred).

 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben
 Pfaff for the explanation, if you're reading this ;-))

 Best,
 Miguel Ángel
   
   
 On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
   
Hi,
   
The RFC2544 with near zero packet loss is a pretty standard performance
benchmark. It is also used in the OPNFV project (
   
  https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
).
   
Does this mean that OpenStack will have stateful firewalls (or security
groups)? Any other ideas planned, like ebtables type filtering?
   
 What I am proposing is in terms of maintaining the statefulness we
 have now regarding security groups (RELATED/ESTABLISHED connections are
 allowed back on open ports) while adding a new firewall driver working
 only with OVS+OF (no iptables or linux bridge).
    
 That will be possible (without auto-populating OF rules in opposite
 directions) due to the new connection tracker functionality to be
 eventually merged into ovs.
   
   
-Tapio
   
On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com
  wrote:
   
 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
    
 I'm writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets, so we can make sure there's a real difference
 before jumping into any OpenFlow security group filters when we have
 connection tracking in OVS.
    
 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.
    
 Suggestions or ideas on what to measure are welcome, there's an initial
draft here:
   
https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
   
   
Conditions to be benchmarked
   
Initial connection establishment time
Max throughput on the same CPU
   
Large MTUs and stateless offloads can mask a multitude of path-length
sins.  And there is a great deal more to performance than Mbit/s. While
some of that may be covered by the 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-26 Thread Kevin Benton
If a port is bound with a VLAN segmentation type, it will get a VLAN id and
a name of a physical network that it corresponds to. In the current plugin,
each agent is configured with a mapping between physical networks and OVS
bridges. The agent takes the bound port information and sets up rules to
forward traffic from the VM port to the OVS bridge corresponding to the
physical network. The bridge usually then has a physical interface added to
it for the tagged traffic to use to reach the rest of the network.
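The mapping Kevin describes lives in the OVS agent's configuration; a minimal sketch, assuming the stock ML2/OVS agent option names (the bridge and interface names here are illustrative, not prescriptive):

```ini
# agent-side config (e.g. ml2_conf.ini / the OVS agent's config file)
[ovs]
# physical network name -> OVS bridge that owns the uplink
bridge_mappings = physnet1:br-eth1
```

The agent then wires bound ports on physnet1 through to br-eth1, to which the operator has added the physical interface (e.g. `ovs-vsctl add-port br-eth1 eth1`) so tagged traffic can reach the rest of the network.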

On Thu, Feb 26, 2015 at 4:19 PM, Ben Pfaff b...@nicira.com wrote:

 What kind of VLAN support would you need?

 On Thu, Feb 26, 2015 at 02:05:41PM -0800, Kevin Benton wrote:
  If OVN chooses not to support VLANs, we will still need the current OVS
  reference anyway so it definitely won't be wasted work.
 
  On Thu, Feb 26, 2015 at 2:56 AM, Miguel Angel Ajo Pelayo 
  majop...@redhat.com wrote:
 
  
    Sharing thoughts that I was having:
   
    Maybe during the next summit it's worth discussing the future of the
    reference agent(s); I feel we'll be replicating a lot of work across
    OVN/OVS/RYU(ofagent) and maybe other plugins.
   
    I guess until OVN and its integration are ready we can't stop, so it
    makes sense to keep development at our side. Also, having an independent
    plugin can help us iterate faster for new features, yet I expect that
    OVN will be more fluent at working with OVS and OpenFlow, as their
    designers have a very deep knowledge of OVS under the hood, and
    it's C. ;)
  
   Best regards,
  
    On 26/2/2015, at 7:57, Miguel Ángel Ajo majop...@redhat.com wrote:
   
    On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
  
    Inline comments follow after this, but I wanted to respond to Brian's
    question, which has been cut out:

    We're talking here of doing a preliminary analysis of the networking
    performance, before writing any real code at neutron level.

    If that looks right, then we should go into a preliminary (and
    orthogonal to iptables/LB) implementation. At that moment we will be
    able to examine the scalability of the solution with regard to
    switching OpenFlow rules, which is going to be severely affected by
    how we choose to handle OF rules in the bridge:
       * via OpenFlow, making the agent a "real" OF controller, with the
         current effort to use the ryu framework plugin to do that.
       * via cmdline (would be alleviated with the current rootwrap work,
         but the former one would be preferred).

    Also, ipset groups can be moved into conjunctive groups in OF (thanks
    Ben Pfaff for the explanation, if you're reading this ;-))

    Best,
    Miguel Ángel
  
  
    On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
  
   Hi,
  
   The RFC2544 with near zero packet loss is a pretty standard performance
   benchmark. It is also used in the OPNFV project (
  
 https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
   ).
  
   Does this mean that OpenStack will have stateful firewalls (or security
   groups)? Any other ideas planned, like ebtables type filtering?
  
    What I am proposing is in terms of maintaining the statefulness we
    have now regarding security groups (RELATED/ESTABLISHED connections are
    allowed back on open ports) while adding a new firewall driver working
    only with OVS+OF (no iptables or linux bridge).
   
    That will be possible (without auto-populating OF rules in opposite
    directions) due to the new connection tracker functionality to be
    eventually merged into ovs.
  
  
   -Tapio
  
   On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com
 wrote:
  
    On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
   
    I'm writing a plan/script to benchmark OVS+OF(CT) vs
    OVS+LB+iptables+ipsets, so we can make sure there's a real difference
    before jumping into any OpenFlow security group filters when we have
    connection tracking in OVS.
   
    The plan is to keep all of it in a single multicore host, and make
    all the measures within it, to make sure we just measure the
    difference due to the software layers.
   
    Suggestions or ideas on what to measure are welcome, there's an initial
   draft here:
  
   https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
  
  
   Conditions to be benchmarked
  
   Initial connection establishment time
   Max throughput on the same CPU
  
   Large MTUs and stateless offloads can mask a multitude of path-length
   sins.  And there is a great deal more to performance than Mbit/s. While
   some of that may be covered by the first item via the likes of say
 netperf
   TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus
 on
   Mbit/s (which I assume is the focus of the second item) there is
 something
   for packet per second performance.  Something like netperf TCP_RR and
   perhaps aggregate TCP_RR or UDP_RR testing.
  
   Doesn't have to be 

[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,  
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it in a single multicore host, and make all the 
measures
within it, to make sure we just measure the difference due to the software 
layers.

Suggestions or ideas on what to measure are welcome, there’s an initial draft 
here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct  

Miguel Ángel Ajo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Kyle Mestery
On Wed, Feb 25, 2015 at 7:52 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

  I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make all the
 measures
 within it, to make sure we just measure the difference due to the software
 layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

 This is a good idea Miguel, thanks for taking this on! Might I suggest we
add this document into the neutron tree once you feel it's ready? Having it
exist there may make a lot of sense for people who want to understand your
performance test setup and who may want to run it in the future.

Thanks,
Kyle


 Miguel Ángel Ajo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Kyle Mestery
On Wed, Feb 25, 2015 at 8:49 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 On Wednesday, 25 de February de 2015 at 15:38, Kyle Mestery wrote:

 On Wed, Feb 25, 2015 at 7:52 AM, Miguel Ángel Ajo majop...@redhat.com
 wrote:

  I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make all the
 measures
 within it, to make sure we just measure the difference due to the software
 layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

 This is a good idea Miguel, thanks for taking this on! Might I suggest we
 add this document into the neutron tree once you feel it's ready? Having it
 exist there may make a lot of sense for people who want to understand your
 performance test setup and who may want to run it in the future.


 That’s actually a good idea, so we can use gerrit to review and gather
 comments.
 Where should I put the document?

 in /doc/source we have devref with its own index in rst.

 I think it makes sense to add a testing directory there perhaps. But
failing that, even having it in devref would be a good idea.


 Best,
 Miguel Ángel.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Brian Haley
On 02/25/2015 08:52 AM, Miguel Ángel Ajo wrote:
 I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.
 
 The plan is to keep all of it in a single multicore host, and make all the 
 measures
 within it, to make sure we just measure the difference due to the software 
 layers.
 
 Suggestions or ideas on what to measure are welcome, there’s an initial draft 
 here:
 
 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

Thanks for writing this up Miguel.

I realize this is more focused on performance (how fast the packets flow), but
one of the orthogonal issues to Security Groups in general is the time it takes
for Neutron to apply or update them, for example, iptables_manager.apply().  I
would like to make sure that time doesn't grow any larger than it is today.
This can probably all be scraped from log files, so wouldn't require any special
work, except for testing with a large SG set.
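Brian's point about watching iptables_manager.apply() latency can indeed be scraped from logs; a minimal sketch of the idea (the log line format here is hypothetical — the real neutron agent log lines differ, so the parsing would need adapting):

```python
from datetime import datetime

def apply_durations(log_lines):
    """Pair hypothetical 'apply-start'/'apply-end' log lines and return
    the elapsed seconds for each iptables apply() call they bracket."""
    starts, durations = [], []
    for line in log_lines:
        ts_str, _, event = line.partition(" ")
        ts = datetime.strptime(ts_str, "%H:%M:%S.%f")
        if "apply-start" in event:
            starts.append(ts)
        elif "apply-end" in event and starts:
            durations.append((ts - starts.pop()).total_seconds())
    return durations

log = [
    "10:00:00.000 iptables apply-start",
    "10:00:01.250 iptables apply-end",
    "10:00:05.000 iptables apply-start",
    "10:00:05.500 iptables apply-end",
]
print(apply_durations(log))  # -> [1.25, 0.5]
```

Comparing the distribution of these durations before and after a driver change, with a large SG set, would show whether apply time grows.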

Thanks,

-Brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Rick Jones

On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.

Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


Conditions to be benchmarked

Initial connection establishment time
Max throughput on the same CPU

Large MTUs and stateless offloads can mask a multitude of path-length 
sins.  And there is a great deal more to performance than Mbit/s. While 
some of that may be covered by the first item via the likes of say 
netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
focus on Mbit/s (which I assume is the focus of the second item) there 
is something for packet per second performance.  Something like netperf 
TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.


Doesn't have to be netperf, that is simply the hammer I wield :)
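For reference, the kinds of runs Rick suggests might look like the following netperf invocations (illustrative only — they assume netperf is installed and a netserver is running on $TARGET):

```shell
# single-transaction request/response: a proxy for per-packet latency/pps
netperf -H $TARGET -t TCP_RR -l 60

# connect/request/response/close: also exercises connection setup cost
netperf -H $TARGET -t TCP_CRR -l 60

# aggregate RR: several concurrent instances approximate aggregate pps
for i in $(seq 1 8); do netperf -H $TARGET -t TCP_RR -l 60 & done; wait
```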

What follows may be a bit of perfect being the enemy of the good, or 
mission creep...


On the same CPU would certainly simplify things, but it will almost 
certainly exhibit different processor data cache behaviour than actually 
going through a physical network with a multi-core system.  Physical 
NICs will possibly (probably?) have RSS going, which may cause cache 
lines to be pulled around.  The way packets will be buffered will differ 
as well.  Etc etc.  How well the different solutions scale with cores is 
definitely a difference of interest between the two software layers.


rick

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Tapio Tallgren
Hi,

The RFC2544 with near zero packet loss is a pretty standard performance
benchmark. It is also used in the OPNFV project (
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
).

Does this mean that OpenStack will have stateful firewalls (or security
groups)? Any other ideas planned, like ebtables type filtering?

-Tapio

On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com wrote:

 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

 I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


 Conditions to be benchmarked

 Initial connection establishment time
 Max throughput on the same CPU

 Large MTUs and stateless offloads can mask a multitude of path-length
 sins.  And there is a great deal more to performance than Mbit/s. While
 some of that may be covered by the first item via the likes of say netperf
 TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on
 Mbit/s (which I assume is the focus of the second item) there is something
 for packet per second performance.  Something like netperf TCP_RR and
 perhaps aggregate TCP_RR or UDP_RR testing.

 Doesn't have to be netperf, that is simply the hammer I wield :)

 What follows may be a bit of perfect being the enemy of the good, or
 mission creep...

 On the same CPU would certainly simplify things, but it will almost
 certainly exhibit different processor data cache behaviour than actually
 going through a physical network with a multi-core system.  Physical NICs
 will possibly (probably?) have RSS going, which may cause cache lines to be
 pulled around.  The way packets will be buffered will differ as well.  Etc
 etc.  How well the different solutions scale with cores is definitely a
  difference of interest between the two software layers.

 rick


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-Tapio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
On Thursday, 26 February 2015 at 7:48, Miguel Ángel Ajo wrote:
 Inline comments follow after this, but I wanted to respond to Brian's
 question, which has been cut out:
  
 We're talking here of doing a preliminary analysis of the networking
 performance, before writing any real code at neutron level.
  
 If that looks right, then we should go into a preliminary (and orthogonal
 to iptables/LB) implementation. At that moment we will be able to examine
 the scalability of the solution with regard to switching OpenFlow rules,
 which is going to be severely affected by how we choose to handle OF rules
 in the bridge:
  
    * via OpenFlow, making the agent a "real" OF controller, with the
      current effort to use the ryu framework plugin to do that.
    * via cmdline (would be alleviated with the current rootwrap work, but
      the former one would be preferred).
  
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
 Pfaff for the
 explanation, if you’re reading this ;-))
  
 Best,
 Miguel Ángel
  
  
  
 On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:
  Hi,
   
  The RFC2544 with near zero packet loss is a pretty standard performance 
  benchmark. It is also used in the OPNFV project 
  (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
   
  Does this mean that OpenStack will have stateful firewalls (or security 
  groups)? Any other ideas planned, like ebtables type filtering?
   
 What I am proposing is in terms of maintaining the statefulness we have
 now regarding security groups (RELATED/ESTABLISHED connections are allowed
 back on open ports) while adding a new firewall driver working only with
 OVS+OF (no iptables or linux bridge).
  
 That will be possible (without auto-populating OF rules in opposite
 directions) due to the new connection tracker functionality to be
 eventually merged into ovs.
   
  -Tapio
   
   
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com) wrote:
   On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.
 
The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.
 
Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:
 
https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

   Conditions to be benchmarked

   Initial connection establishment time
   Max throughput on the same CPU

   Large MTUs and stateless offloads can mask a multitude of path-length 
   sins.  And there is a great deal more to performance than Mbit/s. While 
   some of that may be covered by the first item via the likes of say 
   netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
   focus on Mbit/s (which I assume is the focus of the second item) there is 
   something for packet per second performance.  Something like netperf 
   TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.

   Doesn't have to be netperf, that is simply the hammer I wield :)

   What follows may be a bit of perfect being the enemy of the good, or 
   mission creep...

   On the same CPU would certainly simplify things, but it will almost 
   certainly exhibit different processor data cache behaviour than actually 
   going through a physical network with a multi-core system.  Physical NICs 
   will possibly (probably?) have RSS going, which may cause cache lines to 
   be pulled around.  The way packets will be buffered will differ as well.  
   Etc etc.  How well the different solutions scale with cores is definitely 
    a difference of interest between the two software layers.



Hi Rick, thanks for your feedback here. I'll take it into consideration,  
especially the small-packet pps measurements, and
really using physical hosts.

Although I may start with an AIO setup for simplicity, we should
get more conclusive results from at least two hosts and decent NICs.

I will put all this together in the document, and loop you in for review.  
   rick


   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
  --  
  -Tapio  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
Inline comments follow after this, but I wanted to respond to Brian's
question, which has been cut out:

We’re talking here of doing a preliminary analysis of the networking 
performance,
before writing any real code at neutron level.

If that looks right, then we should go into a preliminary (and orthogonal to 
iptables/LB)
implementation. At that moment we will be able to examine the scalability
of the solution with regard to switching OpenFlow rules, which is going to
be severely affected by how we choose to handle OF rules in the bridge:

   * via OpenFlow, making the agent a "real" OF controller, with the
     current effort to use the ryu framework plugin to do that.
   * via cmdline (would be alleviated with the current rootwrap work, but
     the former one would be preferred).

Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben Pfaff 
for the
explanation, if you’re reading this ;-))
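To make that concrete: the conjunctive-match syntax that later shipped in Open vSwitch (2.4+) lets a "src in set A AND dst port in set B" rule be expressed without the cross-product of flows. A sketch, with illustrative addresses/ports:

```
# clause 1 of 2: source address belongs to the set
table=0, priority=100, ip, nw_src=10.0.0.0/24, actions=conjunction(10, 1/2)
table=0, priority=100, ip, nw_src=10.0.1.0/24, actions=conjunction(10, 1/2)
# clause 2 of 2: destination port belongs to the set
table=0, priority=100, tcp, tp_dst=22, actions=conjunction(10, 2/2)
table=0, priority=100, tcp, tp_dst=80, actions=conjunction(10, 2/2)
# fires only when one flow from each clause matched
table=0, priority=100, conj_id=10, ip, actions=NORMAL
```

So N sources and M ports cost N+M+1 flows instead of N*M.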

Best,
Miguel Ángel



On Wednesday, 25 February 2015 at 20:34, Tapio Tallgren wrote:  
 Hi,
  
 The RFC2544 with near zero packet loss is a pretty standard performance 
 benchmark. It is also used in the OPNFV project 
 (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
  
 Does this mean that OpenStack will have stateful firewalls (or security 
 groups)? Any other ideas planned, like ebtables type filtering?
  
What I am proposing is in terms of maintaining the statefulness we have now
regarding security groups (RELATED/ESTABLISHED connections are allowed back
on open ports) while adding a new firewall driver working only with OVS+OF
(no iptables or linux bridge).

That will be possible (without auto-populating OF rules in opposite
directions) due to the new connection tracker functionality to be
eventually merged into ovs.
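As a rough sketch of what such stateful flows could look like once the conntrack integration lands (syntax as it eventually appeared in OVS 2.5+; the table numbers and the SSH rule are illustrative, not the proposed driver):

```
# send untracked IP traffic through the connection tracker first
table=0, priority=10, ip, ct_state=-trk, actions=ct(table=1)
# a security-group rule: allow new inbound SSH, committing the connection
table=1, priority=10, ct_state=+trk+new, tcp, tp_dst=22, actions=ct(commit),NORMAL
# return/related traffic of committed connections is allowed statefully,
# with no auto-populated reverse rules needed
table=1, priority=10, ct_state=+trk+est, actions=NORMAL
table=1, priority=10, ct_state=+trk+rel, actions=NORMAL
# everything else is dropped
table=1, priority=1, actions=drop
```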
  
  
 -Tapio
  
  
 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
 (mailto:rick.jon...@hp.com) wrote:
  On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
   I’m writing a plan/script to benchmark OVS+OF(CT) vs
   OVS+LB+iptables+ipsets,
   so we can make sure there’s a real difference before jumping into any
   OpenFlow security group filters when we have connection tracking in OVS.

   The plan is to keep all of it in a single multicore host, and make
   all the measures within it, to make sure we just measure the
   difference due to the software layers.

   Suggestions or ideas on what to measure are welcome, there’s an initial
   draft here:

   https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
   
  Conditions to be benchmarked
   
  Initial connection establishment time
  Max throughput on the same CPU
   
  Large MTUs and stateless offloads can mask a multitude of path-length sins. 
   And there is a great deal more to performance than Mbit/s. While some of 
  that may be covered by the first item via the likes of say netperf TCP_CRR 
  or TCP_CC testing, I would suggest that in addition to a focus on Mbit/s 
  (which I assume is the focus of the second item) there is something for 
  packet per second performance.  Something like netperf TCP_RR and perhaps 
  aggregate TCP_RR or UDP_RR testing.
   
  Doesn't have to be netperf, that is simply the hammer I wield :)
   
  What follows may be a bit of perfect being the enemy of the good, or 
  mission creep...
   
  On the same CPU would certainly simplify things, but it will almost 
  certainly exhibit different processor data cache behaviour than actually 
  going through a physical network with a multi-core system.  Physical NICs 
  will possibly (probably?) have RSS going, which may cause cache lines to be 
  pulled around.  The way packets will be buffered will differ as well.  Etc 
  etc.  How well the different solutions scale with cores is definitely a 
   difference of interest between the two software layers.
   
  rick
   
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
 --  
 -Tapio  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Ben Pfaff
On Thu, Feb 26, 2015 at 07:48:51AM +0100, Miguel Ángel Ajo wrote:
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
 Pfaff for the
 explanation, if you’re reading this ;-))

You're welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev