Re: [ovs-dev] [PATCH] ovn-nbctl, ovn-sbctl, ovs-vsctl: Remove gratuitous NULL checks.

2017-05-29 Thread Miguel Angel Ajo Pelayo
Acked-by: Miguel Angel Ajo 

On Sat, May 27, 2017 at 5:44 AM, Ben Pfaff  wrote:

> These functions all set txn and do not un-set it within their main
> command execution function, so it's gratuitous to check it along this path.
>
> Found by Coverity.
>
> Reported-at: https://scan3.coverity.com/reports.htm#v16889/p10449/
> fileInstanceId=14763082=4305338=180417
> Signed-off-by: Ben Pfaff 
> ---
>  ovn/utilities/ovn-nbctl.c | 9 -
>  ovn/utilities/ovn-sbctl.c | 9 -
>  utilities/ovs-vsctl.c | 9 -
>  3 files changed, 12 insertions(+), 15 deletions(-)
>
> diff --git a/ovn/utilities/ovn-nbctl.c b/ovn/utilities/ovn-nbctl.c
> index 9428a342bec5..b5143e6adfae 100644
> --- a/ovn/utilities/ovn-nbctl.c
> +++ b/ovn/utilities/ovn-nbctl.c
> @@ -3309,11 +3309,10 @@ do_nbctl(const char *args, struct ctl_command
> *commands, size_t n_commands,
>  try_again:
>  /* Our transaction needs to be rerun, or a prerequisite was not met.
> Free
>   * resources and return so that the caller can try again. */
> -if (txn) {
> -ovsdb_idl_txn_abort(txn);
> -ovsdb_idl_txn_destroy(txn);
> -the_idl_txn = NULL;
> -}
> +ovsdb_idl_txn_abort(txn);
> +ovsdb_idl_txn_destroy(txn);
> +the_idl_txn = NULL;
> +
>  ovsdb_symbol_table_destroy(symtab);
>  for (c = commands; c < &commands[n_commands]; c++) {
>  ds_destroy(&c->output);
> diff --git a/ovn/utilities/ovn-sbctl.c b/ovn/utilities/ovn-sbctl.c
> index 0142062fd13f..4a884232f8fd 100644
> --- a/ovn/utilities/ovn-sbctl.c
> +++ b/ovn/utilities/ovn-sbctl.c
> @@ -1344,11 +1344,10 @@ do_sbctl(const char *args, struct ctl_command
> *commands, size_t n_commands,
>  try_again:
>  /* Our transaction needs to be rerun, or a prerequisite was not met.
> Free
>   * resources and return so that the caller can try again. */
> -if (txn) {
> -ovsdb_idl_txn_abort(txn);
> -ovsdb_idl_txn_destroy(txn);
> -the_idl_txn = NULL;
> -}
> +ovsdb_idl_txn_abort(txn);
> +ovsdb_idl_txn_destroy(txn);
> +the_idl_txn = NULL;
> +
>  ovsdb_symbol_table_destroy(symtab);
>  for (c = commands; c < &commands[n_commands]; c++) {
>  ds_destroy(&c->output);
> diff --git a/utilities/ovs-vsctl.c b/utilities/ovs-vsctl.c
> index 9fe3df03af66..28c1c4457016 100644
> --- a/utilities/ovs-vsctl.c
> +++ b/utilities/ovs-vsctl.c
> @@ -2670,11 +2670,10 @@ do_vsctl(const char *args, struct ctl_command
> *commands, size_t n_commands,
>  try_again:
>  /* Our transaction needs to be rerun, or a prerequisite was not met.
> Free
>   * resources and return so that the caller can try again. */
> -if (txn) {
> -ovsdb_idl_txn_abort(txn);
> -ovsdb_idl_txn_destroy(txn);
> -the_idl_txn = NULL;
> -}
> +ovsdb_idl_txn_abort(txn);
> +ovsdb_idl_txn_destroy(txn);
> +the_idl_txn = NULL;
> +
>  ovsdb_symbol_table_destroy(symtab);
>  for (c = commands; c < &commands[n_commands]; c++) {
>  ds_destroy(&c->output);
> --
> 2.10.2
>
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>


Re: [ovs-dev] [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

2017-05-29 Thread Kevin Traynor
On 05/29/2017 01:22 PM, Ilya Maximets wrote:
> On 26.05.2017 20:14, Kevin Traynor wrote:
>> On 05/26/2017 03:55 PM, Ilya Maximets wrote:
>>> On 10.03.2017 07:27, Daniele Di Proietto wrote:
 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
> Reconfiguration of HW NICs may lead to packet drops.
> In the current model, all physical ports will be reconfigured each
> time the number of PMD threads changes. Since we are not stopping
> threads on pmd-cpu-mask changes, this patch will help to further
> decrease a port's downtime by setting the maximum possible number
> of wanted tx queues to avoid unnecessary reconfigurations.
>
> Signed-off-by: Ilya Maximets 

 I haven't thought this through a lot, but the last big series we pushed
 on master went exactly in the opposite direction, i.e. created one txq
 for each thread in the datapath.

 I thought this was a good idea because:

 * On some systems with hyperthreading we can have a lot of cpus (we 
 received
reports of systems with 72 cores). If you want to use only a couple of 
 cores
you're still forced to have a lot of unused txqs, which prevent you
 from having
lockless tx.
 * We thought that reconfiguring the number of pmds would not be a frequent
operation.

 Why do you want to reconfigure the threads that often?  Is it to be
 able to support more throughput quickly?
>>>
>>> Yes.
>>>
 In this case I think we shouldn't use the number of cpus,
 but something else coming from the user, so that the administrator can
 balance how
 quickly pmd threads can be reconfigured vs how many txqs should be on
 the system.
 I'm not sure how to make this user friendly though.

 What do you think?
>>>
>>> Right now I'm thinking about a combined solution which will address
>>> both issues (too big a number of TXQs and frequent device reconfiguration).
>>> I think we can implement an additional function to get the port's limitations.
>>> For now we can request the maximum number of TX queues from the netdev and
>>> use it while the number of cores is less than the number of queues.
>>> Something like this:
>>> 
>>> lib/netdev-dpdk.c:
>>> uint32_t netdev_dpdk_get_max_txqs(struct netdev *netdev)
>>> {
>>> struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>>> struct rte_eth_dev_info dev_info;
>>>
>>> ovs_mutex_lock(&dev->mutex);
>>> rte_eth_dev_info_get(dev->port_id, &dev_info);
>>> ovs_mutex_unlock(&dev->mutex);
>>>
>>> return dev_info.max_tx_queues;
>>> }
>>>
>>> lib/dpif-netdev.c:reconfigure_datapath():
>>>
>>> <->
>>> max_tx_queues = netdev_get_max_txqs(port->netdev);
>>> number_of_threads = cmap_count(&dp->poll_threads);
>>> wanted_txqs = MAX(max_tx_queues, number_of_threads);
>>> <->
>>>
>>> In this implementation there will be no additional locking if the number of
>>> threads is less than or equal to the maximum possible number of tx queues in HW.
>>>
>>> What do you think? Ian? Darrell?
>>>
>>
>> I'm not sure about using rte_eth_dev_info_get() as IIRC there was
>> problems previously with it reporting a number, but then when you went
>> to configure them they were not all available depending on mode. That
>> was why the trial and error queue configure was put in.
>>
>> What about replacing 'max_tx_queues' above with a number like 16 that is
>> likely to be supported by the ports and unlikely be exceeded by
>> number_of_threads?
>>
>> Kevin.
> 
> Hi Kevin. Thank you for reminding me of this issue.
> 
> But I think that magic numbers are not a good solution.
> 
> One issue in my first implementation is that the desired number of queues is
> not actually the same as the required number.
> 
> How about something like this:
> <->
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 97f72d3..1a18e4f 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -3448,7 +3448,7 @@ reconfigure_datapath(struct dp_netdev *dp)
>  {
>  struct dp_netdev_pmd_thread *pmd;
>  struct dp_netdev_port *port;
> -int wanted_txqs;
> +int needed_txqs, wanted_txqs;
>  
>  dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
>  
> @@ -3456,7 +3456,15 @@ reconfigure_datapath(struct dp_netdev *dp)
>   * on the system and the user configuration. */
>  reconfigure_pmd_threads(dp);
>  
> -wanted_txqs = cmap_count(&dp->poll_threads);
> +/* We need 1 Tx queue for each thread to avoid locking, but we will try
> + * to allocate the maximum possible value to minimize the number of port
> + * reconfigurations. */
> +needed_txqs = cmap_count(&dp->poll_threads);
> +/* (n_cores + 1) is the maximum that we might need to have.
> + * Additional queue is for non-PMD threads. */
> +wanted_txqs = ovs_numa_get_n_cores();
> +ovs_assert(wanted_txqs != OVS_CORE_UNSPEC);
> +wanted_txqs++;
>  
>  /* The number 

Re: [ovs-dev] [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

2017-05-29 Thread Ilya Maximets
On 29.05.2017 16:26, Kevin Traynor wrote:
> On 05/29/2017 01:22 PM, Ilya Maximets wrote:
>> On 26.05.2017 20:14, Kevin Traynor wrote:
>>> On 05/26/2017 03:55 PM, Ilya Maximets wrote:
 On 10.03.2017 07:27, Daniele Di Proietto wrote:
> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
>> Reconfiguration of HW NICs may lead to packet drops.
>> In the current model, all physical ports will be reconfigured each
>> time the number of PMD threads changes. Since we are not stopping
>> threads on pmd-cpu-mask changes, this patch will help to further
>> decrease a port's downtime by setting the maximum possible number
>> of wanted tx queues to avoid unnecessary reconfigurations.
>>
>> Signed-off-by: Ilya Maximets 
>
> I haven't thought this through a lot, but the last big series we pushed
> on master went exactly in the opposite direction, i.e. created one txq
> for each thread in the datapath.
>
> I thought this was a good idea because:
>
> * On some systems with hyperthreading we can have a lot of cpus (we 
> received
>reports of systems with 72 cores). If you want to use only a couple of 
> cores
>you're still forced to have a lot of unused txqs, which prevent you
> from having
>lockless tx.
> * We thought that reconfiguring the number of pmds would not be a frequent
>operation.
>
> Why do you want to reconfigure the threads that often?  Is it to be
> able to support more throughput quickly?

 Yes.

> In this case I think we shouldn't use the number of cpus,
> but something else coming from the user, so that the administrator can
> balance how
> quickly pmd threads can be reconfigured vs how many txqs should be on
> the system.
> I'm not sure how to make this user friendly though.
>
> What do you think?

 Right now I'm thinking about a combined solution which will address
 both issues (too big a number of TXQs and frequent device reconfiguration).
 I think we can implement an additional function to get the port's limitations.
 For now we can request the maximum number of TX queues from the netdev and
 use it while the number of cores is less than the number of queues.
 Something like this:

 lib/netdev-dpdk.c:
 uint32_t netdev_dpdk_get_max_txqs(struct netdev *netdev)
 {
 struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
 struct rte_eth_dev_info dev_info;

 ovs_mutex_lock(&dev->mutex);
 rte_eth_dev_info_get(dev->port_id, &dev_info);
 ovs_mutex_unlock(&dev->mutex);

 return dev_info.max_tx_queues;
 }

 lib/dpif-netdev.c:reconfigure_datapath():

 <->
 max_tx_queues = netdev_get_max_txqs(port->netdev);
 number_of_threads = cmap_count(&dp->poll_threads);
 wanted_txqs = MAX(max_tx_queues, number_of_threads);
 <->

 In this implementation there will be no additional locking if the number of
 threads is less than or equal to the maximum possible number of tx queues in HW.

 What do you think? Ian? Darrell?

>>>
>>> I'm not sure about using rte_eth_dev_info_get() as IIRC there was
>>> problems previously with it reporting a number, but then when you went
>>> to configure them they were not all available depending on mode. That
>>> was why the trial and error queue configure was put in.
>>>
>>> What about replacing 'max_tx_queues' above with a number like 16 that is
>>> likely to be supported by the ports and unlikely be exceeded by
>>> number_of_threads?
>>>
>>> Kevin.
>>
>> Hi Kevin. Thank you for reminding me of this issue.
>>
>> But I think that magic numbers are not a good solution.
>>
>> One issue in my first implementation is that the desired number of queues is
>> not actually the same as the required number.
>>
>> How about something like this:
>> <->
>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>> index 97f72d3..1a18e4f 100644
>> --- a/lib/dpif-netdev.c
>> +++ b/lib/dpif-netdev.c
>> @@ -3448,7 +3448,7 @@ reconfigure_datapath(struct dp_netdev *dp)
>>  {
>>  struct dp_netdev_pmd_thread *pmd;
>>  struct dp_netdev_port *port;
>> -int wanted_txqs;
>> +int needed_txqs, wanted_txqs;
>>  
>>  dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
>>  
>> @@ -3456,7 +3456,15 @@ reconfigure_datapath(struct dp_netdev *dp)
>>   * on the system and the user configuration. */
>>  reconfigure_pmd_threads(dp);
>>  
>> -wanted_txqs = cmap_count(&dp->poll_threads);
>> +/* We need 1 Tx queue for each thread to avoid locking, but we will try
>> + * to allocate the maximum possible value to minimize the number of port
>> + * reconfigurations. */
>> +needed_txqs = cmap_count(&dp->poll_threads);
>> +/* (n_cores + 1) is the maximum that we might need to have.
>> + * Additional queue is for 

Re: [ovs-dev] [PATCH] ovn-controller: Fix memory leak in create_br_int().

2017-05-29 Thread Miguel Angel Ajo Pelayo
Acked-By: Miguel Angel Ajo 

On Sat, May 27, 2017 at 1:17 AM, Ben Pfaff  wrote:

> Found by Coverity.
>
> Reported-at: https://scan3.coverity.com/reports.htm#v16889/p10449/
> fileInstanceId=14763066=4305324=180404&
> fileStart=251=500
> Signed-off-by: Ben Pfaff 
> ---
>  ovn/controller/ovn-controller.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-
> controller.c
> index f22551d6dcf3..1ff1b5b738a9 100644
> --- a/ovn/controller/ovn-controller.c
> +++ b/ovn/controller/ovn-controller.c
> @@ -238,6 +238,7 @@ create_br_int(struct controller_ctx *ctx,
>  bridges[cfg->n_bridges] = bridge;
>  ovsrec_open_vswitch_verify_bridges(cfg);
>  ovsrec_open_vswitch_set_bridges(cfg, bridges, cfg->n_bridges + 1);
> +free(bridges);
>
>  return bridge;
>  }
> --
> 2.10.2
>


Re: [ovs-dev] [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

2017-05-29 Thread Ilya Maximets
On 26.05.2017 20:14, Kevin Traynor wrote:
> On 05/26/2017 03:55 PM, Ilya Maximets wrote:
>> On 10.03.2017 07:27, Daniele Di Proietto wrote:
>>> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
 Reconfiguration of HW NICs may lead to packet drops.
 In the current model, all physical ports will be reconfigured each
 time the number of PMD threads changes. Since we are not stopping
 threads on pmd-cpu-mask changes, this patch will help to further
 decrease a port's downtime by setting the maximum possible number
 of wanted tx queues to avoid unnecessary reconfigurations.

 Signed-off-by: Ilya Maximets 
>>>
>>> I haven't thought this through a lot, but the last big series we pushed
>>> on master went exactly in the opposite direction, i.e. created one txq
>>> for each thread in the datapath.
>>>
>>> I thought this was a good idea because:
>>>
>>> * On some systems with hyperthreading we can have a lot of cpus (we received
>>>reports of systems with 72 cores). If you want to use only a couple of 
>>> cores
>>>you're still forced to have a lot of unused txqs, which prevent you
>>> from having
>>>lockless tx.
>>> * We thought that reconfiguring the number of pmds would not be a frequent
>>>operation.
>>>
>>> Why do you want to reconfigure the threads that often?  Is it to be
>>> able to support more throughput quickly?
>>
>> Yes.
>>
>>> In this case I think we shouldn't use the number of cpus,
>>> but something else coming from the user, so that the administrator can
>>> balance how
>>> quickly pmd threads can be reconfigured vs how many txqs should be on
>>> the system.
>>> I'm not sure how to make this user friendly though.
>>>
>>> What do you think?
>>
>> Right now I'm thinking about a combined solution which will address
>> both issues (too big a number of TXQs and frequent device reconfiguration).
>> I think we can implement an additional function to get the port's limitations.
>> For now we can request the maximum number of TX queues from the netdev and
>> use it while the number of cores is less than the number of queues.
>> Something like this:
>>  
>> lib/netdev-dpdk.c:
>> uint32_t netdev_dpdk_get_max_txqs(struct netdev *netdev)
>> {
>> struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>> struct rte_eth_dev_info dev_info;
>>
>> ovs_mutex_lock(&dev->mutex);
>> rte_eth_dev_info_get(dev->port_id, &dev_info);
>> ovs_mutex_unlock(&dev->mutex);
>>
>> return dev_info.max_tx_queues;
>> }
>>
>> lib/dpif-netdev.c:reconfigure_datapath():
>>
>> <->
>> max_tx_queues = netdev_get_max_txqs(port->netdev);
>> number_of_threads = cmap_count(&dp->poll_threads);
>> wanted_txqs = MAX(max_tx_queues, number_of_threads);
>> <->
>>
>> In this implementation there will be no additional locking if the number of
>> threads is less than or equal to the maximum possible number of tx queues in HW.
>>
>> What do you think? Ian? Darrell?
>>
> 
> I'm not sure about using rte_eth_dev_info_get() as IIRC there was
> problems previously with it reporting a number, but then when you went
> to configure them they were not all available depending on mode. That
> was why the trial and error queue configure was put in.
> 
> What about replacing 'max_tx_queues' above with a number like 16 that is
> likely to be supported by the ports and unlikely be exceeded by
> number_of_threads?
> 
> Kevin.

Hi Kevin. Thank you for reminding me of this issue.

But I think that magic numbers are not a good solution.

One issue in my first implementation is that the desired number of queues is
not actually the same as the required number.

How about something like this:
<->
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 97f72d3..1a18e4f 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -3448,7 +3448,7 @@ reconfigure_datapath(struct dp_netdev *dp)
 {
 struct dp_netdev_pmd_thread *pmd;
 struct dp_netdev_port *port;
-int wanted_txqs;
+int needed_txqs, wanted_txqs;
 
 dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
 
@@ -3456,7 +3456,15 @@ reconfigure_datapath(struct dp_netdev *dp)
  * on the system and the user configuration. */
 reconfigure_pmd_threads(dp);
 
-wanted_txqs = cmap_count(&dp->poll_threads);
+/* We need 1 Tx queue for each thread to avoid locking, but we will try
+ * to allocate the maximum possible value to minimize the number of port
+ * reconfigurations. */
+needed_txqs = cmap_count(&dp->poll_threads);
+/* (n_cores + 1) is the maximum that we might need to have.
+ * Additional queue is for non-PMD threads. */
+wanted_txqs = ovs_numa_get_n_cores();
+ovs_assert(wanted_txqs != OVS_CORE_UNSPEC);
+wanted_txqs++;
 
 /* The number of pmd threads might have changed, or a port can be new:
  * adjust the txqs. */
@@ -3469,9 +3477,13 @@ reconfigure_datapath(struct dp_netdev *dp)
 
 /* Check for all the ports that need reconfiguration.  

Re: [ovs-dev] [PATCH v2 1/3] netdev-dpdk: Fix Rx checksum reconfigure.

2017-05-29 Thread Kevin Traynor
On 05/26/2017 03:04 PM, Chandran, Sugesh wrote:
> 
> 
> Regards
> _Sugesh
> 
> 
>> -Original Message-
>> From: Chandran, Sugesh
>> Sent: Wednesday, May 17, 2017 10:50 AM
>> To: Kevin Traynor 
>> Cc: d...@openvswitch.org
>> Subject: RE: [PATCH v2 1/3] netdev-dpdk: Fix Rx checksum reconfigure.
>>
>>
>>
>> Regards
>> _Sugesh
>>
>>> -Original Message-
>>> From: Kevin Traynor [mailto:ktray...@redhat.com]
>>> Sent: Wednesday, May 17, 2017 10:10 AM
>>> To: Chandran, Sugesh 
>>> Cc: d...@openvswitch.org
>>> Subject: Re: [PATCH v2 1/3] netdev-dpdk: Fix Rx checksum reconfigure.
>>>
>>> On 05/16/2017 05:48 PM, Chandran, Sugesh wrote:
 Hi Kevin,
 Thank you for sending out this patch series.
 Have you tested the tunneling decap usecase with checksum offload? I
 am seeing weird behavior when I testing the tunneling with Rx
 checksum offload ON and OFF.(Seeing the same behavior on master as
 well)

 Here is the summary of issue with the steps,

 1) Send tunnel traffic to OVS to do the decap.
 2) Set & unset the checksum offload.
 3) I don't see any performance difference in both case.

 Now I went ahead and put some debug message to see what is
>> happening

 diff --git a/lib/netdev-native-tnl.c b/lib/netdev-native-tnl.c index
 2798324..49ca847 100644
 --- a/lib/netdev-native-tnl.c
 +++ b/lib/netdev-native-tnl.c
 @@ -86,6 +86,7 @@ netdev_tnl_ip_extract_tnl_md(struct dp_packet
>>> *packet, struct flow_tnl *tnl,
  ovs_be32 ip_src, ip_dst;

  if (OVS_UNLIKELY(!dp_packet_ip_checksum_valid(packet))) {
 +VLOG_INFO("Checksum is not validated...");
  if (csum(ip, IP_IHL(ip->ip_ihl_ver) * 4)) {
 VLOG_WARN_RL(&err_rl, "ip packet has invalid checksum");
  return NULL;
 @@ -182,6 +183,7 @@ udp_extract_tnl_md(struct dp_packet *packet,
 struct flow_tnl *tnl,

  if (udp->udp_csum) {
  if (OVS_UNLIKELY(!dp_packet_l4_checksum_valid(packet))) {
 +VLOG_INFO("Checksum is not validated...");
  uint32_t csum;
  if (netdev_tnl_is_header_ipv6(dp_packet_data(packet))) {
  csum =
 packet_csum_pseudoheader6(dp_packet_l3(packet));
 sugeshch@silpixa00389816:~/repo/ovs_master$

 These debug messages are not showing at all when I am sending the
 traffic. (I tried it with rx checksum ON and OFF)

 Looks like ol_flags are always reporting checksum is good.

 I am using DPDK 17.02 for the testing.
 If I remember correctly it was reporting properly at the time of rx
 checksum
>>> offload.
 Looks like DPDK is reporting checksum valid in all the cases even it
 is
>>> disabled.

 Any inputs on this?

>>>
>>> Hi Sugesh, I was trying to fix the reconfiguration code not applying
>>> the OVSDB value so that's all I tested. I guess it's best to roll back
>>> to your original test and take it from there? You probably tested with
>>> DPDK
>>> 16.11.0 and I see some changes since then (e.g. below). Also, maybe
>>> you were testing enabled/disabled on first configure? It's the same
>>> configure code, but perhaps there is some different state in DPDK when
>>> the port is configured initially.
>>>
>> Yes, I tried to configure it initially as well as at run time.
>> [Sugesh] Also,
>> At the time of the Rx checksum offload implementation, one of the comments
>> suggested not using any configuration option at all:
>> keep it ON for all the physical ports when it is supported. The reason
>> the configuration option was added is vectorization.
>> In earlier DPDK releases, vectorization would be turned off when checksum
>> offload was enabled.
>> I feel that is not the case for the latest DPDK releases (vectorization is ON
>> with checksum offload).
>> If there is no impact on vectorization, then there is no use case for having
>> this configuration option at all.
>> This way there is one less thing for the user to worry about. What do you
>> think? Do you think it causes any backward compatibility issues?
> [Sugesh] Now I have tested with 16.11 and 17.2 DPDK releases.
> Here are the test cases I have run
> 1) Send tunnel traffic, Rx checksum offload ON
> 2) Send tunnel traffic, Rx checksum offload OFF
> 3) Send tunnel traffic, toggle Rx checksum offload configuration at run time.
> 4) Send tunnel traffic(With invalid checksum)
> 
> In all cases, DPDK reports the checksum status properly. But it doesn't honor
> the configuration changes at all.

That sounds like a bug in DPDK, no? You should probably let the
maintainer of whichever NIC you are using know about it.

> Considering that vector Rx works with Rx checksum, will you consider removing
> the configuration option completely and keeping it ON implicitly when it is
> supported by the NIC?
>

Seems that way 

[ovs-dev] Sage Users List

2017-05-29 Thread diana . johnson





Hi,

Would you be interested in Sage Users
contact information for your marketing campaigns?
 
We also have
other technology users like: Oracle users, PeopleSoft users, JD Edwards’s
users, Informatica users, Infor users, Microsoft Dynamics users, Epicor
and many more...
 
Below are the few titles of your interest:


Chief Information Officer
Chief Technology Officer
Software Developers
Chief Engineers
IT Director
IT Manager
CISO and many more…

If you are looking for
particular titles from your target geography, please let me know and I
will get back to you with more information regarding the same.

Please review and let me know your thoughts.

Await your
response!

Thanks
Diana johnson
Data Specialist

To opt out, please respond with Remove in the subject line.
 





Re: [ovs-dev] [PATCH 2/2] docs: Document dpdkr ports

2017-05-29 Thread Kevin Traynor
On 05/26/2017 03:12 PM, Stephen Finucane wrote:
> I had an idea of what these were, but that idea was somewhat incorrect and
> out-of-date. Add a minimal guide to fill in these gaps, along with a
> warning about how useless these things generally are now (yay,
> vhost-user).
> 
> Signed-off-by: Stephen Finucane 
> Cc: Ciara Loftus 
> Cc: Kevin Traynor 
> ---
>  Documentation/topics/dpdk/index.rst  |  1 +
>  Documentation/topics/dpdk/ring.rst   | 80 
> 
>  Documentation/topics/dpdk/vhost-user.rst |  8 ++--
>  3 files changed, 85 insertions(+), 4 deletions(-)
>  create mode 100644 Documentation/topics/dpdk/ring.rst
> 
> diff --git a/Documentation/topics/dpdk/index.rst 
> b/Documentation/topics/dpdk/index.rst
> index 180ebbf..da148b3 100644
> --- a/Documentation/topics/dpdk/index.rst
> +++ b/Documentation/topics/dpdk/index.rst
> @@ -29,3 +29,4 @@ The DPDK Datapath
> :maxdepth: 2
>  
> vhost-user
> +   ring
> diff --git a/Documentation/topics/dpdk/ring.rst 
> b/Documentation/topics/dpdk/ring.rst
> new file mode 100644
> index 000..347d095
> --- /dev/null
> +++ b/Documentation/topics/dpdk/ring.rst
> @@ -0,0 +1,80 @@
> +..
> +  Licensed under the Apache License, Version 2.0 (the "License"); you may
> +  not use this file except in compliance with the License. You may obtain
> +  a copy of the License at
> +
> +  http://www.apache.org/licenses/LICENSE-2.0
> +
> +  Unless required by applicable law or agreed to in writing, software
> +  distributed under the License is distributed on an "AS IS" BASIS, 
> WITHOUT
> +  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See 
> the
> +  License for the specific language governing permissions and limitations
> +  under the License.
> +
> +  Convention for heading levels in Open vSwitch documentation:
> +
> +  ===  Heading 0 (reserved for the title in a document)
> +  ---  Heading 1
> +  ~~~  Heading 2
> +  +++  Heading 3
> +  '''  Heading 4
> +
> +  Avoid deeper levels because they do not render well.
> +
> +===
> +DPDK Ring Ports
> +===
> +
> +.. warning::
> +
> +   DPDK ring interfaces cannot be used for guest communication and are 
> retained
> +   mainly for backwards compatibility purposes. In nearly all cases,
> +   :doc:`vhost-user ports ` are a better choice and should be 
> used
> +   instead.
> +
> +The DPDK datapath provides DPDK-backed ring ports that are implemented using
> +DPDK's ``librte_ring`` library. For more information of this library, refer 
> to
> +the `DPDK documentation`_.
> +
> +Quick Example
> +-
> +
> +This example demonstrates how to add a ``dpdkr`` port to an existing bridge
> +called ``br0``::
> +
> +$ ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr
> +
> +dpdkr
> +-
> +
> +To use ring ports, you must first add said ports to the switch. Unlike
> +:doc:`vhost-user interfaces `, ring port names must take a 
> specific
> +format, ``dpdkrNN``, where ``NN`` is the port ID. For example::
> +
> +$ ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr
> +
> +Once the port has been added to the switch, they can be used by host 
> processes.
> +A sample loopback application - ``test-dpdkr`` - is included with Open 
> vSwitch.
> +To use this, run the following::
> +
> +$ ./tests/test-dpdkr -c 1 -n 4 --proc-type=secondary -- -n 0
> +
> +Further functionality would require developing your own application. Refer to
> +the `DPDK documentation`_ for more information on how to do this.
> +
> +Adding dpdkr ports to the guest
> +~~~
> +
> +It is **not** possible to use ring ports from guests. Historically, this was

I'm not sure if "**not** possible" is the right language here and above.
I mean, it should be possible, but the external dependencies have gone
stale and there is perhaps a better alternative now. Maybe "not
recommended" is strong enough.

> +possible using a patched version of QEMU and the IVSHMEM feature provided 
> with
> +DPDK. However, this functionality was removed because:
> +
> +- The IVSHMEM library was removed from DPDK in DPDK 16.11
> +
> +- Support for IVSHMEM was never upstreamed to QEMU and has been publicly
> +  rejected by the QEMU community
> +
> +- :doc:`vhost-user interfaces ` are the defacto DPDK-based path 
> to
> +  guests
> +
> +.. _DPDK documentation: 
> https://dpdk.readthedocs.io/en/v17.05/prog_guide/ring_lib.html
> diff --git a/Documentation/topics/dpdk/vhost-user.rst 
> b/Documentation/topics/dpdk/vhost-user.rst
> index 2e2396b..a1c19fd 100644
> --- a/Documentation/topics/dpdk/vhost-user.rst
> +++ b/Documentation/topics/dpdk/vhost-user.rst
> @@ -70,10 +70,10 @@ vhost-user
>  
> Use of vhost-user ports requires QEMU >= 2.2
>  
> -To use vhost-user ports, you must first add said ports to 

Re: [ovs-dev] [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

2017-05-29 Thread Ilya Maximets
On 29.05.2017 18:00, Kevin Traynor wrote:
> On 05/29/2017 02:31 PM, Ilya Maximets wrote:
>> On 29.05.2017 16:26, Kevin Traynor wrote:
>>> On 05/29/2017 01:22 PM, Ilya Maximets wrote:
 On 26.05.2017 20:14, Kevin Traynor wrote:
> On 05/26/2017 03:55 PM, Ilya Maximets wrote:
>> On 10.03.2017 07:27, Daniele Di Proietto wrote:
>>> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
 Reconfiguration of HW NICs may lead to packet drops.
 In the current model, all physical ports will be reconfigured each
 time the number of PMD threads changes. Since we are not stopping
 threads on pmd-cpu-mask changes, this patch will help to further
 decrease a port's downtime by setting the maximum possible number
 of wanted tx queues to avoid unnecessary reconfigurations.

 Signed-off-by: Ilya Maximets 
>>>
>>> I haven't thought this through a lot, but the last big series we pushed
>>> on master went exactly in the opposite direction, i.e. created one txq
>>> for each thread in the datapath.
>>>
>>> I thought this was a good idea because:
>>>
>>> * On some systems with hyperthreading we can have a lot of cpus (we 
>>> received
>>>reports of systems with 72 cores). If you want to use only a couple 
>>> of cores
>>>you're still forced to have a lot of unused txqs, which prevent you
>>> from having
>>>lockless tx.
>>> * We thought that reconfiguring the number of pmds would not be a 
>>> frequent
>>>operation.
>>>
>>> Why do you want to reconfigure the threads that often?  Is it to be
>>> able to support more throughput quickly?
>>
>> Yes.
>>
>>> In this case I think we shouldn't use the number of cpus,
>>> but something else coming from the user, so that the administrator can
>>> balance how
>>> quickly pmd threads can be reconfigured vs how many txqs should be on
>>> the system.
>>> I'm not sure how to make this user friendly though.
>>>
>>> What do you think?
>>
>> Right now I'm thinking about a combined solution which will address
>> both issues (too big a number of TXQs and frequent device reconfiguration).
>> I think we can implement an additional function to get the port's limitations.
>> For now we can request the maximum number of TX queues from the netdev and
>> use it while the number of cores is less than the number of queues.
>> Something like this:
>>  
>> lib/netdev-dpdk.c:
>> uint32_t netdev_dpdk_get_max_txqs(struct netdev *netdev)
>> {
>> struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>> struct rte_eth_dev_info dev_info;
>>
>> ovs_mutex_lock(&dev->mutex);
>> rte_eth_dev_info_get(dev->port_id, &dev_info);
>> ovs_mutex_unlock(&dev->mutex);
>>
>> return dev_info.max_tx_queues;
>> }
>>
>> lib/dpif-netdev.c:reconfigure_datapath():
>>
>> <->
>> max_tx_queues = netdev_get_max_txqs(port->netdev);
>> number_of_threads = cmap_count(&dp->poll_threads);
>> wanted_txqs = MAX(max_tx_queues, number_of_threads);
>> <->
>>
>> In this implementation there will be no additional locking if the number of
>> threads is less than or equal to the maximum possible number of tx queues in HW.
>>
>> What do you think? Ian? Darrell?
>>
>
> I'm not sure about using rte_eth_dev_info_get() as IIRC there was
> problems previously with it reporting a number, but then when you went
> to configure them they were not all available depending on mode. That
> was why the trial and error queue configure was put in.
>
> What about replacing 'max_tx_queues' above with a number like 16 that is
> likely to be supported by the ports and unlikely be exceeded by
> number_of_threads?
>
> Kevin.

 Hi Kevin. Thank you for reminding me of this issue.

 But I think that magic numbers is not a good solution.

 One issue in my first implementation is that desired number of queues is
 not actually the same as required number.

 How about something like this:
 <->
 diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
 index 97f72d3..1a18e4f 100644
 --- a/lib/dpif-netdev.c
 +++ b/lib/dpif-netdev.c
 @@ -3448,7 +3448,7 @@ reconfigure_datapath(struct dp_netdev *dp)
  {
  struct dp_netdev_pmd_thread *pmd;
  struct dp_netdev_port *port;
 -int wanted_txqs;
 +int needed_txqs, wanted_txqs;
  
  dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
  
 @@ -3456,7 +3456,15 @@ reconfigure_datapath(struct dp_netdev *dp)
   * on the system and the user configuration. */
  reconfigure_pmd_threads(dp);
  
 -wanted_txqs = cmap_count(>poll_threads);
 +/* We 

Re: [ovs-dev] [PATCH RFC 4/4] dpif-netdev: Don't uninit emc on reload.

2017-05-29 Thread Ferriter, Cian
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Daniele Di Proietto
> Sent: 10 March 2017 04:35
> To: Ilya Maximets 
> Cc: d...@openvswitch.org; Heetae Ahn 
> Subject: Re: [ovs-dev] [PATCH RFC 4/4] dpif-netdev: Don't uninit emc on
> reload.
> 
> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
> > There are many reasons for reloading of pmd threads:
> > * reconfiguration of one of the ports.
> > * Adjusting of static_tx_qid.
> > * Adding new tx/rx ports.
> >
> > In many cases EMC is still useful after reload and uninit will only
> > lead to unnecessary upcalls/classifier lookups.
> >
> > Such behaviour slows down the datapath. Uninit itself slows down the
> > reload path. All this factors leads to additional unexpected
> > latencies/drops on events not directly connected to current PMD
> > thread.
> >
> > Lets not uninitialize emc cache on reload path.
> > 'emc_cache_slow_sweep()' and replacements should free all the
> > old/unwanted entries.
> >
> > Signed-off-by: Ilya Maximets 
> > ---
> >
> > In discussion of original patch[1] Pravin Shelar said:
> > '''
> >   emc cache flows should be free on reload, otherwise flows
> >   might stick around for longer time.
> > '''
> >
> > After that (for another reason) slow sweep was introduced.
> > There are few reasons to move uninit out of reload path described in
> > commit-message.
> >
> > So, this patch sent as RFC, because I wanted to hear some opinions
> > about current situation and drawbacks of such solution.
> >
> > Any thoughts?
> > Thanks.
> >
> > [1]
> > https://mail.openvswitch.org/pipermail/ovs-dev/2014-August/287852.html
> >
> 
> I'm in favor of this approach. Now that we have slow sweep we don't need
> to clear it at every reload.
> 
> Just curious, did you measure any difference in reload time with this patch?
> 
>  We could init and uninit the EMC from the main thread in
> dp_netdev_configure_pmd() and dp_netdev_del_pmd().  This way we
> wouldn't need a special case for the non-pmd thread in the code, but it
> would be slower.
> In any case it seems fine with me
> 
> Thanks,
> 
> Daniele
> 

Hi Ilya,

This approach and your rationale behind it makes sense to me too.

I tested this patch by applying it on top of patches 2/4 and 3/4 of the series 
(patch 1/4 has been applied to master already). The patches applied cleanly on 
top of the current head of master and passed the usual sanity checks (compiles 
with GCC, CLANG, with and without DPDK). This patch also passes checkpatch.py. 
Patch 2/4 does not pass checkpatch.py; I'll reply to that patch separately.

Aside from sanity testing, I also carried out the following test:
While sending a single flow of packets in a simple Phy-Phy test with one PMD 
thread, add a dpdkvhostuser port to the same bridge as the phy ports. Measure 
the "megaflow hits" shown by "ovs-appctl dpif-netdev/pmd-stats-show" before and 
after the dpdkvhostuser port has been added. This test requires the 
"emc-insert-inv-prob" to be set to 1 (doc: 
http://docs.openvswitch.org/en/latest/howto/dpdk/#emc-insertion-probability). I 
also used "ovs-appctl dpif-netdev/pmd-stats-clear" while sending traffic to 
help with counting the number of classifier lookups (megaflow hits).

On my system before the patch is applied, there are 32 megaflow hits 
consistently each time a dpdkvhostuser port is added to the bridge. As you have 
explained, this is a result of the call to "emc_cache_uninit" and the 
resulting classifier lookups. After your patch, there are no megaflow hits when 
adding a dpdkvhostuser port. This illustrates the value of your patch.

Acked-by: Cian Ferriter 
Tested-by: Cian Ferriter 

> >  lib/dpif-netdev.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > index e2b4f39..3bd568d 100644
> > --- a/lib/dpif-netdev.c
> > +++ b/lib/dpif-netdev.c
> > @@ -3648,9 +3648,9 @@ pmd_thread_main(void *f_)
> >  ovs_numa_thread_setaffinity_core(pmd->core_id);
> >  dpdk_set_lcore_id(pmd->core_id);
> >  poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
> > +emc_cache_init(&pmd->flow_cache);
> >  reload:
> >  pmd_alloc_static_tx_qid(pmd);
> > -emc_cache_init(&pmd->flow_cache);
> >
> >  /* List port/core affinity */
> >  for (i = 0; i < poll_cnt; i++) {
> > @@ -3697,13 +3697,13 @@ reload:
> >   * reloading the updated configuration. */
> >  dp_netdev_pmd_reload_done(pmd);
> >
> > -emc_cache_uninit(&pmd->flow_cache);
> >  pmd_free_static_tx_qid(pmd);
> >
> >  if (!exiting) {
> >  goto reload;
> >  }
> >
> > +emc_cache_uninit(&pmd->flow_cache);
> >  free(poll_list);
> >  pmd_free_cached_ports(pmd);
> >  return NULL;
> > --
> > 2.7.4
> >
> 
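The effect of the patch above, hoisting the EMC init/uninit out of the reload loop, can be modeled in miniature. This is an illustrative sketch, not the actual dpif-netdev code; all names are invented:

```c
#include <assert.h>

static int inits, uninits;

static void emc_cache_init(void)   { inits++; }
static void emc_cache_uninit(void) { uninits++; }

/* With the patch, the EMC lives for the whole lifetime of the PMD
 * thread: one init before the first iteration and one uninit at exit,
 * no matter how many reloads happen in between.  Cached flows
 * therefore survive reloads instead of forcing fresh upcalls. */
static void
pmd_thread_main(int reloads)
{
    emc_cache_init();
    for (int i = 0; i < reloads; i++) {
        /* reload: re-read queue/port configuration, keep the cache */
    }
    emc_cache_uninit();
}
```

Before the patch, both counters would equal the number of reloads; after it, they are 1 regardless.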

Re: [ovs-dev] [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

2017-05-29 Thread Kevin Traynor
On 05/29/2017 02:31 PM, Ilya Maximets wrote:
> On 29.05.2017 16:26, Kevin Traynor wrote:
>> On 05/29/2017 01:22 PM, Ilya Maximets wrote:
>>> On 26.05.2017 20:14, Kevin Traynor wrote:
 On 05/26/2017 03:55 PM, Ilya Maximets wrote:
> On 10.03.2017 07:27, Daniele Di Proietto wrote:
>> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
>>> Reconfiguration of HW NICs may lead to packet drops.
>>> In current model all physical ports will be reconfigured each
>>> time number of PMD threads changed. Since we are not stopping
>>> threads on pmd-cpu-mask changes, this patch will help to further
>>> decrease port's downtime by setting the maximum possible number
>>> of wanted tx queues to avoid unnecessary reconfigurations.
>>>
>>> Signed-off-by: Ilya Maximets 
>>
>> I haven't thought this through a lot, but the last big series we pushed
>> on master went exactly in the opposite direction, i.e. created one txq
>> for each thread in the datapath.
>>
>> I thought this was a good idea because:
>>
>> * On some systems with hyperthreading we can have a lot of cpus (we 
>> received
>>reports of systems with 72 cores). If you want to use only a couple 
>> of cores
>>you're still forced to have a lot of unused txqs, which prevent you
>> from having
>>lockless tx.
>> * We thought that reconfiguring the number of pmds would not be a 
>> frequent
>>operation.
>>
>> Why do you want to reconfigure the threads that often?  Is it to be
>> able to support more throughput quickly?
>
> Yes.
>
>> In this case I think we shouldn't use the number of cpus,
>> but something else coming from the user, so that the administrator can
>> balance how
>> quickly pmd threads can be reconfigured vs how many txqs should be on
>> the system.
>> I'm not sure how to make this user friendly though.
>>
>> What do you think?
>
> Right now I'm thinking about combined solution which will respect
> both issues (too big number of TXQs and frequent device reconfiguration).
> I think, we can implement additional function to get port's limitations.
> For now we can request the maximum number of TX queues from netdev and
>>> use it while the number of cores is less than the number of queues.
> Something like this:
>   
> lib/netdev-dpdk.c:
> uint32_t netdev_dpdk_get_max_txqs(struct netdev *netdev)
> {
> struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> struct rte_eth_dev_info dev_info;
>
>>> ovs_mutex_lock(&dev->mutex);
>>> rte_eth_dev_info_get(dev->port_id, &dev_info);
>>> ovs_mutex_unlock(&dev->mutex);
>
> return dev_info.max_tx_queues;
> }
>
> lib/dpif-netdev.c:reconfigure_datapath():
>
> <->
> max_tx_queues = netdev_get_max_txqs(port->netdev);
>>> number_of_threads = cmap_count(&dp->poll_threads);
> wanted_txqs = MAX(max_tx_queues, number_of_threads);
> <->
>
>>> In this implementation there will be no additional locking if the number of
>>> threads is less than or equal to the maximum possible number of tx queues in HW.
>
> What do you think? Ian? Darrell?
>

 I'm not sure about using rte_eth_dev_info_get() as IIRC there was
 problems previously with it reporting a number, but then when you went
 to configure them they were not all available depending on mode. That
 was why the trial and error queue configure was put in.

 What about replacing 'max_tx_queues' above with a number like 16 that is
 likely to be supported by the ports and unlikely be exceeded by
 number_of_threads?

 Kevin.
>>>
>>> Hi Kevin. Thank you for reminding me of this issue.
>>>
>>> But I think that magic numbers is not a good solution.
>>>
>>> One issue in my first implementation is that desired number of queues is
>>> not actually the same as required number.
>>>
>>> How about something like this:
>>> <->
>>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>>> index 97f72d3..1a18e4f 100644
>>> --- a/lib/dpif-netdev.c
>>> +++ b/lib/dpif-netdev.c
>>> @@ -3448,7 +3448,7 @@ reconfigure_datapath(struct dp_netdev *dp)
>>>  {
>>>  struct dp_netdev_pmd_thread *pmd;
>>>  struct dp_netdev_port *port;
>>> -int wanted_txqs;
>>> +int needed_txqs, wanted_txqs;
>>>  
>>>  dp->last_reconfigure_seq = seq_read(dp->reconfigure_seq);
>>>  
>>> @@ -3456,7 +3456,15 @@ reconfigure_datapath(struct dp_netdev *dp)
>>>   * on the system and the user configuration. */
>>>  reconfigure_pmd_threads(dp);
>>>  
>>> -wanted_txqs = cmap_count(&dp->poll_threads);
>>> +/* We need 1 Tx queue for each thread to avoid locking, but we will try
>>> + * to allocate the maximum possible value to minimize the number of port
>>> + 
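The needed/wanted distinction being discussed can be sketched outside the OVS tree as follows. This is a simplified model of the idea, not the actual dpif-netdev code; the function names are illustrative:

```c
#include <assert.h>

/* One Tx queue per PMD thread is what we *need* for lockless transmit.
 * What we *want* is to pre-allocate as many queues as the port
 * supports, so that adding PMD threads later does not force a port
 * reconfiguration (and the packet drops that come with it). */
static int
needed_txqs(int n_pmd_threads)
{
    return n_pmd_threads;       /* lockless tx requires >= 1 per thread */
}

static int
wanted_txqs(int n_pmd_threads, int port_max_txqs)
{
    /* Ask for all the queues the port offers, but never fewer than we
     * need right now (mirrors the MAX() in the quoted snippet). */
    return port_max_txqs > n_pmd_threads ? port_max_txqs
                                         : n_pmd_threads;
}
```

When threads exceed the port's queue count, the extra threads fall back to locked (XPS-style) transmit, which is the trade-off the thread is weighing.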

Re: [ovs-dev] [PATCH 1/2] docs: Clarify the superiority of dpdkvhostuserclient

2017-05-29 Thread Kevin Traynor
On 05/26/2017 03:12 PM, Stephen Finucane wrote:
> Apparently dpdkvhostuser interfaces are inferior to dpdkvhostuserclient.
> Explain why.
> 
> Signed-off-by: Stephen Finucane 
> Cc: Ciara Loftus 
> Cc: Kevin Traynor 
> ---
> I'd like to note what happens to traffic when OVS or a VM is restarted
> for both port types. If someone knows the answer to this, please feel
> free to take ownership of that patch/ask me for a v2.
> ---
>  Documentation/topics/dpdk/vhost-user.rst | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/topics/dpdk/vhost-user.rst b/Documentation/topics/dpdk/vhost-user.rst
> index ba22684..2e2396b 100644
> --- a/Documentation/topics/dpdk/vhost-user.rst
> +++ b/Documentation/topics/dpdk/vhost-user.rst
> @@ -54,8 +54,12 @@ vHost User sockets, and the client connects to the server. Depending on which
>  port type you use, ``dpdkvhostuser`` or ``dpdkvhostuserclient``, a different
>  configuration of the client-server model is used.
>  
> -For vhost-user ports, Open vSwitch acts as the server and QEMU the client.  For
> -vhost-user-client ports, Open vSwitch acts as the client and QEMU the server.
> +For vhost-user ports, Open vSwitch acts as the server and QEMU the client. This
> +means if OVS dies, all VMs **must** be restarted. On the other hand, for
> +vhost-user-client ports, OVS acts as the client and QEMU the server. This means
> +OVS can die and be restarted without issue, and it is also possible to restart
> +an instance itself. For this reason, vhost-user-client ports are the preferred
> +type for most use cases.
>  
>  .. _dpdk-vhost-user:
>  
> 
LGTM. Acked-by: Kevin Traynor 
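For readers of the doc change above, switching to the preferred port type is a one-line change on the OVS side. A sketch, with placeholder bridge, port, and socket-path names:

```shell
# Create a vhost-user-client port: OVS is the client, QEMU the server.
# QEMU must be started with a matching vhost-user server socket at the
# same path.
ovs-vsctl add-port br0 dpdkvhostclient0 \
    -- set Interface dpdkvhostclient0 type=dpdkvhostuserclient \
       options:vhost-server-path=/tmp/dpdkvhostclient0.sock
```

Because OVS is the connecting side, it simply reconnects to the socket after a restart, which is exactly the property the patch documents.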
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/4] dpif-netdev: Incremental addition/deletion of PMD threads.

2017-05-29 Thread Ferriter, Cian
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Daniele Di Proietto
> Sent: 10 March 2017 04:13
> To: Ilya Maximets 
> Cc: d...@openvswitch.org; Heetae Ahn 
> Subject: Re: [ovs-dev] [PATCH 2/4] dpif-netdev: Incremental
> addition/deletion of PMD threads.
> 
> 2017-02-21 6:49 GMT-08:00 Ilya Maximets :
> > Currently, change of 'pmd-cpu-mask' is very heavy operation.
> > It requires destroying of all the PMD threads and creating them back.
> > After that, all the threads will sleep until ports' redistribution
> > finished.
> >
> > This patch adds ability to not stop the datapath while adjusting
> > number/placement of PMD threads. All not affected threads will forward
> > traffic without any additional latencies.
> >
> > id-pool created for static tx queue ids to keep them sequential in a
> > flexible way. non-PMD thread will always have static_tx_qid = 0 as it
> > was before.
> >
> > Signed-off-by: Ilya Maximets 
> 
> Thanks for the patch
> 
> The idea looks good to me.
> 
> I'm still looking at the code, and I have one comment below
> 

Hi Ilya,

While reviewing the RFC Patch 4/4 in this series, I ran checkpatch.py over the 
entire series. The tool returned the warning below for this patch. I don't plan 
on reviewing this patch, but I thought I would send this as an FYI.

# ./checkpatch.py 
ovs-dev-2-4-dpif-netdev-Incremental-addition-deletion-of-PMD-threads..patch
WARNING: Line length is >79-characters long
#66 FILE: lib/dpif-netdev.c:1088:
/* We need 1 Tx queue for each possible cpu core + 1 for non-PMD threads. */

Lines checked: 272, Warnings: 1, Errors: 0


> > ---
> >  lib/dpif-netdev.c | 119
> +-
> >  tests/pmd.at  |   2 +-
> >  2 files changed, 91 insertions(+), 30 deletions(-)
> >
> > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > index 30907b7..6e575ab 100644
> > --- a/lib/dpif-netdev.c
> > +++ b/lib/dpif-netdev.c
> > @@ -48,6 +48,7 @@
> >  #include "fat-rwlock.h"
> >  #include "flow.h"
> >  #include "hmapx.h"
> > +#include "id-pool.h"
> >  #include "latch.h"
> >  #include "netdev.h"
> >  #include "netdev-vport.h"
> > @@ -241,6 +242,9 @@ struct dp_netdev {
> >
> >  /* Stores all 'struct dp_netdev_pmd_thread's. */
> >  struct cmap poll_threads;
> > +/* id pool for per thread static_tx_qid. */
> > +struct id_pool *tx_qid_pool;
> > +struct ovs_mutex tx_qid_pool_mutex;
> >
> >  /* Protects the access of the 'struct dp_netdev_pmd_thread'
> >   * instance for non-pmd thread. */
> > @@ -514,7 +518,7 @@ struct dp_netdev_pmd_thread {
> >  /* Queue id used by this pmd thread to send packets on all netdevs if
> >   * XPS disabled for this netdev. All static_tx_qid's are unique and less
> >   * than 'cmap_count(dp->poll_threads)'. */
> > -const int static_tx_qid;
> > +uint32_t static_tx_qid;
> >
> >  struct ovs_mutex port_mutex;/* Mutex for 'poll_list' and 'tx_ports'. */
> >  /* List of rx queues to poll. */
> > @@ -594,6 +598,8 @@ static struct dp_netdev_pmd_thread *dp_netdev_get_pmd(struct dp_netdev *dp,
> >unsigned core_id);
> > static struct dp_netdev_pmd_thread *
> > dp_netdev_pmd_get_next(struct dp_netdev *dp, struct cmap_position *pos);
> > +static void dp_netdev_del_pmd(struct dp_netdev *dp,
> > +  struct dp_netdev_pmd_thread *pmd);
> >  static void dp_netdev_destroy_all_pmds(struct dp_netdev *dp, bool non_pmd);
> >  static void dp_netdev_pmd_clear_ports(struct dp_netdev_pmd_thread *pmd);
> >  static void dp_netdev_add_port_tx_to_pmd(struct dp_netdev_pmd_thread *pmd,
> > @@ -1077,10 +1083,17 @@ create_dp_netdev(const char *name, const struct dpif_class *class,
> >  atomic_init(&dp->emc_insert_min, DEFAULT_EM_FLOW_INSERT_MIN);
> >
> >  cmap_init(&dp->poll_threads);
> > +
> > +ovs_mutex_init(&dp->tx_qid_pool_mutex);
> > +/* We need 1 Tx queue for each possible cpu core + 1 for non-PMD threads. */
> > +dp->tx_qid_pool = id_pool_create(0, ovs_numa_get_n_cores() + 1);
> > +
> >  ovs_mutex_init_recursive(&dp->non_pmd_mutex);
> >  ovsthread_key_create(&dp->per_pmd_key, NULL);
> >
> >  ovs_mutex_lock(&dp->port_mutex);
> > +/* non-PMD will be created before all other threads and will
> > + * allocate static_tx_qid = 0. */
> >  dp_netdev_set_nonpmd(dp);
> >
> >  error = do_add_port(dp, name,
> > dpif_netdev_port_open_type(dp->class,
> > @@ -1164,6 +1177,9 @@ dp_netdev_free(struct dp_netdev *dp)
> >  dp_netdev_destroy_all_pmds(dp, true);
> >  cmap_destroy(&dp->poll_threads);
> >
> > +ovs_mutex_destroy(&dp->tx_qid_pool_mutex);
> > +id_pool_destroy(dp->tx_qid_pool);
> > +
> >  ovs_mutex_destroy(&dp->non_pmd_mutex);
> >  ovsthread_key_delete(dp->per_pmd_key);
> >
> > @@ -3175,7 
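The id-pool used above to keep static_tx_qid values sequential can be pictured as a smallest-free-id allocator. A minimal model, for illustration only; the real implementation is OVS's lib/id-pool.c:

```c
#include <assert.h>
#include <stdbool.h>

/* Hand out the smallest free id so that static_tx_qid values stay
 * dense even as PMD threads are added and removed incrementally. */
#define POOL_SIZE 64

struct id_pool_model {
    bool in_use[POOL_SIZE];
};

static bool
id_alloc(struct id_pool_model *p, unsigned *id)
{
    for (unsigned i = 0; i < POOL_SIZE; i++) {
        if (!p->in_use[i]) {
            p->in_use[i] = true;
            *id = i;
            return true;
        }
    }
    return false;               /* pool exhausted */
}

static void
id_free(struct id_pool_model *p, unsigned id)
{
    if (id < POOL_SIZE) {
        p->in_use[id] = false;
    }
}
```

Freeing an id and reallocating returns the same slot, which is why the non-PMD thread, created first, always ends up with static_tx_qid = 0.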

[ovs-dev] [PATCH] debian.rst: Clarify that "dpkg" needs manual help with dependencies.

2017-05-29 Thread Ben Pfaff
Reported-by: Mircea Ulinic 
Signed-off-by: Ben Pfaff 
---
 AUTHORS.rst|  1 +
 Documentation/intro/install/debian.rst | 14 +-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/AUTHORS.rst b/AUTHORS.rst
index a8bf1ee0c6af..6c02711780d4 100644
--- a/AUTHORS.rst
+++ b/AUTHORS.rst
@@ -477,6 +477,7 @@ Mike Kruze  mkr...@nicira.com
 Mike Qing   mq...@vmware.com
 Min Chenustcer.tonyc...@gmail.com
 Mikael Doverhag mdover...@nicira.com
+Mircea Ulinic   p...@mirceaulinic.net
 Mrinmoy Das mr...@ixiacom.com
 Muhammad Shahbazmshah...@cs.princeton.edu
 Murali Rmuralir...@gmail.com
diff --git a/Documentation/intro/install/debian.rst b/Documentation/intro/install/debian.rst
index 6484331442fb..0bf94e4090d7 100644
--- a/Documentation/intro/install/debian.rst
+++ b/Documentation/intro/install/debian.rst
@@ -98,11 +98,15 @@ Installing .deb Packages
 
 
 These instructions apply to installing from Debian packages that you built
-yourself, as described in the previous section, or from packages provided by
-Debian or a Debian derivative distribution such as Ubuntu.  In the former case,
-use a command such as ``dpkg -i`` to install the .deb files that you build, and
-in the latter case use a program such as ``apt-get`` or ``aptitude`` to
-download and install the provided packages.
+yourself, as described in the previous section.  In this case, use a command
+such as ``dpkg -i`` to install the .deb files that you build.  You will have to
+manually install any missing dependencies.
+
+You can also use these instructions to install from packages provided by Debian
+or a Debian derivative distribution such as Ubuntu.  In this case, use a
+program such as ``apt-get`` or ``aptitude`` to download and install the
+provided packages.  These programs will also automatically download and install
+any missing dependencies.
 
 .. important::
   You must be superuser to install Debian packages.
-- 
2.10.2

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 1/3] ovs-router: fix refcnt leak when program terminates.

2017-05-29 Thread Flavio Leitner
Install a handler to flush routes and release devices when
the program is terminating.

Signed-off-by: Flavio Leitner 
---
 lib/ovs-router.c | 34 ++
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/lib/ovs-router.c b/lib/ovs-router.c
index 96871d1..dd23c88 100644
--- a/lib/ovs-router.c
+++ b/lib/ovs-router.c
@@ -33,6 +33,7 @@
 #include "command-line.h"
 #include "compiler.h"
 #include "dpif.h"
+#include "fatal-signal.h"
 #include "openvswitch/dynamic-string.h"
 #include "netdev.h"
 #include "packets.h"
@@ -484,16 +485,33 @@ ovs_router_flush(void)
 seq_change(tnl_conf_seq);
 }
 
+static void
+ovs_router_flush_handler(void *aux OVS_UNUSED)
+{
+ovs_router_flush();
+}
+
 /* May not be called more than once. */
 void
 ovs_router_init(void)
 {
-classifier_init(&cls, NULL);
-unixctl_command_register("ovs/route/add", "ip_addr/prefix_len out_br_name [gw] [pkt_mark=mark]", 2, 4,
- ovs_router_add, NULL);
-unixctl_command_register("ovs/route/show", "", 0, 0, ovs_router_show, NULL);
-unixctl_command_register("ovs/route/del", "ip_addr/prefix_len [pkt_mark=mark]", 1, 2,
- ovs_router_del, NULL);
-unixctl_command_register("ovs/route/lookup", "ip_addr [pkt_mark=mark]", 1, 2,
- ovs_router_lookup_cmd, NULL);
+static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
+
+if (ovsthread_once_start(&once)) {
+fatal_signal_add_hook(ovs_router_flush_handler, NULL, NULL, true);
+classifier_init(&cls, NULL);
+unixctl_command_register("ovs/route/add",
+ "ip_addr/prefix_len out_br_name [gw] "
+ "[pkt_mark=mark]",
+ 2, 4, ovs_router_add, NULL);
+unixctl_command_register("ovs/route/show", "", 0, 0,
+ ovs_router_show, NULL);
+unixctl_command_register("ovs/route/del", "ip_addr/prefix_len "
+ "[pkt_mark=mark]", 1, 2, ovs_router_del,
+ NULL);
+unixctl_command_register("ovs/route/lookup", "ip_addr "
+ "[pkt_mark=mark]", 1, 2,
+ ovs_router_lookup_cmd, NULL);
+ovsthread_once_done(&once);
+}
 }
-- 
2.9.4
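The guard added above means ovs_router_init() is now safe to call more than once. The pattern in isolation looks like this; a single-threaded sketch, whereas OVS's OVSTHREAD_ONCE additionally handles concurrent callers:

```c
#include <assert.h>
#include <stdbool.h>

static bool initialized;
static int init_count;

/* Run the initialization body only on the first call; later calls are
 * no-ops.  In OVS this is done with OVSTHREAD_ONCE, which is also safe
 * when several threads race to initialize. */
static void
router_init(void)
{
    if (!initialized) {
        initialized = true;
        init_count++;   /* register hooks, init the classifier, ... */
    }
}
```

This is what lets the fatal-signal hook be registered exactly once while relaxing the old "may not be called more than once" contract.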

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH 0/3] Allow restart to keep network configuration

2017-05-29 Thread Flavio Leitner

This patchset changes OVS to allow restarts to preserve
bridge's network configuration when using netdev datapath.

Flavio Leitner (3):
  ovs-router: fix refcnt leak when program terminates.
  netdev-linux: make tap devices persistent.
  netdev-linux: maintain original device's state

 lib/netdev-linux.c |  9 +
 lib/ovs-router.c   | 34 ++
 2 files changed, 35 insertions(+), 8 deletions(-)

-- 
2.9.4

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] Bug#863661: openvswitch: CVE-2017-9264

2017-05-29 Thread Salvatore Bonaccorso
Source: openvswitch
Version: 2.6.2~pre+git20161223-3
Severity: important
Tags: patch upstream security

Hi,

the following vulnerability was published for openvswitch.

CVE-2017-9264[0]:
| In lib/conntrack.c in the firewall implementation in Open vSwitch (OvS)
| 2.6.1, there is a buffer over-read while parsing malformed TCP, UDP,
| and IPv6 packets in the functions `extract_l3_ipv6`, `extract_l4_tcp`,
| and `extract_l4_udp` that can be triggered remotely.

If you fix the vulnerability please also make sure to include the
CVE (Common Vulnerabilities & Exposures) id in your changelog entry.

For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2017-9264
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-9264
[1] https://mail.openvswitch.org/pipermail/ovs-dev/2017-March/329323.html

Regards,
Salvatore
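The class of bug described by this CVE, reading a fixed-size header without first checking the remaining buffer length, can be illustrated in miniature. Hypothetical code, not the OVS conntrack implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct udp_hdr_model {          /* simplified 8-byte UDP header */
    uint16_t src, dst, len, csum;
};

/* Return a pointer to the UDP header only if the buffer actually
 * contains enough bytes.  Omitting a check of this kind is what turns
 * a truncated, attacker-crafted packet into a buffer over-read. */
static const struct udp_hdr_model *
extract_l4_udp(const uint8_t *data, size_t size)
{
    if (size < sizeof(struct udp_hdr_model)) {
        return NULL;            /* malformed packet: refuse to parse */
    }
    return (const struct udp_hdr_model *) data;
}
```

The upstream fix linked in [1] adds exactly this kind of length validation before each L3/L4 extraction.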
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] Bug#863662: openvswitch: CVE-2017-9265

2017-05-29 Thread Salvatore Bonaccorso
Source: openvswitch
Version: 2.6.2~pre+git20161223-3
Severity: normal
Tags: upstream patch security

Hi,

the following vulnerability was published for openvswitch.

CVE-2017-9265[0]:
| In Open vSwitch (OvS) v2.7.0, there is a buffer over-read while parsing
| the group mod OpenFlow message sent from the controller in
| `lib/ofp-util.c` in the function `ofputil_pull_ofp15_group_mod`.

this should be only in the OpenFlow 1.5+ support; I am not sure whether the
message mentions that this is not enabled by default. The affected source is
at least there.

If you fix the vulnerability please also make sure to include the
CVE (Common Vulnerabilities & Exposures) id in your changelog entry.

For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2017-9265
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-9265
[1] https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332965.html

Regards,
Salvatore
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] Bug#863655: openvswitch: CVE-2017-9263

2017-05-29 Thread Ben Pfaff
notfound 863655 2.3.0+git20140819-1
found 863655 2.6.2~pre+git20161223-3
severity 863655 normal
thanks

On Mon, May 29, 2017 at 09:44:13PM +0200, Salvatore Bonaccorso wrote:
> Source: openvswitch
> Version: 2.3.0+git20140819-1
> Severity: important
> Tags: security upstream patch
> 
> Hi,
> 
> the following vulnerability was published for openvswitch.
> 
> CVE-2017-9263[0]:
> | In Open vSwitch (OvS) 2.7.0, while parsing an OpenFlow role status
> | message, there is a call to the abort() function for undefined role
> | status reasons in the function `ofp_print_role_status_message` in
> | `lib/ofp-print.c` that may be leveraged toward a remote DoS attack by a
> | malicious switch.

This doesn't really make sense.  For a "malicious switch" to leverage
this as a remote DoS, the controller that it talks to has to be
implemented using the OVS code in question.  OVS 2.3 as packaged for
Debian doesn't include a controller,

Open vSwitch 2.6.2 includes two controllers.  The first one,
ovs-testcontroller, is not vulnerable to this in the default
configuration, because it does not print such messages even if it
receives them, unless it is specially configured to do so.  The second
one, ovn-controller, only talks to Open vSwitch directly, not to
arbitrary switches, and only over a trusted Unix domain socket anyway.
In any case, if either of these crashes due to this bug, they
automatically restart themselves.

So, while it is a good idea to fix this, it's not high severity.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] Processed: Re: Bug#863655: openvswitch: CVE-2017-9263

2017-05-29 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

> notfound 863655 2.3.0+git20140819-1
Bug #863655 [src:openvswitch] openvswitch: CVE-2017-9263
No longer marked as found in versions openvswitch/2.3.0+git20140819-1.
> found 863655 2.6.2~pre+git20161223-3
Bug #863655 [src:openvswitch] openvswitch: CVE-2017-9263
Marked as found in versions openvswitch/2.6.2~pre+git20161223-3.
> severity 863655 normal
Bug #863655 [src:openvswitch] openvswitch: CVE-2017-9263
Severity set to 'normal' from 'important'
> thanks
Stopping processing here.

Please contact me if you need assistance.
-- 
863655: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=863655
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH V9 13/31] netdev-tc-offloads: Implement netdev flow dump api using tc interface

2017-05-29 Thread Roi Dayan



On 28/05/2017 14:59, Roi Dayan wrote:

From: Paul Blakey 

Signed-off-by: Paul Blakey 
Reviewed-by: Roi Dayan 
Reviewed-by: Simon Horman 
---
 lib/netdev-tc-offloads.c | 184 ---
 1 file changed, 175 insertions(+), 9 deletions(-)

diff --git a/lib/netdev-tc-offloads.c b/lib/netdev-tc-offloads.c
index 2c91b34..ba479a8 100644
--- a/lib/netdev-tc-offloads.c
+++ b/lib/netdev-tc-offloads.c
@@ -150,7 +150,7 @@ get_ufid_tc_mapping(const ovs_u128 *ufid, int *prio, struct 
netdev **netdev)
  *
  * Returns true on success.
  */
-static bool OVS_UNUSED
+static bool
 find_ufid(int prio, int handle, struct netdev *netdev, ovs_u128 *ufid)
 {
 int ifindex = netdev_get_ifindex(netdev);
@@ -188,9 +188,20 @@ int
 netdev_tc_flow_dump_create(struct netdev *netdev,
struct netdev_flow_dump **dump_out)
 {
-struct netdev_flow_dump *dump = xzalloc(sizeof *dump);
+struct netdev_flow_dump *dump;
+int ifindex;
+
+ifindex = netdev_get_ifindex(netdev);
+if (ifindex < 0) {
+VLOG_ERR_RL(&error_rl, "failed to get ifindex for %s: %s",
+netdev_get_name(netdev), ovs_strerror(-ifindex));
+return -ifindex;
+}

+dump = xzalloc(sizeof *dump);
+dump->nl_dump = xzalloc(sizeof *dump->nl_dump);
 dump->netdev = netdev_ref(netdev);
+tc_dump_flower_start(ifindex, dump->nl_dump);

 *dump_out = dump;

@@ -200,21 +211,176 @@ netdev_tc_flow_dump_create(struct netdev *netdev,
 int
 netdev_tc_flow_dump_destroy(struct netdev_flow_dump *dump)
 {
+nl_dump_done(dump->nl_dump);
 netdev_close(dump->netdev);
+free(dump->nl_dump);
 free(dump);
+return 0;
+}
+
+static int
+parse_tc_flower_to_match(struct tc_flower *flower,
+ struct match *match,
+ struct nlattr **actions,
+ struct dpif_flow_stats *stats,
+ struct ofpbuf *buf) {
+size_t act_off;
+struct tc_flower_key *key = &flower->key;
+struct tc_flower_key *mask = &flower->mask;
+odp_port_t outport = 0;
+
+if (flower->ifindex_out) {
+outport = netdev_ifindex_to_odp_port(flower->ifindex_out);
+if (!outport) {
+return ENOENT;
+}
+}
+
+ofpbuf_clear(buf);
+
+match_init_catchall(match);
+match_set_dl_type(match, key->eth_type);
+match_set_dl_src_masked(match, key->src_mac, mask->src_mac);
+match_set_dl_dst_masked(match, key->dst_mac, mask->dst_mac);
+if (key->vlan_id || key->vlan_prio) {
+match_set_dl_vlan(match, htons(key->vlan_id));
+match_set_dl_vlan_pcp(match, key->vlan_prio);
+match_set_dl_type(match, key->encap_eth_type);
+}
+
+if (key->ip_proto &&
+(key->eth_type == htons(ETH_P_IP)
+ || key->eth_type == htons(ETH_P_IPV6))) {
+match_set_nw_proto(match, key->ip_proto);
+}
+
+match_set_nw_src_masked(match, key->ipv4.ipv4_src, mask->ipv4.ipv4_src);
+match_set_nw_dst_masked(match, key->ipv4.ipv4_dst, mask->ipv4.ipv4_dst);
+
+match_set_ipv6_src_masked(match,
+  &key->ipv6.ipv6_src, &mask->ipv6.ipv6_src);
+match_set_ipv6_dst_masked(match,
+  &key->ipv6.ipv6_dst, &mask->ipv6.ipv6_dst);
+
+match_set_tp_dst_masked(match, key->dst_port, mask->dst_port);
+match_set_tp_src_masked(match, key->src_port, mask->src_port);
+
+if (flower->tunnel.tunnel) {
+match_set_tun_id(match, flower->tunnel.id);
+if (flower->tunnel.ipv4.ipv4_dst) {
+match_set_tun_src(match, flower->tunnel.ipv4.ipv4_src);
+match_set_tun_dst(match, flower->tunnel.ipv4.ipv4_dst);
+} else if (!is_all_zeros(&flower->tunnel.ipv6.ipv6_dst,
+   sizeof flower->tunnel.ipv6.ipv6_dst)) {
+match_set_tun_ipv6_src(match, &flower->tunnel.ipv6.ipv6_src);
+match_set_tun_ipv6_dst(match, &flower->tunnel.ipv6.ipv6_dst);
+}
+match_set_tp_dst(match, flower->tunnel.tp_dst);


We have a bug here. We accidentally set tp_dst instead of tunnel.tp_dst.
Should be something like:

if (flower->tunnel.tp_dst) {
match_set_tun_dst(match, flower->tunnel.tp_dst);
}

Fixed for V10. I will wait for more comments before sending V10 though.



+}
+
+act_off = nl_msg_start_nested(buf, OVS_FLOW_ATTR_ACTIONS);
+{
+if (flower->vlan_pop) {
+nl_msg_put_flag(buf, OVS_ACTION_ATTR_POP_VLAN);
+}
+
+if (flower->vlan_push_id || flower->vlan_push_prio) {
+struct ovs_action_push_vlan *push;
+push = nl_msg_put_unspec_zero(buf, OVS_ACTION_ATTR_PUSH_VLAN,
+  sizeof *push);
+
+push->vlan_tpid = htons(ETH_TYPE_VLAN);
+push->vlan_tci = htons(flower->vlan_push_id
+   | (flower->vlan_push_prio << 13)
+   | 

Re: [ovs-dev] [PATCH] ovsdb: Check null before deref in ovsdb_monitor_table_condition_update().

2017-05-29 Thread Liran Schour
Thanks,

- Liran

ovs-dev-boun...@openvswitch.org wrote on 27/05/2017 06:51:15 AM:

> From: Ben Pfaff 
> To: d...@openvswitch.org
> Cc: Ben Pfaff 
> Date: 27/05/2017 06:51 AM
> Subject: [ovs-dev] [PATCH] ovsdb: Check null before deref in 
> ovsdb_monitor_table_condition_update().
> Sent by: ovs-dev-boun...@openvswitch.org
> 
> I believe that this would trigger an ovsdb-server crash if a client 
created
> a plain RFC 7047 "monitor" and later attempted to update its condition.
> 
> Found by Coverity.
> 
> Reported-at: https://scan3.coverity.com/reports.htm#v16889/p10449/
> fileInstanceId=14763017=4305336=180412
> Signed-off-by: Ben Pfaff 
> ---
>  ovsdb/monitor.c | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/ovsdb/monitor.c b/ovsdb/monitor.c
> index 7e6ddcb2aa0b..b98100703091 100644
> --- a/ovsdb/monitor.c
> +++ b/ovsdb/monitor.c
> @@ -1,5 +1,5 @@
>  /*
> - * Copyright (c) 2015 Nicira, Inc.
> + * Copyright (c) 2015, 2017 Nicira, Inc.
>   *
>   * Licensed under the Apache License, Version 2.0 (the "License");
>   * you may not use this file except in compliance with the License.
> @@ -687,15 +687,15 @@ ovsdb_monitor_table_condition_update(
>  const struct ovsdb_table *table,
>  const struct json *cond_json)
>  {
> +if (!condition) {
> +return NULL;
> +}
> +
>  struct ovsdb_monitor_table_condition *mtc =
>  shash_find_data(&condition->tables, table->schema->name);
>  struct ovsdb_error *error;
>  struct ovsdb_condition cond = OVSDB_CONDITION_INITIALIZER();
> 
> -if (!condition) {
> -return NULL;
> -}
> -
>  error = ovsdb_condition_from_json(table->schema, cond_json,
>NULL, &cond);
>  if (error) {
> -- 
> 2.10.2
> 
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> 




Re: [ovs-dev] [PATCH] ovn-northd: Avoid null deref for missing outport in build_static_route_flow().

2017-05-29 Thread Miguel Angel Ajo Pelayo
Acked-By: Miguel Angel Ajo 

On Sat, May 27, 2017 at 7:39 AM, Ben Pfaff  wrote:

> Found by Coverity.
>
> Reported-at: https://scan3.coverity.com/reports.htm#v16889/p10449/
> fileInstanceId=14763080=4305186=179788
> Signed-off-by: Ben Pfaff 
> ---
>  ovn/northd/ovn-northd.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index 94bbe7c489b7..4d4930855c39 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -3966,7 +3966,7 @@ build_static_route_flow(struct hmap *lflows, struct
> ovn_datapath *od,
>  }
>  }
>
> - if (!lrp_addr_s) {
> +if (!out_port || !lrp_addr_s) {
>  /* There is no matched out port. */
>  static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> VLOG_WARN_RL(&rl, "No path for static route %s; next hop %s",
> --
> 2.10.2
>


Re: [ovs-dev] [PATCH] ovn: Fix encoding of large logical output ports for STT.

2017-05-29 Thread Miguel Angel Ajo Pelayo
Acked-By: Miguel Angel Ajo 

$ cat test.c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint16_t u16 = 0xf123;
    uint64_t u64 = 0xf123;

    printf("uint16_t<<24=%x\n", u16 << 24);
    printf("uint64_t<<24=%" PRIx64 "\n", u64 << 24);
    return 0;
}

linux: vagrant@gw1 in ~/ovs on
$ ./test
uint16_t<<24=23000000
uint64_t<<24=f123000000

I tested it, and it seems (at least with my GCC 4.8.5 20150623
(Red Hat 4.8.5-11) on x86-64) that:

a uint16_t operand is promoted to int before the shift, so the result is
truncated to 32 bits; a uint64_t operand keeps a full 64-bit result.


That explains why this appeared to work and was never noticed with fewer than
256 ports: for an outport below 256, outport << 24 still fits in 32 bits.


On Sat, May 27, 2017 at 6:24 AM, Ben Pfaff  wrote:

> put_encapsulation() is meant to load the logical output port into bits
> 24 to 40 of the tunnel ID metadata field, but 'outport << 24' did not
> have that effect because outport has type uint16_t.  This fixes the
> problem.
>
> Found by Coverity.
>
> Reported-at: https://scan3.coverity.com/reports.htm#v16889/p10449/
> fileInstanceId=14763078=4304791=180391
> Signed-off-by: Ben Pfaff 
> ---
>  ovn/controller/physical.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
> index 457fc45414bd..532c7252e1ea 100644
> --- a/ovn/controller/physical.c
> +++ b/ovn/controller/physical.c
> @@ -128,8 +128,8 @@ put_encapsulation(enum mf_field_id mff_ovn_geneve,
>  put_load(outport, mff_ovn_geneve, 0, 32, ofpacts);
>  put_move(MFF_LOG_INPORT, 0, mff_ovn_geneve, 16, 15, ofpacts);
>  } else if (tun->type == STT) {
> -put_load(datapath->tunnel_key | (outport << 24), MFF_TUN_ID, 0,
> 64,
> - ofpacts);
> +put_load(datapath->tunnel_key | ((uint64_t) outport << 24),
> + MFF_TUN_ID, 0, 64, ofpacts);
>  put_move(MFF_LOG_INPORT, 0, MFF_TUN_ID, 40, 15, ofpacts);
>  } else if (tun->type == VXLAN) {
>  put_load(datapath->tunnel_key, MFF_TUN_ID, 0, 24, ofpacts);
> --
> 2.10.2
>


[ovs-dev] Bug#863661: openvswitch: CVE-2017-9264

2017-05-29 Thread Salvatore Bonaccorso
Hi

On Mon, May 29, 2017 at 04:35:30PM -0700, Ben Pfaff wrote:
> severity 863661 normal
> thanks
> 
> On Mon, May 29, 2017 at 10:14:49PM +0200, Salvatore Bonaccorso wrote:
> > Source: openvswitch
> > Version: 2.6.2~pre+git20161223-3
> > Severity: important
> > Tags: patch upstream security
> > 
> > Hi,
> > 
> > the following vulnerability was published for openvswitch.
> > 
> > CVE-2017-9264[0]:
> > | In lib/conntrack.c in the firewall implementation in Open vSwitch (OvS)
> > | 2.6.1, there is a buffer over-read while parsing malformed TCP, UDP,
> > | and IPv6 packets in the functions `extract_l3_ipv6`, `extract_l4_tcp`,
> > | and `extract_l4_udp` that can be triggered remotely.
> > 
> > If you fix the vulnerability please also make sure to include the
> > CVE (Common Vulnerabilities & Exposures) id in your changelog entry.
> 
> This only affects the userspace datapath, most often used in the context
> of DPDK, which isn't enabled in the Debian packaging.  In addition, the
> fact that it's a buffer overread (which makes it difficult to use to
> crash OVS or change its behavior) and the fact that end-to-end TCP
> checksum verification would catch it leads me to believe that this is
> only "normal" severity, so I'm updating it (with this email).

Thanks for the analysis.

In this case I think normal is ok.

Regards,
Salvatore


[ovs-dev] Bug#863655: openvswitch: CVE-2017-9263

2017-05-29 Thread Salvatore Bonaccorso
HI Ben,

On Mon, May 29, 2017 at 01:35:58PM -0700, Ben Pfaff wrote:
> notfound 863655 2.3.0+git20140819-1
> found 863655 2.6.2~pre+git20161223-3
> severity 863655 normal
> thanks
> 
> On Mon, May 29, 2017 at 09:44:13PM +0200, Salvatore Bonaccorso wrote:
> > Source: openvswitch
> > Version: 2.3.0+git20140819-1
> > Severity: important
> > Tags: security upstream patch
> > 
> > Hi,
> > 
> > the following vulnerability was published for openvswitch.
> > 
> > CVE-2017-9263[0]:
> > | In Open vSwitch (OvS) 2.7.0, while parsing an OpenFlow role status
> > | message, there is a call to the abort() function for undefined role
> > | status reasons in the function `ofp_print_role_status_message` in
> > | `lib/ofp-print.c` that may be leveraged toward a remote DoS attack by a
> > | malicious switch.
> 
> This doesn't really make sense.  For a "malicious switch" to leverage
> this as a remote DoS, the controller that it talks to has to be
> implemented using the OVS code in question.  OVS 2.3 as packaged for
> Debian doesn't include a controller,
> 
> Open vSwitch 2.6.2 includes two controllers.  The first one,
> ovs-testcontroller, is not vulnerable to this in the default
> configuration, because it does not print such messages even if it
> receives them, unless it is specially configured to do so.  The second
> one, ovn-controller, only talks to Open vSwitch directly, not to
> arbitrary switches, and only over a trusted Unix domain socket anyway.
> In any case, if either of these crashes due to this bug, they
> automatically restart themselves.

Thanks for your reply (much appreciated) and this analysis! I adjusted
the security-tracker information.

> So, while it is a good idea to fix this, it's not high severity.

Yes might be ok indeed.

Regards,
Salvatore


[ovs-dev] Processed: Re: Bug#863661: openvswitch: CVE-2017-9264

2017-05-29 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

> severity 863661 normal
Bug #863661 [src:openvswitch] openvswitch: CVE-2017-9264
Severity set to 'normal' from 'important'
> thanks
Stopping processing here.

Please contact me if you need assistance.
-- 
863661: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=863661
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems