Re: [ovs-dev] [PATCH] netdev-dpdk: defer MTU set after interface start

2017-11-24 Thread Matteo Croce
On Fri, Nov 24, 2017 at 9:26 AM, Stokes, Ian  wrote:
>> Some PMD assumes that the RX buffers are already allocated when setting
>> the device MTU, because the RX buffer size depends on the MTU.
>> This worked until 67fe6d635193 added a call to rte_eth_dev_set_mtu() in
>> the init code, which would set the MTU before the RX buffer allocation,
>> triggering a segmentation fault with some PMD:
>>
>> Stack trace of thread 20680:
>> #0  0x559464396534 qede_set_mtu (ovs-vswitchd)
>> #1  0x559464323c41 rte_eth_dev_set_mtu (ovs-vswitchd)
>> #2  0x5594645f5e85 dpdk_eth_dev_queue_setup (ovs-vswitchd)
>> #3  0x5594645f8ae6 netdev_dpdk_reconfigure (ovs-vswitchd)
>> #4  0x55946452225c reconfigure_datapath (ovs-vswitchd)
>> #5  0x559464522d07 do_add_port (ovs-vswitchd)
>> #6  0x559464522e8d dpif_netdev_port_add (ovs-vswitchd)
>> #7  0x559464528dde dpif_port_add (ovs-vswitchd)
>> #8  0x5594644dc0e0 port_add (ovs-vswitchd)
>> #9  0x5594644d2ab1 ofproto_port_add (ovs-vswitchd)
>> #10 0x5594644c0a85 bridge_add_ports__ (ovs-vswitchd)
>> #11 0x5594644c26e8 bridge_reconfigure (ovs-vswitchd)
>> #12 0x5594644c5c49 bridge_run (ovs-vswitchd)
>> #13 0x5594642d4155 main (ovs-vswitchd)
>> #14 0x7f0e1444bc05 __libc_start_main (libc.so.6)
>> #15 0x5594642d8328 _start (ovs-vswitchd)
>>
>> A possible solution could be to move the first call to
>> rte_eth_dev_set_mtu() just after the device start instead of
>> dpdk_eth_dev_queue_setup() which, by the way, set the MTU multiple times
>> as the call to rte_eth_dev_set_mtu() was in a loop.
>>
>> CC: Mark Kavanagh 
>> Fixes: 67fe6d635193 ("netdev-dpdk: use rte_eth_dev_set_mtu.")
>> Signed-off-by: Matteo Croce 
>> ---
>>  lib/netdev-dpdk.c | 14 +++---
>>  1 file changed, 7 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index
>> 76e79be25..229aa4a76 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -750,13 +750,6 @@ dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int
>> n_rxq, int n_txq)
>>  break;
>>  }
>>
>> -diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
>> -if (diag) {
>> -VLOG_ERR("Interface %s MTU (%d) setup error: %s",
>> -dev->up.name, dev->mtu, rte_strerror(-diag));
>> -break;
>> -}
>> -
>>  for (i = 0; i < n_txq; i++) {
>>  diag = rte_eth_tx_queue_setup(dev->port_id, i, dev->txq_size,
>>dev->socket_id, NULL); @@ -
>> 849,6 +842,13 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev)
>>  return -diag;
>>  }
>>
>> +diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
>> +if (diag) {
>> +VLOG_ERR("Interface %s MTU (%d) setup error: %s",
>> +dev->up.name, dev->mtu, rte_strerror(-diag));
>> +return -diag;
>> +}
>> +
>
> In my testing I didn't see any issues here, but would like to flag it to our 
> MTU Guru (Mark K).
>
> Any opinion on this? From the API description it looks like it is ok?
>
>
>>  rte_eth_promiscuous_enable(dev->port_id);
>>  rte_eth_allmulticast_enable(dev->port_id);
>>
>> --
>> 2.13.6
>>
>> ___
>> dev mailing list
>> d...@openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

The issue only arises with the qede PMD and 67fe6d635193
("netdev-dpdk: use rte_eth_dev_set_mtu.")

Regards,
-- 
Matteo Croce
per aspera ad upstream
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kavanagh, Mark B
Sounds good guys - I'll get cracking on this on Monday.

Cheers,
Mark

>-Original Message-
>From: Jan Scheurich [mailto:jan.scheur...@ericsson.com]
>Sent: Friday, November 24, 2017 4:21 PM
>To: Kevin Traynor ; Kavanagh, Mark B
>; d...@openvswitch.org
>Cc: maxime.coque...@redhat.com; Flavio Leitner ; Franck Baudin
>; Mooney, Sean K ; Ilya Maximets
>; Stokes, Ian ; Loftus, Ciara
>; Darrell Ball ; Aaron Conole
>
>Subject: RE: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU
>feature
>
>+1
>Jan
>
>> -Original Message-
>> From: Kevin Traynor [mailto:ktray...@redhat.com]
>> Sent: Friday, 24 November, 2017 17:11
>> To: Mark Kavanagh ; d...@openvswitch.org
>> Cc: maxime.coque...@redhat.com; Flavio Leitner ; Franck
>Baudin ; Mooney, Sean K
>> ; Ilya Maximets ; Ian
>Stokes ; Loftus, Ciara
>> ; Darrell Ball ; Aaron Conole
>; Jan Scheurich
>> 
>> Subject: Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU
>feature
>>
>> On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
>> > DPDK v17.11 introduces support for the vHost IOMMU feature.
>> > This is a security feature, that restricts the vhost memory
>> > that a virtio device may access.
>> >
>> > This feature also enables the vhost REPLY_ACK protocol, the
>> > implementation of which is known to work in newer versions of
>> > QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
>> > v2.9.0, inclusive). As such, the feature is disabled by default
>> > in (and should remain so, for the aforementioned older QEMU
>> > verions). Starting with QEMU v2.9.1, vhost-iommu-support can
>> > safely be enabled, even without having an IOMMU device, with
>> > no performance penalty.
>> >
>> > This patch adds a new vhost port option, vhost-iommu-support,
>> > to allow enablement of the vhost IOMMU feature:
>> >
>> > $ ovs-vsctl add-port br0 vhost-client-1 \
>> > -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>> >  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>> >  options:vhost-iommu-support=true
>> >
>>
>> Hi Mark, All,
>>
>> I'm thinking about this and whether the current approach provides more
>> than what is actually needed by users at the cost of making the user
>> interface more complex.
>>
>> As an alternative, how about having a global other_config (to be set
>> like vhost-socket-dir can be) for this instead of having to set it for
>> each individual interface. It would mean that it would only have to be
>> set once, instead of having this (ugly?!) option every time a vhost port
>> is added, so it's a less intrusive change and I can't really think that
>> a user would require to do this per vhostclient interface??? It's a pain
>> to have to add this at all for a bug in QEMU and I'm sure in 1/2/3 years
>> time someone will say that users could still be using QEMU < 2.9.1 and
>> we can't remove it, so it would be nice to keep it as discreet as
>> possible as we're going to be stuck with it for a while.
>>
>> I assume that a user would only use one version of QEMU at a time and
>> would either want or not want this feature. In the worst case, if that
>> proved completely wrong in the future, then a per interface override
>> could easily be added. Once there's a way to maintain backwards
>> compatibility (which there would be) I'd rather err on the side of
>> introducing just enough functionality over increasing complexity
>> for the user.
>>
>> What do you think?
>>
>> thanks,
>> Kevin.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Jan Scheurich
+1 
Jan

> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Friday, 24 November, 2017 17:11
> To: Mark Kavanagh ; d...@openvswitch.org
> Cc: maxime.coque...@redhat.com; Flavio Leitner ; Franck 
> Baudin ; Mooney, Sean K
> ; Ilya Maximets ; Ian Stokes 
> ; Loftus, Ciara
> ; Darrell Ball ; Aaron Conole 
> ; Jan Scheurich
> 
> Subject: Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU 
> feature
> 
> On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
> > DPDK v17.11 introduces support for the vHost IOMMU feature.
> > This is a security feature, that restricts the vhost memory
> > that a virtio device may access.
> >
> > This feature also enables the vhost REPLY_ACK protocol, the
> > implementation of which is known to work in newer versions of
> > QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
> > v2.9.0, inclusive). As such, the feature is disabled by default
> > in (and should remain so, for the aforementioned older QEMU
> > verions). Starting with QEMU v2.9.1, vhost-iommu-support can
> > safely be enabled, even without having an IOMMU device, with
> > no performance penalty.
> >
> > This patch adds a new vhost port option, vhost-iommu-support,
> > to allow enablement of the vhost IOMMU feature:
> >
> > $ ovs-vsctl add-port br0 vhost-client-1 \
> > -- set Interface vhost-client-1 type=dpdkvhostuserclient \
> >  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> >  options:vhost-iommu-support=true
> >
> 
> Hi Mark, All,
> 
> I'm thinking about this and whether the current approach provides more
> than what is actually needed by users at the cost of making the user
> interface more complex.
> 
> As an alternative, how about having a global other_config (to be set
> like vhost-socket-dir can be) for this instead of having to set it for
> each individual interface. It would mean that it would only have to be
> set once, instead of having this (ugly?!) option every time a vhost port
> is added, so it's a less intrusive change and I can't really think that
> a user would require to do this per vhostclient interface??? It's a pain
> to have to add this at all for a bug in QEMU and I'm sure in 1/2/3 years
> time someone will say that users could still be using QEMU < 2.9.1 and
> we can't remove it, so it would be nice to keep it as discreet as
> possible as we're going to be stuck with it for a while.
> 
> I assume that a user would only use one version of QEMU at a time and
> would either want or not want this feature. In the worst case, if that
> proved completely wrong in the future, then a per interface override
> could easily be added. Once there's a way to maintain backwards
> compatibility (which there would be) I'd rather err on the side of
> introducing just enough functionality over increasing complexity
> for the user.
> 
> What do you think?
> 
> thanks,
> Kevin.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kevin Traynor
On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
> DPDK v17.11 introduces support for the vHost IOMMU feature.
> This is a security feature, that restricts the vhost memory
> that a virtio device may access.
> 
> This feature also enables the vhost REPLY_ACK protocol, the
> implementation of which is known to work in newer versions of
> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
> v2.9.0, inclusive). As such, the feature is disabled by default
> in (and should remain so, for the aforementioned older QEMU
> verions). Starting with QEMU v2.9.1, vhost-iommu-support can
> safely be enabled, even without having an IOMMU device, with
> no performance penalty.
> 
> This patch adds a new vhost port option, vhost-iommu-support,
> to allow enablement of the vhost IOMMU feature:
> 
> $ ovs-vsctl add-port br0 vhost-client-1 \
> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>  options:vhost-iommu-support=true
> 

Hi Mark, All,

I'm thinking about this and whether the current approach provides more
than what is actually needed by users at the cost of making the user
interface more complex.

As an alternative, how about having a global other_config (to be set
like vhost-socket-dir can be) for this instead of having to set it for
each individual interface. It would mean that it would only have to be
set once, instead of having this (ugly?!) option every time a vhost port
is added, so it's a less intrusive change and I can't really think that
a user would require to do this per vhostclient interface??? It's a pain
to have to add this at all for a bug in QEMU and I'm sure in 1/2/3 years
time someone will say that users could still be using QEMU < 2.9.1 and
we can't remove it, so it would be nice to keep it as discreet as
possible as we're going to be stuck with it for a while.
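
Just to make the suggestion concrete, a rough sketch of how a single global
knob could be read at init time, mirroring how other dpdk other_config
booleans are handled with smap_get_bool() (names and placement here are
illustrative only, not a tested patch):

    /* Hypothetical sketch: one global flag read once from other_config,
     * instead of a per-interface option. */
    static bool vhost_iommu_enabled;    /* illustrative global */

    /* ...somewhere in the dpdk other_config handling, e.g. dpdk_init__()... */
    vhost_iommu_enabled = smap_get_bool(ovs_other_config,
                                        "vhost-iommu-support", false);
    if (vhost_iommu_enabled) {
        VLOG_INFO("vHost IOMMU support enabled for vhost-user-client ports");
    }

    /* netdev-dpdk could then OR RTE_VHOST_USER_IOMMU_SUPPORT into the vhost
     * driver flags based on this one flag, with no per-port option at all. */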

I assume that a user would only use one version of QEMU at a time and
would either want or not want this feature. In the worst case, if that
proved completely wrong in the future, then a per interface override
could easily be added. Once there's a way to maintain backwards
compatibility (which there would be) I'd rather err on the side of
introducing just enough functionality over increasing complexity
for the user.

What do you think?

thanks,
Kevin.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH] Revert "dpif_netdev: Refactor dp_netdev_pmd_thread structure."

2017-11-24 Thread Bodireddy, Bhanuprakash
>On 22.11.2017 20:14, Bodireddy, Bhanuprakash wrote:
>>> This reverts commit a807c15796ddc43ba1ffb2a6b0bd2ad4e2b73941.
>>>
>>> Padding and aligning of dp_netdev_pmd_thread structure members is
>>> useless, broken in a several ways and only greatly degrades
>>> maintainability and extensibility of the structure.
>>
>> The idea of my earlier patch was to mark the cache lines and reduce the
>holes while still maintaining the grouping of related members in this 
>structure.
>
>Some of the grouping aspects look strange. For example, it looks illogical
>that 'exit_latch' is grouped with 'flow_table' but not with 'reload_seq' and
>the other reload-related members. It also looks strange that statistics and
>counters are spread across different groups. So, IMHO, it's not well grouped.

I had to strike a fine balance and some members may be placed in a different 
group
due to their sizes and importance. Let me think if I can make it better.

>
>> Also cache line marking is a good practice to make someone extra cautious
>when extending or editing important structures.
>> Most importantly I was experimenting with prefetching on this structure
>and needed cache line markers for it.
>>
>> I see that you are on ARM (I don't have HW to test) and want to know if this
>commit has any negative effect; any numbers would be appreciated.
>
>Basic VM-VM testing shows a stable 0.5% performance improvement with the
>revert applied.

I did P2P, PVP and PVVP with IXIA and haven't noticed any drop on X86.  

>Padding adds 560 additional bytes of holes.
As the cache line on ARM is 128 bytes, it created holes; I can find a workaround
to handle this.
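
Purely as an illustration of the rte_mbuf-style workaround (not OVS code):
pad the member groups to a fixed 64-byte granularity, but align the whole
structure to the real cache line size, so a 128-byte cache line doesn't blow
the structure up with holes:

    #include <stdint.h>

    /* Illustrative only, mirroring DPDK's RTE_CACHE_LINE_MIN_SIZE /
     * RTE_CACHE_LINE_SIZE split: pad to 64 B, align to the real line size. */
    #define EX_CACHE_LINE_MIN_SIZE 64    /* padding granularity */
    #define EX_CACHE_LINE_SIZE     128   /* e.g. ARM, used for alignment only */

    struct example_pmd {
        /* group 0: hot, mostly read-only members */
        uint64_t hot_a;
        uint64_t hot_b;
        uint8_t pad0[EX_CACHE_LINE_MIN_SIZE - 2 * sizeof(uint64_t)];

        /* group 1: statistics */
        uint64_t stats[4];
        uint8_t pad1[EX_CACHE_LINE_MIN_SIZE - 4 * sizeof(uint64_t)];
    } __attribute__((aligned(EX_CACHE_LINE_SIZE)));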

>
>> More comments inline.
>>
>>>
>>> Issues:
>>>
>>>1. It's not working because all the instances of struct
>>>   dp_netdev_pmd_thread allocated only by usual malloc. All the
>>>   memory is not aligned to cachelines -> structure almost never
>>>   starts at aligned memory address. This means that any further
>>>   paddings and alignments inside the structure are completely
>>>   useless. Fo example:
>>>
>>>   Breakpoint 1, pmd_thread_main
>>>   (gdb) p pmd
>>>   $49 = (struct dp_netdev_pmd_thread *) 0x1b1af20
>>   (gdb) p &pmd->cacheline1
>>   $51 = (OVS_CACHE_LINE_MARKER *) 0x1b1af60
>>   (gdb) p &pmd->cacheline0
>>   $52 = (OVS_CACHE_LINE_MARKER *) 0x1b1af20
>>   (gdb) p &pmd->flow_cache
>>>   $53 = (struct emc_cache *) 0x1b1afe0
>>>
>>>   All of the above addresses shifted from cacheline start by 32B.
>>
>> If you see below all the addresses are 64 byte aligned.
>>
>> (gdb) p pmd
>> $1 = (struct dp_netdev_pmd_thread *) 0x7fc1e9b1a040
> (gdb) p &pmd->cacheline0
> $2 = (OVS_CACHE_LINE_MARKER *) 0x7fc1e9b1a040
> (gdb) p &pmd->cacheline1
> $3 = (OVS_CACHE_LINE_MARKER *) 0x7fc1e9b1a080
> (gdb) p &pmd->flow_cache
> $4 = (struct emc_cache *) 0x7fc1e9b1a0c0
> (gdb) p &pmd->flow_table
> $5 = (struct cmap *) 0x7fc1e9fba100
> (gdb) p &pmd->stats
> $6 = (struct dp_netdev_pmd_stats *) 0x7fc1e9fba140
> (gdb) p &pmd->port_mutex
> $7 = (struct ovs_mutex *) 0x7fc1e9fba180
> (gdb) p &pmd->poll_list
> $8 = (struct hmap *) 0x7fc1e9fba1c0
> (gdb) p &pmd->tnl_port_cache
> $9 = (struct hmap *) 0x7fc1e9fba200
> (gdb) p &pmd->stats_zero
>> $10 = (unsigned long long (*)[5]) 0x7fc1e9fba240
>>
>> I tried using xzalloc_cacheline instead of default xzalloc() here.  I
>> tried tens of times and always found that the address is
>> 64 byte aligned and it should start at the beginning of cache line on X86.
>> Not sure why the comment  " (The memory returned will not be at the start
>of  a cache line, though, so don't assume such alignment.)" says otherwise?
>
>Yes, you will always get aligned addresses on your x86 Linux system, which
>supports the posix_memalign() call. The comment says what it says because the
>allocator falls back to some memory allocation tricks when posix_memalign() is
>not available (Windows, some macOS, maybe some Linux systems (not sure)), and
>the address will not be aligned in that case.

I also verified the other case, when posix_memalign isn't available, and even
then the returned address is aligned on a CACHE_LINE_SIZE boundary. I will send
out a patch to use xzalloc_cacheline for allocating the memory.
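
Roughly along these lines (sketch only; the real call sites in dpif-netdev.c
may differ slightly, and the matching free has to become free_cacheline()):

    /* Sketch: allocate the pmd structure on a cache-line boundary so the
     * in-structure cache-line markers are meaningful.  Assumes the current
     * code uses plain xzalloc()/free(). */
    struct dp_netdev_pmd_thread *pmd;

    pmd = xzalloc_cacheline(sizeof *pmd);
    /* ... configure and use the pmd thread ... */
    free_cacheline(pmd);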

>
>>
>>>
>>>   Can we fix it properly? NO.
>>>   OVS currently doesn't have appropriate API to allocate aligned
>>>   memory. The best candidate is 'xmalloc_cacheline()' but it
>>>   clearly states that "The memory returned will not be at the
>>>   start of a cache line, though, so don't assume such alignment".
>>>   And also, this function will never return aligned memory on
>>>   Windows or MacOS.
>>>
>>>2. CACHE_LINE_SIZE is not constant. Different architectures have
>>>   different cache line sizes, but the code assumes that
>>>   CACHE_LINE_SIZE is always equal to 64 bytes. All the structure
>>>   members are grouped by 64 bytes and padded to CACHE_LINE_SIZE.
>>>   This leads to a huge holes in a structures if CACHE_LINE_SIZE
>>>   


Re: [ovs-dev] [PATCH] Revert "dpif_netdev: Refactor dp_netdev_pmd_thread structure."

2017-11-24 Thread Ilya Maximets
On 22.11.2017 20:14, Bodireddy, Bhanuprakash wrote:
>> This reverts commit a807c15796ddc43ba1ffb2a6b0bd2ad4e2b73941.
>>
>> Padding and aligning of dp_netdev_pmd_thread structure members is
>> useless, broken in a several ways and only greatly degrades maintainability
>> and extensibility of the structure.
> 
> The idea of my earlier patch was to mark the cache lines and reduce the holes 
> while still maintaining the grouping of related members in this structure.

Some of the grouping aspects look strange. For example, it looks illogical that
'exit_latch' is grouped with 'flow_table' but not with 'reload_seq' and the
other reload-related members. It also looks strange that statistics and
counters are spread across different groups. So, IMHO, it's not well grouped.

> Also cache line marking is a good practice to make someone extra cautious
> when extending or editing important structures.
> Most importantly I was experimenting with prefetching on this structure and 
> needed cache line markers for it. 
> 
> I see that you are on ARM (I don't have HW to test) and want to know if this
> commit has any negative effect; any numbers would be appreciated.

Basic VM-VM testing shows a stable 0.5% performance improvement with the revert
applied.
Padding adds 560 additional bytes of holes.

> More comments inline.
> 
>>
>> Issues:
>>
>>1. It's not working because all the instances of struct
>>   dp_netdev_pmd_thread allocated only by usual malloc. All the
>>   memory is not aligned to cachelines -> structure almost never
>>   starts at aligned memory address. This means that any further
>>   paddings and alignments inside the structure are completely
>>   useless. Fo example:
>>
>>   Breakpoint 1, pmd_thread_main
>>   (gdb) p pmd
>>   $49 = (struct dp_netdev_pmd_thread *) 0x1b1af20
>>   (gdb) p &pmd->cacheline1
>>   $51 = (OVS_CACHE_LINE_MARKER *) 0x1b1af60
>>   (gdb) p &pmd->cacheline0
>>   $52 = (OVS_CACHE_LINE_MARKER *) 0x1b1af20
>>   (gdb) p &pmd->flow_cache
>>   $53 = (struct emc_cache *) 0x1b1afe0
>>
>>   All of the above addresses shifted from cacheline start by 32B.
> 
> If you see below all the addresses are 64 byte aligned.
> 
> (gdb) p pmd
> $1 = (struct dp_netdev_pmd_thread *) 0x7fc1e9b1a040
> (gdb) p &pmd->cacheline0
> $2 = (OVS_CACHE_LINE_MARKER *) 0x7fc1e9b1a040
> (gdb) p &pmd->cacheline1
> $3 = (OVS_CACHE_LINE_MARKER *) 0x7fc1e9b1a080
> (gdb) p &pmd->flow_cache
> $4 = (struct emc_cache *) 0x7fc1e9b1a0c0
> (gdb) p &pmd->flow_table
> $5 = (struct cmap *) 0x7fc1e9fba100
> (gdb) p &pmd->stats
> $6 = (struct dp_netdev_pmd_stats *) 0x7fc1e9fba140
> (gdb) p &pmd->port_mutex
> $7 = (struct ovs_mutex *) 0x7fc1e9fba180
> (gdb) p &pmd->poll_list
> $8 = (struct hmap *) 0x7fc1e9fba1c0
> (gdb) p &pmd->tnl_port_cache
> $9 = (struct hmap *) 0x7fc1e9fba200
> (gdb) p &pmd->stats_zero
> $10 = (unsigned long long (*)[5]) 0x7fc1e9fba240
> 
> I tried using xzalloc_cacheline instead of default xzalloc() here.  I tried 
> tens of times and always found that the address is
> 64 byte aligned and it should start at the beginning of cache line on X86. 
> Not sure why the comment  " (The memory returned will not be at the start of  
> a cache line, though, so don't assume such alignment.)" says otherwise?

Yes, you will always get aligned addresses on your x86 Linux system, which
supports the posix_memalign() call. The comment says what it says because the
allocator falls back to some memory allocation tricks when posix_memalign() is
not available (Windows, some macOS, maybe some Linux systems (not sure)), and
the address will not be aligned in that case.
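
As a small self-contained illustration of the difference (plain C, not OVS
code): posix_memalign() guarantees the requested alignment, while plain
malloc() typically does not return cache-line-aligned blocks:

    #define _POSIX_C_SOURCE 200112L
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_LINE_SIZE 64

    int main(void)
    {
        void *aligned = NULL;
        void *plain = malloc(1024);

        if (posix_memalign(&aligned, CACHE_LINE_SIZE, 1024)) {
            perror("posix_memalign");
            return 1;
        }

        /* The second offset is always 0; the first one usually is not. */
        printf("malloc:         offset in cache line = %zu\n",
               (size_t) ((uintptr_t) plain % CACHE_LINE_SIZE));
        printf("posix_memalign: offset in cache line = %zu\n",
               (size_t) ((uintptr_t) aligned % CACHE_LINE_SIZE));

        free(plain);
        free(aligned);
        return 0;
    }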

> 
>>
>>   Can we fix it properly? NO.
>>   OVS currently doesn't have appropriate API to allocate aligned
>>   memory. The best candidate is 'xmalloc_cacheline()' but it
>>   clearly states that "The memory returned will not be at the
>>   start of a cache line, though, so don't assume such alignment".
>>   And also, this function will never return aligned memory on
>>   Windows or MacOS.
>>
>>2. CACHE_LINE_SIZE is not constant. Different architectures have
>>   different cache line sizes, but the code assumes that
>>   CACHE_LINE_SIZE is always equal to 64 bytes. All the structure
>>   members are grouped by 64 bytes and padded to CACHE_LINE_SIZE.
>>   This leads to a huge holes in a structures if CACHE_LINE_SIZE
>>   differs from 64. This is opposite to portability. If I want
>>   good performance of cmap I need to have CACHE_LINE_SIZE equal
>>   to the real cache line size, but I will have huge holes in the
>>   structures. If you'll take a look to struct rte_mbuf from DPDK
>>   you'll see that it uses 2 defines: RTE_CACHE_LINE_SIZE and
>>   RTE_CACHE_LINE_MIN_SIZE to avoid holes in mbuf structure.
> 
> I understand that ARM and few other processors (like OCTEON) have 128 bytes 
> cache lines.
> But  again curious of performance impact in your case with this new alignment.
> 
>>
>>3. Sizes of system/libc defined types 

Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kevin Traynor
On 11/24/2017 12:58 PM, Kavanagh, Mark B wrote:
>> From: Kevin Traynor [mailto:ktray...@redhat.com]
>> Sent: Friday, November 24, 2017 11:31 AM
>> To: Kavanagh, Mark B ; d...@openvswitch.org
>> Cc: maxime.coque...@redhat.com
>> Subject: Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU
>> feature
>>
>> On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
>>> DPDK v17.11 introduces support for the vHost IOMMU feature.
>>> This is a security feature, that restricts the vhost memory
>>> that a virtio device may access.
>>>
>>> This feature also enables the vhost REPLY_ACK protocol, the
>>> implementation of which is known to work in newer versions of
>>> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
>>> v2.9.0, inclusive). As such, the feature is disabled by default
>>> in (and should remain so, for the aforementioned older QEMU
>>> verions). Starting with QEMU v2.9.1, vhost-iommu-support can
>>> safely be enabled, even without having an IOMMU device, with
>>> no performance penalty.
>>>
>>> This patch adds a new vhost port option, vhost-iommu-support,
>>> to allow enablement of the vhost IOMMU feature:
>>>
>>> $ ovs-vsctl add-port br0 vhost-client-1 \
>>> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>>>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>>>  options:vhost-iommu-support=true
>>>
>>> Note that support for this feature is only implemented for vhost
>>> user client ports (since vhost user ports are considered deprecated).
>>>
>>> Signed-off-by: Mark Kavanagh 
>>> Acked-by: Maxime Coquelin 
>>> Acked-by: Ciara Loftus 
>>> ---
>>>  Documentation/topics/dpdk/vhost-user.rst | 21 +
>>>  NEWS |  1 +
>>>  lib/netdev-dpdk.c| 29 ++---
>>>  vswitchd/vswitch.xml | 10 ++
>>>  4 files changed, 58 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/Documentation/topics/dpdk/vhost-user.rst
>> b/Documentation/topics/dpdk/vhost-user.rst
>>> index 5347995..8dff901 100644
>>> --- a/Documentation/topics/dpdk/vhost-user.rst
>>> +++ b/Documentation/topics/dpdk/vhost-user.rst
>>> @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added to the
>> switch, they must be
>>>  added to the guest. Like vhost-user ports, there are two ways to do this:
>> using
>>>  QEMU directly, or using libvirt. Only the QEMU case is covered here.
>>>
>>> +vhost-user client IOMMU
>>> +~~~
>>> +It is possible to enable IOMMU support for vHost User client ports. This is
>>> +a feature which restricts the vhost memory that a virtio device can access,
>> and
>>> +as such is useful in deployments in which security is a concern. IOMMU mode
>> may
>>> +be enabled on the command line::
>>> +
>>> +$ ovs-vsctl add-port br0 vhost-client-1 \
>>> +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
>>> + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>>> + options:vhost-iommu-support=true
>>> +
>>> +.. important::
>>> +
>>> +Enabling the IOMMU feature also enables the vhost user reply-ack
>> protocol;
>>> +this is known to work on QEMU v2.10.0, but is buggy on older versions
>>> +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feaure is disabled
>> by
>>> +default (and should remain so if using the aforementioned versions of
>> QEMU).
>>> +Starting with QEMU v2.9.1, vhost-iommu-support can safely be enabled,
>> even
>>> +without having an IOMMU device, with no performance penalty.
>>> +
>>>  Adding vhost-user-client ports to the guest (QEMU)
>>>  ~~
>>>
>>> diff --git a/NEWS b/NEWS
>>> index 74e59bf..c15dc24 100644
>>> --- a/NEWS
>>> +++ b/NEWS
>>> @@ -14,6 +14,7 @@ Post-v2.8.0
>>>   * Add support for compiling OVS with the latest Linux 4.13 kernel
>>> - DPDK:
>>>   * Add support for DPDK v17.11
>>> + * Add support for vHost IOMMU feature
>>>
>>>  v2.8.0 - 31 Aug 2017
>>>  
>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>> index ed5bf62..2e9633a 100644
>>> --- a/lib/netdev-dpdk.c
>>> +++ b/lib/netdev-dpdk.c
>>> @@ -1424,15 +1424,29 @@ netdev_dpdk_vhost_client_set_config(struct netdev
>> *netdev,
>>>  {
>>>  struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>>>  const char *path;
>>> +bool iommu_enable;
>>> +bool request_reconfigure = false;
>>> +uint64_t vhost_driver_flags_prev = dev->vhost_driver_flags;
>>>
>>>  ovs_mutex_lock(&dev->mutex);
>>>  if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
>>>  path = smap_get(args, "vhost-server-path");
>>>  if (path && strcmp(path, dev->vhost_id)) {
>>>  strcpy(dev->vhost_id, path);
>>> -netdev_request_reconfigure(netdev);
>>> +  

Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kevin Traynor
On 11/24/2017 08:06 AM, Ilya Maximets wrote:
> Hello, Mark.
> 
> Thanks for the patch. Few comments:
> 
> 1. We can not change the RTE_VHOST_USER_IOMMU_SUPPORT flag if device already
>registered, i.e. if (dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT) != 0.
>IMHO, we should print a warning message in that case and should not update
>the device flags and request reconfiguration.
> 

Also, I'm not sure there's any point in requesting a reconfigure on a change
of the iommu option, regardless of the state of the client flag.
rte_vhost_driver_register() will not happen anyway until there is a
vhost-server-path. The only thing I can think of is that it would
decouple the config and reconfig functions.
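
For reference, the guard suggested in point 1 could look roughly like this
inside netdev_dpdk_vhost_client_set_config() (sketch only, not a tested
patch):

    /* Sketch: only honour a change of vhost-iommu-support before the vhost
     * driver has been registered; otherwise warn and keep the old flags. */
    iommu_enable = smap_get_bool(args, "vhost-iommu-support", false);
    if (dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT) {
        /* Already registered: the flag cannot be changed any more. */
        VLOG_WARN("%s: vhost-iommu-support cannot be changed after the "
                  "vhost device has been registered",
                  netdev_get_name(netdev));
    } else {
        uint64_t flags = dev->vhost_driver_flags;

        if (iommu_enable) {
            flags |= RTE_VHOST_USER_IOMMU_SUPPORT;
        } else {
            flags &= ~RTE_VHOST_USER_IOMMU_SUPPORT;
        }
        if (flags != dev->vhost_driver_flags) {
            dev->vhost_driver_flags = flags;
            netdev_request_reconfigure(netdev);
        }
    }

Whether the netdev_request_reconfigure() call is needed at all here is the
open question above.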

>This should be clearly described in commit message and documentation that
>vhost-iommu-support can't be changed if device already fully initialized.
>It should be clear that this option should be set before or simultaneously
>with vhost-server-path.
> 
> 2. General style comment for the patch: According to coding style, you need to
>use braces for if statements even if there is only one statement in it.
> 
> 3. Few more comments inline.
> 
>> DPDK v17.11 introduces support for the vHost IOMMU feature.
>> This is a security feature, that restricts the vhost memory
>> that a virtio device may access.
>>
>> This feature also enables the vhost REPLY_ACK protocol, the
>> implementation of which is known to work in newer versions of
>> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
>> v2.9.0, inclusive). As such, the feature is disabled by default
>> in (and should remain so, for the aforementioned older QEMU
>> verions). Starting with QEMU v2.9.1, vhost-iommu-support can
>> safely be enabled, even without having an IOMMU device, with
>> no performance penalty.
>>
>> This patch adds a new vhost port option, vhost-iommu-support,
>> to allow enablement of the vhost IOMMU feature:
>>
>> $ ovs-vsctl add-port br0 vhost-client-1 \
>> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>>  options:vhost-iommu-support=true
>>
>> Note that support for this feature is only implemented for vhost
>> user client ports (since vhost user ports are considered deprecated).
>>
>> Signed-off-by: Mark Kavanagh 
>> Acked-by: Maxime Coquelin 
>> Acked-by: Ciara Loftus 
>> ---
>>  Documentation/topics/dpdk/vhost-user.rst | 21 +
>>  NEWS |  1 +
>>  lib/netdev-dpdk.c| 29 ++---
>>  vswitchd/vswitch.xml | 10 ++
>>  4 files changed, 58 insertions(+), 3 deletions(-)
>>
>> diff --git a/Documentation/topics/dpdk/vhost-user.rst 
>> b/Documentation/topics/dpdk/vhost-user.rst
>> index 5347995..8dff901 100644
>> --- a/Documentation/topics/dpdk/vhost-user.rst
>> +++ b/Documentation/topics/dpdk/vhost-user.rst
>> @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added to the 
>> switch, they must be
>>  added to the guest. Like vhost-user ports, there are two ways to do this: 
>> using
>>  QEMU directly, or using libvirt. Only the QEMU case is covered here.
>>  
>> +vhost-user client IOMMU
>> +~~~
>> +It is possible to enable IOMMU support for vHost User client ports. This is
>> +a feature which restricts the vhost memory that a virtio device can access, 
>> and
>> +as such is useful in deployments in which security is a concern. IOMMU mode 
>> may
>> +be enabled on the command line::
>> +
>> +$ ovs-vsctl add-port br0 vhost-client-1 \
>> +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
>> + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>> + options:vhost-iommu-support=true
>> +
>> +.. important::
>> +
>> +Enabling the IOMMU feature also enables the vhost user reply-ack 
>> protocol;
>> +this is known to work on QEMU v2.10.0, but is buggy on older versions
>> +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feaure is disabled 
>> by
>> +default (and should remain so if using the aforementioned versions of 
>> QEMU).
>> +Starting with QEMU v2.9.1, vhost-iommu-support can safely be enabled, 
>> even
>> +without having an IOMMU device, with no performance penalty.
>> +
>>  Adding vhost-user-client ports to the guest (QEMU)
>>  ~~
>>  
>> diff --git a/NEWS b/NEWS
>> index 74e59bf..c15dc24 100644
>> --- a/NEWS
>> +++ b/NEWS
>> @@ -14,6 +14,7 @@ Post-v2.8.0
>>   * Add support for compiling OVS with the latest Linux 4.13 kernel
>> - DPDK:
>>   * Add support for DPDK v17.11
>> + * Add support for vHost IOMMU feature
>>  
>>  v2.8.0 - 31 Aug 2017
>>  
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index ed5bf62..2e9633a 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -1424,15 

Re: [ovs-dev] [RFC PATCH v3 8/8] netdev-dpdk: support multi-segment jumbo frames

2017-11-24 Thread Kavanagh, Mark B
>From: O Mahony, Billy
>Sent: Thursday, November 23, 2017 11:23 AM
>To: Kavanagh, Mark B ; d...@openvswitch.org;
>qiud...@chinac.com
>Subject: RE: [ovs-dev] [RFC PATCH v3 8/8] netdev-dpdk: support multi-segment
>jumbo frames
>
>Hi Mark,
>
>Just one comment below.
>
>/Billy
>
>> -Original Message-
>> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
>> boun...@openvswitch.org] On Behalf Of Mark Kavanagh
>> Sent: Tuesday, November 21, 2017 6:29 PM
>> To: d...@openvswitch.org; qiud...@chinac.com
>> Subject: [ovs-dev] [RFC PATCH v3 8/8] netdev-dpdk: support multi-segment
>> jumbo frames
>>
>> Currently, jumbo frame support for OvS-DPDK is implemented by increasing the
>> size of mbufs within a mempool, such that each mbuf within the pool is large
>> enough to contain an entire jumbo frame of a user-defined size. Typically,
>for
>> each user-defined MTU, 'requested_mtu', a new mempool is created, containing
>> mbufs of size ~requested_mtu.
>>
>> With the multi-segment approach, a port uses a single mempool, (containing
>> standard/default-sized mbufs of ~2k bytes), irrespective of the user-
>requested
>> MTU value. To accommodate jumbo frames, mbufs are chained together, where
>> each mbuf in the chain stores a portion of the jumbo frame. Each mbuf in the
>> chain is termed a segment, hence the name.
>>
>> == Enabling multi-segment mbufs ==
>> Multi-segment and single-segment mbufs are mutually exclusive, and the user
>> must decide on which approach to adopt on init. The introduction of a new
>> OVSDB field, 'dpdk-multi-seg-mbufs', facilitates this. This is a global
>boolean
>> value, which determines how jumbo frames are represented across all DPDK
>> ports. In the absence of a user-supplied value, 'dpdk-multi-seg-mbufs'
>defaults
>> to false, i.e. multi-segment mbufs must be explicitly enabled / single-
>segment
>> mbufs remain the default.
>>
>[[BO'M]] Would it be more useful if multi-segment was enabled by default?
>Does enabling multi-segment mbufs result in much of a performance decrease
>when not-using jumbo frames? Either because jumbo frames are not coming in on
>the ingress port or because the mtu is set not to accept jumbo frames.

Hey Billy,

I think that single-segment should remain the default.

Enabling multi-segment mbufs implicitly means that the non-vectorized DPDK
driver Rx and Tx functions must be used, which are, by nature, not as
performant as their vectorized counterparts.
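
For anyone not familiar with the mechanism, here is a minimal standalone sketch
of how a jumbo frame is represented with chained default-sized mbufs in plain
DPDK (illustrative only; this is not the OVS datapath code, and 'mp' is assumed
to be an existing mempool of ~2KB mbufs):

    #include <rte_common.h>
    #include <rte_mbuf.h>

    /* Build one logical packet of 'frame_len' bytes by chaining segments
     * from a pool of default-sized mbufs, instead of one oversized mbuf. */
    static struct rte_mbuf *
    build_jumbo(struct rte_mempool *mp, uint32_t frame_len)
    {
        struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
        uint32_t left, chunk;

        if (!head) {
            return NULL;
        }

        chunk = RTE_MIN(frame_len, (uint32_t) rte_pktmbuf_tailroom(head));
        rte_pktmbuf_append(head, (uint16_t) chunk);  /* reserve payload room */
        left = frame_len - chunk;

        while (left > 0) {
            struct rte_mbuf *seg = rte_pktmbuf_alloc(mp);

            if (!seg) {
                rte_pktmbuf_free(head);              /* frees the whole chain */
                return NULL;
            }
            chunk = RTE_MIN(left, (uint32_t) rte_pktmbuf_tailroom(seg));
            rte_pktmbuf_append(seg, (uint16_t) chunk);
            left -= chunk;

            /* Links 'seg' and updates nb_segs/pkt_len on 'head'. */
            if (rte_pktmbuf_chain(head, seg)) {
                rte_pktmbuf_free(seg);
                rte_pktmbuf_free(head);
                return NULL;
            }
        }
        return head;
    }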

I don't have comparative figures to hand, but I'll note the same in the cover
letter of any subsequent version of this patchset.

Thanks,
Mark

>
>Obviously not a blocker to this patch-set. Maybe something to be looked at in
>the future.
>
>> Setting the field is identical to setting existing DPDK-specific OVSDB
>> fields:
>>
>> ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
>> ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
>> ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096,0
>> ==> ovs-vsctl set Open_vSwitch . other_config:dpdk-multi-seg-mbufs=true
>>
>> Signed-off-by: Mark Kavanagh 
>> ---
>>  NEWS |  1 +
>>  lib/dpdk.c   |  7 +++
>>  lib/netdev-dpdk.c| 43 ---
>>  lib/netdev-dpdk.h|  1 +
>>  vswitchd/vswitch.xml | 20 
>>  5 files changed, 69 insertions(+), 3 deletions(-)
>>
>> diff --git a/NEWS b/NEWS
>> index c15dc24..657b598 100644
>> --- a/NEWS
>> +++ b/NEWS
>> @@ -15,6 +15,7 @@ Post-v2.8.0
>> - DPDK:
>>   * Add support for DPDK v17.11
>>   * Add support for vHost IOMMU feature
>> + * Add support for multi-segment mbufs
>>
>>  v2.8.0 - 31 Aug 2017
>>  
>> diff --git a/lib/dpdk.c b/lib/dpdk.c
>> index 8da6c32..4c28bd0 100644
>> --- a/lib/dpdk.c
>> +++ b/lib/dpdk.c
>> @@ -450,6 +450,13 @@ dpdk_init__(const struct smap *ovs_other_config)
>>
>>  /* Finally, register the dpdk classes */
>>  netdev_dpdk_register();
>> +
>> +bool multi_seg_mbufs_enable = smap_get_bool(ovs_other_config,
>> +"dpdk-multi-seg-mbufs", false);
>> +if (multi_seg_mbufs_enable) {
>> +VLOG_INFO("DPDK multi-segment mbufs enabled\n");
>> +netdev_dpdk_multi_segment_mbufs_enable();
>> +}
>>  }
>>
>>  void
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index 36275bd..293edad
>> 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -65,6 +65,7 @@ enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
>>
>>  VLOG_DEFINE_THIS_MODULE(netdev_dpdk);
>>  static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
>> +static bool dpdk_multi_segment_mbufs = false;
>>
>>  #define DPDK_PORT_WATCHDOG_INTERVAL 5
>>
>> @@ -500,6 +501,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, uint16_t
>> frame_len)
>>+ dev->requested_n_txq * dev->requested_txq_size
>>+ MIN(RTE_MAX_LCORE, dev->requested_n_rxq) *
>> NETDEV_MAX_BURST
>> 

Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kavanagh, Mark B
>From: Kevin Traynor [mailto:ktray...@redhat.com]
>Sent: Friday, November 24, 2017 11:31 AM
>To: Kavanagh, Mark B ; d...@openvswitch.org
>Cc: maxime.coque...@redhat.com
>Subject: Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU
>feature
>
>On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
>> DPDK v17.11 introduces support for the vHost IOMMU feature.
>> This is a security feature, that restricts the vhost memory
>> that a virtio device may access.
>>
>> This feature also enables the vhost REPLY_ACK protocol, the
>> implementation of which is known to work in newer versions of
>> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
>> v2.9.0, inclusive). As such, the feature is disabled by default
>> in (and should remain so, for the aforementioned older QEMU
>> verions). Starting with QEMU v2.9.1, vhost-iommu-support can
>> safely be enabled, even without having an IOMMU device, with
>> no performance penalty.
>>
>> This patch adds a new vhost port option, vhost-iommu-support,
>> to allow enablement of the vhost IOMMU feature:
>>
>> $ ovs-vsctl add-port br0 vhost-client-1 \
>> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>>  options:vhost-iommu-support=true
>>
>> Note that support for this feature is only implemented for vhost
>> user client ports (since vhost user ports are considered deprecated).
>>
>> Signed-off-by: Mark Kavanagh 
>> Acked-by: Maxime Coquelin 
>> Acked-by: Ciara Loftus 
>> ---
>>  Documentation/topics/dpdk/vhost-user.rst | 21 +
>>  NEWS |  1 +
>>  lib/netdev-dpdk.c| 29 ++---
>>  vswitchd/vswitch.xml | 10 ++
>>  4 files changed, 58 insertions(+), 3 deletions(-)
>>
>> diff --git a/Documentation/topics/dpdk/vhost-user.rst
>b/Documentation/topics/dpdk/vhost-user.rst
>> index 5347995..8dff901 100644
>> --- a/Documentation/topics/dpdk/vhost-user.rst
>> +++ b/Documentation/topics/dpdk/vhost-user.rst
>> @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added to the
>switch, they must be
>>  added to the guest. Like vhost-user ports, there are two ways to do this:
>using
>>  QEMU directly, or using libvirt. Only the QEMU case is covered here.
>>
>> +vhost-user client IOMMU
>> +~~~
>> +It is possible to enable IOMMU support for vHost User client ports. This is
>> +a feature which restricts the vhost memory that a virtio device can access,
>and
>> +as such is useful in deployments in which security is a concern. IOMMU mode
>may
>> +be enabled on the command line::
>> +
>> +$ ovs-vsctl add-port br0 vhost-client-1 \
>> +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
>> + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>> + options:vhost-iommu-support=true
>> +
>> +.. important::
>> +
>> +Enabling the IOMMU feature also enables the vhost user reply-ack
>protocol;
>> +this is known to work on QEMU v2.10.0, but is buggy on older versions
>> +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feaure is disabled
>by
>> +default (and should remain so if using the aforementioned versions of
>QEMU).
>> +Starting with QEMU v2.9.1, vhost-iommu-support can safely be enabled,
>even
>> +without having an IOMMU device, with no performance penalty.
>> +
>>  Adding vhost-user-client ports to the guest (QEMU)
>>  ~~
>>
>> diff --git a/NEWS b/NEWS
>> index 74e59bf..c15dc24 100644
>> --- a/NEWS
>> +++ b/NEWS
>> @@ -14,6 +14,7 @@ Post-v2.8.0
>>   * Add support for compiling OVS with the latest Linux 4.13 kernel
>> - DPDK:
>>   * Add support for DPDK v17.11
>> + * Add support for vHost IOMMU feature
>>
>>  v2.8.0 - 31 Aug 2017
>>  
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index ed5bf62..2e9633a 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -1424,15 +1424,29 @@ netdev_dpdk_vhost_client_set_config(struct netdev
>*netdev,
>>  {
>>  struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>>  const char *path;
>> +bool iommu_enable;
>> +bool request_reconfigure = false;
>> +uint64_t vhost_driver_flags_prev = dev->vhost_driver_flags;
>>
>>  ovs_mutex_lock(&dev->mutex);
>>  if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
>>  path = smap_get(args, "vhost-server-path");
>>  if (path && strcmp(path, dev->vhost_id)) {
>>  strcpy(dev->vhost_id, path);
>> -netdev_request_reconfigure(netdev);
>> +request_reconfigure = true;
>>  }
>>  }
>> +
>> +iommu_enable = smap_get_bool(args, "vhost-iommu-support", false);
>> +if (iommu_enable)
>> +

Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Kevin Traynor
On 11/16/2017 11:01 AM, Mark Kavanagh wrote:
> DPDK v17.11 introduces support for the vHost IOMMU feature.
> This is a security feature, that restricts the vhost memory
> that a virtio device may access.
> 
> This feature also enables the vhost REPLY_ACK protocol, the
> implementation of which is known to work in newer versions of
> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
> v2.9.0, inclusive). As such, the feature is disabled by default
> in (and should remain so, for the aforementioned older QEMU
> verions). Starting with QEMU v2.9.1, vhost-iommu-support can
> safely be enabled, even without having an IOMMU device, with
> no performance penalty.
> 
> This patch adds a new vhost port option, vhost-iommu-support,
> to allow enablement of the vhost IOMMU feature:
> 
> $ ovs-vsctl add-port br0 vhost-client-1 \
> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>  options:vhost-iommu-support=true
> 
> Note that support for this feature is only implemented for vhost
> user client ports (since vhost user ports are considered deprecated).
> 
> Signed-off-by: Mark Kavanagh 
> Acked-by: Maxime Coquelin 
> Acked-by: Ciara Loftus 
> ---
>  Documentation/topics/dpdk/vhost-user.rst | 21 +
>  NEWS |  1 +
>  lib/netdev-dpdk.c| 29 ++---
>  vswitchd/vswitch.xml | 10 ++
>  4 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/topics/dpdk/vhost-user.rst 
> b/Documentation/topics/dpdk/vhost-user.rst
> index 5347995..8dff901 100644
> --- a/Documentation/topics/dpdk/vhost-user.rst
> +++ b/Documentation/topics/dpdk/vhost-user.rst
> @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added to the 
> switch, they must be
>  added to the guest. Like vhost-user ports, there are two ways to do this: 
> using
>  QEMU directly, or using libvirt. Only the QEMU case is covered here.
>  
> +vhost-user client IOMMU
> +~~~
> +It is possible to enable IOMMU support for vHost User client ports. This is
> +a feature which restricts the vhost memory that a virtio device can access, 
> and
> +as such is useful in deployments in which security is a concern. IOMMU mode 
> may
> +be enabled on the command line::
> +
> +$ ovs-vsctl add-port br0 vhost-client-1 \
> +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
> + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> + options:vhost-iommu-support=true
> +
> +.. important::
> +
> +Enabling the IOMMU feature also enables the vhost user reply-ack 
> protocol;
> +this is known to work on QEMU v2.10.0, but is buggy on older versions
> +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feaure is disabled by
> +default (and should remain so if using the aforementioned versions of 
> QEMU).
> +Starting with QEMU v2.9.1, vhost-iommu-support can safely be enabled, 
> even
> +without having an IOMMU device, with no performance penalty.
> +
>  Adding vhost-user-client ports to the guest (QEMU)
>  ~~
>  
> diff --git a/NEWS b/NEWS
> index 74e59bf..c15dc24 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -14,6 +14,7 @@ Post-v2.8.0
>   * Add support for compiling OVS with the latest Linux 4.13 kernel
> - DPDK:
>   * Add support for DPDK v17.11
> + * Add support for vHost IOMMU feature
>  
>  v2.8.0 - 31 Aug 2017
>  
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index ed5bf62..2e9633a 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -1424,15 +1424,29 @@ netdev_dpdk_vhost_client_set_config(struct netdev 
> *netdev,
>  {
>  struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>  const char *path;
> +bool iommu_enable;
> +bool request_reconfigure = false;
> +uint64_t vhost_driver_flags_prev = dev->vhost_driver_flags;
>  
>  ovs_mutex_lock(&dev->mutex);
>  if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
>  path = smap_get(args, "vhost-server-path");
>  if (path && strcmp(path, dev->vhost_id)) {
>  strcpy(dev->vhost_id, path);
> -netdev_request_reconfigure(netdev);
> +request_reconfigure = true;
>  }
>  }
> +
> +iommu_enable = smap_get_bool(args, "vhost-iommu-support", false);
> +if (iommu_enable)
> +dev->vhost_driver_flags |= RTE_VHOST_USER_IOMMU_SUPPORT;
> +else
> +dev->vhost_driver_flags &= ~RTE_VHOST_USER_IOMMU_SUPPORT;
> +if (vhost_driver_flags_prev != dev->vhost_driver_flags)
> +request_reconfigure = true;
> +
> +if (request_reconfigure)
> +netdev_request_reconfigure(netdev);
>  ovs_mutex_unlock(&dev->mutex);
> 

Re: [ovs-dev] [PATCH v4 2/3] dpif-netdev: Rename rxq_cycle_sort to compare_rxq_cycles.

2017-11-24 Thread O Mahony, Billy
Acked-by: Billy O'Mahony

> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Thursday, November 23, 2017 7:42 PM
> To: d...@openvswitch.org; aserd...@ovn.org; i.maxim...@samsung.com; O
> Mahony, Billy ; Stokes, Ian 
> Cc: Kevin Traynor 
> Subject: [PATCH v4 2/3] dpif-netdev: Rename rxq_cycle_sort to
> compare_rxq_cycles.
> 
> This function is used for comparison between queues as part of the sort. It 
> does
> not do the sort itself.
> As such, give it a more appropriate name.
> 
> Suggested-by: Billy O'Mahony 
> Signed-off-by: Kevin Traynor 
> ---
> 
> V4: Added patch into series after suggestion by Billy
> 
>  lib/dpif-netdev.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c index f5cdd92..657df71 
> 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -3446,5 +3446,5 @@ rr_numa_list_destroy(struct rr_numa_list *rr)
>  /* Sort Rx Queues by the processing cycles they are consuming. */  static 
> int -
> rxq_cycle_sort(const void *a, const void *b)
> +compare_rxq_cycles(const void *a, const void *b)
>  {
>  struct dp_netdev_rxq *qa;
> @@ -3535,5 +3535,5 @@ rxq_scheduling(struct dp_netdev *dp, bool pinned)
> OVS_REQUIRES(dp->port_mutex)
>  /* Sort the queues in order of the processing cycles
>   * they consumed during their last pmd interval. */
> -qsort(rxqs, n_rxqs, sizeof *rxqs, rxq_cycle_sort);
> +qsort(rxqs, n_rxqs, sizeof *rxqs, compare_rxq_cycles);
>  }
> 
> --
> 1.8.3.1

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH RFC] netdev-dpdk: Fix device obtain mac address when received first packet in vhost type

2017-11-24 Thread Chen Hailin
Hi Aaron Conole && Jianfeng,

STP cannot work with ovs-dpdk vhostuser ports, because the attached vhost
device doesn't have a MAC address.

We have two ways to solve this problem:
1. The vhost device learns the MAC address from the first packet, as in my
   first patch.
2. The virtio device actively notifies the vhost user backend of its MAC
   address.

In my opinion, if we treat it as a device, we should allocate a MAC address
for the device when the VM starts.

Which one do you think is better?
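
To make option 1 concrete, here is a rough sketch of learning the MAC from the
first received frame (illustrative only: it uses the DPDK 17.x struct
ether_hdr names, and 'struct example_dev' with its 'hwaddr'/'mac_learned'
fields is a placeholder for the real OVS netdev structures, not actual OVS
code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    struct example_dev {
        uint8_t hwaddr[ETHER_ADDR_LEN];   /* placeholder for the port MAC */
        bool mac_learned;
    };

    /* Sketch: copy the source MAC of the first frame seen on the vhost port
     * into the netdev's MAC, so protocols such as STP have an address to
     * send BPDUs with. */
    static void
    vhost_learn_mac(struct example_dev *dev, const struct rte_mbuf *pkt)
    {
        const struct ether_hdr *eth;

        if (dev->mac_learned) {
            return;                       /* already learned */
        }

        eth = rte_pktmbuf_mtod(pkt, const struct ether_hdr *);
        memcpy(dev->hwaddr, &eth->s_addr, ETHER_ADDR_LEN);
        dev->mac_learned = true;
    }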



Best Regards,
Chen Hailin
che...@arraynetworks.com.cn
 
From: Aaron Conole
Date: 2017-11-18 10:00
To: Hailin Chen
CC: ovs-dev@openvswitch.org; Maxime Coquelin; cl...@arraynetworks.com.cn
Subject: Re: [ovs-dev] [PATCH RFC] netdev-dpdk: Fix device obtain mac address 
when received first packet in vhost type
Hi Hailin,
 
Hailin Chen  writes:
 
> The stp could not work on netdev-dpdk if network is loop.
> Because the stp protocol negotiates designate port by sending
> BPDU packets which contains MAC address.
> However the device doesn't have MAC address in vhostuser type.
> Thus, function send_bpdu_cb would not send BPDU packets.
>
> This patch will set the MAC for device when received first packet.
>
> Signed-off-by: Hailin Chen 
> ---
 
Thanks for the patch.
 
In general, I don't think this is the right approach to deal with this
type of issue.  I believe the problem statement is that OvS bridge is
unaware of the guest MAC address - did I get it right?  In that case, I
would think that a better way to solve this would be to have virtio tell
the mac address of the guest.  I don't recall right now if that's
allowed in the virtio spec, but I do remember some kind of negotiation
features.
 
I've CC'd Maxime, who is one of the maintainers of the virtio code from
DPDK side.  Perhaps there is an alternate way to solve this.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] Restore flows is slow using "ovs-save", is there someone who use RPC to speed up flow restore?

2017-11-24 Thread Timothy M. Redaelli
On 11/24/2017 04:07 AM, Sam wrote:
> Hi all,
> 
> I'm working on speeding up OVS restart. I found that restoring flows is slow
> when there are lots of flows, so I want to use RPC to store the flows in
> another process and then restore them from it.
> 
> Has someone already worked on this? Could you share how much faster it is
> after this change? Thank you~
> 
> Or is there some other ways to speed up? Thank you~!

Hi,
I sent a patch [1], already merged on the master branch, that makes ovs-save
much faster by using --bundle (if at least OpenFlow 1.4 is supported) and a
file (instead of a heredoc).

With that patchset you'll be able to restore 1 million flows with a traffic
disruption of less than 15 seconds.

[1]
https://patchwork.ozlabs.org/project/openvswitch/list/?series=4973=%2A=both
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH] netdev-dpdk: defer MTU set after interface start

2017-11-24 Thread Nitin Katiyar
Hi,
Is it being called after device start? If yes, then *dev_mtu_set() will not 
serve the purpose as this function expects the device to be stopped first.

Regards,
Nitin
-Original Message-
From: Stokes, Ian [mailto:ian.sto...@intel.com] 
Sent: Friday, November 24, 2017 1:57 PM
To: Matteo Croce ; d...@openvswitch.org; Kavanagh, Mark B 

Subject: Re: [ovs-dev] [PATCH] netdev-dpdk: defer MTU set after interface start

> Some PMD assumes that the RX buffers are already allocated when 
> setting the device MTU, because the RX buffer size depends on the MTU.
> This worked until 67fe6d635193 added a call to rte_eth_dev_set_mtu() 
> in the init code, which would set the MTU before the RX buffer 
> allocation, triggering a segmentation fault with some PMD:
> 
> Stack trace of thread 20680:
> #0  0x559464396534 qede_set_mtu (ovs-vswitchd)
> #1  0x559464323c41 rte_eth_dev_set_mtu (ovs-vswitchd)
> #2  0x5594645f5e85 dpdk_eth_dev_queue_setup (ovs-vswitchd)
> #3  0x5594645f8ae6 netdev_dpdk_reconfigure (ovs-vswitchd)
> #4  0x55946452225c reconfigure_datapath (ovs-vswitchd)
> #5  0x559464522d07 do_add_port (ovs-vswitchd)
> #6  0x559464522e8d dpif_netdev_port_add (ovs-vswitchd)
> #7  0x559464528dde dpif_port_add (ovs-vswitchd)
> #8  0x5594644dc0e0 port_add (ovs-vswitchd)
> #9  0x5594644d2ab1 ofproto_port_add (ovs-vswitchd)
> #10 0x5594644c0a85 bridge_add_ports__ (ovs-vswitchd)
> #11 0x5594644c26e8 bridge_reconfigure (ovs-vswitchd)
> #12 0x5594644c5c49 bridge_run (ovs-vswitchd)
> #13 0x5594642d4155 main (ovs-vswitchd)
> #14 0x7f0e1444bc05 __libc_start_main (libc.so.6)
> #15 0x5594642d8328 _start (ovs-vswitchd)
> 
> A possible solution could be to move the first call to
> rte_eth_dev_set_mtu() just after the device start instead of
> dpdk_eth_dev_queue_setup() which, by the way, set the MTU multiple 
> times as the call to rte_eth_dev_set_mtu() was in a loop.
> 
> CC: Mark Kavanagh 
> Fixes: 67fe6d635193 ("netdev-dpdk: use rte_eth_dev_set_mtu.")
> Signed-off-by: Matteo Croce 
> ---
>  lib/netdev-dpdk.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index
> 76e79be25..229aa4a76 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -750,13 +750,6 @@ dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, 
> int n_rxq, int n_txq)
>  break;
>  }
> 
> -diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
> -if (diag) {
> -VLOG_ERR("Interface %s MTU (%d) setup error: %s",
> -dev->up.name, dev->mtu, rte_strerror(-diag));
> -break;
> -}
> -
>  for (i = 0; i < n_txq; i++) {
>  diag = rte_eth_tx_queue_setup(dev->port_id, i, dev->txq_size,
>dev->socket_id, NULL); @@ -
> 849,6 +842,13 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev)
>  return -diag;
>  }
> 
> +diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
> +if (diag) {
> +VLOG_ERR("Interface %s MTU (%d) setup error: %s",
> +dev->up.name, dev->mtu, rte_strerror(-diag));
> +return -diag;
> +}
> +

In my testing I didn't see any issues here, but would like to flag it to our 
MTU Guru (Mark K).

Any opinion on this? From the API description it looks like it is ok?


>  rte_eth_promiscuous_enable(dev->port_id);
>  rte_eth_allmulticast_enable(dev->port_id);
> 
> --
> 2.13.6
> 
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Stokes, Ian


> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Stokes, Ian
> Sent: Friday, November 24, 2017 8:19 AM
> To: Ilya Maximets ; ovs-dev@openvswitch.org;
> Kavanagh, Mark B 
> Cc: Maxime Coquelin 
> Subject: Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost
> IOMMU feature
> 
> > Hello, Mark.
> >
> > Thanks for the patch. Few comments:
> >
> 
> Thanks for the feedback, Ilya; great minds :). The feedback is quite
> timely; from my point of view I'd like to see these issues resolved as
> quickly as possible. I don't think there's anything show-stopper-wise in
> what you've raised, and we should be able to make these adjustments.
> 
> 
> @Mark, what do you think?
> 
> Ian
> 
> 
> > 1. We can not change the RTE_VHOST_USER_IOMMU_SUPPORT flag if device
> > already
> >registered, i.e. if (dev->vhost_driver_flags &
> > RTE_VHOST_USER_CLIENT) != 0.
> >IMHO, we should print a warning message in that case and should not
> > update
> >the device flags and request reconfiguration.
> >
> >This should be clearly described in commit message and
> > documentation that
> >vhost-iommu-support can't be changed if device already fully
> > initialized.
> >It should be clear that this option should be set before or
> > simultaneously
> >with vhost-server-path.
> >
> > 2. General style comment for the patch: According to coding style, you
> > need to
> >use braces for if statements even if there is only one statement in
> it.
> >
> > 3. Few more comments inline.
> >
> > > DPDK v17.11 introduces support for the vHost IOMMU feature.
> > > This is a security feature that restricts the vhost memory that a
> > > virtio device may access.
> > >
> > > This feature also enables the vhost REPLY_ACK protocol, the
> > > implementation of which is known to work in newer versions of QEMU
> > > (i.e. v2.10.0), but is buggy in older versions (v2.7.0 - v2.9.0,
> > > inclusive). As such, the feature is disabled by default in (and
> > > should remain so, for the aforementioned older QEMU versions).
> > > Starting with QEMU v2.9.1, vhost-iommu-support can safely be
> > > enabled, even without having an IOMMU device, with no performance
> penalty.
> > >
> > > This patch adds a new vhost port option, vhost-iommu-support, to
> > > allow enablement of the vhost IOMMU feature:
> > >
> > > $ ovs-vsctl add-port br0 vhost-client-1 \
> > > -- set Interface vhost-client-1 type=dpdkvhostuserclient \
> > >  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> > >  options:vhost-iommu-support=true
> > >
> > > Note that support for this feature is only implemented for vhost
> > > user client ports (since vhost user ports are considered deprecated).
> > >
> > > Signed-off-by: Mark Kavanagh 
> > > Acked-by: Maxime Coquelin 
> > > Acked-by: Ciara Loftus 
> > > ---
> > >  Documentation/topics/dpdk/vhost-user.rst | 21 +
> > >  NEWS |  1 +
> > >  lib/netdev-dpdk.c| 29 ++---
> > >  vswitchd/vswitch.xml | 10 ++
> > >  4 files changed, 58 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/Documentation/topics/dpdk/vhost-user.rst
> > > b/Documentation/topics/dpdk/vhost-user.rst
> > > index 5347995..8dff901 100644
> > > --- a/Documentation/topics/dpdk/vhost-user.rst
> > > +++ b/Documentation/topics/dpdk/vhost-user.rst
> > > @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been
> > > added to the switch, they must be  added to the guest. Like
> > > vhost-user ports, there are two ways to do this: using  QEMU
> > > directly, or using
> > libvirt. Only the QEMU case is covered here.
> > >
> > > +vhost-user client IOMMU
> > > +~~~
> > > +It is possible to enable IOMMU support for vHost User client ports.
> > > +This is a feature which restricts the vhost memory that a virtio
> > > +device can access, and as such is useful in deployments in which
> > > +security is a concern. IOMMU mode may be enabled on the command
> line::
> > > +
> > > +$ ovs-vsctl add-port br0 vhost-client-1 \
> > > +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
> > > + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> > > + options:vhost-iommu-support=true
> > > +
> > > +.. important::
> > > +
> > > +Enabling the IOMMU feature also enables the vhost user
> > > + reply-ack
> > protocol;
> > > +this is known to work on QEMU v2.10.0, but is buggy on older
> > versions
> > > +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feature is
> > disabled by
> > > +default (and should remain so if using the aforementioned
> > > + versions
> > of QEMU).
> > > +Starting with QEMU v2.9.1, vhost-iommu-support can safely be
> > enabled, even
> > > + 

Re: [ovs-dev] [PATCH] netdev-dpdk: defer MTU set after interface start

2017-11-24 Thread Stokes, Ian
> Some PMDs assume that the RX buffers are already allocated when setting
> the device MTU, because the RX buffer size depends on the MTU.
> This worked until 67fe6d635193 added a call to rte_eth_dev_set_mtu() in
> the init code, which would set the MTU before the RX buffer allocation,
> triggering a segmentation fault with some PMDs:
> 
> Stack trace of thread 20680:
> #0  0x559464396534 qede_set_mtu (ovs-vswitchd)
> #1  0x559464323c41 rte_eth_dev_set_mtu (ovs-vswitchd)
> #2  0x5594645f5e85 dpdk_eth_dev_queue_setup (ovs-vswitchd)
> #3  0x5594645f8ae6 netdev_dpdk_reconfigure (ovs-vswitchd)
> #4  0x55946452225c reconfigure_datapath (ovs-vswitchd)
> #5  0x559464522d07 do_add_port (ovs-vswitchd)
> #6  0x559464522e8d dpif_netdev_port_add (ovs-vswitchd)
> #7  0x559464528dde dpif_port_add (ovs-vswitchd)
> #8  0x5594644dc0e0 port_add (ovs-vswitchd)
> #9  0x5594644d2ab1 ofproto_port_add (ovs-vswitchd)
> #10 0x5594644c0a85 bridge_add_ports__ (ovs-vswitchd)
> #11 0x5594644c26e8 bridge_reconfigure (ovs-vswitchd)
> #12 0x5594644c5c49 bridge_run (ovs-vswitchd)
> #13 0x5594642d4155 main (ovs-vswitchd)
> #14 0x7f0e1444bc05 __libc_start_main (libc.so.6)
> #15 0x5594642d8328 _start (ovs-vswitchd)
> 
> A possible solution could be to move the first call to
> rte_eth_dev_set_mtu() to just after the device start, instead of
> dpdk_eth_dev_queue_setup(), which, incidentally, set the MTU multiple times
> because the call to rte_eth_dev_set_mtu() was inside a loop.
> 
> CC: Mark Kavanagh 
> Fixes: 67fe6d635193 ("netdev-dpdk: use rte_eth_dev_set_mtu.")
> Signed-off-by: Matteo Croce 
> ---
>  lib/netdev-dpdk.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 76e79be25..229aa4a76 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -750,13 +750,6 @@ dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
>  break;
>  }
> 
> -diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
> -if (diag) {
> -VLOG_ERR("Interface %s MTU (%d) setup error: %s",
> -dev->up.name, dev->mtu, rte_strerror(-diag));
> -break;
> -}
> -
>  for (i = 0; i < n_txq; i++) {
>  diag = rte_eth_tx_queue_setup(dev->port_id, i, dev->txq_size,
>   dev->socket_id, NULL);
> @@ -849,6 +842,13 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev)
>  return -diag;
>  }
> 
> +diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);
> +if (diag) {
> +VLOG_ERR("Interface %s MTU (%d) setup error: %s",
> +dev->up.name, dev->mtu, rte_strerror(-diag));
> +return -diag;
> +}
> +

In my testing I didn't see any issues here, but I would like to flag it to our
MTU guru (Mark K).

Any opinion on this? From the API description it looks like it is OK?


>  rte_eth_promiscuous_enable(dev->port_id);
>  rte_eth_allmulticast_enable(dev->port_id);
> 
> --
> 2.13.6
> 
> ___
> dev mailing list
> d...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Stokes, Ian
> Hello, Mark.
> 
> Thanks for the patch. Few comments:
> 

Thanks for the feedback, Ilya; great minds :). The feedback is quite timely;
from my point of view I'd like to see these issues resolved as quickly as
possible. I don't think there's anything show-stopper-wise in what you've
raised, and we should be able to make these adjustments.


@Mark, what do you think?

Ian


> 1. We can not change the RTE_VHOST_USER_IOMMU_SUPPORT flag if device
> already
>registered, i.e. if (dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)
> != 0.
>IMHO, we should print a warning message in that case and should not
> update
>the device flags and request reconfiguration.
> 
>This should be clearly described in commit message and documentation
> that
>vhost-iommu-support can't be changed if device already fully
> initialized.
>It should be clear that this option should be set before or
> simultaneously
>with vhost-server-path.
> 
> 2. General style comment for the patch: According to coding style, you
> need to
>use braces for if statements even if there is only one statement in it.
> 
> 3. Few more comments inline.
> 
> > DPDK v17.11 introduces support for the vHost IOMMU feature.
> > This is a security feature that restricts the vhost memory that a
> > virtio device may access.
> >
> > This feature also enables the vhost REPLY_ACK protocol, the
> > implementation of which is known to work in newer versions of QEMU
> > (i.e. v2.10.0), but is buggy in older versions (v2.7.0 - v2.9.0,
> > inclusive). As such, the feature is disabled by default in (and should
> > remain so, for the aforementioned older QEMU versions). Starting with
> > QEMU v2.9.1, vhost-iommu-support can safely be enabled, even without
> > having an IOMMU device, with no performance penalty.
> >
> > This patch adds a new vhost port option, vhost-iommu-support, to allow
> > enablement of the vhost IOMMU feature:
> >
> > $ ovs-vsctl add-port br0 vhost-client-1 \
> > -- set Interface vhost-client-1 type=dpdkvhostuserclient \
> >  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> >  options:vhost-iommu-support=true
> >
> > Note that support for this feature is only implemented for vhost user
> > client ports (since vhost user ports are considered deprecated).
> >
> > Signed-off-by: Mark Kavanagh 
> > Acked-by: Maxime Coquelin 
> > Acked-by: Ciara Loftus 
> > ---
> >  Documentation/topics/dpdk/vhost-user.rst | 21 +
> >  NEWS |  1 +
> >  lib/netdev-dpdk.c| 29 ++---
> >  vswitchd/vswitch.xml | 10 ++
> >  4 files changed, 58 insertions(+), 3 deletions(-)
> >
> > diff --git a/Documentation/topics/dpdk/vhost-user.rst
> > b/Documentation/topics/dpdk/vhost-user.rst
> > index 5347995..8dff901 100644
> > --- a/Documentation/topics/dpdk/vhost-user.rst
> > +++ b/Documentation/topics/dpdk/vhost-user.rst
> > @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added
> > to the switch, they must be  added to the guest. Like vhost-user
> > ports, there are two ways to do this: using  QEMU directly, or using
> libvirt. Only the QEMU case is covered here.
> >
> > +vhost-user client IOMMU
> > +~~~
> > +It is possible to enable IOMMU support for vHost User client ports.
> > +This is a feature which restricts the vhost memory that a virtio
> > +device can access, and as such is useful in deployments in which
> > +security is a concern. IOMMU mode may be enabled on the command line::
> > +
> > +$ ovs-vsctl add-port br0 vhost-client-1 \
> > +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
> > + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> > + options:vhost-iommu-support=true
> > +
> > +.. important::
> > +
> > +Enabling the IOMMU feature also enables the vhost user reply-ack
> protocol;
> > +this is known to work on QEMU v2.10.0, but is buggy on older
> versions
> > +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feature is
> disabled by
> > +default (and should remain so if using the aforementioned versions
> of QEMU).
> > +Starting with QEMU v2.9.1, vhost-iommu-support can safely be
> enabled, even
> > +without having an IOMMU device, with no performance penalty.
> > +
> >  Adding vhost-user-client ports to the guest (QEMU)
> > ~~
> >
> > diff --git a/NEWS b/NEWS
> > index 74e59bf..c15dc24 100644
> > --- a/NEWS
> > +++ b/NEWS
> > @@ -14,6 +14,7 @@ Post-v2.8.0
> >   * Add support for compiling OVS with the latest Linux 4.13 kernel
> > - DPDK:
> >   * Add support for DPDK v17.11
> > + * Add support for vHost IOMMU feature
> >
> >  v2.8.0 - 31 Aug 2017
> >  
> > diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index
> > ed5bf62..2e9633a 100644
> > --- a/lib/netdev-dpdk.c
> > +++ 

Re: [ovs-dev] [PATCH 0/2] DPDK v17.11 Support

2017-11-24 Thread Stokes, Ian
> This patchset adds support for DPDK v17.11:
> - the first patch introduces minor code updates to accomodate DPDK API
>   changes, and also updates Documentation and travis scripts.
> - the second patch adds a new vhost-user port option,
>   vhost-iommu-support; this is required in order to take advantage of
>   the vHost IOMMU feature introduced in DPDK v17.11.
> 
> Note that the previous RFC version of this patch (Acked x2) is here:
> https://mail.openvswitch.org/pipermail/ovs-dev/2017-November/340731.html.
> 
> Mark Kavanagh (2):
>   netdev-dpdk: DPDK v17.11 upgrade
>   netdev-dpdk: add support for vhost IOMMU feature

Hi Folks,

I'm anxious to get support for DPDK 17.11 into OVS with DPDK for the 2.9 
release.

I'm going to propose a pull request next week that includes this patch set.
Before then, I would like people to test their features for compatibility
against it and to review the patch.

I know there has been extensive testing on the Intel side. There has also
been sign-off from Red Hat on dev and integration testing.

I will raise this as an item at next week's community sync meeting. Would 
appreciate any feedback between now and then.

Thanks
Ian
> 
>  .travis/linux-build.sh   |  2 +-
>  Documentation/faq/releases.rst   |  1 +
>  Documentation/intro/install/dpdk.rst | 10 -
>  Documentation/topics/dpdk/ring.rst   |  2 +-
>  Documentation/topics/dpdk/vhost-user.rst | 29 +
>  NEWS |  3 +++
>  lib/netdev-dpdk.c| 36 ++--
>  vswitchd/vswitch.xml | 10 +
>  8 files changed, 76 insertions(+), 17 deletions(-)
> 
> --
> 1.9.3

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-24 Thread Ilya Maximets
Hello, Mark.

Thanks for the patch. Few comments:

1. We cannot change the RTE_VHOST_USER_IOMMU_SUPPORT flag if the device is already
   registered, i.e. if (dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT) != 0.
   IMHO, we should print a warning message in that case and should not update
   the device flags or request reconfiguration.

   This should be clearly described in the commit message and documentation:
   vhost-iommu-support can't be changed once the device is fully initialized,
   so this option should be set before, or simultaneously with,
   vhost-server-path.

2. General style comment for the patch: according to the coding style, you need to
   use braces for if statements even if there is only one statement in them
   (see the sketch after this list).

3. Few more comments inline.
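
For items 1 and 2, here is a minimal, self-contained sketch of the
warn-and-skip behaviour, written with the brace style described above. The
names and flag values are stand-ins, not OVS or DPDK definitions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in flag values; the real ones come from the DPDK vhost headers. */
#define FAKE_RTE_VHOST_USER_CLIENT        (1ULL << 0)
#define FAKE_RTE_VHOST_USER_IOMMU_SUPPORT (1ULL << 4)

struct fake_dev {
    uint64_t vhost_driver_flags;
};

/* Returns true if a reconfiguration should be requested.  Once the device is
 * registered (client flag set), a change to IOMMU support is only warned
 * about and never applied.  Braces are used even for single-statement
 * bodies. */
static bool
set_iommu_support(struct fake_dev *dev, bool iommu_enable)
{
    if (dev->vhost_driver_flags & FAKE_RTE_VHOST_USER_CLIENT) {
        fprintf(stderr, "vhost-iommu-support cannot be changed after the "
                        "device is registered\n");
        return false;
    }

    if (iommu_enable) {
        dev->vhost_driver_flags |= FAKE_RTE_VHOST_USER_IOMMU_SUPPORT;
    } else {
        dev->vhost_driver_flags &= ~FAKE_RTE_VHOST_USER_IOMMU_SUPPORT;
    }
    return true;
}

int
main(void)
{
    struct fake_dev dev = { .vhost_driver_flags = FAKE_RTE_VHOST_USER_CLIENT };

    printf("reconfigure requested: %d\n", set_iommu_support(&dev, true));
    return 0;
}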

> DPDK v17.11 introduces support for the vHost IOMMU feature.
> This is a security feature that restricts the vhost memory
> that a virtio device may access.
> 
> This feature also enables the vhost REPLY_ACK protocol, the
> implementation of which is known to work in newer versions of
> QEMU (i.e. v2.10.0), but is buggy in older versions (v2.7.0 -
> v2.9.0, inclusive). As such, the feature is disabled by default
> in (and should remain so, for the aforementioned older QEMU
> versions). Starting with QEMU v2.9.1, vhost-iommu-support can
> safely be enabled, even without having an IOMMU device, with
> no performance penalty.
> 
> This patch adds a new vhost port option, vhost-iommu-support,
> to allow enablement of the vhost IOMMU feature:
> 
> $ ovs-vsctl add-port br0 vhost-client-1 \
> -- set Interface vhost-client-1 type=dpdkvhostuserclient \
>  options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
>  options:vhost-iommu-support=true
> 
> Note that support for this feature is only implemented for vhost
> user client ports (since vhost user ports are considered deprecated).
> 
> Signed-off-by: Mark Kavanagh 
> Acked-by: Maxime Coquelin 
> Acked-by: Ciara Loftus 
> ---
>  Documentation/topics/dpdk/vhost-user.rst | 21 +
>  NEWS |  1 +
>  lib/netdev-dpdk.c| 29 ++---
>  vswitchd/vswitch.xml | 10 ++
>  4 files changed, 58 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/topics/dpdk/vhost-user.rst 
> b/Documentation/topics/dpdk/vhost-user.rst
> index 5347995..8dff901 100644
> --- a/Documentation/topics/dpdk/vhost-user.rst
> +++ b/Documentation/topics/dpdk/vhost-user.rst
> @@ -250,6 +250,27 @@ Once the vhost-user-client ports have been added to the 
> switch, they must be
>  added to the guest. Like vhost-user ports, there are two ways to do this: 
> using
>  QEMU directly, or using libvirt. Only the QEMU case is covered here.
>  
> +vhost-user client IOMMU
> +~~~
> +It is possible to enable IOMMU support for vHost User client ports. This is
> +a feature which restricts the vhost memory that a virtio device can access, 
> and
> +as such is useful in deployments in which security is a concern. IOMMU mode 
> may
> +be enabled on the command line::
> +
> +$ ovs-vsctl add-port br0 vhost-client-1 \
> +-- set Interface vhost-client-1 type=dpdkvhostuserclient \
> + options:vhost-server-path=$VHOST_USER_SOCKET_PATH   \
> + options:vhost-iommu-support=true
> +
> +.. important::
> +
> +Enabling the IOMMU feature also enables the vhost user reply-ack 
> protocol;
> +this is known to work on QEMU v2.10.0, but is buggy on older versions
> +(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feature is disabled by
> +default (and should remain so if using the aforementioned versions of 
> QEMU).
> +Starting with QEMU v2.9.1, vhost-iommu-support can safely be enabled, 
> even
> +without having an IOMMU device, with no performance penalty.
> +
>  Adding vhost-user-client ports to the guest (QEMU)
>  ~~
>  
> diff --git a/NEWS b/NEWS
> index 74e59bf..c15dc24 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -14,6 +14,7 @@ Post-v2.8.0
>   * Add support for compiling OVS with the latest Linux 4.13 kernel
> - DPDK:
>   * Add support for DPDK v17.11
> + * Add support for vHost IOMMU feature
>  
>  v2.8.0 - 31 Aug 2017
>  
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index ed5bf62..2e9633a 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -1424,15 +1424,29 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
>  {
>  struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
>  const char *path;
> +bool iommu_enable;
> +bool request_reconfigure = false;
> +uint64_t vhost_driver_flags_prev = dev->vhost_driver_flags;

This should be done under the mutex.
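
To make that concrete, here is a minimal, self-contained sketch of the pattern
being asked for, using plain pthreads instead of OVS's ovs_mutex wrappers; the
struct and field names are illustrative only, not the patch's code:

#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative device with a mutex protecting its driver flags. */
struct fake_dev {
    pthread_mutex_t mutex;
    uint64_t vhost_driver_flags;
};

/* Snapshot the flags only while holding the mutex, so the "previous" value
 * used for later comparisons is read consistently. */
static uint64_t
snapshot_flags(struct fake_dev *dev)
{
    uint64_t flags;

    pthread_mutex_lock(&dev->mutex);
    flags = dev->vhost_driver_flags;
    pthread_mutex_unlock(&dev->mutex);
    return flags;
}

int
main(void)
{
    struct fake_dev dev = {
        .mutex = PTHREAD_MUTEX_INITIALIZER,
        .vhost_driver_flags = 0x4,
    };

    printf("flags snapshot: 0x%" PRIx64 "\n", snapshot_flags(&dev));
    return 0;
}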

>  
>  ovs_mutex_lock(&dev->mutex);
>  if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
>  path = smap_get(args,