Re: [ovs-dev] [PATCH v4 0/3] Add support for TSO with DPDK

2020-01-17 Thread Loftus, Ciara
> -Original Message- > From: Flavio Leitner > Sent: Thursday 16 January 2020 17:01 > To: d...@openvswitch.org > Cc: Stokes, Ian ; Loftus, Ciara > ; Ilya Maximets ; > yangy...@inspur.com; txfh2007 ; Flavio Leitner > > Subject: [PATCH v4 0/3] Ad

Re: [ovs-dev] [PATCH] Documentation: add notes for TSO & i40e

2020-01-14 Thread Loftus, Ciara
> -Original Message- > From: David Marchand > Sent: Monday 13 January 2020 15:58 > To: Loftus, Ciara > Cc: ovs dev ; Flavio Leitner ; > Stokes, Ian > Subject: Re: [ovs-dev] [PATCH] Documentation: add notes for TSO & i40e > > On Mon, Jan 13, 2020

Re: [ovs-dev] [PATCH] netdev-dpdk: Avoid undefined behavior processing devargs

2020-01-13 Thread Loftus, Ciara
> > In "Use of library functions" in the C standard, the following statement > is written to apply to all library functions: > > If an argument to a function has an invalid value (such as ... a > null pointer ... the behavior is undefined. > > Later, under the "String handling" section,

Re: [ovs-dev] [PATCH v3 0/3] Add support for TSO with DPDK

2020-01-10 Thread Loftus, Ciara
> -Original Message- > From: Flavio Leitner > Sent: Thursday 9 January 2020 14:45 > To: d...@openvswitch.org > Cc: Stokes, Ian ; Loftus, Ciara > ; Ilya Maximets ; > yangy...@inspur.com; Flavio Leitner > Subject: [PATCH v3 0/3] Add support for TSO with DPDK >

Re: [ovs-dev] [PATCH 0/4] Add support for TSO with DPDK

2019-12-09 Thread Loftus, Ciara
> > Abbreviated as TSO, TCP Segmentation Offload is a feature which enables > the network stack to delegate the TCP segmentation to the NIC reducing > the per packet CPU overhead. > > A guest using vhost-user interface with TSO enabled can send TCP packets > much bigger than the MTU, which saves
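
For reference, a minimal sketch of how userspace TSO is typically switched on once a series like this lands; the option name below is assumed from the feature as merged upstream and may differ per OVS release:

    # Assumed option name; verify against your OVS version's documentation.
    $ ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=true
    # ovs-vswitchd usually needs a restart for the offload setting to take effect.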

Re: [ovs-dev] ovs-dpdk zero-copy rx/tx support between physical nic and vhost user

2019-11-26 Thread Loftus, Ciara
> > Hi Ciara and Ian, > > I'm checking the zero-copy support on both rx/tx side when > using ovs-dpdk with vhostuser and dpdkport. > Assuming PVP, IIUC[1,2], the rx (P to V) does not have zero-copy and > the V to P has zero copy support by enabling dq-zero-copy. > > Am I understanding this

Re: [ovs-dev] [PATCHv5] netdev-afxdp: Add need_wakeup support.

2019-10-16 Thread Loftus, Ciara
> > The patch adds support for using need_wakeup flag in AF_XDP rings. > A new option, use_need_wakeup, is added. When this option is used, > it means that OVS has to explicitly wake up the kernel RX, using poll() > syscall and wake up TX, using sendto() syscall. This feature improves > the
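
A rough usage sketch, assuming the option on afxdp ports is spelled use-need-wakeup in the merged version (the cover letter writes use_need_wakeup); bridge and interface names are illustrative:

    $ ovs-vsctl add-port br0 eth0 -- set Interface eth0 type=afxdp \
          options:use-need-wakeup=true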

Re: [ovs-dev] [PATCH v1] Docs: Remove zero-copy QEMU limitation.

2018-10-23 Thread Loftus, Ciara
> > Remove note regarding zero-copy compatibility with QEMU >= 2.7. > > When zero-copy was introduced to OVS it was incompatible with QEMU >= > 2.7. This issue has since been fixed in DPDK with commit > 803aeecef123 ("vhost: fix dequeue zero copy with virtio1") and > backported to DPDK LTS

Re: [ovs-dev] [PATCH] NEWS: Re-add vhost zero copy support.

2018-07-10 Thread Loftus, Ciara
> > An entry for experimental vhost zero copy support was removed > incorrectly. Re-add this entry to NEWS. > > Reported-by: Eelco Chaudron > Cc: Ciara Loftus > Fixes: c3c722d2c7ee ("Documentation: document ovs-dpdk flow offload") > Signed-off-by: Ian Stokes Acked-by: Ciara Loftus > --- >

Re: [ovs-dev] OVS (master) + DPDK(17.11) + multi-queue

2018-06-20 Thread Loftus, Ciara
> > Hi, > > > On Tue, Jun 19, 2018 at 12:27 AM Ilya Maximets > wrote: > > > Hi, > > According to your log, your NIC has limited size of tx queues: > > > > 2018-06-19T04:34:46.106Z|00089|dpdk|ERR|PMD: Unsupported size of TX > queue > >(max
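
When a NIC rejects the requested TX ring size as in the quoted error, the descriptor count can be lowered per port; a hedged example using the standard n_txq_desc option (port name and value are illustrative):

    $ ovs-vsctl set Interface dpdk0 options:n_txq_desc=1024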

Re: [ovs-dev] [RFC PATCH] netdev-dpdk: Integrate vHost User PMD

2018-06-01 Thread Loftus, Ciara
> > > On Mon, May 21, 2018 at 04:44:13PM +0100, Ciara Loftus wrote: > > > The vHost PMD brings vHost User port types ('dpdkvhostuser' and > > > 'dpdkvhostuserclient') under control of DPDK's librte_ether API, like > > > all other DPDK netdev types ('dpdk' and 'dpdkr'). In doing so, direct > > >

Re: [ovs-dev] [RFC v7 11/13] netdev-dpdk: copy large packet to multi-seg. mbufs

2018-05-28 Thread Loftus, Ciara
> > From: Mark Kavanagh > > Currently, packets are only copied to a single segment in > the function dpdk_do_tx_copy(). This could be an issue in > the case of jumbo frames, particularly when multi-segment > mbufs are involved. > > This patch calculates the number of

Re: [ovs-dev] [RFC v7 10/13] dp-packet: copy data from multi-seg. DPDK mbuf

2018-05-28 Thread Loftus, Ciara
> > From: Michael Qiu > > When doing packet clone, if packet source is from DPDK driver, > multi-segment mbufs must be considered, and the segments' data copied one by > one. > > Also, lots of DPDK mbuf's info is lost during a copy, like packet > type, ol_flags, etc. That

Re: [ovs-dev] [RFC v7 09/13] dp-packet: Handle multi-seg mbufs in resize__().

2018-05-28 Thread Loftus, Ciara
> > When enabled with DPDK OvS relies on mbufs allocated by mempools to > receive and output data on DPDK ports. Until now, each OvS dp_packet has > only one mbuf associated, which is allocated with the maximum possible > size, taking the MTU into account. This approach, however, doesn't allow >

Re: [ovs-dev] [RFC v7 07/13] dp-packet: Handle multi-seg mbufs in shift() func.

2018-05-28 Thread Loftus, Ciara
> > In its current implementation dp_packet_shift() is also unaware of > multi-seg mbufs (which hold data in memory non-contiguously) and assumes > that data exists contiguously in memory, memmove'ing data to perform the > shift. > > To add support for multi-seg mbufs a new set of functions was

Re: [ovs-dev] [RFC v7 05/13] dp-packet: Handle multi-seg mbufs in helper funcs.

2018-05-28 Thread Loftus, Ciara
> > Most helper functions in dp-packet assume that the data held by a > dp_packet is contiguous, and perform operations such as pointer > arithmetic under that assumption. However, with the introduction of > multi-segment mbufs, where data is non-contiguous, such assumptions are > no longer

Re: [ovs-dev] [RFC v7 06/13] dp-packet: Handle multi-seg mbufs in put*() funcs.

2018-05-28 Thread Loftus, Ciara
> > The dp_packet_put*() functions - dp_packet_put_uninit(), dp_packet_put() > and dp_packet_put_zeros() - are, in their current implementation, > operating on the data buffer of a dp_packet as if it were contiguous, > which in the case of multi-segment mbufs means they operate on the first > mbuf

Re: [ovs-dev] [RFC v7 01/13] netdev-dpdk: fix mbuf sizing

2018-05-28 Thread Loftus, Ciara
> > From: Mark Kavanagh > > There are numerous factors that must be considered when calculating > the size of an mbuf: > - the data portion of the mbuf must be sized in accordance with Rx > buffer alignment (typically 1024B). So, for example, in order to >

Re: [ovs-dev] [RFC v7 00/13] Support multi-segment mbufs

2018-05-28 Thread Loftus, Ciara
> > Overview > > This patchset introduces support for multi-segment mbufs to OvS-DPDK. > Multi-segment mbufs are typically used when the size of an mbuf is > insufficient to contain the entirety of a packet's data. Instead, the > data is split across numerous mbufs, each carrying a

Re: [ovs-dev] [PATCH v1] netdev-dpdk: Handle ENOTSUP for rte_eth_dev_set_mtu.

2018-05-17 Thread Loftus, Ciara
> > The function rte_eth_dev_set_mtu is not supported for all DPDK drivers. > Currently if it is not supported we return an error in > dpdk_eth_dev_queue_setup. There are two issues with this. > > (i) A device can still function even if rte_eth_dev_set_mtu is not > supported albeit with the
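
The user-facing knob that ends up exercising rte_eth_dev_set_mtu() is the mtu_request column; an illustrative example (interface name and value are placeholders):

    $ ovs-vsctl set Interface dpdk0 mtu_request=9000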

Re: [ovs-dev] [PATCH 1/1] netdev-dpdk: Don't use PMD driver if not configured successfully

2018-05-16 Thread Loftus, Ciara
> > When initialization of the DPDK PMD driver fails > (dpdk_eth_dev_init()), the reconfigure_datapath() function will remove > the port from dp_netdev, and the port is not used. > > Now when bridge_reconfigure() is called again, no changes to the > previous failing netdev configuration are

Re: [ovs-dev] [PATCH v9] netdev-dpdk: Add support for vHost dequeue zero copy (experimental)

2018-01-23 Thread Loftus, Ciara
> > On 19.01.2018 20:19, Ciara Loftus wrote: > > Zero copy is disabled by default. To enable it, set the 'dq-zero-copy' > > option to 'true' when configuring the Interface: > > > > ovs-vsctl set Interface dpdkvhostuserclient0 > > options:vhost-server-path=/tmp/dpdkvhostuserclient0 > >
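
Pieced together from the quoted commit message (and the later v2 thread below), the full per-port enable sequence looks roughly like this; the socket path and port name are the example values used in the patch:

    $ ovs-vsctl set Interface dpdkvhostuserclient0 \
          options:vhost-server-path=/tmp/dpdkvhostuserclient0 \
          options:dq-zero-copy=true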

Re: [ovs-dev] About : Enable optional dequeue zero copy for vHost User

2018-01-23 Thread Loftus, Ciara
...@corp.netease.com] Sent: Wednesday, January 17, 2018 10:41 AM To: Loftus, Ciara <ciara.lof...@intel.com> Cc: d...@dpdk.org Subject: About : Enable optional dequeue zero copy for vHost User Hi Ciara, I am testing the function of "vHost dequeue zero copy" for vm2vm on a host, and I ha

Re: [ovs-dev] [PATCH v8] netdev-dpdk: Add support for vHost dequeue zero copy (experimental)

2018-01-19 Thread Loftus, Ciara
> > On 05.01.2018 19:13, Ciara Loftus wrote: > > Zero copy is disabled by default. To enable it, set the 'dq-zero-copy' > > option to 'true' when configuring the Interface: > > > > ovs-vsctl set Interface dpdkvhostuserclient0 > > options:vhost-server-path=/tmp/dpdkvhostuserclient0 > >

Re: [ovs-dev] [ovs-dev, v5, 2/2] netdev-dpdk: Enable optional dequeue zero copy for vHost User

2017-12-18 Thread Loftus, Ciara
> > On 18.12.2017 15:28, Loftus, Ciara wrote: > >> > >> Not a full review. > > > > Thanks for your feedback. > > > >> > >> General thoughts: > >> > >> If following conditions are true: > >> > >> 1. W

Re: [ovs-dev] [ovs-dev, v5, 2/2] netdev-dpdk: Enable optional dequeue zero copy for vHost User

2017-12-18 Thread Loftus, Ciara
> > Not a full review. Thanks for your feedback. > > General thoughts: > > If following conditions are true: > > 1. We don't need to add new feature to deprecated vhostuser port. Agree. > > 2. We actually don't need to have ability to change ZC config if vhost-server- > path >already

Re: [ovs-dev] [PATCH v4 0/2] vHost Dequeue Zero Copy

2017-12-08 Thread Loftus, Ciara
> > > > Can you comment on that? Can a user also reduce the problem by > > > configuring > > > a) a larger virtio Tx queue size (up to 1K) in Qemu, or > > > > Is this possible right now without modifying QEMU src? I think the size is > hardcoded to 256 at the moment although it may become > >
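
For reference, newer QEMU (2.10+) exposes the virtio Tx ring size on the command line without source changes, up to 1024 descriptors with vhost-user backends; a hedged sketch, with chardev/netdev ids and the socket path purely illustrative:

    $ qemu-system-x86_64 ... \
          -chardev socket,id=char0,path=/tmp/dpdkvhostuserclient0,server \
          -netdev type=vhost-user,id=net0,chardev=char0 \
          -device virtio-net-pci,netdev=net0,tx_queue_size=1024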

Re: [ovs-dev] [PATCH v4 0/2] vHost Dequeue Zero Copy

2017-11-28 Thread Loftus, Ciara
> > Hi Ciara, > > > Thanks for your feedback. The limitation is only placed on phy port queues > on the VP (vhost -> phy) path. VV path and PV path are not > > affected. > > Yes, you are right. VM to VM traffic is copied on transmit to the second VM. > > > > I would much rather put a

Re: [ovs-dev] [PATCH] netdev-dpdk: Remove unneeded call to rte_eth_dev_count().

2017-11-27 Thread Loftus, Ciara
> > The call to rte_eth_dev_count() was added as a workaround > for rte_eth_dev_get_port_by_name() not handling cases > when there were no DPDK ports. In recent versions of DPDK, > rte_eth_dev_get_port_by_name() does handle this > case, so the rte_eth_dev_count() call can be removed. > > CC: Ciara

Re: [ovs-dev] [RFC PATCH V2 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-08 Thread Loftus, Ciara
> > DPDK v17.11 introduces support for the vHost IOMMU feature. > This is a security feature that restricts the vhost memory > that a virtio device may access. > > This feature also enables the vhost REPLY_ACK protocol, the > implementation of which is known to work in newer versions of > QEMU
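
A minimal sketch of switching the feature on, assuming the global option name that was eventually merged (vhost-iommu-support); the guest additionally needs a virtual IOMMU configured in QEMU:

    $ ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=true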

Re: [ovs-dev] [RFC PATCH 2/2] netdev-dpdk: add support for vhost IOMMU feature

2017-11-07 Thread Loftus, Ciara
> > DPDK v17.11 introduces support for the vHost IOMMU feature. > This is a security feature that restricts the vhost memory > that a virtio device may access. > > This feature also enables the vhost REPLY_ACK protocol, the > implementation of which is known to work in newer versions of > QEMU

Re: [ovs-dev] [PATCH RFC v2] netdev-dpdk: Allow specification of index for PCI devices

2017-10-19 Thread Loftus, Ciara
> > On 10/17/2017 11:48 AM, Ciara Loftus wrote: > > Some NICs have only one PCI address associated with multiple ports. This > > patch extends the dpdk-devargs option's format to cater for such > > devices. Whereas before only one of N ports associated with one PCI > > address could be added, now

Re: [ovs-dev] [PATCH RFC] netdev-dpdk: Allow specification of index for PCI devices

2017-10-16 Thread Loftus, Ciara
> > Hi Ciara, thanks for working on this patch. A few comments inline. Thanks for your review Ian. > > > Some NICs have only one PCI address associated with multiple ports. This > > patch extends the dpdk-devargs option's format to cater for such devices. > > Whereas before only one of N ports

Re: [ovs-dev] [PATCH v2 2/2] netdev-dpdk: Enable optional dequeue zero copy for vHost User

2017-10-16 Thread Loftus, Ciara
> > Thanks for the v2 Ciara. Comments inline. Thanks for your review Ian. Hope to send a v3 soon. Responses inline. Thanks, Ciara > > > > Enabled per port like so: > > ovs-vsctl set Interface dpdkvhostuserclient0 options:dq-zero-copy=true > > > > The feature is disabled by default and can

Re: [ovs-dev] [PATCH v2 0/2] vHost Dequeue Zero Copy

2017-10-12 Thread Loftus, Ciara
> > Hi Ciara, > > These improvements look very good. I would expect even bigger > improvements for big packets, as long as we don't hit some link bandwidth > limitations. But at least the vhost-vhost cases should benefit. > > Have you also tested larger packet sizes? Hi Jan, Thanks for the

Re: [ovs-dev] [PATCH v4 2/6] netdev-dpdk: Fix mempool names to reflect socket id.

2017-10-09 Thread Loftus, Ciara
> > Create mempool names by also considering the NUMA socket number. > So a name reflects what socket the mempool is allocated on. > This change is needed for the NUMA-awareness feature. > > CC: Kevin Traynor > CC: Aaron Conole > Reported-by: Ciara

Re: [ovs-dev] [dpdk-users] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-10-06 Thread Loftus, Ciara
> > On Thu, Sep 21, 2017 at 1:58 PM, Loftus, Ciara <ciara.lof...@intel.com> > wrote: > > 21/09/2017 10:04, Loftus, Ciara: > > > > 20/09/2017 19:33, Kevin Traynor: > > > > > On 09/08/2017 10:56 AM, Loftus, Ciara wrote: > > > >

Re: [ovs-dev] [PATCH v3 1/5] netdev-dpdk: fix mempool management with vhu client.

2017-10-06 Thread Loftus, Ciara
> > In a PVP test where vhostuser ports are configured as > clients, OvS crashes when QEMU is launched. > This patch avoids calling dpdk_mp_put() - and erroneously > releasing the mempool - when it already exists. Thanks for investigating this issue and for the patch. I think the commit message

Re: [ovs-dev] [dpdk-users] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-21 Thread Loftus, Ciara
> 21/09/2017 10:04, Loftus, Ciara: > > > 20/09/2017 19:33, Kevin Traynor: > > > > On 09/08/2017 10:56 AM, Loftus, Ciara wrote: > > > > > It seems the DPDK function rte_eth_dev_get_port_by_name() will > > > > > always return the port ID of

Re: [ovs-dev] [dpdk-users] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-21 Thread Loftus, Ciara
> 20/09/2017 19:33, Kevin Traynor: > > On 09/08/2017 10:56 AM, Loftus, Ciara wrote: > > >> Hi, > > >> > > >> I have compiled and built ovs-dpdk using DPDK v17.08 and OVS v2.8.0. > The > > >> NIC that I am using is Mellanox Conn

Re: [ovs-dev] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-19 Thread Loftus, Ciara
> Thanks for confirming Devendra > > Adding Ciara > There have been some offline discussions regarding the issue. The workaround discussed is a patch to enable backwards compatibility with the old port IDs. Something like the following: – set Interface portX options:dpdk-devargs=dpdkportid0

Re: [ovs-dev] adding dpdk ports sharing same pci address to ovs-dpdk bridge

2017-09-08 Thread Loftus, Ciara
> Hi, > > I have compiled and built ovs-dpdk using DPDK v17.08 and OVS v2.8.0. The > NIC that I am using is Mellanox ConnectX-3 Pro, which is a dual port 10G > NIC. The problem with this NIC is that it provides only one PCI address for > both the 10G ports. > > So when I am trying to add the two
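
For contrast, the standard case that works when each port has its own PCI address (the thread is about NICs where two ports share one); the address and names below are illustrative:

    $ ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
          options:dpdk-devargs=0000:05:00.0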

Re: [ovs-dev] [PATCH v4] netdev-dpdk: Implement TCP/UDP TX cksum in ovs-dpdk side

2017-09-05 Thread Loftus, Ciara
> > Currently, the dpdk-vhost side in ovs doesn't support tcp/udp tx cksum. > So L4 packets' cksums were calculated on the VM side but performance is not > good. > Implementing tcp/udp tx cksum in ovs-dpdk side improves throughput in > VM->phy->phy->VM situation. And it makes virtio-net

Re: [ovs-dev] [PATCH v3] netdev-dpdk: Implement TCP/UDP TX cksum in ovs-dpdk side

2017-09-01 Thread Loftus, Ciara
Hi Zhenyu, Thanks for the v3. No feedback yet on the common implementation, so for now let's focus on this implementation. Some high level comments: - Moving the calculation to vhost rx we've removed the impact on non-vhost topology performance. - The patch needs a rebase. - checkpatch.py

Re: [ovs-dev] [PATCH 2/3] dpif-netdev: Fix a couple of coding style issues.

2017-09-01 Thread Loftus, Ciara
> > A couple of trivial fixes for a ternary operator placement > and pointer declaration. > > Fixes: 655856ef39b9 ("dpif-netdev: Change rxq_scheduling to use rxq > processing cycles.") > Fixes: a2ac666d5265 ("dpif-netdev: Change definitions of 'idle' & 'processing' > cycles") > Cc:

Re: [ovs-dev] [PATCH v1] netdev-dpdk: Implement TCP/UDP TX cksum in ovs-dpdk side

2017-08-23 Thread Loftus, Ciara
in a rte_vhost library call such that we don't have two separate implementations? Thanks, Ciara > > > > I have some other comments inline. > > > > Thanks, > > Ciara > “ > > > > From: Gao Zhenyu <sysugaozhe...@gmail.com> > Date: Wednesda

Re: [ovs-dev] [PATCH] netdev-dpdk: include dpdk PCI header directly

2017-08-10 Thread Loftus, Ciara
> > On 08/09/2017 10:00 PM, Aaron Conole wrote: > > As part of a devargs rework in DPDK, the PCI header file was removed, and > > needs to be directly included. This isn't required to build with 17.05 or > > earlier, but will be required should a future update happen. > > > > Signed-off-by:

Re: [ovs-dev] [PATCH v1] netdev-dpdk: Implement TCP/UDP TX cksum in ovs-dpdk side

2017-08-08 Thread Loftus, Ciara
> > I would like to implement vhost->vhost part. > > Thanks > Zhenyu Gao > > 2017-08-04 22:52 GMT+08:00 Loftus, Ciara <ciara.lof...@intel.com>: > > > > Currently, the dpdk-vhost side in ovs doesn't support tcp/udp tx cksum. > > So L4 packets's cksum we

Re: [ovs-dev] [PATCH v1] netdev-dpdk: Implement TCP/UDP TX cksum in ovs-dpdk side

2017-08-04 Thread Loftus, Ciara
> > Currently, the dpdk-vhost side in ovs doesn't support tcp/udp tx cksum. > So L4 packets' cksums were calculated on the VM side but performance is not > good. > Implementing tcp/udp tx cksum in ovs-dpdk side improves throughput and > makes virtio-net frontend-driver support NETIF_F_SG as well > >

[ovs-dev] [RFC PATCH] netdev-dpdk: Add vHost User PMD

2017-05-30 Thread Loftus, Ciara
Apologies for the misformatted subject header! This cover-letter is in relation to the following patch: https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/333108.html Thanks, Ciara > -Original Message- > From: Loftus, Ciara > Sent: Tuesday, May 30, 2017 2:33 P

Re: [ovs-dev] [RFC PATCH] netdev-dpdk: Add Tx intermediate queue for vhost ports.

2017-05-26 Thread Loftus, Ciara
> > This commit adds the intermediate queue for vHost-user ports. It > improves the throughput in multiple virtual machines deployments and > also in cases with VM doing packet forwarding in kernel stack. > > This patch is aligned with intermediate queue implementation for dpdk > ports that can

Re: [ovs-dev] [PATCH v3] netdev-dpdk: Implement Tx intermediate queue for dpdk ports.

2017-05-26 Thread Loftus, Ciara
> > After packet classification, packets are queued in to batches depending > on the matching netdev flow. Thereafter each batch is processed to > execute the related actions. This becomes particularly inefficient if > there are few packets in each batch as rte_eth_tx_burst() incurs expensive >

Re: [ovs-dev] [PATCH v3] dpif-netdev: Change definitions of 'idle' & 'processing' cycles

2017-03-08 Thread Loftus, Ciara
> > On 02/21/2017 10:49 AM, Jan Scheurich wrote: > >> -Original Message- > >> From: Kevin Traynor [mailto:ktray...@redhat.com] > >> Sent: Friday, 17 February, 2017 17:38 > >> > >> If there are multiple queues in a poll list and only one has packets, > >> the cycles polling the empty

Re: [ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 17.02

2017-02-24 Thread Loftus, Ciara
> > > > > Ciara Loftus writes: > > > > > This commit announces support for DPDK 17.02. Compatibility with DPDK > > > v16.11 is not broken yet thanks to no code changes being needed for the > > > upgrade. > > > > > > Signed-off-by: Ciara Loftus > >

Re: [ovs-dev] [PATCH v3] dpif-netdev: Change definitions of 'idle' & 'processing' cycles

2017-02-20 Thread Loftus, Ciara
> > On 02/17/2017 10:39 AM, Ciara Loftus wrote: > > Instead of counting all polling cycles as processing cycles, only count > > the cycles where packets were received from the polling. > > > > Signed-off-by: Georg Schmuecking > > Signed-off-by: Ciara Loftus

Re: [ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 17.02

2017-02-17 Thread Loftus, Ciara
> > Ciara Loftus writes: > > > This commit announces support for DPDK 17.02. Compatibility with DPDK > > v16.11 is not broken yet thanks to no code changes being needed for the > > upgrade. > > > > Signed-off-by: Ciara Loftus > > --- > > Is it

Re: [ovs-dev] [PATCH v8] dpif-netdev: Conditional EMC insert

2017-02-14 Thread Loftus, Ciara
> > On 02/10/2017 10:57 AM, Ciara Loftus wrote: > > Unconditional insertion of EMC entries results in EMC thrashing at high > > numbers of parallel flows. When this occurs, the performance of the EMC > > often falls below that of the dpcls classifier, rendering the EMC > > practically useless. >
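
The knob this series adds is a global inverse probability for EMC insertion; a short sketch, assuming the merged option name emc-insert-inv-prob (1 means insert on every lookup miss, larger values insert more rarely, 0 disables insertion):

    # 1-in-100 insertion probability, the default proposed by the patch.
    $ ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=100
    # Revert to unconditional insertion.
    $ ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=1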

Re: [ovs-dev] [PATCH 1/1] dpif-netdev: Conditional EMC insert

2017-01-26 Thread Loftus, Ciara
> > 2017-01-25 7:52 GMT-08:00 Loftus, Ciara <ciara.lof...@intel.com>: > >> 2017-01-22 11:45 GMT-08:00 Jan Scheurich <jan.scheur...@web.de>: > >> > > >> >> It's not a big deal, since the most important use case we have for > >> >

Re: [ovs-dev] [PATCH 0/1] dpif-netdev: Conditional EMC insert

2017-01-20 Thread Loftus, Ciara
> > On 01/12/2017 04:49 PM, Ciara Loftus wrote: > > This patch is part of the OVS-DPDK performance optimizations presented > > on the OVS fall conference > > (http://openvswitch.org/support/ovscon2016/8/1400-gray.pdf) > > > > The Exact Match Cache does not perform well in use cases with a high >

Re: [ovs-dev] [PATCH] Documentation: Update DPDK doc after port naming change.

2017-01-19 Thread Loftus, Ciara
> > options:dpdk-devargs is always required now. This commit also changes > some of the names from 'dpdk0' to various others. > > netdev-dpdk/detach accepts a PCI id instead of a port name. > > CC: Ciara Loftus > Fixes: 55e075e65ef9("netdev-dpdk: Arbitrary 'dpdk' port
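
Illustrating the two commands the documentation change covers; the port name and PCI id are example values:

    $ ovs-vsctl add-port br0 myport -- set Interface myport type=dpdk \
          options:dpdk-devargs=0000:01:00.0
    $ ovs-appctl netdev-dpdk/detach 0000:01:00.0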

Re: [ovs-dev] [PATCH] netdev: Add 'errp' to set_config().

2017-01-11 Thread Loftus, Ciara
> > Since 55e075e65ef9("netdev-dpdk: Arbitrary 'dpdk' port naming"), > set_config() is used to identify a DPDK device, so it's better to report > its detailed error message to the user. Tunnel devices and patch ports > rely a lot on set_config() as well. > > This commit adds a param to

Re: [ovs-dev] [PATCH] netdev-dpdk: Assign socket id according to device's numa id

2017-01-11 Thread Loftus, Ciara
> > Binbin Xu writes: > > > After the commit "55e075e65ef9ecbd70e5e0fada2704c3d73724d8 > > netdev-dpdk: Arbitrary 'dpdk' port naming", we could hotplug > > attach DPDK ports specified via the 'dpdk-devargs' option. > > > > But the socket id of DPDK ports can't be assigned

Re: [ovs-dev] [PATCH v2 2/3] netdev-dpdk: Arbitrary 'dpdk' port naming

2016-12-15 Thread Loftus, Ciara
> > Thanks for the new version, I'm still testing this. Thanks Daniele. I plan to do some more thorough testing myself. > > One comment inline > > 2016-12-14 9:06 GMT-08:00 Ciara Loftus : > > 'dpdk' ports no longer have naming restrictions. Now, instead of > >

Re: [ovs-dev] [PATCH] netdev-dpdk: Add vHost User PMD

2016-12-06 Thread Loftus, Ciara
> > Thanks for the patch. > > I experience a crash with this patch applied by starting ovs and > immediately adding a vhostuserclient port. It's not reproducible 100% > of the times. > > Program received signal SIGSEGV, Segmentation fault. > rte_eth_xstats_get (port_id=3 '\003',

Re: [ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 16.11

2016-11-24 Thread Loftus, Ciara
> > > On 11/24/2016 04:34 PM, Loftus, Ciara wrote: > >> > >> > >> On 11/24/2016 03:59 PM, Loftus, Ciara wrote: > >>>> > >>>> On 11/24/2016 03:20 PM, Ciara Loftus wrote: > >>>>> This commit announces support for D

Re: [ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 16.11

2016-11-24 Thread Loftus, Ciara
> > > On 11/24/2016 03:59 PM, Loftus, Ciara wrote: > >> > >> On 11/24/2016 03:20 PM, Ciara Loftus wrote: > >>> This commit announces support for DPDK 16.11. Compatibility with DPDK > >>> v16.07 is not broken yet thanks to only mino

Re: [ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 16.11

2016-11-24 Thread Loftus, Ciara
> > On 11/24/2016 03:20 PM, Ciara Loftus wrote: > > This commit announces support for DPDK 16.11. Compatibility with DPDK > > v16.07 is not broken yet thanks to only minor code changes being needed > > for the upgrade. > > > > Signed-off-by: Ciara Loftus > > --- > >