On 8 January 2015 at 05:39, Reshma Pattan wrote:
> From: Reshma Pattan
>
> 1) New library to provide reordering of out-of-order
> mbufs based on the mbuf sequence number. The library uses a
> reorder buffer structure,
> which in turn uses two circular buffers called
This patch integrates the SYN filter with the new API in the ixgbe/igb drivers.
Changes:
ixgbe: remove old functions that deal with syn filter
ixgbe: add new functions that deal with syn filter (fit for filter_ctrl API)
e1000: remove old functions that deal with syn filter
e1000: add new functions that deal with syn filter (fit for filter_ctrl API)
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, December 16, 2014 4:23 PM
> To: dev at dpdk.org
> Cc: Chen, Jing D; Wu, Jingjing; Liu, Jijiang; Cao, Waterman; Lu, Patrick;
> Rowden, Aaron F; Zhang, Helin
> Subject: [PATCH v3] i40e: workaround for X710 performance issues
>
>
Hi,
I am migrating from DPDK1.7 to DPDK1.8.
My application works fine with DPDK1.7.
I am using 10 Gb Intel 82599 NIC.
I have jumbo frames enabled, with max_rx_pkt_len = 10232
My mbuf dataroom size is 2048+headroom
So naturally the ixgbe_recv_scattered_pkts driver function is triggered for
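For reference, the setup described above corresponds to an rxmode configuration along these lines (a minimal sketch using DPDK 1.8-era field names, with the values from the report; since the 10232-byte frame exceeds the 2048-byte mbuf data room, the driver receives it as a chain of segments via ixgbe_recv_scattered_pkts):

```c
/* Port configuration fragment matching the report above:
 * jumbo frames on, max frame length 10232 bytes. */
static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .jumbo_frame    = 1,      /* accept frames larger than 1518 bytes */
        .max_rx_pkt_len = 10232,  /* as stated in the report */
    },
};
```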
I finally found the time to try this and I noticed that on a server
with 1 NUMA node this works, but if the server has 2 NUMA nodes then, by the
default memory policy, the reserved hugepages are divided across the nodes and
again the DPDK test app fails for the reason already mentioned. I found
out that 'solution'
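One way to avoid the even split described above is to reserve hugepages on a single node explicitly, or to ask EAL for memory from one node only. A sketch, assuming 2 MB hugepages and a hypothetical application name:

```shell
# Reserve 1024 hugepages on node 0 and none on node 1, instead of letting
# the default policy split the reservation (standard sysfs paths):
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 0    > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# Or request 1024 MB from socket 0 and nothing from socket 1 via EAL
# ("testapp" is a placeholder name):
./testapp -c 0x3 -n 4 --socket-mem=1024,0
```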
Hi,
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, January 7, 2015 8:07 PM
> To: Liu, Jijiang; 'Olivier MATZ'
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 0/3] enhance TX checksum command and
> csum forwarding engine
>
>
>
> > -Original
On 01/07/15 08:32, Ouyang Changchun wrote:
> Get the available Rx and Tx queue number when receiving IXGBE_VF_GET_QUEUES
> message from VF.
>
> Signed-off-by: Changchun Ouyang
Reviewed-by: Vlad Zolotarov
>
> changes in v5
> - Add some 'FIX ME' comments for IXGBE_VF_TRANS_VLAN.
>
> ---
>
On 01/07/15 08:32, Ouyang Changchun wrote:
> RSS, IXGBE_MRQC and IXGBE_VFPSRTYPE need to be configured to enable VF RSS.
>
> The psrtype will determine how many queues the received packets will
> be distributed to,
> and the value of psrtype should depend on both facets: the max VF rxq number
> which has
On 01/07/15 08:32, Ouyang Changchun wrote:
> Set VMDq RSS mode if it has VFs (VF number is more than 1) and has RSS
> information.
>
> Signed-off-by: Changchun Ouyang
Reviewed-by: Vlad Zolotarov
Some nitpicking below... ;)
>
> changes in v5
> - Assign txmode.mq_mode with ETH_MQ_TX_NONE
On 01/07/15 08:32, Ouyang Changchun wrote:
> This patch enables VF RSS for Niantic, which allows each VF to have at most 4
> queues.
> The actual queue number per VF depends on the total number of pools, which is
> determined by the max number of VFs at PF initialization stage and the number
> of
>
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> index 27ba175..744156c 100644
> --- a/lib/librte_vhost/virtio-net.c
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -68,7 +68,9 @@ static struct virtio_net_device_ops const *notify_ops;
> static struct virtio_net_config_ll
Hi Frank,
> -----Original Message-----
> From: Liu, Jijiang
> Sent: Thursday, January 08, 2015 8:52 AM
> To: Ananyev, Konstantin; 'Olivier MATZ'
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 0/3] enhance TX checksum command and csum
> forwarding engine
>
> Hi,
>
> > -Original
On Thu, Jan 08, 2015 at 01:40:54PM +0530, Prashant Upadhyaya wrote:
> Hi,
>
> I am migrating from DPDK1.7 to DPDK1.8.
> My application works fine with DPDK1.7.
> I am using 10 Gb Intel 82599 NIC.
> I have jumbo frames enabled, with max_rx_pkt_len = 10232
> My mbuf dataroom size is 2048+headroom
>
> -----Original Message-----
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Wednesday, January 7, 2015 5:45 PM
> To: Pattan, Reshma
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/3] librte_reorder: New reorder library
>
> On Wed, Jan 07, 2015 at 04:39:11PM +0000, Reshma
> -----Original Message-----
> From: Richard Sanger [mailto:rsangerarj at gmail.com]
> Sent: Wednesday, January 7, 2015 9:10 PM
> To: Pattan, Reshma
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/3] librte_reorder: New reorder library
>
> On 8 January 2015 at 05:39, Reshma Pattan
Hi Steve,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Liang, Cunming
> Sent: Tuesday, December 23, 2014 9:52 AM
> To: Stephen Hemminger; Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
>
My opinion on this is that the lcore_id is rarely (if ever) used to find the
actual core a thread is running on. Instead it is used 99% of the time as a
unique array index per thread, and therefore we can keep that usage by
just assigning a valid lcore_id to any extra threads created.
Hi guys,
I'm trying to run a DPDK (1.7.1) application that has been previously tested
on Xen/VMware VMs. I have both iommu=pt and intel_iommu=on.
I would expect things to work as usual, but unfortunately the VF I'm taking
is unable to send or receive any packets (the TXQ gets filled out, and the
On 01/08/15 11:19, Vlad Zolotarov wrote:
>
> On 01/07/15 08:32, Ouyang Changchun wrote:
>> Check mq mode for VMDq RSS and handle it correctly instead of returning
>> an error;
>> also remove the limitation that the per-pool queue number has a max value of
>> 1, because
>> the per-pool queue number could