this mailing list...
> is there any other way to search them?
>
> Thanks,
>
> Francesco Montorsi
>
>
>
--
Andriy Berestovskyy
Hey folks,
> On 28 Jul 2016, at 17:47, De Lara Guarch, Pablo intel.com> wrote:
> Fair enough. So you mean to use rte_eth_dev_attach in ethdev library and
> a similar function in cryptodev library?
There is a rte_eth_dev_get_port_by_name() which gets the port id right after
the
On behalf of contributors, thank you so much to all the reviewers and maintainers, and
a very big thank you to Thomas for your great job, help and patience ;)
Regards,
Andriy
> On 28 Jul 2016, at 23:39, Thomas Monjalon
> wrote:
>
> Once again, a great release from the impressive DPDK community:
>
already most of the way there.
>
> If people are going to continue to block it because it is a kernel module,
> then IMO, it's better to leave the existing support on igb / ixgbe in place
> instead of stepping backwards to zero support for ethtool.
>
>> While the code wasn't ready at the time, it was a definite improvement
>> over what
>> > we have with KNI today.
>>
--
Andriy Berestovskyy
>> > As an app developer, I didn't realize the max frame size didn't include
>> > VLAN tags. I expected max frame size to be the size of the ethernet
>> > frame
>> > on the wire, which I would expect to include space used by any VLAN or
>> > MPLS
>> > tags.
>> >
>> > Is there anything in the docs or example apps about that? I did some
>> > digging as I was debugging this and didn't notice it, but entirely
>> > possible
>> > I just missed it.
>> >
>> >
>> > > I'm not sure there is a works-in-all-cases solution here.
>> > >
>> >
>> > Andriy's suggestion seems like it points in the right direction.
>> >
>> > From an app developer point of view, I'd expect to have a single max
>> > frame
>> > size value to track and the APIs should take care of any adjustments
>> > required internally. Maybe have rte_pktmbuf_pool_create() add the
>> > additional bytes when it calls rte_mempool_create() under the covers?
>> > Then
>> > it's nice and clean for the API without unexpected side-effects.
>> >
>>
>> It will still have unintended side-effects I think, depending on the
>> resolution
>> of the NIC buffer length parameters. For drivers like ixgbe or e1000, the
>> mempool
>> create call could potentially have to add an additional 1k to each buffer
>> just
>> to be able to store the extra eight bytes.
>
>
> The comments in the ixgbe driver say that the value programmed into SRRCTL
> must be on a 1K boundary. Based on your previous response, it sounded like
> the NIC ignores that limit for VLAN tags, hence the check for the extra 8
> bytes on the mbuf element size. Are you worried about the size resolution on
> mempool elements?
>
> Sounds like I've got to go spend some quality time in the NIC data sheets...
> Maybe I should back up and just ask the higher level question:
>
> What's the right incantation in both the dev_conf structure and in creating
> the mbuf pool to support jumbo frames of some particular size on the wire,
> with or without VLAN tags, without requiring scattered_rx support in an app?
>
> Thanks,
> Jay
--
Andriy Berestovskyy
Hi Jay,
On Tue, Apr 19, 2016 at 10:16 PM, Jay Rolette wrote:
> Should the driver error out in that case instead of only "sort of" working?
+1, we hit the same issue. Error or log message would help.
> If I support a max frame size of 9216 bytes (exactly a 1K multiple to make
> the NIC happy),
Hi Ick-Sung,
Please see inline.
On Mon, Apr 18, 2016 at 2:14 PM, Ick-Sung wrote:
> If I take an example, the worker assignment method using (not %) in
> load balancing was not fixed yet.
If the code works, there is nothing to fix, right? ;)
> Question #1) I would like to know how can I
out-of-band LACP messages will not be handled with
> the expected latency and this may cause the link status to be incorrectly
> marked as down or failure to correctly negotiate with peers.
>
>
> can any one give me example or more detail info ?
>
> I am extremely grateful for it.
--
Andriy Berestovskyy
>>
>> /Arnon
>
> For me, breaking stuff with a black background to gain questionably useful
> colors and/or themes seems like more overhead for cognition of the code for
> not much benefit.
>
> This is going to break the tool people who use a Linux standard framebuffer
> with no X also, isn't it?
>
> Matthew.
--
Andriy Berestovskyy
On Tue, Dec 8, 2015 at 3:47 PM, Andriy Berestovskyy
wrote:
> Fragmented IPv4 packets have no TCP/UDP headers, so we hashed
> random data introducing reordering of the fragments.
Signed-off-by: Andriy Berestovskyy
On Tue, Dec 8, 2015 at 2:23 PM, Andriy Berestovskyy
wrote:
> The following messages might appear after some idle time:
> "PMD: Failed to allocate LACP packet from pool"
>
> The fix ensures the mempool size is greater than the sum
> of TX descriptors.
Signed-off-by: Andriy Berestovskyy
Fragmented IPv4 packets have no TCP/UDP headers, so we hashed
random data introducing reordering of the fragments.
---
drivers/net/bonding/rte_eth_bond_pmd.c | 26 +++---
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
The following messages might appear after some idle time:
"PMD: Failed to allocate LACP packet from pool"
The fix ensures the mempool size is greater than the sum
of TX descriptors.
---
drivers/net/bonding/rte_eth_bond_8023ad.c | 24 +++-
1 file changed, 15 insertions(+), 9
> + slave_details->nb_rx_queues =
> + bond_nb_rx_queues > slave_dev_info->max_rx_queues
> + ? slave_dev_info->max_rx_queues
> + : bond_nb_rx_queues;
> + slave_details->nb_tx_queues =
> + bond_nb_tx_queues > slave_dev_info->max_tx_queues
> + ? slave_dev_info->max_tx_queues
> + : bond_nb_tx_queues;
> +
> /* If slave device doesn't support interrupts then we need to enabled
> * polling to monitor link status */
> if (!(slave_eth_dev->data->dev_flags & RTE_PCI_DRV_INTR_LSC)) {
> diff --git a/drivers/net/bonding/rte_eth_bond_private.h
> b/drivers/net/bonding/rte_eth_bond_private.h
> index 6c47a29..02f6de1 100644
> --- a/drivers/net/bonding/rte_eth_bond_private.h
> +++ b/drivers/net/bonding/rte_eth_bond_private.h
> @@ -101,6 +101,8 @@ struct bond_slave_details {
> uint8_t link_status_poll_enabled;
> uint8_t link_status_wait_to_complete;
> uint8_t last_link_status;
> + uint16_t nb_rx_queues;
> + uint16_t nb_tx_queues;
> /**< Port Id of slave eth_dev */
> struct ether_addr persisted_mac_addr;
>
> @@ -240,7 +242,8 @@ slave_remove(struct bond_dev_private *internals,
>
> void
> slave_add(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev);
> + struct rte_eth_dev *slave_eth_dev,
> + const struct rte_eth_dev_info *slave_dev_info);
>
> uint16_t
> xmit_l2_hash(const struct rte_mbuf *buf, uint8_t slave_count);
> --
> 2.1.4
>
--
Andriy Berestovskyy
>>
>> ERROR HwEmulDPDKPort::init() rte_eth_dev_configure: err=-22, port=0:
>> Unknown error -22
>> EAL: PCI device :03:00.0 on NUMA socket 0
>> EAL: remove driver: 8086:105e rte_em_pmd
>> EAL: PCI memory unmapped at 0x7feb4000
>> EAL: PCI memory unmapped at 0x7feb4002
>>
>> So, for those devices I want to use nb_rx_q=1...
>>
>> Thanks,
>>
>> Francesco Montorsi
>
--
Andriy Berestovskyy
Thanks for pointing the discussion out to me. I somehow missed it.
> Unfortunately it looks like the discussion stopped after Maryam made a
> good proposal so I will vote in on that and hopefully get things started
> again.
>
> Best regards,
> Martin
>
>
>
> On 21.10.15 17:53, Andriy Berestovskyy wrote:
>
>
> When running the exact same test with DPDK version 2.0 no ierrors are
> reported.
> Is anyone else seeing strange ierrors being reported for Intel Niantic
> cards with DPDK 2.1?
>
> Best regards,
> Martin
>
--
Andriy Berestovskyy
*pu)
>> > > return -1;
>> > >
>> > > dev->features = *pu;
>> > > - if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
>> > > - LOG_DEBUG(VHOST_CONFIG,
>> > > - "(%"PRIu64") Mergeable RX buffers enabled\n",
>> > > - dev->device_fh);
>> > > + if (dev->features &
>> > > + ((1 << VIRTIO_NET_F_MRG_RXBUF) | (1ULL << VIRTIO_F_VERSION_1))) {
>> > > vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
>> > > } else {
>> > > - LOG_DEBUG(VHOST_CONFIG,
>> > > - "(%"PRIu64") Mergeable RX buffers disabled\n",
>> > > - dev->device_fh);
>> > > vhost_hlen = sizeof(struct virtio_net_hdr);
>> > > }
>> > > + LOG_DEBUG(VHOST_CONFIG,
>> > > + "(%"PRIu64") Mergeable RX buffers %s, virtio 1 %s\n",
>> > > + dev->device_fh,
>> > > + (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" :
>> > > "off",
>> > > + (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" :
>> > > "off");
>> > >
>> > > for (i = 0; i < dev->virt_qp_nb; i++) {
>> > > uint16_t base_idx = i * VIRTIO_QNUM;
>> > > --
>> > > 2.1.0
--
Andriy Berestovskyy
Hi Maryam,
Please see below.
> XEC counts the Number of receive IPv4, TCP, UDP or SCTP XSUM errors
Please note that the UDP checksum is optional for IPv4, but UDP packets with a zero
checksum hit XEC.
> And the general CRC errors counter counts the number of receive packets with CRC
> errors.
Let me
Hi,
Updating to DPDK 2.1 I noticed an issue with the ixgbe stats.
In commit f6bf669b9900 "ixgbe: account more Rx errors" we added the XEC
hardware counter (l3_l4_xsum_error) to ierrors. The issue is that
UDP packets with a zero checksum are counted in XEC, and now in
ierrors too.
I've tried to
SFP+ (rev 01).
>
> What is more, is there any particular reason for assuming in
> i40e_xmit_pkts that offloading checksums is unlikely (I mean the line no
> 1307 "if (unlikely(ol_flags & I40E_TX_CKSUM_OFFLOAD_MASK))" at
> dpdk-2.0.0/lib/librte_pmd_i40e/i40e_rxtx.c)?
>
> Regards,
> Angela
--
Andriy Berestovskyy
Hi Zoltan,
On Fri, May 29, 2015 at 7:00 PM, Zoltan Kiss wrote:
> The easy way is just to increase your buffer pool's size to make
> sure that doesn't happen.
Go for it!
> But there is no bulletproof way to calculate such
> a number
Yeah, there are many places for mbufs to stay :( I would
>> >>> CONFIG_RTE_LIBRTE_VHOST=y
>> >>> CONFIG_RTE_LIBRTE_VHOST_USER=y
>> >>> CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
>> >>>
>> >>> then I run vhost app based on documentation:
>> >>>
>> >>> ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem
>> >>> 3712
>> >>> -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
>> >>>
>> >>> - I use this strange --socket-mem 3712 because of the physical limit of
>> >>> memory on the device. With this vhost user I run two KVM machines with
>> >>> the following parameters:
>> >>>
>> >>> kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu
>> >>> host -smp 2 -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m
>> >>> 1024 -mem-path /mnt/huge -mem-prealloc -chardev
>> >>> socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost
>> >>> -netdev type=vhost-user,id=hostnet1,chardev=char1
>> >>> -device virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
>> >>> -chardev
>> >>> socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost
>> >>> -netdev type=vhost-user,id=hostnet2,chardev=char2
>> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
>> >>>
>> >>> After running KVM virtio correctly starting (below logs from vhost app)
>> >> ...
>> >>> VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae0 sz:0xa
>> >>> off:0x0
>> >>> VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb00 sz:0x1000
>> >>> off:0xc
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>> >>> VHOST_CONFIG: vring kick idx:0 file:38
>> >>> VHOST_CONFIG: virtio isn't ready for processing.
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>> >>> VHOST_CONFIG: vring kick idx:1 file:39
>> >>> VHOST_CONFIG: virtio is now ready for processing.
>> >>> VHOST_DATA: (1) Device has been added to data core 2
>> >>>
>> >>> So everything looking good.
>> >>>
>> >>> Maybe it is something trivial, but using the options --vm2vm 1 (or 2)
>> >>> --stats 9, it seems that I have no VM-to-VM
>> >>> connectivity. I set the IPs manually for eth0 and eth1:
>> >>>
>> >>> on 1 VM
>> >>> ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up ifconfig eth1
>> >>> 192.168.1.101 netmask 255.255.255.0 up
>> >>>
>> >>> on 2 VM
>> >>> ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up ifconfig eth1
>> >>> 192.168.0.202 netmask 255.255.255.0 up
>> >>>
>> >>> I noticed that in the vhost app the rx/tx queues are unidirectional, so I
>> >>> tried to ping between VM1 and VM2 using both interfaces: ping -I eth0
>> >>> 192.168.1.200 ping -I
>> >>> eth1 192.168.1.200 ping -I eth0 192.168.0.202 ping -I eth1
>> >>> 192.168.0.202
>> >>>
>> >>> on VM2 using tcpdump on both interfaces I didn't see any ICMP requests
>> >>> or traffic
>> >>>
>> >>> And I cant ping between any IP/interfaces, moreover stats show me that:
>> >>>
>> >>> Device statistics
>> >>> Statistics for device 0 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 1 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 2 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 3 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> ==
>> >>>
>> >>> So it seems that no packets leave my VM;
>> >>> also the ARP table is empty on each VM.
>> >>
>> >>
>>
>>
--
Andriy Berestovskyy
Hey guys,
Can we, in bond_mode_8023ad_activate_slave(), try to add the bond and
LACP multicast MACs to the slave first? Then we would fall back
to promiscuous mode only if the add fails.
In other words:
if (rte_eth_dev_mac_addr_add(slave_id, bond_mac) != 0
||