>From: [email protected] [mailto:[email protected]]
>On Behalf Of Kavanagh, Mark B
>Sent: Monday, September 18, 2017 3:36 PM
>To: Nitin Katiyar <[email protected]>; [email protected]
>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>
>>From: Nitin Katiyar [mailto:[email protected]]
>>Sent: Monday, September 18, 2017 3:20 PM
>>To: Kavanagh, Mark B <[email protected]>; [email protected]
>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>
>>Hi,
>>Yes, a VLAN tag is configured on the VHU port, so traffic from the VM would be
>>VLAN-tagged. Why is this different from the 10G (ixgbe) driver? It should
>>accept packets that match the configured MTU. Is this the expected behavior
>>with the i40e driver?
>
>In this instance, the behavior is determined by OvS, and not the DPDK driver;
>see the code snippets below from netdev-dpdk.c:
>
><snip>
>#define MTU_TO_FRAME_LEN(mtu) ((mtu) + ETHER_HDR_LEN + ETHER_CRC_LEN)
># As you can see, the VLAN header is not accounted for as part of a packet's
>overhead
></snip>
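(For illustration only: a hypothetical variant of that macro, which also budgets
for up to two stacked VLAN tags, could look like the sketch below. The
MTU_TO_MAX_FRAME_LEN and VLAN_TAG_LEN names are made up for this sketch; this is
not what netdev-dpdk.c currently defines.)

/* Hypothetical sketch only -- not current OvS code. */
#define ETHER_HDR_LEN   14   /* dst MAC + src MAC + ethertype */
#define ETHER_CRC_LEN    4
#define VLAN_TAG_LEN     4   /* one 802.1Q tag */
#define MTU_TO_MAX_FRAME_LEN(mtu) \
    ((mtu) + ETHER_HDR_LEN + (2 * VLAN_TAG_LEN) + ETHER_CRC_LEN)
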
Addendum:
Having looked at the driver code, it seems that both the ixgbe and i40e
drivers do indeed make some allowance for VLAN-tagged packets.
However, there are some differences in how the Rx buffers are sized:
int __attribute__((cold))
ixgbe_dev_rx_init(struct rte_eth_dev *dev)
{
    ...
        /*
         * Configure the RX buffer size in the BSIZEPACKET field of
         * the SRRCTL register of the queue.
         * The value is in 1 KB resolution. Valid values can be from
         * 1 KB to 16 KB.
         */
        buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
                              RTE_PKTMBUF_HEADROOM);
        ...
        buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
                               IXGBE_SRRCTL_BSIZEPKT_SHIFT);

        /* It adds dual VLAN length for supporting dual VLAN */
        if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
                2 * IXGBE_VLAN_TAG_SIZE > buf_size)
            dev->data->scattered_rx = 1;
    }
    ...
}

/* Init the RX queue in hardware */
int
i40e_rx_queue_init(struct i40e_rx_queue *rxq)
{
    ...
    buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
                          RTE_PKTMBUF_HEADROOM);

    /* Check if scattered RX needs to be used. */
    if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) {
        dev_data->scattered_rx = 1;
    }
    ...
}
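To put rough numbers on it, here is a standalone back-of-the-envelope check
(not OvS or DPDK code) using the MTU of 2140 reported earlier in the thread:

/* Standalone sanity check of the frame-length arithmetic from this thread. */
#include <stdio.h>

#define ETHER_HDR_LEN   14
#define ETHER_CRC_LEN    4
#define VLAN_TAG_LEN     4
#define MTU_TO_FRAME_LEN(mtu) ((mtu) + ETHER_HDR_LEN + ETHER_CRC_LEN)

int main(void)
{
    int mtu = 2140;                                   /* MTU reported by Nitin */
    int max_rx_pkt_len = MTU_TO_FRAME_LEN(mtu);       /* what OvS programs: 2158 */
    int tagged_frame = max_rx_pkt_len + VLAN_TAG_LEN; /* on-wire with one tag: 2162 */

    printf("max_rx_pkt_len = %d, single-tagged frame = %d (over by %d B)\n",
           max_rx_pkt_len, tagged_frame, tagged_frame - max_rx_pkt_len);
    return 0;
}

So a single VLAN tag pushes the frame 4 B past the limit that OvS programs,
which matches the 4-byte discrepancy you observed.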
If VLAN-tagged packets are accepted by one NIC but not the other, then that most
likely does point to an inconsistency between the DPDK drivers.
I'd advise you to raise this issue on the DPDK dev mailing list, and let the
thread on this list die, since it seems to be a DPDK-specific issue.
- Mark
>
>...
>
>
><snip>
> static int
> netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
>     OVS_REQUIRES(dpdk_mutex)
>     OVS_REQUIRES(dev->mutex)
> {
>     ...
>     dpdk_mp_put(dev->dpdk_mp);
>     dev->dpdk_mp = mp;
>     dev->mtu = dev->requested_mtu;
>     dev->socket_id = dev->requested_socket_id;
>     dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);
>     # This line uses the MTU_TO_FRAME_LEN macro to set the upper size
>     # limit on packets that the NIC will accept.
>     # The NIC is subsequently configured with this value.
>     ...
> }
></snip>
>
>...
>
><snip>
> static int
> dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
> {
>     ...
>     if (dev->mtu > ETHER_MTU) {
>         conf.rxmode.jumbo_frame = 1;
>         conf.rxmode.max_rx_pkt_len = dev->max_packet_len;
>         # The max Rx packet length is set in the NIC's config object.
>     ...
>     diag = rte_eth_dev_configure(dev->port_id, n_rxq, n_txq, &conf);
>     # The NIC's max Rx packet length is actually set here.
>     ...
> }
></snip>
>
>Hope this helps,
>Mark
>
>>
>>Thanks,
>>Nitin
>>
>>-----Original Message-----
>>From: Kavanagh, Mark B [mailto:[email protected]]
>>Sent: Monday, September 18, 2017 7:44 PM
>>To: Nitin Katiyar <[email protected]>; [email protected]
>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>
>>>From: Nitin Katiyar [mailto:[email protected]]
>>>Sent: Monday, September 18, 2017 3:02 PM
>>>To: Kavanagh, Mark B <[email protected]>;
>>>[email protected]
>>>Subject: RE: [ovs-dev] MTU in i40e dpdk driver
>>>
>>>Hi,
>>>It is set to 2140.
>>
>>That should accommodate a max packet length of 2158 (i.e. MTU + ETHER_HDR
>>(14B) + ETHER_CRC (4B)).
>>
>>Is the VM inside a VLAN by any chance? The presence of a VLAN tag would
>>account for the additional 4B.
>>
>>-Mark
>>
>>>
>>>compute-0-4:~# ovs-vsctl get Interface dpdk1 mtu
>>>2140
>>>
>>>Regards,
>>>Nitin
>>>
>>>-----Original Message-----
>>>From: Kavanagh, Mark B [mailto:[email protected]]
>>>Sent: Monday, September 18, 2017 7:26 PM
>>>To: Nitin Katiyar <[email protected]>; [email protected]
>>>Subject: Re: [ovs-dev] MTU in i40e dpdk driver
>>>
>>>>From: [email protected]
>>>>[mailto:[email protected]]
>>>>On Behalf Of Nitin Katiyar
>>>>Sent: Monday, September 18, 2017 2:05 PM
>>>>To: [email protected]
>>>>Subject: [ovs-dev] MTU in i40e dpdk driver
>>>>
>>>>Hi,
>>>>We are using OVS-DPDK (version 2.6) with a Fortville NIC (configured in 25G
>>>>mode) as the dpdk port. The setup involves 2 VMs running on 2 different
>>>>computes (the destination VM is on a compute with a 10G NIC, while the
>>>>originating VM is on a compute with a Fortville NIC). All the interfaces
>>>>in the path are configured with an MTU of 2140.
>>>>
>>>>While pinging with a size of 2112 (an IP packet of 2140 bytes), we found
>>>>that the ping response does not reach the originating VM (i.e. the one on
>>>>the compute with the Fortville NIC). The DPDK interface does not show any
>>>>drops, but we don't see any ping response received at the DPDK port
>>>>(verified using port mirroring). We also don't see any rule in ovs dpctl
>>>>for the ping response. If we increase the MTU of the DPDK interface by 4
>>>>bytes, or reduce the ping size by 4 bytes, then it works.
>>>>
>>>>The same configuration works between 10G NICs on both sides.
>>>>
>>>>Is this a known issue with the i40e dpdk driver?
>>>
>>>Hi Nitin,
>>>
>>>What is the MTU of the DPDK ports in this setup?
>>>
>>> ovs-vsctl get Interface <iface_name> mtu
>>>
>>>Thanks,
>>>Mark
>>>
>>>>
>>>>Regards,
>>>>Nitin
>
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev