On Mon Oct 07 2024, Maciej Fijalkowski wrote:
>> +bool igb_xmit_zc(struct igb_ring *tx_ring)
>> +{
>> + unsigned int budget = igb_desc_unused(tx_ring);
>> + struct xsk_buff_pool *pool = tx_ring->xsk_pool;
>> + u32 cmd_type, olinfo_status, nb_pkts, i = 0;
>> + struct xdp_desc *descs = pool->tx_descs;
>> + union e1000_adv_tx_desc *tx_desc = NULL;
>> + struct igb_tx_buffer *tx_buffer_info;
>> + unsigned int total_bytes = 0;
>> + dma_addr_t dma;
>> +
>> + if (!netif_carrier_ok(tx_ring->netdev))
>> + return true;
>> +
>> + if (test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags))
>> + return true;
>> +
>> + nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
>> + if (!nb_pkts)
>> + return true;
>> +
>> + while (nb_pkts-- > 0) {
>> + dma = xsk_buff_raw_get_dma(pool, descs[i].addr);
>> + xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
>> +
>> + tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
>> + tx_buffer_info->bytecount = descs[i].len;
>> + tx_buffer_info->type = IGB_TYPE_XSK;
>> + tx_buffer_info->xdpf = NULL;
>> + tx_buffer_info->gso_segs = 1;
>> + tx_buffer_info->time_stamp = jiffies;
>> +
>> + tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use);
>> + tx_desc->read.buffer_addr = cpu_to_le64(dma);
>> +
>> + /* put descriptor type bits */
>> + cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
>> + E1000_ADVTXD_DCMD_IFCS;
>> + olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT;
>> +
>> + cmd_type |= descs[i].len | IGB_TXD_DCMD;
>
> I forgot if we spoke about this but you still set RS bit for each produced
> desc. Probably we agreed that since cleaning side is shared with 'slow'
> path it would be too much of an effort to address that?

Yes, and I believe we agreed that this needs to be addressed later, also
for igc.

> Could you add a FIXME/TODO here so that we won't lose this from our
> radars?

Sure.

Thanks,
Kurt
