> From: Brian Norris [mailto:[email protected]]
> 
> FYI, I have some more review comments because I'm trying to see why your
> TX path doesn't work all that well. At least, it's not reporting things
> correctly. (I know there's one ACK reporting bug you fixed in a
> follow-up patch, but then, that patch is buggy too I think.)
> 
> > +static int rtw_pci_xmit(struct rtw_dev *rtwdev,
> > +                   struct rtw_tx_pkt_info *pkt_info,
> > +                   struct sk_buff *skb, u8 queue)
> > +{
> > +   struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
> > +   struct rtw_chip_info *chip = rtwdev->chip;
> > +   struct rtw_pci_tx_ring *ring;
> > +   struct rtw_pci_tx_data *tx_data;
> > +   dma_addr_t dma;
> > +   u32 tx_pkt_desc_sz = chip->tx_pkt_desc_sz;
> > +   u32 tx_buf_desc_sz = chip->tx_buf_desc_sz;
> > +   u32 size;
> > +   u32 psb_len;
> > +   u8 *pkt_desc;
> > +   struct rtw_pci_tx_buffer_desc *buf_desc;
> > +   u32 bd_idx;
> > +
> > +   ring = &rtwpci->tx_rings[queue];
> > +
> > +   size = skb->len;
> > +
> > +   if (queue == RTW_TX_QUEUE_BCN)
> > +           rtw_pci_release_rsvd_page(rtwpci, ring);
> > +   else if (!avail_desc(ring->r.wp, ring->r.rp, ring->r.len))
> > +           return -ENOSPC;
> > +
> > +   pkt_desc = skb_push(skb, chip->tx_pkt_desc_sz);
> > +   memset(pkt_desc, 0, tx_pkt_desc_sz);
> > +   pkt_info->qsel = rtw_pci_get_tx_qsel(skb, queue);
> > +   rtw_tx_fill_tx_desc(pkt_info, skb);
> > +   dma = pci_map_single(rtwpci->pdev, skb->data, skb->len,
> > +                        PCI_DMA_TODEVICE);
> > +   if (pci_dma_mapping_error(rtwpci->pdev, dma))
> > +           return -EBUSY;
> > +
> > +   /* after this we got dma mapped, there is no way back */
> > +   buf_desc = get_tx_buffer_desc(ring, tx_buf_desc_sz);
> > +   memset(buf_desc, 0, tx_buf_desc_sz);
> > +   psb_len = (skb->len - 1) / 128 + 1;
> > +   if (queue == RTW_TX_QUEUE_BCN)
> > +           psb_len |= 1 << RTK_PCI_TXBD_OWN_OFFSET;
> > +
> > +   buf_desc[0].psb_len = cpu_to_le16(psb_len);
> > +   buf_desc[0].buf_size = cpu_to_le16(tx_pkt_desc_sz);
> > +   buf_desc[0].dma = cpu_to_le32(dma);
> > +   buf_desc[1].buf_size = cpu_to_le16(size);
> > +   buf_desc[1].dma = cpu_to_le32(dma + tx_pkt_desc_sz);
> > +
> > +   tx_data = rtw_pci_get_tx_data(skb);
> > +   tx_data->dma = dma;
> > +   skb_queue_tail(&ring->queue, skb);
> 
> IIUC, you have no locking for this queue. That seems like a bad idea. It
> then gets pulled off this queue in your ISR, again without a lock. So
> for example, if the only packet in your queue gets completed while you
> are trying to queue another one, you might corrupt the list.
> 

I think skb_queue_tail already takes the queue's own spinlock (sk_buff_head.lock)
to protect the list, so I cannot see how it could be corrupted. Or do I
misunderstand you?

Yan-Hsuan
