We need to ensure the loads from the descriptor are performed after the
MMIO store clearing the interrupts has completed; otherwise we might
still miss work.

A read back from the MMIO register will "push" the posted store, and
ioread32 has a barrier on weakly ordered architectures that will
order subsequent accesses.
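
For illustration, a minimal sketch of the pattern outside this driver,
assuming a hypothetical device with an interrupt status register REG_ISR
and a descriptor status word; all names here are made up and only the
iowrite32()/ioread32() flush idiom is the point:

	/* Ack the interrupts; this MMIO store may be posted. */
	iowrite32(IRQ_RXTX, base + REG_ISR);

	/*
	 * Read the register back: the read cannot complete until the
	 * posted store has reached the device, and ioread32() orders
	 * the subsequent descriptor loads on weakly ordered CPUs.
	 */
	ioread32(base + REG_ISR);

	/* Only now is it safe to re-check the descriptors for work. */
	if (desc_status_pending(ring))
		return budget;	/* keep NAPI polling */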

Signed-off-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
---
 drivers/net/ethernet/faraday/ftgmac100.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
index 45b8267..95bf5e8 100644
--- a/drivers/net/ethernet/faraday/ftgmac100.c
+++ b/drivers/net/ethernet/faraday/ftgmac100.c
@@ -1349,6 +1349,13 @@ static int ftgmac100_poll(struct napi_struct *napi, int budget)
                 */
                iowrite32(FTGMAC100_INT_RXTX,
                          priv->base + FTGMAC100_OFFSET_ISR);
+
+               /* Push the above (and provide a barrier vs. subsequent
+                * reads of the descriptor).
+                */
+               ioread32(priv->base + FTGMAC100_OFFSET_ISR);
+
+               /* Check RX and TX descriptors for more work to do */
                if (ftgmac100_check_rx(priv) ||
                    ftgmac100_tx_buf_cleanable(priv))
                        return budget;
-- 
2.9.3
