4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Haiyang Zhang <haiya...@microsoft.com>

[ Upstream commit 6b81b193b83e87da1ea13217d684b54fccf8ee8a ]

If the outgoing ring is temporarily full and the receive completion
cannot be sent, we may still need to reschedule NAPI if certain
conditions are met; otherwise NAPI polling might stop forever and
cause a network disconnect. Previously, a failed
send_recv_completions() short-circuited the condition chain before
napi_complete_done() was called, so the poll routine could return
under budget without either completing or rescheduling NAPI.

Fixes: 7426b1a51803 ("netvsc: optimize receive completions")
Signed-off-by: Stephen Hemminger <step...@networkplumber.org>
Signed-off-by: Haiyang Zhang <haiya...@microsoft.com>
Signed-off-by: David S. Miller <da...@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 drivers/net/hyperv/netvsc.c |   17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -1250,6 +1250,7 @@ int netvsc_poll(struct napi_struct *napi
        struct hv_device *device = netvsc_channel_to_device(channel);
        struct net_device *ndev = hv_get_drvdata(device);
        int work_done = 0;
+       int ret;
 
        /* If starting a new interval */
        if (!nvchan->desc)
@@ -1261,16 +1262,18 @@ int netvsc_poll(struct napi_struct *napi
                nvchan->desc = hv_pkt_iter_next(channel, nvchan->desc);
        }
 
-       /* If send of pending receive completions suceeded
-        *   and did not exhaust NAPI budget this time
-        *   and not doing busy poll
+       /* Send any pending receive completions */
+       ret = send_recv_completions(ndev, net_device, nvchan);
+
+       /* If it did not exhaust NAPI budget this time
+        *  and not doing busy poll
         * then re-enable host interrupts
-        *     and reschedule if ring is not empty.
+        *  and reschedule if ring is not empty
+        *   or sending receive completion failed.
         */
-       if (send_recv_completions(ndev, net_device, nvchan) == 0 &&
-           work_done < budget &&
+       if (work_done < budget &&
            napi_complete_done(napi, work_done) &&
-           hv_end_read(&channel->inbound) &&
+           (ret || hv_end_read(&channel->inbound)) &&
            napi_schedule_prep(napi)) {
                hv_begin_read(&channel->inbound);
                __napi_schedule(napi);
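
For reviewers, a simplified sketch of how the completion path in
netvsc_poll() reads after this patch. This is just the hunk above
condensed (declarations and the function's return are elided, and the
names are the driver's own):

	/* Post-patch logic in netvsc_poll(), condensed.
	 * Ordering matters: napi_complete_done() now runs whenever the
	 * budget was not exhausted, and a failed completion send
	 * (ret != 0) forces a reschedule even if the inbound ring is
	 * empty, so the pending receive completions get retried.
	 */
	ret = send_recv_completions(ndev, net_device, nvchan);

	if (work_done < budget &&			/* budget not exhausted */
	    napi_complete_done(napi, work_done) &&	/* false while busy polling */
	    (ret || hv_end_read(&channel->inbound)) &&	/* send failed, or ring not
							 * empty (hv_end_read also
							 * re-enables host interrupts) */
	    napi_schedule_prep(napi)) {
		hv_begin_read(&channel->inbound);	/* mask interrupts for polling */
		__napi_schedule(napi);			/* poll again soon */
	}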

