On Thu, Dec 14, 2017 at 5:17 PM, Willem de Bruijn
<willemdebruijn.ker...@gmail.com> wrote:
>>> Well, the patch does not fix hanging VMs, which have been shut down and
>>> can't be killed any more, because of the stack trace:
>>>
>>> [<ffffffffc0d0e3c5>] vhost_net_ubuf_put_and_wait+0x35/0x60 [vhost_net]
>>> [<ffffffffc0d0f264>] vhost_net_ioctl+0x304/0x870 [vhost_net]
>>> [<ffffffff9b25460f>] do_vfs_ioctl+0x8f/0x5c0
>>> [<ffffffff9b254bb4>] SyS_ioctl+0x74/0x80
>>> [<ffffffff9b00365b>] do_syscall_64+0x5b/0x100
>>> [<ffffffff9b78e7ab>] entry_SYSCALL64_slow_path+0x25/0x25
>>> [<ffffffffffffffff>] 0xffffffffffffffff
>>>
>>> I was hoping that the problems could be related - but that seems not to
>>> be true.
>>
>> However, it turned out that reverting the complete patchset "Remove UDP
>> Fragmentation Offload support" prevents hanging qemu processes.
>
> That implies a combination of UFO and vhost zerocopy. Disabling
> experimental_zcopytx in vhost_net will probably work around the bug
> then.
>
> On the surface the two features are independent. Most of the relevant
> UFO code is reverted with the patch mentioned earlier. Missing from
> that is protocol stack support, but it is unlikely that your host OS is
> generating these UFO packets.
>
> They are coming from a guest over virtio_net, to which vhost_net then
> applies zerocopy. Then the packets are either freed without calling
> uarg->callback() or queued somewhere for a very long time.
>
> Looking at the diff-of-diffs between my stable patch and your full revert,
> the majority of missing bits besides the protocol layer is in device driver
> support. Removing that causes the UFO packets to be segmented at any
> dev_queue_xmit on their path. skb_segment ensures that when it segments
> a large zerocopy packet, all new segments also point to the zerocopy
> callback struct (ubuf_info), as the shared memory pages may not be
> released until all skbs pointing to them are freed.
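For reference, the workaround mentioned above is a vhost_net module
parameter, so it can be applied without a kernel rebuild. A sketch of the
steps (assuming no running VMs hold the module, and that your distribution
reads /etc/modprobe.d/; adjust as needed):

```shell
# Reload vhost_net with zerocopy transmit disabled.
modprobe -r vhost_net
modprobe vhost_net experimental_zcopytx=0

# Optionally persist the setting across reboots.
echo "options vhost_net experimental_zcopytx=0" > /etc/modprobe.d/vhost-net.conf
```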
>
> That may be wrong with vhost_zerocopy_callback, which does not use
> refcounting. I will look into that. It may be that before the msg_zerocopy
> patchsets large packets were copied before entering segmentation. It is
> safe to enter segmentation for msg_zerocopy skbs, but not legacy zerocopy
> skbs.
If this is the cause, then the following, while not a real solution, would
probably also resolve the observed issue.

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index e140ba49b30a..8fe5bca1d6ae 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3655,10 +3655,10 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 			skb_copy_from_linear_data_offset(head_skb, offset,
 							 skb_put(nskb, hsize),
 							 hsize);
 
+		if (unlikely(skb_orphan_frags_rx(head_skb, GFP_ATOMIC)))
+			goto err;
+
 		skb_shinfo(nskb)->tx_flags |= skb_shinfo(head_skb)->tx_flags &
 					      SKBTX_SHARED_FRAG;
-		if (skb_zerocopy_clone(nskb, head_skb, GFP_ATOMIC))
-			goto err;

This basically converts zerocopy TSO skbs to regular skbs and calls their
uarg->callback just before segmenting them.