Re: [PATCH v3 64/75] x86/sev-es: Cache CPUID results for improved performance

2020-05-06 Thread Tom Lendacky




On 5/6/20 1:08 PM, Mike Stunes wrote:




On Apr 28, 2020, at 8:17 AM, Joerg Roedel  wrote:

From: Mike Stunes 

To avoid a future VMEXIT for a subsequent CPUID function, cache the
results returned by CPUID into an xarray.

[tl: coding standard changes, register zero extension]

Signed-off-by: Mike Stunes 
Signed-off-by: Tom Lendacky 
[ jroe...@suse.de: - Wrapped cache handling into vc_handle_cpuid_cached()
   - Used lower_32_bits() where applicable
   - Moved cache_index out of struct es_em_ctxt ]
Co-developed-by: Joerg Roedel 
Signed-off-by: Joerg Roedel 
---
arch/x86/kernel/sev-es-shared.c |  12 ++--
arch/x86/kernel/sev-es.c        | 119 +++++++++++++++++++++++++++++++-
2 files changed, 124 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 03095bc7b563..0303834d4811 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -744,6 +758,91 @@ static enum es_result vc_handle_mmio(struct ghcb *ghcb,
return ret;
}

+static unsigned long sev_es_get_cpuid_cache_index(struct es_em_ctxt *ctxt)
+{
+	unsigned long hi, lo;
+
+	/* Don't attempt to cache until the xarray is initialized */
+	if (!sev_es_cpuid_cache_initialized)
+		return ULONG_MAX;
+
+	lo = lower_32_bits(ctxt->regs->ax);
+
+	/*
+	 * CPUID 0x0000000d requires both RCX and XCR0, so it can't be
+	 * cached.
+	 */
+	if (lo == 0x0000000d)
+		return ULONG_MAX;
+
+	/*
+	 * Some callers of CPUID don't always set RCX to zero for CPUID
+	 * functions that don't require RCX, which can result in excessive
+	 * cached values, so RCX needs to be manually zeroed for use as part
+	 * of the cache index. Future CPUID values may need RCX, but since
+	 * they can't be known, they must not be cached.
+	 */
+	if (lo > 0x80000020)
+		return ULONG_MAX;


If the cache is shared across CPUs, do we also need to exclude function 0x1 
because it contains the LAPIC ID? (Or is the cache per-CPU?)


It's currently not a per-CPU cache, but given what you pointed out, it 
should be if we're going to keep function 0x1 in there. The question is 
how often that CPUID function is issued, since that determines whether 
(not) caching it matters. There's also the question of how often CPUID 
is issued overall, and whether the xarray lock could come under 
contention if the cache is not per-CPU.


Thanks,
Tom
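
As a concrete sketch of that per-CPU direction, each CPU could own its
own xarray, so per-CPU leaves such as 0x1 (LAPIC ID) stay correct and
the xarray lock is never shared across CPUs. The names, the entry
layout, and the init hook below are hypothetical, not from the posted
patch:

#include <linux/percpu.h>
#include <linux/xarray.h>

/* Hypothetical per-CPU variant of the CPUID result cache */
static DEFINE_PER_CPU(struct xarray, sev_es_cpuid_cache);

struct cpuid_cache_entry {
	unsigned long eax, ebx, ecx, edx;	/* cached register values */
};

static void sev_es_cpuid_cache_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		xa_init(per_cpu_ptr(&sev_es_cpuid_cache, cpu));
}

static struct cpuid_cache_entry *sev_es_cpuid_cache_lookup(unsigned long index)
{
	/* Touches only the local CPU's array, so no cross-CPU contention */
	return xa_load(this_cpu_ptr(&sev_es_cpuid_cache), index);
}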






Re: [PATCH] virtio_net: fix lockdep warning on 32 bit

2020-05-06 Thread David Miller
From: "Michael S. Tsirkin" 
Date: Tue, 5 May 2020 20:01:31 -0400

> - u64_stats_update_end(&rq->stats.syncp);
> + u64_stats_update_end_irqrestore(&rq->stats.syncp);

Need to pass flags to this function.
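
For reference, a sketch of the paired irqsave form being asked for: the
flags value returned by u64_stats_update_begin_irqsave() must be passed
back into u64_stats_update_end_irqrestore(). The stats field names below
follow virtio_net's rq stats but are only illustrative:

	unsigned long flags;

	flags = u64_stats_update_begin_irqsave(&rq->stats.syncp);
	rq->stats.packets++;		/* updates covered by the seqcount */
	rq->stats.bytes += len;
	u64_stats_update_end_irqrestore(&rq->stats.syncp, flags);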


Re: [vhost:vhost 8/22] drivers/virtio/virtio_mem.c:1375:20: error: implicit declaration of function 'kzalloc'; did you mean 'vzalloc'?

2020-05-06 Thread David Hildenbrand


> On 06.05.2020 at 22:28, Michael S. Tsirkin wrote:
> 
> On Tue, May 05, 2020 at 06:22:51PM +0200, David Hildenbrand wrote:
>>> On 05.05.20 18:20, Michael S. Tsirkin wrote:
>>> On Tue, May 05, 2020 at 05:46:44PM +0200, David Hildenbrand wrote:
 On 05.05.20 17:44, Michael S. Tsirkin wrote:
> On Tue, May 05, 2020 at 04:50:13PM +0200, David Hildenbrand wrote:
>> On 05.05.20 16:15, kbuild test robot wrote:
>>> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
>>> vhost
>>> head:   da1742791d8c0c0a8e5471f181549c4726a5c5f9
>>> commit: 7527631e900d464ed2d533f799cb0da2b29cc6f0 [8/22] virtio-mem: 
>>> Paravirtualized memory hotplug
>>> config: x86_64-randconfig-b002-20200505 (attached as .config)
>>> compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
>>> reproduce:
>>>git checkout 7527631e900d464ed2d533f799cb0da2b29cc6f0
>>># save the attached .config to linux build tree
>>>make ARCH=x86_64 
>>> 
>>> If you fix the issue, kindly add following tag as appropriate
>>> Reported-by: kbuild test robot 
>>> 
>>> All error/warnings (new ones prefixed by >>):
>>> 
>>>   drivers/virtio/virtio_mem.c: In function 'virtio_mem_probe':
> drivers/virtio/virtio_mem.c:1375:20: error: implicit declaration of 
> function 'kzalloc'; did you mean 'vzalloc'? 
> [-Werror=implicit-function-declaration]
>>> vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
>>>   ^~~
>>>   vzalloc
> drivers/virtio/virtio_mem.c:1375:18: warning: assignment makes 
> pointer from integer without a cast [-Wint-conversion]
>>> vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
>>> ^
> drivers/virtio/virtio_mem.c:1419:2: error: implicit declaration of 
> function 'kfree'; did you mean 'vfree'? 
> [-Werror=implicit-function-declaration]
>>> kfree(vm);
>>> ^
>>> vfree
>>>   cc1: some warnings being treated as errors
>>> 
>>> vim +1375 drivers/virtio/virtio_mem.c
>> 
>> Guess we simply need
>> 
>> #include <linux/slab.h>
>> 
>> to make it work for that config.
> 
> 
> OK I added that in the 1st commit that introduced virtio-mem.
 
 Thanks. I have some addon-patches ready, what's the best way to continue
 with these?
>>> 
>>> If these are bugfixes, just respin the series (including this fix).
>> 
>> There are two really minor bugfixes for corner-case error handling and
>> one simplification. I can squash them and resend, makes things easier.
> 
> OK try to do it ASAP, we don't want to repeat the drama we had with vdpa.
> 

Yeah, did some more testing today. Will send v3 out tomorrow.

Cheers!
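
For reference: kzalloc() and kfree() are declared in <linux/slab.h>,
while <linux/vmalloc.h> only provides the vzalloc()/vfree() family the
compiler suggested, which is why this randconfig build broke. A minimal
self-contained sketch of the pattern (the struct is illustrative):

#include <linux/slab.h>		/* declares kzalloc() and kfree() */

struct example_state {
	int dummy;
};

static struct example_state *example_alloc(void)
{
	/* kzalloc = kmalloc plus zeroing; returns NULL on failure */
	return kzalloc(sizeof(struct example_state), GFP_KERNEL);
}

static void example_free(struct example_state *s)
{
	kfree(s);
}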


Re: [vhost:vhost 8/22] drivers/virtio/virtio_mem.c:1375:20: error: implicit declaration of function 'kzalloc'; did you mean 'vzalloc'?

2020-05-06 Thread Michael S. Tsirkin
On Tue, May 05, 2020 at 06:22:51PM +0200, David Hildenbrand wrote:
> On 05.05.20 18:20, Michael S. Tsirkin wrote:
> > On Tue, May 05, 2020 at 05:46:44PM +0200, David Hildenbrand wrote:
> >> On 05.05.20 17:44, Michael S. Tsirkin wrote:
> >>> On Tue, May 05, 2020 at 04:50:13PM +0200, David Hildenbrand wrote:
>  On 05.05.20 16:15, kbuild test robot wrote:
> > tree:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
> > vhost
> > head:   da1742791d8c0c0a8e5471f181549c4726a5c5f9
> > commit: 7527631e900d464ed2d533f799cb0da2b29cc6f0 [8/22] virtio-mem: 
> > Paravirtualized memory hotplug
> > config: x86_64-randconfig-b002-20200505 (attached as .config)
> > compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
> > reproduce:
> > git checkout 7527631e900d464ed2d533f799cb0da2b29cc6f0
> > # save the attached .config to linux build tree
> > make ARCH=x86_64 
> >
> > If you fix the issue, kindly add following tag as appropriate
> > Reported-by: kbuild test robot 
> >
> > All error/warnings (new ones prefixed by >>):
> >
> >drivers/virtio/virtio_mem.c: In function 'virtio_mem_probe':
> >>> drivers/virtio/virtio_mem.c:1375:20: error: implicit declaration of 
> >>> function 'kzalloc'; did you mean 'vzalloc'? 
> >>> [-Werror=implicit-function-declaration]
> >  vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
> >^~~
> >vzalloc
> >>> drivers/virtio/virtio_mem.c:1375:18: warning: assignment makes 
> >>> pointer from integer without a cast [-Wint-conversion]
> >  vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
> >  ^
> >>> drivers/virtio/virtio_mem.c:1419:2: error: implicit declaration of 
> >>> function 'kfree'; did you mean 'vfree'? 
> >>> [-Werror=implicit-function-declaration]
> >  kfree(vm);
> >  ^
> >  vfree
> >cc1: some warnings being treated as errors
> >
> > vim +1375 drivers/virtio/virtio_mem.c
> 
>  Guess we simply need
> 
>   #include <linux/slab.h>
> 
>  to make it work for that config.
> >>>
> >>>
> >>> OK I added that in the 1st commit that introduced virtio-mem.
> >>
> >> Thanks. I have some addon-patches ready, what's the best way to continue
> >> with these?
> > 
> > If these are bugfixes, just respin the series (including this fix).
> 
> There are two really minor bugfixes for corner-case error handling and
> one simplification. I can squash them and resend, makes things easier.

OK try to do it ASAP, we don't want to repeat the drama we had with vdpa.

> The other stuff I have are extensions, I will send as add-on.
> 
> Thanks!
> 
> 
> -- 
> Thanks,
> 
> David / dhildenb



Re: [PATCH net-next 2/2] virtio-net: fix the XDP truesize calculation for mergeable buffers

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 04:21:15PM +0800, Jason Wang wrote:
> 
> On 2020/5/6 3:37 PM, Michael S. Tsirkin wrote:
> > On Wed, May 06, 2020 at 02:16:33PM +0800, Jason Wang wrote:
> > > We should not exclude headroom and tailroom when XDP is set. So this
> > > patch fixes this by initializing the truesize from PAGE_SIZE when XDP
> > > is set.
> > > 
> > > Cc: Jesper Dangaard Brouer
> > > Signed-off-by: Jason Wang
> > Seems too aggressive, we do not use up the whole page for the size.
> > 
> > 
> > 
> 
> For XDP yes, we do:
> 
> static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
>                                           struct ewma_pkt_len *avg_pkt_len,
>                                           unsigned int room)
> {
>         const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
>         unsigned int len;
> 
>         if (room)
>                 return PAGE_SIZE - room;
> 
> ...
> 
> Thanks

Hmm. But that's only for new buffers. Buffers that were outstanding
before xdp was attached don't use the whole page, do they?




Also, with TCP small queues, blocking the queue like that might be a problem.
Could you try to check the performance impact of this?
I looked at what other drivers do, and I see they tend to copy the skb
in the XDP_PASS case. ATM we normally don't, but should we?

-- 
MST


Re: performance bug in virtio net xdp

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 04:37:41PM +0800, Jason Wang wrote:
> 
> On 2020/5/6 4:08 PM, Michael S. Tsirkin wrote:
> > So for mergeable bufs, we use ewma machinery to guess the correct buffer
> > size. If we don't guess correctly, XDP has to do aggressive copies.
> > 
> > Problem is, xdp paths do not update the ewma at all, except
> > sometimes with XDP_PASS. So whatever we happen to have
> > before we attach XDP, will mostly stay around.
> 
> 
> It looks ok to me since we always use PAGE_SIZE when XDP is enabled in
> get_mergeable_buf_len()?
> 
> Thanks
> 

Oh right. Good point! Answered in another thread.

> > 
> > The fix is probably to update ewma unconditionally.
> > 


Re: performance bug in virtio net xdp

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 10:37:57AM +0200, Jesper Dangaard Brouer wrote:
> On Wed, 6 May 2020 04:08:27 -0400
> "Michael S. Tsirkin"  wrote:
> 
> > So for mergeable bufs, we use ewma machinery to guess the correct buffer
> > size. If we don't guess correctly, XDP has to do aggressive copies.
> > 
> > Problem is, xdp paths do not update the ewma at all, except
> > sometimes with XDP_PASS. So whatever we happen to have
> > before we attach XDP, will mostly stay around.
> > 
> > The fix is probably to update ewma unconditionally.
> 
> I personally find the code hard to follow, and (I admit) that it took
> me some time to understand this code path (so I might still be wrong).
> 
> In patch[1] I tried to explain (my understanding):
> 
>   In receive_mergeable() the frame size is more dynamic. There are two
>   basic cases: (1) the buffer size is based on an exponentially weighted
>   moving average (see DECLARE_EWMA) of the packet length. Or (2) in case
>   virtnet_get_headroom() returns any headroom, the buffer size is
>   PAGE_SIZE. The ctx pointer is in this case used for encoding two values:
>   the buffer len "truesize" and the headroom. In case (1), if the rx buffer
>   size is underestimated, the packet will have been split over more
>   buffers (num_buf info in virtio_net_hdr_mrg_rxbuf placed at the top of
>   the buffer area). If that happens, the XDP path does a xdp_linearize_page
>   operation.
> 
> 
> The EWMA code is not used when headroom is defined, which e.g. gets
> enabled when running XDP.
> 
> 
> [1] 
> https://lore.kernel.org/netdev/158824572816.2172139.135870273697123.stgit@firesoul/

You are right.
So I guess the problem is just inconsistency?

When the XDP program returns XDP_PASS, and it all fits in one page,
then we trigger
ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);

If it does not return XDP_PASS, or does not fit in one page,
then we don't.

Given XDP does not use ewma for sizing, let's not update the average
either.


> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer



Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 04:19:40PM +0800, Jason Wang wrote:
> 
> On 2020/5/6 3:53 PM, Michael S. Tsirkin wrote:
> > On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote:
> > > We tried to reserve space for vnet header before
> > > xdp.data_hard_start. But this is useless since the packet could be
> > > modified by XDP which may invalidate the information stored in the
> > > header and there's no way for XDP to know the existence of the vnet
> > > header currently.
> > What do you mean? Doesn't XDP_PASS use the header in the buffer?
> 
> 
> We don't, see 436c9453a1ac0 ("virtio-net: keep vnet header zeroed after
> processing XDP")
> 
> If there's any other place, it should be a bug.
> 
> 
> > 
> > > So let's just not reserve space for vnet header in this case.
> > In any case, we can find out XDP does head adjustments
> > if we need to.
> 
> 
> But an XDP program can modify the packets without adjusting headers.
> 
> Thanks

Then what's the problem?

> 
> > 
> > 
> > > Cc: Jesper Dangaard Brouer 
> > > Signed-off-by: Jason Wang 
> > > ---
> > >   drivers/net/virtio_net.c | 6 +++---
> > >   1 file changed, 3 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 11f722460513..98dd75b665a5 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct 
> > > net_device *dev,
> > >   page = xdp_page;
> > >   }
> > > - xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> > > - xdp.data = xdp.data_hard_start + xdp_headroom;
> > > + xdp.data_hard_start = buf + VIRTNET_RX_PAD;
> > > + xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
> > >   xdp.data_end = xdp.data + len;
> > >   xdp.data_meta = xdp.data;
> > >   xdp.rxq = &rq->xdp_rxq;
> > > @@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct 
> > > net_device *dev,
> > >* the descriptor on if we get an XDP_TX return code.
> > >*/
> > >   data = page_address(xdp_page) + offset;
> > > - xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> > > + xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
> > >   xdp.data = data + vi->hdr_len;
> > >   xdp.data_end = xdp.data + (len - vi->hdr_len);
> > >   xdp.data_meta = xdp.data;
> > > -- 
> > > 2.20.1
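
To make the pointer arithmetic of the receive_small() hunk concrete,
here is a small user-space sketch; the constants are assumed stand-ins,
not the driver's real values. It shows xdp.data staying at the same
offset while data_hard_start moves down, which is what pulls the vnet
header area into the XDP-writable region:

#include <stdio.h>

#define VIRTNET_RX_PAD	 16	/* assumed value, for illustration */
#define XDP_HEADROOM	256	/* assumed value, for illustration */
#define HDR_LEN		 12	/* sizeof(struct virtio_net_hdr_mrg_rxbuf) */

int main(void)
{
	char buf[4096];

	/* before the patch: the vnet header lies below data_hard_start */
	char *hard_before = buf + VIRTNET_RX_PAD + HDR_LEN;
	char *data_before = hard_before + XDP_HEADROOM;

	/* after the patch: the header area is part of the headroom */
	char *hard_after = buf + VIRTNET_RX_PAD;
	char *data_after = hard_after + XDP_HEADROOM + HDR_LEN;

	printf("xdp.data offset unchanged: %td == %td\n",
	       data_before - buf, data_after - buf);
	printf("writable headroom grows by %td bytes\n",
	       (data_after - hard_after) - (data_before - hard_before));
	return 0;
}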


Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 04:34:36PM +0800, Jason Wang wrote:
> 
> On 2020/5/6 4:21 PM, Jesper Dangaard Brouer wrote:
> > On Wed,  6 May 2020 14:16:32 +0800
> > Jason Wang  wrote:
> > 
> > > We tried to reserve space for vnet header before
> > > xdp.data_hard_start. But this is useless since the packet could be
> > > modified by XDP which may invalidate the information stored in the
> > > header and
> > IMHO the above statements are wrong. XDP cannot access memory before
> > xdp.data_hard_start. Thus, it is safe to store a vnet header before
> > xdp.data_hard_start. (The sfc driver also uses this "before" area).
> 
> 
> The problem is that if we place the vnet header before data_hard_start,
> virtio-net will fail any header adjustment.
> 
> Or do you mean to copy the vnet header to before data_hard_start before
> processing XDP?
> 
> 
> > 
> > > there's no way for XDP to know the existence of the vnet header currently.
> > It is true that XDP is unaware of this area, which is the way it
> > should be.  Currently the area will survive after calling BPF/XDP.
> > After your change it will be overwritten in xdp_frame cases.
> > 
> > 
> > > So let's just not reserve space for vnet header in this case.
> > I think this is a wrong approach!
> > 
> > We are working on supporting GRO multi-buffer for XDP.  The vnet header
> > contains GRO information (see the pahole output below my signature).
> 
> 
> Another note is that since we need to reserve room for skb_shared_info, GRO
> for XDP will probably lead to more frag lists.
> 
> 
> >   It is currently not
> > used in the XDP case, but we will be working towards using it.
> 
> 
> Good to know that, but I think it can only work when the packet is not
> modified by XDP?
> 
> 
> > There
> > are a lot of unanswered questions on how this will be implemented.
> > Thus, I cannot lay out how we are going to leverage this info yet, but
> > your patch is killing this info, which IMHO is going in the wrong
> > direction.
> 
> 
> I can copy the vnet header ahead of data_hard_start; does that work for you?
> 
> Thanks

That's likely to be somewhat expensive.


> 
> > 
> > 
> > > Cc: Jesper Dangaard Brouer 
> > > Signed-off-by: Jason Wang 
> > > ---
> > >   drivers/net/virtio_net.c | 6 +++---
> > >   1 file changed, 3 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 11f722460513..98dd75b665a5 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct 
> > > net_device *dev,
> > >   page = xdp_page;
> > >   }
> > > - xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> > > - xdp.data = xdp.data_hard_start + xdp_headroom;
> > > + xdp.data_hard_start = buf + VIRTNET_RX_PAD;
> > > + xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
> > >   xdp.data_end = xdp.data + len;
> > >   xdp.data_meta = xdp.data;
> > >   xdp.rxq = &rq->xdp_rxq;
> > > @@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct 
> > > net_device *dev,
> > >* the descriptor on if we get an XDP_TX return code.
> > >*/
> > >   data = page_address(xdp_page) + offset;
> > > - xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> > > + xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
> > >   xdp.data = data + vi->hdr_len;
> > >   xdp.data_end = xdp.data + (len - vi->hdr_len);
> > >   xdp.data_meta = xdp.data;
> > 
> > 


Re: [GIT PULL] vhost: fixes

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 11:25:46AM +0200, Stefano Garzarella wrote:
> On Wed, May 06, 2020 at 03:19:55AM -0400, Michael S. Tsirkin wrote:
> > On Wed, May 06, 2020 at 03:28:47AM +, Justin He wrote:
> > > Hi Michael
> > > 
> > > > -Original Message-
> > > > From: Michael S. Tsirkin 
> > > > Sent: Monday, May 4, 2020 8:16 PM
> > > > To: Linus Torvalds 
> > > > Cc: k...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > > > net...@vger.kernel.org; linux-ker...@vger.kernel.org; Justin He
> > > > ; ldi...@redhat.com; m...@redhat.com; n...@live.com;
> > > > stefa...@redhat.com
> > > > Subject: [GIT PULL] vhost: fixes
> > > >
> > > > The following changes since commit
> > > > 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c:
> > > >
> > > >   Linux 5.7-rc3 (2020-04-26 13:51:02 -0700)
> > > >
> > > > are available in the Git repository at:
> > > >
> > > >   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
> > > > tags/for_linus
> > > >
> > > > for you to fetch changes up to
> > > > 0b841030625cde5f784dd62aec72d6a766faae70:
> > > >
> > > >   vhost: vsock: kick send_pkt worker once device is started (2020-05-02
> > > > 10:28:21 -0400)
> > > >
> > > > 
> > > > virtio: fixes
> > > >
> > > > A couple of bug fixes.
> > > >
> > > > Signed-off-by: Michael S. Tsirkin 
> > > >
> > > > 
> > > > Jia He (1):
> > > >   vhost: vsock: kick send_pkt worker once device is started
> > > 
> > > Should this fix also be CC-ed to stable? Sorry I forgot to cc it to 
> > > stable.
> > > 
> > > --
> > > Cheers,
> > > Justin (Jia He)
> > 
> > 
> > Go ahead, though recently just including Fixes seems to be enough.
> > 
> 
> The following patch Justin refers to does not contain the "Fixes:" tag:
> 
> 0b841030625c vhost: vsock: kick send_pkt worker once device is started
> 
> 
> I think we should merge it on stable branches, so if needed, I can backport
> and send it.
> 
> Stefano

Go ahead.

-- 
MST



Re: [PATCH] iommu/virtio: reverse arguments to list_add

2020-05-06 Thread Jean-Philippe Brucker
On Tue, May 05, 2020 at 08:47:47PM +0200, Julia Lawall wrote:
> Elsewhere in the file, there is a list_for_each_entry with
> &vdev->resv_regions as the second argument, suggesting that
> &vdev->resv_regions is the list head.  So exchange the
> arguments on the list_add call to put the list head in the
> second argument.
> 
> Fixes: 2a5a31487445 ("iommu/virtio: Add probe request")
> Signed-off-by: Julia Lawall 

Thanks for the fix. The reason this hasn't blown up so far is that
iommu_alloc_resv_region() initializes region->list, but adding more than
one item would break the list.

Reviewed-by: Jean-Philippe Brucker 

> ---
>  drivers/iommu/virtio-iommu.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index d5cac4f46ca5..4e1d11af23c8 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -453,7 +453,7 @@ static int viommu_add_resv_mem(struct viommu_endpoint 
> *vdev,
>   if (!region)
>   return -ENOMEM;
>  
> - list_add(&vdev->resv_regions, &region->list);
> + list_add(&region->list, &vdev->resv_regions);
>   return 0;
>  }
>  
> 
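
For reference, list_add() in include/linux/list.h takes the new entry
first and the list head second, which is why the swapped call above only
appeared to work while there was a single region. A minimal sketch of
the corrected pattern (struct and function names are illustrative):

#include <linux/list.h>

struct resv_region_example {
	struct list_head list;		/* links this region into a list */
};

static void add_region(struct list_head *resv_regions,
		       struct resv_region_example *region)
{
	/* new entry first, list head second */
	list_add(&region->list, resv_regions);
}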


Re: [GIT PULL] vhost: fixes

2020-05-06 Thread Stefano Garzarella
On Wed, May 06, 2020 at 03:19:55AM -0400, Michael S. Tsirkin wrote:
> On Wed, May 06, 2020 at 03:28:47AM +, Justin He wrote:
> > Hi Michael
> > 
> > > -Original Message-
> > > From: Michael S. Tsirkin 
> > > Sent: Monday, May 4, 2020 8:16 PM
> > > To: Linus Torvalds 
> > > Cc: k...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > > net...@vger.kernel.org; linux-ker...@vger.kernel.org; Justin He
> > > ; ldi...@redhat.com; m...@redhat.com; n...@live.com;
> > > stefa...@redhat.com
> > > Subject: [GIT PULL] vhost: fixes
> > >
> > > The following changes since commit
> > > 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c:
> > >
> > >   Linux 5.7-rc3 (2020-04-26 13:51:02 -0700)
> > >
> > > are available in the Git repository at:
> > >
> > >   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
> > > tags/for_linus
> > >
> > > for you to fetch changes up to
> > > 0b841030625cde5f784dd62aec72d6a766faae70:
> > >
> > >   vhost: vsock: kick send_pkt worker once device is started (2020-05-02
> > > 10:28:21 -0400)
> > >
> > > 
> > > virtio: fixes
> > >
> > > A couple of bug fixes.
> > >
> > > Signed-off-by: Michael S. Tsirkin 
> > >
> > > 
> > > Jia He (1):
> > >   vhost: vsock: kick send_pkt worker once device is started
> > 
> > Should this fix also be CC-ed to stable? Sorry I forgot to cc it to stable.
> > 
> > --
> > Cheers,
> > Justin (Jia He)
> 
> 
> Go ahead, though recently just including Fixes seems to be enough.
> 

The following patch Justin refers to does not contain the "Fixes:" tag:

0b841030625c vhost: vsock: kick send_pkt worker once device is started


I think we should merge it on stable branches, so if needed, I can backport
and send it.

Stefano



Re: performance bug in virtio net xdp

2020-05-06 Thread Jason Wang


On 2020/5/6 4:08 PM, Michael S. Tsirkin wrote:

So for mergeable bufs, we use ewma machinery to guess the correct buffer
size. If we don't guess correctly, XDP has to do aggressive copies.

Problem is, xdp paths do not update the ewma at all, except
sometimes with XDP_PASS. So whatever we happen to have
before we attach XDP, will mostly stay around.



It looks ok to me since we always use PAGE_SIZE when XDP is enabled in 
get_mergeable_buf_len()?


Thanks




The fix is probably to update ewma unconditionally.




Re: performance bug in virtio net xdp

2020-05-06 Thread Jesper Dangaard Brouer
On Wed, 6 May 2020 04:08:27 -0400
"Michael S. Tsirkin"  wrote:

> So for mergeable bufs, we use ewma machinery to guess the correct buffer
> size. If we don't guess correctly, XDP has to do aggressive copies.
> 
> Problem is, xdp paths do not update the ewma at all, except
> sometimes with XDP_PASS. So whatever we happen to have
> before we attach XDP, will mostly stay around.
> 
> The fix is probably to update ewma unconditionally.

I personally find the code hard to follow, and (I admit) that it took
me some time to understand this code path (so I might still be wrong).

In patch[1] I tried to explain (my understanding):

  In receive_mergeable() the frame size is more dynamic. There are two
  basic cases: (1) the buffer size is based on an exponentially weighted
  moving average (see DECLARE_EWMA) of the packet length. Or (2) in case
  virtnet_get_headroom() returns any headroom, the buffer size is
  PAGE_SIZE. The ctx pointer is in this case used for encoding two values:
  the buffer len "truesize" and the headroom. In case (1), if the rx buffer
  size is underestimated, the packet will have been split over more
  buffers (num_buf info in virtio_net_hdr_mrg_rxbuf placed at the top of
  the buffer area). If that happens, the XDP path does a xdp_linearize_page
  operation.


The EWMA code is not used when headroom is defined, which e.g. gets
enabled when running XDP.


[1] 
https://lore.kernel.org/netdev/158824572816.2172139.135870273697123.stgit@firesoul/
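
For readers unfamiliar with the EWMA machinery referenced above:
virtio_net declares DECLARE_EWMA(pkt_len, 0, 64), and that macro from
<linux/average.h> generates the ewma_pkt_len_init/add/read helpers, with
each new sample moving the average by 1/64 of the error. A minimal usage
sketch:

#include <linux/average.h>

DECLARE_EWMA(pkt_len, 0, 64);	/* as declared in drivers/net/virtio_net.c */

static void track_pkt_len(struct ewma_pkt_len *avg, unsigned int len)
{
	/* fold one packet length into the moving average */
	ewma_pkt_len_add(avg, len);
	pr_debug("avg pkt len: %lu\n", ewma_pkt_len_read(avg));
}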
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Jason Wang


On 2020/5/6 4:21 PM, Jesper Dangaard Brouer wrote:

On Wed,  6 May 2020 14:16:32 +0800
Jason Wang  wrote:


We tried to reserve space for vnet header before
xdp.data_hard_start. But this is useless since the packet could be
modified by XDP which may invalidate the information stored in the
header and

IMHO the above statements are wrong. XDP cannot access memory before
xdp.data_hard_start. Thus, it is safe to store a vnet header before
xdp.data_hard_start. (The sfc driver also uses this "before" area).



The problem is that if we place the vnet header before data_hard_start,
virtio-net will fail any header adjustment.


Or do you mean to copy the vnet header to before data_hard_start before
processing XDP?






there's no way for XDP to know the existence of the vnet header currently.

It is true that XDP is unaware of this area, which is the way it
should be.  Currently the area will survive after calling BPF/XDP.
After your change it will be overwritten in xdp_frame cases.



So let's just not reserve space for vnet header in this case.

I think this is a wrong approach!

We are working on supporting GRO multi-buffer for XDP.  The vnet header
contains GRO information (see the pahole output below my signature).



Another note is that since we need to reserve room for skb_shared_info, GRO
for XDP will probably lead to more frag lists.




  It is currently not
used in the XDP case, but we will be working towards using it.



Good to know that, but I think it can only work when the packet is not 
modified by XDP?




There
are a lot of unanswered questions on how this will be implemented.
Thus, I cannot lay out how we are going to leverage this info yet, but
your patch is killing this info, which IMHO is going in the wrong
direction.



I can copy the vnet header ahead of data_hard_start; does that work for you?

Thanks






Cc: Jesper Dangaard Brouer 
Signed-off-by: Jason Wang 
---
  drivers/net/virtio_net.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 11f722460513..98dd75b665a5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
page = xdp_page;
}
  
-		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
-		xdp.data = xdp.data_hard_start + xdp_headroom;
+   xdp.data_hard_start = buf + VIRTNET_RX_PAD;
+   xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
xdp.data_end = xdp.data + len;
xdp.data_meta = xdp.data;
xdp.rxq = &rq->xdp_rxq;
@@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct net_device 
*dev,
 * the descriptor on if we get an XDP_TX return code.
 */
data = page_address(xdp_page) + offset;
-   xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
+   xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
xdp.data = data + vi->hdr_len;
xdp.data_end = xdp.data + (len - vi->hdr_len);
xdp.data_meta = xdp.data;






Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Jesper Dangaard Brouer
On Wed,  6 May 2020 14:16:32 +0800
Jason Wang  wrote:

> We tried to reserve space for vnet header before
> xdp.data_hard_start. But this is useless since the packet could be
> modified by XDP which may invalidate the information stored in the
> header and

IMHO the above statements are wrong. XDP cannot access memory before
xdp.data_hard_start. Thus, it is safe to store a vnet header before
xdp.data_hard_start. (The sfc driver also uses this "before" area).

> there's no way for XDP to know the existence of the vnet header currently.

It is true that XDP is unaware of this area, which is the way it
should be.  Currently the area will survive after calling BPF/XDP.
After your change it will be overwritten in xdp_frame cases.


> So let's just not reserve space for vnet header in this case.

I think this is a wrong approach!

We are working on supporting GRO multi-buffer for XDP.  The vnet header
contains GRO information (see the pahole output below my signature). It
is currently not used in the XDP case, but we will be working towards
using it.  There are a lot of unanswered questions on how this will be
implemented.  Thus, I cannot lay out how we are going to leverage this
info yet, but your patch is killing this info, which IMHO is going in
the wrong direction.


> Cc: Jesper Dangaard Brouer 
> Signed-off-by: Jason Wang 
> ---
>  drivers/net/virtio_net.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 11f722460513..98dd75b665a5 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct net_device 
> *dev,
>   page = xdp_page;
>   }
>  
> - xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> - xdp.data = xdp.data_hard_start + xdp_headroom;
> + xdp.data_hard_start = buf + VIRTNET_RX_PAD;
> + xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
>   xdp.data_end = xdp.data + len;
>   xdp.data_meta = xdp.data;
>   xdp.rxq = &rq->xdp_rxq;
> @@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct 
> net_device *dev,
>* the descriptor on if we get an XDP_TX return code.
>*/
>   data = page_address(xdp_page) + offset;
> - xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> + xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
>   xdp.data = data + vi->hdr_len;
>   xdp.data_end = xdp.data + (len - vi->hdr_len);
>   xdp.data_meta = xdp.data;



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



$ pahole -C virtio_net_hdr_mrg_rxbuf drivers/net/virtio_net.o
struct virtio_net_hdr_mrg_rxbuf {
	struct virtio_net_hdr      hdr;                  /*     0    10 */
	__virtio16                 num_buffers;          /*    10     2 */

	/* size: 12, cachelines: 1, members: 2 */
	/* last cacheline: 12 bytes */
};


$ pahole -C virtio_net_hdr drivers/net/virtio_net.o
struct virtio_net_hdr {
	__u8                       flags;                /*     0     1 */
	__u8                       gso_type;             /*     1     1 */
	__virtio16                 hdr_len;              /*     2     2 */
	__virtio16                 gso_size;             /*     4     2 */
	__virtio16                 csum_start;           /*     6     2 */
	__virtio16                 csum_offset;          /*     8     2 */

	/* size: 10, cachelines: 1, members: 6 */
	/* last cacheline: 10 bytes */
};



Re: [PATCH net-next 2/2] virtio-net: fix the XDP truesize calculation for mergeable buffers

2020-05-06 Thread Jason Wang


On 2020/5/6 3:37 PM, Michael S. Tsirkin wrote:

On Wed, May 06, 2020 at 02:16:33PM +0800, Jason Wang wrote:

We should not exclude headroom and tailroom when XDP is set. So this
patch fixes this by initializing the truesize from PAGE_SIZE when XDP
is set.

Cc: Jesper Dangaard Brouer
Signed-off-by: Jason Wang

Seems too aggressive, we do not use up the whole page for the size.





For XDP yes, we do:

static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
                                          struct ewma_pkt_len *avg_pkt_len,
                                          unsigned int room)
{
        const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
        unsigned int len;

        if (room)
                return PAGE_SIZE - room;

...

Thanks


Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Jason Wang


On 2020/5/6 3:53 PM, Michael S. Tsirkin wrote:

On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote:

We tried to reserve space for vnet header before
xdp.data_hard_start. But this is useless since the packet could be
modified by XDP which may invalidate the information stored in the
header and there's no way for XDP to know the existence of the vnet
header currently.

What do you mean? Doesn't XDP_PASS use the header in the buffer?



We don't, see 436c9453a1ac0 ("virtio-net: keep vnet header zeroed after 
processing XDP")


If there's any other place, it should be a bug.





So let's just not reserve space for vnet header in this case.

In any case, we can find out XDP does head adjustments
if we need to.



But an XDP program can modify the packets without adjusting headers.

Thanks






Cc: Jesper Dangaard Brouer 
Signed-off-by: Jason Wang 
---
  drivers/net/virtio_net.c | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 11f722460513..98dd75b665a5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
page = xdp_page;
}
  
-		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
-		xdp.data = xdp.data_hard_start + xdp_headroom;
+   xdp.data_hard_start = buf + VIRTNET_RX_PAD;
+   xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
xdp.data_end = xdp.data + len;
xdp.data_meta = xdp.data;
xdp.rxq = &rq->xdp_rxq;
@@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct net_device 
*dev,
 * the descriptor on if we get an XDP_TX return code.
 */
data = page_address(xdp_page) + offset;
-   xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
+   xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
xdp.data = data + vi->hdr_len;
xdp.data_end = xdp.data + (len - vi->hdr_len);
xdp.data_meta = xdp.data;
--
2.20.1



performance bug in virtio net xdp

2020-05-06 Thread Michael S. Tsirkin
So for mergeable bufs, we use ewma machinery to guess the correct buffer
size. If we don't guess correctly, XDP has to do aggressive copies.

Problem is, xdp paths do not update the ewma at all, except
sometimes with XDP_PASS. So whatever we happen to have
before we attach XDP, will mostly stay around.

The fix is probably to update ewma unconditionally.

-- 
MST



Re: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 02:16:32PM +0800, Jason Wang wrote:
> We tried to reserve space for vnet header before
> xdp.data_hard_start. But this is useless since the packet could be
> modified by XDP which may invalidate the information stored in the
> header and there's no way for XDP to know the existence of the vnet
> header currently.

What do you mean? Doesn't XDP_PASS use the header in the buffer?

> So let's just not reserve space for vnet header in this case.

In any case, we can find out XDP does head adjustments
if we need to.


> Cc: Jesper Dangaard Brouer 
> Signed-off-by: Jason Wang 
> ---
>  drivers/net/virtio_net.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 11f722460513..98dd75b665a5 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct net_device 
> *dev,
>   page = xdp_page;
>   }
>  
> - xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
> - xdp.data = xdp.data_hard_start + xdp_headroom;
> + xdp.data_hard_start = buf + VIRTNET_RX_PAD;
> + xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
>   xdp.data_end = xdp.data + len;
>   xdp.data_meta = xdp.data;
>   xdp.rxq = &rq->xdp_rxq;
> @@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct 
> net_device *dev,
>* the descriptor on if we get an XDP_TX return code.
>*/
>   data = page_address(xdp_page) + offset;
> - xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
> + xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
>   xdp.data = data + vi->hdr_len;
>   xdp.data_end = xdp.data + (len - vi->hdr_len);
>   xdp.data_meta = xdp.data;
> -- 
> 2.20.1



Re: [PATCH net-next 2/2] virtio-net: fix the XDP truesize calculation for mergeable buffers

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 02:16:33PM +0800, Jason Wang wrote:
> We should not exclude headroom and tailroom when XDP is set. So this
> patch fixes this by initializing the truesize from PAGE_SIZE when XDP
> is set.
> 
> Cc: Jesper Dangaard Brouer 
> Signed-off-by: Jason Wang 

Seems too aggressive, we do not use up the whole page for the size.



> ---
>  drivers/net/virtio_net.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 98dd75b665a5..3f3aa8308918 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1184,7 +1184,7 @@ static int add_recvbuf_mergeable(struct virtnet_info 
> *vi,
>   char *buf;
>   void *ctx;
>   int err;
> - unsigned int len, hole;
> + unsigned int len, hole, truesize;
>  
>   /* Extra tailroom is needed to satisfy XDP's assumption. This
>* means rx frags coalescing won't work, but consider we've
> @@ -1194,6 +1194,7 @@ static int add_recvbuf_mergeable(struct virtnet_info 
> *vi,
>   if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
>   return -ENOMEM;
>  
> + truesize = headroom ? PAGE_SIZE : len;
>   buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
>   buf += headroom; /* advance address leaving hole at front of pkt */
>   get_page(alloc_frag->page);

Is this really just on the XDP path? Looks like a confusing way to
detect that.


> @@ -1205,11 +1206,12 @@ static int add_recvbuf_mergeable(struct virtnet_info 
> *vi,
>* the current buffer.
>*/
>   len += hole;
> + truesize += hole;
>   alloc_frag->offset += hole;
>   }
>  
>   sg_init_one(rq->sg, buf, len);
> - ctx = mergeable_len_to_ctx(len, headroom);
> + ctx = mergeable_len_to_ctx(truesize, headroom);
>   err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
>   if (err < 0)
>   put_page(virt_to_head_page(buf));
> -- 
> 2.20.1
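
For context on the ctx value being changed above: the driver packs the
headroom and the truesize into a single pointer-sized word. Roughly, as
a sketch following the driver's helpers (the shift constant matches
drivers/net/virtio_net.c at the time):

#define MRG_CTX_HEADER_SHIFT 22

static void *mergeable_len_to_ctx(unsigned int truesize,
				  unsigned int headroom)
{
	return (void *)(unsigned long)((headroom << MRG_CTX_HEADER_SHIFT) |
				       truesize);
}

static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
{
	return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1);
}

static unsigned int mergeable_ctx_to_headroom(void *mrg_ctx)
{
	return (unsigned long)mrg_ctx >> MRG_CTX_HEADER_SHIFT;
}

So the truesize this patch stores is exactly what
mergeable_ctx_to_truesize() later reports back to the rx path.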



Re: [GIT PULL] vhost: fixes

2020-05-06 Thread Greg KH
On Wed, May 06, 2020 at 03:19:55AM -0400, Michael S. Tsirkin wrote:
> On Wed, May 06, 2020 at 03:28:47AM +, Justin He wrote:
> > Hi Michael
> > 
> > > -Original Message-
> > > From: Michael S. Tsirkin 
> > > Sent: Monday, May 4, 2020 8:16 PM
> > > To: Linus Torvalds 
> > > Cc: k...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > > net...@vger.kernel.org; linux-ker...@vger.kernel.org; Justin He
> > > ; ldi...@redhat.com; m...@redhat.com; n...@live.com;
> > > stefa...@redhat.com
> > > Subject: [GIT PULL] vhost: fixes
> > >
> > > The following changes since commit
> > > 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c:
> > >
> > >   Linux 5.7-rc3 (2020-04-26 13:51:02 -0700)
> > >
> > > are available in the Git repository at:
> > >
> > >   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
> > > tags/for_linus
> > >
> > > for you to fetch changes up to
> > > 0b841030625cde5f784dd62aec72d6a766faae70:
> > >
> > >   vhost: vsock: kick send_pkt worker once device is started (2020-05-02
> > > 10:28:21 -0400)
> > >
> > > 
> > > virtio: fixes
> > >
> > > A couple of bug fixes.
> > >
> > > Signed-off-by: Michael S. Tsirkin 
> > >
> > > 
> > > Jia He (1):
> > >   vhost: vsock: kick send_pkt worker once device is started
> > 
> > Should this fix also be CC-ed to stable? Sorry I forgot to cc it to stable.
> > 
> > --
> > Cheers,
> > Justin (Jia He)
> 
> 
> Go ahead, though recently just including Fixes seems to be enough.

You are getting lucky if you only apply a "Fixes:" tag, as that is much
lower down our list of things to look at.  We are forced to do that
because of subsystems and maintainers that do not use the cc: stable
tag, as has been documented for the past 15+ years :(

So please be explicit if you know you want it merged to the stable
trees, otherwise it is not a guarantee at all if you only use "fixes:".

thanks,

greg k-h
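
For reference, the explicit form being asked for is a trailer in the
patch itself, per Documentation/process/stable-kernel-rules.rst; a
hypothetical example:

	Cc: stable@vger.kernel.org
	Fixes: 123456789abc ("subsystem: title of the commit that introduced the bug")
	Signed-off-by: Developer Name <address>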


Re: [GIT PULL] vhost: fixes

2020-05-06 Thread Michael S. Tsirkin
On Wed, May 06, 2020 at 03:28:47AM +, Justin He wrote:
> Hi Michael
> 
> > -Original Message-
> > From: Michael S. Tsirkin 
> > Sent: Monday, May 4, 2020 8:16 PM
> > To: Linus Torvalds 
> > Cc: k...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > net...@vger.kernel.org; linux-ker...@vger.kernel.org; Justin He
> > ; ldi...@redhat.com; m...@redhat.com; n...@live.com;
> > stefa...@redhat.com
> > Subject: [GIT PULL] vhost: fixes
> >
> > The following changes since commit
> > 6a8b55ed4056ea5559ebe4f6a4b247f627870d4c:
> >
> >   Linux 5.7-rc3 (2020-04-26 13:51:02 -0700)
> >
> > are available in the Git repository at:
> >
> >   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 
> > tags/for_linus
> >
> > for you to fetch changes up to
> > 0b841030625cde5f784dd62aec72d6a766faae70:
> >
> >   vhost: vsock: kick send_pkt worker once device is started (2020-05-02
> > 10:28:21 -0400)
> >
> > 
> > virtio: fixes
> >
> > A couple of bug fixes.
> >
> > Signed-off-by: Michael S. Tsirkin 
> >
> > 
> > Jia He (1):
> >   vhost: vsock: kick send_pkt worker once device is started
> 
> Should this fix also be CC-ed to stable? Sorry I forgot to cc it to stable.
> 
> --
> Cheers,
> Justin (Jia He)


Go ahead, though recently just including Fixes seems to be enough.


> 
> >
> > Stefan Hajnoczi (1):
> >   virtio-blk: handle block_device_operations callbacks after hot unplug
> >
> >  drivers/block/virtio_blk.c | 86
> > +-
> >  drivers/vhost/vsock.c  |  5 +++
> >  2 files changed, 83 insertions(+), 8 deletions(-)
> 
