Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Michael S. Tsirkin
On Wed, Aug 01, 2018 at 10:05:35AM +0100, Will Deacon wrote:
> Hi Christoph,
> 
> On Wed, Aug 01, 2018 at 01:36:39AM -0700, Christoph Hellwig wrote:
> > On Wed, Aug 01, 2018 at 09:16:38AM +0100, Will Deacon wrote:
> > > On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
> > > transport (so definitely not PCI) have historically been advertised by qemu
> > > as not being cache coherent, but because the virtio core has bypassed DMA
> > > ops then everything has happened to work. If we blindly enable the arch DMA
> > > ops,
> > 
> > No one is suggesting that as far as I can tell.
> 
> Apologies: it's me that wants the DMA ops enabled to handle legacy devices
> behind an IOMMU, but see below.
> 
> > > we'll plumb in the non-coherent ops and start getting data corruption,
> > > so we do need a way to quirk virtio as being "always coherent" if we want to
> > > use the DMA ops (which we do, because our emulation platforms have an IOMMU
> > > for all virtio devices).
> > 
> > From all that I've gathered so far: no, you do not want that.  We really
> > need to figure out how virtio "dma" interacts with the host / device.
> > 
> > If you look at the current iommu spec it does talk of physical addresses
> > with a little carveout for VIRTIO_F_IOMMU_PLATFORM.
> 
> That's true, although that doesn't exist in the legacy virtio spec, and we
> have an existing emulation platform which puts legacy virtio devices behind
> an IOMMU. Currently, Linux is unable to boot on this platform unless the
> IOMMU is configured as bypass. If we can use the coherent IOMMU DMA ops,
> then it works perfectly.
> 
> > So between that and our discussion in this thread and its previous
> > iterations I think we need to stick to the current always physical,
> > bypass system dma ops mode of virtio operation as the default.
> 
> As above -- that means we hang during boot because we get stuck trying to
> bring up a virtio-block device whose DMA is aborted by the IOMMU. The easy
> answer is "just upgrade to latest virtio and advertise the presence of the
> IOMMU". I'm pushing for that in future platforms, but it seems a shame not
> to support the current platform, especially given that other systems do have
> hacks in mainline to get virtio working.
> 
> > We just need to figure out how to deal with devices that deviate
> > from the default.  One thing is that VIRTIO_F_IOMMU_PLATFORM really
> > should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
> > dma tweaks (offsets, cache flushing), which seems well in the spirit of
> > the original design.  The other issue is VIRTIO_F_IO_BARRIER,
> > which is very vaguely defined and needs a better definition.
> > And last but not least we'll need some text explaining the challenges
> > of hardware devices - I think VIRTIO_F_PLATFORM_DMA + VIRTIO_F_IO_BARRIER
> > is what would basically cover them, but a good description including
> > an explanation of why these matter is needed.
> 
> I agree that this makes sense for future revisions of virtio (or perhaps
> it can just be a clarification to virtio 1.0), but we're still left in the
> dark with legacy devices and it would be nice to have them work on the
> systems which currently exist, even if it's a legacy-only hack in the arch
> code.
> 
> Will


I'm sympathetic to this use case myself, and I see more uses for it
than just legacy support. But more work is required IMHO.
Will post tomorrow though - it's late here ...

-- 
MST


Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Michael S. Tsirkin
On Wed, Aug 01, 2018 at 01:36:39AM -0700, Christoph Hellwig wrote:
> On Wed, Aug 01, 2018 at 09:16:38AM +0100, Will Deacon wrote:
> > On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
> > transport (so definitely not PCI) have historically been advertised by qemu
> > as not being cache coherent, but because the virtio core has bypassed DMA
> > ops then everything has happened to work. If we blindly enable the arch DMA
> > ops,
> 
> No one is suggesting that as far as I can tell.
> 
> > we'll plumb in the non-coherent ops and start getting data corruption,
> > so we do need a way to quirk virtio as being "always coherent" if we want to
> > use the DMA ops (which we do, because our emulation platforms have an IOMMU
> > for all virtio devices).
> 
> From all that I've gathered so far: no, you do not want that.  We really
> need to figure out how virtio "dma" interacts with the host / device.
> 
> If you look at the current iommu spec it does talk of physical addresses
> with a little carveout for VIRTIO_F_IOMMU_PLATFORM.
> 
> So between that and our discussion in this thread and its previous
> iterations I think we need to stick to the current always physical,
> bypass system dma ops mode of virtio operation as the default.
> 
> We just need to figure out how to deal with devices that deviate
> from the default.  One thing is that VIRTIO_F_IOMMU_PLATFORM really
> should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
> dma tweaks (offsets, cache flushing), which seems well in the spirit of
> the original design.

Well I wouldn't say that. VIRTIO_F_IOMMU_PLATFORM is for guest-programmable
protection, which is designed for things like userspace drivers but still
very much with a CPU doing the accesses. I think it is VIRTIO_F_IO_BARRIER
that needs to be extended to VIRTIO_F_PLATFORM_DMA.
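
To make the distinction concrete, a guest-side decision could end up
looking roughly like the sketch below. This is illustrative only:
VIRTIO_F_PLATFORM_DMA is just a name proposed in this thread, not an
existing feature bit, and the function is a simplified stand-in for the
real vring_use_dma_api() logic.

static bool vring_use_dma_api(struct virtio_device *vdev)
{
        /* Guest-programmable protection (vIOMMU, userspace drivers):
         * translation/protection apply, but a CPU still performs the
         * accesses, so no cache maintenance or bounce buffering. */
        if (virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM))
                return true;

        /* Proposed bit: real platform DMA quirks (address offsets,
         * bounce buffers, cache flushing) handled via the DMA API. */
        if (virtio_has_feature(vdev, VIRTIO_F_PLATFORM_DMA))
                return true;

        /* Default/legacy behaviour: bypass the DMA API and hand the
         * device guest physical addresses directly. */
        return false;
}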

>  The other issue is VIRTIO_F_IO_BARRIER,
> which is very vaguely defined and needs a better definition.
> And last but not least we'll need some text explaining the challenges
> of hardware devices - I think VIRTIO_F_PLATFORM_DMA + VIRTIO_F_IO_BARRIER
> is what would basically cover them, but a good description including
> an explanation of why these matter is needed.

I think VIRTIO_F_IOMMU_PLATFORM + VIRTIO_F_PLATFORM_DMA, but yeah.

-- 
MST


Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Michael S. Tsirkin
On Tue, Jul 31, 2018 at 03:36:22PM -0500, Benjamin Herrenschmidt wrote:
> On Tue, 2018-07-31 at 10:30 -0700, Christoph Hellwig wrote:
> > > However the question people raise is that DMA API is already full of
> > > arch-specific tricks the likes of which are outlined in your post linked
> > > above. How is this one much worse?
> > 
> > None of these warts is visible to the driver, they are all handled in
> > the architecture (possibly on a per-bus basis).
> > 
> > So for virtio we really need to decide if it has one set of behavior
> > as specified in the virtio spec, or if it behaves exactly as if it
> > was on a PCI bus, or in fact probably both as you lined up.  But no
> > magic arch specific behavior inbetween.
> 
> The only arch specific behaviour is needed in the case where it doesn't
> behave like PCI. In this case, the PCI DMA ops are not suitable, but in
> our secure VMs, we still need to make it use swiotlb in order to bounce
> through non-secure pages.
> 
> It would be nice if "real PCI" was the default

I think you are mixing up "real PCI", which isn't coded up yet, and IOMMU
bypass, which is. IOMMU bypass will maybe become unnecessary with time,
since it seems that one can just program an IOMMU into a bypass mode
instead.

It's hard to blame you, since right now if you disable IOMMU bypass
you get real PCI mode. But they are distinct, and to allow people
to enable the IOMMU by default we will need to teach someone
(virtio or the DMA API) about this mode that does follow the
translation and protection rules in the IOMMU but runs
on a CPU and so does not need cache flushes and whatnot.

OTOH, real PCI mode, as opposed to the default hypervisor mode, does not
perform as well when what you actually have is a hypervisor.

So we'll likely have a mix of these two modes for a while.
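
(As a side note, "programming an IOMMU in a bypass mode" can already be
requested from the guest side on some architectures; for example, on
arm64 the generic IOMMU layer honours a boot parameter that asks for
identity-mapped default domains:

    iommu.passthrough=1

Availability depends on the kernel version and IOMMU driver, so treat
this as an illustration rather than a recipe.)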

> but it's not, VMs are
> created in "legacy" mode all the times and we don't know at VM creation
> time whether it will become a secure VM or not. The way our secure VMs
> work is that they start as a normal VM, load a secure "payload" and
> call the Ultravisor to "become" secure.
> 
> So we're in a bit of a bind here. We need that one-liner optional arch
> hook to make virtio use swiotlb in that "IOMMU bypass" case.
> 
> Ben.

And just to make sure I understand, on your platform DMA APIs do include
some of the cache flushing tricks and this is why you don't want to
declare iommu support in the hypervisor?
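
The "one-liner optional arch hook" Ben refers to would be something of
roughly the following shape. This is a hypothetical sketch only: neither
the hook name nor its wiring into the virtio core exists upstream, and
is_secure_vm() is a stand-in for whatever the platform uses to detect a
secure guest.

/* Hypothetical, not an existing kernel interface: let the architecture
 * force virtio to go through the DMA API (and hence swiotlb bounce
 * buffers) even when the device bypasses the IOMMU. */
bool arch_virtio_force_dma_api(struct virtio_device *vdev)
{
        /* On a powerpc secure VM, all DMA must bounce through
         * non-secure (swiotlb) pages. */
        return is_secure_vm();
}

The virtio core would then OR this into its vring_use_dma_api() decision,
which is the "one-liner" part of the request.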

-- 
MST


IEEE Record # 41985: 2018 3rd International Conference on Contemporary Computing and Informatics (IC3I).

2018-08-01 Thread Dr. S K Niranjan Aradhya
<< Apologies for cross-postings >> <<< Please circulate among your friends,
peers and researchers >>>

IEEE Conference Record No.: #41985;

2018 3rd International Conference on Contemporary Computing and Informatics
(IC3I).

Conference Date : 10 - 12 October 2018
Submission Deadline: 30 July 2018

Submission Link: http://cmsweb.com.sg/ic3i18/index.php/ic3i18/ic3i18/login

IEEE ISBN : 978-1-5386-6894-8
IEEE Part No. : CFP18AWQ-ART

Selected, accepted and extended papers will be published in the
Scopus-indexed International Journal of Forensic Software Engineering,
published by InderScience.

All accepted and presented papers will be submitted to the IEEE for
possible publication in IEEE Xplore Digital Library. Previous edition
indexed in: SCOPUS, ISI Web of Science, Engineering Index, Google, etc.

If you would like to join the TPC or propose a special session or
symposium, please write to: secretar...@ic3i.org

General Chair(s)
IC3I  2018 Conference

--
Disclaimer: The subject line is clearly stated and your email address is
not being used in any misleading form. We found your mail address through
our own web searches and not through any illegal means. If you wish to
remove your information from our mailing list or no longer wish to
receive future announcements, please email with REMOVE in the subject.
Your request to opt out will be effective within a reasonable amount of
time.
 ic3i-cfp.pdf


Re: [PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker

2018-08-01 Thread Michal Hocko
On Wed 01-08-18 19:12:25, Wei Wang wrote:
> On 07/30/2018 05:00 PM, Michal Hocko wrote:
> > On Fri 27-07-18 17:24:55, Wei Wang wrote:
> > > The OOM notifier is getting deprecated for the reasons mentioned
> > > here by Michal Hocko: https://lkml.org/lkml/2018/7/12/314
> > > 
> > > This patch replaces the virtio-balloon oom notifier with a shrinker
> > > to release balloon pages on memory pressure.
> > It would be great to document the replacement. This is not a small
> > change...
> 
> OK. I plan to document the following to the commit log:
> 
>   The OOM notifier is getting deprecated for the following reasons:
> - As a callout from the oom context, it is too subtle and can easily
>   generate bugs and corner cases which are hard to track;
> - It is called too late (after the reclaiming has been performed).
>   Drivers with a large amount of reclaimable memory are expected to
>   release it at an early stage of memory pressure;
> - The notifier callback isn't aware of the oom constraints;
> Link: https://lkml.org/lkml/2018/7/12/314
> 
> This patch replaces the virtio-balloon oom notifier with a shrinker
> to release balloon pages on memory pressure. Users can set the amount of
> memory pages to release each time a shrinker_scan is called via the
> module parameter balloon_pages_to_shrink, and the default amount is 256
> pages. Historically, the feature VIRTIO_BALLOON_F_DEFLATE_ON_OOM has
> been used to release balloon pages on OOM. We continue to use this
> feature bit for the shrinker, so the shrinker is only registered when
> this feature bit has been negotiated with the host.

Do you have any numbers for how this works in practice? Let's say
you have a medium page cache workload which triggers kswapd to do
light reclaim. Hardcoded shrinking sounds quite dubious to me, but I have
no idea how people expect this to work. Shouldn't this be more
adaptive? How precious are those pages anyway?
-- 
Michal Hocko
SUSE Labs


Re: [PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker

2018-08-01 Thread Wei Wang

On 07/30/2018 05:00 PM, Michal Hocko wrote:
> On Fri 27-07-18 17:24:55, Wei Wang wrote:
> > The OOM notifier is getting deprecated for the reasons mentioned
> > here by Michal Hocko: https://lkml.org/lkml/2018/7/12/314
> >
> > This patch replaces the virtio-balloon oom notifier with a shrinker
> > to release balloon pages on memory pressure.
>
> It would be great to document the replacement. This is not a small
> change...

OK. I plan to document the following in the commit log:

  The OOM notifier is getting deprecated for the following reasons:
  - As a callout from the oom context, it is too subtle and can easily
    generate bugs and corner cases which are hard to track;
  - It is called too late (after the reclaiming has been performed).
    Drivers with a large amount of reclaimable memory are expected to
    release it at an early stage of memory pressure;
  - The notifier callback isn't aware of the oom constraints;
  Link: https://lkml.org/lkml/2018/7/12/314

  This patch replaces the virtio-balloon oom notifier with a shrinker
  to release balloon pages on memory pressure. Users can set the amount
  of memory pages to release each time a shrinker_scan is called via the
  module parameter balloon_pages_to_shrink, and the default amount is
  256 pages. Historically, the feature VIRTIO_BALLOON_F_DEFLATE_ON_OOM
  has been used to release balloon pages on OOM. We continue to use this
  feature bit for the shrinker, so the shrinker is only registered when
  this feature bit has been negotiated with the host.

  In addition, the bug in the replaced virtballoon_oom_notify that only
  VIRTIO_BALLOON_ARRAY_PFNS_MAX (i.e. 256) balloon pages can be freed
  even though the user has specified more than that number is fixed in
  the shrinker_scan function.
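
For readers not familiar with the shrinker interface, the replacement
boils down to registering a pair of callbacks of roughly the following
shape. This is a minimal sketch, not the actual patch; the virtio_balloon
shrinker field and the shrink_balloon_pages() helper are assumptions used
only for illustration.

static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
                                                   struct shrink_control *sc)
{
        struct virtio_balloon *vb = container_of(shrinker,
                                                 struct virtio_balloon,
                                                 shrinker);

        /* Report how many pages the balloon could give back. */
        return vb->num_pages;
}

static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
                                                  struct shrink_control *sc)
{
        struct virtio_balloon *vb = container_of(shrinker,
                                                 struct virtio_balloon,
                                                 shrinker);

        /* Deflate the balloon, bounded by the module parameter. */
        return shrink_balloon_pages(vb, balloon_pages_to_shrink);
}

static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
{
        vb->shrinker.count_objects = virtio_balloon_shrinker_count;
        vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
        vb->shrinker.seeks = DEFAULT_SEEKS;

        return register_shrinker(&vb->shrinker);
}

As described in the commit log above, registration would only happen when
the VIRTIO_BALLOON_F_DEFLATE_ON_OOM feature bit has been negotiated.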


Best,
Wei


Re: [PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()

2018-08-01 Thread Tonghao Zhang
On Wed, Aug 1, 2018 at 2:01 PM Jason Wang  wrote:
>
>
>
> On 2018年08月01日 11:00, xiangxia.m@gmail.com wrote:
> > From: Tonghao Zhang 
> >
> > Factor out the generic busy polling logic; it will be
> > used in the tx path in the next patch. And with the patch,
> > qemu can set a different busyloop_timeout for the rx queue.
> >
> > In handle_tx, the busypoll will vhost_net_disable/enable_vq
> > because we have polled the sock. This can improve performance.
> > [This is suggested by Toshiaki Makita ]
> >
> > And when the sock receives an skb, we should queue the poll if necessary.
> >
> > Signed-off-by: Tonghao Zhang 
> > ---
> >   drivers/vhost/net.c | 131 
> > 
> >   1 file changed, 91 insertions(+), 40 deletions(-)
> >
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 32c1b52..5b45463 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -440,6 +440,95 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
> >   nvq->done_idx = 0;
> >   }
> >
> > +static int sk_has_rx_data(struct sock *sk)
> > +{
> > + struct socket *sock = sk->sk_socket;
> > +
> > + if (sock->ops->peek_len)
> > + return sock->ops->peek_len(sock);
> > +
> > + return skb_queue_empty(&sk->sk_receive_queue);
> > +}
> > +
> > +static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
> > +   struct vhost_virtqueue *vq)
> > +{
> > + if (!vhost_vq_avail_empty(&net->dev, vq)) {
> > + vhost_poll_queue(&vq->poll);
> > + } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> > + vhost_disable_notify(&net->dev, vq);
> > + vhost_poll_queue(&vq->poll);
> > + }
> > +}
> > +
> > +static void vhost_net_busy_poll_check(struct vhost_net *net,
> > +   struct vhost_virtqueue *rvq,
> > +   struct vhost_virtqueue *tvq,
> > +   bool rx)
> > +{
> > + struct socket *sock = rvq->private_data;
> > +
> > + if (rx)
> > + vhost_net_busy_poll_try_queue(net, tvq);
> > + else if (sock && sk_has_rx_data(sock->sk))
> > + vhost_net_busy_poll_try_queue(net, rvq);
> > + else {
> > + /* On tx here, sock has no rx data, so we
> > +  * will wait for sock wakeup for rx, and
> > +  * vhost_enable_notify() is not needed. */
>
> A possible case is we do have rx data but guest does not refill the rx
> queue. In this case we may lose notifications from guest.
Yes, we should consider this case. Thanks.
> > + }
> > +}
> > +
> > +static void vhost_net_busy_poll(struct vhost_net *net,
> > + struct vhost_virtqueue *rvq,
> > + struct vhost_virtqueue *tvq,
> > + bool *busyloop_intr,
> > + bool rx)
> > +{
> > + unsigned long busyloop_timeout;
> > + unsigned long endtime;
> > + struct socket *sock;
> > + struct vhost_virtqueue *vq = rx ? tvq : rvq;
> > +
> > + mutex_lock_nested(&vq->mutex, rx ? VHOST_NET_VQ_TX : VHOST_NET_VQ_RX);
> > + vhost_disable_notify(&net->dev, vq);
> > + sock = rvq->private_data;
> > +
> > + busyloop_timeout = rx ? rvq->busyloop_timeout:
> > + tvq->busyloop_timeout;
> > +
> > +
> > + /* Busypoll the sock, so don't need rx wakeups during it. */
> > + if (!rx)
> > + vhost_net_disable_vq(net, vq);
>
> Actually this piece of code is not a factoring out. I would suggest
> adding this in another patch, or on top of this series.
I will add this in another patch.
> > +
> > + preempt_disable();
> > + endtime = busy_clock() + busyloop_timeout;
> > +
> > + while (vhost_can_busy_poll(endtime)) {
> > + if (vhost_has_work(&net->dev)) {
> > + *busyloop_intr = true;
> > + break;
> > + }
> > +
> > + if ((sock && sk_has_rx_data(sock->sk) &&
> > +  !vhost_vq_avail_empty(&net->dev, rvq)) ||
> > + !vhost_vq_avail_empty(&net->dev, tvq))
> > + break;
>
> Some checks are duplicated in vhost_net_busy_poll_check(). Need to
> consider unifying them.
OK
> > +
> > + cpu_relax();
> > + }
> > +
> > + preempt_enable();
> > +
> > + if (!rx)
> > + vhost_net_enable_vq(net, vq);
>
> No need to enable rx virtqueue, if we are sure handle_rx() will be
> called soon.
If we disable the rx virtqueue in handle_tx and the guest then stops
sending packets (so handle_tx is no longer called), nothing re-enables it
and we can no longer be woken up for sock rx, so the network is broken.
> > +
> > + vhost_net_busy_poll_check(net, rvq, tvq, rx);
>
> It looks to me that just open coding all the checks here is better and
> easier to review.
will be changed.
> Thanks
>
> > +
> > + mutex_unlock(&vq->mutex);
> > +}
> > +
> >   static int vhost_net_tx_get_vq_desc(struct 

Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Will Deacon
Hi Christoph,

On Wed, Aug 01, 2018 at 01:36:39AM -0700, Christoph Hellwig wrote:
> On Wed, Aug 01, 2018 at 09:16:38AM +0100, Will Deacon wrote:
> > On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
> > transport (so definitely not PCI) have historically been advertised by qemu
> > as not being cache coherent, but because the virtio core has bypassed DMA
> > ops then everything has happened to work. If we blindly enable the arch DMA
> > ops,
> 
> No one is suggesting that as far as I can tell.

Apologies: it's me that wants the DMA ops enabled to handle legacy devices
behind an IOMMU, but see below.

> > we'll plumb in the non-coherent ops and start getting data corruption,
> > so we do need a way to quirk virtio as being "always coherent" if we want to
> > use the DMA ops (which we do, because our emulation platforms have an IOMMU
> > for all virtio devices).
> 
> From all that I've gathered so far: no, you do not want that.  We really
> need to figure out how virtio "dma" interacts with the host / device.
> 
> If you look at the current iommu spec it does talk of physical addresses
> with a little carveout for VIRTIO_F_IOMMU_PLATFORM.

That's true, although that doesn't exist in the legacy virtio spec, and we
have an existing emulation platform which puts legacy virtio devices behind
an IOMMU. Currently, Linux is unable to boot on this platform unless the
IOMMU is configured as bypass. If we can use the coherent IOMMU DMA ops,
then it works perfectly.

> So between that and our discussion in this thread and its previous
> iterations I think we need to stick to the current always physical,
> bypass system dma ops mode of virtio operation as the default.

As above -- that means we hang during boot because we get stuck trying to
bring up a virtio-block device whose DMA is aborted by the IOMMU. The easy
answer is "just upgrade to latest virtio and advertise the presence of the
IOMMU". I'm pushing for that in future platforms, but it seems a shame not
to support the current platform, especially given that other systems do have
hacks in mainline to get virtio working.
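
For reference, "advertising the presence of the IOMMU" amounts to exposing
a modern (non-legacy) virtio device with VIRTIO_F_IOMMU_PLATFORM set, e.g.
for a virtio-pci device behind an SMMU, something along these lines in
qemu. This is illustrative only; the exact options depend on the qemu
version and machine type:

    qemu-system-aarch64 -M virt,iommu=smmuv3 ... \
        -device virtio-blk-pci,drive=hd0,disable-legacy=on,iommu_platform=on \
        -drive if=none,id=hd0,file=disk.img,format=raw

Legacy virtio devices have no feature bit for this, which is exactly the
gap being described here.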

> We just need to figure out how to deal with devices that deviate
> from the default.  One thing is that VIRTIO_F_IOMMU_PLATFORM really
> should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
> dma tweaks (offsets, cache flushing), which seems well in the spirit of
> the original design.  The other issue is VIRTIO_F_IO_BARRIER,
> which is very vaguely defined and needs a better definition.
> And last but not least we'll need some text explaining the challenges
> of hardware devices - I think VIRTIO_F_PLATFORM_DMA + VIRTIO_F_IO_BARRIER
> is what would basically cover them, but a good description including
> an explanation of why these matter is needed.

I agree that this makes sense for future revisions of virtio (or perhaps
it can just be a clarification to virtio 1.0), but we're still left in the
dark with legacy devices and it would be nice to have them work on the
systems which currently exist, even if it's a legacy-only hack in the arch
code.

Will


Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Christoph Hellwig
On Wed, Aug 01, 2018 at 09:16:38AM +0100, Will Deacon wrote:
> On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
> transport (so definitely not PCI) have historically been advertised by qemu
> as not being cache coherent, but because the virtio core has bypassed DMA
> ops then everything has happened to work. If we blindly enable the arch DMA
> ops,

No one is suggesting that as far as I can tell.

> we'll plumb in the non-coherent ops and start getting data corruption,
> so we do need a way to quirk virtio as being "always coherent" if we want to
> use the DMA ops (which we do, because our emulation platforms have an IOMMU
> for all virtio devices).

From all that I've gathered so far: no, you do not want that.  We really
need to figure out how virtio "dma" interacts with the host / device.

If you look at the current iommu spec it does talk of physical addresses
with a little carveout for VIRTIO_F_IOMMU_PLATFORM.

So between that and our discussion in this thread and its previous
iterations I think we need to stick to the current always physical,
bypass system dma ops mode of virtio operation as the default.

We just need to figure out how to deal with devices that deviate
from the default.  One thing is that VIRTIO_F_IOMMU_PLATFORM really
should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
dma tweaks (offsets, cache flushing), which seems well in the spirit of
the original design.  The other issue is VIRTIO_F_IO_BARRIER,
which is very vaguely defined and needs a better definition.
And last but not least we'll need some text explaining the challenges
of hardware devices - I think VIRTIO_F_PLATFORM_DMA + VIRTIO_F_IO_BARRIER
is what would basically cover them, but a good description including
an explanation of why these matter is needed.


Re: KASAN: use-after-free Read in vhost_transport_send_pkt

2018-08-01 Thread Dmitry Vyukov via Virtualization
On Tue, Jul 31, 2018 at 5:43 PM, Stefan Hajnoczi  wrote:
> On Mon, Jul 30, 2018 at 11:15:03AM -0700, syzbot wrote:
>> Hello,
>>
>> syzbot found the following crash on:
>>
>> HEAD commit:acb1872577b3 Linux 4.18-rc7
>> git tree:   upstream
>> console output: https://syzkaller.appspot.com/x/log.txt?x=14eb932c40
>> kernel config:  https://syzkaller.appspot.com/x/.config?x=2dc0cd7c2eefb46f
>> dashboard link: https://syzkaller.appspot.com/bug?extid=bd391451452fb0b93039
>> compiler:   gcc (GCC) 8.0.1 20180413 (experimental)
>>
>> Unfortunately, I don't have any reproducer for this crash yet.
>>
>> IMPORTANT: if you fix the bug, please add the following tag to the commit:
>> Reported-by: syzbot+bd391451452fb0b93...@syzkaller.appspotmail.com
>>
>> netlink: 'syz-executor5': attribute type 2 has an invalid length.
>> binder: 28577:28588 transaction failed 29189/-22, size 0-0 line 2852
>> ==
>> BUG: KASAN: use-after-free in debug_spin_lock_before
>> kernel/locking/spinlock_debug.c:83 [inline]
>> BUG: KASAN: use-after-free in do_raw_spin_lock+0x1c0/0x200
>> kernel/locking/spinlock_debug.c:112
>> Read of size 4 at addr 880194d0ec6c by task syz-executor4/28583
>>
>> CPU: 1 PID: 28583 Comm: syz-executor4 Not tainted 4.18.0-rc7+ #169
>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>> Google 01/01/2011
>> Call Trace:
>>  __dump_stack lib/dump_stack.c:77 [inline]
>>  dump_stack+0x1c9/0x2b4 lib/dump_stack.c:113
>>  print_address_description+0x6c/0x20b mm/kasan/report.c:256
>>  kasan_report_error mm/kasan/report.c:354 [inline]
>>  kasan_report.cold.7+0x242/0x2fe mm/kasan/report.c:412
>>  __asan_report_load4_noabort+0x14/0x20 mm/kasan/report.c:432
>>  debug_spin_lock_before kernel/locking/spinlock_debug.c:83 [inline]
>>  do_raw_spin_lock+0x1c0/0x200 kernel/locking/spinlock_debug.c:112
>>  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:136 [inline]
>>  _raw_spin_lock_bh+0x39/0x40 kernel/locking/spinlock.c:168
>>  spin_lock_bh include/linux/spinlock.h:315 [inline]
>>  vhost_transport_send_pkt+0x12e/0x380 drivers/vhost/vsock.c:223
>
> Thanks for the vsock fuzzing.  This is a useful bug report.

Thanks. But note that the syzkaller descriptions for vsock were written by
people who have no idea what vsock is whatsoever:
https://github.com/google/syzkaller/blob/master/sys/linux/socket_vnet.txt
So most likely fuzzing is inefficient and does not test the most
interesting parts of vsock.


> It looks like vhost_vsock_get() needs to involve a reference count so
> that vhost_vsock instances cannot be freed while something is still
> using them.
>
> The reproducer probably involves racing close() with connect().
>
> I am looking into a fix.
>
> Stefan
>
>>  virtio_transport_send_pkt_info+0x31d/0x460
>> net/vmw_vsock/virtio_transport_common.c:190
>>  virtio_transport_connect+0x17c/0x220
>> net/vmw_vsock/virtio_transport_common.c:588
>>  vsock_stream_connect+0x4fb/0xfc0 net/vmw_vsock/af_vsock.c:1197
>>  __sys_connect+0x37d/0x4c0 net/socket.c:1673
>>  __do_sys_connect net/socket.c:1684 [inline]
>>  __se_sys_connect net/socket.c:1681 [inline]
>>  __x64_sys_connect+0x73/0xb0 net/socket.c:1681
>>  do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>  entry_SYSCALL_64_after_hwframe+0x49/0xbe
>> RIP: 0033:0x456a09
>> Code: fd b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7
>> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff
>> 0f 83 cb b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00
>> RSP: 002b:7fa4aee5bc78 EFLAGS: 0246 ORIG_RAX: 002a
>> RAX: ffda RBX: 7fa4aee5c6d4 RCX: 00456a09
>> RDX: 0010 RSI: 2200 RDI: 0016
>> RBP: 009300a0 R08:  R09: 
>> R10:  R11: 0246 R12: 
>> R13: 004ca838 R14: 004c25fb R15: 
>>
>> Allocated by task 28583:
>>  save_stack+0x43/0xd0 mm/kasan/kasan.c:448
>>  set_track mm/kasan/kasan.c:460 [inline]
>>  kasan_kmalloc+0xc4/0xe0 mm/kasan/kasan.c:553
>>  __do_kmalloc_node mm/slab.c:3682 [inline]
>>  __kmalloc_node+0x47/0x70 mm/slab.c:3689
>>  kmalloc_node include/linux/slab.h:555 [inline]
>>  kvmalloc_node+0xb9/0xf0 mm/util.c:423
>>  kvmalloc include/linux/mm.h:573 [inline]
>>  vhost_vsock_dev_open+0xa2/0x5a0 drivers/vhost/vsock.c:511
>>  misc_open+0x3ca/0x560 drivers/char/misc.c:141
>>  chrdev_open+0x25a/0x770 fs/char_dev.c:417
>>  do_dentry_open+0x818/0xe40 fs/open.c:794
>>  vfs_open+0x139/0x230 fs/open.c:908
>>  do_last fs/namei.c:3399 [inline]
>>  path_openat+0x174a/0x4e10 fs/namei.c:3540
>>  do_filp_open+0x255/0x380 fs/namei.c:3574
>>  do_sys_open+0x584/0x760 fs/open.c:1101
>>  __do_sys_openat fs/open.c:1128 [inline]
>>  __se_sys_openat fs/open.c:1122 [inline]
>>  __x64_sys_openat+0x9d/0x100 fs/open.c:1122
>>  do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
>>  
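
The reference counting Stefan suggests above would look roughly like the
sketch below. This is only an illustration of the pattern, not the actual
fix; the __vhost_vsock_lookup() helper and vhost_vsock_free() release
function are assumptions, and the real drivers/vhost/vsock.c code differs.

/* Illustrative only: give struct vhost_vsock a refcount so a racing
 * release cannot free it while vhost_transport_send_pkt() still holds
 * a pointer obtained from vhost_vsock_get(). */
struct vhost_vsock {
        struct vhost_dev dev;
        struct kref kref;                       /* assumed new field */
        /* ... */
};

static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
{
        struct vhost_vsock *vsock;

        spin_lock_bh(&vhost_vsock_lock);
        vsock = __vhost_vsock_lookup(guest_cid);        /* assumed helper */
        if (vsock)
                kref_get(&vsock->kref);                 /* pin while in use */
        spin_unlock_bh(&vhost_vsock_lock);

        return vsock;
}

static void vhost_vsock_put(struct vhost_vsock *vsock)
{
        kref_put(&vsock->kref, vhost_vsock_free);       /* assumed release */
}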

Re: [RFC 0/4] Virtio uses DMA API for all devices

2018-08-01 Thread Will Deacon
On Tue, Jul 31, 2018 at 03:36:22PM -0500, Benjamin Herrenschmidt wrote:
> On Tue, 2018-07-31 at 10:30 -0700, Christoph Hellwig wrote:
> > > However the question people raise is that DMA API is already full of
> > > arch-specific tricks the likes of which are outlined in your post linked
> > > above. How is this one much worse?
> > 
> > None of these warts is visible to the driver, they are all handled in
> > the architecture (possibly on a per-bus basis).
> > 
> > So for virtio we really need to decide if it has one set of behavior
> > as specified in the virtio spec, or if it behaves exactly as if it
> > was on a PCI bus, or in fact probably both as you lined up.  But no
> > magic arch specific behavior inbetween.
> 
> The only arch specific behaviour is needed in the case where it doesn't
> behave like PCI. In this case, the PCI DMA ops are not suitable, but in
> our secure VMs, we still need to make it use swiotlb in order to bounce
> through non-secure pages.

On arm/arm64, the problem we have is that legacy virtio devices on the MMIO
transport (so definitely not PCI) have historically been advertised by qemu
as not being cache coherent, but because the virtio core has bypassed DMA
ops then everything has happened to work. If we blindly enable the arch DMA
ops, we'll plumb in the non-coherent ops and start getting data corruption,
so we do need a way to quirk virtio as being "always coherent" if we want to
use the DMA ops (which we do, because our emulation platforms have an IOMMU
for all virtio devices).

Will


Re: [PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()

2018-08-01 Thread Jason Wang



On 2018年08月01日 11:00, xiangxia.m@gmail.com wrote:

From: Tonghao Zhang 

Factor out the generic busy polling logic; it will be
used in the tx path in the next patch. And with the patch,
qemu can set a different busyloop_timeout for the rx queue.

In handle_tx, the busypoll will vhost_net_disable/enable_vq
because we have polled the sock. This can improve performance.
[This is suggested by Toshiaki Makita ]

And when the sock receives an skb, we should queue the poll if necessary.

Signed-off-by: Tonghao Zhang 
---
  drivers/vhost/net.c | 131 
  1 file changed, 91 insertions(+), 40 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 32c1b52..5b45463 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -440,6 +440,95 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
nvq->done_idx = 0;
  }
  
+static int sk_has_rx_data(struct sock *sk)

+{
+   struct socket *sock = sk->sk_socket;
+
+   if (sock->ops->peek_len)
+   return sock->ops->peek_len(sock);
+
+   return skb_queue_empty(&sk->sk_receive_queue);
+}
+
+static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
+ struct vhost_virtqueue *vq)
+{
+   if (!vhost_vq_avail_empty(&net->dev, vq)) {
+   vhost_poll_queue(&vq->poll);
+   } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+   vhost_disable_notify(&net->dev, vq);
+   vhost_poll_queue(&vq->poll);
+   }
+}
+
+static void vhost_net_busy_poll_check(struct vhost_net *net,
+ struct vhost_virtqueue *rvq,
+ struct vhost_virtqueue *tvq,
+ bool rx)
+{
+   struct socket *sock = rvq->private_data;
+
+   if (rx)
+   vhost_net_busy_poll_try_queue(net, tvq);
+   else if (sock && sk_has_rx_data(sock->sk))
+   vhost_net_busy_poll_try_queue(net, rvq);
+   else {
+   /* On tx here, sock has no rx data, so we
+* will wait for sock wakeup for rx, and
+* vhost_enable_notify() is not needed. */


A possible case is we do have rx data but guest does not refill the rx 
queue. In this case we may lose notifications from guest.



+   }
+}
+
+static void vhost_net_busy_poll(struct vhost_net *net,
+   struct vhost_virtqueue *rvq,
+   struct vhost_virtqueue *tvq,
+   bool *busyloop_intr,
+   bool rx)
+{
+   unsigned long busyloop_timeout;
+   unsigned long endtime;
+   struct socket *sock;
+   struct vhost_virtqueue *vq = rx ? tvq : rvq;
+
+   mutex_lock_nested(&vq->mutex, rx ? VHOST_NET_VQ_TX : VHOST_NET_VQ_RX);
+   vhost_disable_notify(&net->dev, vq);
+   sock = rvq->private_data;
+
+   busyloop_timeout = rx ? rvq->busyloop_timeout:
+   tvq->busyloop_timeout;
+
+
+   /* Busypoll the sock, so don't need rx wakeups during it. */
+   if (!rx)
+   vhost_net_disable_vq(net, vq);


Actually this piece of code is not a factoring out. I would suggest
adding this in another patch, or on top of this series.



+
+   preempt_disable();
+   endtime = busy_clock() + busyloop_timeout;
+
+   while (vhost_can_busy_poll(endtime)) {
+   if (vhost_has_work(&net->dev)) {
+   *busyloop_intr = true;
+   break;
+   }
+
+   if ((sock && sk_has_rx_data(sock->sk) &&
+    !vhost_vq_avail_empty(&net->dev, rvq)) ||
+   !vhost_vq_avail_empty(&net->dev, tvq))
+   break;


Some checks are duplicated in vhost_net_busy_poll_check(). Need to
consider unifying them.



+
+   cpu_relax();
+   }
+
+   preempt_enable();
+
+   if (!rx)
+   vhost_net_enable_vq(net, vq);


No need to enable rx virtqueue, if we are sure handle_rx() will be 
called soon.



+
+   vhost_net_busy_poll_check(net, rvq, tvq, rx);


It looks to me that just open coding all the checks here is better and
easier to review.


Thanks


+
+   mutex_unlock(&vq->mutex);
+}
+
  static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
struct vhost_net_virtqueue *nvq,
unsigned int *out_num, unsigned int *in_num,
@@ -753,16 +842,6 @@ static int peek_head_len(struct vhost_net_virtqueue *rvq, struct sock *sk)
return len;
  }
  
-static int sk_has_rx_data(struct sock *sk)

-{
-   struct socket *sock = sk->sk_socket;
-
-   if (sock->ops->peek_len)
-   return sock->ops->peek_len(sock);
-
-   return skb_queue_empty(&sk->sk_receive_queue);
-}
-
  static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
  bool