Re: [PATCH net-next v5 0/4] net: vhost: improve performance when enable busyloop

2018-07-10 Thread Jason Wang



On 2018-07-11 11:49, Tonghao Zhang wrote:

On Wed, Jul 11, 2018 at 10:56 AM Jason Wang  wrote:



On 2018-07-04 12:31, xiangxia.m@gmail.com wrote:

From: Tonghao Zhang 

This patch set improves the guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time;
handle_rx does it in the same way.

For the detailed performance report, see patch 4.

v4 -> v5:
fix some issues

v3 -> v4:
fix some issues

v2 -> v3:
These patches are split from the previous big patch:
http://patchwork.ozlabs.org/patch/934673/

Tonghao Zhang (4):
vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
net: vhost: factor out busy polling logic to vhost_net_busy_poll()
net: vhost: add rx busy polling in tx path

   drivers/vhost/net.c   | 108 --
   drivers/vhost/vhost.c |  24 ---
   2 files changed, 67 insertions(+), 65 deletions(-)


Hi, any progress on the new version?

I plan to send a new series of packed virtqueue support of vhost. If you
plan to send it soon, I can wait. Otherwise, I will send my series.

I rebased the code and found there is no improvement anymore; the
patches from Makita may have solved the problem. Jason, you may send
your patches, and I will do some research on busy polling.


I see. Maybe you can try some bi-directional traffic.

Btw, lots of optimizations could be done for busy polling, e.g.
integrating with host NAPI busy polling or a 100% busy polling
vhost_net. You're welcome to work on these or propose new ideas.


Thanks




Thanks
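
(For illustration: the shape of the busy-polling change in this series,
patch 3's vhost_net_busy_poll(), is to spin on both directions of a
virtqueue pair during the busy-loop window instead of only one. A
simplified C sketch, not the kernel code; all helper names here are
hypothetical stand-ins:)

	struct vq_pair;                          /* stand-in type */
	int rx_sock_has_data(struct vq_pair *p); /* hypothetical helper */
	int tx_ring_has_work(struct vq_pair *p); /* hypothetical helper */
	unsigned long now_us(void);              /* hypothetical clock */

	/* Spin until either direction has work or the budget expires;
	 * the caller then processes whichever side became ready. */
	static void busy_poll_both_ways(struct vq_pair *p, unsigned long budget_us)
	{
		unsigned long start = now_us();

		while (now_us() - start < budget_us) {
			if (rx_sock_has_data(p) || tx_ring_has_work(p))
				return;
		}
	}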



Re: [PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Michael S. Tsirkin
On Wed, Jul 11, 2018 at 07:00:37AM +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 10, 2018 at 10:33:08AM -0700, Linus Torvalds wrote:
> > NAK.
> > 
> > On Tue, Jul 10, 2018 at 2:56 AM Wei Wang  wrote:
> > >
> > > +
> > > +   buf_page = list_first_entry_or_null(pages, struct page, lru);
> > > +   if (!buf_page)
> > > +   return -EINVAL;
> > > +   buf = (__le64 *)page_address(buf_page);
> > 
> > Stop this garbage.
> > 
> > Why the hell would you pass in some crazy "list of pages" that uses
> > that lru list?
> > 
> > That's just insane shit.
> > 
> > Just pass in an array to fill in. No idiotic games like this with
> > odd list entries (what's the locking?) and crazy casting to
> > 
> > So if you want an array of page addresses, pass that in as such. If
> > you want to do it in a page, do it with
> > 
> > u64 *array = page_address(page);
> > int nr = PAGE_SIZE / sizeof(u64);
> > 
> > and now you pass that array in to the thing. None of this completely
> > insane crazy crap interfaces.
> 
> Question was raised what to do if there are so many free
> MAX_ORDER pages that their addresses don't fit in a single MAX_ORDER
> page.

Oh you answered already, I spoke too soon. Nevermind, pls ignore me.


Re: [PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Michael S. Tsirkin
On Tue, Jul 10, 2018 at 10:33:08AM -0700, Linus Torvalds wrote:
> NAK.
> 
> On Tue, Jul 10, 2018 at 2:56 AM Wei Wang  wrote:
> >
> > +
> > +   buf_page = list_first_entry_or_null(pages, struct page, lru);
> > +   if (!buf_page)
> > +   return -EINVAL;
> > +   buf = (__le64 *)page_address(buf_page);
> 
> Stop this garbage.
> 
> Why the hell would you pass in some crazy "list of pages" that uses
> that lru list?
> 
> That's just insane shit.
> 
> Just pass in an array to fill in. No idiotic games like this with
> odd list entries (what's the locking?) and crazy casting to
> 
> So if you want an array of page addresses, pass that in as such. If
> you want to do it in a page, do it with
> 
> u64 *array = page_address(page);
> int nr = PAGE_SIZE / sizeof(u64);
> 
> and now you pass that array in to the thing. None of this completely
> insane crazy crap interfaces.

Question was raised what to do if there are so many free
MAX_ORDER pages that their addresses don't fit in a single MAX_ORDER
page. Yes, only a huge guest would trigger that but it seems
theoretically possible.

I guess an array of arrays then?

An alternative suggestion was not to pass an array at all,
instead peel enough pages off the list to contain
all free entries. Maybe that's too hacky.


> 
> Plus, I still haven't heard an explanation for why you want so many
> pages in the first place, and why you want anything but MAX_ORDER-1.
> 
> So no. This kind of unnecessarily complex code with completely insane
> calling interfaces does not make it into the VM layer.
> 
> Maybe that crazy "let's pass a chain of pages that uses the lru list"
> makes sense to the virtio-balloon code. But you need to understand
> that it makes ZERO conceptual sense to anybody else. And the core VM
> code is about a million times more important than the balloon code in
> this case, so you had better make the interface make sense to *it*.
> 
>Linus


Re: [PATCH net-next v5 0/4] net: vhost: improve performance when enable busyloop

2018-07-10 Thread Tonghao Zhang
On Wed, Jul 11, 2018 at 10:56 AM Jason Wang  wrote:
>
>
>
> On 2018-07-04 12:31, xiangxia.m@gmail.com wrote:
> > From: Tonghao Zhang 
> >
> > This patch set improves the guest receive and transmit performance.
> > On the handle_tx side, we poll the sock receive queue at the same time;
> > handle_rx does it in the same way.
> >
> > For the detailed performance report, see patch 4.
> >
> > v4 -> v5:
> > fix some issues
> >
> > v3 -> v4:
> > fix some issues
> >
> > v2 -> v3:
> > These patches are split from the previous big patch:
> > http://patchwork.ozlabs.org/patch/934673/
> >
> > Tonghao Zhang (4):
> >vhost: lock the vqs one by one
> >net: vhost: replace magic number of lock annotation
> >net: vhost: factor out busy polling logic to vhost_net_busy_poll()
> >net: vhost: add rx busy polling in tx path
> >
> >   drivers/vhost/net.c   | 108 --
> >   drivers/vhost/vhost.c |  24 ---
> >   2 files changed, 67 insertions(+), 65 deletions(-)
> >
>
> Hi, any progress on the new version?
>
> I plan to send a new series of packed virtqueue support of vhost. If you
> plan to send it soon, I can wait. Otherwise, I will send my series.
I rebased the code and found there is no improvement anymore; the
patches from Makita may have solved the problem. Jason, you may send
your patches, and I will do some research on busy polling.

> Thanks

Re: [PATCH net-next v5 0/4] net: vhost: improve performance when enable busyloop

2018-07-10 Thread Jason Wang



On 2018-07-04 12:31, xiangxia.m@gmail.com wrote:

From: Tonghao Zhang 

This patch set improves the guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time;
handle_rx does it in the same way.

For the detailed performance report, see patch 4.

v4 -> v5:
fix some issues

v3 -> v4:
fix some issues

v2 -> v3:
These patches are split from the previous big patch:
http://patchwork.ozlabs.org/patch/934673/

Tonghao Zhang (4):
   vhost: lock the vqs one by one
   net: vhost: replace magic number of lock annotation
   net: vhost: factor out busy polling logic to vhost_net_busy_poll()
   net: vhost: add rx busy polling in tx path

  drivers/vhost/net.c   | 108 --
  drivers/vhost/vhost.c |  24 ---
  2 files changed, 67 insertions(+), 65 deletions(-)



Hi, any progress on the new version?

I plan to send a new series of packed virtqueue support of vhost. If you 
plan to send it soon, I can wait. Otherwise, I will send my series.


Thanks

Re: [PATCH net-next v2 0/5] virtio: support packed ring

2018-07-10 Thread Jason Wang



On 2018-07-11 10:27, Tiwei Bie wrote:

Hello everyone,

This patch set implements packed ring support in virtio driver.

Some functional tests have been done with Jason's
packed ring implementation in vhost:

https://lkml.org/lkml/2018/7/3/33

Both ping and netperf worked as expected.

v1 -> v2:
- Use READ_ONCE() to read event off_wrap and flags together (Jason);
- Add comments related to ccw (Jason);

RFC (v6) -> v1:
- Avoid extra virtio_wmb() in virtqueue_enable_cb_delayed_packed()
   when event idx is off (Jason);
- Fix bufs calculation in virtqueue_enable_cb_delayed_packed() (Jason);
- Test the state of the desc at used_idx instead of last_used_idx
   in virtqueue_enable_cb_delayed_packed() (Jason);
- Save wrap counter (as part of queue state) in the return value
   of virtqueue_enable_cb_prepare_packed();
- Refine the packed ring definitions in uapi;
- Rebase on the net-next tree;

RFC v5 -> RFC v6:
- Avoid tracking addr/len/flags when DMA API isn't used (MST/Jason);
- Define wrap counter as bool (Jason);
- Use ALIGN() in vring_init_packed() (Jason);
- Avoid using pointer to track `next` in detach_buf_packed() (Jason);
- Add comments for barriers (Jason);
- Don't enable RING_PACKED on ccw for now (noticed by Jason);
- Refine the memory barrier in virtqueue_poll();
- Add a missing memory barrier in virtqueue_enable_cb_delayed_packed();
- Remove the hacks in virtqueue_enable_cb_prepare_packed();

RFC v4 -> RFC v5:
- Save DMA addr, etc in desc state (Jason);
- Track used wrap counter;

RFC v3 -> RFC v4:
- Make ID allocation support out-of-order (Jason);
- Various fixes for EVENT_IDX support;

RFC v2 -> RFC v3:
- Split into small patches (Jason);
- Add helper virtqueue_use_indirect() (Jason);
- Just set id for the last descriptor of a list (Jason);
- Calculate the prev in virtqueue_add_packed() (Jason);
- Fix/improve desc suppression code (Jason/MST);
- Refine the code layout for XXX_split/packed and wrappers (MST);
- Fix the comments and API in uapi (MST);
- Remove the BUG_ON() for indirect (Jason);
- Some other refinements and bug fixes;

RFC v1 -> RFC v2:
- Add indirect descriptor support - compile test only;
- Add event suppression support - compile test only;
- Move vring_packed_init() out of uapi (Jason, MST);
- Merge two loops into one in virtqueue_add_packed() (Jason);
- Split vring_unmap_one() for packed ring and split ring (Jason);
- Avoid using '%' operator (Jason);
- Rename free_head -> next_avail_idx (Jason);
- Add comments for virtio_wmb() in virtqueue_add_packed() (Jason);
- Some other refinements and bug fixes;

Thanks!

Tiwei Bie (5):
   virtio: add packed ring definitions
   virtio_ring: support creating packed ring
   virtio_ring: add packed ring support
   virtio_ring: add event idx support in packed ring
   virtio_ring: enable packed ring

  drivers/s390/virtio/virtio_ccw.c   |   14 +
  drivers/virtio/virtio_ring.c   | 1365 ++--
  include/linux/virtio_ring.h|8 +-
  include/uapi/linux/virtio_config.h |3 +
  include/uapi/linux/virtio_ring.h   |   43 +
  5 files changed, 1157 insertions(+), 276 deletions(-)



Acked-by: Jason Wang 

Thanks!


[PATCH net-next v2 5/5] virtio_ring: enable packed ring

2018-07-10 Thread Tiwei Bie
Signed-off-by: Tiwei Bie 
---
 drivers/s390/virtio/virtio_ccw.c | 14 ++
 drivers/virtio/virtio_ring.c |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 8f5c1d7f751a..8654f3a94635 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -765,6 +765,17 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
return rc;
 }
 
+static void ccw_transport_features(struct virtio_device *vdev)
+{
+   /*
+* Packed ring isn't enabled on virtio_ccw for now,
+* because virtio_ccw uses some legacy accessors,
+* e.g. virtqueue_get_avail() and virtqueue_get_used()
+* which aren't available in packed ring currently.
+*/
+   __virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED);
+}
+
 static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 {
struct virtio_ccw_device *vcdev = to_vc_device(vdev);
@@ -791,6 +802,9 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
/* Give virtio_ring a chance to accept features. */
vring_transport_features(vdev);
 
+   /* Give virtio_ccw a chance to accept features. */
+   ccw_transport_features(vdev);
+
features->index = 0;
features->features = cpu_to_le32((u32)vdev->features);
/* Write the first half of the feature bits to the host. */
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f79a1e17f7d1..807ed4b362c5 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1968,6 +1968,8 @@ void vring_transport_features(struct virtio_device *vdev)
break;
case VIRTIO_F_IOMMU_PLATFORM:
break;
+   case VIRTIO_F_RING_PACKED:
+   break;
default:
/* We don't understand this bit. */
__virtio_clear_bit(vdev, i);
-- 
2.18.0



[PATCH net-next v2 3/5] virtio_ring: add packed ring support

2018-07-10 Thread Tiwei Bie
This commit introduces the support (without EVENT_IDX) for
packed ring.

Signed-off-by: Tiwei Bie 
---
 drivers/virtio/virtio_ring.c | 495 ++-
 1 file changed, 487 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c4f8abc7445a..f317b485ba54 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -55,12 +55,21 @@
 #define END_USE(vq)
 #endif
 
+#define _VRING_DESC_F_AVAIL(b) ((__u16)(b) << 7)
+#define _VRING_DESC_F_USED(b)  ((__u16)(b) << 15)
+
 struct vring_desc_state {
void *data; /* Data for callback. */
struct vring_desc *indir_desc;  /* Indirect descriptor, if any. */
 };
 
 struct vring_desc_state_packed {
+   void *data; /* Data for callback. */
+   struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
+   int num;/* Descriptor list length. */
+   dma_addr_t addr;/* Buffer DMA addr. */
+   u32 len;/* Buffer length. */
+   u16 flags;  /* Descriptor flags. */
int next;   /* The next desc state. */
 };
 
@@ -660,7 +669,6 @@ static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
 {
struct vring_virtqueue *vq = to_vvq(_vq);
 
-   virtio_mb(vq->weak_barriers);
return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
 }
 
@@ -757,6 +765,72 @@ static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
 }
 
+static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
+struct vring_desc_state_packed *state)
+{
+   u16 flags;
+
+   if (!vring_use_dma_api(vq->vq.vdev))
+   return;
+
+   flags = state->flags;
+
+   if (flags & VRING_DESC_F_INDIRECT) {
+   dma_unmap_single(vring_dma_dev(vq),
+state->addr, state->len,
+(flags & VRING_DESC_F_WRITE) ?
+DMA_FROM_DEVICE : DMA_TO_DEVICE);
+   } else {
+   dma_unmap_page(vring_dma_dev(vq),
+  state->addr, state->len,
+  (flags & VRING_DESC_F_WRITE) ?
+  DMA_FROM_DEVICE : DMA_TO_DEVICE);
+   }
+}
+
+static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
+  struct vring_packed_desc *desc)
+{
+   u16 flags;
+
+   if (!vring_use_dma_api(vq->vq.vdev))
+   return;
+
+   flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
+
+   if (flags & VRING_DESC_F_INDIRECT) {
+   dma_unmap_single(vring_dma_dev(vq),
+virtio64_to_cpu(vq->vq.vdev, desc->addr),
+virtio32_to_cpu(vq->vq.vdev, desc->len),
+(flags & VRING_DESC_F_WRITE) ?
+DMA_FROM_DEVICE : DMA_TO_DEVICE);
+   } else {
+   dma_unmap_page(vring_dma_dev(vq),
+  virtio64_to_cpu(vq->vq.vdev, desc->addr),
+  virtio32_to_cpu(vq->vq.vdev, desc->len),
+  (flags & VRING_DESC_F_WRITE) ?
+  DMA_FROM_DEVICE : DMA_TO_DEVICE);
+   }
+}
+
+static struct vring_packed_desc *alloc_indirect_packed(struct virtqueue *_vq,
+  unsigned int total_sg,
+  gfp_t gfp)
+{
+   struct vring_packed_desc *desc;
+
+   /*
+* We require lowmem mappings for the descriptors because
+* otherwise virt_to_phys will give us bogus addresses in the
+* virtqueue.
+*/
+   gfp &= ~__GFP_HIGHMEM;
+
+   desc = kmalloc(total_sg * sizeof(struct vring_packed_desc), gfp);
+
+   return desc;
+}
+
 static inline int virtqueue_add_packed(struct virtqueue *_vq,
   struct scatterlist *sgs[],
   unsigned int total_sg,
@@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
   void *ctx,
   gfp_t gfp)
 {
+   struct vring_virtqueue *vq = to_vvq(_vq);
+   struct vring_packed_desc *desc;
+   struct scatterlist *sg;
+   unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
+   __virtio16 uninitialized_var(head_flags), flags;
+   u16 head, avail_wrap_counter, id, curr;
+   bool indirect;
+
+   START_USE(vq);
+
+   BUG_ON(data == NULL);
+   BUG_ON(ctx && vq->indirect);
+
+   if (unlikely(vq->broken)) {
+ 

[PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring

2018-07-10 Thread Tiwei Bie
This commit introduces the EVENT_IDX support in packed ring.

Signed-off-by: Tiwei Bie 
---
 drivers/virtio/virtio_ring.c | 73 
 1 file changed, 65 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f317b485ba54..f79a1e17f7d1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1050,7 +1050,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 {
struct vring_virtqueue *vq = to_vvq(_vq);
-   u16 flags;
+   u16 new, old, off_wrap, flags, wrap_counter, event_idx;
bool needs_kick;
u32 snapshot;
 
@@ -1059,9 +1059,19 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 * suppressions. */
virtio_mb(vq->weak_barriers);
 
+   old = vq->next_avail_idx - vq->num_added;
+   new = vq->next_avail_idx;
+   vq->num_added = 0;
+
snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
+   off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
 
+   wrap_counter = off_wrap >> 15;
+   event_idx = off_wrap & ~(1 << 15);
+   if (wrap_counter != vq->avail_wrap_counter)
+   event_idx -= vq->vring_packed.num;
+
 #ifdef DEBUG
if (vq->last_add_time_valid) {
WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
@@ -1070,7 +1080,10 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
vq->last_add_time_valid = false;
 #endif
 
-   needs_kick = (flags != VRING_EVENT_F_DISABLE);
+   if (flags == VRING_EVENT_F_DESC)
+   needs_kick = vring_need_event(event_idx, new, old);
+   else
+   needs_kick = (flags != VRING_EVENT_F_DISABLE);
END_USE(vq);
return needs_kick;
 }
@@ -1185,6 +1198,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
ret = vq->desc_state_packed[id].data;
detach_buf_packed(vq, id, ctx);
 
+   /* If we expect an interrupt for the next entry, tell host
+* by writing event index and flush out the write before
+* the read in the next get_buf call. */
+   if (vq->event_flags_shadow == VRING_EVENT_F_DESC)
+   virtio_store_mb(vq->weak_barriers,
+   &vq->vring_packed.driver->off_wrap,
+   cpu_to_virtio16(_vq->vdev, vq->last_used_idx |
+   ((u16)vq->used_wrap_counter << 15)));
+
 #ifdef DEBUG
vq->last_add_time_valid = false;
 #endif
@@ -1213,8 +1235,18 @@ static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
/* We optimistically turn back on interrupts, then check if there was
 * more to do. */
 
+   if (vq->event) {
+   vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
+   vq->last_used_idx |
+   ((u16)vq->used_wrap_counter << 15));
+   /* We need to update event offset and event wrap
+* counter first before updating event flags. */
+   virtio_wmb(vq->weak_barriers);
+   }
+
if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
-   vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
+   vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
+VRING_EVENT_F_ENABLE;
vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
vq->event_flags_shadow);
}
@@ -1238,22 +1270,47 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
 {
struct vring_virtqueue *vq = to_vvq(_vq);
+   u16 bufs, used_idx, wrap_counter;
 
START_USE(vq);
 
/* We optimistically turn back on interrupts, then check if there was
 * more to do. */
 
+   if (vq->event) {
+   /* TODO: tune this threshold */
+   bufs = (vq->vring_packed.num - _vq->num_free) * 3 / 4;
+   wrap_counter = vq->used_wrap_counter;
+
+   used_idx = vq->last_used_idx + bufs;
+   if (used_idx >= vq->vring_packed.num) {
+   used_idx -= vq->vring_packed.num;
+   wrap_counter ^= 1;
+   }
+
+   vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
+   used_idx | (wrap_counter << 15));
+
+   /* We need to update event offset and event wrap
+* counter first before updating event flags. */
+   virtio_wmb(vq->weak_barriers);
+   } else {
+   used_idx = vq->last_used_idx;
+ 
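
(A worked example of the event-index check in
virtqueue_kick_prepare_packed() above; vring_need_event() is the
existing helper from virtio_ring.h, quoted in patch 1/5 of this digest.
Standalone C, for illustration only:)

	#include <stdint.h>

	/* Same arithmetic as the kernel's vring_need_event() */
	static inline int vring_need_event(uint16_t event_idx, uint16_t new_idx,
					   uint16_t old)
	{
		return (uint16_t)(new_idx - event_idx - 1) <
		       (uint16_t)(new_idx - old);
	}

	/* Worked example: the driver moved avail from old=10 to new=14, so a
	 * device event index inside (10, 14] requires a kick:
	 *   vring_need_event(12, 14, 10) == 1   (1 < 4: kick)
	 *   vring_need_event(20, 14, 10) == 0   (65529 >= 4: no kick)
	 * The wrap_counter adjustment above (event_idx -= vring_packed.num)
	 * maps an event index published for the other wrap of the ring into
	 * this same unwrapped index space before the comparison. */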

[PATCH net-next v2 2/5] virtio_ring: support creating packed ring

2018-07-10 Thread Tiwei Bie
This commit introduces the support for creating packed ring.
All split ring specific functions are given a _split suffix.
Some necessary stubs for packed ring are also added.

Signed-off-by: Tiwei Bie 
---
 drivers/virtio/virtio_ring.c | 801 +++
 include/linux/virtio_ring.h  |   8 +-
 2 files changed, 546 insertions(+), 263 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 814b395007b2..c4f8abc7445a 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -60,11 +60,15 @@ struct vring_desc_state {
struct vring_desc *indir_desc;  /* Indirect descriptor, if any. */
 };
 
+struct vring_desc_state_packed {
+   int next;   /* The next desc state. */
+};
+
 struct vring_virtqueue {
struct virtqueue vq;
 
-   /* Actual memory layout for this queue */
-   struct vring vring;
+   /* Is this a packed ring? */
+   bool packed;
 
/* Can we use weak barriers? */
bool weak_barriers;
@@ -86,11 +90,39 @@ struct vring_virtqueue {
/* Last used index we've seen. */
u16 last_used_idx;
 
-   /* Last written value to avail->flags */
-   u16 avail_flags_shadow;
+   union {
+   /* Available for split ring */
+   struct {
+   /* Actual memory layout for this queue. */
+   struct vring vring;
 
-   /* Last written value to avail->idx in guest byte order */
-   u16 avail_idx_shadow;
+   /* Last written value to avail->flags */
+   u16 avail_flags_shadow;
+
+   /* Last written value to avail->idx in
+* guest byte order. */
+   u16 avail_idx_shadow;
+   };
+
+   /* Available for packed ring */
+   struct {
+   /* Actual memory layout for this queue. */
+   struct vring_packed vring_packed;
+
+   /* Driver ring wrap counter. */
+   bool avail_wrap_counter;
+
+   /* Device ring wrap counter. */
+   bool used_wrap_counter;
+
+   /* Index of the next avail descriptor. */
+   u16 next_avail_idx;
+
+   /* Last written value to driver->flags in
+* guest byte order. */
+   u16 event_flags_shadow;
+   };
+   };
 
/* How to notify other side. FIXME: commonalize hcalls! */
bool (*notify)(struct virtqueue *vq);
@@ -110,11 +142,24 @@ struct vring_virtqueue {
 #endif
 
/* Per-descriptor state. */
-   struct vring_desc_state desc_state[];
+   union {
+   struct vring_desc_state desc_state[1];
+   struct vring_desc_state_packed desc_state_packed[1];
+   };
 };
 
 #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
 
+static inline bool virtqueue_use_indirect(struct virtqueue *_vq,
+ unsigned int total_sg)
+{
+   struct vring_virtqueue *vq = to_vvq(_vq);
+
+   /* If the host supports indirect descriptor tables, and we have multiple
+* buffers, then go indirect. FIXME: tune this threshold */
+   return (vq->indirect && total_sg > 1 && vq->vq.num_free);
+}
+
 /*
  * Modern virtio devices have feature bits to specify whether they need a
  * quirk and bypass the IOMMU. If not there, just use the DMA API.
@@ -200,8 +245,17 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
  cpu_addr, size, direction);
 }
 
-static void vring_unmap_one(const struct vring_virtqueue *vq,
-   struct vring_desc *desc)
+static int vring_mapping_error(const struct vring_virtqueue *vq,
+  dma_addr_t addr)
+{
+   if (!vring_use_dma_api(vq->vq.vdev))
+   return 0;
+
+   return dma_mapping_error(vring_dma_dev(vq), addr);
+}
+
+static void vring_unmap_one_split(const struct vring_virtqueue *vq,
+ struct vring_desc *desc)
 {
u16 flags;
 
@@ -225,17 +279,9 @@ static void vring_unmap_one(const struct vring_virtqueue *vq,
}
 }
 
-static int vring_mapping_error(const struct vring_virtqueue *vq,
-  dma_addr_t addr)
-{
-   if (!vring_use_dma_api(vq->vq.vdev))
-   return 0;
-
-   return dma_mapping_error(vring_dma_dev(vq), addr);
-}
-
-static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
-unsigned int total_sg, gfp_t gfp)
+static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
+  unsigned int total_sg,
+  gfp_t gfp)
 {
struct vring_desc *desc;
   

[PATCH net-next v2 1/5] virtio: add packed ring definitions

2018-07-10 Thread Tiwei Bie
Signed-off-by: Tiwei Bie 
---
 include/uapi/linux/virtio_config.h |  3 +++
 include/uapi/linux/virtio_ring.h   | 43 ++
 2 files changed, 46 insertions(+)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index 449132c76b1c..1196e1c1d4f6 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -75,6 +75,9 @@
  */
 #define VIRTIO_F_IOMMU_PLATFORM33
 
+/* This feature indicates support for the packed virtqueue layout. */
+#define VIRTIO_F_RING_PACKED   34
+
 /*
  * Does the device support Single Root I/O Virtualization?
  */
diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
index 6d5d5faa989b..0254a2ba29cf 100644
--- a/include/uapi/linux/virtio_ring.h
+++ b/include/uapi/linux/virtio_ring.h
@@ -44,6 +44,10 @@
 /* This means the buffer contains a list of buffer descriptors. */
 #define VRING_DESC_F_INDIRECT  4
 
+/* Mark a descriptor as available or used. */
+#define VRING_DESC_F_AVAIL (1ul << 7)
+#define VRING_DESC_F_USED  (1ul << 15)
+
 /* The Host uses this in used->flags to advise the Guest: don't kick me when
  * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
  * will still kick if it's out of buffers. */
@@ -53,6 +57,17 @@
  * optimization.  */
 #define VRING_AVAIL_F_NO_INTERRUPT 1
 
+/* Enable events. */
+#define VRING_EVENT_F_ENABLE   0x0
+/* Disable events. */
+#define VRING_EVENT_F_DISABLE  0x1
+/*
+ * Enable events for a specific descriptor
+ * (as specified by Descriptor Ring Change Event Offset/Wrap Counter).
+ * Only valid if VIRTIO_RING_F_EVENT_IDX has been negotiated.
+ */
+#define VRING_EVENT_F_DESC 0x2
+
 /* We support indirect buffer descriptors */
 #define VIRTIO_RING_F_INDIRECT_DESC28
 
@@ -171,4 +186,32 @@ static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
 }
 
+struct vring_packed_desc_event {
+   /* Descriptor Ring Change Event Offset/Wrap Counter. */
+   __virtio16 off_wrap;
+   /* Descriptor Ring Change Event Flags. */
+   __virtio16 flags;
+};
+
+struct vring_packed_desc {
+   /* Buffer Address. */
+   __virtio64 addr;
+   /* Buffer Length. */
+   __virtio32 len;
+   /* Buffer ID. */
+   __virtio16 id;
+   /* The flags depending on descriptor type. */
+   __virtio16 flags;
+};
+
+struct vring_packed {
+   unsigned int num;
+
+   struct vring_packed_desc *desc;
+
+   struct vring_packed_desc_event *driver;
+
+   struct vring_packed_desc_event *device;
+};
+
 #endif /* _UAPI_LINUX_VIRTIO_RING_H */
-- 
2.18.0
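
(For orientation: the AVAIL/USED bits above form a two-bit handshake
with the ring wrap counters. The driver publishes a descriptor with
AVAIL set to its current wrap counter and USED set to the inverse; the
device marks a descriptor used by making the two bits equal. A
standalone sketch of the resulting used-descriptor test, mirroring what
the driver-side code in patch 3/5 has to do; simplified, not the patch
itself:)

	#include <stdbool.h>
	#include <stdint.h>

	#define VRING_DESC_F_AVAIL (1u << 7)
	#define VRING_DESC_F_USED  (1u << 15)

	/* A descriptor is complete when its AVAIL and USED bits agree with
	 * each other and with the wrap counter the driver expects at its
	 * current used-ring position. */
	static bool is_used_desc(uint16_t flags, bool used_wrap_counter)
	{
		bool avail = !!(flags & VRING_DESC_F_AVAIL);
		bool used  = !!(flags & VRING_DESC_F_USED);

		return avail == used && used == used_wrap_counter;
	}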



[PATCH net-next v2 0/5] virtio: support packed ring

2018-07-10 Thread Tiwei Bie
Hello everyone,

This patch set implements packed ring support in virtio driver.

Some functional tests have been done with Jason's
packed ring implementation in vhost:

https://lkml.org/lkml/2018/7/3/33

Both ping and netperf worked as expected.

v1 -> v2:
- Use READ_ONCE() to read event off_wrap and flags together (Jason);
- Add comments related to ccw (Jason);

RFC (v6) -> v1:
- Avoid extra virtio_wmb() in virtqueue_enable_cb_delayed_packed()
  when event idx is off (Jason);
- Fix bufs calculation in virtqueue_enable_cb_delayed_packed() (Jason);
- Test the state of the desc at used_idx instead of last_used_idx
  in virtqueue_enable_cb_delayed_packed() (Jason);
- Save wrap counter (as part of queue state) in the return value
  of virtqueue_enable_cb_prepare_packed();
- Refine the packed ring definitions in uapi;
- Rebase on the net-next tree;

RFC v5 -> RFC v6:
- Avoid tracking addr/len/flags when DMA API isn't used (MST/Jason);
- Define wrap counter as bool (Jason);
- Use ALIGN() in vring_init_packed() (Jason);
- Avoid using pointer to track `next` in detach_buf_packed() (Jason);
- Add comments for barriers (Jason);
- Don't enable RING_PACKED on ccw for now (noticed by Jason);
- Refine the memory barrier in virtqueue_poll();
- Add a missing memory barrier in virtqueue_enable_cb_delayed_packed();
- Remove the hacks in virtqueue_enable_cb_prepare_packed();

RFC v4 -> RFC v5:
- Save DMA addr, etc in desc state (Jason);
- Track used wrap counter;

RFC v3 -> RFC v4:
- Make ID allocation support out-of-order (Jason);
- Various fixes for EVENT_IDX support;

RFC v2 -> RFC v3:
- Split into small patches (Jason);
- Add helper virtqueue_use_indirect() (Jason);
- Just set id for the last descriptor of a list (Jason);
- Calculate the prev in virtqueue_add_packed() (Jason);
- Fix/improve desc suppression code (Jason/MST);
- Refine the code layout for XXX_split/packed and wrappers (MST);
- Fix the comments and API in uapi (MST);
- Remove the BUG_ON() for indirect (Jason);
- Some other refinements and bug fixes;

RFC v1 -> RFC v2:
- Add indirect descriptor support - compile test only;
- Add event suppression support - compile test only;
- Move vring_packed_init() out of uapi (Jason, MST);
- Merge two loops into one in virtqueue_add_packed() (Jason);
- Split vring_unmap_one() for packed ring and split ring (Jason);
- Avoid using '%' operator (Jason);
- Rename free_head -> next_avail_idx (Jason);
- Add comments for virtio_wmb() in virtqueue_add_packed() (Jason);
- Some other refinements and bug fixes;

Thanks!

Tiwei Bie (5):
  virtio: add packed ring definitions
  virtio_ring: support creating packed ring
  virtio_ring: add packed ring support
  virtio_ring: add event idx support in packed ring
  virtio_ring: enable packed ring

 drivers/s390/virtio/virtio_ccw.c   |   14 +
 drivers/virtio/virtio_ring.c   | 1365 ++--
 include/linux/virtio_ring.h|8 +-
 include/uapi/linux/virtio_config.h |3 +
 include/uapi/linux/virtio_ring.h   |   43 +
 5 files changed, 1157 insertions(+), 276 deletions(-)

-- 
2.18.0



Re: [PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Linus Torvalds
On Tue, Jul 10, 2018 at 6:24 PM Wei Wang  wrote:
>
> We only get addresses of the "MAX_ORDER-1" blocks into the array. The
> max size of the array that could be allocated by kmalloc is
> KMALLOC_MAX_SIZE (i.e. 4MB on x86). With that max array, we could load
> "4MB / sizeof(u64)" addresses of "MAX_ORDER-1" blocks, that is, 2TB free
> memory at most. We thought about removing that 2TB limitation by passing
> in multiple such max arrays (a list of them).

No.

Stop this already.

You're doing everything wrong.

If the array has to describe *all* memory you will ever free, then you
have already lost.

Just do it in chunks.

I don't want the VM code to even fill in that big of an array anyway -
this all happens under the zone lock, and you're walking a list that
is bad for caching anyway.

So plan on an interface that allows _incremental_ freeing, because any
plan that starts with "I worry that maybe two TERABYTES of memory
isn't big enough" is so broken that it's laughable.

That was what I tried to encourage with actually removing the pages
from the page list. That would be an _incremental_ interface. You can
remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark
them free for ballooning that way. And if you still feel you have tons
of free memory, just continue removing more pages from the free list.

Notice? Incremental. Not "I want to have a crazy array that is enough
to hold 2TB at one time".

So here's the rule:

 - make it a simple array interface

 - make the array *small*. Not megabytes. Kilobytes. Because if you're
filling in megabytes worth of free pointers while holding the zone
lock, you're doing something wrong.

 - design the interface so that you do not *need* to have this crazy
"all or nothing" approach.

See what I'm trying to push for. Think "low latency". Think "small
arrays". Think "simple and straightforward interfaces".

At no point should you ever worry about "2TB". Never.

   Linus
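
(For concreteness, the kind of incremental interface being asked for
might look like this hypothetical sketch: a small fixed buffer filled
repeatedly, with the caller looping. None of these names exist in the
patches:)

	#include <stdint.h>

	/* Hypothetical incremental API: fill at most 'n' entries with
	 * MAX_ORDER-1 block addresses and return how many were filled;
	 * the caller just calls again until it gets 0. */
	unsigned int get_free_page_hints(uint64_t *buf, unsigned int n);
	void send_hints_to_host(const uint64_t *buf, unsigned int n);

	void report_all_hints(void)
	{
		uint64_t buf[512];      /* kilobytes, not megabytes */
		unsigned int got;

		while ((got = get_free_page_hints(buf, 512)) > 0)
			send_hints_to_host(buf, got);
	}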


Re: [virtio-dev] Re: [PATCH net-next v1 4/5] virtio_ring: add event idx support in packed ring

2018-07-10 Thread Tiwei Bie
On Tue, Jul 10, 2018 at 01:50:03PM +0800, Jason Wang wrote:
> On 2018-07-09 15:22, Tiwei Bie wrote:
> > @@ -1059,9 +1059,19 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> >  * suppressions. */
> > virtio_mb(vq->weak_barriers);
> > +   old = vq->next_avail_idx - vq->num_added;
> > +   new = vq->next_avail_idx;
> > +   vq->num_added = 0;
> > +
> > snapshot = *(u32 *)vq->vring_packed.device;
> 
> I think we should use READ_ONCE() to prevent compiler from re-reading.

I'll do it. Thanks!

Best regards,
Tiwei Bie

> 
> > +   off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
> > flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
> > +   wrap_counter = off_wrap >> 15;
> > +   event_idx = off_wrap & ~(1 << 15);
> > +   if (wrap_counter != vq->avail_wrap_counter)
> > +   event_idx -= vq->vring_packed.num;
> 
> Thanks
> 
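
(Context for the fix: off_wrap and flags share one 32-bit device event
structure, so reading them through a single snapshot keeps the two
16-bit halves consistent and stops the compiler from re-reading. The v2
code, quoted from patch 4/5 earlier in this digest, with the
decomposition annotated:)

	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
	off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;

	wrap_counter = off_wrap >> 15;          /* top bit: wrap counter */
	event_idx = off_wrap & ~(1 << 15);      /* low 15 bits: event offset */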

Re: [PATCH net-next v1 5/5] virtio_ring: enable packed ring

2018-07-10 Thread Tiwei Bie
On Tue, Jul 10, 2018 at 01:51:20PM +0800, Jason Wang wrote:
> On 2018-07-09 15:22, Tiwei Bie wrote:
> > Signed-off-by: Tiwei Bie 
> > ---
> >   drivers/s390/virtio/virtio_ccw.c | 8 
> >   drivers/virtio/virtio_ring.c | 2 ++
> >   2 files changed, 10 insertions(+)
> > 
> > diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
> > index 8f5c1d7f751a..ff5b85736d8d 100644
> > --- a/drivers/s390/virtio/virtio_ccw.c
> > +++ b/drivers/s390/virtio/virtio_ccw.c
> > @@ -765,6 +765,11 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
> > return rc;
> >   }
> > +static void ccw_transport_features(struct virtio_device *vdev)
> > +{
> > +   __virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED);
> > +}
> 
> I think we need a better comment to explain why it was disabled here.

Yeah, I'll do it!

Best regards,
Tiwei Bie

> 
> Thanks
> 
> > +
> >   static int virtio_ccw_finalize_features(struct virtio_device *vdev)
> >   {
> > struct virtio_ccw_device *vcdev = to_vc_device(vdev);
> > @@ -791,6 +796,9 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
> > /* Give virtio_ring a chance to accept features. */
> > vring_transport_features(vdev);
> > +   /* Give virtio_ccw a chance to accept features. */
> > +   ccw_transport_features(vdev);
> > +
> > features->index = 0;
> > features->features = cpu_to_le32((u32)vdev->features);
> > /* Write the first half of the feature bits to the host. */
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 4b3f9e1a3cab..64f20023f088 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -1968,6 +1968,8 @@ void vring_transport_features(struct virtio_device *vdev)
> > break;
> > case VIRTIO_F_IOMMU_PLATFORM:
> > break;
> > +   case VIRTIO_F_RING_PACKED:
> > +   break;
> > default:
> > /* We don't understand this bit. */
> > __virtio_clear_bit(vdev, i);
> 

Re: [PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Linus Torvalds
NAK.

On Tue, Jul 10, 2018 at 2:56 AM Wei Wang  wrote:
>
> +
> +   buf_page = list_first_entry_or_null(pages, struct page, lru);
> +   if (!buf_page)
> +   return -EINVAL;
> +   buf = (__le64 *)page_address(buf_page);

Stop this garbage.

Why the hell would you pass in some crazy "list of pages" that uses
that lru list?

That's just insane shit.

Just pass in an array to fill in. No idiotic games like this with
odd list entries (what's the locking?) and crazy casting to

So if you want an array of page addresses, pass that in as such. If
you want to do it in a page, do it with

u64 *array = page_address(page);
int nr = PAGE_SIZE / sizeof(u64);

and now you pass that array in to the thing. None of this completely
insane crazy crap interfaces.

Plus, I still haven't heard an explanation for why you want so many
pages in the first place, and why you want anything but MAX_ORDER-1.

So no. This kind of unnecessarily complex code with completely insane
calling interfaces does not make it into the VM layer.

Maybe that crazy "let's pass a chain of pages that uses the lru list"
makes sense to the virtio-balloon code. But you need to understand
that it makes ZERO conceptual sense to anybody else. And the core VM
code is about a million times more important than the balloon code in
this case, so you had better make the interface make sense to *it*.

   Linus


RE: [PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Wang, Wei W
On Tuesday, July 10, 2018 5:31 PM, Wang, Wei W wrote:
> Subject: [PATCH v35 1/5] mm: support to get hints of free page blocks
> 
> This patch adds support to get free page blocks from a free page list.
> The physical addresses of the blocks are stored to a list of buffers passed
> from the caller. The obtained free page blocks are hints about free pages,
> because there is no guarantee that they are still on the free page list after 
> the
> function returns.
> 
> One use example of this patch is to accelerate live migration by skipping the
> transfer of free pages reported from the guest. A popular method used by
> the hypervisor to track which part of memory is written during live migration
> is to write-protect all the guest memory. So, those pages that are hinted as
> free pages but are written after this function returns will be captured by the
> hypervisor, and they will be added to the next round of memory transfer.
> 
> Suggested-by: Linus Torvalds 
> Signed-off-by: Wei Wang 
> Signed-off-by: Liang Li 
> Cc: Michal Hocko 
> Cc: Andrew Morton 
> Cc: Michael S. Tsirkin 
> Cc: Linus Torvalds 
> ---
>  include/linux/mm.h |  3 ++
>  mm/page_alloc.c| 98 ++
>  2 files changed, 101 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a0fbb9f..5ce654f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
>  extern void free_area_init_node(int nid, unsigned long * zones_size,
>  	unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
> +unsigned long max_free_page_blocks(int order);
> +int get_from_free_page_list(int order, struct list_head *pages,
> +	unsigned int size, unsigned long *loaded_num);
>  
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1521100..b67839b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>  
> +/**
> + * max_free_page_blocks - estimate the max number of free page blocks
> + * @order: the order of the free page blocks to estimate
> + *
> + * This function gives a rough estimation of the possible maximum number of
> + * free page blocks a free list may have. The estimation works on an
> + * assumption that all the system pages are on that list.
> + *
> + * Context: Any context.
> + *
> + * Return: The largest number of free page blocks that the free list can have.
> + */
> +unsigned long max_free_page_blocks(int order)
> +{
> +	return totalram_pages / (1 << order);
> +}
> +EXPORT_SYMBOL_GPL(max_free_page_blocks);
> +
> +/**
> + * get_from_free_page_list - get hints of free pages from a free page list
> + * @order: the order of the free page list to check
> + * @pages: the list of page blocks used as buffers to load the addresses
> + * @size: the size of each buffer in bytes
> + * @loaded_num: the number of addresses loaded to the buffers
> + *
> + * This function offers hints about free pages. The addresses of free page
> + * blocks are stored to the list of buffers passed from the caller. There is
> + * no guarantee that the obtained free pages are still on the free page list
> + * after the function returns. pfn_to_page on the obtained free pages is
> + * strongly discouraged and if there is an absolute need for that, make sure
> + * to contact MM people to discuss potential problems.
> + *
> + * The addresses are currently stored to a buffer in little endian. This
> + * avoids the overhead of converting endianness by the caller who needs data
> + * in the little endian format. Big endian support can be added on demand in
> + * the future.
> + *
> + * Context: Process context.
> + *
> + * Return: 0 if all the free page block addresses are stored to the buffers;
> + *         -ENOSPC if the buffers are not sufficient to store all the
> + *         addresses; or -EINVAL if an unexpected argument is received (e.g.
> + *         incorrect @order, empty buffer list).
> + */
> +int get_from_free_page_list(int order, struct list_head *pages,
> +			    unsigned int size, unsigned long *loaded_num)
> +{


Hi Linus,

We took your original suggestion - pass in pre-allocated buffers to load the
addresses (now we use a list of pre-allocated page blocks as buffers). Hope 
that suggestion is still acceptable (the advantage of this method was explained 
here: https://lkml.org/lkml/2018/6/28/184).
Look forward to getting your feedback. Thanks.

Best,
Wei 


[PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules

2018-07-10 Thread Wei Wang
In some usages, e.g. virtio-balloon, a kernel module needs to know if
page poisoning is in use. This patch exposes the page_poisoning_enabled
function to kernel modules.

Signed-off-by: Wei Wang 
Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Acked-by: Andrew Morton 
---
 mm/page_poison.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/mm/page_poison.c b/mm/page_poison.c
index aa2b3d3..830f604 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -17,6 +17,11 @@ static int __init early_page_poison_param(char *buf)
 }
 early_param("page_poison", early_page_poison_param);
 
+/**
+ * page_poisoning_enabled - check if page poisoning is enabled
+ *
+ * Return true if page poisoning is enabled, or false if not.
+ */
 bool page_poisoning_enabled(void)
 {
/*
@@ -29,6 +34,7 @@ bool page_poisoning_enabled(void)
(!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
debug_pagealloc_enabled()));
 }
+EXPORT_SYMBOL_GPL(page_poisoning_enabled);
 
 static void poison_page(struct page *page)
 {
-- 
2.7.4



[PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON

2018-07-10 Thread Wei Wang
The VIRTIO_BALLOON_F_PAGE_POISON feature bit is used to indicate if the
guest is using page poisoning. Guest writes to the poison_val config
field to tell host about the page poisoning value that is in use.

Suggested-by: Michael S. Tsirkin 
Signed-off-by: Wei Wang 
Cc: Michael S. Tsirkin 
Cc: Michal Hocko 
Cc: Andrew Morton 
---
 drivers/virtio/virtio_balloon.c | 10 ++
 include/uapi/linux/virtio_balloon.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 8754154..dd61660 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -869,6 +869,7 @@ static struct file_system_type balloon_fs = {
 static int virtballoon_probe(struct virtio_device *vdev)
 {
struct virtio_balloon *vb;
+   __u32 poison_val;
int err;
 
if (!vdev->config->get) {
@@ -916,6 +917,11 @@ static int virtballoon_probe(struct virtio_device *vdev)
INIT_WORK(&vb->report_free_page_work, report_free_page_func);
vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+   if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
+   memset(&poison_val, PAGE_POISON, sizeof(poison_val));
+   virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+ poison_val, &poison_val);
+   }
}
 
vb->nb.notifier_call = virtballoon_oom_notify;
@@ -1034,6 +1040,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
 
 static int virtballoon_validate(struct virtio_device *vdev)
 {
+   if (!page_poisoning_enabled())
+   __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
+
__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
return 0;
 }
@@ -1043,6 +1052,7 @@ static unsigned int features[] = {
VIRTIO_BALLOON_F_STATS_VQ,
VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
VIRTIO_BALLOON_F_FREE_PAGE_HINT,
+   VIRTIO_BALLOON_F_PAGE_POISON,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index b77919b..97415ba 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ  1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_FREE_PAGE_HINT3 /* VQ to report free pages */
+#define VIRTIO_BALLOON_F_PAGE_POISON   4 /* Guest is using page poisoning */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -47,6 +48,8 @@ struct virtio_balloon_config {
__u32 actual;
/* Free page report command id, readonly by guest */
__u32 free_page_report_cmd_id;
+   /* Stores PAGE_POISON if page poisoning is in use */
+   __u32 poison_val;
 };
 
 struct virtio_balloon_free_page_hints_cmd {
-- 
2.7.4



[PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

2018-07-10 Thread Wei Wang
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
support of reporting hints of guest free pages to host via virtio-balloon.

Host requests the guest to report free page hints by sending a new cmd id
to the guest via the free_page_report_cmd_id configuration register.

As the first step here, virtio-balloon only reports free page hints from
the max order (i.e. 10) free page list to host. In our tests, this has
generated similarly good results to reporting all free page hints.

When the guest starts to report, it first sends a start cmd to host via
the free page vq, which acks to host the cmd id received, and tells it the
hint size (e.g. 4MB each on x86). When the guest finishes the reporting,
a stop cmd is sent to host via the vq.

TODO:
- support reporting free page hints from smaller order free page lists
  when there is a need/request from users.

Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michael S. Tsirkin 
Cc: Michal Hocko 
Cc: Andrew Morton 
---
 drivers/virtio/virtio_balloon.c | 399 +---
 include/uapi/linux/virtio_balloon.h |  11 +
 2 files changed, 384 insertions(+), 26 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 9356a1a..8754154 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -43,6 +43,14 @@
 #define OOM_VBALLOON_DEFAULT_PAGES 256
 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
 
+/* The order used to allocate a buffer to load free page hints */
+#define VIRTIO_BALLOON_HINT_BUF_ORDER (MAX_ORDER - 1)
+/* The number of pages a hint buffer has */
+#define VIRTIO_BALLOON_HINT_BUF_PAGES (1 << VIRTIO_BALLOON_HINT_BUF_ORDER)
+/* The size of a hint buffer in bytes */
+#define VIRTIO_BALLOON_HINT_BUF_SIZE (VIRTIO_BALLOON_HINT_BUF_PAGES << \
+ PAGE_SHIFT)
+
 static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
@@ -51,9 +59,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+enum virtio_balloon_vq {
+   VIRTIO_BALLOON_VQ_INFLATE,
+   VIRTIO_BALLOON_VQ_DEFLATE,
+   VIRTIO_BALLOON_VQ_STATS,
+   VIRTIO_BALLOON_VQ_FREE_PAGE,
+   VIRTIO_BALLOON_VQ_MAX
+};
+
 struct virtio_balloon {
struct virtio_device *vdev;
-   struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+   struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+   /* Balloon's own wq for cpu-intensive work items */
+   struct workqueue_struct *balloon_wq;
+   /* The free page reporting work item submitted to the balloon wq */
+   struct work_struct report_free_page_work;
 
/* The balloon servicing is delegated to a freezable workqueue. */
struct work_struct update_balloon_stats_work;
@@ -63,6 +84,15 @@ struct virtio_balloon {
spinlock_t stop_update_lock;
bool stop_update;
 
+   /* Command buffers to start and stop the reporting of hints to host */
+   struct virtio_balloon_free_page_hints_cmd cmd_start;
+   struct virtio_balloon_free_page_hints_cmd cmd_stop;
+
+   /* The cmd id received from host */
+   u32 cmd_id_received;
+   /* The cmd id that is actively in use */
+   u32 cmd_id_active;
+
/* Waiting for host to ack the pages we released. */
wait_queue_head_t acked;
 
@@ -326,17 +356,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-   struct virtio_balloon *vb = vdev->priv;
-   unsigned long flags;
-
-   spin_lock_irqsave(&vb->stop_update_lock, flags);
-   if (!vb->stop_update)
-   queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-   spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
s64 target;
@@ -353,6 +372,35 @@ static inline s64 towards_target(struct virtio_balloon *vb)
return target - vb->num_pages;
 }
 
+static void virtballoon_changed(struct virtio_device *vdev)
+{
+   struct virtio_balloon *vb = vdev->priv;
+   unsigned long flags;
+   s64 diff = towards_target(vb);
+
+   if (diff) {
+   spin_lock_irqsave(&vb->stop_update_lock, flags);
+   if (!vb->stop_update)
+   queue_work(system_freezable_wq,
+  &vb->update_balloon_size_work);
+   spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+   }
+
+   if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+   virtio_cread(vdev, struct virtio_balloon_config,
+free_page_report_cmd_id, &vb->cmd_id_received);
+   if (vb->cmd_id_received !=
+   VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID &&
+  
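
(To summarize the flow described in this patch's cover text, a
condensed pseudo-C sketch; the helper names are hypothetical, and the
real logic lives in report_free_page_func() in the diff above:)

	/* Guest side, simplified:
	 *  1. ack the host's command id with a start cmd (plus hint size),
	 *  2. stream MAX_ORDER-1 hint buffers through the free page vq,
	 *  3. finish with a stop cmd. */
	static void report_free_pages(struct virtio_balloon *vb)
	{
		send_start_cmd(vb->free_page_vq, vb->cmd_id_active,
			       VIRTIO_BALLOON_HINT_BUF_SIZE);    /* hypothetical */

		while (load_next_hint_buffer(vb))                 /* hypothetical */
			queue_hint_buffer(vb->free_page_vq, vb);

		send_stop_cmd(vb->free_page_vq);                  /* hypothetical */
	}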

[PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs

2018-07-10 Thread Wei Wang
It's a bit overkill to use BUG when failing to add an entry to the
stats_vq in init_vqs. So remove it and just return the error to the
caller to bail out nicely.

Signed-off-by: Wei Wang 
Cc: Michael S. Tsirkin 
---
 drivers/virtio/virtio_balloon.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 6b237e3..9356a1a 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -455,9 +455,13 @@ static int init_vqs(struct virtio_balloon *vb)
num_stats = update_balloon_stats(vb);
 
sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
-   if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
-   < 0)
-   BUG();
+   err = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
+  GFP_KERNEL);
+   if (err) {
+   dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
+__func__);
+   return err;
+   }
virtqueue_kick(vb->stats_vq);
}
return 0;
-- 
2.7.4



[PATCH v35 1/5] mm: support to get hints of free page blocks

2018-07-10 Thread Wei Wang
This patch adds support to get free page blocks from a free page list.
The physical addresses of the blocks are stored to a list of buffers
passed from the caller. The obtained free page blocks are hints about
free pages, because there is no guarantee that they are still on the free
page list after the function returns.

One use example of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are hinted as free pages but are written after this function returns will
be captured by the hypervisor, and they will be added to the next round of
memory transfer.

Suggested-by: Linus Torvalds 
Signed-off-by: Wei Wang 
Signed-off-by: Liang Li 
Cc: Michal Hocko 
Cc: Andrew Morton 
Cc: Michael S. Tsirkin 
Cc: Linus Torvalds 
---
 include/linux/mm.h |  3 ++
 mm/page_alloc.c| 98 ++
 2 files changed, 101 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a0fbb9f..5ce654f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+unsigned long max_free_page_blocks(int order);
+int get_from_free_page_list(int order, struct list_head *pages,
+   unsigned int size, unsigned long *loaded_num);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100..b67839b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
show_swap_cache_info();
 }
 
+/**
+ * max_free_page_blocks - estimate the max number of free page blocks
+ * @order: the order of the free page blocks to estimate
+ *
+ * This function gives a rough estimation of the possible maximum number of
+ * free page blocks a free list may have. The estimation works on an assumption
+ * that all the system pages are on that list.
+ *
+ * Context: Any context.
+ *
+ * Return: The largest number of free page blocks that the free list can have.
+ */
+unsigned long max_free_page_blocks(int order)
+{
+   return totalram_pages / (1 << order);
+}
+EXPORT_SYMBOL_GPL(max_free_page_blocks);
+
+/**
+ * get_from_free_page_list - get hints of free pages from a free page list
+ * @order: the order of the free page list to check
+ * @pages: the list of page blocks used as buffers to load the addresses
+ * @size: the size of each buffer in bytes
+ * @loaded_num: the number of addresses loaded to the buffers
+ *
+ * This function offers hints about free pages. The addresses of free page
+ * blocks are stored to the list of buffers passed from the caller. There is
+ * no guarantee that the obtained free pages are still on the free page list
+ * after the function returns. pfn_to_page on the obtained free pages is
+ * strongly discouraged and if there is an absolute need for that, make sure
+ * to contact MM people to discuss potential problems.
+ *
+ * The addresses are currently stored to a buffer in little endian. This
+ * avoids the overhead of converting endianness by the caller who needs data
+ * in the little endian format. Big endian support can be added on demand in
+ * the future.
+ *
+ * Context: Process context.
+ *
+ * Return: 0 if all the free page block addresses are stored to the buffers;
+ * -ENOSPC if the buffers are not sufficient to store all the
+ * addresses; or -EINVAL if an unexpected argument is received (e.g.
+ * incorrect @order, empty buffer list).
+ */
+int get_from_free_page_list(int order, struct list_head *pages,
+   unsigned int size, unsigned long *loaded_num)
+{
+   struct zone *zone;
+   enum migratetype mt;
+   struct list_head *free_list;
+   struct page *free_page, *buf_page;
+   unsigned long addr;
+   __le64 *buf;
+   unsigned int used_buf_num = 0, entry_index = 0,
+entries = size / sizeof(__le64);
+   *loaded_num = 0;
+
+   /* Validity check */
+   if (order < 0 || order >= MAX_ORDER)
+   return -EINVAL;
+
+   buf_page = list_first_entry_or_null(pages, struct page, lru);
+   if (!buf_page)
+   return -EINVAL;
+   buf = (__le64 *)page_address(buf_page);
+
+   for_each_populated_zone(zone) {
+   spin_lock_irq(&zone->lock);
+   for (mt = 0; mt < MIGRATE_TYPES; mt++) {
+   free_list = &zone->free_area[order].free_list[mt];
+   list_for_each_entry(free_page, free_list, lru) {
+   addr = 

[PATCH v35 0/5] Virtio-balloon: support free page reporting

2018-07-10 Thread Wei Wang
This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables the optimization by skipping the transfer of guest
free pages during VM live migration. It is not a concern that the memory
pages may be used after they are given to the hypervisor as free page
hints, because they will be tracked by the hypervisor and transferred
in the subsequent round if they are used and written.
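
(For illustration, a minimal sketch of how a caller such as
virtio-balloon can drive the new mm API; only get_from_free_page_list()
and its semantics come from patch 1/5, and VIRTIO_BALLOON_HINT_BUF_SIZE
from patch 3/5 above; everything else here is hypothetical:)

	/* 'bufs' is a caller-provided list of pre-allocated MAX_ORDER-1
	 * page blocks used as __le64 address buffers. */
	static int report_hints(struct virtio_balloon *vb, struct list_head *bufs)
	{
		unsigned long loaded = 0;
		int ret;

		ret = get_from_free_page_list(MAX_ORDER - 1, bufs,
					      VIRTIO_BALLOON_HINT_BUF_SIZE,
					      &loaded);
		/* -ENOSPC: the buffers could not hold every address; the
		 * hints that were loaded are still usable. */
		if (ret && ret != -ENOSPC)
			return ret;

		send_loaded_hints_to_host(vb, bufs, loaded);      /* hypothetical */
		return 0;
	}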

* Tests
- Test Environment
Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Guest: 8G RAM, 4 vCPU
Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
- Idle Guest Live Migration Time (results are averaged over 10 runs):
- Optimization v.s. Legacy = 291ms vs 1757ms --> ~84% reduction
(setting page poisoning zero and enabling ksm don't affect the
 comparison result)
- Guest with Linux Compilation Workload (make bzImage -j4):
- Live Migration Time (average)
  Optimization v.s. Legacy = 1420ms v.s. 2528ms --> ~44% reduction
- Linux Compilation Time
  Optimization v.s. Legacy = 5min8s v.s. 5min12s
  --> no obvious difference

ChangeLog:
v34->v35:
- mm:
   - get_from_free_page_list: use a list of page blocks as buffers to
store addresses, instead of an array of buffers.
- virtio-balloon:
- Allocate a list of buffers, instead of an array of buffers.
- Used buffers are freed after host puts the buffer to the used
  ring; unused buffers are freed immediately when guest finishes
  reporting.
- change uint32_t to u32;
- patch 2 is split out as an independent patch, as it's unrelated
  to the free page hinting feature.
v33->v34:
- mm:
- add a new API max_free_page_blocks, which estimates the max
  number of free page blocks that a free page list may have
- get_from_free_page_list: store addresses to multiple arrays,
  instead of just one array. This removes the limitation of being
  able to report only 2TB free memory (the largest array memory
  that can be allocated on x86 is 4MB, which can store 2^19
  addresses of 4MB free page blocks).
- virtio-balloon:
- Allocate multiple arrays to load free page hints;
- Use the same method in v32 to do guest/host interaction, the
  differences are
  - the hints are transferred array by array, instead of
one by one.
  - send the free page block size of a hint along with the cmd
id to host, so that host knows each address represents e.g.
a 4MB memory block in our case.
v32->v33:
- mm/get_from_free_page_list: The new implementation to get free page
  hints based on the suggestions from Linus:
  https://lkml.org/lkml/2018/6/11/764
  This avoids the complex call chain, and looks more prudent.
- virtio-balloon: 
  - use a fix-sized buffer to get free page hints;
  - remove the cmd id related interface. Now host can just send a free
page hint command to the guest (via the host_cmd config register)
to start the reporting. Currently the guest reports only the max
order free page hints to host, which has generated similar good
results as before. But the interface used by virtio-balloon to
report can support reporting more orders in the future when there
is a need.
v31->v32:
- virtio-balloon:
- rename cmd_id_use to cmd_id_active;
- report_free_page_func: detach used buffers after host sends a vq
  interrupt, instead of busy waiting for used buffers.
v30->v31:
- virtio-balloon:
- virtio_balloon_send_free_pages: return -EINTR rather than 1 to
  indicate an active stop requested by host; and add more
  comments to explain about access to cmd_id_received without
  locks;
-  add_one_sg: add TODO to comments about possible improvement.
v29->v30:
- mm/walk_free_mem_block: add cond_sched() for each order
v28->v29:
- mm/page_poison: only expose page_poison_enabled(), rather than more
  changes did in v28, as we are not 100% confident about that for now.
- virtio-balloon: use a separate buffer for the stop cmd,