Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2019-01-02 Thread Jason Wang



On 2018/12/31 2:30 AM, Michael S. Tsirkin wrote:

On Thu, Dec 27, 2018 at 05:39:21PM +0800, Jason Wang wrote:

On 2018/12/26 11:02 PM, Michael S. Tsirkin wrote:

On Wed, Dec 26, 2018 at 11:57:32AM +0800, Jason Wang wrote:

On 2018/12/25 8:50 PM, Michael S. Tsirkin wrote:

On Tue, Dec 25, 2018 at 06:05:25PM +0800, Jason Wang wrote:

On 2018/12/25 2:10 AM, Michael S. Tsirkin wrote:

On Mon, Dec 24, 2018 at 03:53:16PM +0800, Jason Wang wrote:

On 2018/12/14 8:36 PM, Michael S. Tsirkin wrote:

On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote:

On 2018/12/13 11:44 PM, Michael S. Tsirkin wrote:

On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:

It was noticed that the copy_user() friends used to access
virtqueue metadata tend to be very expensive for a dataplane
implementation like vhost, since they involve lots of software checks,
speculation barriers, and hardware feature toggling (e.g. SMAP). The
extra cost is more obvious when transferring small packets.

This patch tries to eliminate that overhead by pinning the vq metadata
pages and accessing them through vmap(). During SET_VRING_ADDR we set up
those mappings, and the memory accessors are modified to use pointers to
access the metadata directly.

Note, this is only done when the device IOTLB is not enabled. We could
use a similar method to optimize that case in the future.

Tests show about a ~24% improvement in TX PPS when using virtio-user +
vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):

Before: ~5.0Mpps
After:  ~6.1Mpps

Signed-off-by: Jason Wang
---
   drivers/vhost/vhost.c | 178 ++
   drivers/vhost/vhost.h |  11 +++
   2 files changed, 189 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index bafe39d2e637..1bd24203afb6 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
vq->indirect = NULL;
vq->heads = NULL;
vq->dev = dev;
+   memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
+   memset(&vq->used_ring, 0, sizeof(vq->used_ring));
+   memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
mutex_init(&vq->mutex);
vhost_vq_reset(dev, vq);
if (vq->handle_kick)
@@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(&dev->iotlb_lock);
   }
+static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
+  size_t size, int write)
+{
+   struct page **pages;
+   int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+   int npinned;
+   void *vaddr;
+
+   pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return -ENOMEM;
+
+   npinned = get_user_pages_fast(uaddr, npages, write, pages);
+   if (npinned != npages)
+   goto err;
+
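The archive truncates the hunk at this point. The remainder of vhost_init_vmap() presumably maps the pinned pages and unpins them on failure; a hedged sketch of that pin-and-vmap pattern follows (this is a reconstruction, not the literal patch, and field names such as map->addr, map->pages, and map->npages are assumptions):

```c
/* Sketch only: the archive cuts the function off above, so everything
 * below is a reconstruction of the usual pin-and-vmap pattern, not
 * the literal patch. The vhost_vmap field names are assumptions. */
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	if (!vaddr)
		goto err;

	/* Keep the sub-page offset of the original user address. */
	map->addr = vaddr + (uaddr & (PAGE_SIZE - 1));
	map->pages = pages;
	map->npages = npages;

	return 0;

err:
	/* Unpin whatever get_user_pages_fast() managed to pin. */
	while (npinned > 0)
		put_page(pages[--npinned]);
	kfree(pages);
	return -EFAULT;
```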

As I said I have doubts about the whole approach, but this
implementation in particular isn't a good idea
as it keeps the page around forever.

The pages will be released during set features.



So no THP, no NUMA rebalancing,

For THP, we will probably miss 2 or 4 pages, but does this really matter
considering the gain we have?

We as in vhost? Networking isn't the only thing the guest does.
We don't even know if this guest does a lot of networking.
You don't know what else is in this huge page. It can be something
very important that the guest touches all the time.

Well, the probability should be very small considering we usually give several
gigabytes to the guest. The rest of the pages that don't sit in the same
hugepage as the metadata can still be merged by THP.  Anyway, I can test the
differences.

Thanks!


For NUMA rebalancing, I'm not even quite sure it helps for the IPC case
(vhost). It looks to me like in the worst case it may cause pages to thrash
between nodes if vhost and userspace are running on two different nodes.

So again it's a gain for vhost but has a completely unpredictable effect on
other functionality of the guest.

That's what bothers me with this approach.

So:

- The rest of the pages could still be balanced to other nodes, no?

- trying to balance the metadata pages (which belong to co-operating
processes) is itself still questionable

I am not sure why. It should be easy enough to force the VCPU and vhost
to move (e.g. start them pinned to 1 cpu, then pin them to another one).
Clearly sometimes this would be necessary for load balancing reasons.

Yes, but it looks to me that part of the motivation of auto NUMA is to avoid
manual pinning.

... of memory. Yes.



With autonuma after a while (could take seconds but it will happen) the
memory will migrate.


Yes. As you mentioned during the discussion, I wonder if we could do it
similarly through an MMU notifier, like the APIC access page in commit
c24ae0dcd3e ("kvm: x86: Unpin and remove kvm_arch->apic_access_page").
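The MMU-notifier approach being floated here would register a notifier on the owner's mm and tear down the vmap when the underlying pages are invalidated, re-pinning on the next access. A rough sketch of that shape follows; all vhost_* names are invented, and callback signatures vary between kernel versions:

```c
/* Hypothetical sketch of the MMU-notifier approach discussed above,
 * in the spirit of commit c24ae0dcd3e for the KVM APIC access page.
 * All vhost_* names are invented; the callback signature varies
 * between kernel versions (this follows the pre-4.19 form). */
static void vhost_mn_invalidate_range_start(struct mmu_notifier *mn,
					    struct mm_struct *mm,
					    unsigned long start,
					    unsigned long end)
{
	struct vhost_dev *dev = container_of(mn, struct vhost_dev, mn);

	/* If the invalidated range overlaps a vmap'ed ring, unpin the
	 * pages and drop the mapping; the next accessor re-pins it. */
	vhost_invalidate_vmaps(dev, start, end);	/* invented helper */
}

static const struct mmu_notifier_ops vhost_mn_ops = {
	.invalidate_range_start	= vhost_mn_invalidate_range_start,
};

/* Registered once against the owner's mm, e.g. at SET_OWNER:
 *	dev->mn.ops = &vhost_mn_ops;
 *	mmu_notifier_register(&dev->mn, current->mm);
 */
```

This keeps the fast vmap path while letting THP collapse and NUMA migration proceed, at the cost of occasional re-pinning.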

That would be a possible approach.


Yes, this looks possible, and the conversion seems not hard. Let me have
a try with this.

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-30 Thread Michael S. Tsirkin

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-27 Thread Jason Wang



Yes, this looks possible, and the conversion seems not hard. Let me have 
a try with this.



[...]



I don't see how a kthread makes any 

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-26 Thread Michael S. Tsirkin

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-25 Thread Jason Wang



This is the price for all GUP users, not only vhost itself.

Yes. GUP is just not a great interface for vhost to use.

The zerocopy code (enabled by default) has used them for years.

But only for TX, and temporarily. We pin, read, unpin.

Probably not. For several reasons the page may not be released soon, or may
be held for a very long period of time or even forever.
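The transient pattern described here, pin, read, unpin, looks roughly like this in kernel code (a simplified sketch for illustration, not the actual zerocopy implementation):

```c
/* Simplified sketch of the transient GUP pattern: pin user pages,
 * read the data out, and drop the references immediately, so pages
 * are not held across THP collapse or NUMA migration. Not the actual
 * zerocopy implementation; the function name is invented. */
static int read_user_range_transient(unsigned long uaddr,
				     struct page **pages, int npages)
{
	int i, npinned;

	npinned = get_user_pages_fast(uaddr, npages, 0 /* read-only */, pages);
	if (npinned < 0)
		return npinned;

	/* ... kmap() and copy the payload out of the pinned pages ... */

	for (i = 0; i < npinned; i++)
		put_page(pages[i]);

	return npinned == npages ? 0 : -EFAULT;
}
```

The contrast with the patch under discussion is the pin lifetime: here the references are dropped on every operation, while the vmap approach holds them until SET_FEATURES or device teardown.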

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-25 Thread Michael S. Tsirkin
Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-25 Thread Jason Wang



On 2018/12/25 上午2:10, Michael S. Tsirkin wrote:

On Mon, Dec 24, 2018 at 03:53:16PM +0800, Jason Wang wrote:

On 2018/12/14 下午8:36, Michael S. Tsirkin wrote:

On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote:

On 2018/12/13 下午11:44, Michael S. Tsirkin wrote:

On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:

It was noticed that the copy_user() friends that was used to access
virtqueue metdata tends to be very expensive for dataplane
implementation like vhost since it involves lots of software check,
speculation barrier, hardware feature toggling (e.g SMAP). The
extra cost will be more obvious when transferring small packets.

This patch tries to eliminate those overhead by pin vq metadata pages
and access them through vmap(). During SET_VRING_ADDR, we will setup
those mappings and memory accessors are modified to use pointers to
access the metadata directly.

Note, this was only done when device IOTLB is not enabled. We could
use similar method to optimize it in the future.

Tests shows about ~24% improvement on TX PPS when using virtio-user +
vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):

Before: ~5.0Mpps
After:  ~6.1Mpps

Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 178 ++
drivers/vhost/vhost.h |  11 +++
2 files changed, 189 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index bafe39d2e637..1bd24203afb6 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
vq->indirect = NULL;
vq->heads = NULL;
vq->dev = dev;
+   memset(>avail_ring, 0, sizeof(vq->avail_ring));
+   memset(>used_ring, 0, sizeof(vq->used_ring));
+   memset(>desc_ring, 0, sizeof(vq->desc_ring));
mutex_init(>mutex);
vhost_vq_reset(dev, vq);
if (vq->handle_kick)
@@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(>iotlb_lock);
}
+static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
+  size_t size, int write)
+{
+   struct page **pages;
+   int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+   int npinned;
+   void *vaddr;
+
+   pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return -ENOMEM;
+
+   npinned = get_user_pages_fast(uaddr, npages, write, pages);
+   if (npinned != npages)
+   goto err;
+

As I said I have doubts about the whole approach, but this
implementation in particular isn't a good idea
as it keeps the page around forever.


The pages wil be released during set features.



So no THP, no NUMA rebalancing,


For THP, we will probably miss 2 or 4 pages, but does this really matter
consider the gain we have?

We as in vhost? networking isn't the only thing guest does.
We don't even know if this guest does a lot of networking.
You don't
know what else is in this huge page. Can be something very important
that guest touches all the time.



Well, the probability should be very small considering we usually give 
several gigabytes to the guest. The rest of the pages that don't sit in 
the same hugepage as the metadata can still be merged by THP.  Anyway, I 
can test the difference.






For NUMA rebalancing, I'm not even sure
it helps for the IPC (vhost) case. It looks to me like, in the worst case, it
may cause pages to thrash between nodes if vhost and userspace are running
on two different nodes.


So again it's a gain for vhost but has a completely unpredictable effect on
other functionality of the guest.

That's what bothers me with this approach.



So:

- The rest of the pages could still be balanced to other nodes, no?

- trying to balance the metadata pages (which belong to cooperating processes) 
is itself still questionable









This is the price of all GUP users not only vhost itself.

Yes. GUP is just not a great interface for vhost to use.


The zerocopy code (enabled by default) has used them for years.

But only for TX and temporarily. We pin, read, unpin.



Probably not, for several reasons: the page may not be released 
soon, and may be held for a very long period of time or even forever.





Your patch is different

- it writes into memory and GUP has known issues with file
   backed memory



The ordinary use case for vhost is anonymous pages, I think?



- it keeps pages pinned forever




What's more
important, the goal is not to be left too far behind other backends
like DPDK or AF_XDP (all of which are using GUP).

So these guys assume userspace knows what it's doing.
We can't assume that.


What kind of assumptions do they have?



userspace-controlled
amount of memory locked up and not accounted for.

It's pretty easy to add this since the slow path is still kept. If we
exceed the limit, we can switch back to the slow path.

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-24 Thread Michael S. Tsirkin
On Mon, Dec 24, 2018 at 03:53:16PM +0800, Jason Wang wrote:
> 
> On 2018/12/14 下午8:36, Michael S. Tsirkin wrote:
> > On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote:
> > > On 2018/12/13 下午11:44, Michael S. Tsirkin wrote:
> > > > On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:
> > > > > It was noticed that the copy_user() friends that were used to access
> > > > > virtqueue metadata tend to be very expensive for a dataplane
> > > > > implementation like vhost, since they involve lots of software checks,
> > > > > speculation barriers, and hardware feature toggling (e.g. SMAP). The
> > > > > extra cost will be more obvious when transferring small packets.
> > > > > 
> > > > > This patch tries to eliminate that overhead by pinning the vq metadata
> > > > > pages and accessing them through vmap(). During SET_VRING_ADDR, we set
> > > > > up those mappings, and the memory accessors are modified to use
> > > > > pointers to access the metadata directly.
> > > > > 
> > > > > Note, this was only done when the device IOTLB is not enabled. We could
> > > > > use a similar method to optimize that case in the future.
> > > > > 
> > > > > Tests show a ~24% improvement in TX PPS when using virtio-user +
> > > > > vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):
> > > > > 
> > > > > Before: ~5.0Mpps
> > > > > After:  ~6.1Mpps
> > > > > 
> > > > > Signed-off-by: Jason Wang
> > > > > ---
> > > > >drivers/vhost/vhost.c | 178 
> > > > > ++
> > > > >drivers/vhost/vhost.h |  11 +++
> > > > >2 files changed, 189 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > > > index bafe39d2e637..1bd24203afb6 100644
> > > > > --- a/drivers/vhost/vhost.c
> > > > > +++ b/drivers/vhost/vhost.c
> > > > > @@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
> > > > >   vq->indirect = NULL;
> > > > >   vq->heads = NULL;
> > > > >   vq->dev = dev;
> > > > > + memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
> > > > > + memset(&vq->used_ring, 0, sizeof(vq->used_ring));
> > > > > + memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
> > > > >   mutex_init(&vq->mutex);
> > > > >   vhost_vq_reset(dev, vq);
> > > > >   if (vq->handle_kick)
> > > > > @@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev 
> > > > > *dev)
> > > > >   spin_unlock(&dev->iotlb_lock);
> > > > >}
> > > > > +static int vhost_init_vmap(struct vhost_vmap *map, unsigned long 
> > > > > uaddr,
> > > > > +size_t size, int write)
> > > > > +{
> > > > > + struct page **pages;
> > > > > + int npages = DIV_ROUND_UP(size, PAGE_SIZE);
> > > > > + int npinned;
> > > > > + void *vaddr;
> > > > > +
> > > > > + pages = kmalloc_array(npages, sizeof(struct page *), 
> > > > > GFP_KERNEL);
> > > > > + if (!pages)
> > > > > + return -ENOMEM;
> > > > > +
> > > > > + npinned = get_user_pages_fast(uaddr, npages, write, pages);
> > > > > + if (npinned != npages)
> > > > > + goto err;
> > > > > +
> > > > As I said I have doubts about the whole approach, but this
> > > > implementation in particular isn't a good idea
> > > > as it keeps the page around forever.
> 
> 
> The pages will be released during set features.
> 
> 
> > > > So no THP, no NUMA rebalancing,
> 
> 
> For THP, we will probably miss 2 or 4 pages, but does this really matter
> considering the gain we have?

We as in vhost? networking isn't the only thing guest does.
We don't even know if this guest does a lot of networking.
You don't
know what else is in this huge page. Can be something very important
that guest touches all the time.

> For NUMA rebalancing, I'm not even sure
> it helps for the IPC (vhost) case. It looks to me like, in the worst case, it
> may cause pages to thrash between nodes if vhost and userspace are running
> on two different nodes.


So again it's a gain for vhost but has a completely unpredictable effect on
other functionality of the guest.

That's what bothers me with this approach.




> 
> > > 
> > > This is the price of all GUP users not only vhost itself.
> > Yes. GUP is just not a great interface for vhost to use.
> 
> 
> The zerocopy code (enabled by default) has used them for years.

But only for TX and temporarily. We pin, read, unpin.

Your patch is different

- it writes into memory and GUP has known issues with file
  backed memory
- it keeps pages pinned forever



> 
> > 
> > > What's more
> > > important, the goal is not to be left too much behind for other backends
> > > like DPDK or AF_XDP (all of which are using GUP).
> > 
> > So these guys assume userspace knows what it's doing.
> > We can't assume that.
> 
> 
> What kind of assumptions do they have?
> 
> 
> > 
> > > > userspace-controlled
> > > > amount of memory locked up and not accounted for.
> > > 
> > > It's pretty easy to add this since the slow path is still kept.

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-23 Thread Jason Wang



On 2018/12/14 下午8:36, Michael S. Tsirkin wrote:

On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote:

On 2018/12/13 下午11:44, Michael S. Tsirkin wrote:

On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:

It was noticed that the copy_user() friends that were used to access
virtqueue metadata tend to be very expensive for a dataplane
implementation like vhost, since they involve lots of software checks,
speculation barriers, and hardware feature toggling (e.g. SMAP). The
extra cost will be more obvious when transferring small packets.

This patch tries to eliminate that overhead by pinning the vq metadata
pages and accessing them through vmap(). During SET_VRING_ADDR, we set
up those mappings, and the memory accessors are modified to use
pointers to access the metadata directly.

Note, this was only done when the device IOTLB is not enabled. We could
use a similar method to optimize that case in the future.

Tests show a ~24% improvement in TX PPS when using virtio-user +
vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):

Before: ~5.0Mpps
After:  ~6.1Mpps

Signed-off-by: Jason Wang
---
   drivers/vhost/vhost.c | 178 ++
   drivers/vhost/vhost.h |  11 +++
   2 files changed, 189 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index bafe39d2e637..1bd24203afb6 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
vq->indirect = NULL;
vq->heads = NULL;
vq->dev = dev;
+   memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
+   memset(&vq->used_ring, 0, sizeof(vq->used_ring));
+   memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
mutex_init(&vq->mutex);
vhost_vq_reset(dev, vq);
if (vq->handle_kick)
@@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(&dev->iotlb_lock);
   }
+static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
+  size_t size, int write)
+{
+   struct page **pages;
+   int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+   int npinned;
+   void *vaddr;
+
+   pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return -ENOMEM;
+
+   npinned = get_user_pages_fast(uaddr, npages, write, pages);
+   if (npinned != npages)
+   goto err;
+

As I said I have doubts about the whole approach, but this
implementation in particular isn't a good idea
as it keeps the page around forever.



The pages will be released during set features.



So no THP, no NUMA rebalancing,



For THP, we will probably miss 2 or 4 pages, but does this really matter 
considering the gain we have? For NUMA rebalancing, I'm not even sure 
it helps for the IPC (vhost) case. It looks to me like, in the worst 
case, it may cause pages to thrash between nodes if vhost and userspace 
are running on two different nodes.





This is the price of all GUP users not only vhost itself.

Yes. GUP is just not a great interface for vhost to use.



The zerocopy code (enabled by default) has used them for years.





What's more
important, the goal is not to be left too much behind for other backends
like DPDK or AF_XDP (all of which are using GUP).


So these guys assume userspace knows what it's doing.
We can't assume that.



What kind of assumptions do they have?





userspace-controlled
amount of memory locked up and not accounted for.


It's pretty easy to add this since the slow path is still kept. If we
exceed the limit, we can switch back to the slow path.


Don't get me wrong it's a great patch in an ideal world.
But then in an ideal world no barriers smap etc are necessary at all.


Again, this is only for metadata access, not the data path, which has been
used for years in real use cases.

SMAP makes sense for addresses the kernel cannot forecast. But
that's not the case for the vhost metadata, since we know the addresses will
be accessed very frequently. As for the speculation barrier, it helps nothing
for the data path of vhost, which is a kthread.

I don't see how a kthread makes any difference. We do have a validation
step which makes some difference.



The problem is not the kthread but the userspace addresses themselves. The 
addresses of the vq metadata tend to stay consistent for a while, and vhost 
knows they will be accessed frequently. SMAP doesn't help much in this case.


Thanks.





Packet or AF_XDP benefit from
accessing metadata directly; we should do it as well.

Thanks


Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-15 Thread David Miller
From: Jason Wang 
Date: Fri, 14 Dec 2018 11:57:35 +0800

> This is the price of all GUP users not only vhost itself. What's more
> important, the goal is not to be left too much behind for other
> backends like DPDK or AF_XDP (all of which are using GUP).

+1


Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-14 Thread kbuild test robot
Hi Jason,

I love your patch! Yet something to improve:

[auto build test ERROR on net-next/master]

url:
https://github.com/0day-ci/linux/commits/Jason-Wang/vhost-accelerate-metadata-access-through-vmap/20181214-200417
config: mips-malta_kvm_defconfig (attached as .config)
compiler: mipsel-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.2.0 make.cross ARCH=mips 

All errors (new ones prefixed by >>):

   drivers//vhost/vhost.c: In function 'vhost_init_vmap':
>> drivers//vhost/vhost.c:648:3: error: implicit declaration of function 
>> 'release_pages'; did you mean 'release_task'? 
>> [-Werror=implicit-function-declaration]
  release_pages(pages, npinned);
  ^
  release_task
   cc1: some warnings being treated as errors

vim +648 drivers//vhost/vhost.c

   619  
   620  static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
   621 size_t size, int write)
   622  {
   623  struct page **pages;
   624  int npages = DIV_ROUND_UP(size, PAGE_SIZE);
   625  int npinned;
   626  void *vaddr;
   627  
   628  pages = kmalloc_array(npages, sizeof(struct page *), 
GFP_KERNEL);
   629  if (!pages)
   630  return -ENOMEM;
   631  
   632  npinned = get_user_pages_fast(uaddr, npages, write, pages);
   633  if (npinned != npages)
   634  goto err;
   635  
   636  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
   637  if (!vaddr)
   638  goto err;
   639  
   640  map->pages = pages;
   641  map->addr = vaddr + (uaddr & (PAGE_SIZE - 1));
   642  map->npages = npages;
   643  
   644  return 0;
   645  
   646  err:
   647  if (npinned > 0)
 > 648  release_pages(pages, npinned);
   649  kfree(pages);
   650  return -EFAULT;
   651  }
   652  

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-14 Thread Michael S. Tsirkin
On Fri, Dec 14, 2018 at 11:57:35AM +0800, Jason Wang wrote:
> 
> On 2018/12/13 下午11:44, Michael S. Tsirkin wrote:
> > On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:
> > > It was noticed that the copy_user() friends that were used to access
> > > virtqueue metadata tend to be very expensive for a dataplane
> > > implementation like vhost, since they involve lots of software checks,
> > > speculation barriers, and hardware feature toggling (e.g. SMAP). The
> > > extra cost will be more obvious when transferring small packets.
> > > 
> > > This patch tries to eliminate that overhead by pinning the vq metadata
> > > pages and accessing them through vmap(). During SET_VRING_ADDR, we set
> > > up those mappings, and the memory accessors are modified to use
> > > pointers to access the metadata directly.
> > > 
> > > Note, this was only done when the device IOTLB is not enabled. We could
> > > use a similar method to optimize that case in the future.
> > > 
> > > Tests show a ~24% improvement in TX PPS when using virtio-user +
> > > vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):
> > > 
> > > Before: ~5.0Mpps
> > > After:  ~6.1Mpps
> > > 
> > > Signed-off-by: Jason Wang
> > > ---
> > >   drivers/vhost/vhost.c | 178 ++
> > >   drivers/vhost/vhost.h |  11 +++
> > >   2 files changed, 189 insertions(+)
> > > 
> > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > index bafe39d2e637..1bd24203afb6 100644
> > > --- a/drivers/vhost/vhost.c
> > > +++ b/drivers/vhost/vhost.c
> > > @@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
> > >   vq->indirect = NULL;
> > >   vq->heads = NULL;
> > >   vq->dev = dev;
> > > + memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
> > > + memset(&vq->used_ring, 0, sizeof(vq->used_ring));
> > > + memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
> > >   mutex_init(&vq->mutex);
> > >   vhost_vq_reset(dev, vq);
> > >   if (vq->handle_kick)
> > > @@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
> > >   spin_unlock(&dev->iotlb_lock);
> > >   }
> > > +static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
> > > +size_t size, int write)
> > > +{
> > > + struct page **pages;
> > > + int npages = DIV_ROUND_UP(size, PAGE_SIZE);
> > > + int npinned;
> > > + void *vaddr;
> > > +
> > > + pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> > > + if (!pages)
> > > + return -ENOMEM;
> > > +
> > > + npinned = get_user_pages_fast(uaddr, npages, write, pages);
> > > + if (npinned != npages)
> > > + goto err;
> > > +
> > As I said I have doubts about the whole approach, but this
> > implementation in particular isn't a good idea
> > as it keeps the page around forever.
> > So no THP, no NUMA rebalancing,
> 
> 
> This is the price of all GUP users not only vhost itself.

Yes. GUP is just not a great interface for vhost to use.

> What's more
> important, the goal is not to be left too much behind for other backends
> like DPDK or AF_XDP (all of which are using GUP).


So these guys assume userspace knows what it's doing.
We can't assume that.

> 
> > userspace-controlled
> > amount of memory locked up and not accounted for.
> 
> 
> It's pretty easy to add this since the slow path is still kept. If we
> exceed the limit, we can switch back to the slow path.
> 
> > 
> > Don't get me wrong it's a great patch in an ideal world.
> > But then in an ideal world no barriers smap etc are necessary at all.
> 
> 
> Again, this is only for metadata access, not the data path, which has been
> used for years in real use cases.
> 
> SMAP makes sense for addresses the kernel cannot forecast. But
> that's not the case for the vhost metadata, since we know the addresses will
> be accessed very frequently. As for the speculation barrier, it helps nothing
> for the data path of vhost, which is a kthread.

I don't see how a kthread makes any difference. We do have a validation
step which makes some difference.

> Packet or AF_XDP benefit from
> accessing metadata directly; we should do it as well.
> 
> Thanks


Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-13 Thread Jason Wang



On 2018/12/13 下午11:44, Michael S. Tsirkin wrote:

On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:

It was noticed that the copy_user() friends that were used to access
virtqueue metadata tend to be very expensive for a dataplane
implementation like vhost, since they involve lots of software checks,
speculation barriers, and hardware feature toggling (e.g. SMAP). The
extra cost will be more obvious when transferring small packets.

This patch tries to eliminate that overhead by pinning the vq metadata
pages and accessing them through vmap(). During SET_VRING_ADDR, we set
up those mappings, and the memory accessors are modified to use
pointers to access the metadata directly.

Note, this was only done when the device IOTLB is not enabled. We could
use a similar method to optimize that case in the future.

Tests show a ~24% improvement in TX PPS when using virtio-user +
vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):

Before: ~5.0Mpps
After:  ~6.1Mpps

Signed-off-by: Jason Wang
---
  drivers/vhost/vhost.c | 178 ++
  drivers/vhost/vhost.h |  11 +++
  2 files changed, 189 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index bafe39d2e637..1bd24203afb6 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
vq->indirect = NULL;
vq->heads = NULL;
vq->dev = dev;
+   memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
+   memset(&vq->used_ring, 0, sizeof(vq->used_ring));
+   memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
mutex_init(&vq->mutex);
vhost_vq_reset(dev, vq);
if (vq->handle_kick)
@@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(&dev->iotlb_lock);
  }
  
+static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,

+  size_t size, int write)
+{
+   struct page **pages;
+   int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+   int npinned;
+   void *vaddr;
+
+   pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return -ENOMEM;
+
+   npinned = get_user_pages_fast(uaddr, npages, write, pages);
+   if (npinned != npages)
+   goto err;
+

As I said I have doubts about the whole approach, but this
implementation in particular isn't a good idea
as it keeps the page around forever.
So no THP, no NUMA rebalancing,



This is the price for all GUP users, not only vhost itself. What's more 
important, the goal is not to be left too far behind other backends 
like DPDK or AF_XDP (all of which are using GUP).




userspace-controlled
amount of memory locked up and not accounted for.



It's pretty easy to add this since the slow path is still kept. If we 
exceed the limit, we can switch back to the slow path.





Don't get me wrong it's a great patch in an ideal world.
But then in an ideal world no barriers smap etc are necessary at all.



Again, this is only for metadata access, not the data path, which has been 
used for years in real use cases.


SMAP makes sense for addresses the kernel cannot forecast. 
But that's not the case for the vhost metadata, since we know the addresses 
will be accessed very frequently. As for the speculation barrier, it helps 
nothing for the data path of vhost, which is a kthread. Packet or AF_XDP 
benefit from accessing metadata directly; we should do it as well.


Thanks



Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-13 Thread Michael S. Tsirkin
On Thu, Dec 13, 2018 at 04:18:40PM -0500, Konrad Rzeszutek Wilk wrote:
> .giant snip..
> > > + npinned = get_user_pages_fast(uaddr, npages, write, pages);
> > > + if (npinned != npages)
> > > + goto err;
> > > +
> > 
> > As I said I have doubts about the whole approach, but this
> > implementation in particular isn't a good idea
> > as it keeps the page around forever.
> > So no THP, no NUMA rebalancing, userspace-controlled
> > amount of memory locked up and not accounted for.
> > 
> > Don't get me wrong it's a great patch in an ideal world.
> > But then in an ideal world no barriers smap etc are necessary at all.
> 
> So .. suggestions on how this could be accepted? As in other ways
> where we still get vmap and the issues you mentioned are not troubling you?
> 
> Thanks!

I'd suggest leave vmap alone and find ways to speed up accesses
that can fault.

-- 
MST


Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-13 Thread Konrad Rzeszutek Wilk
.giant snip..
> > +   npinned = get_user_pages_fast(uaddr, npages, write, pages);
> > +   if (npinned != npages)
> > +   goto err;
> > +
> 
> As I said I have doubts about the whole approach, but this
> implementation in particular isn't a good idea
> as it keeps the page around forever.
> So no THP, no NUMA rebalancing, userspace-controlled
> amount of memory locked up and not accounted for.
> 
> Don't get me wrong it's a great patch in an ideal world.
> But then in an ideal world no barriers smap etc are necessary at all.

So .. suggestions on how this could be accepted? As in other ways
where we still get vmap and the issues you mentioned are not troubling you?

Thanks!


Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-13 Thread Michael S. Tsirkin
On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote:
> It was noticed that the copy_user() friends that were used to access
> virtqueue metadata tend to be very expensive for a dataplane
> implementation like vhost, since they involve lots of software checks,
> speculation barriers, and hardware feature toggling (e.g. SMAP). The
> extra cost will be more obvious when transferring small packets.
> 
> This patch tries to eliminate that overhead by pinning the vq metadata
> pages and accessing them through vmap(). During SET_VRING_ADDR, we set
> up those mappings, and the memory accessors are modified to use
> pointers to access the metadata directly.
> 
> Note, this was only done when the device IOTLB is not enabled. We could
> use a similar method to optimize that case in the future.
> 
> Tests show a ~24% improvement in TX PPS when using virtio-user +
> vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):
> 
> Before: ~5.0Mpps
> After:  ~6.1Mpps
> 
> Signed-off-by: Jason Wang 
> ---
>  drivers/vhost/vhost.c | 178 ++
>  drivers/vhost/vhost.h |  11 +++
>  2 files changed, 189 insertions(+)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index bafe39d2e637..1bd24203afb6 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
>   vq->indirect = NULL;
>   vq->heads = NULL;
>   vq->dev = dev;
> + memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
> + memset(&vq->used_ring, 0, sizeof(vq->used_ring));
> + memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
>   mutex_init(&vq->mutex);
>   vhost_vq_reset(dev, vq);
>   if (vq->handle_kick)
> @@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
>   spin_unlock(&dev->iotlb_lock);
>  }
>  
> +static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
> +size_t size, int write)
> +{
> + struct page **pages;
> + int npages = DIV_ROUND_UP(size, PAGE_SIZE);
> + int npinned;
> + void *vaddr;
> +
> + pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> + if (!pages)
> + return -ENOMEM;
> +
> + npinned = get_user_pages_fast(uaddr, npages, write, pages);
> + if (npinned != npages)
> + goto err;
> +

As I said I have doubts about the whole approach, but this
implementation in particular isn't a good idea
as it keeps the page around forever.
So no THP, no NUMA rebalancing, userspace-controlled
amount of memory locked up and not accounted for.

Don't get me wrong it's a great patch in an ideal world.
But then in an ideal world no barriers smap etc are necessary at all.


> + vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + goto err;
> +
> + map->pages = pages;
> + map->addr = vaddr + (uaddr & (PAGE_SIZE - 1));
> + map->npages = npages;
> +
> + return 0;
> +
> +err:
> + if (npinned > 0)
> + release_pages(pages, npinned);
> + kfree(pages);
> + return -EFAULT;
> +}
> +
> +static void vhost_uninit_vmap(struct vhost_vmap *map)
> +{
> + if (!map->addr)
> + return;
> +
> + vunmap(map->addr);
> + release_pages(map->pages, map->npages);
> + kfree(map->pages);
> +
> + map->addr = NULL;
> + map->pages = NULL;
> + map->npages = 0;
> +}
> +
> +static void vhost_clean_vmaps(struct vhost_virtqueue *vq)
> +{
> + vhost_uninit_vmap(&vq->avail_ring);
> + vhost_uninit_vmap(&vq->desc_ring);
> + vhost_uninit_vmap(&vq->used_ring);
> +}
> +
> +static int vhost_setup_vmaps(struct vhost_virtqueue *vq, unsigned long avail,
> +  unsigned long desc, unsigned long used)
> +{
> + size_t event = vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0;
> + size_t avail_size, desc_size, used_size;
> + int ret;
> +
> + vhost_clean_vmaps(vq);
> +
> + avail_size = sizeof(*vq->avail) +
> +  sizeof(*vq->avail->ring) * vq->num + event;
> + ret = vhost_init_vmap(&vq->avail_ring, avail, avail_size, false);
> + if (ret) {
> + vq_err(vq, "Fail to setup vmap for avail ring!\n");
> + goto err_avail;
> + }
> +
> + desc_size = sizeof(*vq->desc) * vq->num;
> + ret = vhost_init_vmap(&vq->desc_ring, desc, desc_size, false);
> + if (ret) {
> + vq_err(vq, "Fail to setup vmap for desc ring!\n");
> + goto err_desc;
> + }
> +
> + used_size = sizeof(*vq->used) +
> + sizeof(*vq->used->ring) * vq->num + event;
> + ret = vhost_init_vmap(&vq->used_ring, used, used_size, true);
> + if (ret) {
> + vq_err(vq, "Fail to setup vmap for used ring!\n");
> + goto err_used;
> + }
> +
> + return 0;
> +
> +err_used:
> + vhost_uninit_vmap(&vq->used_ring);
> +err_desc:
> + vhost_uninit_vmap(&vq->avail_ring);
> 

[PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-13 Thread Jason Wang
It was noticed that the copy_user() friends that were used to access
virtqueue metadata tend to be very expensive for a dataplane
implementation like vhost, since they involve lots of software checks,
speculation barriers, and hardware feature toggling (e.g. SMAP). The
extra cost will be more obvious when transferring small packets.

This patch tries to eliminate that overhead by pinning the vq metadata
pages and accessing them through vmap(). During SET_VRING_ADDR, we set
up those mappings, and the memory accessors are modified to use
pointers to access the metadata directly.

Note, this was only done when the device IOTLB is not enabled. We could
use a similar method to optimize that case in the future.

Tests show a ~24% improvement in TX PPS when using virtio-user +
vhost_net + xdp1 on TAP (CONFIG_HARDENED_USERCOPY is not enabled):

Before: ~5.0Mpps
After:  ~6.1Mpps

Signed-off-by: Jason Wang 
---
 drivers/vhost/vhost.c | 178 ++
 drivers/vhost/vhost.h |  11 +++
 2 files changed, 189 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index bafe39d2e637..1bd24203afb6 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -443,6 +443,9 @@ void vhost_dev_init(struct vhost_dev *dev,
vq->indirect = NULL;
vq->heads = NULL;
vq->dev = dev;
+   memset(&vq->avail_ring, 0, sizeof(vq->avail_ring));
+   memset(&vq->used_ring, 0, sizeof(vq->used_ring));
+   memset(&vq->desc_ring, 0, sizeof(vq->desc_ring));
mutex_init(&vq->mutex);
vhost_vq_reset(dev, vq);
if (vq->handle_kick)
@@ -614,6 +617,102 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(&dev->iotlb_lock);
 }
 
+static int vhost_init_vmap(struct vhost_vmap *map, unsigned long uaddr,
+  size_t size, int write)
+{
+   struct page **pages;
+   int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+   int npinned;
+   void *vaddr;
+
+   pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return -ENOMEM;
+
+   npinned = get_user_pages_fast(uaddr, npages, write, pages);
+   if (npinned != npages)
+   goto err;
+
+   vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+   if (!vaddr)
+   goto err;
+
+   map->pages = pages;
+   map->addr = vaddr + (uaddr & (PAGE_SIZE - 1));
+   map->npages = npages;
+
+   return 0;
+
+err:
+   if (npinned > 0)
+   release_pages(pages, npinned);
+   kfree(pages);
+   return -EFAULT;
+}
+
+static void vhost_uninit_vmap(struct vhost_vmap *map)
+{
+   if (!map->addr)
+   return;
+
+   vunmap(map->addr);
+   release_pages(map->pages, map->npages);
+   kfree(map->pages);
+
+   map->addr = NULL;
+   map->pages = NULL;
+   map->npages = 0;
+}
+
+static void vhost_clean_vmaps(struct vhost_virtqueue *vq)
+{
+   vhost_uninit_vmap(&vq->avail_ring);
+   vhost_uninit_vmap(&vq->desc_ring);
+   vhost_uninit_vmap(&vq->used_ring);
+}
+
+static int vhost_setup_vmaps(struct vhost_virtqueue *vq, unsigned long avail,
+unsigned long desc, unsigned long used)
+{
+   size_t event = vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0;
+   size_t avail_size, desc_size, used_size;
+   int ret;
+
+   vhost_clean_vmaps(vq);
+
+   avail_size = sizeof(*vq->avail) +
+sizeof(*vq->avail->ring) * vq->num + event;
+   ret = vhost_init_vmap(&vq->avail_ring, avail, avail_size, false);
+   if (ret) {
+   vq_err(vq, "Fail to setup vmap for avail ring!\n");
+   goto err_avail;
+   }
+
+   desc_size = sizeof(*vq->desc) * vq->num;
+   ret = vhost_init_vmap(&vq->desc_ring, desc, desc_size, false);
+   if (ret) {
+   vq_err(vq, "Fail to setup vmap for desc ring!\n");
+   goto err_desc;
+   }
+
+   used_size = sizeof(*vq->used) +
+   sizeof(*vq->used->ring) * vq->num + event;
+   ret = vhost_init_vmap(&vq->used_ring, used, used_size, true);
+   if (ret) {
+   vq_err(vq, "Fail to setup vmap for used ring!\n");
+   goto err_used;
+   }
+
+   return 0;
+
+err_used:
+   vhost_uninit_vmap(&vq->used_ring);
+err_desc:
+   vhost_uninit_vmap(&vq->avail_ring);
+err_avail:
+   return -EFAULT;
+}
+
 void vhost_dev_cleanup(struct vhost_dev *dev)
 {
int i;
@@ -626,6 +725,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
if (dev->vqs[i]->call_ctx)
eventfd_ctx_put(dev->vqs[i]->call_ctx);
vhost_vq_reset(dev, dev->vqs[i]);
+   vhost_clean_vmaps(dev->vqs[i]);
}
vhost_dev_free_iovecs(dev);
if (dev->log_ctx)
@@ -873,6 +973,14 @@ static inline void __user *__vhost_get_user(struct 
vhost_virtqueue *vq,
 
 static inline int