Re: [PATCH] virtio-net: lower min ring num_free for efficiency

2019-08-15 Thread Jason Wang
On 2019/8/15 4:36 PM, 冉 jiang wrote: On 2019/8/15 11:17, Jason Wang wrote: On 2019/8/15 11:11 AM, 冉 jiang wrote: On 2019/8/15 11:01, Jason Wang wrote: On 2019/8/14 10:06 AM, 冉 jiang wrote: This change lowers the ring buffer reclaim threshold from 1/2*queue to budget for better performance.

Re: [PATCH v2] virtio-net: lower min ring num_free for efficiency

2019-08-15 Thread Jason Wang
On 2019/8/15 5:42 PM, 冉 jiang wrote: This change lowers the ring buffer reclaim threshold from 1/2*queue to budget for better performance. According to our test with qemu + dpdk, packet dropping happens when the guest is not able to provide free buffers in the avail ring in time with the default 1/2*queue.

[PATCH V5 5/5] iommu/amd: Convert AMD iommu driver to the dma-iommu api

2019-08-15 Thread Tom Murphy
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Signed-off-by: Tom Murphy --- drivers/iommu/Kconfig | 1 + drivers/iommu/amd_iommu.c | 677 -- 2 files changed, 68

[PATCH v2] virtio-net: lower min ring num_free for efficiency

2019-08-15 Thread 冉 jiang
This change lowers the ring buffer reclaim threshold from 1/2*queue to budget for better performance. According to our test with qemu + dpdk, packet dropping happens when the guest is not able to provide free buffers in the avail ring in time with the default 1/2*queue. The value in the patch has been tested
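
To make the threshold change concrete, here is a minimal sketch of the refill heuristic being described. It is illustrative only, not the posted diff: the helper name refill_if_needed is invented for this sketch, while rq->vq->num_free, try_fill_recv(), vi->refill and virtqueue_get_vring_size() follow the conventions of drivers/net/virtio_net.c, and the snippet assumes kernel context rather than building standalone.

/* Illustrative sketch of the reclaim-threshold change (not the posted
 * diff).  Old behaviour: only refill the receive ring once more than
 * half of it is free.  New behaviour: refill as soon as more free
 * slots than the NAPI budget are available, so the guest posts
 * buffers to the avail ring sooner and the host drops fewer packets.
 */
static void refill_if_needed(struct virtnet_info *vi,
			     struct receive_queue *rq, int budget)
{
	/* old threshold: virtqueue_get_vring_size(rq->vq) / 2 */
	if (rq->vq->num_free > budget) {
		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
			schedule_delayed_work(&vi->refill, 0);
	}
}

The intent is simply to post fresh buffers earlier instead of waiting until half the ring has drained.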

Re: [PATCH] virtio-net: lower min ring num_free for efficiency

2019-08-15 Thread 冉 jiang
On 2019/8/15 17:25, Jason Wang wrote: > > On 2019/8/15 4:36 PM, 冉 jiang wrote: >> On 2019/8/15 11:17, Jason Wang wrote: >>> On 2019/8/15 11:11 AM, 冉 jiang wrote: On 2019/8/15 11:01, Jason Wang wrote: > On 2019/8/14 10:06 AM, 冉 jiang wrote: >> This change lowers the ring buffer reclaim

Re: [PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api

2019-08-15 Thread Tom Murphy
Done, I just sent it there. I don't have any AMD hardware to test on while I'm traveling; however, the rebase was very straightforward and the code was tested a month ago on the old linux-next. I only have the AMD conversion done. I will work on rebasing the Intel one when I get a chance. On Tue,

[PATCH V5 2/5] iommu: Add gfp parameter to iommu_ops::map

2019-08-15 Thread Tom Murphy
Add a gfp_t parameter to the iommu_ops::map function. Remove the needless locking in the AMD iommu driver. The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread:
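
As a rough illustration of the interface change being described (the full diff is truncated in this preview, so treat this as a sketch based on the commit message rather than the exact patch), the map callback gains a gfp_t argument that callers thread through:

/* Sketch: iommu_ops::map takes the allocation flags from its caller
 * instead of hard-coding them, so page-table pages can be allocated
 * with GFP_KERNEL on sleepable paths and GFP_ATOMIC only where the
 * caller genuinely cannot sleep.
 */
struct iommu_ops {
	/* ... */
	int (*map)(struct iommu_domain *domain, unsigned long iova,
		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
	/* ... */
};

Callers that can sleep keep passing GFP_KERNEL; atomic-context callers pass GFP_ATOMIC.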

[PATCH V5 3/5] iommu/dma-iommu: Handle deferred devices

2019-08-15 Thread Tom Murphy
Handle devices which defer their attach to the iommu in the dma-iommu api. Signed-off-by: Tom Murphy --- drivers/iommu/dma-iommu.c | 27 ++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index
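
The preview above only shows the diffstat, so here is a hedged sketch of the idea: the helper name handle_deferred_device is an assumption for illustration, while is_attach_deferred and iommu_attach_device() are existing iommu API; the real change lives in drivers/iommu/dma-iommu.c and assumes kernel context.

/* Sketch (helper name invented): before a dma-iommu operation uses the
 * domain, check whether the driver deferred this device's attach and,
 * if so, perform the attach on first use.
 */
static int handle_deferred_device(struct device *dev,
				  struct iommu_domain *domain)
{
	const struct iommu_ops *ops = domain->ops;

	if (ops && ops->is_attach_deferred &&
	    ops->is_attach_deferred(domain, dev))
		return iommu_attach_device(domain, dev);

	return 0;
}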

[PATCH V5 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api

2019-08-15 Thread Tom Murphy
Convert the AMD iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the AMD iommu driver. Change-log: V5: -Rebase on top of linux-next V4: -Rebase on top of linux-next -Split the removing of the unnecessary locking in the amd iommu driver into a separate

[PATCH V5 1/5] iommu/amd: Remove unnecessary locking from AMD iommu driver

2019-08-15 Thread Tom Murphy
We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap. iommu_map doesn’t lock while mapping and so no two calls should touch the same iova range. The AMD driver already handles the page table page allocations without locks so we can safely remove the locks. Signed-off-by: Tom Murphy
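
For context, a hedged sketch of the shape of this change follows. The names (domain->api_lock, iommu_map_page()) mirror how drivers/iommu/amd_iommu.c has looked, but the exact lines removed are not visible in this preview, so read it as illustration only.

/*
 * Illustration of the change: amd_iommu_map()/amd_iommu_unmap()
 * previously serialized on a per-domain mutex around the page-table
 * update, roughly:
 *
 *	mutex_lock(&domain->api_lock);
 *	ret = iommu_map_page(domain, iova, paddr, page_size, prot, gfp);
 *	mutex_unlock(&domain->api_lock);
 *
 * Since concurrent iommu_map() callers are not expected to touch the
 * same IOVA range, and the AMD page-table code already allocates table
 * pages without holding a lock, the lock/unlock pair can be dropped,
 * leaving just the iommu_map_page() call.
 */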

[PATCH V5 4/5] iommu/dma-iommu: Use the dev->coherent_dma_mask

2019-08-15 Thread Tom Murphy
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api. Signed-off-by: Tom Murphy --- drivers/iommu/dma-iommu.c | 12 +++- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index
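
Again only the diffstat survives in the preview, so the following is a sketch of the described change rather than the patch itself; iommu_dma_alloc_iova() is the internal helper in drivers/iommu/dma-iommu.c and the surrounding allocation context is assumed.

/* Sketch: when backing dma_alloc_coherent()-style allocations, limit
 * the IOVA to the device's coherent_dma_mask (which may be narrower
 * than its streaming DMA mask) instead of using dma_get_mask(dev).
 */
dma_addr_t iova = iommu_dma_alloc_iova(domain, size,
				       dev->coherent_dma_mask, dev);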

Re: [PATCH] virtio-net: lower min ring num_free for efficiency

2019-08-15 Thread 冉 jiang
On 2019/8/15 11:17, Jason Wang wrote: > > On 2019/8/15 11:11 AM, 冉 jiang wrote: >> On 2019/8/15 11:01, Jason Wang wrote: >>> On 2019/8/14 10:06 AM, 冉 jiang wrote: This change lowers the ring buffer reclaim threshold from 1/2*queue to budget for better performance. According to our test

Re: DANGER WILL ROBINSON, DANGER

2019-08-15 Thread Jerome Glisse
On Thu, Aug 15, 2019 at 03:19:29PM -0400, Jerome Glisse wrote: > On Tue, Aug 13, 2019 at 02:01:35PM +0300, Adalbert Lazăr wrote: > > On Fri, 9 Aug 2019 09:24:44 -0700, Matthew Wilcox wrote: > > > On Fri, Aug 09, 2019 at 07:00:26PM +0300, Adalbert Lazăr wrote: > > > > +++

Re: [PATCH V5 0/9] Fixes for vhost metadata acceleration

2019-08-15 Thread Jason Gunthorpe
On Thu, Aug 15, 2019 at 11:26:46AM +0800, Jason Wang wrote: > > On 2019/8/13 下午7:57, Jason Gunthorpe wrote: > > On Tue, Aug 13, 2019 at 04:31:07PM +0800, Jason Wang wrote: > > > > > What kind of issues do you see? Spinlock is to synchronize GUP with MMU > > > notifier in this series. > > A GUP

Re: DANGER WILL ROBINSON, DANGER

2019-08-15 Thread Jerome Glisse
On Tue, Aug 13, 2019 at 02:01:35PM +0300, Adalbert Lazăr wrote: > On Fri, 9 Aug 2019 09:24:44 -0700, Matthew Wilcox wrote: > > On Fri, Aug 09, 2019 at 07:00:26PM +0300, Adalbert Lazăr wrote: > > > +++ b/include/linux/page-flags.h > > > @@ -417,8 +417,10 @@ PAGEFLAG(Idle, idle, PF_ANY) > > > */