Re: [PATCH] iommu/io-pgtable: Abstract iommu_iotlb_gather access
On Tue, Aug 24, 2021 at 04:33:16PM +0100, Robin Murphy wrote:
> > Tested-by: Geert Uytterhoeven
>
> Thanks for confirming!

Sorry for the delay, the new tree containing this fix has been pushed out now.

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Re: [PATCH v11 01/12] iova: Export alloc_iova_fast() and free_iova_fast()
On Wed, Aug 25, 2021 at 6:35 PM John Garry wrote:
>
> On 25/08/2021 10:55, Will Deacon wrote:
> > On Tue, Aug 24, 2021 at 02:08:33PM -0400, Michael S. Tsirkin wrote:
> >> On Wed, Aug 18, 2021 at 08:06:31PM +0800, Xie Yongji wrote:
> >>> Export alloc_iova_fast() and free_iova_fast() so that
> >>> some modules can make use of the per-CPU cache to get
> >>> rid of the rbtree spinlock in alloc_iova() and free_iova()
> >>> during IOVA allocation.
> >>>
> >>> Signed-off-by: Xie Yongji
> >>
> >> This needs an ack from the iommu maintainers. Guys?
> >
> > Looks fine to me:
> >
> > Acked-by: Will Deacon
> >
> > Will
>
> JFYI, there was a preliminary discussion about moving the iova rcache code
> (which the iova fast alloc and free functions are based on) out of the
> iova code and maybe into dma-iommu (being the only user). There was
> other motivation.
> Would it be better to move the code into ./lib as a general library?
> https://lore.kernel.org/linux-iommu/83de3911-145d-77c8-17c1-981e4ff82...@arm.com/
>
> Having more users complicates that...

Do we have some plan for this work? From our test [1], alloc_iova_fast() is
much better than alloc_iova(), so I'd like to use it as much as possible.

[1] https://lore.kernel.org/kvm/cacyct3stexfeg7nrbwpo2j59dpycumzcvm2zcpjave40-ev...@mail.gmail.com/

Thanks,
Yongji
Re: [PATCH v11 01/12] iova: Export alloc_iova_fast() and free_iova_fast()
On 25/08/2021 10:55, Will Deacon wrote:
> On Tue, Aug 24, 2021 at 02:08:33PM -0400, Michael S. Tsirkin wrote:
>> On Wed, Aug 18, 2021 at 08:06:31PM +0800, Xie Yongji wrote:
>>> Export alloc_iova_fast() and free_iova_fast() so that
>>> some modules can make use of the per-CPU cache to get
>>> rid of the rbtree spinlock in alloc_iova() and free_iova()
>>> during IOVA allocation.
>>>
>>> Signed-off-by: Xie Yongji
>>
>> This needs an ack from the iommu maintainers. Guys?
>
> Looks fine to me:
>
> Acked-by: Will Deacon
>
> Will

JFYI, there was a preliminary discussion about moving the iova rcache code
(which the iova fast alloc and free functions are based on) out of the
iova code and maybe into dma-iommu (being the only user). There was
other motivation.

https://lore.kernel.org/linux-iommu/83de3911-145d-77c8-17c1-981e4ff82...@arm.com/

Having more users complicates that...

Thanks,
John
Re: [PATCH v11 01/12] iova: Export alloc_iova_fast() and free_iova_fast()
On Tue, Aug 24, 2021 at 02:08:33PM -0400, Michael S. Tsirkin wrote:
> On Wed, Aug 18, 2021 at 08:06:31PM +0800, Xie Yongji wrote:
> > Export alloc_iova_fast() and free_iova_fast() so that
> > some modules can make use of the per-CPU cache to get
> > rid of the rbtree spinlock in alloc_iova() and free_iova()
> > during IOVA allocation.
> >
> > Signed-off-by: Xie Yongji
>
> This needs an ack from the iommu maintainers. Guys?

Looks fine to me:

Acked-by: Will Deacon

Will
Re: [PATCH] eventfd: Enlarge recursion limit to allow vhost to work
Hi guys,

Is there any comment or update on this patch?

Thanks,
Yongji

On Fri, Jun 18, 2021 at 4:47 PM He Zhe wrote:
>
> commit b5e683d5cab8 ("eventfd: track eventfd_signal() recursion depth")
> introduces a per-CPU counter that tracks the per-CPU recursion depth and
> warns if it is greater than zero, to avoid potential deadlock and stack
> overflow.
>
> However, sometimes different eventfds may be used in parallel.
> Specifically, when heavy network load goes through kvm and vhost, working
> as below, it would trigger the following call trace.
>
> - 100.00%
>    - 66.51%
>         ret_from_fork
>         kthread
>       - vhost_worker
>          - 33.47% handle_tx_kick
>               handle_tx
>               handle_tx_copy
>               vhost_tx_batch.isra.0
>               vhost_add_used_and_signal_n
>               eventfd_signal
>          - 33.05% handle_rx_net
>               handle_rx
>               vhost_add_used_and_signal_n
>               eventfd_signal
>    - 33.49%
>         ioctl
>         entry_SYSCALL_64_after_hwframe
>         do_syscall_64
>         __x64_sys_ioctl
>         ksys_ioctl
>         do_vfs_ioctl
>         kvm_vcpu_ioctl
>         kvm_arch_vcpu_ioctl_run
>         vmx_handle_exit
>         handle_ept_misconfig
>         kvm_io_bus_write
>         __kvm_io_bus_write
>         eventfd_signal
>
> 001: WARNING: CPU: 1 PID: 1503 at fs/eventfd.c:73 eventfd_signal+0x85/0xa0
> snip
> 001: Call Trace:
> 001:  vhost_signal+0x15e/0x1b0 [vhost]
> 001:  vhost_add_used_and_signal_n+0x2b/0x40 [vhost]
> 001:  handle_rx+0xb9/0x900 [vhost_net]
> 001:  handle_rx_net+0x15/0x20 [vhost_net]
> 001:  vhost_worker+0xbe/0x120 [vhost]
> 001:  kthread+0x106/0x140
> 001:  ? log_used.part.0+0x20/0x20 [vhost]
> 001:  ? kthread_park+0x90/0x90
> 001:  ret_from_fork+0x35/0x40
> 001: ---[ end trace 0003 ]---
>
> This patch enlarges the limit to 1, which is the maximum recursion depth
> we have found so far.
>
> The credit of modification for eventfd_signal_count goes to
> Xie Yongji
>
> Signed-off-by: He Zhe
> ---
>  fs/eventfd.c            | 3 ++-
>  include/linux/eventfd.h | 5 ++++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/fs/eventfd.c b/fs/eventfd.c
> index e265b6dd4f34..add6af91cacf 100644
> --- a/fs/eventfd.c
> +++ b/fs/eventfd.c
> @@ -71,7 +71,8 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>  	 * it returns true, the eventfd_signal() call should be deferred to a
>  	 * safe context.
>  	 */
> -	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
> +	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count) >
> +			 EFD_WAKE_COUNT_MAX))
>  		return 0;
>
>  	spin_lock_irqsave(&ctx->wqh.lock, flags);
> diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
> index fa0a524baed0..74be152ebe87 100644
> --- a/include/linux/eventfd.h
> +++ b/include/linux/eventfd.h
> @@ -29,6 +29,9 @@
>  #define EFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
>  #define EFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS | EFD_SEMAPHORE)
>
> +/* This is the maximum recursion depth we have found so far */
> +#define EFD_WAKE_COUNT_MAX 1
> +
>  struct eventfd_ctx;
>  struct file;
>
> @@ -47,7 +50,7 @@ DECLARE_PER_CPU(int, eventfd_wake_count);
>
>  static inline bool eventfd_signal_count(void)
>  {
> -	return this_cpu_read(eventfd_wake_count);
> +	return this_cpu_read(eventfd_wake_count) > EFD_WAKE_COUNT_MAX;
>  }
>
>  #else /* CONFIG_EVENTFD */
> --
> 2.17.1