Re: [PATCH 3/4] iommu/vt-d: Allow IOMMU_DOMAIN_DMA and IOMMU_DOMAIN_IDENTITY to be allocated

2019-03-07 Thread Dmitry Safonov via iommu
On 3/4/19 3:46 PM, James Sewart wrote: > +static inline int domain_is_initialised(struct dmar_domain *domain) > +{ > + return domain->flags & DOMAIN_FLAG_INITIALISED; > +} Maybe check it in intel_iommu_domain_free(), eh? Thanks, Dmitry

[PATCH 5/5] iommu/intel: Rename dmar_fault() => dmar_serve_faults()

2018-01-24 Thread Dmitry Safonov via iommu
Fix the return value and parameters, and give the function a better name. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c| 8 +++- drivers/iommu/intel-iommu.c | 2 +- drivers/iommu/intel_irq_remapping.c | 2 +- include/linux/dmar.h| 2 +- 4 files changed, 6 ins

[PATCH 1/5] iommu/intel: Add __init for dmar_register_bus_notifier()

2018-01-24 Thread Dmitry Safonov via iommu
It's called only from intel_iommu_init(), which is an init function. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index 9a7ffd13c7f0..accf58388bdb 100644 --- a/drivers/iommu/dma

[PATCH 3/5] iommu/intel: Introduce clear_primary_faults() helper

2018-01-24 Thread Dmitry Safonov via iommu
To my mind it's a bit more readable - and I will re-use it in the next patch. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c | 12 ++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index accf58388bdb..33fb4244e438 1006

[PATCH 2/5] iommu/intel: Clean/document fault status flags

2018-01-24 Thread Dmitry Safonov via iommu
So one could decode them without opening the specification. Signed-off-by: Dmitry Safonov --- include/linux/intel-iommu.h | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h index f3274d9f46a2..a4dc9c2875cc 10

[PATCH 4/5] iommu/intel: Handle DMAR faults on workqueue

2018-01-24 Thread Dmitry Safonov via iommu
dmar_fault() reports/handles/cleans DMAR faults in a cycle, one by one. The nuisance is that it's set as an irq handler and runs with interrupts disabled - which works OK if you have only a couple of DMAR faults, but becomes a problem if your intel iommu has plenty of mappings. We have a test that

[PATCH 0/5] iommu/intel: Handle DMAR faults in a wq

2018-01-24 Thread Dmitry Safonov via iommu
A softlockup-panic fix I've met on the kernel test suite. While at it, fix a couple of minor issues. Cc: Alex Williamson Cc: David Woodhouse Cc: Ingo Molnar Cc: Joerg Roedel Cc: Lu Baolu Cc: iommu@lists.linux-foundation.org Dmitry Safonov (5): iommu/intel: Add __init for dmar_register_bus_not

[PATCHv2 0/6] iommu/intel: Handle DMAR faults in a wq

2018-02-12 Thread Dmitry Safonov via iommu
Changes in v2: - Ratelimit printks for dmar faults (patch 6) First version: https://lkml.org/lkml/2018/1/24/364 A softlockup-panic fix I've met on the kernel test suite. While at it, fix a couple of minor issues. Cc: Alex Williamson Cc: David Woodhouse Cc: Ingo Molnar Cc: Joerg Roedel Cc: Lu Ba

[PATCHv2 1/6] iommu/intel: Add __init for dmar_register_bus_notifier()

2018-02-12 Thread Dmitry Safonov via iommu
It's called only from intel_iommu_init(), which is an init function. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index 9a7ffd13c7f0..accf58388bdb 100644 --- a/drivers/iommu/dma

[PATCHv2 4/6] iommu/intel: Handle DMAR faults on workqueue

2018-02-12 Thread Dmitry Safonov via iommu
dmar_fault() reports/handles/cleans DMAR faults in a cycle, one by one. The nuisance is that it's set as an irq handler and runs with interrupts disabled - which works OK if you have only a couple of DMAR faults, but becomes a problem if your intel iommu has plenty of mappings. We have a test that

[PATCHv2 5/6] iommu/intel: Rename dmar_fault() => dmar_serve_faults()

2018-02-12 Thread Dmitry Safonov via iommu
Fix the return value and parameters, and give the function a better name. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c| 8 +++- drivers/iommu/intel-iommu.c | 2 +- drivers/iommu/intel_irq_remapping.c | 2 +- include/linux/dmar.h| 2 +- 4 files changed, 6 ins

[PATCHv2 2/6] iommu/intel: Clean/document fault status flags

2018-02-12 Thread Dmitry Safonov via iommu
So one could decode them without opening the specification. Signed-off-by: Dmitry Safonov --- include/linux/intel-iommu.h | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h index 8dad3dd26eae..ef169d67df92 10

[PATCHv2 3/6] iommu/intel: Introduce clear_primary_faults() helper

2018-02-12 Thread Dmitry Safonov via iommu
To my mind it's a bit more readable - and I will re-use it in the next patch. Signed-off-by: Dmitry Safonov --- drivers/iommu/dmar.c | 12 ++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index accf58388bdb..33fb4244e438 1006

[PATCHv2 6/6] iommu/intel: Ratelimit each dmar fault printing

2018-02-12 Thread Dmitry Safonov via iommu
There is a ratelimit for printing, but it's incremented each time the cpu receives a dmar fault interrupt, while one interrupt may signal *many* faults. And dmar fault work delayed to a wq might receive even more faults to clean than it did earlier. Ratelimit each fault printing rather than each

Re: [PATCHv2 4/6] iommu/intel: Handle DMAR faults on workqueue

2018-02-13 Thread Dmitry Safonov via iommu
On Tue, 2018-02-13 at 17:35 +0100, Joerg Roedel wrote: > On Mon, Feb 12, 2018 at 04:48:23PM +, Dmitry Safonov wrote: > > dmar_fault() reports/handles/cleans DMAR faults in a cycle one-by- > > one. > > The nuisance is that it's set as a irq handler and runs with > > disabled > > interrupts - whi

Re: [PATCHv2 4/6] iommu/intel: Handle DMAR faults on workqueue

2018-02-15 Thread Dmitry Safonov via iommu
On Tue, 2018-02-13 at 17:38 +, Dmitry Safonov wrote: > On Tue, 2018-02-13 at 17:35 +0100, Joerg Roedel wrote: > > On Mon, Feb 12, 2018 at 04:48:23PM +, Dmitry Safonov wrote: > > > dmar_fault() reports/handles/cleans DMAR faults in a cycle one- > > > by- > > > one. > > > The nuisance is that

[PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-02-15 Thread Dmitry Safonov via iommu
There is a ratelimit for printing, but it's incremented each time the cpu receives a dmar fault interrupt, while one interrupt may signal *many* faults. So, measuring the impact it turns out that reading/clearing one fault takes < 1 usec, and printing info about the fault takes ~170 msec. Havin

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-05 Thread Dmitry Safonov via iommu
Hi Joerg, What do you think about v3? It looks like I can solve my softlockups with just a bit more proper ratelimiting.. On Thu, 2018-02-15 at 19:17 +, Dmitry Safonov wrote: > There is a ratelimit for printing, but it's incremented each time the > cpu recives dmar fault interrupt. While one

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-13 Thread Dmitry Safonov via iommu
Gentle ping? On Mon, 2018-03-05 at 15:00 +, Dmitry Safonov wrote: > Hi Joerg, > > What do you think about v3? > It looks like, I can solve my softlookups with just a bit more proper > ratelimiting.. > > On Thu, 2018-02-15 at 19:17 +, Dmitry Safonov wrote: > > There is a ratelimit for pri

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-15 Thread Dmitry Safonov via iommu
On Thu, 2018-03-15 at 14:46 +0100, Joerg Roedel wrote: > On Thu, Feb 15, 2018 at 07:17:29PM +, Dmitry Safonov wrote: > > diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c > > index accf58388bdb..6c4ea32ee6a9 100644 > > --- a/drivers/iommu/dmar.c > > +++ b/drivers/iommu/dmar.c > > @@ -161

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-15 Thread Dmitry Safonov via iommu
On Thu, 2018-03-15 at 15:22 +0100, Joerg Roedel wrote: > On Thu, Mar 15, 2018 at 02:13:03PM +, Dmitry Safonov wrote: > > So, you suggest to remove ratelimit at all? > > Do we really need printk flood for each happened fault? > > Imagine, you've hundreds of mappings and then PCI link flapped.. >

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-15 Thread Dmitry Safonov via iommu
On Thu, 2018-03-15 at 14:34 +, Dmitry Safonov wrote: > On Thu, 2018-03-15 at 15:22 +0100, Joerg Roedel wrote: > > On Thu, Mar 15, 2018 at 02:13:03PM +, Dmitry Safonov wrote: > > > So, you suggest to remove ratelimit at all? > > > Do we really need printk flood for each happened fault? > > >

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-15 Thread Dmitry Safonov via iommu
On Thu, 2018-03-15 at 16:28 +0100, Joerg Roedel wrote: > On Thu, Mar 15, 2018 at 02:42:00PM +, Dmitry Safonov wrote: > > But even with loop-limit we will need ratelimit each printk() > > *also*. > > Otherwise loop-limit will be based on time spent printing, not on > > anything else.. > > The pa

Re: [PATCHv3] iommu/intel: Ratelimit each dmar fault printing

2018-03-20 Thread Dmitry Safonov via iommu
On Thu, 2018-03-15 at 16:28 +0100, Joerg Roedel wrote: > On Thu, Mar 15, 2018 at 02:42:00PM +, Dmitry Safonov wrote: > > But even with loop-limit we will need ratelimit each printk() > > *also*. > > Otherwise loop-limit will be based on time spent printing, not on > > anything else.. > > The pa

[PATCHv4 1/2] iommu/vt-d: Ratelimit each dmar fault printing

2018-03-30 Thread Dmitry Safonov via iommu
There is a ratelimit for printing, but it's incremented each time the cpu receives a dmar fault interrupt, while one interrupt may signal *many* faults. So, measuring the impact it turns out that reading/clearing one fault takes < 1 usec, and printing info about the fault takes ~170 msec. Havin

[PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq handler

2018-03-30 Thread Dmitry Safonov via iommu
Theoretically, on some machines faults might be generated faster than they're cleared by the CPU. Let's limit the cleaning-loop by the number of hw fault registers. Cc: Alex Williamson Cc: David Woodhouse Cc: Ingo Molnar Cc: Joerg Roedel Cc: Lu Baolu Cc: iommu@lists.linux-foundation.org Signed-off-by

Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue

2019-07-23 Thread Dmitry Safonov via iommu
On Tue 23 Jul 2019, 9:17 a.m. Joerg Roedel, wrote: > On Tue, Jul 16, 2019 at 10:38:05PM +0100, Dmitry Safonov wrote: > > > @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct > iova_domain *iovad, > > { > > } > > > > +bool has_iova_flush_queue(struct iova_domain *iovad) > > +{ > > +

Re: Patch "iommu/vt-d: Don't queue_iova() if there is no flush queue" has been added to the 5.2-stable tree

2019-07-29 Thread Dmitry Safonov via iommu
Hi Greg, On 7/29/19 5:30 PM, gre...@linuxfoundation.org wrote: > > This is a note to let you know that I've just added the patch titled > > iommu/vt-d: Don't queue_iova() if there is no flush queue > > to the 5.2-stable tree which can be found at: > > http://www.kernel.org/git/?p=linux

[PATCH-4.19-stable 0/2] iommu/vt-d: queue_iova() boot crash backport

2019-07-31 Thread Dmitry Safonov via iommu
Backport commits from master that fix boot failure on some intel machines. Cc: David Woodhouse Cc: Joerg Roedel Cc: Joerg Roedel Cc: Lu Baolu Dmitry Safonov (1): iommu/vt-d: Don't queue_iova() if there is no flush queue Joerg Roedel (1): iommu/iova: Fix compilation error with !CONFIG_IOM

[PATCH-4.14-stable 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue

2019-07-31 Thread Dmitry Safonov via iommu
[ Upstream commit effa467870c7612012885df4e246bdb8ffd8e44c ] The Intel VT-d driver was reworked to use the common deferred flushing implementation. Previously there was one global per-cpu flush queue; afterwards, one per domain. Before deferring a flush, the queue should be allocated and initialized. C

[PATCH-4.19-stable 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue

2019-07-31 Thread Dmitry Safonov via iommu
[ Upstream commit effa467870c7612012885df4e246bdb8ffd8e44c ] The Intel VT-d driver was reworked to use the common deferred flushing implementation. Previously there was one global per-cpu flush queue; afterwards, one per domain. Before deferring a flush, the queue should be allocated and initialized. C

[PATCH-4.14-stable 0/2] iommu/vt-d: queue_iova() boot crash backport

2019-07-31 Thread Dmitry Safonov via iommu
Backport commits from master that fix boot failure on some intel machines. I have only boot tested this in a VM. Functional testing for v4.14 is out of my scope as patches differ only on a trivial conflict from v4.19, where I discovered/debugged the issue. While testing v4.14 stable on affected no

Re: [PATCH 4.19 17/32] iommu/vt-d: Dont queue_iova() if there is no flush queue

2019-08-06 Thread Dmitry Safonov via iommu
Hi Pavel, On 8/3/19 10:34 PM, Pavel Machek wrote: > Hi! > >> --- a/drivers/iommu/intel-iommu.c >> +++ b/drivers/iommu/intel-iommu.c >> @@ -3721,7 +3721,7 @@ static void intel_unmap(struct device *d >> >> freelist = domain_unmap(domain, start_pfn, last_pfn); >> >> -if (intel_iommu_str

Re: [PATCHv4 1/2] iommu/vt-d: Ratelimit each dmar fault printing

2018-05-01 Thread Dmitry Safonov via iommu
Hi Joerg, is there anything I may do about those two patches? In 2/2 I've limited the loop count as discussed in v3. This one solves a softlockup for us, so it might be useful. On Sat, 2018-03-31 at 01:33 +0100, Dmitry Safonov wrote: > There is a ratelimit for printing, but it's incremented each time the > cpu

Re: [PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq handler

2018-05-02 Thread Dmitry Safonov via iommu
Hi Lu, On Wed, 2018-05-02 at 14:34 +0800, Lu Baolu wrote: > Hi, > > On 03/31/2018 08:33 AM, Dmitry Safonov wrote: > > Theoretically, on some machines faults might be generated faster > > than > > they're cleared by CPU. > > Is this a real case? No. 1/2 is a real case and this one was discussed

Re: [PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq handler

2018-05-02 Thread Dmitry Safonov via iommu
On Thu, 2018-05-03 at 07:49 +0800, Lu Baolu wrote: > Hi, > > On 05/02/2018 08:38 PM, Dmitry Safonov wrote: > > Hi Lu, > > > > On Wed, 2018-05-02 at 14:34 +0800, Lu Baolu wrote: > > > Hi, > > > > > > On 03/31/2018 08:33 AM, Dmitry Safonov wrote: > > > > Theoretically, on some machines faults migh

Re: [PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq handler

2018-05-02 Thread Dmitry Safonov via iommu
On Thu, 2018-05-03 at 09:32 +0800, Lu Baolu wrote: > Hi, > > On 05/03/2018 08:52 AM, Dmitry Safonov wrote: > > AFAICS, we're doing fault-clearing in a loop inside irq handler. > > That means that while we're clearing if a fault raises, it'll make > > an irq level triggered (or on edge) on lapic. S

Re: [PATCHv4 2/2] iommu/vt-d: Limit number of faults to clear in irq handler

2018-05-02 Thread Dmitry Safonov via iommu
On Thu, 2018-05-03 at 10:16 +0800, Lu Baolu wrote: > Hi, > > On 05/03/2018 09:59 AM, Dmitry Safonov wrote: > > On Thu, 2018-05-03 at 09:32 +0800, Lu Baolu wrote: > > > Hi, > > > > > > On 05/03/2018 08:52 AM, Dmitry Safonov wrote: > > > > AFAICS, we're doing fault-clearing in a loop inside irq > >

Re: [PATCHv4 1/2] iommu/vt-d: Ratelimit each dmar fault printing

2018-05-03 Thread Dmitry Safonov via iommu
On Thu, 2018-05-03 at 14:40 +0200, Joerg Roedel wrote: > On Wed, May 02, 2018 at 03:22:24AM +0100, Dmitry Safonov wrote: > > Hi Joerg, > > > > is there anything I may do about those two patches? > > In 2/2 I've limited loop cnt as discussed in v3. > > This one solves softlockup for us, might be us

[RFC 0/3] iommu/iova: Unsafe locking in find_iova()

2018-06-21 Thread Dmitry Safonov via iommu
find_iova() looks to be using a bad locking practice: it locks the returned iova only for the search time. And looking at the code, the element can be removed from the tree and freed under the rbtree lock. That happens during memory hot-unplug and cleanup on module removal. Here I clean up users of the func

[RFC 1/3] iommu/iova: Find and split iova under rbtree's lock

2018-06-21 Thread Dmitry Safonov via iommu
find_iova() holds iova_rbtree_lock only while traversing the rbtree. After the lock is released, the returned iova may be freed (e.g., during a module's release). Hold the spinlock during search and removal of the iova from the rbtree, eliminating a possible use-after-free and/or double-free of the iova. Cc: Dav

[RFC 2/3] iommu/iova: Make free_iova() atomic

2018-06-21 Thread Dmitry Safonov via iommu
find_iova() grabs the rbtree's spinlock only for the search time. Nothing guarantees that the returned iova still exists for __free_iova(). Prevent a potential use-after-free and double-free by holding the spinlock the whole time the iova is being searched and freed. Cc: David Woodhouse Cc: Joerg Roedel Cc: iomm

[RFC 3/3] iommu/iova: Remove find_iova()

2018-06-21 Thread Dmitry Safonov via iommu
This function is potentially dangerous: nothing protects the returned iova. As there are no in-tree users anymore, delete it. Cc: David Woodhouse Cc: Joerg Roedel Cc: iommu@lists.linux-foundation.org Cc: Dmitry Safonov <0x7f454...@gmail.com> Signed-off-by: Dmitry Safonov --- drivers/iommu/iova.c | 2

Re: [RFC 0/3] iommu/iova: Unsafe locking in find_iova()

2018-07-06 Thread Dmitry Safonov via iommu
On Fri, 2018-07-06 at 15:16 +0200, Joerg Roedel wrote: > On Thu, Jun 21, 2018 at 07:08:20PM +0100, Dmitry Safonov wrote: > > find_iova() looks to be using a bad locking practice: it locks the > > returned iova only for the search time. And looking in code, the > > element can be removed from the t

Re: [RFC 0/3] iommu/iova: Unsafe locking in find_iova()

2018-07-09 Thread Dmitry Safonov via iommu
On Fri, 2018-07-06 at 17:13 +0200, Joerg Roedel wrote: > On Fri, Jul 06, 2018 at 03:10:47PM +0100, Dmitry Safonov wrote: > > Yes, as far as I can see, there are code-paths which may try to > > handle > > it at the same time: > > o memory notifiers for hot-unplug (intel-iommu.c) > > o drivers unload