Re: [PATCH v2 17/17] memblock: use separate iterators for memory and reserved regions

2020-08-02 Thread Ingo Molnar


* Mike Rapoport  wrote:

> From: Mike Rapoport 
> 
> for_each_memblock() is used to iterate over memblock.memory in
> a few places that use data from memblock_region rather than the memory
> ranges.
> 
> Introduce separate for_each_mem_region() and for_each_reserved_mem_region()
> to improve encapsulation of memblock internals from its users.
> 
> Signed-off-by: Mike Rapoport 
> ---
>  .clang-format  |  3 ++-
>  arch/arm64/kernel/setup.c  |  2 +-
>  arch/arm64/mm/numa.c   |  2 +-
>  arch/mips/netlogic/xlp/setup.c |  2 +-
>  arch/x86/mm/numa.c |  2 +-
>  include/linux/memblock.h   | 19 ---
>  mm/memblock.c  |  4 ++--
>  mm/page_alloc.c|  8 
>  8 files changed, 28 insertions(+), 14 deletions(-)

The x86 part:

Acked-by: Ingo Molnar 

Thanks,

Ingo
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v2 14/17] x86/setup: simplify reserve_crashkernel()

2020-08-02 Thread Ingo Molnar


* Mike Rapoport  wrote:

> From: Mike Rapoport 
> 
> * Replace magic numbers with defines
> * Replace memblock_find_in_range() + memblock_reserve() with
>   memblock_phys_alloc_range()
> * Stop checking for low memory size in reserve_crashkernel_low(). The
>   allocation from a limited range will fail anyway if there is not enough
>   memory, so there is no need for an extra traversal of memblock.memory
> 
> Signed-off-by: Mike Rapoport 

Assuming that this got or will get tested with a crash kernel:

Acked-by: Ingo Molnar 

Thanks,

Ingo


Re: [PATCH v2 13/17] x86/setup: simplify initrd relocation and reservation

2020-08-02 Thread Ingo Molnar


* Mike Rapoport  wrote:

> From: Mike Rapoport 
> 
> Currently, initrd image is reserved very early during setup and then it
> might be relocated and re-reserved after the initial physical memory
> mapping is created. The "late" reservation of memblock verifies that the
> mapped memory size exceeds the size of initrd, then checks whether
> relocation is required and, if so, relocates initrd to new memory allocated
> from memblock and frees the old location.
> 
> The check for memory size is excessive as the memblock allocation will fail
> anyway if there is not enough memory. Besides, there is no point in
> allocating memory from memblock using memblock_find_in_range() +
> memblock_reserve() when memblock_phys_alloc_range() provides the required
> functionality.
> 
> Remove the redundant check and simplify memblock allocation.
> 
> Signed-off-by: Mike Rapoport 

Assuming there's no hidden dependency here breaking something:

  Acked-by: Ingo Molnar 

Thanks,

Ingo


Re: [PATCH 14/15] x86/numa: remove redundant iteration over memblock.reserved

2020-07-28 Thread Ingo Molnar


* Mike Rapoport  wrote:

> On Tue, Jul 28, 2020 at 12:44:40PM +0200, Ingo Molnar wrote:
> > 
> > * Mike Rapoport  wrote:
> > 
> > > From: Mike Rapoport 
> > > 
> > > The numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> > > regions to set node ID in memblock.reserved and then traverses
> > > memblock.reserved to update reserved_nodemask to include node IDs that
> > > were set in the first loop.
> > > 
> > > Remove redundant traversal over memblock.reserved and update
> > > reserved_nodemask while iterating over numa_meminfo.
> > > 
> > > Signed-off-by: Mike Rapoport 
> > > ---
> > >  arch/x86/mm/numa.c | 26 ++
> > >  1 file changed, 10 insertions(+), 16 deletions(-)
> > 
> > I suspect you'd like to carry this in the -mm tree?
> 
> Yes.
>  
> > Acked-by: Ingo Molnar 
> 
> Thanks!

Assuming it is correct and works. :-)

Thanks,

Ingo


Re: [PATCH 14/15] x86/numa: remove redundant iteration over memblock.reserved

2020-07-28 Thread Ingo Molnar


* Mike Rapoport  wrote:

> From: Mike Rapoport 
> 
> The numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> regions to set node ID in memblock.reserved and then traverses
> memblock.reserved to update reserved_nodemask to include node IDs that were
> set in the first loop.
> 
> Remove redundant traversal over memblock.reserved and update
> reserved_nodemask while iterating over numa_meminfo.
> 
> Signed-off-by: Mike Rapoport 
> ---
>  arch/x86/mm/numa.c | 26 ++
>  1 file changed, 10 insertions(+), 16 deletions(-)

I suspect you'd like to carry this in the -mm tree?

Acked-by: Ingo Molnar 

Thanks,

Ingo


Re: [patch V3 00/29] stacktrace: Consolidate stack trace usage

2019-04-25 Thread Ingo Molnar


* Thomas Gleixner  wrote:

> - if (unlikely(!ret))
> + if (unlikely(!ret)) {
> + if (!trace->nr_entries) {
> + /*
> +  * If save_trace fails here, the printing might
> +  * trigger a WARN but because of the !nr_entries it
> +  * should not do bad things.
> +  */
> + save_trace(trace);
> + }
>   return print_circular_bug(&this, target_entry, next, prev);
> + }
>   else if (unlikely(ret < 0))
>   return print_bfs_bug(ret);

Just a minor style nit: the 'else' should probably be on the same line as
the '}' it belongs to, to make it really obvious that the 'if' has an
'else' branch?

At that point the condition should probably also use balanced curly 
braces.

Interdiff looks good otherwise.

Thanks,

Ingo


Re: [RFC PATCH v9 03/13] mm: Add support for eXclusive Page Frame Ownership (XPFO)

2019-04-17 Thread Ingo Molnar

* Nadav Amit  wrote:

> > On Apr 17, 2019, at 10:09 AM, Ingo Molnar  wrote:
> > 
> > 
> > * Khalid Aziz  wrote:
> > 
> >>> I.e. the original motivation of the XPFO patches was to prevent execution 
> >>> of direct kernel mappings. Is this motivation still present if those 
> >>> mappings are non-executable?
> >>> 
> >>> (Sorry if this has been asked and answered in previous discussions.)
> >> 
> >> Hi Ingo,
> >> 
> >> That is a good question. Because of the cost of XPFO, we have to be very
> >> sure we need this protection. The paper from Vasileios, Michalis and
> >> Angelos - <http://www.cs.columbia.edu/~vpk/papers/ret2dir.sec14.pdf>,
> >> does go into how ret2dir attacks can bypass SMAP/SMEP in sections 6.1
> >> and 6.2.
> > 
> > So it would be nice if you could generally summarize external arguments 
> > when defending a patchset, instead of me having to dig through a PDF 
> > which not only causes me to spend time that you probably already spent 
> > reading that PDF, but I might also interpret it incorrectly. ;-)
> > 
> > The PDF you cited says this:
> > 
> >  "Unfortunately, as shown in Table 1, the W^X prop-erty is not enforced 
> >   in many platforms, including x86-64.  In our example, the content of 
> >   user address 0xBEEF000 is also accessible through kernel address 
> >   0x87FF9F08 as plain, executable code."
> > 
> > Is this actually true of modern x86-64 kernels? We've locked down W^X 
> > protections in general.
> 
> As I was curious, I looked at the paper. Here is a quote from it:
> 
> "In x86-64, however, the permissions of physmap are not in sane state.
> Kernels up to v3.8.13 violate the W^X property by mapping the entire region
> as “readable, writeable, and executable” (RWX)—only very recent kernels
> (≥v3.9) use the more conservative RW mapping.”

But v3.8.13 is a 5+ year old kernel; it doesn't count as a "modern" 
kernel in any sense of the word. For any proposed patchset with 
significant complexity and non-trivial costs the benchmark version 
threshold is the "current upstream kernel".

So does that quote address my followup questions:

> Is this actually true of modern x86-64 kernels? We've locked down W^X
> protections in general.
>
> I.e. this conclusion:
>
>   "Therefore, by simply overwriting kfptr with 0x87FF9F08 and
>triggering the kernel to dereference it, an attacker can directly
>execute shell code with kernel privileges."
>
> ... appears to be predicated on imperfect W^X protections on the x86-64
> kernel.
>
> Do such holes exist on the latest x86-64 kernel? If yes, is there a
> reason to believe that these W^X holes cannot be fixed, or that any fix
> would be more expensive than XPFO?

?

What you are proposing here is an XPFO patch-set against recent kernels 
with significant runtime overhead, so my questions about the W^X holes 
are warranted.

Thanks,

Ingo

Re: [RFC PATCH v9 03/13] mm: Add support for eXclusive Page Frame Ownership (XPFO)

2019-04-17 Thread Ingo Molnar


* Khalid Aziz  wrote:

> > I.e. the original motivation of the XPFO patches was to prevent execution 
> > of direct kernel mappings. Is this motivation still present if those 
> > mappings are non-executable?
> > 
> > (Sorry if this has been asked and answered in previous discussions.)
> 
> Hi Ingo,
> 
> That is a good question. Because of the cost of XPFO, we have to be very
> sure we need this protection. The paper from Vasileios, Michalis and
> Angelos - <http://www.cs.columbia.edu/~vpk/papers/ret2dir.sec14.pdf>,
> does go into how ret2dir attacks can bypass SMAP/SMEP in sections 6.1
> and 6.2.

So it would be nice if you could generally summarize external arguments 
when defending a patchset, instead of me having to dig through a PDF 
which not only causes me to spend time that you probably already spent 
reading that PDF, but I might also interpret it incorrectly. ;-)

The PDF you cited says this:

  "Unfortunately, as shown in Table 1, the W^X prop-erty is not enforced 
   in many platforms, including x86-64.  In our example, the content of 
   user address 0xBEEF000 is also accessible through kernel address 
   0x87FF9F08 as plain, executable code."

Is this actually true of modern x86-64 kernels? We've locked down W^X 
protections in general.

I.e. this conclusion:

  "Therefore, by simply overwriting kfptr with 0x87FF9F08 and 
   triggering the kernel to dereference it, an attacker can directly 
   execute shell code with kernel privileges."

... appears to be predicated on imperfect W^X protections on the x86-64 
kernel.

Do such holes exist on the latest x86-64 kernel? If yes, is there a 
reason to believe that these W^X holes cannot be fixed, or that any fix 
would be more expensive than XPFO?

Thanks,

Ingo


Re: [RFC PATCH v9 03/13] mm: Add support for eXclusive Page Frame Ownership (XPFO)

2019-04-17 Thread Ingo Molnar


[ Sorry, had to trim the Cc: list from hell. Tried to keep all the 
  mailing lists and all x86 developers. ]

* Khalid Aziz  wrote:

> From: Juerg Haefliger 
> 
> This patch adds basic support infrastructure for XPFO which protects 
> against 'ret2dir' kernel attacks. The basic idea is to enforce 
> exclusive ownership of page frames by either the kernel or userspace, 
> unless explicitly requested by the kernel. Whenever a page destined for 
> userspace is allocated, it is unmapped from physmap (the kernel's page 
> table). When such a page is reclaimed from userspace, it is mapped back 
> to physmap. Individual architectures can enable full XPFO support using 
> this infrastructure by supplying architecture specific pieces.

I have a higher level, meta question:

Is there any updated analysis outlining why this XPFO overhead would be 
required on x86-64 kernels running on SMAP/SMEP CPUs - which should be all 
recent Intel and AMD CPUs - and with kernels that mark all direct kernel 
mappings as non-executable, which should be all reasonably modern 
kernels later than v4.0 or so?

I.e. the original motivation of the XPFO patches was to prevent execution 
of direct kernel mappings. Is this motivation still present if those 
mappings are non-executable?

(Sorry if this has been asked and answered in previous discussions.)

Thanks,

Ingo


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-18 Thread Ingo Molnar


* tedheadster  wrote:

> > But you said without the fix it doesn't work at all?  Or is this
> > the same box, just with the aic7xxx controller disabled?
> >
> > In general the patch should only have two effects:
> >
> >  - set a small amount of memory aside for bounce buffering
> >  - switch the default dma_ops from dma_direct_ops to swiotlb_ops
> >
> > I can't really see how either could have such a huge effect, even with
> > swiotlb having a couple more wired up ops for which we'd enable spectre
> > mitigations.
> >
> > So a strict before and after would be very interesting, if it is really
> > just this one change that causes such a huge drop we have hidden dragons
> > somewhere..
> 
> Christoph,
>   I did a very controlled before-and-after and got very sensible
> results. All compiles were close in time with patched and un-patched
> kernels.
> 
> I must have screwed something up with my last round of testing.
> 
> Ingo: I am confident this patch should be accepted.

Thanks for the update, I've re-applied this to tip:x86/urgent.

Ingo


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-18 Thread Ingo Molnar


* tedheadster  wrote:

> On Sun, Oct 14, 2018 at 3:52 AM Christoph Hellwig  wrote:
> >
> > We already build the swiotlb code for 32-bit kernels with PAE support,
> > but the code to actually use swiotlb has only been enabled for 64-bit
> > kernel for an unknown reason.
> >
> > Before Linux 4.18 we papered over this fact because the networking code,
> > the scsi layer and some random block drivers implemented their own
> > bounce buffering scheme.
> >
> > Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
> > Fixes: ab74cfeb ("net: remove the PCI_DMA_BUS_IS_PHYS check in 
> > illegal_highdma")
> > Reported-by: tedheadster 
> > Tested-by: tedheadster 
> > ---
> >
> 
> Christoph,
>   this fix has caused performance to decrease dramatically. Kernel
> builds that used to take 10-15 minutes are now taking 45-60 minutes on
> the same machine.

Ok, this is way too severe a regression, and the two offending 
commits:

 Fixes: 21e07dba9fb1 ("scsi: reduce use of block bounce buffers")
 Fixes: ab74cfebafa3 ("net: remove the PCI_DMA_BUS_IS_PHYS check in 
illegal_highdma")

... are from half a year ago and are in v4.18 already. Fixes should not 
cause new regressions in any case.

So I've removed this patch from tip:x86/urgent for now, could you please 
re-apply it when you do your testing? I've attached it below.

Thanks,

Ingo

===>
From 6f3bc8028570e4c326030e8795dbcd57c561b723 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig 
Date: Sun, 14 Oct 2018 09:52:08 +0200
Subject: [PATCH] x86/swiotlb: Enable swiotlb for > 4GiG ram on 32-bit kernels

We already build the swiotlb code for 32-bit kernels with PAE support,
but the code to actually use swiotlb has only been enabled for 64-bit
kernel for an unknown reason.

Before Linux 4.18 we paper over this fact because the networking code,
the scsi layer and some random block drivers implemented their own
bounce buffering scheme.

Fixes: 21e07dba9fb1 ("scsi: reduce use of block bounce buffers")
Fixes: ab74cfebafa3 ("net: remove the PCI_DMA_BUS_IS_PHYS check in 
illegal_highdma")
Reported-by: Matthew Whitehead 
Signed-off-by: Christoph Hellwig 
Signed-off-by: Thomas Gleixner 
Tested-by: Matthew Whitehead 
Cc: konrad.w...@oracle.com
Cc: iommu@lists.linux-foundation.org
Cc: sta...@vger.kernel.org
Link: https://lkml.kernel.org/r/20181014075208.2715-1-...@lst.de
---
 arch/x86/kernel/pci-swiotlb.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index 661583662430..71c0b01d93b1 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
 int __init pci_swiotlb_detect_4gb(void)
 {
/* don't initialize swiotlb if iommu=off (no_iommu=1) */
-#ifdef CONFIG_X86_64
if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
swiotlb = 1;
-#endif
 
/*
 * If SME is active then swiotlb will be set to 1 so that bounce


Re: [PATCH] x86: enable swiotlb for > 4GiG ram on 32-bit kernels

2018-10-14 Thread Ingo Molnar


* Thomas Gleixner  wrote:

> On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> 
> > On Sun, Oct 14, 2018 at 10:13:31AM +0200, Thomas Gleixner wrote:
> > > On Sun, 14 Oct 2018, Christoph Hellwig wrote:
> > > 
> > > > We already build the swiotlb code for 32-bit kernels with PAE support,
> > > > but the code to actually use swiotlb has only been enabled for 64-bit
> > > > kernel for an unknown reason.
> > > > 
> > > > Before Linux 4.18 we papered over this fact because the networking code,
> > > > the scsi layer and some random block drivers implemented their own
> > > > bounce buffering scheme.
> > > > 
> > > > Fixes: 21e07dba ("scsi: reduce use of block bounce buffers")
> 
> Please use the first 12 characters of the commit SHA for fixes tags in the
> future, as documented. No need to resend, I fixed it up for you and added a
> Cc: stable as well

For those who have their ~/.gitconfig's from ancient Git history, this can be 
done via:

git config --global core.abbrev 12

Thanks,

Ingo


Re: use generic dma-direct and swiotlb code for x86 V3

2018-03-20 Thread Ingo Molnar

* Christoph Hellwig <h...@lst.de> wrote:

> On Tue, Mar 20, 2018 at 09:37:51AM +0100, Ingo Molnar wrote:
> > > git://git.infradead.org/users/hch/misc.git dma-direct-x86
> > 
> > Btw., what's the upstreaming route for these patches?
> > 
> > While it's a multi-arch series it's all pretty x86-heavy as well, so we
> > can host it in -tip (in tip:core/dma or such), but feel free to handle
> > it yourself as well:
> > 
> >   Reviewed-by: Ingo Molnar <mi...@kernel.org>
> 
> Either way is fine with me.  The dma-mapping tree is pretty light this
> cycles, so I don't expect any conflicts.  If you want it feel free to grab
> it, otherwise I'll queue it up.

Ok, I picked your series up into tip:core/dma:

 - added the newly arrived Tested-by's and Reviewed-by's
 - some minor edits to titles/changelogs, no functional changes

will push it all out after testing.

Thanks,

Ingo


Re: use generic dma-direct and swiotlb code for x86 V3

2018-03-20 Thread Ingo Molnar

* Christoph Hellwig <h...@lst.de> wrote:

> On Mon, Mar 19, 2018 at 11:27:37AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Mon, Mar 19, 2018 at 11:38:12AM +0100, Christoph Hellwig wrote:
> > > Hi all,
> > > 
> > > this series switches the x86 code the the dma-direct implementation
> > > for direct (non-iommu) dma and the generic swiotlb ops.  This includes
> > > getting rid of the special ops for the AMD memory encryption case and
> > > the STA2x11 SOC.  The generic implementations are based on the x86
> > > code, so they provide the same functionality.
> > 
> > I need to test this on my baremetal and Xen setup - and I lost your
> > git repo URL - any chance you could point it out to me so I can
> > kick off a build?
> 
> git://git.infradead.org/users/hch/misc.git dma-direct-x86

Btw., what's the upstreaming route for these patches?

While it's a multi-arch series it's all pretty x86-heavy as well, so we can
host it in -tip (in tip:core/dma or such), but feel free to handle it
yourself as well:

  Reviewed-by: Ingo Molnar <mi...@kernel.org>

Thanks,

Ingo


Re: [PATCH] headers: untangle kmemleak.h from mm.h

2018-02-11 Thread Ingo Molnar

* Randy Dunlap <rdun...@infradead.org> wrote:

> From: Randy Dunlap <rdun...@infradead.org>
> 
> Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious
> reason. It looks like it's only a convenience, so remove kmemleak.h
> from slab.h and add <linux/kmemleak.h> to any users of kmemleak_*
> that don't already #include it.
> Also remove <linux/kmemleak.h> from source files that do not use it.
> 
> This is tested on i386 allmodconfig and x86_64 allmodconfig. It
> would be good to run it through the 0day bot for other $ARCHes.
> I have neither the horsepower nor the storage space for the other
> $ARCHes.
> 
> [slab.h is the second most used header file after module.h; kernel.h
> is right there with slab.h. There could be some minor error in the
> counting due to some #includes having comments after them and I
> didn't combine all of those.]
> 
> This is Lingchi patch #1 (death by a thousand cuts, applied to kernel
> header files).
> 
> Signed-off-by: Randy Dunlap <rdun...@infradead.org>

Nice find:

Reviewed-by: Ingo Molnar <mi...@kernel.org>

I agree that it needs to go through 0-day to find any hidden dependencies
we might have grown due to this.

Thanks,

Ingo


Re: [PATCH v9 00/38] x86: Secure Memory Encryption (AMD)

2017-07-08 Thread Ingo Molnar

* Tom Lendacky  wrote:

> This patch series provides support for AMD's new Secure Memory Encryption
> (SME) feature.

I'm wondering, what's the typical performance hit to DRAM access latency
when SME is enabled?

On that same note, if the performance hit is noticeable I'd expect SME not
to be enabled in native kernels typically - but it still looks like a useful
hardware feature. Since it's controlled at the page table level, have you
considered allowing SME-activated vmas via mmap(), even on kernels that are
otherwise not using encrypted DRAM?

One would think that putting encryption keys into such encrypted RAM regions
would generally improve robustness against various physical-space attacks
that want to extract keys but don't have full control of the CPU.

Thanks,

Ingo


Re: [PATCH v2] iommu/amd: Don't put completion-wait semaphore on stack

2016-09-15 Thread Ingo Molnar

* Joerg Roedel <j...@8bytes.org> wrote:

> Hi Ingo,
> 
> On Thu, Sep 15, 2016 at 07:44:35AM +0200, Ingo Molnar wrote:
> > Yeah, I can still remove it - just zapped it in fact.
> 
> Thanks, and sorry for the hassle. Here is the v2 patch that has the
> correct locking. I tested it with and without lockdep enabled and also
> under some load. Looks all fine.

Thanks a lot for the quick response!

Ingo


Re: [PATCH] iommu/amd: Don't put completion-wait semaphore on stack

2016-09-14 Thread Ingo Molnar

* Joerg Roedel <jroe...@suse.de> wrote:

> On Wed, Sep 14, 2016 at 11:27:12PM +0200, Joerg Roedel wrote:
> > On Wed, Sep 14, 2016 at 05:26:48PM +0200, Ingo Molnar wrote:
> > > 
> > > Cool, thanks! I'll put this into tip:x86/asm which has the virtually 
> > > mapped stack 
> > > patches - ok?
> > 
> > Yeah, sure, that is the best thing to do. Just wait for the v2 I'll
> > sending tomorrow. I just realised that the locking is not correct in one
> > of the cases with this patch and I'd like to fix that first.
> 
> Oh sorry, just saw the tip-bot mail. Let me know whether you can still
> remove it and just take v2 or if you want a follow-on patch.

Yeah, I can still remove it - just zapped it in fact.

Thanks,

Ingo


Re: [PATCH] iommu/amd: Don't put completion-wait semaphore on stack

2016-09-14 Thread Ingo Molnar

* Joerg Roedel  wrote:

> From: Joerg Roedel 
> 
> The semaphore used by the AMD IOMMU to signal command
> completion lived on the stack until now, which was safe as
> the driver busy-waited on the semaphore with IRQs disabled,
> so the stack can't go away under the driver.
> 
> But the recently introduced vmap-based stacks break this as
> the physical address of the semaphore can't be determined
> easily anymore. The driver used the __pa() macro, but that
> only works in the direct-mapping. The result were
> Completion-Wait timeout errors seen by the IOMMU driver,
> breaking system boot.
> 
> Since putting the semaphore on the stack is bad design
> anyway, move the semaphore into 'struct amd_iommu'. It is
> protected by the per-iommu lock and now in the direct
> mapping again. This fixes the Completion-Wait timeout errors
> and makes AMD IOMMU systems boot again with vmap-based
> stacks enabled.
> 
> Reported-by: Borislav Petkov 
> Signed-off-by: Joerg Roedel 
> ---
>  drivers/iommu/amd_iommu.c   | 14 --
>  drivers/iommu/amd_iommu_types.h |  2 ++
>  2 files changed, 10 insertions(+), 6 deletions(-)

Cool, thanks! I'll put this into tip:x86/asm which has the virtually mapped 
stack 
patches - ok?

Thanks,

Ingo


Re: [PATCH V3 2/2] debugfs: don't assume sizeof(bool) to be 4 bytes

2015-09-16 Thread Ingo Molnar

* Steven Rostedt  wrote:

> But please, next time, go easy on the Cc list. Maybe just use bcc for
> those not on the list, stating that you BCC'd a lot of people to make
> sure this is sane, but didn't want to spam everyone with every reply.

Not just that, such a long Cc: list is a semi-guarantee that various list
engines (vger included, I think) would drop the mail as spam and nobody
else would get the mail...

Thanks,

Ingo


Re: [PATCH] amd_iommu: IO_PAGE_FAULTs on unity mapped regions during amd_iommu_init()

2013-02-07 Thread Ingo Molnar

* Shuah Khan shuah.k...@hp.com wrote:

 When dma_ops are initialized the unity mappings are created. The
 init_device_table_dma() function makes sure DMA from all devices is
 blocked by default. This opens a short window in time where DMA to
 unity mapped regions is blocked by the IOMMU. Make sure this does not
 happen by initializing the device table after dma_ops.
 
 Reference: http://www.gossamer-threads.com/lists/linux/kernel/1670769
 
 Signed-off-by: Shuah Khan shuah.k...@hp.com
 CC: sta...@vger.kernel.org 3.0
 ---
  arch/x86/kernel/amd_iommu_init.c |   10 +++---
  1 file changed, 7 insertions(+), 3 deletions(-)

That file does not exist anymore, it died 2.5+ years ago:

  403f81d8ee53 iommu/amd: Move missing parts to drivers/iommu

Thanks,

Ingo


Re: [git pull] ioapic-cleanups-for-v3.9

2013-01-25 Thread Ingo Molnar

* Joerg Roedel j...@8bytes.org wrote:

 Hi Ingo,
 
 The following changes since commit 7d1f9aeff1ee4a20b1aeb377dd0f579fe9647619:
 
   Linux 3.8-rc4 (2013-01-17 19:25:45 -0800)
 
 are available in the git repository at:
 
   git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 
 tags/ioapic-cleanups-for-v3.9

Hm, there are some not so trivial looking conflicts in 
io_apic.c, due to the MSI patches I applied yesterday:

 5ca72c4f7c41 AHCI: Support multiple MSIs
 08261d87f7d1 PCI/MSI: Enable multiple MSIs with pci_enable_msi_block_auto()
 51906e779f2b x86/MSI: Support multiple MSIs in presense of IRQ remapping

Could you please resolve them and resend?

Thanks,

Ingo


Re: [PATCH 03/28] x86/irq: Use irq_remap specific print_IO_APIC paths only on Intel

2012-07-06 Thread Ingo Molnar

* Joerg Roedel joerg.roe...@amd.com wrote:

 The VT-d IOMMU requires a special setup of the IO-APIC to
 remap its interrupts. Therefore the print_IO_APIC routine
 has separate code paths to account for that and print out the
 special setup. This is not required on AMD IOMMU systems, so
 make these paths really Intel-specific.
 
 Cc: x...@kernel.org
 Cc: Yinghai Lu ying...@kernel.org
 Cc: Suresh Siddha suresh.b.sid...@intel.com
 Signed-off-by: Joerg Roedel joerg.roe...@amd.com
 ---
  arch/x86/include/asm/irq_remapping.h |2 ++
  arch/x86/kernel/apic/io_apic.c   |4 ++--
  drivers/iommu/intel_irq_remapping.c  |2 ++
  drivers/iommu/irq_remapping.c|1 +
  4 files changed, 7 insertions(+), 2 deletions(-)
 
 diff --git a/arch/x86/include/asm/irq_remapping.h 
 b/arch/x86/include/asm/irq_remapping.h
 index 5fb9bbb..228d5e5 100644
 --- a/arch/x86/include/asm/irq_remapping.h
 +++ b/arch/x86/include/asm/irq_remapping.h
 @@ -27,6 +27,7 @@
  #ifdef CONFIG_IRQ_REMAP
  
  extern int irq_remapping_enabled;
 +extern int intel_irq_remap_debug;

Sigh.

Instead of yet another set of global flags thrown around the
kernel, please properly factor out this code, its data structures
and methods: introduce a single descriptor structure that
describes this piece of hardware, with the debugging flags part of
that structure - along with an operations function-pointer
structure and such.

This code came from the "we have a single, known type of system
global IOMMU" world - and we now want to transform this into
something that is properly abstracted out and made flexible, as
we extend its capabilities.

Thanks,

Ingo


Re: [PATCH 00/10] vt-d irq_remap_ops patchset

2012-04-02 Thread Ingo Molnar

* Suresh Siddha suresh.b.sid...@intel.com wrote:

 Ingo,
 
 Here is Joerg's irq_remap_ops patchset updated for the latest -tip.
 Simplified some of the naming conventions to follow the irq_remapping
 terminology. There are still some if (irq_remapping_enabled) checks in
 io_apic.c that I would like to roll into the new io_apic_ops. I will
 look into that shortly.

Just wondering, has this been tested on affected hardware, on 
top of latest -tip or so? I realize that most of these are 
clean-ups so they should not break anything, but checking would 
be worth it nevertheless.

Thanks,

Ingo