Re: [Buildroot] [PATCH v2] package/glibc: ARC: use upstream 2.32

2020-09-19 Thread Thomas Petazzoni
On Wed, 16 Sep 2020 00:30:33 -0700
Vineet Gupta  wrote:

> The ARC glibc port was merged upstream in 2.32.
> There's no need to refer to github, as it has the exact same version and
> can be retired in the future.
> 
> Signed-off-by: Vineet Gupta 
> ---
> Changes since v1:
>   - dropped artifacts related to github version
> ---
>  .../glibc.hash|  2 ++
>  package/glibc/arc-2020.03-release/glibc.hash  |  7 ---
>  package/glibc/glibc.mk| 20 ++-
>  3 files changed, 13 insertions(+), 16 deletions(-)
>  create mode 100644 package/glibc/2.32-2-g386543bc4495f658dcce6cd4d11e4ba6574a46f5/glibc.hash
>  delete mode 100644 package/glibc/arc-2020.03-release/glibc.hash

I've added the hashes of the license files in the glibc.hash you've
added, and applied. Thanks!

Thomas
-- 
Thomas Petazzoni, CTO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc


Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Linus Torvalds
On Sat, Sep 19, 2020 at 10:39 AM Matthew Wilcox  wrote:
>
> My concern with that is people might use kmap() and then pass the address
> to a different task.  So we need to audit the current users of kmap()
> and convert any that do that into using vmap() instead.

Ahh. Yes, I guess they might do that. It sounds strange, but not
entirely crazy - I could imagine some "PIO thread" that does IO to a
page that has been set up by somebody else using kmap(). Or similar.

Linus



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Matthew Wilcox
On Sat, Sep 19, 2020 at 10:18:54AM -0700, Linus Torvalds wrote:
> On Sat, Sep 19, 2020 at 2:50 AM Thomas Gleixner  wrote:
> >
> > this provides a preemptible variant of kmap_atomic & related
> > interfaces. This is achieved by:
> 
> Ack. This looks really nice, even apart from the new capability.
> 
> The only thing I really reacted to is that the name doesn't make sense
> to me: "kmap_temporary()" seems a bit odd.
> 
> Particularly for an interface that really is basically meant as a
> better replacement of "kmap_atomic()" (but is perhaps also a better
> replacement for "kmap()").
> 
> I think I understand how the name came about: I think the "temporary"
> is there as a distinction from the "longterm" regular kmap(). So I
> think it makes some sense from an internal implementation angle, but I
> don't think it makes a lot of sense from an interface name.
> 
> I don't know what might be a better name, but if we want to emphasize
> that it's thread-private and a one-off, maybe "local" would be a
> better naming, and make it distinct from the "global" nature of the
> old kmap() interface?
> 
> However, another solution might be to just use this new preemptible
> "local" kmap(), and remove the old global one entirely. Yes, the old
> global one caches the page table mapping and that sounds really
> efficient and nice. But it's actually horribly horribly bad, because
> it means that we need to use locking for them. Your new "temporary"
> implementation seems to be fundamentally better locking-wise, and only
> need preemption disabling as locking (and is equally fast for the
> non-highmem case).
> 
> So I wonder if the single-page TLB flush isn't a better model, and
> whether it wouldn't be a lot simpler to just get rid of the old
> complex kmap() entirely, and replace it with this?
> 
> I agree we can't replace the kmap_atomic() version, because maybe
> people depend on the preemption disabling it also implied. But what
> about replacing the non-atomic kmap()?

My concern with that is people might use kmap() and then pass the address
to a different task.  So we need to audit the current users of kmap()
and convert any that do that into using vmap() instead.

I like kmap_local().  Or kmap_thread().



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Linus Torvalds
On Sat, Sep 19, 2020 at 2:50 AM Thomas Gleixner  wrote:
>
> this provides a preemptible variant of kmap_atomic & related
> interfaces. This is achieved by:

Ack. This looks really nice, even apart from the new capability.

The only thing I really reacted to is that the name doesn't make sense
to me: "kmap_temporary()" seems a bit odd.

Particularly for an interface that really is basically meant as a
better replacement of "kmap_atomic()" (but is perhaps also a better
replacement for "kmap()").

I think I understand how the name came about: I think the "temporary"
is there as a distinction from the "longterm" regular kmap(). So I
think it makes some sense from an internal implementation angle, but I
don't think it makes a lot of sense from an interface name.

I don't know what might be a better name, but if we want to emphasize
that it's thread-private and a one-off, maybe "local" would be a
better naming, and make it distinct from the "global" nature of the
old kmap() interface?

However, another solution might be to just use this new preemptible
"local" kmap(), and remove the old global one entirely. Yes, the old
global one caches the page table mapping and that sounds really
efficient and nice. But it's actually horribly horribly bad, because
it means that we need to use locking for them. Your new "temporary"
implementation seems to be fundamentally better locking-wise, and only
need preemption disabling as locking (and is equally fast for the
non-highmem case).

So I wonder if the single-page TLB flush isn't a better model, and
whether it wouldn't be a lot simpler to just get rid of the old
complex kmap() entirely, and replace it with this?

I agree we can't replace the kmap_atomic() version, because maybe
people depend on the preemption disabling it also implied. But what
about replacing the non-atomic kmap()?

Maybe I've missed something.  Is it because the new interface still
does "pagefault_disable()" perhaps?

But does it even need the pagefault_disable() at all? Yes, the
*atomic* one obviously needed it. But why does this new one need to
disable page faults?

[ I'm just reading the patches, I didn't try to apply them and look at
the end result, so I might have missed something ]

The only other worry I would have is just test coverage of this
change. I suspect very few developers use HIGHMEM. And I know the
various test robots don't tend to test 32-bit either.

But apart from that question about naming (and perhaps replacing
kmap() entirely), I very much like it.

Linus



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Daniel Vetter
On Sat, Sep 19, 2020 at 12:35 PM Daniel Vetter  wrote:
>
> On Sat, Sep 19, 2020 at 11:50 AM Thomas Gleixner  wrote:
> >
> > First of all, sorry for the horribly big Cc list!
> >
> > Following up to the discussion in:
> >
> >   https://lore.kernel.org/r/20200914204209.256266...@linutronix.de
> >
> > this provides a preemptible variant of kmap_atomic & related
> > interfaces. This is achieved by:
> >
> >  - Consolidating all kmap atomic implementations in generic code
> >
> >  - Switching from per CPU storage of the kmap index to a per task storage
> >
> >  - Adding a pteval array to the per task storage which contains the ptevals
> >of the currently active temporary kmaps
> >
> >  - Adding context switch code which checks whether the outgoing or the
> >incoming task has active temporary kmaps. If so, the outgoing task's
> >kmaps are removed and the incoming task's kmaps are restored.
> >
> >  - Adding new interfaces k[un]map_temporary*() which are not disabling
> >preemption and can be called from any context (except NMI).
> >
> >    Contrary to kmap() which provides preemptible and "persistent" mappings,
> >these interfaces are meant to replace the temporary mappings provided by
> >kmap_atomic*() today.
> >
> > This allows to get rid of conditional mapping choices and allows to have
> > preemptible short term mappings on 64bit which are today enforced to be
> > non-preemptible due to the highmem constraints. It clearly puts overhead on
> > the highmem users, but highmem is slow anyway.
> >
> > This is not a wholesale conversion which makes kmap_atomic magically
> > preemptible because there might be usage sites which rely on the implicit
> > preempt disable. So this needs to be done on a case by case basis and the
> > call sites converted to kmap_temporary.
> >
> > Note, that this is only lightly tested on X86 and completely untested on
> > all other architectures.
> >
> > The lot is also available from
> >
> >git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git highmem
>
> I think it should be the case, but I want to double check: Will
> copy_*_user be allowed within a kmap_temporary section? This would
> allow us to ditch an absolute pile of slowpaths.

(coffee just kicked in) copy_*_user is ofc allowed, but if you hit a
page fault you get a short read/write. This looks like it would remove
the need to handle these in a slowpath, since page faults can now be
served in these new kmap_temporary sections. But this sounds too good
to be true, so I'm wondering what I'm missing.
-Daniel
>
> >
> > Thanks,
> >
> > tglx
> > ---
> >  a/arch/arm/mm/highmem.c   |  121 -
> >  a/arch/microblaze/mm/highmem.c|   78 -
> >  a/arch/nds32/mm/highmem.c |   48 
> >  a/arch/powerpc/mm/highmem.c   |   67 ---
> >  a/arch/sparc/mm/highmem.c |  115 
> >  arch/arc/Kconfig  |1
> >  arch/arc/include/asm/highmem.h|8 +
> >  arch/arc/mm/highmem.c |   44 ---
> >  arch/arm/Kconfig  |1
> >  arch/arm/include/asm/highmem.h|   30 +++--
> >  arch/arm/mm/Makefile  |1
> >  arch/csky/Kconfig |1
> >  arch/csky/include/asm/highmem.h   |4
> >  arch/csky/mm/highmem.c|   75 -
> >  arch/microblaze/Kconfig   |1
> >  arch/microblaze/include/asm/highmem.h |6 -
> >  arch/microblaze/mm/Makefile   |1
> >  arch/microblaze/mm/init.c |6 -
> >  arch/mips/Kconfig |1
> >  arch/mips/include/asm/highmem.h   |4
> >  arch/mips/mm/highmem.c|   77 -
> >  arch/mips/mm/init.c   |3
> >  arch/nds32/Kconfig.cpu|1
> >  arch/nds32/include/asm/highmem.h  |   21 ++-
> >  arch/nds32/mm/Makefile|1
> >  arch/powerpc/Kconfig  |1
> >  arch/powerpc/include/asm/highmem.h|6 -
> >  arch/powerpc/mm/Makefile  |1
> >  arch/powerpc/mm/mem.c |7 -
> >  arch/sparc/Kconfig|1
> >  arch/sparc/include/asm/highmem.h  |7 -
> >  arch/sparc/mm/Makefile|3
> >  arch/sparc/mm/srmmu.c |2
> >  arch/x86/include/asm/fixmap.h |1
> >  arch/x86/include/asm/highmem.h|   12 +-
> >  arch/x86/include/asm/iomap.h  |   29 +++--
> >  arch/x86/mm/highmem_32.c  |   59 --
> >  arch/x86/mm/init_32.c |   15 --
> >  arch/x86/mm/iomap_32.c|   57 --
> >  arch/xtensa/Kconfig   |1
> >  arch/xtensa/include/asm/highmem.h |9 +
> >  arch/xtensa/mm/highmem.c  |   44 ---
> >  b/arch/x86/Kconfig|3
> >  include/linux/highmem.h   |  141 

Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Daniel Vetter
On Sat, Sep 19, 2020 at 11:50 AM Thomas Gleixner  wrote:
>
> First of all, sorry for the horribly big Cc list!
>
> Following up to the discussion in:
>
>   https://lore.kernel.org/r/20200914204209.256266...@linutronix.de
>
> this provides a preemptible variant of kmap_atomic & related
> interfaces. This is achieved by:
>
>  - Consolidating all kmap atomic implementations in generic code
>
>  - Switching from per CPU storage of the kmap index to a per task storage
>
>  - Adding a pteval array to the per task storage which contains the ptevals
>of the currently active temporary kmaps
>
>  - Adding context switch code which checks whether the outgoing or the
>incoming task has active temporary kmaps. If so, the outgoing task's
>kmaps are removed and the incoming task's kmaps are restored.
>
>  - Adding new interfaces k[un]map_temporary*() which are not disabling
>preemption and can be called from any context (except NMI).
>
>    Contrary to kmap() which provides preemptible and "persistent" mappings,
>these interfaces are meant to replace the temporary mappings provided by
>kmap_atomic*() today.
>
> This allows to get rid of conditional mapping choices and allows to have
> preemptible short term mappings on 64bit which are today enforced to be
> non-preemptible due to the highmem constraints. It clearly puts overhead on
> the highmem users, but highmem is slow anyway.
>
> This is not a wholesale conversion which makes kmap_atomic magically
> preemptible because there might be usage sites which rely on the implicit
> preempt disable. So this needs to be done on a case by case basis and the
> call sites converted to kmap_temporary.
>
> Note, that this is only lightly tested on X86 and completely untested on
> all other architectures.
>
> The lot is also available from
>
>git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git highmem

I think it should be the case, but I want to double check: Will
copy_*_user be allowed within a kmap_temporary section? This would
allow us to ditch an absolute pile of slowpaths.
-Daniel

>
> Thanks,
>
> tglx
> ---
>  a/arch/arm/mm/highmem.c   |  121 -
>  a/arch/microblaze/mm/highmem.c|   78 -
>  a/arch/nds32/mm/highmem.c |   48 
>  a/arch/powerpc/mm/highmem.c   |   67 ---
>  a/arch/sparc/mm/highmem.c |  115 
>  arch/arc/Kconfig  |1
>  arch/arc/include/asm/highmem.h|8 +
>  arch/arc/mm/highmem.c |   44 ---
>  arch/arm/Kconfig  |1
>  arch/arm/include/asm/highmem.h|   30 +++--
>  arch/arm/mm/Makefile  |1
>  arch/csky/Kconfig |1
>  arch/csky/include/asm/highmem.h   |4
>  arch/csky/mm/highmem.c|   75 -
>  arch/microblaze/Kconfig   |1
>  arch/microblaze/include/asm/highmem.h |6 -
>  arch/microblaze/mm/Makefile   |1
>  arch/microblaze/mm/init.c |6 -
>  arch/mips/Kconfig |1
>  arch/mips/include/asm/highmem.h   |4
>  arch/mips/mm/highmem.c|   77 -
>  arch/mips/mm/init.c   |3
>  arch/nds32/Kconfig.cpu|1
>  arch/nds32/include/asm/highmem.h  |   21 ++-
>  arch/nds32/mm/Makefile|1
>  arch/powerpc/Kconfig  |1
>  arch/powerpc/include/asm/highmem.h|6 -
>  arch/powerpc/mm/Makefile  |1
>  arch/powerpc/mm/mem.c |7 -
>  arch/sparc/Kconfig|1
>  arch/sparc/include/asm/highmem.h  |7 -
>  arch/sparc/mm/Makefile|3
>  arch/sparc/mm/srmmu.c |2
>  arch/x86/include/asm/fixmap.h |1
>  arch/x86/include/asm/highmem.h|   12 +-
>  arch/x86/include/asm/iomap.h  |   29 +++--
>  arch/x86/mm/highmem_32.c  |   59 --
>  arch/x86/mm/init_32.c |   15 --
>  arch/x86/mm/iomap_32.c|   57 --
>  arch/xtensa/Kconfig   |1
>  arch/xtensa/include/asm/highmem.h |9 +
>  arch/xtensa/mm/highmem.c  |   44 ---
>  b/arch/x86/Kconfig|3
>  include/linux/highmem.h   |  141 +++-
>  include/linux/io-mapping.h|2
>  include/linux/sched.h |9 +
>  kernel/sched/core.c   |   10 +
>  mm/Kconfig|3
>  mm/highmem.c  |  192 --
>  49 files changed, 422 insertions(+), 909 deletions(-)



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[patch RFC 13/15] mm/highmem: Remove the old kmap_atomic cruft

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
---
 include/linux/highmem.h |   65 ++--
 mm/highmem.c|   28 +---
 2 files changed, 28 insertions(+), 65 deletions(-)

--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -94,27 +94,6 @@ static inline void kunmap(struct page *p
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
-
-#ifndef CONFIG_KMAP_ATOMIC_GENERIC
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
-void kunmap_atomic_high(void *kvaddr);
-
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
-{
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-   return kmap_atomic_high_prot(page, prot);
-}
-
-static inline void __kunmap_atomic(void *vaddr)
-{
-   kunmap_atomic_high(vaddr);
-   pagefault_enable();
-}
-#else /* !CONFIG_KMAP_ATOMIC_GENERIC */
-
 static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
preempt_disable();
@@ -127,17 +106,14 @@ static inline void *kmap_atomic_pfn(unsi
return kmap_atomic_pfn_prot(pfn, kmap_prot);
 }
 
-static inline void __kunmap_atomic(void *addr)
+static inline void *kmap_atomic(struct page *page)
 {
-   kumap_atomic_indexed(addr);
+   return kmap_atomic_prot(page, kmap_prot);
 }
 
-
-#endif /* CONFIG_KMAP_ATOMIC_GENERIC */
-
-static inline void *kmap_atomic(struct page *page)
+static inline void __kunmap_atomic(void *addr)
 {
-   return kmap_atomic_prot(page, kmap_prot);
+   kunmap_atomic_indexed(addr);
 }
 
 /* declarations for linux/mm/highmem.c */
@@ -233,39 +209,6 @@ static inline void __kunmap_atomic(void
 
 #endif /* CONFIG_HIGHMEM */
 
-#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
-
-DECLARE_PER_CPU(int, __kmap_atomic_idx);
-
-static inline int kmap_atomic_idx_push(void)
-{
-   int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   WARN_ON_ONCE(in_irq() && !irqs_disabled());
-   BUG_ON(idx >= KM_TYPE_NR);
-#endif
-   return idx;
-}
-
-static inline int kmap_atomic_idx(void)
-{
-   return __this_cpu_read(__kmap_atomic_idx) - 1;
-}
-
-static inline void kmap_atomic_idx_pop(void)
-{
-#ifdef CONFIG_DEBUG_HIGHMEM
-   int idx = __this_cpu_dec_return(__kmap_atomic_idx);
-
-   BUG_ON(idx < 0);
-#else
-   __this_cpu_dec(__kmap_atomic_idx);
-#endif
-}
-
-#endif
-
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -31,10 +31,6 @@
 #include 
 #include 
 
-#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
-DEFINE_PER_CPU(int, __kmap_atomic_idx);
-#endif
-
 /*
  * Virtual_count is not a pure "count".
  *  0 means that it is not mapped, and has not been mapped
@@ -380,6 +376,30 @@ static inline void kmap_high_unmap_tempo
 #endif /* CONFIG_HIGHMEM */
 
 #ifdef CONFIG_KMAP_ATOMIC_GENERIC
+
+static DEFINE_PER_CPU(int, __kmap_atomic_idx);
+
+static inline int kmap_atomic_idx_push(void)
+{
+   int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
+
+   WARN_ON_ONCE(in_irq() && !irqs_disabled());
+   BUG_ON(idx >= KM_TYPE_NR);
+   return idx;
+}
+
+static inline int kmap_atomic_idx(void)
+{
+   return __this_cpu_read(__kmap_atomic_idx) - 1;
+}
+
+static inline void kmap_atomic_idx_pop(void)
+{
+   int idx = __this_cpu_dec_return(__kmap_atomic_idx);
+
+   BUG_ON(idx < 0);
+}
+
 #ifndef arch_kmap_temp_post_map
 # define arch_kmap_temp_post_map(vaddr, pteval)do { } while (0)
 #endif




[patch RFC 07/15] microblaze/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Michal Simek 
---
Note: Completely untested
---
 arch/microblaze/Kconfig   |1 
 arch/microblaze/include/asm/highmem.h |6 ++
 arch/microblaze/mm/Makefile   |1 
 arch/microblaze/mm/highmem.c  |   78 --
 arch/microblaze/mm/init.c |6 --
 5 files changed, 6 insertions(+), 86 deletions(-)

--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -170,6 +170,7 @@ config XILINX_UNCACHED_SHADOW
 config HIGHMEM
bool "High memory support"
depends on MMU
+   select KMAP_ATOMIC_GENERIC
help
  The address space of Microblaze processors is only 4 Gigabytes large
  and it has to accommodate user address space, kernel address
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -25,7 +25,6 @@
 #include 
 #include 
 
-extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
 /*
@@ -52,6 +51,11 @@ extern pte_t *pkmap_page_table;
 
 #define flush_cache_kmaps(){ flush_icache(); flush_dcache(); }
 
+#define arch_kmap_temp_post_map(vaddr, pteval) \
+   local_flush_tlb_page(NULL, vaddr);
+#define arch_kmap_temp_post_unmap(vaddr)   \
+   local_flush_tlb_page(NULL, vaddr);
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_HIGHMEM_H */
--- a/arch/microblaze/mm/Makefile
+++ b/arch/microblaze/mm/Makefile
@@ -6,4 +6,3 @@
 obj-y := consistent.o init.o
 
 obj-$(CONFIG_MMU) += pgtable.o mmu_context.o fault.o
-obj-$(CONFIG_HIGHMEM) += highmem.o
--- a/arch/microblaze/mm/highmem.c
+++ /dev/null
@@ -1,78 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * highmem.c: virtual kernel memory mappings for high memory
- *
- * PowerPC version, stolen from the i386 version.
- *
- * Used in CONFIG_HIGHMEM systems for memory pages which
- * are not addressable by direct kernel virtual addresses.
- *
- * Copyright (C) 1999 Gerhard Wichert, Siemens AG
- *   gerhard.wich...@pdb.siemens.de
- *
- *
- * Redesigned the x86 32-bit VM architecture to deal with
- * up to 16 Terrabyte physical memory. With current x86 CPUs
- * we now support up to 64 Gigabytes physical RAM.
- *
- * Copyright (C) 1999 Ingo Molnar 
- *
- * Reworked for PowerPC by various contributors. Moved from
- * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
- */
-
-#include 
-#include 
-
-/*
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need
- * it.
- */
-#include 
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-
-   unsigned long vaddr;
-   int idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(!pte_none(*(kmap_pte-idx)));
-#endif
-   set_pte_at(_mm, vaddr, kmap_pte-idx, mk_pte(page, prot));
-   local_flush_tlb_page(NULL, vaddr);
-
-   return (void *) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-   int type;
-   unsigned int idx;
-
-   if (vaddr < __fix_to_virt(FIX_KMAP_END))
-   return;
-
-   type = kmap_atomic_idx();
-
-   idx = type + KM_TYPE_NR * smp_processor_id();
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-#endif
-   /*
-* force other mappings to Oops if they'll try to access
-* this pte without first remap it
-*/
-   pte_clear(_mm, vaddr, kmap_pte-idx);
-   local_flush_tlb_page(NULL, vaddr);
-
-   kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -49,17 +49,11 @@ unsigned long lowmem_size;
 EXPORT_SYMBOL(min_low_pfn);
 EXPORT_SYMBOL(max_low_pfn);
 
-#ifdef CONFIG_HIGHMEM
-pte_t *kmap_pte;
-EXPORT_SYMBOL(kmap_pte);
-
 static void __init highmem_init(void)
 {
pr_debug("%x\n", (u32)PKMAP_BASE);
map_page(PKMAP_BASE, 0, 0); /* XXX gross */
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
-
-   kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
 }
 
 static void highmem_setup(void)




[patch RFC 08/15] mips/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Thomas Bogendoerfer 
Cc: linux-m...@vger.kernel.org
---
Note: Completely untested
---
 arch/mips/Kconfig   |1 
 arch/mips/include/asm/highmem.h |4 +-
 arch/mips/mm/highmem.c  |   77 
 arch/mips/mm/init.c |3 -
 4 files changed, 3 insertions(+), 82 deletions(-)

--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2654,6 +2654,7 @@ config MIPS_CRC_SUPPORT
 config HIGHMEM
bool "High Memory Support"
	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA
+   select KMAP_ATOMIC_GENERIC
 
 config CPU_SUPPORTS_HIGHMEM
bool
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -48,11 +48,11 @@ extern pte_t *pkmap_page_table;
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_pfn(unsigned long pfn);
 
 #define flush_cache_kmaps()BUG_ON(cpu_has_dc_aliases)
 
-extern void kmap_init(void);
+#define arch_kmap_temp_post_map(vaddr, pteval) local_flush_tlb_one(vaddr)
+#define arch_kmap_temp_post_unmap(vaddr)   local_flush_tlb_one(vaddr)
 
 #endif /* __KERNEL__ */
 
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -8,8 +8,6 @@
 #include 
 #include 
 
-static pte_t *kmap_pte;
-
 unsigned long highstart_pfn, highend_pfn;
 
 void kmap_flush_tlb(unsigned long addr)
@@ -17,78 +15,3 @@ void kmap_flush_tlb(unsigned long addr)
flush_tlb_one(addr);
 }
 EXPORT_SYMBOL(kmap_flush_tlb);
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(!pte_none(*(kmap_pte - idx)));
-#endif
-   set_pte(kmap_pte-idx, mk_pte(page, prot));
-   local_flush_tlb_one((unsigned long)vaddr);
-
-   return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-   int type __maybe_unused;
-
-   if (vaddr < FIXADDR_START)
-   return;
-
-   type = kmap_atomic_idx();
-#ifdef CONFIG_DEBUG_HIGHMEM
-   {
-   int idx = type + KM_TYPE_NR * smp_processor_id();
-
-   BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-   /*
-* force other mappings to Oops if they'll try to access
-* this pte without first remap it
-*/
-   pte_clear(_mm, vaddr, kmap_pte-idx);
-   local_flush_tlb_one(vaddr);
-   }
-#endif
-   kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-
-/*
- * This is the same as kmap_atomic() but can map memory that doesn't
- * have a struct page associated with it.
- */
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   preempt_disable();
-   pagefault_disable();
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
-   flush_tlb_one(vaddr);
-
-   return (void*) vaddr;
-}
-
-void __init kmap_init(void)
-{
-   unsigned long kmap_vstart;
-
-   /* cache the first kmap pte */
-   kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
-   kmap_pte = virt_to_kpte(kmap_vstart);
-}
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -402,9 +402,6 @@ void __init paging_init(void)
 
pagetable_init();
 
-#ifdef CONFIG_HIGHMEM
-   kmap_init();
-#endif
 #ifdef CONFIG_ZONE_DMA
max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
 #endif




[patch RFC 09/15] nds32/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
The mapping code is odd and looks broken. See FIXME in the comment.

Signed-off-by: Thomas Gleixner 
Cc: Nick Hu 
Cc: Greentime Hu 
Cc: Vincent Chen 
---
Note: Completely untested
---
 arch/nds32/Kconfig.cpu   |1 
 arch/nds32/include/asm/highmem.h |   21 +
 arch/nds32/mm/Makefile   |1 
 arch/nds32/mm/highmem.c  |   48 ---
 4 files changed, 17 insertions(+), 54 deletions(-)

--- a/arch/nds32/Kconfig.cpu
+++ b/arch/nds32/Kconfig.cpu
@@ -157,6 +157,7 @@ config HW_SUPPORT_UNALIGNMENT_ACCESS
 config HIGHMEM
bool "High Memory Support"
depends on MMU && !CPU_CACHE_ALIASING
+	select KMAP_ATOMIC_GENERIC
help
  The address space of Andes processors is only 4 Gigabytes large
  and it has to accommodate user address space, kernel address
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -45,11 +45,22 @@ extern pte_t *pkmap_page_table;
 extern void kmap_init(void);
 
 /*
- * The following functions are already defined by 
- * when CONFIG_HIGHMEM is not set.
+ * FIXME: The below looks broken vs. a kmap_atomic() in task context which
+ * is interrupted and another kmap_atomic() happens in interrupt context.
+ * But what do I know about nds32. -- tglx
  */
-#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_pfn(unsigned long pfn);
-#endif
+#define arch_kmap_temp_post_map(vaddr, pteval) \
+   do {\
+   __nds32__tlbop_inv(vaddr);  \
+   __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); \
+   __nds32__tlbop_rwr(pteval); \
+   __nds32__isb(); \
+   } while (0)
+
+#define arch_kmap_temp_pre_unmap(vaddr, pte)   \
+   do {\
+   __nds32__tlbop_inv(vaddr);  \
+   __nds32__isb(); \
+   } while (0)
 
 #endif
--- a/arch/nds32/mm/Makefile
+++ b/arch/nds32/mm/Makefile
@@ -3,7 +3,6 @@ obj-y   := extable.o tlb.o fault.o init
mm-nds32.o cacheflush.o proc.o
 
 obj-$(CONFIG_ALIGNMENT_TRAP)   += alignment.o
-obj-$(CONFIG_HIGHMEM)   += highmem.o
 
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE)
--- a/arch/nds32/mm/highmem.c
+++ /dev/null
@@ -1,48 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-// Copyright (C) 2005-2017 Andes Technology Corporation
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned int idx;
-   unsigned long vaddr, pte;
-   int type;
-   pte_t *ptep;
-
-   type = kmap_atomic_idx_push();
-
-   idx = type + KM_TYPE_NR * smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   pte = (page_to_pfn(page) << PAGE_SHIFT) | prot;
-   ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
-   set_pte(ptep, pte);
-
-   __nds32__tlbop_inv(vaddr);
-   __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN);
-   __nds32__tlbop_rwr(pte);
-   __nds32__isb();
-   return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   if (kvaddr >= (void *)FIXADDR_START) {
-   unsigned long vaddr = (unsigned long)kvaddr;
-   pte_t *ptep;
-   kmap_atomic_idx_pop();
-   __nds32__tlbop_inv(vaddr);
-   __nds32__isb();
-   ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
-   set_pte(ptep, 0);
-   }
-}
-EXPORT_SYMBOL(kunmap_atomic_high);




[patch RFC 15/15] mm/highmem: Provide kmap_temporary*

2020-09-19 Thread Thomas Gleixner
Now that the kmap atomic index is stored in the task struct, provide a
preemptible variant. On context switch, the maps of an outgoing task are
removed and the maps of the incoming task are restored. That's obviously
slow, but highmem is slow anyway.

The kmap_temporary and iomap_temporary interfaces can be invoked from both
preemptible and atomic context.

A wholesale conversion of kmap_atomic to be fully preemptible is not
possible because some of the usage sites might rely on the preemption
disable for serialization or per CPUness. Needs to be done on a case by
case basis.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/include/asm/iomap.h |   16 -
 arch/x86/mm/iomap_32.c   |7 +---
 include/linux/highmem.h  |   70 +--
 mm/highmem.c |   18 +--
 4 files changed, 80 insertions(+), 31 deletions(-)

--- a/arch/x86/include/asm/iomap.h
+++ b/arch/x86/include/asm/iomap.h
@@ -13,11 +13,23 @@
 #include 
 #include 
 
-void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
+void __iomem *iomap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot);
+
+static inline void __iomem *iomap_atomic_pfn_prot(unsigned long pfn,
+ pgprot_t prot)
+{
+   preempt_disable();
+   return iomap_temporary_pfn_prot(pfn, prot);
+}
+
+static inline void iounmap_temporary(void __iomem *vaddr)
+{
+   kunmap_temporary_indexed((void __force *)vaddr);
+}
 
 static inline void iounmap_atomic(void __iomem *vaddr)
 {
-   kunmap_atomic_indexed((void __force *)vaddr);
+   iounmap_temporary(vaddr);
preempt_enable();
 }
 
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -44,7 +44,7 @@ void iomap_free(resource_size_t base, un
 }
 EXPORT_SYMBOL_GPL(iomap_free);
 
-void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot)
+void __iomem *iomap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
 {
/*
 * For non-PAT systems, translate non-WB request to UC- just in
@@ -60,7 +60,6 @@ void __iomem *iomap_atomic_pfn_prot(unsi
/* Filter out unsupported __PAGE_KERNEL* bits: */
pgprot_val(prot) &= __default_kernel_pte_mask;
 
-   preempt_disable();
-   return (void __force __iomem *)kmap_atomic_pfn_prot(pfn, prot);
+   return (void __force __iomem *)__kmap_temporary_pfn_prot(pfn, prot);
 }
-EXPORT_SYMBOL_GPL(iomap_atomic_pfn_prot);
+EXPORT_SYMBOL_GPL(iomap_temporary_pfn_prot);
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -35,9 +35,9 @@ static inline void invalidate_kernel_vma
  * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft.
  */
 #ifdef CONFIG_KMAP_ATOMIC_GENERIC
-void *kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
-void *kmap_atomic_page_prot(struct page *page, pgprot_t prot);
-void kunmap_atomic_indexed(void *vaddr);
+void *__kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot);
+void *__kmap_temporary_page_prot(struct page *page, pgprot_t prot);
+void kunmap_temporary_indexed(void *vaddr);
 void kmap_switch_temporary(struct task_struct *prev, struct task_struct *next);
 # ifndef ARCH_NEEDS_KMAP_HIGH_GET
 static inline void *arch_kmap_temporary_high_get(struct page *page)
@@ -95,16 +95,35 @@ static inline void kunmap(struct page *p
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_temporary_page_prot(struct page *page, pgprot_t prot)
 {
-   preempt_disable();
-   return kmap_atomic_page_prot(page, prot);
+   return __kmap_temporary_page_prot(page, prot);
 }
 
-static inline void *kmap_atomic_pfn(unsigned long pfn)
+static inline void *kmap_temporary_page(struct page *page)
+{
+   return kmap_temporary_page_prot(page, kmap_prot);
+}
+
+static inline void *kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
+{
+   return __kmap_temporary_pfn_prot(pfn, prot);
+}
+
+static inline void *kmap_temporary_pfn(unsigned long pfn)
+{
+   return kmap_temporary_pfn_prot(pfn, kmap_prot);
+}
+
+static inline void __kunmap_temporary(void *vaddr)
+{
+   kunmap_temporary_indexed(vaddr);
+}
+
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
preempt_disable();
-   return kmap_atomic_pfn_prot(pfn, kmap_prot);
+   return kmap_temporary_page_prot(page, prot);
 }
 
 static inline void *kmap_atomic(struct page *page)
@@ -112,9 +131,10 @@ static inline void *kmap_atomic(struct p
return kmap_atomic_prot(page, kmap_prot);
 }
 
-static inline void __kunmap_atomic(void *addr)
+static inline void *kmap_atomic_pfn(unsigned long pfn)
 {
-   kunmap_atomic_indexed(addr);
+   preempt_disable();
+   return kmap_temporary_pfn_prot(pfn, kmap_prot);
 }
 
 /* declarations for linux/mm/highmem.c */
@@ -177,6 +197,22 @@ static inline void kunmap(struct page *p
 #endif
 }
 
+static inline void 

[patch RFC 11/15] sparc/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: "David S. Miller" 
Cc: sparcli...@vger.kernel.org
---
Note: Completely untested
---
 arch/sparc/Kconfig   |1 
 arch/sparc/include/asm/highmem.h |7 +-
 arch/sparc/mm/Makefile   |3 -
 arch/sparc/mm/highmem.c  |  115 ---
 arch/sparc/mm/srmmu.c|2 
 5 files changed, 6 insertions(+), 122 deletions(-)

--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -137,6 +137,7 @@ config MMU
 config HIGHMEM
bool
default y if SPARC32
+   select KMAP_ATOMIC_GENERIC
 
 config ZONE_DMA
bool
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -33,8 +33,6 @@ extern unsigned long highstart_pfn, high
 #define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)
 extern pte_t *pkmap_page_table;
 
-void kmap_init(void) __init;
-
 /*
  * Right now we initialize only a single pte table. It can be extended
  * easily, subsequent pte tables have to be allocated in one physical
@@ -53,6 +51,11 @@ void kmap_init(void) __init;
 
 #define flush_cache_kmaps()   flush_cache_all()
 
+/* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */
+#define arch_kmap_temp_post_map(vaddr, pteval) flush_cache_all()
+#define arch_kmap_temp_post_unmap(vaddr)   flush_cache_all()
+
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_HIGHMEM_H */
--- a/arch/sparc/mm/Makefile
+++ b/arch/sparc/mm/Makefile
@@ -15,6 +15,3 @@ obj-$(CONFIG_SPARC32)   += leon_mm.o
 
 # Only used by sparc64
 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
-
-# Only used by sparc32
-obj-$(CONFIG_HIGHMEM)   += highmem.o
--- a/arch/sparc/mm/highmem.c
+++ /dev/null
@@ -1,115 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- *  highmem.c: virtual kernel memory mappings for high memory
- *
- *  Provides kernel-static versions of atomic kmap functions originally
- *  found as inlines in include/asm-sparc/highmem.h.  These became
- *  needed as kmap_atomic() and kunmap_atomic() started getting
- *  called from within modules.
- *  -- Tomas Szepe , September 2002
- *
- *  But kmap_atomic() and kunmap_atomic() cannot be inlined in
- *  modules because they are loaded with btfixup-ped functions.
- */
-
-/*
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need it.
- *
- * XXX This is an old text. Actually, it's good to use atomic kmaps,
- * provided you remember that they are atomic and not try to sleep
- * with a kmap taken, much like a spinlock. Non-atomic kmaps are
- * shared by CPUs, and so precious, and establishing them requires IPI.
- * Atomic kmaps are lightweight and we may have NCPUS more of them.
- */
-#include 
-#include 
-#include 
-
-#include 
-#include 
-#include 
-
-static pte_t *kmap_pte;
-
-void __init kmap_init(void)
-{
-   unsigned long address = __fix_to_virt(FIX_KMAP_BEGIN);
-
-/* cache the first kmap pte */
-kmap_pte = virt_to_kpte(address);
-}
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned long vaddr;
-   long idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-
-/* XXX Fix - Anton */
-#if 0
-   __flush_cache_one(vaddr);
-#else
-   flush_cache_all();
-#endif
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(!pte_none(*(kmap_pte-idx)));
-#endif
-   set_pte(kmap_pte-idx, mk_pte(page, prot));
-/* XXX Fix - Anton */
-#if 0
-   __flush_tlb_one(vaddr);
-#else
-   flush_tlb_all();
-#endif
-
-   return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-   int type;
-
-   if (vaddr < FIXADDR_START)
-   return;
-
-   type = kmap_atomic_idx();
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   {
-   unsigned long idx;
-
-   idx = type + KM_TYPE_NR * smp_processor_id();
-   BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN+idx));
-
-   /* XXX Fix - Anton */
-#if 0
-   __flush_cache_one(vaddr);
-#else
-   flush_cache_all();
-#endif
-
-   /*
-* force other mappings to Oops if they'll try to access
-* this pte without first remap it
-*/
-   pte_clear(&init_mm, vaddr, kmap_pte-idx);
-   /* XXX Fix - Anton */
-#if 0
-   __flush_tlb_one(vaddr);
-#else
-   flush_tlb_all();
-#endif
-   }
-#endif
-
-   kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -971,8 +971,6 @@ void __init srmmu_paging_init(void)
 
sparc_context_init(num_contexts);
 
-   kmap_init();
-

[patch RFC 14/15] sched: highmem: Store temporary kmaps in task struct

2020-09-19 Thread Thomas Gleixner
Instead of storing the maps per CPU, provide and use per-task storage. That
prepares for temporary kmaps which are preemptible.

The context switch code is preparatory and not yet in use because
kmap_atomic() runs with preemption disabled. It will be made usable in the
next step.
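The save/restore idea behind kmap_switch_temporary() can be modeled in userspace. In this sketch the fixmap pte slots are a plain array and a zero entry means "not mapped"; the names mirror the patch, but this is only an illustration of clearing the outgoing task's maps and reinstalling the incoming task's maps:

```c
#include <assert.h>

/* Userspace model of kmap_switch_temporary(): fixmap slots are a plain
 * array, a zero entry means "not mapped". Real code walks ptes. */
#define KM_TYPE_NR 20

struct task {
	int idx;				/* active temporary maps */
	unsigned long pteval[KM_TYPE_NR];	/* saved pte values */
};

static unsigned long fixmap[KM_TYPE_NR];	/* stand-in for kmap ptes */

static void kmap_switch_temporary(struct task *prev, struct task *next)
{
	int i;

	/* Clear @prev's still-active temporary maps */
	for (i = 0; i < prev->idx; i++)
		fixmap[i] = 0;

	/* Restore @next's maps from its task-struct copy */
	for (i = 0; i < next->idx; i++)
		fixmap[i] = next->pteval[i];
}
```

This matches the scheduler hook in the patch, which only calls the switch function when either task actually has active maps (`prev->kmap_ctrl.idx || next->kmap_ctrl.idx`), so the common case stays free of overhead.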

Signed-off-by: Thomas Gleixner 
---
 include/linux/highmem.h |1 
 include/linux/sched.h   |9 +++
 kernel/sched/core.c |   10 
 mm/highmem.c|   59 ++--
 4 files changed, 72 insertions(+), 7 deletions(-)

--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -38,6 +38,7 @@ static inline void invalidate_kernel_vma
 void *kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
 void *kmap_atomic_page_prot(struct page *page, pgprot_t prot);
 void kunmap_atomic_indexed(void *vaddr);
+void kmap_switch_temporary(struct task_struct *prev, struct task_struct *next);
 # ifndef ARCH_NEEDS_KMAP_HIGH_GET
 static inline void *arch_kmap_temporary_high_get(struct page *page)
 {
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
@@ -628,6 +629,13 @@ struct wake_q_node {
struct wake_q_node *next;
 };
 
+struct kmap_ctrl {
+#ifdef CONFIG_KMAP_ATOMIC_GENERIC
+   int idx;
+   pte_t   pteval[KM_TYPE_NR];
+#endif
+};
+
 struct task_struct {
 #ifdef CONFIG_THREAD_INFO_IN_TASK
/*
@@ -1280,6 +1288,7 @@ struct task_struct {
	unsigned int		sequential_io;
	unsigned int		sequential_io_avg;
 #endif
+	struct kmap_ctrl	kmap_ctrl;
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
unsigned long   task_state_change;
 #endif
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3529,6 +3529,15 @@ static inline void finish_lock_switch(st
 # define finish_arch_post_lock_switch()   do { } while (0)
 #endif
 
+static inline void kmap_temp_switch(struct task_struct *prev,
+   struct task_struct *next)
+{
+#ifdef CONFIG_HIGHMEM
+   if (unlikely(prev->kmap_ctrl.idx || next->kmap_ctrl.idx))
+   kmap_switch_temporary(prev, next);
+#endif
+}
+
 /**
  * prepare_task_switch - prepare to switch tasks
  * @rq: the runqueue preparing to switch
@@ -3551,6 +3560,7 @@ prepare_task_switch(struct rq *rq, struc
perf_event_task_sched_out(prev, next);
rseq_preempt(prev);
fire_sched_out_preempt_notifiers(prev, next);
+   kmap_temp_switch(prev, next);
prepare_task(next);
prepare_arch_switch(next);
 }
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -370,6 +370,7 @@ void kunmap_high(struct page *page)
if (need_wakeup)
 		wake_up(&pkmap_map_wait);
 }
+
 EXPORT_SYMBOL(kunmap_high);
 #else
 static inline void kmap_high_unmap_temporary(unsigned long vaddr) { }
@@ -377,11 +378,9 @@ static inline void kmap_high_unmap_tempo
 
 #ifdef CONFIG_KMAP_ATOMIC_GENERIC
 
-static DEFINE_PER_CPU(int, __kmap_atomic_idx);
-
 static inline int kmap_atomic_idx_push(void)
 {
-   int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
+   int idx = current->kmap_ctrl.idx++;
 
WARN_ON_ONCE(in_irq() && !irqs_disabled());
BUG_ON(idx >= KM_TYPE_NR);
@@ -390,14 +389,13 @@ static inline int kmap_atomic_idx_push(v
 
 static inline int kmap_atomic_idx(void)
 {
-   return __this_cpu_read(__kmap_atomic_idx) - 1;
+   return current->kmap_ctrl.idx - 1;
 }
 
 static inline void kmap_atomic_idx_pop(void)
 {
-   int idx = __this_cpu_dec_return(__kmap_atomic_idx);
-
-   BUG_ON(idx < 0);
+   current->kmap_ctrl.idx--;
+   BUG_ON(current->kmap_ctrl.idx < 0);
 }
 
 #ifndef arch_kmap_temp_post_map
@@ -447,6 +445,7 @@ static void *__kmap_atomic_pfn_prot(unsi
pteval = pfn_pte(pfn, prot);
set_pte(kmap_pte - idx, pteval);
arch_kmap_temp_post_map(vaddr, pteval);
+   current->kmap_ctrl.pteval[kmap_atomic_idx()] = pteval;
preempt_enable();
 
return (void *)vaddr;
@@ -499,11 +498,57 @@ void kunmap_atomic_indexed(void *vaddr)
arch_kmap_temp_pre_unmap(addr);
 	pte_clear(&init_mm, addr, kmap_pte - idx);
arch_kmap_temp_post_unmap(addr);
+   current->kmap_ctrl.pteval[kmap_atomic_idx()] = __pte(0);
kmap_atomic_idx_pop();
preempt_enable();
pagefault_enable();
 }
 EXPORT_SYMBOL(kunmap_atomic_indexed);
+
+void kmap_switch_temporary(struct task_struct *prev, struct task_struct *next)
+{
+   pte_t *kmap_pte = kmap_get_pte();
+   int i;
+
+   /* Clear @prev's kmaps */
+   for (i = 0; i < prev->kmap_ctrl.idx; i++) {
+   pte_t pteval = prev->kmap_ctrl.pteval[i];
+   unsigned long addr;
+   int idx;
+
+   if (WARN_ON_ONCE(pte_none(pteval)))
+ 

[patch RFC 12/15] xtensa/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Chris Zankel 
Cc: Max Filippov 
Cc: linux-xte...@linux-xtensa.org
---
Note: Completely untested
---
 arch/xtensa/Kconfig   |1 
 arch/xtensa/include/asm/highmem.h |9 +++
 arch/xtensa/mm/highmem.c  |   44 +++---
 3 files changed, 14 insertions(+), 40 deletions(-)

--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -679,6 +679,7 @@ endchoice
 config HIGHMEM
bool "High Memory Support"
depends on MMU
+   select KMAP_ATOMIC_GENERIC
help
  Linux can use the full amount of RAM in the system by
  default. However, the default MMUv2 setup only maps the
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -68,6 +68,15 @@ static inline void flush_cache_kmaps(voi
flush_cache_all();
 }
 
+enum fixed_addresses kmap_temp_map_idx(int type, unsigned long pfn);
+#define arch_kmap_temp_map_idx kmap_temp_map_idx
+
+enum fixed_addresses kmap_temp_unmap_idx(int type, unsigned long addr);
+#define arch_kmap_temp_unmap_idx   kmap_temp_unmap_idx
+
+#define arch_kmap_temp_post_unmap(vaddr)   \
+   local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
+
 void kmap_init(void);
 
 #endif
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -12,8 +12,6 @@
 #include 
 #include 
 
-static pte_t *kmap_pte;
-
 #if DCACHE_WAY_SIZE > PAGE_SIZE
 unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
 wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS];
@@ -37,55 +35,21 @@ static inline enum fixed_addresses kmap_
color;
 }
 
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
+enum fixed_addresses kmap_temp_map_idx(int type, unsigned long pfn)
 {
-   enum fixed_addresses idx;
-   unsigned long vaddr;
-
-   idx = kmap_idx(kmap_atomic_idx_push(),
-  DCACHE_ALIAS(page_to_phys(page)));
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(!pte_none(*(kmap_pte + idx)));
-#endif
-   set_pte(kmap_pte + idx, mk_pte(page, prot));
-
-   return (void *)vaddr;
+   return kmap_idx(type, DCACHE_ALIAS(pfn << PAGE_SHIFT));
 }
-EXPORT_SYMBOL(kmap_atomic_high_prot);
 
-void kunmap_atomic_high(void *kvaddr)
+enum fixed_addresses kmap_temp_unmap_idx(int type, unsigned long addr)
 {
-   if (kvaddr >= (void *)FIXADDR_START &&
-   kvaddr < (void *)FIXADDR_TOP) {
-   int idx = kmap_idx(kmap_atomic_idx(),
-  DCACHE_ALIAS((unsigned long)kvaddr));
-
-   /*
-* Force other mappings to Oops if they'll try to access this
-* pte without first remap it.  Keeping stale mappings around
-* is a bad idea also, in case the page changes cacheability
-* attributes or becomes a protected page in a hypervisor.
-*/
-   pte_clear(&init_mm, kvaddr, kmap_pte + idx);
-   local_flush_tlb_kernel_range((unsigned long)kvaddr,
-(unsigned long)kvaddr + PAGE_SIZE);
-
-   kmap_atomic_idx_pop();
-   }
+   return kmap_idx(type, DCACHE_ALIAS(addr));
 }
-EXPORT_SYMBOL(kunmap_atomic_high);
 
 void __init kmap_init(void)
 {
-   unsigned long kmap_vstart;
-
/* Check if this memory layout is broken because PKMAP overlaps
 * page table.
 */
BUILD_BUG_ON(PKMAP_BASE < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
-   /* cache the first kmap pte */
-   kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
-   kmap_pte = virt_to_kpte(kmap_vstart);
kmap_waitqueues_init();
 }




[patch RFC 10/15] powerpc/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: linuxppc-...@lists.ozlabs.org
---
Note: Completely untested
---
 arch/powerpc/Kconfig   |1 
 arch/powerpc/include/asm/highmem.h |6 ++-
 arch/powerpc/mm/Makefile   |1 
 arch/powerpc/mm/highmem.c  |   67 -
 arch/powerpc/mm/mem.c  |7 ---
 5 files changed, 6 insertions(+), 76 deletions(-)

--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -406,6 +406,7 @@ menu "Kernel options"
 config HIGHMEM
bool "High memory support"
depends on PPC32
+   select KMAP_ATOMIC_GENERIC
 
 source "kernel/Kconfig.hz"
 
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -29,7 +29,6 @@
 #include 
 #include 
 
-extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
 /*
@@ -60,6 +59,11 @@ extern pte_t *pkmap_page_table;
 
 #define flush_cache_kmaps()   flush_cache_all()
 
+#define arch_kmap_temp_post_map(vaddr, pteval) \
+   local_flush_tlb_page(NULL, vaddr)
+#define arch_kmap_temp_post_unmap(vaddr)   \
+   local_flush_tlb_page(NULL, vaddr)
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_HIGHMEM_H */
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -16,7 +16,6 @@ obj-$(CONFIG_NEED_MULTIPLE_NODES) += num
 obj-$(CONFIG_PPC_MM_SLICES)   += slice.o
 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
 obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
-obj-$(CONFIG_HIGHMEM)  += highmem.o
 obj-$(CONFIG_PPC_COPRO_BASE)   += copro_fault.o
 obj-$(CONFIG_PPC_PTDUMP)   += ptdump/
 obj-$(CONFIG_KASAN)   += kasan/
--- a/arch/powerpc/mm/highmem.c
+++ /dev/null
@@ -1,67 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * highmem.c: virtual kernel memory mappings for high memory
- *
- * PowerPC version, stolen from the i386 version.
- *
- * Used in CONFIG_HIGHMEM systems for memory pages which
- * are not addressable by direct kernel virtual addresses.
- *
- * Copyright (C) 1999 Gerhard Wichert, Siemens AG
- *   gerhard.wich...@pdb.siemens.de
- *
- *
- * Redesigned the x86 32-bit VM architecture to deal with
- * up to 16 Terrabyte physical memory. With current x86 CPUs
- * we now support up to 64 Gigabytes physical RAM.
- *
- * Copyright (C) 1999 Ingo Molnar 
- *
- * Reworked for PowerPC by various contributors. Moved from
- * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
- */
-
-#include 
-#include 
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx)));
-   __set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot), 1);
-   local_flush_tlb_page(NULL, vaddr);
-
-   return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-
-   if (vaddr < __fix_to_virt(FIX_KMAP_END))
-   return;
-
-   if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM)) {
-   int type = kmap_atomic_idx();
-   unsigned int idx;
-
-   idx = type + KM_TYPE_NR * smp_processor_id();
-   WARN_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-   /*
-* force other mappings to Oops if they'll try to access
-* this pte without first remap it
-*/
-   pte_clear(&init_mm, vaddr, kmap_pte-idx);
-   local_flush_tlb_page(NULL, vaddr);
-   }
-
-   kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -60,11 +60,6 @@
 unsigned long long memory_limit;
 bool init_mem_is_free;
 
-#ifdef CONFIG_HIGHMEM
-pte_t *kmap_pte;
-EXPORT_SYMBOL(kmap_pte);
-#endif
-
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
  unsigned long size, pgprot_t vma_prot)
 {
@@ -233,8 +228,6 @@ void __init paging_init(void)
 
 	map_kernel_page(PKMAP_BASE, 0, __pgprot(0));   /* XXX gross */
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
-
-   kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
 #endif /* CONFIG_HIGHMEM */
 
printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",




[patch RFC 02/15] highmem: Provide generic variant of kmap_atomic*

2020-09-19 Thread Thomas Gleixner
The kmap_atomic* interfaces in all architectures are pretty much the same
except for the post-map operations (flush) and the pre- and post-unmap
operations.

Provide a generic variant covering these.
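The way the generic code absorbs those per-architecture differences is via overridable hook macros with empty defaults. The userspace sketch below illustrates only the pattern; call counters stand in for the real MMU work, and the names mirror (but are not) the kernel hooks:

```c
#include <assert.h>

/* Sketch of the hook pattern: the generic code supplies no-op defaults,
 * and an architecture overrides the macros it needs (e.g. a post-map
 * TLB flush). A counter replaces the real flush here. */
static int post_map_calls;

/* An "architecture" that needs a post-map flush defines the hook... */
#define arch_kmap_temp_post_map(vaddr, pteval)	\
	((void)(vaddr), (void)(pteval), post_map_calls++)

/* ...and the generic code falls back to no-ops for undefined hooks. */
#ifndef arch_kmap_temp_post_map
#define arch_kmap_temp_post_map(vaddr, pteval)	do { } while (0)
#endif
#ifndef arch_kmap_temp_pre_unmap
#define arch_kmap_temp_pre_unmap(vaddr)		do { } while (0)
#endif

static unsigned long generic_kmap(unsigned long vaddr, unsigned long pteval)
{
	/* set_pte(...) would go here */
	arch_kmap_temp_post_map(vaddr, pteval);
	return vaddr;
}

static void generic_kunmap(unsigned long vaddr)
{
	(void)vaddr;
	arch_kmap_temp_pre_unmap(vaddr);
	/* pte_clear(...) would go here */
}
```

The same approach shows up in the per-architecture patches later in the series, where e.g. csky maps both hooks to kmap_flush_tlb() and ARM flushes the dcache before unmap on VIVT caches.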

Signed-off-by: Thomas Gleixner 
---
 include/linux/highmem.h |   87 ---
 mm/Kconfig  |3 +
 mm/highmem.c|  119 +++-
 3 files changed, 192 insertions(+), 17 deletions(-)

--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -31,9 +31,22 @@ static inline void invalidate_kernel_vma
 
 #include 
 
+/*
+ * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft.
+ */
+#ifdef CONFIG_KMAP_ATOMIC_GENERIC
+void *kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
+void *kmap_atomic_page_prot(struct page *page, pgprot_t prot);
+void kunmap_atomic_indexed(void *vaddr);
+# ifndef ARCH_NEEDS_KMAP_HIGH_GET
+static inline void *arch_kmap_temporary_high_get(struct page *page)
+{
+   return NULL;
+}
+# endif
+#endif
+
 #ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
-extern void kunmap_atomic_high(void *kvaddr);
 #include 
 
 #ifndef ARCH_HAS_KMAP_FLUSH_TLB
@@ -81,6 +94,11 @@ static inline void kunmap(struct page *p
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
+
+#ifndef CONFIG_KMAP_ATOMIC_GENERIC
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+void kunmap_atomic_high(void *kvaddr);
+
 static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
preempt_disable();
@@ -89,7 +107,38 @@ static inline void *kmap_atomic_prot(str
return page_address(page);
return kmap_atomic_high_prot(page, prot);
 }
-#define kmap_atomic(page)  kmap_atomic_prot(page, kmap_prot)
+
+static inline void __kunmap_atomic(void *vaddr)
+{
+   kunmap_atomic_high(vaddr);
+   pagefault_enable();
+}
+#else /* !CONFIG_KMAP_ATOMIC_GENERIC */
+
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+   preempt_disable();
+   return kmap_atomic_page_prot(page, prot);
+}
+
+static inline void *kmap_atomic_pfn(unsigned long pfn)
+{
+   preempt_disable();
+   return kmap_atomic_pfn_prot(pfn, kmap_prot);
+}
+
+static inline void __kunmap_atomic(void *addr)
+{
+   kunmap_atomic_indexed(addr);
+}
+
+
+#endif /* CONFIG_KMAP_ATOMIC_GENERIC */
+
+static inline void *kmap_atomic(struct page *page)
+{
+   return kmap_atomic_prot(page, kmap_prot);
+}
 
 /* declarations for linux/mm/highmem.c */
 unsigned int nr_free_highpages(void);
@@ -157,21 +206,29 @@ static inline void *kmap_atomic(struct p
pagefault_disable();
return page_address(page);
 }
-#define kmap_atomic_prot(page, prot)   kmap_atomic(page)
 
-static inline void kunmap_atomic_high(void *addr)
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+   return kmap_atomic(page);
+}
+
+static inline void *kmap_atomic_pfn(unsigned long pfn)
+{
+   return kmap_atomic(pfn_to_page(pfn));
+}
+
+static inline void __kunmap_atomic(void *addr)
 {
/*
 * Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
-* handles re-enabling faults + preemption
+* handles preemption
 */
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
kunmap_flush_on_unmap(addr);
 #endif
+   pagefault_enable();
 }
 
-#define kmap_atomic_pfn(pfn)   kmap_atomic(pfn_to_page(pfn))
-
 #define kmap_flush_unused()   do {} while(0)
 
 #endif /* CONFIG_HIGHMEM */
@@ -213,14 +270,12 @@ static inline void kmap_atomic_idx_pop(v
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
  */
-#define kunmap_atomic(addr) \
-do {\
-   BUILD_BUG_ON(__same_type((addr), struct page *));   \
-   kunmap_atomic_high(addr);  \
-   pagefault_enable(); \
-   preempt_enable();   \
-} while (0)
-
+#define kunmap_atomic(addr)\
+   do {\
+   BUILD_BUG_ON(__same_type((addr), struct page *));   \
+   __kunmap_atomic(addr);  \
+   preempt_enable();   \
+   } while (0)
 
 /* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
 #ifndef clear_user_highpage
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -868,4 +868,7 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
 bool
 
+config KMAP_ATOMIC_GENERIC
+   bool
+
 endmenu
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -314,6 +314,15 @@ void *kmap_high_get(struct page *page)

[patch RFC 06/15] csky/mm/highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Guo Ren 
Cc: linux-c...@vger.kernel.org
---
Note: Completely untested
---
 arch/csky/Kconfig   |1 
 arch/csky/include/asm/highmem.h |4 +-
 arch/csky/mm/highmem.c  |   75 
 3 files changed, 5 insertions(+), 75 deletions(-)

--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -285,6 +285,7 @@ config NR_CPUS
 config HIGHMEM
bool "High Memory Support"
depends on !CPU_CK610
+   select KMAP_ATOMIC_GENERIC
default y
 
 config FORCE_MAX_ZONEORDER
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -32,10 +32,12 @@ extern pte_t *pkmap_page_table;
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_pfn(unsigned long pfn);
 
 #define flush_cache_kmaps() do {} while (0)
 
+#define arch_kmap_temp_post_map(vaddr, pteval) kmap_flush_tlb(vaddr)
+#define arch_kmap_temp_post_unmap(vaddr)   kmap_flush_tlb(vaddr)
+
 extern void kmap_init(void);
 
 #endif /* __KERNEL__ */
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -9,8 +9,6 @@
 #include 
 #include 
 
-static pte_t *kmap_pte;
-
 unsigned long highstart_pfn, highend_pfn;
 
 void kmap_flush_tlb(unsigned long addr)
@@ -19,67 +17,7 @@ void kmap_flush_tlb(unsigned long addr)
 }
 EXPORT_SYMBOL(kmap_flush_tlb);
 
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(!pte_none(*(kmap_pte - idx)));
-#endif
-   set_pte(kmap_pte-idx, mk_pte(page, prot));
-   flush_tlb_one((unsigned long)vaddr);
-
-   return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-   int idx;
-
-   if (vaddr < FIXADDR_START)
-   return;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   idx = KM_TYPE_NR*smp_processor_id() + kmap_atomic_idx();
-
-   BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-   pte_clear(&init_mm, vaddr, kmap_pte - idx);
-   flush_tlb_one(vaddr);
-#else
-   (void) idx; /* to kill a warning */
-#endif
-   kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-
-/*
- * This is the same as kmap_atomic() but can map memory that doesn't
- * have a struct page associated with it.
- */
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   pagefault_disable();
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
-   flush_tlb_one(vaddr);
-
-   return (void *) vaddr;
-}
-
-static void __init kmap_pages_init(void)
+void __init kmap_init(void)
 {
unsigned long vaddr;
pgd_t *pgd;
@@ -96,14 +34,3 @@ static void __init kmap_pages_init(void)
pte = pte_offset_kernel(pmd, vaddr);
pkmap_page_table = pte;
 }
-
-void __init kmap_init(void)
-{
-   unsigned long vaddr;
-
-   kmap_pages_init();
-
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN);
-
-   kmap_pte = pte_offset_kernel((pmd_t *)pgd_offset_k(vaddr), vaddr);
-}




[patch RFC 05/15] ARM: highmem: Switch to generic kmap atomic

2020-09-19 Thread Thomas Gleixner
Signed-off-by: Thomas Gleixner 
Cc: Russell King 
Cc: Arnd Bergmann 
Cc: linux-arm-ker...@lists.infradead.org
---
Note: Completely untested
---
 arch/arm/Kconfig   |1 
 arch/arm/include/asm/highmem.h |   30 +++---
 arch/arm/mm/Makefile   |1 
 arch/arm/mm/highmem.c  |  121 -
 4 files changed, 23 insertions(+), 130 deletions(-)

--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1499,6 +1499,7 @@ config HAVE_ARCH_PFN_VALID
 config HIGHMEM
bool "High Memory Support"
depends on MMU
+   select KMAP_ATOMIC_GENERIC
help
  The address space of ARM processors is only 4 Gigabytes large
  and it has to accommodate user address space, kernel address
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -46,19 +46,33 @@ extern pte_t *pkmap_page_table;
 
 #ifdef ARCH_NEEDS_KMAP_HIGH_GET
 extern void *kmap_high_get(struct page *page);
+
+#ifdef CONFIG_DEBUG_HIGHMEM
+extern void *arch_kmap_temporary_high_get(struct page *page);
 #else
+static inline void *arch_kmap_temporary_high_get(struct page *page)
+{
+   return kmap_high_get(page);
+}
+#endif /* !CONFIG_DEBUG_HIGHMEM */
+
+#else /* ARCH_NEEDS_KMAP_HIGH_GET */
 static inline void *kmap_high_get(struct page *page)
 {
return NULL;
 }
-#endif
+#endif /* !ARCH_NEEDS_KMAP_HIGH_GET */
 
-/*
- * The following functions are already defined by 
- * when CONFIG_HIGHMEM is not set.
- */
-#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_pfn(unsigned long pfn);
-#endif
+#define arch_kmap_temp_post_map(vaddr, pteval) \
+   local_flush_tlb_kernel_page(vaddr)
+
+#define arch_kmap_temp_pre_unmap(vaddr)\
+do {   \
+   if (cache_is_vivt())\
+   __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); \
+} while (0)
+
+#define arch_kmap_temp_post_unmap(vaddr)   \
+   local_flush_tlb_kernel_page(vaddr)
 
 #endif
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -19,7 +19,6 @@ obj-$(CONFIG_MODULES) += proc-syms.o
 obj-$(CONFIG_DEBUG_VIRTUAL)+= physaddr.o
 
 obj-$(CONFIG_ALIGNMENT_TRAP)   += alignment.o
-obj-$(CONFIG_HIGHMEM)  += highmem.o
 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
 obj-$(CONFIG_ARM_PV_FIXUP) += pv-fixup-asm.o
 
--- a/arch/arm/mm/highmem.c
+++ /dev/null
@@ -1,121 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * arch/arm/mm/highmem.c -- ARM highmem support
- *
- * Author: Nicolas Pitre
- * Created:september 8, 2008
- * Copyright:  Marvell Semiconductors Inc.
- */
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "mm.h"
-
-static inline void set_fixmap_pte(int idx, pte_t pte)
-{
-   unsigned long vaddr = __fix_to_virt(idx);
-   pte_t *ptep = virt_to_kpte(vaddr);
-
-   set_pte_ext(ptep, pte, 0);
-   local_flush_tlb_kernel_page(vaddr);
-}
-
-static inline pte_t get_fixmap_pte(unsigned long vaddr)
-{
-   pte_t *ptep = virt_to_kpte(vaddr);
-
-   return *ptep;
-}
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned int idx;
-   unsigned long vaddr;
-   void *kmap;
-   int type;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   /*
-* There is no cache coherency issue when non VIVT, so force the
-* dedicated kmap usage for better debugging purposes in that case.
-*/
-   if (!cache_is_vivt())
-   kmap = NULL;
-   else
-#endif
-   kmap = kmap_high_get(page);
-   if (kmap)
-   return kmap;
-
-   type = kmap_atomic_idx_push();
-
-   idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
-   vaddr = __fix_to_virt(idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   /*
-* With debugging enabled, kunmap_atomic forces that entry to 0.
-* Make sure it was indeed properly unmapped.
-*/
-   BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
-#endif
-   /*
-* When debugging is off, kunmap_atomic leaves the previous mapping
-* in place, so the contained TLB flush ensures the TLB is updated
-* with the new mapping.
-*/
-   set_fixmap_pte(idx, mk_pte(page, prot));
-
-   return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-   int idx, type;
-
-   if (kvaddr >= (void *)FIXADDR_START) {
-   type = kmap_atomic_idx();
-   idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
-
-   if (cache_is_vivt())
-   __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);
-#ifdef CONFIG_DEBUG_HIGHMEM
-   BUG_ON(vaddr != __fix_to_virt(idx));
-   

[patch RFC 03/15] x86/mm/highmem: Use generic kmap atomic implementation

2020-09-19 Thread Thomas Gleixner
Convert X86 to the generic kmap atomic implementation.

Make the iomap_atomic() naming convention consistent while at it.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/Kconfig   |3 +-
 arch/x86/include/asm/fixmap.h  |1 
 arch/x86/include/asm/highmem.h |   12 ++--
 arch/x86/include/asm/iomap.h   |   17 ++-
 arch/x86/mm/highmem_32.c   |   59 -
 arch/x86/mm/init_32.c  |   15 --
 arch/x86/mm/iomap_32.c |   58 ++--
 include/linux/io-mapping.h |2 -
 8 files changed, 25 insertions(+), 142 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -14,10 +14,11 @@ config X86_32
select ARCH_WANT_IPC_PARSE_VERSION
select CLKSRC_I8253
select CLONE_BACKWARDS
+   select GENERIC_VDSO_32
select HAVE_DEBUG_STACKOVERFLOW
+   select KMAP_ATOMIC_GENERIC
select MODULES_USE_ELF_REL
select OLD_SIGACTION
-   select GENERIC_VDSO_32
 
 config X86_64
def_bool y
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -151,7 +151,6 @@ extern void reserve_top_address(unsigned
 
 extern int fixmaps_set;
 
-extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
 void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,11 +58,17 @@ extern unsigned long highstart_pfn, high
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-void *kmap_atomic_pfn(unsigned long pfn);
-void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
-
 #define flush_cache_kmaps()do { } while (0)
 
+#definearch_kmap_temp_post_map(vaddr, pteval)  \
+   arch_flush_lazy_mmu_mode()
+
+#definearch_kmap_temp_post_unmap(vaddr)\
+   do {\
+   flush_tlb_one_kernel((vaddr));  \
+   arch_flush_lazy_mmu_mode(); \
+   } while (0)
+
 extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn,
unsigned long end_pfn);
 
--- a/arch/x86/include/asm/iomap.h
+++ b/arch/x86/include/asm/iomap.h
@@ -9,19 +9,20 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
-void __iomem *
-iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
+void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
 
-void
-iounmap_atomic(void __iomem *kvaddr);
+static inline void iounmap_atomic(void __iomem *vaddr)
+{
+   kunmap_atomic_indexed((void __force *)vaddr);
+   preempt_enable();
+}
 
-int
-iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);
+int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);
 
-void
-iomap_free(resource_size_t base, unsigned long size);
+void iomap_free(resource_size_t base, unsigned long size);
 
 #endif /* _ASM_X86_IOMAP_H */
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -4,65 +4,6 @@
 #include  /* for totalram_pages */
 #include 
 
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   unsigned long vaddr;
-   int idx, type;
-
-   type = kmap_atomic_idx_push();
-   idx = type + KM_TYPE_NR*smp_processor_id();
-   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   BUG_ON(!pte_none(*(kmap_pte-idx)));
-   set_pte(kmap_pte-idx, mk_pte(page, prot));
-   arch_flush_lazy_mmu_mode();
-
-   return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-/*
- * This is the same as kmap_atomic() but can map memory that doesn't
- * have a struct page associated with it.
- */
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-   return kmap_atomic_prot_pfn(pfn, kmap_prot);
-}
-EXPORT_SYMBOL_GPL(kmap_atomic_pfn);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-   unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-
-   if (vaddr >= __fix_to_virt(FIX_KMAP_END) &&
-   vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) {
-   int idx, type;
-
-   type = kmap_atomic_idx();
-   idx = type + KM_TYPE_NR * smp_processor_id();
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-   WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-#endif
-   /*
-* Force other mappings to Oops if they'll try to access this
-* pte without first remap it.  Keeping stale mappings around
-* is a bad idea also, in case the page changes cacheability
-* attributes or becomes a protected page in a hypervisor.
-*/
-   kpte_clear_flush(kmap_pte-idx, vaddr);
-   kmap_atomic_idx_pop();
-   arch_flush_lazy_mmu_mode();
-   }
-#ifdef CONFIG_DEBUG_HIGHMEM
-   else {
-   BUG_ON(vaddr < 

[patch RFC 04/15] arc/mm/highmem: Use generic kmap atomic implementation

2020-09-19 Thread Thomas Gleixner
Adapt the map ordering to match the other architectures and the generic
code.

Signed-off-by: Thomas Gleixner 
Cc: Vineet Gupta 
Cc: linux-snps-arc@lists.infradead.org
---
Note: Completely untested
---
 arch/arc/Kconfig   |1 
 arch/arc/include/asm/highmem.h |8 ++-
 arch/arc/mm/highmem.c  |   44 -
 3 files changed, 9 insertions(+), 44 deletions(-)

--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -508,6 +508,7 @@ config LINUX_RAM_BASE
 config HIGHMEM
bool "High Memory Support"
select ARCH_DISCONTIGMEM_ENABLE
+   select KMAP_ATOMIC_GENERIC
help
  With ARC 2G:2G address split, only upper 2G is directly addressable by
  kernel. Enable this to potentially allow access to rest of 2G and PAE
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -15,7 +15,10 @@
 #define FIXMAP_BASE(PAGE_OFFSET - FIXMAP_SIZE - PKMAP_SIZE)
 #define FIXMAP_SIZEPGDIR_SIZE  /* only 1 PGD worth */
 #define KM_TYPE_NR ((FIXMAP_SIZE >> PAGE_SHIFT)/NR_CPUS)
-#define FIXMAP_ADDR(nr)(FIXMAP_BASE + ((nr) << PAGE_SHIFT))
+
+#define FIX_KMAP_BEGIN (0)
+#define FIX_KMAP_END   ((FIXMAP_SIZE >> PAGE_SHIFT) - 1)
+#define FIXADDR_TOP(FIXMAP_BASE + FIXMAP_SIZE - PAGE_SIZE)
 
 /* start after fixmap area */
 #define PKMAP_BASE (FIXMAP_BASE + FIXMAP_SIZE)
@@ -29,6 +32,9 @@
 
 extern void kmap_init(void);
 
+#define arch_kmap_temp_post_unmap(vaddr)   \
+   local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
+
 static inline void flush_cache_kmaps(void)
 {
flush_cache_all();
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -47,48 +47,6 @@
  */
 
 extern pte_t * pkmap_page_table;
-static pte_t * fixmap_page_table;
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-   int idx, cpu_idx;
-   unsigned long vaddr;
-
-   cpu_idx = kmap_atomic_idx_push();
-   idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
-   vaddr = FIXMAP_ADDR(idx);
-
-   set_pte_at(_mm, vaddr, fixmap_page_table + idx,
-  mk_pte(page, prot));
-
-   return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kv)
-{
-   unsigned long kvaddr = (unsigned long)kv;
-
-   if (kvaddr >= FIXMAP_BASE && kvaddr < (FIXMAP_BASE + FIXMAP_SIZE)) {
-
-   /*
-* Because preemption is disabled, this vaddr can be associated
-* with the current allocated index.
-* But in case of multiple live kmap_atomic(), it still relies on
-* callers to unmap in right order.
-*/
-   int cpu_idx = kmap_atomic_idx();
-   int idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
-
-   WARN_ON(kvaddr != FIXMAP_ADDR(idx));
-
-   pte_clear(_mm, kvaddr, fixmap_page_table + idx);
-   local_flush_tlb_kernel_range(kvaddr, kvaddr + PAGE_SIZE);
-
-   kmap_atomic_idx_pop();
-   }
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
 
 static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
 {
@@ -113,5 +71,5 @@ void __init kmap_init(void)
pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE);
 
BUILD_BUG_ON(LAST_PKMAP > PTRS_PER_PTE);
-   fixmap_page_table = alloc_kmap_pgtable(FIXMAP_BASE);
+   alloc_kmap_pgtable(FIXMAP_BASE);
 }


___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc


[patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-19 Thread Thomas Gleixner
First of all, sorry for the horribly big Cc list!

Following up to the discussion in:

  https://lore.kernel.org/r/20200914204209.256266...@linutronix.de

this provides a preemptible variant of kmap_atomic & related
interfaces. This is achieved by:

 - Consolidating all kmap atomic implementations in generic code

 - Switching from per-CPU storage of the kmap index to per-task storage

 - Adding a pteval array to the per-task storage which contains the ptevals
   of the currently active temporary kmaps

 - Adding context switch code which checks whether the outgoing or the
   incoming task has active temporary kmaps. If so, the outgoing task's
   kmaps are removed and the incoming task's kmaps are restored.

 - Adding new interfaces k[un]map_temporary*() which do not disable
   preemption and can be called from any context (except NMI).

   Contrary to kmap(), which provides preemptible and "persistent" mappings,
   these interfaces are meant to replace the temporary mappings provided by
   kmap_atomic*() today.

This allows getting rid of conditional mapping choices and enables
preemptible short-term mappings on 64-bit, which today are forced to be
non-preemptible due to the highmem constraints. It clearly puts overhead on
the highmem users, but highmem is slow anyway.

This is not a wholesale conversion which makes kmap_atomic magically
preemptible, because there might be usage sites which rely on the implicit
preempt disable. So this needs to be done on a case-by-case basis and the
call sites converted to kmap_temporary.

Note that this is only lightly tested on X86 and completely untested on
all other architectures.

The lot is also available from

   git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git highmem

Thanks,

tglx
---
 a/arch/arm/mm/highmem.c   |  121 -
 a/arch/microblaze/mm/highmem.c|   78 -
 a/arch/nds32/mm/highmem.c |   48 
 a/arch/powerpc/mm/highmem.c   |   67 ---
 a/arch/sparc/mm/highmem.c |  115 
 arch/arc/Kconfig  |1 
 arch/arc/include/asm/highmem.h|8 +
 arch/arc/mm/highmem.c |   44 ---
 arch/arm/Kconfig  |1 
 arch/arm/include/asm/highmem.h|   30 +++--
 arch/arm/mm/Makefile  |1 
 arch/csky/Kconfig |1 
 arch/csky/include/asm/highmem.h   |4 
 arch/csky/mm/highmem.c|   75 -
 arch/microblaze/Kconfig   |1 
 arch/microblaze/include/asm/highmem.h |6 -
 arch/microblaze/mm/Makefile   |1 
 arch/microblaze/mm/init.c |6 -
 arch/mips/Kconfig |1 
 arch/mips/include/asm/highmem.h   |4 
 arch/mips/mm/highmem.c|   77 -
 arch/mips/mm/init.c   |3 
 arch/nds32/Kconfig.cpu|1 
 arch/nds32/include/asm/highmem.h  |   21 ++-
 arch/nds32/mm/Makefile|1 
 arch/powerpc/Kconfig  |1 
 arch/powerpc/include/asm/highmem.h|6 -
 arch/powerpc/mm/Makefile  |1 
 arch/powerpc/mm/mem.c |7 -
 arch/sparc/Kconfig|1 
 arch/sparc/include/asm/highmem.h  |7 -
 arch/sparc/mm/Makefile|3 
 arch/sparc/mm/srmmu.c |2 
 arch/x86/include/asm/fixmap.h |1 
 arch/x86/include/asm/highmem.h|   12 +-
 arch/x86/include/asm/iomap.h  |   29 +++--
 arch/x86/mm/highmem_32.c  |   59 --
 arch/x86/mm/init_32.c |   15 --
 arch/x86/mm/iomap_32.c|   57 --
 arch/xtensa/Kconfig   |1 
 arch/xtensa/include/asm/highmem.h |9 +
 arch/xtensa/mm/highmem.c  |   44 ---
 b/arch/x86/Kconfig|3 
 include/linux/highmem.h   |  141 +++-
 include/linux/io-mapping.h|2 
 include/linux/sched.h |9 +
 kernel/sched/core.c   |   10 +
 mm/Kconfig|3 
 mm/highmem.c  |  192 --
 49 files changed, 422 insertions(+), 909 deletions(-)


[patch RFC 01/15] mm/highmem: Un-EXPORT __kmap_atomic_idx()

2020-09-19 Thread Thomas Gleixner
Nothing in modules can use that.

Signed-off-by: Thomas Gleixner 
---
 mm/highmem.c |2 --
 1 file changed, 2 deletions(-)

--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -108,8 +108,6 @@ static inline wait_queue_head_t *get_pkm
 atomic_long_t _totalhigh_pages __read_mostly;
 EXPORT_SYMBOL(_totalhigh_pages);
 
-EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx);
-
 unsigned int nr_free_highpages (void)
 {
struct zone *zone;

