Re: [PATCH] virtio-blk: free vblk-vqs in error path of virtblk_probe()

2020-06-30 Thread Jens Axboe
On 6/14/20 10:14 PM, Hou Tao wrote:
> Else there will be a memory leak if alloc_disk() fails.

Applied, thanks.

-- 
Jens Axboe



Re: [PATCH] virtio-blk: free vblk-vqs in error path of virtblk_probe()

2020-06-30 Thread Ming Lei
On Mon, Jun 15, 2020 at 12:14:59PM +0800, Hou Tao wrote:
> Else there will be a memory leak if alloc_disk() fails.
> 
> Fixes: 6a27b656fc02 ("block: virtio-blk: support multi virt queues per 
> virtio-blk device")
> Signed-off-by: Hou Tao 
> ---
>  drivers/block/virtio_blk.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 9d21bf0f155e..980df853ee49 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -878,6 +878,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>   put_disk(vblk->disk);
>  out_free_vq:
>   vdev->config->del_vqs(vdev);
> + kfree(vblk->vqs);
>  out_free_vblk:
>   kfree(vblk);
>  out_free_index:
> -- 
> 2.25.0.4.g0ad7144999
> 
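
A condensed sketch of the probe/unwind ordering being fixed (editorial
illustration only; the real virtblk_probe() has more steps between these,
and the alloc_disk call is abbreviated):

	vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);  /* undone at out_free_vblk */
	err = init_vq(vblk);                        /* kmalloc()s vblk->vqs and
	                                             * sets up the virtqueues  */
	vblk->disk = alloc_disk_node(...);
	if (!vblk->disk)
		goto out_free_vq;                   /* <-- the leaking path    */
	...
out_free_vq:
	vdev->config->del_vqs(vdev);                /* tears down the vqs ...  */
	kfree(vblk->vqs);                           /* ... and now also frees the
	                                             * array init_vq() allocated */
out_free_vblk:
	kfree(vblk);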

Reviewed-by: Ming Lei 

-- 
Ming



Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y

2020-06-30 Thread Peter Zijlstra
On Tue, Jun 30, 2020 at 09:47:30PM +0200, Marco Elver wrote:
> I do wonder, though, if there is some way to make the compiler do
> something better for us. Clearly, implementing real
> memory_order_consume hasn't worked out until today. But maybe the
> compiler could promote dependent loads to acquires if it recognizes it
> lost dependencies during optimizations. Just thinking out loud, it
> probably still has some weird corner case that will break. ;-)

I'd be very hesitant to let the compiler upgrade the ordering for us,
specifically because we're not using C11 crud and are using a lot of
inline asm.
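
As a concrete (hypothetical) illustration of the dependency breaking being
discussed, assuming a plain READ_ONCE() with no acquire semantics:

	struct foo { int val; };
	struct foo a, b;
	struct foo *ptr;        /* published elsewhere with smp_store_release() */

	int reader(void)
	{
		struct foo *p = READ_ONCE(ptr); /* head of an address dependency */

		/*
		 * If whole-program analysis (e.g. under LTO) proves that ptr
		 * only ever points to 'a' or 'b', the compiler may emit the
		 * equivalent of:
		 *
		 *	return (p == &a) ? a.val : b.val;
		 *
		 * turning the address dependency into a control dependency,
		 * which the CPU may speculate past -- the load of ->val is
		 * then no longer ordered after the load of ptr.
		 */
		return p->val;
	}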


Re: [PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y

2020-06-30 Thread Arnd Bergmann
On Tue, Jun 30, 2020 at 7:39 PM Will Deacon  wrote:
> +#define __READ_ONCE(x) \
> +({ \
> +   int atomic = 1; \
> +   union { __unqual_scalar_typeof(x) __val; char __c[1]; } __u;\
> +   typeof(&(x)) __x = &(x);\
> +   switch (sizeof(x)) {\
...
> +   atomic ? (typeof(x))__u.__val : (*(volatile typeof(x) *)__x);   \
> +})

This expands (x) nine times (five in __unqual_scalar_typeof()), which can
lead to significant code bloat after preprocessing if something passes a
compound expression into READ_ONCE().
The compiler works it out eventually, but we've seen an actual slowdown
in compile speed from this recently, especially on clang.

I think if you move the

typeof(&(x)) __x = &(x);

line first, all other instances can use typeof(*__x) instead of typeof(x)
and avoid this problem. Once we make gcc-4.9 the minimum version,
this could be further improved to

   __auto_type __x = &(x);
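
To make that concrete, here is a rough sketch (editorial, untested) of the
macro from patch 18 with the expansion hoisted as suggested; __LOAD_RCPC and
the union trick are unchanged, only the typeof() usage differs:

	#define __READ_ONCE(x)						\
	({								\
		typeof(&(x)) __x = &(x);  /* (x) only expanded here */	\
		int atomic = 1;						\
		union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u;\
		switch (sizeof(*__x)) {					\
		case 1:							\
			asm volatile(__LOAD_RCPC(b, %w0, %1)		\
				: "=r" (*(__u8 *)__u.__c)		\
				: "Q" (*__x) : "memory");		\
			break;						\
		/* ... cases 2, 4 and 8 likewise refer only to *__x ... */ \
		default:						\
			atomic = 0;					\
		}							\
		atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\
	})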

   Arnd


Re: [PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h

2020-06-30 Thread Arnd Bergmann
On Tue, Jun 30, 2020 at 7:37 PM Will Deacon  wrote:
>
> In preparation for allowing architectures to define their own
> implementation of the READ_ONCE() macro, move the generic
> {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
> file and into a new 'rwonce.h' header under 'asm-generic'.
>
> Acked-by: Paul E. McKenney 
> Signed-off-by: Will Deacon 
> ---
>  include/asm-generic/Kbuild   |  1 +
>  include/asm-generic/rwonce.h | 91 
>  include/linux/compiler.h | 83 +---

Very nice, this has the added benefit of allowing us to stop including
asm/barrier.h once linux/compiler.h gets changed to not include
asm/rwonce.h.

The asm/barrier.h header has a circular dependency, pulling in
linux/compiler.h itself.

   Arnd


[PATCH 14/18] arm64: Reduce the number of header files pulled into vmlinux.lds.S

2020-06-30 Thread Will Deacon
Although vmlinux.lds.S smells like an assembly file and is compiled
with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to
create our linker script. This means that any assembly macros defined
by headers that it includes will result in a helpful link error:

| aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error

In preparation for an arm64-private asm/rwonce.h implementation, which
will end up pulling assembly macros into linux/compiler.h, reduce the
number of headers we include directly and transitively in vmlinux.lds.S

Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/kernel-pgtable.h |  2 +-
 arch/arm64/include/asm/memory.h | 11 ++-
 arch/arm64/include/asm/uaccess.h|  1 +
 arch/arm64/kernel/entry.S   |  1 +
 arch/arm64/kernel/vmlinux.lds.S |  1 -
 arch/arm64/kvm/hyp-init.S   |  1 +
 6 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h 
b/arch/arm64/include/asm/kernel-pgtable.h
index 3bf626f6fe0c..329fb15f6bac 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -8,7 +8,7 @@
 #ifndef __ASM_KERNEL_PGTABLE_H
 #define __ASM_KERNEL_PGTABLE_H
 
-#include 
+#include 
 #include 
 
 /*
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a1871bb32bb1..9d4bf58cf7b3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -10,11 +10,8 @@
 #ifndef __ASM_MEMORY_H
 #define __ASM_MEMORY_H
 
-#include 
 #include 
 #include 
-#include 
-#include 
 #include 
 
 /*
@@ -157,11 +154,15 @@
 #endif
 
 #ifndef __ASSEMBLY__
-extern u64 vabits_actual;
-#define PAGE_END   (_PAGE_END(vabits_actual))
 
 #include 
+#include 
 #include 
+#include 
+#include 
+
+extern u64 vabits_actual;
+#define PAGE_END   (_PAGE_END(vabits_actual))
 
 extern s64 physvirt_offset;
 extern s64 memstart_addr;
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bc5c7b091152..8d7c466f809b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -19,6 +19,7 @@
 #include 
 
 #include 
+#include 
 #include 
 #include 
 #include 
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5304d193c79d..b668aad3b762 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 6827da7f3aa5..e1e7c0431b4d 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -10,7 +10,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 6e6ed5581eed..076544393c3c 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -6,6 +6,7 @@
 
 #include 
 
+#include 
 #include 
 #include 
 #include 
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 11/18] tools/memory-model: Remove smp_read_barrier_depends() from informal doc

2020-06-30 Thread Will Deacon
smp_read_barrier_depends() has gone the way of mmiowb() and so many
esoteric memory barriers before it. Drop the two mentions of this
deceased barrier from the LKMM informal explanation document.

Acked-by: Alan Stern 
Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 .../Documentation/explanation.txt | 26 +--
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/Documentation/explanation.txt 
b/tools/memory-model/Documentation/explanation.txt
index e91a2eb19592..01adf9e0ebac 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -1122,12 +1122,10 @@ maintain at least the appearance of FIFO order.
 In practice, this difficulty is solved by inserting a special fence
 between P1's two loads when the kernel is compiled for the Alpha
 architecture.  In fact, as of version 4.15, the kernel automatically
-adds this fence (called smp_read_barrier_depends() and defined as
-nothing at all on non-Alpha builds) after every READ_ONCE() and atomic
-load.  The effect of the fence is to cause the CPU not to execute any
-po-later instructions until after the local cache has finished
-processing all the stores it has already received.  Thus, if the code
-was changed to:
+adds this fence after every READ_ONCE() and atomic load on Alpha.  The
+effect of the fence is to cause the CPU not to execute any po-later
+instructions until after the local cache has finished processing all
+the stores it has already received.  Thus, if the code was changed to:
 
P1()
{
@@ -1146,14 +1144,14 @@ READ_ONCE() or another synchronization primitive rather 
than accessed
 directly.
 
 The LKMM requires that smp_rmb(), acquire fences, and strong fences
-share this property with smp_read_barrier_depends(): They do not allow
-the CPU to execute any po-later instructions (or po-later loads in the
-case of smp_rmb()) until all outstanding stores have been processed by
-the local cache.  In the case of a strong fence, the CPU first has to
-wait for all of its po-earlier stores to propagate to every other CPU
-in the system; then it has to wait for the local cache to process all
-the stores received as of that time -- not just the stores received
-when the strong fence began.
+share this property: They do not allow the CPU to execute any po-later
+instructions (or po-later loads in the case of smp_rmb()) until all
+outstanding stores have been processed by the local cache.  In the
+case of a strong fence, the CPU first has to wait for all of its
+po-earlier stores to propagate to every other CPU in the system; then
+it has to wait for the local cache to process all the stores received
+as of that time -- not just the stores received when the strong fence
+began.
 
 And of course, none of this matters for any architecture other than
 Alpha.
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 09/18] Documentation/barriers: Remove references to [smp_]read_barrier_depends()

2020-06-30 Thread Will Deacon
The [smp_]read_barrier_depends() barrier macros no longer exist as
part of the Linux memory model, so remove all references to them from
the Documentation/ directory.

Although this is fairly mechanical on the whole, we drop the "CACHE
COHERENCY" section entirely from 'memory-barriers.txt' as it doesn't
make any sense now that the dependency barriers have been removed.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 .../RCU/Design/Requirements/Requirements.rst  |   2 +-
 Documentation/memory-barriers.txt | 156 +-
 2 files changed, 9 insertions(+), 149 deletions(-)

diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst 
b/Documentation/RCU/Design/Requirements/Requirements.rst
index 75b8ca007a11..50d5c43c48b0 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -463,7 +463,7 @@ again without disrupting RCU readers.
 This guarantee was only partially premeditated. DYNIX/ptx used an
 explicit memory barrier for publication, but had nothing resembling
 ``rcu_dereference()`` for subscription, nor did it have anything
-resembling the ``smp_read_barrier_depends()`` that was later subsumed
+resembling the dependency-ordering barrier that was later subsumed
 into ``rcu_dereference()`` and later still into ``READ_ONCE()``. The
 need for these operations made itself known quite suddenly at a
 late-1990s meeting with the DEC Alpha architects, back in the days when
diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index eaabc3134294..4e55aba3eb4a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -553,12 +553,12 @@ There are certain things that the Linux kernel memory 
barriers do not guarantee:
 DATA DEPENDENCY BARRIERS (HISTORICAL)
 -
 
-As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
-added to READ_ONCE(), which means that about the only people who
-need to pay attention to this section are those working on DEC Alpha
-architecture-specific code and those working on READ_ONCE() itself.
-For those who need it, and for those who are interested in the history,
-here is the story of data-dependency barriers.
+As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
+DEC Alpha, which means that about the only people who need to pay attention
+to this section are those working on DEC Alpha architecture-specific code
+and those working on READ_ONCE() itself.  For those who need it, and for
+those who are interested in the history, here is the story of
+data-dependency barriers.
 
 The usage requirements of data dependency barriers are a little subtle, and
 it's not always obvious that they're needed.  To illustrate, consider the
@@ -2708,144 +2708,6 @@ the properties of the memory window through which 
devices are accessed and/or
 the use of any special device communication instructions the CPU may have.
 
 
-CACHE COHERENCY

-
-Life isn't quite as simple as it may appear above, however: for while the
-caches are expected to be coherent, there's no guarantee that that coherency
-will be ordered.  This means that while changes made on one CPU will
-eventually become visible on all CPUs, there's no guarantee that they will
-become apparent in the same order on those other CPUs.
-
-
-Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
-has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
-
-   :
-   :  ++
-   :  +-+ ||
-   ++  : +--->| Cache A |<--->||
-   ||  : |+-+ ||
-   |  CPU 1 |<---+||
-   ||  : |+-+ ||
-   ++  : +--->| Cache B |<--->||
-   :  +-+ ||
-   :  | Memory |
-   :  +-+ | System |
-   ++  : +--->| Cache C |<--->||
-   ||  : |+-+ ||
-   |  CPU 2 |<---+||
-   ||  : |+-+ ||
-   ++  : +--->| Cache D |<--->||
-   :  +-+ ||
-   :  ++
-   :
-
-Imagine the system has the following properties:
-
- (*) an odd-numbered cache line may be in cache A, cache C or it may still be
- resident in memory;
-
- (*) an even-numbered cache line may be in cache B, cache D or it may still be
- resident in memory;
-
- (*) while the CPU core is interrogating one cache, the other cache may be
- making use of the bus to access the rest of the 

[PATCH 13/18] checkpatch: Remove checks relating to [smp_]read_barrier_depends()

2020-06-30 Thread Will Deacon
The [smp_]read_barrier_depends() macros no longer exist, so we don't
need to deal with them in the checkpatch script.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 scripts/checkpatch.pl | 9 +
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 4c820607540b..8032f80c5bc7 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -5903,8 +5903,7 @@ sub process {
my $barriers = qr{
mb|
rmb|
-   wmb|
-   read_barrier_depends
+   wmb
}x;
my $barrier_stems = qr{
mb__before_atomic|
@@ -5953,12 +5952,6 @@ sub process {
}
}
 
-# check for smp_read_barrier_depends and read_barrier_depends
-   if (!$file && $line =~ /\b(smp_|)read_barrier_depends\s*\(/) {
-   WARN("READ_BARRIER_DEPENDS",
-"$1read_barrier_depends should only be used in 
READ_ONCE or DEC Alpha code\n" . $herecurr);
-   }
-
 # check of hardware specific defines
if ($line =~ 
m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ 
m@include/asm-@) {
CHK("ARCH_DEFINES",
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 17/18] arm64: alternatives: Remove READ_ONCE() usage during patch operation

2020-06-30 Thread Will Deacon
In preparation for patching the internals of READ_ONCE() itself, replace
its usage on the alternatives patching path with a volatile variable
instead.

Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/alternative.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d1757ef1b1e7..87bca8d44084 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -21,7 +21,8 @@
 #define ALT_ORIG_PTR(a)    __ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)    __ALT_PTR(a, alt_offset)
 
-static int all_alternatives_applied;
+/* Volatile, as we may be patching the guts of READ_ONCE() */
+static volatile int all_alternatives_applied;
 
 static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
 
@@ -217,7 +218,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 
/* We always have a CPU 0 at this point (__init) */
if (smp_processor_id()) {
-   while (!READ_ONCE(all_alternatives_applied))
+   while (!all_alternatives_applied)
cpu_relax();
isb();
} else {
@@ -229,7 +230,7 @@ static int __apply_alternatives_multi_stop(void *unused)
BUG_ON(all_alternatives_applied);
__apply_alternatives(&region, false, remaining_capabilities);
/* Barriers provided by the cache flushing */
-   WRITE_ONCE(all_alternatives_applied, 1);
+   all_alternatives_applied = 1;
}
 
return 0;
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 15/18] arm64: alternatives: Split up alternative.h

2020-06-30 Thread Will Deacon
asm/alternative.h contains both the macros needed to use alternatives,
as well as the type definitions and function prototypes for applying them.

Split the header in two, so that alternatives can be used from core
header files such as linux/compiler.h without the risk of circular
includes.

Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/alternative-macros.h | 276 
 arch/arm64/include/asm/alternative.h| 267 +--
 arch/arm64/include/asm/insn.h   |   3 +-
 3 files changed, 279 insertions(+), 267 deletions(-)
 create mode 100644 arch/arm64/include/asm/alternative-macros.h

diff --git a/arch/arm64/include/asm/alternative-macros.h 
b/arch/arm64/include/asm/alternative-macros.h
new file mode 100644
index ..9f697bef7958
--- /dev/null
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -0,0 +1,276 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ALTERNATIVE_MACROS_H
+#define __ASM_ALTERNATIVE_MACROS_H
+
+#include 
+
+#define ARM64_CB_PATCH ARM64_NCAPS
+
+/* A64 instructions are always 32 bits. */
+#define AARCH64_INSN_SIZE   4
+
+#ifndef __ASSEMBLY__
+
+#include 
+
+#define ALTINSTR_ENTRY(feature)
  \
+   " .word 661b - .\n" /* label   */ \
+   " .word 663f - .\n" /* new instruction */ \
+   " .hword " __stringify(feature) "\n"/* feature bit */ \
+   " .byte 662b-661b\n"/* source len  */ \
+   " .byte 664f-663f\n"/* replacement len */
+
+#define ALTINSTR_ENTRY_CB(feature, cb)   \
+   " .word 661b - .\n" /* label   */ \
+   " .word " __stringify(cb) "- .\n"   /* callback */\
+   " .hword " __stringify(feature) "\n"/* feature bit */ \
+   " .byte 662b-661b\n"/* source len  */ \
+   " .byte 664f-663f\n"/* replacement len */
+
+/*
+ * alternative assembly primitive:
+ *
+ * If any of these .org directive fail, it means that insn1 and insn2
+ * don't have the same length. This used to be written as
+ *
+ * .if ((664b-663b) != (662b-661b))
+ * .error "Alternatives instruction length mismatch"
+ * .endif
+ *
+ * but most assemblers die if insn1 or insn2 have a .inst. This should
+ * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
+ * containing commit 4e4d08cf7399b606 or c1baaddf8861).
+ *
+ * Alternatives with callbacks do not generate replacement instructions.
+ */
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)\
+   ".if "__stringify(cfg_enabled)" == 1\n" \
+   "661:\n\t"  \
+   oldinstr "\n"   \
+   "662:\n"\
+   ".pushsection .altinstructions,\"a\"\n" \
+   ALTINSTR_ENTRY(feature) \
+   ".popsection\n" \
+   ".pushsection .altinstr_replacement, \"a\"\n"   \
+   "663:\n\t"  \
+   newinstr "\n"   \
+   "664:\n\t"  \
+   ".popsection\n\t"   \
+   ".org   . - (664b-663b) + (662b-661b)\n\t"  \
+   ".org   . - (662b-661b) + (664b-663b)\n"\
+   ".endif\n"
+
+#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb)   \
+   ".if "__stringify(cfg_enabled)" == 1\n" \
+   "661:\n\t"  \
+   oldinstr "\n"   \
+   "662:\n"\
+   ".pushsection .altinstructions,\"a\"\n" \
+   ALTINSTR_ENTRY_CB(feature, cb)  \
+   ".popsection\n" \
+   "663:\n\t"  \
+   "664:\n\t"  \
+   ".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)\
+   __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
+
+#define ALTERNATIVE_CB(oldinstr, cb) \
+   __ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb)
+#else
+
+#include 
+
+.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+   .word \orig_offset - .
+   

[PATCH 18/18] arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y

2020-06-30 Thread Will Deacon
When building with LTO, there is an increased risk of the compiler
converting an address dependency headed by a READ_ONCE() invocation
into a control dependency and consequently allowing for harmful
reordering by the CPU.

Ensure that such transformations are harmless by overriding the generic
READ_ONCE() definition with one that provides acquire semantics when
building with LTO.

Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/rwonce.h   | 63 +++
 arch/arm64/kernel/vdso/Makefile   |  2 +-
 arch/arm64/kernel/vdso32/Makefile |  2 +-
 3 files changed, 65 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/rwonce.h

diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
new file mode 100644
index ..515e360b01a1
--- /dev/null
+++ b/arch/arm64/include/asm/rwonce.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#ifdef CONFIG_CLANG_LTO
+
+#include 
+#include 
+
+#ifndef BUILD_VDSO
+
+#ifdef CONFIG_AS_HAS_LDAPR
+#define __LOAD_RCPC(sfx, regs...)  \
+   ALTERNATIVE(\
+   "ldar"  #sfx "\t" #regs,\
+   ".arch_extension rcpc\n"\
+   "ldapr" #sfx "\t" #regs,\
+   ARM64_HAS_LDAPR)
+#else
+#define __LOAD_RCPC(sfx, regs...)  "ldar" #sfx "\t" #regs
+#endif /* CONFIG_AS_HAS_LDAPR */
+
+#define __READ_ONCE(x) \
+({ \
+   int atomic = 1; \
+   union { __unqual_scalar_typeof(x) __val; char __c[1]; } __u;\
+   typeof(&(x)) __x = &(x);\
+   switch (sizeof(x)) {\
+   case 1: \
+   asm volatile(__LOAD_RCPC(b, %w0, %1)\
+   : "=r" (*(__u8 *)__u.__c)   \
+   : "Q" (*__x) : "memory");   \
+   break;  \
+   case 2: \
+   asm volatile(__LOAD_RCPC(h, %w0, %1)\
+   : "=r" (*(__u16 *)__u.__c)  \
+   : "Q" (*__x) : "memory");   \
+   break;  \
+   case 4: \
+   asm volatile(__LOAD_RCPC(, %w0, %1) \
+   : "=r" (*(__u32 *)__u.__c)  \
+   : "Q" (*__x) : "memory");   \
+   break;  \
+   case 8: \
+   asm volatile(__LOAD_RCPC(, %0, %1)  \
+   : "=r" (*(__u64 *)__u.__c)  \
+   : "Q" (*__x) : "memory");   \
+   break;  \
+   default:\
+   atomic = 0; \
+   }   \
+   atomic ? (typeof(x))__u.__val : (*(volatile typeof(x) *)__x);   \
+})
+
+#endif /* !BUILD_VDSO */
+#endif /* CONFIG_CLANG_LTO */
+
+#include 
+
+#endif /* __ASM_RWONCE_H */
diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index 45d5cfe46429..60df97f2e7de 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -28,7 +28,7 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 
--hash-style=sysv  \
 $(btildflags-y) -T
 
 ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
-ccflags-y += -DDISABLE_BRANCH_PROFILING
+ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 
 CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) 
$(GCC_PLUGINS_CFLAGS)
 KBUILD_CFLAGS  += $(DISABLE_LTO)
diff --git a/arch/arm64/kernel/vdso32/Makefile 
b/arch/arm64/kernel/vdso32/Makefile
index d88148bef6b0..4fdf3754a058 100644
--- a/arch/arm64/kernel/vdso32/Makefile
+++ b/arch/arm64/kernel/vdso32/Makefile
@@ -43,7 +43,7 @@ cc32-as-instr = $(call try-run,\
 # As a result we set our own flags here.
 
 # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
-VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem 

[PATCH 16/18] arm64: cpufeatures: Add capability for LDAPR instruction

2020-06-30 Thread Will Deacon
Armv8.3 introduced the LDAPR instruction, which provides weaker memory
ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an
RCsc implementation when implementing the Linux memory model, but LDAPR
can be used as a useful alternative to dependency ordering, particularly
when the compiler is capable of breaking the dependencies.

Since LDAPR is not available on all CPUs, add a cpufeature to detect it at
runtime and allow the instruction to be used with alternative code
patching.
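
A rough litmus-style illustration of the RCpc/RCsc distinction mentioned
above (editorial, not part of the patch):

	/*
	 * P0:	STLR  W5, [X1]		(store-release to A)
	 *	LDAR  W6, [X2]		(load-acquire from B)
	 *
	 * With the RCsc LDAR, the load from B cannot be observed before the
	 * store-release to A. Replacing LDAR with the RCpc LDAPR allows that
	 * release/acquire pair to reorder, but LDAPR still orders itself
	 * before all program-order-later loads and stores, which is the
	 * acquire property READ_ONCE() relies on as a substitute for the
	 * broken dependency ordering.
	 */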

Signed-off-by: Will Deacon 
---
 arch/arm64/Kconfig   |  3 +++
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 10 ++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 66dc41fd49f2..e1073210e70b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1409,6 +1409,9 @@ config ARM64_PAN
 The feature is detected at runtime, and will remain as a 'nop'
 instruction if the cpu does not implement the feature.
 
+config AS_HAS_LDAPR
+   def_bool $(as-instr,.arch_extension rcpc)
+
 config ARM64_LSE_ATOMICS
bool
default ARM64_USE_LSE_ATOMICS
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d7b3bb0cb180..3ff0103d4dfd 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@
 #define ARM64_HAS_GENERIC_AUTH		52
 #define ARM64_HAS_32BIT_EL1		53
 #define ARM64_BTI			54
+#define ARM64_HAS_LDAPR			55
 
-#define ARM64_NCAPS			55
+#define ARM64_NCAPS			56
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f63053a63a9..a29256a782e9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2056,6 +2056,16 @@ static const struct arm64_cpu_capabilities 
arm64_features[] = {
.sign = FTR_UNSIGNED,
},
 #endif
+   {
+   .desc = "RCpc load-acquire (LDAPR)",
+   .capability = ARM64_HAS_LDAPR,
+   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .sys_reg = SYS_ID_AA64ISAR1_EL1,
+   .sign = FTR_UNSIGNED,
+   .field_pos = ID_AA64ISAR1_LRCPC_SHIFT,
+   .matches = has_cpuid_feature,
+   .min_field_value = 1,
+   },
{},
 };
 
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 06/18] vhost: Remove redundant use of read_barrier_depends() barrier

2020-06-30 Thread Will Deacon
Since commit 76ebbe78f739 ("locking/barriers: Add implicit
smp_read_barrier_depends() to READ_ONCE()"), there is no need to use
smp_read_barrier_depends() outside of the Alpha architecture code.

Unfortunately, there is precisely _one_ user in the vhost code, and
there isn't an obvious READ_ONCE() access making the barrier
redundant. However, on closer inspection (thanks, Jason), it appears
that vring synchronisation between the producer and consumer occurs via
the 'avail_idx' field, which is followed up by an rmb() in
vhost_get_vq_desc(), making the read_barrier_depends() redundant on
Alpha.

Jason says:

  | I'm also confused about the barrier here, basically in driver side
  | we did:
  |
  | 1) allocate pages
  | 2) store pages in indirect->addr
  | 3) smp_wmb()
  | 4) increase the avail idx (somehow a tail pointer of vring)
  |
  | in vhost we did:
  |
  | 1) read avail idx
  | 2) smp_rmb()
  | 3) read indirect->addr
  | 4) read from indirect->addr
  |
  | It looks to me even the data dependency barrier is not necessary
  | since we have rmb() which is sufficient for us to the correct
  | indirect->addr and driver are not expected to do any writing to
  | indirect->addr after avail idx is increased

Remove the redundant barrier invocation.
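
In other words, the pairing looks roughly like this (illustrative names,
not the real virtio/vhost structures):

	/* guest driver (producer) */
	indirect->addr = buf;	/* 2) store pages in indirect->addr */
	smp_wmb();		/* 3) order the stores ...          */
	avail->idx++;		/* 4) ... before publishing the idx */

	/* vhost (consumer) */
	idx = avail->idx;	/* 1) read avail idx                   */
	smp_rmb();		/* 2) pairs with the smp_wmb() above   */
	buf = indirect->addr;	/* 3) read indirect->addr              */
	data = *buf;		/* 4) read from indirect->addr; already
				 *    ordered after (1) by the smp_rmb(),
				 *    so no separate dependency barrier
				 *    is required                       */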

Suggested-by: Jason Wang 
Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 drivers/vhost/vhost.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d7b8df3edffc..74d135ee7e26 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2092,11 +2092,6 @@ static int get_indirect(struct vhost_virtqueue *vq,
return ret;
}
iov_iter_init(&from, READ, vq->indirect, ret, len);
-
-   /* We will use the result as an address to read from, so most
-* architectures only need a compiler barrier here. */
-   read_barrier_depends();
-
count = len / sizeof desc;
/* Buffers are chained via a 16 bit next field, so
 * we can have at most 2^16 of these. */
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 12/18] include/linux: Remove smp_read_barrier_depends() from comments

2020-06-30 Thread Will Deacon
smp_read_barrier_depends() doesn't exist any more, so reword the two
comments that mention it to refer to "dependency ordering" instead.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 include/linux/percpu-refcount.h | 2 +-
 include/linux/ptr_ring.h| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 22d9d183950d..87d8a38bdea1 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -155,7 +155,7 @@ static inline bool __ref_is_percpu(struct percpu_ref *ref,
 * between contaminating the pointer value, meaning that
 * READ_ONCE() is required when fetching it.
 *
-* The smp_read_barrier_depends() implied by READ_ONCE() pairs
+* The dependency ordering from the READ_ONCE() pairs
 * with smp_store_release() in __percpu_ref_switch_to_percpu().
 */
percpu_ptr = READ_ONCE(ref->percpu_count_ptr);
diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index 417db0a79a62..808f9d3ee546 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -107,7 +107,7 @@ static inline int __ptr_ring_produce(struct ptr_ring *r, 
void *ptr)
return -ENOSPC;
 
/* Make sure the pointer we are storing points to a valid data. */
-   /* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */
+   /* Pairs with the dependency ordering in __ptr_ring_consume. */
smp_wmb();
 
WRITE_ONCE(r->queue[r->producer++], ptr);
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 10/18] Documentation/barriers/kokr: Remove references to [smp_]read_barrier_depends()

2020-06-30 Thread Will Deacon
From: SeongJae Park 

This commit translates commit ("Documentation/barriers: Remove references to
[smp_]read_barrier_depends()") into Korean.

Signed-off-by: SeongJae Park 
Reviewed-by: Yunjae Lee 
Signed-off-by: Will Deacon 
---
 .../translations/ko_KR/memory-barriers.txt| 146 +-
 1 file changed, 3 insertions(+), 143 deletions(-)

diff --git a/Documentation/translations/ko_KR/memory-barriers.txt 
b/Documentation/translations/ko_KR/memory-barriers.txt
index 34d041d68f78..a1f772ef622c 100644
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -577,7 +577,7 @@ ACQUIRE 는 해당 오퍼레이션의 로드 부분에만 적용되고 RELEASE 
 데이터 의존성 배리어 (역사적)
 -
 
-리눅스 커널 v4.15 기준으로, smp_read_barrier_depends() 가 READ_ONCE() 에
+리눅스 커널 v4.15 기준으로, smp_mb() 가 DEC Alpha 용 READ_ONCE() 코드에
 추가되었는데, 이는 이 섹션에 주의를 기울여야 하는 사람들은 DEC Alpha 아키텍쳐
 전용 코드를 만드는 사람들과 READ_ONCE() 자체를 만드는 사람들 뿐임을 의미합니다.
 그런 분들을 위해, 그리고 역사에 관심 있는 분들을 위해, 여기 데이터 의존성
@@ -2664,144 +2664,6 @@ CPU 코어는 프로그램의 인과성이 유지된다고만 여겨진다면 
 수도 있습니다.
 
 
-캐시 일관성

-
-하지만 삶은 앞에서 이야기한 것처럼 단순하지 않습니다: 캐시들은 일관적일 것으로
-기대되지만, 그 일관성이 순서에도 적용될 거라는 보장은 없습니다.  한 CPU 에서
-만들어진 변경 사항은 최종적으로는 시스템의 모든 CPU 에게 보여지게 되지만, 다른
-CPU 들에게도 같은 순서로 보이게 될 거라는 보장은 없다는 뜻입니다.
-
-
-두개의 CPU (1 & 2) 가 달려 있고, 각 CPU 에 두개의 데이터 캐시(CPU 1 은 A/B 를,
-CPU 2 는 C/D 를 갖습니다)가 병렬로 연결되어 있는 시스템을 다룬다고 생각해
-봅시다:
-
-   :
-   :  ++
-   :  +-+ ||
-   ++  : +--->| Cache A |<--->||
-   ||  : |+-+ ||
-   |  CPU 1 |<---+||
-   ||  : |+-+ ||
-   ++  : +--->| Cache B |<--->||
-   :  +-+ ||
-   :  | Memory |
-   :  +-+ | System |
-   ++  : +--->| Cache C |<--->||
-   ||  : |+-+ ||
-   |  CPU 2 |<---+||
-   ||  : |+-+ ||
-   ++  : +--->| Cache D |<--->||
-   :  +-+ ||
-   :  ++
-   :
-
-이 시스템이 다음과 같은 특성을 갖는다 생각해 봅시다:
-
- (*) 홀수번 캐시라인은 캐시 A, 캐시 C 또는 메모리에 위치할 수 있음;
-
- (*) 짝수번 캐시라인은 캐시 B, 캐시 D 또는 메모리에 위치할 수 있음;
-
- (*) CPU 코어가 한개의 캐시에 접근하는 동안, 다른 캐시는 - 더티 캐시라인을
- 메모리에 내리거나 추측성 로드를 하거나 하기 위해 - 시스템의 다른 부분에
- 액세스 하기 위해 버스를 사용할 수 있음;
-
- (*) 각 캐시는 시스템의 나머지 부분들과 일관성을 맞추기 위해 해당 캐시에
- 적용되어야 할 오퍼레이션들의 큐를 가짐;
-
- (*) 이 일관성 큐는 캐시에 이미 존재하는 라인에 가해지는 평범한 로드에 의해서는
- 비워지지 않는데, 큐의 오퍼레이션들이 이 로드의 결과에 영향을 끼칠 수 있다
- 할지라도 그러함.
-
-이제, 첫번째 CPU 에서 두개의 쓰기 오퍼레이션을 만드는데, 해당 CPU 의 캐시에
-요청된 순서로 오퍼레이션이 도달됨을 보장하기 위해 두 오퍼레이션 사이에 쓰기
-배리어를 사용하는 상황을 상상해 봅시다:
-
-   CPU 1   CPU 2   COMMENT
-   === === ===
-   u == 0, v == 1 and p == , q == 
-   v = 2;
-   smp_wmb();  v 의 변경이 p 의 변경 전에 보일 것을
-분명히 함
- v 는 이제 캐시 A 에 독점적으로 존재함
-   p = 
-p 는 이제 캐시 B 에 독점적으로 존재함
-
-여기서의 쓰기 메모리 배리어는 CPU 1 의 캐시가 올바른 순서로 업데이트 된 것으로
-시스템의 다른 CPU 들이 인지하게 만듭니다.  하지만, 이제 두번째 CPU 가 그 값들을
-읽으려 하는 상황을 생각해 봅시다:
-
-   CPU 1   CPU 2   COMMENT
-   === === ===
-   ...
-   q = p;
-   x = *q;
-
-위의 두개의 읽기 오퍼레이션은 예상된 순서로 일어나지 못할 수 있는데, 두번째 CPU
-의 한 캐시에 다른 캐시 이벤트가 발생해 v 를 담고 있는 캐시라인의 해당 캐시에의
-업데이트가 지연되는 사이, p 를 담고 있는 캐시라인은 두번째 CPU 의 다른 캐시에
-업데이트 되어버렸을 수 있기 때문입니다.
-
-   CPU 1   CPU 2   COMMENT
-   === === ===
-   u == 0, v == 1 and p == , q == 
-   v = 2;
-   smp_wmb();
- 
-   
-   p =  q = p;
-   
-
-   
-   x = *q;
-캐시에 업데이트 되기 전의 v 를 읽음
-   
-   
-
-기본적으로, 두개의 캐시라인 모두 CPU 2 에 최종적으로는 업데이트 될 것이지만,
-별도의 개입 없이는, 업데이트의 순서가 CPU 1 에서 만들어진 순서와 동일할
-것이라는 보장이 없습니다.
-
-
-여기에 개입하기 위해선, 데이터 의존성 배리어나 읽기 배리어를 로드 오퍼레이션들
-사이에 넣어야 합니다 (v4.15 부터는 READ_ONCE() 매크로에 의해 무조건적으로
-그렇게 됩니다).  이렇게 함으로써 캐시가 다음 요청을 처리하기 전에 일관성 큐를
-처리하도록 강제하게 됩니다.
-
-   CPU 1   CPU 2   COMMENT
-   === === ===
-   u == 0, v == 1 and p == , q == 
-   v = 2;
-   smp_wmb();
- 
-   
-   

[PATCH 07/18] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()

2020-06-30 Thread Will Deacon
In preparation for removing smp_read_barrier_depends() altogether,
move the Alpha code over to using smp_rmb() and smp_mb() directly.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 arch/alpha/include/asm/atomic.h  | 16 
 arch/alpha/include/asm/pgtable.h | 10 +-
 mm/memory.c  |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 2144530d1428..2f8f7e54792f 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -16,10 +16,10 @@
 
 /*
  * To ensure dependency ordering is preserved for the _relaxed and
- * _release atomics, an smp_read_barrier_depends() is unconditionally
- * inserted into the _relaxed variants, which are used to build the
- * barriered versions. Avoid redundant back-to-back fences in the
- * _acquire and _fence versions.
+ * _release atomics, an smp_mb() is unconditionally inserted into the
+ * _relaxed variants, which are used to build the barriered versions.
+ * Avoid redundant back-to-back fences in the _acquire and _fence
+ * versions.
  */
 #define __atomic_acquire_fence()
 #define __atomic_post_full_fence()
@@ -70,7 +70,7 @@ static inline int atomic_##op##_return_relaxed(int i, 
atomic_t *v)\
".previous" \
:"=" (temp), "=m" (v->counter), "=" (result)\
:"Ir" (i), "m" (v->counter) : "memory");\
-   smp_read_barrier_depends(); \
+   smp_mb();   \
return result;  \
 }
 
@@ -88,7 +88,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t 
*v) \
".previous" \
:"=" (temp), "=m" (v->counter), "=" (result)\
:"Ir" (i), "m" (v->counter) : "memory");\
-   smp_read_barrier_depends(); \
+   smp_mb();   \
return result;  \
 }
 
@@ -123,7 +123,7 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, 
atomic64_t * v) \
".previous" \
:"=" (temp), "=m" (v->counter), "=" (result)\
:"Ir" (i), "m" (v->counter) : "memory");\
-   smp_read_barrier_depends(); \
+   smp_mb();   \
return result;  \
 }
 
@@ -141,7 +141,7 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, 
atomic64_t * v)  \
".previous" \
:"=" (temp), "=m" (v->counter), "=" (result)\
:"Ir" (i), "m" (v->counter) : "memory");\
-   smp_read_barrier_depends(); \
+   smp_mb();   \
return result;  \
 }
 
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 162c17b2631f..660b14ce1317 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -277,9 +277,9 @@ extern inline pte_t pte_mkdirty(pte_t pte)  { pte_val(pte) 
|= __DIRTY_BITS; retur
 extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= 
__ACCESS_BITS; return pte; }
 
 /*
- * The smp_read_barrier_depends() in the following functions are required to
- * order the load of *dir (the pointer in the top level page table) with any
- * subsequent load of the returned pmd_t *ret (ret is data dependent on *dir).
+ * The smp_rmb() in the following functions are required to order the load of
+ * *dir (the pointer in the top level page table) with any subsequent load of
+ * the returned pmd_t *ret (ret is data dependent on *dir).
  *
  * If this ordering is not enforced, the CPU might load an older value of
  * *ret, which may be uninitialized data. See mm/memory.c:__pte_alloc for
@@ -293,7 +293,7 @@ extern inline pte_t pte_mkyoung(pte_t pte)  { pte_val(pte) 
|= __ACCESS_BITS; retu
 extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
 {
pmd_t *ret = (pmd_t *) pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & 
(PTRS_PER_PAGE - 1));
-   smp_read_barrier_depends(); /* see above */
+   smp_rmb(); /* see above */
return ret;
 }
 #define pmd_offset pmd_offset
@@ -303,7 +303,7 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, 
unsigned long address)
 {
pte_t *ret = (pte_t *) pmd_page_vaddr(*dir)
+ ((address 

[PATCH 08/18] locking/barriers: Remove definitions for [smp_]read_barrier_depends()

2020-06-30 Thread Will Deacon
There are no remaining users of [smp_]read_barrier_depends(), so
remove it from the generic implementation of 'barrier.h'.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 include/asm-generic/barrier.h | 17 -
 1 file changed, 17 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 2eacaf7d62f6..24f3f63f23e7 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -46,10 +46,6 @@
 #define dma_wmb()  wmb()
 #endif
 
-#ifndef read_barrier_depends
-#define read_barrier_depends() do { } while (0)
-#endif
-
 #ifndef __smp_mb
 #define __smp_mb() mb()
 #endif
@@ -62,10 +58,6 @@
 #define __smp_wmb()wmb()
 #endif
 
-#ifndef __smp_read_barrier_depends
-#define __smp_read_barrier_depends()   read_barrier_depends()
-#endif
-
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
@@ -80,10 +72,6 @@
 #define smp_wmb()  __smp_wmb()
 #endif
 
-#ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends() __smp_read_barrier_depends()
-#endif
-
 #else  /* !CONFIG_SMP */
 
 #ifndef smp_mb
@@ -98,10 +86,6 @@
 #define smp_wmb()  barrier()
 #endif
 
-#ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends() do { } while (0)
-#endif
-
 #endif /* CONFIG_SMP */
 
 #ifndef __smp_store_mb
@@ -196,7 +180,6 @@ do {
\
 #define virt_mb() __smp_mb()
 #define virt_rmb() __smp_rmb()
 #define virt_wmb() __smp_wmb()
-#define virt_read_barrier_depends() __smp_read_barrier_depends()
 #define virt_store_mb(var, value) __smp_store_mb(var, value)
 #define virt_mb__before_atomic() __smp_mb__before_atomic()
 #define virt_mb__after_atomic()__smp_mb__after_atomic()
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 05/18] asm/rwonce: Remove smp_read_barrier_depends() invocation

2020-06-30 Thread Will Deacon
Alpha overrides __READ_ONCE() directly, so there's no need to use
smp_read_barrier_depends() in the core code. This also means that
__READ_ONCE() can be relied upon to provide dependency ordering.
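
An illustrative consequence of this guarantee (example names only):

	struct foo *p = READ_ONCE(gp);	/* gp published elsewhere with
					 * smp_store_release()/rcu_assign_pointer() */
	int v = p->val;			/* address-dependent load: ordered after
					 * the load of gp on every architecture,
					 * with no smp_read_barrier_depends() in
					 * the core code */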

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 include/asm-generic/rwonce.h | 19 ---
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index f9dfa88fc04d..cc810f1f18ca 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -30,24 +30,16 @@
 
 /*
  * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity or dependency ordering guarantees. Note that this may result
- * in tears!
+ * atomicity. Note that this may result in tears!
  */
 #ifndef __READ_ONCE
 #define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
 #endif
 
-#define __READ_ONCE_SCALAR(x)  \
-({ \
-   __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \
-   smp_read_barrier_depends(); \
-   (typeof(x))__x; \
-})
-
 #define READ_ONCE(x)   \
 ({ \
compiletime_assert_rwonce_type(x);  \
-   __READ_ONCE_SCALAR(x);  \
+   __READ_ONCE(x); \
 })
 
 #define __WRITE_ONCE(x, val)   \
@@ -74,12 +66,9 @@ unsigned long __read_once_word_nocheck(const void *addr)
  */
 #define READ_ONCE_NOCHECK(x)   \
 ({ \
-   unsigned long __x;  \
-   compiletime_assert(sizeof(x) == sizeof(__x),\
+   compiletime_assert(sizeof(x) == sizeof(unsigned long),  \
"Unsupported access size for READ_ONCE_NOCHECK().");\
-   __x = __read_once_word_nocheck(&(x));   \
-   smp_read_barrier_depends(); \
-   (typeof(x))__x; \
+   (typeof(x))__read_once_word_nocheck(&(x));  \
 })
 
 static __no_kasan_or_inline
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 00/18] Allow architectures to override __READ_ONCE()

2020-06-30 Thread Will Deacon
Hi everyone,

This is the long-awaited version two of the patches I previously
posted in November last year:

  https://lore.kernel.org/lkml/20191108170120.22331-1-w...@kernel.org/

I ended up parking the series while the READ_ONCE() implementation was
being overhauled, but with that merged during the recent merge window
and LTO patches being posted again [1], it was time for a refresh.

The patches allow architectures to provide their own implementation of
__READ_ONCE(). This serves two main purposes:

  1. It finally allows us to remove [smp_]read_barrier_depends() from the
 Linux memory model and make it an implementation detail of the Alpha
 back-end.

  2. It allows arm64 to upgrade __READ_ONCE() to have RCpc acquire
 semantics when compiling with LTO, since this may enable compiler
 optimisations that break dependency ordering and therefore we
 require fencing to ensure ordering within the CPU.

Both of these are implemented by this series.

I've kept Paul's acks from v1 since, although the series has changed
somewhat, the patches with his Ack have not changed materially in my
opinion. I will drop them if anybody objects.

In terms of merging this, my preference would be a stable branch in the
arm64 tree, which others can pull in as they need it.

Cheers,

Will

[1] https://lore.kernel.org/r/20200624203200.78870-1-samitolva...@google.com

Cc: Sami Tolvanen 
Cc: Nick Desaulniers 
Cc: Kees Cook 
Cc: Marco Elver 
Cc: "Paul E. McKenney" 
Cc: Josh Triplett 
Cc: Matt Turner 
Cc: Ivan Kokshaysky 
Cc: Richard Henderson 
Cc: Peter Zijlstra 
Cc: Alan Stern 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Arnd Bergmann 
Cc: Boqun Feng 
Cc: Catalin Marinas 
Cc: Mark Rutland 
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-al...@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: kernel-t...@android.com

--->8

SeongJae Park (1):
  Documentation/barriers/kokr: Remove references to
[smp_]read_barrier_depends()

Will Deacon (17):
  tools: bpf: Use local copy of headers including uapi/linux/filter.h
  compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
  asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
  alpha: Override READ_ONCE() with barriered implementation
  asm/rwonce: Remove smp_read_barrier_depends() invocation
  vhost: Remove redundant use of read_barrier_depends() barrier
  alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
  locking/barriers: Remove definitions for [smp_]read_barrier_depends()
  Documentation/barriers: Remove references to
[smp_]read_barrier_depends()
  tools/memory-model: Remove smp_read_barrier_depends() from informal
doc
  include/linux: Remove smp_read_barrier_depends() from comments
  checkpatch: Remove checks relating to [smp_]read_barrier_depends()
  arm64: Reduce the number of header files pulled into vmlinux.lds.S
  arm64: alternatives: Split up alternative.h
  arm64: cpufeatures: Add capability for LDAPR instruction
  arm64: alternatives: Remove READ_ONCE() usage during patch operation
  arm64: lto: Strengthen READ_ONCE() to acquire when CLANG_LTO=y

 .../RCU/Design/Requirements/Requirements.rst  |   2 +-
 Documentation/memory-barriers.txt | 156 +-
 .../translations/ko_KR/memory-barriers.txt| 146 +
 arch/alpha/include/asm/atomic.h   |  16 +-
 arch/alpha/include/asm/barrier.h  |  61 +---
 arch/alpha/include/asm/pgtable.h  |  10 +-
 arch/alpha/include/asm/rwonce.h   |  19 ++
 arch/arm64/Kconfig|   3 +
 arch/arm64/include/asm/alternative-macros.h   | 276 ++
 arch/arm64/include/asm/alternative.h  | 267 +
 arch/arm64/include/asm/cpucaps.h  |   3 +-
 arch/arm64/include/asm/insn.h |   3 +-
 arch/arm64/include/asm/kernel-pgtable.h   |   2 +-
 arch/arm64/include/asm/memory.h   |  11 +-
 arch/arm64/include/asm/rwonce.h   |  63 
 arch/arm64/include/asm/uaccess.h  |   1 +
 arch/arm64/kernel/alternative.c   |   7 +-
 arch/arm64/kernel/cpufeature.c|  10 +
 arch/arm64/kernel/entry.S |   1 +
 arch/arm64/kernel/vdso/Makefile   |   2 +-
 arch/arm64/kernel/vdso32/Makefile |   2 +-
 arch/arm64/kernel/vmlinux.lds.S   |   1 -
 arch/arm64/kvm/hyp-init.S |   1 +
 drivers/vhost/vhost.c |   5 -
 include/asm-generic/Kbuild|   1 +
 include/asm-generic/barrier.h |  17 --
 include/asm-generic/rwonce.h  |  82 ++
 include/linux/compiler.h  |  83 +-
 include/linux/percpu-refcount.h   |   2 +-
 include/linux/ptr_ring.h  |   2 +-
 mm/memory.c   |   2 +-
 scripts/checkpatch.pl |   9 +-
 tools/bpf/Makefile   

[PATCH 02/18] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h

2020-06-30 Thread Will Deacon
In preparation for allowing architectures to define their own
implementation of the READ_ONCE() macro, move the generic
{READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h'
file and into a new 'rwonce.h' header under 'asm-generic'.

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 include/asm-generic/Kbuild   |  1 +
 include/asm-generic/rwonce.h | 91 
 include/linux/compiler.h | 83 +---
 3 files changed, 94 insertions(+), 81 deletions(-)
 create mode 100644 include/asm-generic/rwonce.h

diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
index 44ec80e70518..74b0612601dd 100644
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -45,6 +45,7 @@ mandatory-y += pci.h
 mandatory-y += percpu.h
 mandatory-y += pgalloc.h
 mandatory-y += preempt.h
+mandatory-y += rwonce.h
 mandatory-y += sections.h
 mandatory-y += serial.h
 mandatory-y += shmparam.h
diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
new file mode 100644
index ..92cc2f223cb3
--- /dev/null
+++ b/include/asm-generic/rwonce.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
+ *
+ * These two macros will also work on aggregate data types like structs or
+ * unions.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+#ifndef __ASM_GENERIC_RWONCE_H
+#define __ASM_GENERIC_RWONCE_H
+
+#ifndef __ASSEMBLY__
+
+#include 
+#include 
+#include 
+
+#include 
+
+/*
+ * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
+ * atomicity or dependency ordering guarantees. Note that this may result
+ * in tears!
+ */
+#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
+
+#define __READ_ONCE_SCALAR(x)  \
+({ \
+   __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \
+   smp_read_barrier_depends(); \
+   (typeof(x))__x; \
+})
+
+#define READ_ONCE(x)   \
+({ \
+   compiletime_assert_rwonce_type(x);  \
+   __READ_ONCE_SCALAR(x);  \
+})
+
+#define __WRITE_ONCE(x, val)   \
+do {   \
+   *(volatile typeof(x) *)&(x) = (val);\
+} while (0)
+
+#define WRITE_ONCE(x, val) \
+do {   \
+   compiletime_assert_rwonce_type(x);  \
+   __WRITE_ONCE(x, val);   \
+} while (0)
+
+static __no_sanitize_or_inline
+unsigned long __read_once_word_nocheck(const void *addr)
+{
+   return __READ_ONCE(*(unsigned long *)addr);
+}
+
+/*
+ * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
+ */
+#define READ_ONCE_NOCHECK(x)   \
+({ \
+   unsigned long __x;  \
+   compiletime_assert(sizeof(x) == sizeof(__x),\
+   "Unsupported access size for READ_ONCE_NOCHECK().");\
+   __x = __read_once_word_nocheck(&(x));   \
+   smp_read_barrier_depends(); \
+   (typeof(x))__x; \
+})
+
+static __no_kasan_or_inline
+unsigned long read_word_at_a_time(const void *addr)
+{
+   kasan_check_read(addr, 1);
+   return *(unsigned long *)addr;
+}
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_GENERIC_RWONCE_H */
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 

[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation

2020-06-30 Thread Will Deacon
Rather than relying on the core code to use smp_read_barrier_depends()
as part of the READ_ONCE() definition, instead override __READ_ONCE()
in the Alpha code so that it is treated the same way as
smp_load_acquire().

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 arch/alpha/include/asm/barrier.h | 61 
 arch/alpha/include/asm/rwonce.h  | 19 ++
 2 files changed, 26 insertions(+), 54 deletions(-)
 create mode 100644 arch/alpha/include/asm/rwonce.h

diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
index 92ec486a4f9e..2ecd068d91d1 100644
--- a/arch/alpha/include/asm/barrier.h
+++ b/arch/alpha/include/asm/barrier.h
@@ -2,64 +2,17 @@
 #ifndef __BARRIER_H
 #define __BARRIER_H
 
-#include 
-
 #define mb()   __asm__ __volatile__("mb": : :"memory")
 #define rmb()  __asm__ __volatile__("mb": : :"memory")
 #define wmb()  __asm__ __volatile__("wmb": : :"memory")
 
-/**
- * read_barrier_depends - Flush all pending reads that subsequents reads
- * depend on.
- *
- * No data-dependent reads from memory-like regions are ever reordered
- * over this barrier.  All reads preceding this primitive are guaranteed
- * to access memory (but not necessarily other CPUs' caches) before any
- * reads following this primitive that depend on the data return by
- * any of the preceding reads.  This primitive is much lighter weight than
- * rmb() on most CPUs, and is never heavier weight than is
- * rmb().
- *
- * These ordering constraints are respected by both the local CPU
- * and the compiler.
- *
- * Ordering is not guaranteed by anything other than these primitives,
- * not even by data dependencies.  See the documentation for
- * memory_barrier() for examples and URLs to more information.
- *
- * For example, the following code would force ordering (the initial
- * value of "a" is zero, "b" is one, and "p" is ""):
- *
- * 
- * CPU 0   CPU 1
- *
- * b = 2;
- * memory_barrier();
- * p =  q = p;
- * read_barrier_depends();
- * d = *q;
- * 
- *
- * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends().  However,
- * the following code, with the same initial values for "a" and "b":
- *
- * 
- * CPU 0   CPU 1
- *
- * a = 2;
- * memory_barrier();
- * b = 3;  y = b;
- * read_barrier_depends();
- * x = a;
- * 
- *
- * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b".  Therefore, on some CPUs, such
- * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p)  \
+({ \
+   __unqual_scalar_typeof(*p) ___p1 =  \
+   (*(volatile typeof(___p1) *)(p));   \
+   compiletime_assert_atomic_type(*p); \
+   mb();   \
+   ___p1;  \
+})
 
 #ifdef CONFIG_SMP
 #define __ASM_SMP_MB   "\tmb\n"
diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include/asm/rwonce.h
new file mode 100644
index ..83a92e49a615
--- /dev/null
+++ b/arch/alpha/include/asm/rwonce.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#include 
+
+/*
+ * Alpha is apparently daft enough to reorder address-dependent loads
+ * on some CPU implementations. Knock some common sense into it with
+ * a memory barrier in READ_ONCE().
+ */
+#define __READ_ONCE(x) __smp_load_acquire(&(x))
+
+#include 
+
+#endif /* __ASM_RWONCE_H */
-- 
2.27.0.212.ge8ba1cc988-goog



[PATCH 01/18] tools: bpf: Use local copy of headers including uapi/linux/filter.h

2020-06-30 Thread Will Deacon
Pulling header files directly out of the kernel sources for inclusion in
userspace programs is highly error prone, not least because it bypasses
the kbuild infrastructure entirely and so may end up referencing other
header files that have not been generated.

Subsequent patches will cause compiler.h to pull in the ungenerated
asm/rwonce.h file via filter.h, breaking the build for tools/bpf:

  | $ make -C tools/bpf
  | make: Entering directory '/linux/tools/bpf'
  |   CC   bpf_jit_disasm.o
  |   LINK bpf_jit_disasm
  |   CC   bpf_dbg.o
  | In file included from /linux/include/uapi/linux/filter.h:9,
  |  from /linux/tools/bpf/bpf_dbg.c:41:
  | /linux/include/linux/compiler.h:247:10: fatal error: asm/rwonce.h: No such 
file or directory
  |  #include 
  |   ^~
  | compilation terminated.
  | make: *** [Makefile:61: bpf_dbg.o] Error 1
  | make: Leaving directory '/linux/tools/bpf'

Take a copy of the installed version of linux/filter.h  (i.e. the one
created by the 'headers_install' target) into tools/include/uapi/linux/
and adjust the BPF tool Makefile to reference the local include
directories instead of those in the main source tree.

Cc: Alexei Starovoitov 
Cc: Masahiro Yamada 
Suggested-by: Daniel Borkmann 
Reported-by: Xiao Yang 
Signed-off-by: Will Deacon 
---
 tools/bpf/Makefile|  3 +-
 tools/include/uapi/linux/filter.h | 90 +++
 2 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 tools/include/uapi/linux/filter.h

diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
index 6df1850f8353..8a69258fd8aa 100644
--- a/tools/bpf/Makefile
+++ b/tools/bpf/Makefile
@@ -9,7 +9,8 @@ MAKE = make
 INSTALL ?= install
 
 CFLAGS += -Wall -O2
-CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/include/uapi -I$(srctree)/include
+CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/tools/include/uapi \
+ -I$(srctree)/tools/include
 
 # This will work when bpf is built in tools env. where srctree
 # isn't set and when invoked from selftests build, where srctree
diff --git a/tools/include/uapi/linux/filter.h 
b/tools/include/uapi/linux/filter.h
new file mode 100644
index ..eaef459e7bd4
--- /dev/null
+++ b/tools/include/uapi/linux/filter.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Linux Socket Filter Data Structures
+ */
+
+#ifndef __LINUX_FILTER_H__
+#define __LINUX_FILTER_H__
+
+
+#include 
+#include 
+
+/*
+ * Current version of the filter code architecture.
+ */
+#define BPF_MAJOR_VERSION 1
+#define BPF_MINOR_VERSION 1
+
+/*
+ * Try and keep these values and structures similar to BSD, especially
+ * the BPF code definitions which need to match so you can share filters
+ */
+ 
+struct sock_filter {   /* Filter block */
+   __u16   code;   /* Actual filter code */
+   __u8jt; /* Jump true */
+   __u8jf; /* Jump false */
+   __u32   k;  /* Generic multiuse field */
+};
+
+struct sock_fprog {/* Required for SO_ATTACH_FILTER. */
+   unsigned short  len;/* Number of filter blocks */
+   struct sock_filter *filter;
+};
+
+/* ret - BPF_K and BPF_X also apply */
+#define BPF_RVAL(code)  ((code) & 0x18)
+#define BPF_A   0x10
+
+/* misc */
+#define BPF_MISCOP(code) ((code) & 0xf8)
+#define BPF_TAX 0x00
+#define BPF_TXA 0x80
+
+/*
+ * Macros for filter block array initializers.
+ */
+#ifndef BPF_STMT
+#define BPF_STMT(code, k) { (unsigned short)(code), 0, 0, k }
+#endif
+#ifndef BPF_JUMP
+#define BPF_JUMP(code, k, jt, jf) { (unsigned short)(code), jt, jf, k }
+#endif
+
+/*
+ * Number of scratch memory words for: BPF_ST and BPF_STX
+ */
+#define BPF_MEMWORDS 16
+
+/* RATIONALE. Negative offsets are invalid in BPF.
+   We use them to reference ancillary data.
+   Unlike introduction new instructions, it does not break
+   existing compilers/optimizers.
+ */
+#define SKF_AD_OFF(-0x1000)
+#define SKF_AD_PROTOCOL 0
+#define SKF_AD_PKTTYPE 4
+#define SKF_AD_IFINDEX 8
+#define SKF_AD_NLATTR  12
+#define SKF_AD_NLATTR_NEST 16
+#define SKF_AD_MARK20
+#define SKF_AD_QUEUE   24
+#define SKF_AD_HATYPE  28
+#define SKF_AD_RXHASH  32
+#define SKF_AD_CPU 36
+#define SKF_AD_ALU_XOR_X   40
+#define SKF_AD_VLAN_TAG44
+#define SKF_AD_VLAN_TAG_PRESENT 48
+#define SKF_AD_PAY_OFFSET  52
+#define SKF_AD_RANDOM  56
+#define SKF_AD_VLAN_TPID   60
+#define SKF_AD_MAX 64
+
+#define SKF_NET_OFF(-0x10)
+#define SKF_LL_OFF (-0x20)
+
+#define BPF_NET_OFFSKF_NET_OFF
+#define BPF_LL_OFF SKF_LL_OFF
+
+#endif /* __LINUX_FILTER_H__ */
-- 
2.27.0.212.ge8ba1cc988-goog

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH 03/18] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture

2020-06-30 Thread Will Deacon
The meat and potatoes of READ_ONCE() is defined by the __READ_ONCE()
macro, which uses a volatile cast in an attempt to avoid tearing of
byte, halfword, word and double-word accesses. Allow this to be
overridden by the architecture code in the case that things like memory
barriers are also required.
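
For illustration only (a minimal sketch mirroring the Alpha change
elsewhere in this series, not something this patch itself adds), an
architecture header can now provide its own definition before pulling
in the generic code:

  /* arch/<arch>/include/asm/rwonce.h: illustrative sketch only */
  #ifndef __ASM_RWONCE_H
  #define __ASM_RWONCE_H

  /* Assumes the arch's __smp_load_acquire() is visible here (asm/barrier.h). */
  #define __READ_ONCE(x) __smp_load_acquire(&(x))

  #include <asm-generic/rwonce.h>

  #endif /* __ASM_RWONCE_H */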

Acked-by: Paul E. McKenney 
Signed-off-by: Will Deacon 
---
 include/asm-generic/rwonce.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index 92cc2f223cb3..f9dfa88fc04d 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -33,7 +33,9 @@
  * atomicity or dependency ordering guarantees. Note that this may result
  * in tears!
  */
+#ifndef __READ_ONCE
 #define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#endif
 
 #define __READ_ONCE_SCALAR(x)  \
 ({ \
-- 
2.27.0.212.ge8ba1cc988-goog

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 1/6] mm/page_alloc: tweak comments in has_unmovable_pages()

2020-06-30 Thread David Hildenbrand
Let's move the split comment regarding bootmem allocations and memory
holes, especially in the context of ZONE_MOVABLE, to the PageReserved()
check.

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Signed-off-by: David Hildenbrand 
---
 mm/page_alloc.c | 22 ++
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 48eb0f1410d47..bd3ebf08f09b9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8207,14 +8207,6 @@ struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
unsigned long iter = 0;
unsigned long pfn = page_to_pfn(page);
 
-   /*
-* TODO we could make this much more efficient by not checking every
-* page in the range if we know all of them are in MOVABLE_ZONE and
-* that the movable zone guarantees that pages are migratable but
-* the later is not the case right now unfortunatelly. E.g. movablecore
-* can still lead to having bootmem allocations in zone_movable.
-*/
-
if (is_migrate_cma_page(page)) {
/*
 * CMA allocations (alloc_contig_range) really need to mark
@@ -8233,6 +8225,12 @@ struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
 
page = pfn_to_page(pfn + iter);
 
+   /*
+* Both, bootmem allocations and memory holes are marked
+* PG_reserved and are unmovable. We can even have unmovable
+* allocations inside ZONE_MOVABLE, for example when
+* specifying "movablecore".
+*/
if (PageReserved(page))
return page;
 
@@ -8306,14 +8304,6 @@ struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
 * it.  But now, memory offline itself doesn't call
 * shrink_node_slabs() and it still to be fixed.
 */
-   /*
-* If the page is not RAM, page_count()should be 0.
-* we don't need more check. This is an _used_ not-movable page.
-*
-* The problematic thing here is PG_reserved pages. PG_reserved
-* is set to both of a memory hole page and a _used_ kernel
-* page at boot.
-*/
return page;
}
return NULL;
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 2/6] mm/page_isolation: don't dump_page(NULL) in set_migratetype_isolate()

2020-06-30 Thread David Hildenbrand
Right now, if we have two isolations racing, we might trigger the
WARN_ON_ONCE() and call dump_page(NULL), dereferencing NULL. Let's just
return directly.

In the future, we might want to report -EAGAIN to the caller instead, as
this could indicate a temporary isolation failure only.

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Signed-off-by: David Hildenbrand 
---
 mm/page_isolation.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index f6d07c5f0d34d..553b49a34cf71 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -29,10 +29,12 @@ static int set_migratetype_isolate(struct page *page, int 
migratetype, int isol_
/*
 * We assume the caller intended to SET migrate type to isolate.
 * If it is already set, then someone else must have raced and
-* set it before us.  Return -EBUSY
+* set it before us.
 */
-   if (is_migrate_isolate_page(page))
-   goto out;
+   if (is_migrate_isolate_page(page)) {
+   spin_unlock_irqrestore(&zone->lock, flags);
+   return -EBUSY;
+   }
 
/*
 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 5/6] mm/page_alloc: restrict ZONE_MOVABLE optimization in has_unmovable_pages() to memory offlining

2020-06-30 Thread David Hildenbrand
We can already have pages that can be offlined but not allocated in
ZONE_MOVABLE - PageHWPoison pages. While these pages can be skipped when
offlining ("moving them to /dev/null"), we cannot move them when
allocating.

virtio-mem managed memory is similar. The logical memory holes
corresponding to unplugged memory ranges can be skipped when offlining;
however, the pages cannot be moved. Currently, virtio-mem special-cases
ZONE_MOVABLE, such that:
- partially plugged memory blocks it added to Linux cannot be onlined to
  ZONE_MOVABLE
- when unplugging memory, it will never consider memory blocks that were
  onlined to ZONE_MOVABLE

We also want to support ZONE_MOVABLE in virtio-mem for both cases. Note
that virtio-mem does not blindly try to unplug random pages within its
managed memory region. It always plugs memory left-to-right and tries to
unplug memory right-to-left - in roughly MAX_ORDER - 1 granularity. In
theory, the movable ZONE part would only shrink when unplugging memory
from ZONE_MOVABLE.

Let's perform the ZONE_MOVABLE optimization only for memory offlining,
such that we reduce the number of false positives from
has_unmovable_pages() in case of alloc_contig_range() on ZONE_MOVABLE.

Note: We currently don't seem to have any user of alloc_contig_range()
that actually uses ZONE_MOVABLE. This change is mostly valuable for the
documentation.
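
To make the distinction concrete, here is a rough sketch of how the two
users end up calling has_unmovable_pages() with different flags
(illustrative only, not taken from this patch; argument names are
assumed):

  /* Memory offlining: hwpoisoned/offline pages may be skipped. */
  unmovable = has_unmovable_pages(zone, page, MIGRATE_MOVABLE,
                                  MEMORY_OFFLINE | REPORT_FAILURE);

  /* alloc_contig_range(): every page must really be migratable. */
  unmovable = has_unmovable_pages(zone, page, migratetype, 0);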

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Signed-off-by: David Hildenbrand 
---
 mm/page_alloc.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bd3ebf08f09b9..45077d74d975d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8237,9 +8237,12 @@ struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
/*
 * If the zone is movable and we have ruled out all reserved
 * pages then it should be reasonably safe to assume the rest
-* is movable.
+* is movable. As we can have some pages in the movable zone
+* that are only considered movable for memory offlining (esp.,
+* PageHWPoison and PageOffline that will be skipped), we
+* perform this optimization only for memory offlining.
 */
-   if (zone_idx(zone) == ZONE_MOVABLE)
+   if ((flags & MEMORY_OFFLINE) && zone_idx(zone) == ZONE_MOVABLE)
continue;
 
/*
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 3/6] mm/page_isolation: drop WARN_ON_ONCE() in set_migratetype_isolate()

2020-06-30 Thread David Hildenbrand
Inside has_unmovable_pages(), we have a comment describing how unmovable
data could end up in ZONE_MOVABLE - via "movablecore". Also, besides
checking if the first page in the pageblock is reserved, we don't
perform any further checks in case of ZONE_MOVABLE.

In case of memory offlining, we set REPORT_FAILURE, properly
dump_page() the page and handle the error gracefully.
alloc_contig_pages() users currently never allocate from ZONE_MOVABLE.
E.g., hugetlb uses alloc_contig_pages() for the allocation of gigantic
pages only, which will never end up on the MOVABLE zone
(see htlb_alloc_mask()).

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Signed-off-by: David Hildenbrand 
---
 mm/page_isolation.c | 16 ++--
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 553b49a34cf71..02a01bff6b219 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -58,16 +58,12 @@ static int set_migratetype_isolate(struct page *page, int 
migratetype, int isol_
spin_unlock_irqrestore(&zone->lock, flags);
if (!ret) {
drain_all_pages(zone);
-   } else {
-   WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
-
-   if ((isol_flags & REPORT_FAILURE) && unmovable)
-   /*
-* printk() with zone->lock held will likely trigger a
-* lockdep splat, so defer it here.
-*/
-   dump_page(unmovable, "unmovable page");
-   }
+   } else if ((isol_flags & REPORT_FAILURE) && unmovable)
+   /*
+* printk() with zone->lock held will likely trigger a
+* lockdep splat, so defer it here.
+*/
+   dump_page(unmovable, "unmovable page");
 
return ret;
 }
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 0/6] mm / virtio-mem: support ZONE_MOVABLE

2020-06-30 Thread David Hildenbrand
Currently, virtio-mem does not really support ZONE_MOVABLE. While it allows
onlining fully plugged memory blocks to ZONE_MOVABLE, it does not allow
onlining partially plugged memory blocks to ZONE_MOVABLE and never
considers such memory blocks when unplugging memory. This might be
surprising for users (especially if onlining suddenly fails).

Let's support partially plugged memory blocks in ZONE_MOVABLE, allowing
partially plugged memory blocks to be onlined to ZONE_MOVABLE and also
allowing unplugging from such memory blocks.

This is especially helpful for testing, but also paves the way for
virtio-mem optimizations, allowing more memory to get reliably unplugged.

Clean up has_unmovable_pages() and set_migratetype_isolate(), providing
better documentation of how ZONE_MOVABLE interacts with different kinds of
unmovable pages (memory offlining vs. alloc_contig_range()).

David Hildenbrand (6):
  mm/page_alloc: tweak comments in has_unmovable_pages()
  mm/page_isolation: don't dump_page(NULL) in set_migratetype_isolate()
  mm/page_isolation: drop WARN_ON_ONCE() in set_migratetype_isolate()
  mm/page_isolation: cleanup set_migratetype_isolate()
  mm/page_alloc: restrict ZONE_MOVABLE optimization in
has_unmovable_pages() to memory offlining
  virtio-mem: don't special-case ZONE_MOVABLE

 drivers/virtio/virtio_mem.c | 47 +++--
 mm/page_alloc.c | 29 +--
 mm/page_isolation.c | 40 ++-
 3 files changed, 36 insertions(+), 80 deletions(-)

-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 4/6] mm/page_isolation: cleanup set_migratetype_isolate()

2020-06-30 Thread David Hildenbrand
Let's clean it up a bit, simplifying error handling and getting rid of
the label.

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Signed-off-by: David Hildenbrand 
---
 mm/page_isolation.c | 18 +++---
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 02a01bff6b219..5f869bef23fa4 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -17,12 +17,9 @@
 
 static int set_migratetype_isolate(struct page *page, int migratetype, int 
isol_flags)
 {
-   struct page *unmovable = NULL;
-   struct zone *zone;
+   struct zone *zone = page_zone(page);
+   struct page *unmovable;
unsigned long flags;
-   int ret = -EBUSY;
-
-   zone = page_zone(page);
 
spin_lock_irqsave(&zone->lock, flags);
 
@@ -51,21 +48,20 @@ static int set_migratetype_isolate(struct page *page, int 
migratetype, int isol_
NULL);
 
__mod_zone_freepage_state(zone, -nr_pages, mt);
-   ret = 0;
+   spin_unlock_irqrestore(&zone->lock, flags);
+   drain_all_pages(zone);
+   return 0;
}
 
-out:
spin_unlock_irqrestore(&zone->lock, flags);
-   if (!ret) {
-   drain_all_pages(zone);
-   } else if ((isol_flags & REPORT_FAILURE) && unmovable)
+   if (isol_flags & REPORT_FAILURE)
/*
 * printk() with zone->lock held will likely trigger a
 * lockdep splat, so defer it here.
 */
dump_page(unmovable, "unmovable page");
 
-   return ret;
+   return -EBUSY;
 }
 
 static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
-- 
2.26.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v1 6/6] virtio-mem: don't special-case ZONE_MOVABLE

2020-06-30 Thread David Hildenbrand
Let's allow onlining partially plugged memory blocks to ZONE_MOVABLE
and also consider memory blocks that were onlined to ZONE_MOVABLE when
unplugging memory. While unplugged memory blocks are, in general,
unmovable, they can be skipped when offlining memory.

virtio-mem only unplugs fairly big chunks (in the megabyte range) and
tries to shrink the memory region rather than randomly choosing memory. In
theory, if all other pages in the movable zone would be movable, virtio-mem
would only shrink that zone and not create any kind of fragmentation.

Note: Support for defragmentation is planned, to deal with fragmentation
after unplug due to memory chunks within memory blocks that could not
get unplugged before (e.g., somebody pinning pages within ZONE_MOVABLE
for a longer time).

Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Cc: Jason Wang 
Signed-off-by: David Hildenbrand 
---
 drivers/virtio/virtio_mem.c | 47 +++--
 1 file changed, 8 insertions(+), 39 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index f26f5f64ae822..2ddfc4a0e2ee0 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -36,18 +36,10 @@ enum virtio_mem_mb_state {
VIRTIO_MEM_MB_STATE_OFFLINE,
/* Partially plugged, fully added to Linux, offline. */
VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL,
-   /* Fully plugged, fully added to Linux, online (!ZONE_MOVABLE). */
+   /* Fully plugged, fully added to Linux, online. */
VIRTIO_MEM_MB_STATE_ONLINE,
-   /* Partially plugged, fully added to Linux, online (!ZONE_MOVABLE). */
+   /* Partially plugged, fully added to Linux, online. */
VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL,
-   /*
-* Fully plugged, fully added to Linux, online (ZONE_MOVABLE).
-* We are not allowed to allocate (unplug) parts of this block that
-* are not movable (similar to gigantic pages). We will never allow
-* to online OFFLINE_PARTIAL to ZONE_MOVABLE (as they would contain
-* unmovable parts).
-*/
-   VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE,
VIRTIO_MEM_MB_STATE_COUNT
 };
 
@@ -526,21 +518,10 @@ static bool virtio_mem_owned_mb(struct virtio_mem *vm, 
unsigned long mb_id)
 }
 
 static int virtio_mem_notify_going_online(struct virtio_mem *vm,
- unsigned long mb_id,
- enum zone_type zone)
+ unsigned long mb_id)
 {
switch (virtio_mem_mb_get_state(vm, mb_id)) {
case VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL:
-   /*
-* We won't allow to online a partially plugged memory block
-* to the MOVABLE zone - it would contain unmovable parts.
-*/
-   if (zone == ZONE_MOVABLE) {
-   dev_warn_ratelimited(&vm->vdev->dev,
-"memory block has holes, MOVABLE 
not supported\n");
-   return NOTIFY_BAD;
-   }
-   return NOTIFY_OK;
case VIRTIO_MEM_MB_STATE_OFFLINE:
return NOTIFY_OK;
default:
@@ -560,7 +541,6 @@ static void virtio_mem_notify_offline(struct virtio_mem *vm,
VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL);
break;
case VIRTIO_MEM_MB_STATE_ONLINE:
-   case VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE:
virtio_mem_mb_set_state(vm, mb_id,
VIRTIO_MEM_MB_STATE_OFFLINE);
break;
@@ -579,24 +559,17 @@ static void virtio_mem_notify_offline(struct virtio_mem 
*vm,
virtio_mem_retry(vm);
 }
 
-static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long 
mb_id,
-enum zone_type zone)
+static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long 
mb_id)
 {
unsigned long nb_offline;
 
switch (virtio_mem_mb_get_state(vm, mb_id)) {
case VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL:
-   BUG_ON(zone == ZONE_MOVABLE);
virtio_mem_mb_set_state(vm, mb_id,
VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL);
break;
case VIRTIO_MEM_MB_STATE_OFFLINE:
-   if (zone == ZONE_MOVABLE)
-   virtio_mem_mb_set_state(vm, mb_id,
-   VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE);
-   else
-   virtio_mem_mb_set_state(vm, mb_id,
-   VIRTIO_MEM_MB_STATE_ONLINE);
+   virtio_mem_mb_set_state(vm, mb_id, VIRTIO_MEM_MB_STATE_ONLINE);
break;
default:
BUG();
@@ -675,7 +648,6 @@ static int virtio_mem_memory_notifier_cb(struct 
notifier_block *nb,
const unsigned long start = 

Re: [RFC 0/3] virtio: NUMA-aware memory allocation

2020-06-30 Thread Stefan Hajnoczi
On Mon, Jun 29, 2020 at 11:28:41AM -0400, Michael S. Tsirkin wrote:
> On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> > On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> > > 
> > > On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote:
> > > > These patches are not ready to be merged because I was unable to 
> > > > measure a
> > > > performance improvement. I'm publishing them so they are archived in 
> > > > case
> > > > someone picks up this work again in the future.
> > > > 
> > > > The goal of these patches is to allocate virtqueues and driver state 
> > > > from the
> > > > device's NUMA node for optimal memory access latency. Only guests with 
> > > > a vNUMA
> > > > topology and virtio devices spread across vNUMA nodes benefit from 
> > > > this.  In
> > > > other cases the memory placement is fine and we don't need to take NUMA 
> > > > into
> > > > account inside the guest.
> > > > 
> > > > These patches could be extended to virtio_net.ko and other devices in 
> > > > the
> > > > future. I only tested virtio_blk.ko.
> > > > 
> > > > The benchmark configuration was designed to trigger worst-case NUMA 
> > > > placement:
> > > >   * Physical NVMe storage controller on host NUMA node 0
> 
> It's possible that numa is not such a big deal for NVMe.
> And it's possible that bios misconfigures ACPI reporting NUMA placement
> incorrectly.
> I think that the best thing to try is to use a ramdisk
> on a specific numa node.

Using ramdisk is an interesting idea, thanks.

Stefan
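
For reference, the core idea of the series (allocating virtqueues and
driver state on the device's NUMA node) boils down to something like the
sketch below; it is illustrative only and not lifted from the posted
patches:

  struct virtio_blk *vblk;
  int node = dev_to_node(&vdev->dev);

  /* Allocate per-device driver state on the device's NUMA node. */
  vblk = kzalloc_node(sizeof(*vblk), GFP_KERNEL, node);
  if (!vblk)
          return -ENOMEM;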


___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH v3 1/1] s390: virtio: let arch accept devices without IOMMU feature

2020-06-30 Thread Cornelia Huck
On Mon, 29 Jun 2020 17:18:09 -0400
"Michael S. Tsirkin"  wrote:

> On Mon, Jun 29, 2020 at 06:48:28PM +0200, Pierre Morel wrote:
> > 
> > 
> > On 2020-06-29 18:09, Michael S. Tsirkin wrote:  
> > > On Wed, Jun 17, 2020 at 12:43:57PM +0200, Pierre Morel wrote:  
> > > > An architecture protecting the guest memory against unauthorized host
> > > > access may want to enforce VIRTIO I/O device protection through the
> > > > use of VIRTIO_F_IOMMU_PLATFORM.
> > > > Let's give a chance to the architecture to accept or not devices
> > > > without VIRTIO_F_IOMMU_PLATFORM.  
> > > 
> > > I agree it's a bit misleading. Protection is enforced by memory
> > > encryption, you can't trust the hypervisor to report the bit correctly
> > > so using that as a securoty measure would be pointless.
> > > The real gain here is that broken configs are easier to
> > > debug.
> > > 
> > > Here's an attempt at a better description:
> > > 
> > >   On some architectures, guest knows that VIRTIO_F_IOMMU_PLATFORM is
> > >   required for virtio to function: e.g. this is the case on s390 protected
> > >   virt guests, since otherwise guest passes encrypted guest memory to 
> > > devices,
> > >   which the device can't read. Without VIRTIO_F_IOMMU_PLATFORM the
> > >   result is that affected memory (or even a whole page containing
> > >   it is corrupted). Detect and fail probe instead - that is easier
> > >   to debug.  

s/guest/the guest/ (x2)

> > 
> > Thanks indeed better aside from the "encrypted guest memory": the mechanism
> > used to avoid the access to the guest memory from the host by s390 is not
> > encryption but a hardware feature denying the general host access and
> > allowing pieces of memory to be shared between guest and host.  
> 
> s/encrypted/protected/
> 
> > As a consequence the data read from memory is not corrupted but not read at
> > all and the read error kills the hypervizor with a SIGSEGV.  
> 
> s/(or even a whole page containing it is corrupted)/can not be
>   read and the read error kills the hypervizor with a SIGSEGV/

s/hypervizor/hypervisor/

> 
> 
> As an aside, we could maybe handle that more gracefully
> on the hypervisor side.
> 
> >   
> > > 
> > > however, now that we have described what it is (hypervisor
> > > misconfiguration) I ask a question: can we be sure this will never
> > > ever work? E.g. what if some future hypervisor gains ability to
> > > access the protected guest memory in some abstractly secure manner?  
> > 
> > The goal of the s390 PV feature is to avoid this possibility so I don't
> > think so; however, there is a possibility that some hardware VIRTIO device
> > gains access to the guest's protected memory, even though such a device does not exist
> > yet.
> > 
> > At the moment such device exists we will need a driver for it, at least to
> > enable the feature and apply policies, it is also one of the reasons why a
> > hook to the architecture is interesting.  
> 
> 
> Not necessarily, it could also be fully transparent. See e.g.
> recent AMD advances allowing unmodified guests with SEV.

I guess it depends on the architecture's protection mechanism and
threat model whether this makes sense.

> 
> 
> > > We are blocking this here, and it's hard to predict the future,
> > > and a broken hypervisor can always find ways to crash the guest ...  
> > 
> > yes, this is also something to fix on the hypervizor side, Halil is working
> > on it.
> >   
> > > 
> > > IMHO it would be safer to just print a warning.
> > > What do you think?  
> > 
> > Sadly, putting a warning may not help as qemu is killed if it accesses the
> > protected memory.
> > Also note that the crash occurs not only on start but also on hotplug.

Failing to start a guest is not that bad IMHO, but crashing a guest
that is running perfectly fine is. I vote for just failing the probe if
preconditions are not met.

> > 
> > Thanks,
> > Pierre  
> 
> Well, that depends on where the warning goes. If it's on a serial port
> it might be reported host side before the crash triggers.  But
> interesting point generally. How about a feature to send a warning code
> or string to host then?

I would generally expect a guest warning to stay on the guest side --
especially as the host admin and the guest admin may be different
persons. So having a general way to send an alert from a guest to
the host is not uninteresting, although we need to be careful to avoid
the guest being able to DOS the host.
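
To make the proposal concrete, the probe-time check being discussed could
look roughly like the sketch below. It is illustrative only: the
arch_wants_virtio_iommu_platform() hook is hypothetical and not defined
by any posted patch.

  static int virtio_check_restricted_mem_access(struct virtio_device *dev)
  {
          /*
           * On protected guests, a device that did not negotiate
           * VIRTIO_F_IOMMU_PLATFORM cannot access guest memory, so fail
           * the probe early instead of corrupting I/O later.
           */
          if (arch_wants_virtio_iommu_platform() &&
              !virtio_has_feature(dev, VIRTIO_F_IOMMU_PLATFORM))
                  return -ENODEV;
          return 0;
  }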

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization