Re: [Qemu-devel] [Qemu-ppc] [PATCH v2] spapr: quantify error messages regarding capability settings

2019-08-03 Thread David Gibson
On Fri, Aug 02, 2019 at 11:03:48AM +1000, Daniel Black wrote:
> On Thu, 1 Aug 2019 12:41:59 +0200
> Greg Kurz  wrote:
> 
> > On Thu,  1 Aug 2019 13:38:19 +1000
> > Daniel Black  wrote:
> > 
> > > It's not immediately obvious how a cap-X=Y setting needs to be applied
> > > to the command line, so for spapr capability error messages this
> > > has been clarified to:
> > > 
> ...
> > > index bbb001f84a..1c0222a081 100644
> > > --- a/hw/ppc/spapr_caps.c
> > > +++ b/hw/ppc/spapr_caps.c
> > > @@ -37,6 +37,8 @@
> > >  
> > >  #include "hw/ppc/spapr.h"
> > >  
> > > +#define CAPABILITY_ERROR(X) "appending -machine " X  
> > 
> > I would make that:
> > 
> > #define CAPABILITY_HINT(X) "try appending -machine " X
> > 
> > because it is really a hint for the user, not an
> > error,
> 
> Works for me. At the lowest layer it is a hint.

Oh.. of course it is.  Which means we should be using the
error_append_hint() system that's for exactly this sort of
information.

Sorry I didn't think of that earlier.

> 
> > and all original strings have "try",
> 
> True.
> 
> > except...
> 
> 
> > > @@ -249,11 +255,13 @@ static void cap_safe_cache_apply(SpaprMachineState *spapr, uint8_t val,
> > >  if (tcg_enabled() && val) {
> > >  /* TCG only supports broken, allow other values and print a warning */
> > >  error_setg(&local_err,
> > > -   "TCG doesn't support requested feature, cap-cfpc=%s",
> > > +   "TCG doesn't support requested feature, "
> > > +   CAPABILITY_ERROR("cap-cfpc=%s"),
> > 
> > ... this one, but it doesn't look like a hint to me. It just tells
> > the user which cap is unsupported.
> 
> This is one of 3 that use local_err (commit
> 006e9d3618698eeef2f3e07628d22cb6f5c2a039) - it is intentionally just a
> warning. To TL;DR the commit/Suraj conversation: defaults apply
> to all machine types; hardware security measures don't make sense in
> TCG; hence a warning.
> 
> Every function using CAPABILITY_[ERROR|HINT] is called by
> spapr_caps_apply with its errp as &error_fatal (intentionally - spoke
> to Suraj - migrations to machines without capabilities need to fail and
> defaults (kvm) should be secure unless explicitly disabled).
> 
> > >cap_cfpc_possible.vals[val]);
> > >  } else if (kvm_enabled() && (val > kvm_val)) {
> > >  error_setg(errp,
> > > -"Requested safe cache capability level not supported by kvm, try cap-cfpc=%s",
> > > +"Requested safe cache capability level not supported by kvm, try "
> > > +   CAPABILITY_ERROR("cap-cfpc=%s"),
> > >cap_cfpc_possible.vals[kvm_val]);
> > 
> > Also, we have a dedicated API for hints, which are only printed under
> > the monitor but ignored under QMP.
> 
> Ok.
>  
> > Not sure why it isn't used here but it should be something like:
> 
> If error_append_hint should be used for fatal errors (all that use
> errp), then this pattern should be applied further to
> CAPABILITY_[HINT|ERROR] functions.
> 
> If error_append_hint needs to apply to warnings
> cap_[cfpc/sbbc/ibs]_apply functions need to use it.
> 
> Would I be right in assuming that the below pattern needs to apply
> to both of these cases?
> 
> > error_setg(errp,
> >"Requested safe cache capability level not supported by kvm");
> > error_append_hint(errp, CAPABILITY_HINT("cap-cfpc=%s") "\n",
> >   cap_cfpc_possible.vals[kvm_val]);
> 
> This is going a little beyond the scope of fixing a message, OK, but
> let's not extend the scope too much more.
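For what it's worth, a rough sketch of that combined pattern applied to both
cases might look like the following (the CAPABILITY_HINT macro and the exact
message wording here are assumptions taken from this thread, not a finished
patch):

    #define CAPABILITY_HINT(X) "try appending -machine " X

    /* Fatal path: errp is &error_fatal when called via spapr_caps_apply() */
    error_setg(errp,
               "Requested safe cache capability level not supported by kvm");
    error_append_hint(errp, CAPABILITY_HINT("cap-cfpc=%s") "\n",
                      cap_cfpc_possible.vals[kvm_val]);

    /* Warning-only TCG path, accumulated in local_err */
    error_setg(&local_err, "TCG doesn't support requested feature");
    error_append_hint(&local_err, CAPABILITY_HINT("cap-cfpc=%s") "\n",
                      cap_cfpc_possible.vals[val]);

Since the hint is dropped under QMP (as noted above), a message like the TCG
one may still want to name the offending cap in the error text itself.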
> 

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [Qemu-devel] [PATCH v2] ivshmem-server: Terminate also on SIGINT

2019-08-03 Thread Claudio Fontana
On 8/3/19 3:22 PM, Jan Kiszka wrote:
> From: Jan Kiszka 
> 
> Allows shutting down a foreground session via Ctrl-C.
> 
> Signed-off-by: Jan Kiszka 
> ---
> 
> Changes in v2:
>  - adjust error message
> 
>  contrib/ivshmem-server/main.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/contrib/ivshmem-server/main.c b/contrib/ivshmem-server/main.c
> index 197c79c57e..e4cd35f74c 100644
> --- a/contrib/ivshmem-server/main.c
> +++ b/contrib/ivshmem-server/main.c
> @@ -223,8 +223,9 @@ main(int argc, char *argv[])
>  sa_quit.sa_handler = ivshmem_server_quit_cb;
>  sa_quit.sa_flags = 0;
>  if (sigemptyset(&sa_quit.sa_mask) == -1 ||
> -sigaction(SIGTERM, &sa_quit, 0) == -1) {
> -perror("failed to add SIGTERM handler; sigaction");
> +sigaction(SIGTERM, &sa_quit, 0) == -1 ||
> +sigaction(SIGINT, &sa_quit, 0) == -1) {
> +perror("failed to add signal handler; sigaction");
>  goto err;
>  }
> 
> --
> 2.16.4
> 
> 
Reviewed-by: Claudio Fontana 



Re: [Qemu-devel] [PATCH v7 0/6] target/arm: Implement ARMv8.5-BTI for linux-user

2019-08-03 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20190803210803.5701-1-richard.hender...@linaro.org/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH v7 0/6] target/arm: Implement ARMv8.5-BTI for 
linux-user
Message-id: 20190803210803.5701-1-richard.hender...@linaro.org

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20190803210803.5701-1-richard.hender...@linaro.org 
-> patchew/20190803210803.5701-1-richard.hender...@linaro.org
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for 
path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 
'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for 
path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for 
path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for 
path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for 
path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) 
registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 
'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' 
(https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 
'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' 
(https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 
'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out 
'20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) 
registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' 
(https://github.com/openssl/openssl) registered for path 
'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': 
checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out 
'50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered 
for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) 
registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': 
checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked 
out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 
'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out 

[Qemu-devel] [PATCH v7 2/6] linux-user: Validate mmap/mprotect prot value

2019-08-03 Thread Richard Henderson
The kernel will return -EINVAL for bits set in the prot argument
that are unknown or invalid.  Previously we simply masked the value
down to the bits that we care about, silently ignoring any others.

Introduce validate_prot_to_pageflags to perform this check in a
single place between the two syscalls.  Differentiate between
the target and host versions of prot.  Compute the qemu internal
page_flags value at the same time.

Signed-off-by: Richard Henderson 
---
 linux-user/mmap.c | 105 --
 1 file changed, 74 insertions(+), 31 deletions(-)
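As a quick illustration of the intended calling convention (hypothetical
values, not part of the patch):

    int host_prot, page_flags;

    /* Only known bits set: returns non-zero page_flags and fills host_prot. */
    page_flags = validate_prot_to_pageflags(&host_prot, PROT_READ | PROT_WRITE);

    /* Any unknown bit set: returns 0 and the caller replies -TARGET_EINVAL. */
    page_flags = validate_prot_to_pageflags(&host_prot, 0x80000000);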

diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 46a6e3a761..c1a188ec0b 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -60,11 +60,37 @@ void mmap_fork_end(int child)
 pthread_mutex_unlock(&mmap_mutex);
 }
 
+/*
+ * Validate target prot bitmask.
+ * Return the prot bitmask for the host in *HOST_PROT.
+ * Return 0 if the target prot bitmask is invalid, otherwise
+ * the internal qemu page_flags (which will include PAGE_VALID).
+ */
+static int validate_prot_to_pageflags(int *host_prot, int prot)
+{
+int valid = PROT_READ | PROT_WRITE | PROT_EXEC | TARGET_PROT_SEM;
+int page_flags = (prot & PAGE_BITS) | PAGE_VALID;
+
+/*
+ * While PROT_SEM was added with the initial futex api, and continues
+ * to be accepted, it is documented as unused on all architectures.
+ * Moreover, it was never added to glibc so we don't have a definition
+ * for the host.  Follow the kernel and ignore it.
+ *
+ * TODO: We do not actually have to map guest pages as executable,
+ * since they will not be directly executed by the host.  We only
+ * need to remember exec within page_flags.
+ */
+*host_prot = prot & (PROT_READ | PROT_WRITE | PROT_EXEC);
+
+return prot & ~valid ? 0 : page_flags;
+}
+
 /* NOTE: all the constants are the HOST ones, but addresses are target. */
-int target_mprotect(abi_ulong start, abi_ulong len, int prot)
+int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
 {
 abi_ulong end, host_start, host_end, addr;
-int prot1, ret;
+int prot1, ret, page_flags, host_prot;
 
 #ifdef DEBUG_MMAP
 printf("mprotect: start=0x" TARGET_ABI_FMT_lx
@@ -74,56 +100,65 @@ int target_mprotect(abi_ulong start, abi_ulong len, int 
prot)
prot & PROT_EXEC ? 'x' : '-');
 #endif
 
-if ((start & ~TARGET_PAGE_MASK) != 0)
+if ((start & ~TARGET_PAGE_MASK) != 0) {
 return -TARGET_EINVAL;
+}
+page_flags = validate_prot_to_pageflags(&host_prot, target_prot);
+if (!page_flags) {
+return -TARGET_EINVAL;
+}
 len = TARGET_PAGE_ALIGN(len);
 end = start + len;
 if (!guest_range_valid(start, len)) {
 return -TARGET_ENOMEM;
 }
-prot &= PROT_READ | PROT_WRITE | PROT_EXEC;
-if (len == 0)
+if (len == 0) {
 return 0;
+}
 
 mmap_lock();
 host_start = start & qemu_host_page_mask;
 host_end = HOST_PAGE_ALIGN(end);
 if (start > host_start) {
 /* handle host page containing start */
-prot1 = prot;
-for(addr = host_start; addr < start; addr += TARGET_PAGE_SIZE) {
+prot1 = host_prot;
+for (addr = host_start; addr < start; addr += TARGET_PAGE_SIZE) {
 prot1 |= page_get_flags(addr);
 }
 if (host_end == host_start + qemu_host_page_size) {
-for(addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
+for (addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
 prot1 |= page_get_flags(addr);
 }
 end = host_end;
 }
-ret = mprotect(g2h(host_start), qemu_host_page_size, prot1 & 
PAGE_BITS);
-if (ret != 0)
+ret = mprotect(g2h(host_start), qemu_host_page_size,
+   prot1 & PAGE_BITS);
+if (ret != 0) {
 goto error;
+}
 host_start += qemu_host_page_size;
 }
 if (end < host_end) {
-prot1 = prot;
-for(addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
+prot1 = host_prot;
+for (addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
 prot1 |= page_get_flags(addr);
 }
-ret = mprotect(g2h(host_end - qemu_host_page_size), 
qemu_host_page_size,
-   prot1 & PAGE_BITS);
-if (ret != 0)
+ret = mprotect(g2h(host_end - qemu_host_page_size),
+   qemu_host_page_size, prot1 & PAGE_BITS);
+if (ret != 0) {
 goto error;
+}
 host_end -= qemu_host_page_size;
 }
 
 /* handle the pages in the middle */
 if (host_start < host_end) {
-ret = mprotect(g2h(host_start), host_end - host_start, prot);
-if (ret != 0)
+ret = mprotect(g2h(host_start), host_end - host_start, host_prot);
+if (ret != 0) {
 goto error;
+}
 }
-page_set_flags(start, start + len, prot | PAGE_VALID);
+

[Qemu-devel] [PATCH v7 6/6] tests/tcg/aarch64: Add bti smoke test

2019-08-03 Thread Richard Henderson
This will build with older toolchains, without the upstream support
for -mbranch-protection.  Such a toolchain will produce a warning
in such cases,

ld: warning: /tmp/ccyZt0kq.o: unsupported GNU_PROPERTY_TYPE (5) \
type: 0xc0000000

but it still places the note at the correct location in the binary
for processing by the runtime loader.

Signed-off-by: Richard Henderson 
---
 tests/tcg/aarch64/bti-1.c | 77 +++
 tests/tcg/aarch64/bti-crt.inc.c   | 69 +++
 tests/tcg/aarch64/Makefile.target |  3 ++
 tests/tcg/aarch64/bti.ld  | 15 ++
 4 files changed, 164 insertions(+)
 create mode 100644 tests/tcg/aarch64/bti-1.c
 create mode 100644 tests/tcg/aarch64/bti-crt.inc.c
 create mode 100644 tests/tcg/aarch64/bti.ld
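For reference, with a toolchain that does have the upstream support, the
compiler is expected to do the equivalent of the following per function (my
assumption for illustration, not part of the patch; older assemblers spell
"bti c" as "hint #34"):

    /*
     * func:
     *     bti c       // landing pad: valid target for BR/BLR when BTYPE != 0
     *     ...
     *
     * plus a .note.gnu.property section equivalent to the one written by
     * hand in bti-crt.inc.c below.
     */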

diff --git a/tests/tcg/aarch64/bti-1.c b/tests/tcg/aarch64/bti-1.c
new file mode 100644
index 00..2aee57ea7a
--- /dev/null
+++ b/tests/tcg/aarch64/bti-1.c
@@ -0,0 +1,77 @@
+/*
+ * Branch target identification, basic notskip cases.
+ */
+
+#include "bti-crt.inc.c"
+
+/*
+ * Work around lack of -mbranch-protection=standard in older toolchains.
+ * The signal handler is invoked by the kernel with PSTATE.BTYPE=2, which
+ * means that the handler must begin with a marker like BTI_C.
+ */
+asm("skip2_sigill1:\n\
+   hint#34\n\
+   b   skip2_sigill2\n\
+.type skip2_sigill1,%function\n\
+.size skip2_sigill1,8");
+
+extern void skip2_sigill1(int sig, siginfo_t *info, ucontext_t *uc)
+__attribute__((visibility("hidden")));
+
+static void __attribute__((used))
+skip2_sigill2(int sig, siginfo_t *info, ucontext_t *uc)
+{
+uc->uc_mcontext.pc += 8;
+uc->uc_mcontext.pstate = 1;
+}
+
+#define NOP   "nop"
+#define BTI_N "hint #32"
+#define BTI_C "hint #34"
+#define BTI_J "hint #36"
+#define BTI_JC"hint #38"
+
+#define BTYPE_1(DEST) \
+asm("mov %0,#1; adr x16, 1f; br x16; 1: " DEST "; mov %0,#0" \
+: "=r"(skipped) : : "x16")
+
+#define BTYPE_2(DEST) \
+asm("mov %0,#1; adr x16, 1f; blr x16; 1: " DEST "; mov %0,#0" \
+: "=r"(skipped) : : "x16", "x30")
+
+#define BTYPE_3(DEST) \
+asm("mov %0,#1; adr x15, 1f; br x15; 1: " DEST "; mov %0,#0" \
+: "=r"(skipped) : : "x15")
+
+#define TEST(WHICH, DEST, EXPECT) \
+do { WHICH(DEST); fail += skipped ^ EXPECT; } while (0)
+
+
+int main()
+{
+int fail = 0;
+int skipped;
+
+/* Signal-like with SA_SIGINFO.  */
+signal_info(SIGILL, skip2_sigill1);
+
+TEST(BTYPE_1, NOP, 1);
+TEST(BTYPE_1, BTI_N, 1);
+TEST(BTYPE_1, BTI_C, 0);
+TEST(BTYPE_1, BTI_J, 0);
+TEST(BTYPE_1, BTI_JC, 0);
+
+TEST(BTYPE_2, NOP, 1);
+TEST(BTYPE_2, BTI_N, 1);
+TEST(BTYPE_2, BTI_C, 0);
+TEST(BTYPE_2, BTI_J, 1);
+TEST(BTYPE_2, BTI_JC, 0);
+
+TEST(BTYPE_3, NOP, 1);
+TEST(BTYPE_3, BTI_N, 1);
+TEST(BTYPE_3, BTI_C, 1);
+TEST(BTYPE_3, BTI_J, 0);
+TEST(BTYPE_3, BTI_JC, 0);
+
+return fail;
+}
diff --git a/tests/tcg/aarch64/bti-crt.inc.c b/tests/tcg/aarch64/bti-crt.inc.c
new file mode 100644
index 00..bb363853de
--- /dev/null
+++ b/tests/tcg/aarch64/bti-crt.inc.c
@@ -0,0 +1,69 @@
+/*
+ * Minimal user-environment for testing BTI.
+ *
+ * Normal libc is not built with BTI support enabled, and so could
+ * generate a BTI TRAP before ever reaching main.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+int main(void);
+
+void _start(void)
+{
+exit(main());
+}
+
+void exit(int ret)
+{
+register int x0 __asm__("x0") = ret;
+register int x8 __asm__("x8") = __NR_exit;
+
+asm volatile("svc #0" : : "r"(x0), "r"(x8));
+__builtin_unreachable();
+}
+
+/*
+ * Irritatingly, the user API struct sigaction does not match the
+ * kernel API struct sigaction.  So for simplicity, isolate the
+ * kernel ABI here, and make this act like signal.
+ */
+void signal_info(int sig, void (*fn)(int, siginfo_t *, ucontext_t *))
+{
+struct kernel_sigaction {
+void (*handler)(int, siginfo_t *, ucontext_t *);
+unsigned long flags;
+unsigned long restorer;
+unsigned long mask;
+} sa = { fn, SA_SIGINFO, 0, 0 };
+
+register int x0 __asm__("x0") = sig;
+register void *x1 __asm__("x1") = 
+register void *x2 __asm__("x2") = 0;
+register int x3 __asm__("x3") = sizeof(unsigned long);
+register int x8 __asm__("x8") = __NR_rt_sigaction;
+
+asm volatile("svc #0"
+ : : "r"(x0), "r"(x1), "r"(x2), "r"(x3), "r"(x8) : "memory");
+}
+
+/*
+ * Create the PT_NOTE that will enable BTI in the page tables.
+ * This will be created by the compiler with -mbranch-protection=standard,
+ * but as of 2019-03-29, this has not been committed to gcc mainline.
+ * This will probably be in GCC10.
+ */
+asm(".section .note.gnu.property,\"a\"\n\
+   .align  3\n\
+   .long   4\n\
+.long  16\n\
+.long  5\n\
+.string\"GNU\"\n\
+   .long   0xc0000000\n\
+   .long   4\n\
+   .long   1\n\
+   

[Qemu-devel] [PATCH v7 5/6] linux-user: Parse NT_GNU_PROPERTY_TYPE_0 notes

2019-08-03 Thread Richard Henderson
For aarch64, this includes the GNU_PROPERTY_AARCH64_FEATURE_1_BTI bit,
which indicates that the image should be mapped with guarded pages.

Signed-off-by: Richard Henderson 
---
 linux-user/elfload.c | 94 
 1 file changed, 86 insertions(+), 8 deletions(-)
---

Note: The behaviour here when GNU_PROPERTY_AARCH64_FEATURE_1_BTI
is present differs from Dave's v1 patch set, in which the kernel
refuses to load the binary if the host does not support BTI.

However, I feel that's not the best way to introduce a feature
that adds security and is otherwise designed to be backward
compatible to such hosts.  We should want entire distributions
to be built indicating compatibility with BTI via this markup.

I included this rationale in my review of Dave's patch set.


r~
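As a sketch, the 32-bit words the loader ends up checking for the AArch64
case are laid out like this (derived from the hunk below; LP64 may append one
extra padding word):

    /*
     * note[0]  namesz    == 4                ("GNU\0")
     * note[1]  descsz    >= 12               (includes alignment padding)
     * note[2]  type      == NT_GNU_PROPERTY_TYPE_0
     * note[3]  name      == 'G' | 'N' << 8 | 'U' << 16
     * note[4]  pr_type   == GNU_PROPERTY_AARCH64_FEATURE_1_AND
     * note[5]  pr_datasz == 4
     * note[6]  feature bits; GNU_PROPERTY_AARCH64_FEATURE_1_BTI is bit 0
     */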


diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index bd43c4817d..d18e7dd313 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2289,7 +2289,7 @@ static void load_elf_image(const char *image_name, int 
image_fd,
 struct elfhdr *ehdr = (struct elfhdr *)bprm_buf;
 struct elf_phdr *phdr;
 abi_ulong load_addr, load_bias, loaddr, hiaddr, error;
-int i, retval;
+int i, retval, prot_exec = PROT_EXEC;
 const char *errmsg;
 
 /* First of all, some simple consistency checks */
@@ -2324,17 +2324,89 @@ static void load_elf_image(const char *image_name, int 
image_fd,
 loaddr = -1, hiaddr = 0;
 info->alignment = 0;
 for (i = 0; i < ehdr->e_phnum; ++i) {
-if (phdr[i].p_type == PT_LOAD) {
-abi_ulong a = phdr[i].p_vaddr - phdr[i].p_offset;
+struct elf_phdr *eppnt = phdr + i;
+
+if (eppnt->p_type == PT_LOAD) {
+abi_ulong a = eppnt->p_vaddr - eppnt->p_offset;
 if (a < loaddr) {
 loaddr = a;
 }
-a = phdr[i].p_vaddr + phdr[i].p_memsz;
+a = eppnt->p_vaddr + eppnt->p_memsz;
 if (a > hiaddr) {
 hiaddr = a;
 }
 ++info->nsegs;
-info->alignment |= phdr[i].p_align;
+info->alignment |= eppnt->p_align;
+} else if (eppnt->p_type == PT_GNU_PROPERTY) {
+#ifdef TARGET_AARCH64
+/*
+ * Process NT_GNU_PROPERTY_TYPE_0.
+ *
+ * TODO: For AArch64, the PT_GNU_PROPERTY is authoritative:
+ * it is present if and only if NT_GNU_PROPERTY_TYPE_0 is.
+ * That may or may not be true for other architectures.
+ *
+ * TODO: The only item that is AArch64 specific is the
+ * GNU_PROPERTY_AARCH64_FEATURE_1_AND processing at the end.
+ * If we were to ever process GNU_PROPERTY_X86_*, all of the
+ * code through checking the gnu0 magic number is sharable.
+ * But for now, since this *is* only used by AArch64, don't
+ * process the note elsewhere.
+ */
+const uint32_t gnu0_magic = const_le32('G' | 'N' << 8 | 'U' << 16);
+uint32_t note[7];
+
+/*
+ * The note contents are 7 words, but depending on LP64 vs ILP32
+ * there may be an 8th padding word at the end.  Check for and
+ * read the minimum size.  Further checks below will validate
+ * that the sizes of everything involved are as we expect.
+ */
+if (eppnt->p_filesz < sizeof(note)) {
+continue;
+}
+if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
+memcpy(note, bprm_buf + eppnt->p_offset, sizeof(note));
+} else {
+retval = pread(image_fd, note, sizeof(note), eppnt->p_offset);
+if (retval != sizeof(note)) {
+goto exit_perror;
+}
+}
+#ifdef BSWAP_NEEDED
+for (i = 0; i < ARRAY_SIZE(note); ++i) {
+bswap32s(note + i);
+}
+#endif
+/*
+ * Check that this is a NT_GNU_PROPERTY_TYPE_0 note.
+ * Again, descsz includes padding.  Full size validation
+ * awaits checking the final payload.
+ */
+if (note[0] != 4 ||   /* namesz */
+note[1] < 12 ||   /* descsz */
+note[2] != NT_GNU_PROPERTY_TYPE_0 ||  /* type */
+note[3] != gnu0_magic) {  /* name */
+continue;
+}
+/*
+ * Check for the BTI feature.  If present, this indicates
+ * that all the executable pages of the binary should be
+ * mapped with PROT_BTI, so that branch targets are enforced.
+ */
+if (note[4] == GNU_PROPERTY_AARCH64_FEATURE_1_AND &&
+note[5] == 4 &&
+(note[6] & GNU_PROPERTY_AARCH64_FEATURE_1_BTI)) {
+/*
+ * Elf 

[Qemu-devel] [PATCH v7 3/6] linux-user: Set PAGE_TARGET_1 for TARGET_PROT_BTI

2019-08-03 Thread Richard Henderson
Transform the prot bit to a qemu internal page bit, and save
it in the page tables.

Signed-off-by: Richard Henderson 
---
 include/exec/cpu-all.h |  2 ++
 linux-user/syscall_defs.h  |  4 
 linux-user/mmap.c  | 16 
 target/arm/translate-a64.c |  6 +++---
 4 files changed, 25 insertions(+), 3 deletions(-)
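From the guest's side this corresponds to a call along the following lines
(illustrative only; text_addr/text_len are placeholders, and PROT_BTI is
assumed to mirror the 0x10 value defined below):

    #ifndef PROT_BTI
    #define PROT_BTI 0x10
    #endif

    /* A BTI-aware dynamic loader marks executable segments like this: */
    mprotect(text_addr, text_len, PROT_READ | PROT_EXEC | PROT_BTI);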

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 40b140cbba..27470b73f7 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -248,6 +248,8 @@ extern intptr_t qemu_host_page_mask;
 /* FIXME: Code that sets/uses this is broken and needs to go away.  */
 #define PAGE_RESERVED  0x0020
 #endif
+/* Target-specific bits that will be used via page_get_flags().  */
+#define PAGE_TARGET_1  0x0080
 
 #if defined(CONFIG_USER_ONLY)
 void page_dump(FILE *f);
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index 0662270300..a59a81e4b6 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -1124,6 +1124,10 @@ struct target_winsize {
 #define TARGET_PROT_SEM 0x08
 #endif
 
+#ifdef TARGET_AARCH64
+#define TARGET_PROT_BTI 0x10
+#endif
+
 /* Common */
 #define TARGET_MAP_SHARED  0x01/* Share changes */
 #define TARGET_MAP_PRIVATE 0x02/* Changes are private */
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index c1a188ec0b..c1bed290f6 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -83,6 +83,22 @@ static int validate_prot_to_pageflags(int *host_prot, int 
prot)
  */
 *host_prot = prot & (PROT_READ | PROT_WRITE | PROT_EXEC);
 
+#ifdef TARGET_AARCH64
+/*
+ * The PROT_BTI bit is only accepted if the cpu supports the feature.
+ * Since this is the unusual case, don't bother checking unless
+ * the bit has been requested.  If set and valid, record the bit
+ * within QEMU's page_flags as PAGE_TARGET_1.
+ */
+if (prot & TARGET_PROT_BTI) {
+ARMCPU *cpu = ARM_CPU(thread_cpu);
+if (cpu_isar_feature(aa64_bti, cpu)) {
+valid |= TARGET_PROT_BTI;
+page_flags |= PAGE_TARGET_1;
+}
+}
+#endif
+
 return prot & ~valid ? 0 : page_flags;
 }
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 06ff3a7f2e..395e498acf 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -13963,10 +13963,10 @@ static void disas_data_proc_simd_fp(DisasContext *s, 
uint32_t insn)
  */
 static bool is_guarded_page(CPUARMState *env, DisasContext *s)
 {
-#ifdef CONFIG_USER_ONLY
-return false;  /* FIXME */
-#else
 uint64_t addr = s->base.pc_first;
+#ifdef CONFIG_USER_ONLY
+return page_get_flags(addr) & PAGE_TARGET_1;
+#else
 int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
 unsigned int index = tlb_index(env, mmu_idx, addr);
 CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-- 
2.17.1




[Qemu-devel] [PATCH v7 1/6] linux-user/aarch64: Reset btype for signals

2019-08-03 Thread Richard Henderson
The kernel sets btype for the signal handler as if for a call.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index cd521ee42d..2c596a7088 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -506,10 +506,16 @@ static void target_setup_frame(int usig, struct 
target_sigaction *ka,
 + offsetof(struct target_rt_frame_record, tramp);
 }
 env->xregs[0] = usig;
-env->xregs[31] = frame_addr;
 env->xregs[29] = frame_addr + fr_ofs;
-env->pc = ka->_sa_handler;
 env->xregs[30] = return_addr;
+env->xregs[31] = frame_addr;
+env->pc = ka->_sa_handler;
+
+/* Invoke the signal handler as if by indirect call.  */
+if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
+env->btype = 2;
+}
+
 if (info) {
 tswap_siginfo(>info, info);
 env->xregs[1] = frame_addr + offsetof(struct target_rt_sigframe, info);
-- 
2.17.1




[Qemu-devel] [PATCH v7 0/6] target/arm: Implement ARMv8.5-BTI for linux-user

2019-08-03 Thread Richard Henderson
Changes since v6:
  * Rebased on the ARMv8.1-VHE patch set.
  * Review from Dave Martin:
+ Remove PSTATE.BTYPE adjustment on syscall entry.
+ Rely on PT_GNU_PROPERTY to find the NT_GNU_PROPERTY_TYPE_0 note.
+ For the test case, add a linker script to create the PHDR.

Changes since v5:
  * New function to validate the target PROT parameter for mmap/mprotect.
  * Require BTI in the cpu for PROT_BTI set.
  * Set PSTATE.BTYPE=2 for the signal handler.
Adjust the smoke test to match.
  * Tidy up the note parsing.

Based-on: 20190803184800.8221-1-richard.hender...@linaro.org
"[PATCH v3 00/34] target/arm: Implement ARMv8.1-VHE"


r~


Richard Henderson (6):
  linux-user/aarch64: Reset btype for signals
  linux-user: Validate mmap/mprotect prot value
  linux-user: Set PAGE_TARGET_1 for TARGET_PROT_BTI
  include/elf: Add defines related to GNU property notes for AArch64
  linux-user: Parse NT_GNU_PROPERTY_TYPE_0 notes
  tests/tcg/aarch64: Add bti smoke test

 include/elf.h |  22 ++
 include/exec/cpu-all.h|   2 +
 linux-user/syscall_defs.h |   4 +
 linux-user/aarch64/signal.c   |  10 ++-
 linux-user/elfload.c  |  94 +--
 linux-user/mmap.c | 121 ++
 target/arm/translate-a64.c|   6 +-
 tests/tcg/aarch64/bti-1.c |  77 +++
 tests/tcg/aarch64/bti-crt.inc.c   |  69 +
 tests/tcg/aarch64/Makefile.target |   3 +
 tests/tcg/aarch64/bti.ld  |  15 
 11 files changed, 379 insertions(+), 44 deletions(-)
 create mode 100644 tests/tcg/aarch64/bti-1.c
 create mode 100644 tests/tcg/aarch64/bti-crt.inc.c
 create mode 100644 tests/tcg/aarch64/bti.ld

-- 
2.17.1




[Qemu-devel] [PATCH v7 4/6] include/elf: Add defines related to GNU property notes for AArch64

2019-08-03 Thread Richard Henderson
These are all of the defines required to parse
GNU_PROPERTY_AARCH64_FEATURE_1_AND, copied from binutils.
Other missing defines related to other GNU program headers
and notes are elided for now.

Signed-off-by: Richard Henderson 
---
 include/elf.h | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/include/elf.h b/include/elf.h
index 3501e0c8d0..7c4dc4b2cc 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -26,9 +26,13 @@ typedef int64_t  Elf64_Sxword;
 #define PT_NOTE    4
 #define PT_SHLIB   5
 #define PT_PHDR    6
+#define PT_LOOS    0x60000000
+#define PT_HIOS    0x6fffffff
 #define PT_LOPROC  0x70000000
 #define PT_HIPROC  0x7fffffff
 
+#define PT_GNU_PROPERTY   (PT_LOOS + 0x474e553)
+
 #define PT_MIPS_REGINFO   0x70000000
 #define PT_MIPS_RTPROC    0x70000001
 #define PT_MIPS_OPTIONS   0x70000002
@@ -1651,6 +1655,24 @@ typedef struct elf64_shdr {
 #define NT_ARM_HW_WATCH 0x403   /* ARM hardware watchpoint registers */
 #define NT_ARM_SYSTEM_CALL  0x404   /* ARM system call number */
 
+/* Defined note types for GNU systems.  */
+
+#define NT_GNU_PROPERTY_TYPE_0  5   /* Program property */
+
+/* Values used in GNU .note.gnu.property notes (NT_GNU_PROPERTY_TYPE_0).  */
+
+#define GNU_PROPERTY_STACK_SIZE 1
+#define GNU_PROPERTY_NO_COPY_ON_PROTECTED   2
+
+#define GNU_PROPERTY_LOPROC 0xc0000000
+#define GNU_PROPERTY_HIPROC 0xdfffffff
+#define GNU_PROPERTY_LOUSER 0xe0000000
+#define GNU_PROPERTY_HIUSER 0xffffffff
+
+#define GNU_PROPERTY_AARCH64_FEATURE_1_AND  0xc0000000
+#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI  (1u << 0)
+#define GNU_PROPERTY_AARCH64_FEATURE_1_PAC  (1u << 1)
+
 /*
  * Physical entry point into the kernel.
  *
-- 
2.17.1




[Qemu-devel] [PATCH v3 33/34] target/arm: check TGE and E2H flags for EL0 pauth traps

2019-08-03 Thread Richard Henderson
From: Alex Bennée 

According to the ARM ARM we should only trap from EL0
when TGE or E2H is 0.

Signed-off-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/pauth_helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index 42c9141bb7..9fa002068e 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -371,7 +371,9 @@ static void pauth_check_trap(CPUARMState *env, int el, 
uintptr_t ra)
 if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
 uint64_t hcr = arm_hcr_el2_eff(env);
 bool trap = !(hcr & HCR_API);
-/* FIXME: ARMv8.1-VHE: trap only applies to EL1&0 regime.  */
+if (el < 1) {
+trap &= !(hcr & HCR_TGE) | !(hcr & HCR_E2H);
+}
 /* FIXME: ARMv8.3-NV: HCR_NV trap takes precedence for ERETA[AB].  */
 if (trap) {
 pauth_trap(env, 2, ra);
-- 
2.17.1




[Qemu-devel] [PATCH v3 34/34] target/arm: generate a custom MIDR for -cpu max

2019-08-03 Thread Richard Henderson
From: Alex Bennée 

While most features are now detected by probing the ID_* registers,
kernels can (and do) use MIDR_EL1 for working out if they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.

Signed-off-by: Alex Bennée 
Reviewed-by: Peter Maydell 
Reviewed-by: Richard Henderson 
Message-Id: <20190726113950.7499-1-alex.ben...@linaro.org>
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  6 ++
 target/arm/cpu64.c | 19 +++
 2 files changed, 25 insertions(+)
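For reference, a quick calculation of the value these field choices produce
(my arithmetic, not part of the patch):

    /*
     * REVISION = 0, PARTNUM = 'Q' = 0x51, ARCHITECTURE = 0xf,
     * VARIANT = 0, IMPLEMENTER = 0
     *   => midr = (0x51 << 4) | (0xf << 16) = 0x000f0510
     */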

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index d7c5a123a3..6e4c97d398 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1605,6 +1605,12 @@ FIELD(V7M_FPCCR, ASPEN, 31, 1)
 /*
  * System register ID fields.
  */
+FIELD(MIDR_EL1, REVISION, 0, 4)
+FIELD(MIDR_EL1, PARTNUM, 4, 12)
+FIELD(MIDR_EL1, ARCHITECTURE, 16, 4)
+FIELD(MIDR_EL1, VARIANT, 20, 4)
+FIELD(MIDR_EL1, IMPLEMENTER, 24, 8)
+
 FIELD(ID_ISAR0, SWAP, 0, 4)
 FIELD(ID_ISAR0, BITCOUNT, 4, 4)
 FIELD(ID_ISAR0, BITFIELD, 8, 4)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index b1bb394c6d..3a1e98a18e 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -296,6 +296,25 @@ static void aarch64_max_initfn(Object *obj)
 uint32_t u;
 aarch64_a57_initfn(obj);
 
+/*
+ * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a 
real
+ * one and try to apply errata workarounds or use impdef features we
+ * don't provide.
+ * An IMPLEMENTER field of 0 means "reserved for software use";
+ * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers
+ * to see which features are present";
+ * the VARIANT, PARTNUM and REVISION fields are all implementation
+ * defined and we choose to define PARTNUM just in case guest
+ * code needs to distinguish this QEMU CPU from other software
+ * implementations, though this shouldn't be needed.
+ */
+t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0);
+t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf);
+t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q');
+t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0);
+t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
+cpu->midr = t;
+
 t = cpu->isar.id_aa64isar0;
 t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
 t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
-- 
2.17.1




[Qemu-devel] [PATCH v3 32/34] target/arm: Enable ARMv8.1-VHE in -cpu max

2019-08-03 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 1901997a06..b1bb394c6d 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -337,6 +337,7 @@ static void aarch64_max_initfn(Object *obj)
 t = cpu->isar.id_aa64mmfr1;
 t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
 t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
+t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
 cpu->isar.id_aa64mmfr1 = t;
 
 /* Replicate the same data to the 32-bit id registers.  */
-- 
2.17.1




[Qemu-devel] [PATCH v3 25/34] target/arm: Update aa64_zva_access for EL2

2019-08-03 Thread Richard Henderson
The comment that we don't support EL2 is somewhat out of date.
Update to include checks against HCR_EL2.TDZ.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 26 +-
 1 file changed, 21 insertions(+), 5 deletions(-)
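Summarised, the new checks route DC ZVA accesses as follows (my reading of
the hunk below):

    /*
     * EL0, HCR_EL2.{E2H,TGE} == {1,1}: trap to EL2 if SCTLR_EL2.DZE == 0
     * EL0, otherwise:                  trap to EL1 if SCTLR_EL1.DZE == 0,
     *                                  else trap to EL2 if HCR_EL2.TDZ == 1
     * EL1:                             trap to EL2 if HCR_EL2.TDZ == 1
     * EL2 and EL3:                     no trap
     */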

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 9e9d2ce99b..37c881baab 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4113,11 +4113,27 @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
   bool isread)
 {
-/* We don't implement EL2, so the only control on DC ZVA is the
- * bit in the SCTLR which can prohibit access for EL0.
- */
-if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_DZE)) {
-return CP_ACCESS_TRAP;
+int cur_el = arm_current_el(env);
+
+if (cur_el < 2) {
+uint64_t hcr = arm_hcr_el2_eff(env);
+
+if (cur_el == 0) {
+if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+if (!(env->cp15.sctlr_el[2] & SCTLR_DZE)) {
+return CP_ACCESS_TRAP_EL2;
+}
+} else {
+if (!(env->cp15.sctlr_el[1] & SCTLR_DZE)) {
+return CP_ACCESS_TRAP;
+}
+if (hcr & HCR_TDZ) {
+return CP_ACCESS_TRAP_EL2;
+}
+}
+} else if (hcr & HCR_TDZ) {
+return CP_ACCESS_TRAP_EL2;
+}
 }
 return CP_ACCESS_OK;
 }
-- 
2.17.1




[Qemu-devel] [PATCH v3 29/34] target/arm: Update arm_phys_excp_target_el for TGE

2019-08-03 Thread Richard Henderson
The TGE bit routes all asynchronous exceptions to EL2.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 984a441cc4..a0969b78bf 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7968,6 +7968,12 @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t 
excp_idx,
 break;
 };
 
+/*
+ * For these purposes, TGE and AMO/IMO/FMO both force the
+ * interrupt to EL2.  Fold TGE into the bit extracted above.
+ */
+hcr |= (hcr_el2 & HCR_TGE) != 0;
+
 /* Perform a table-lookup for the target EL given the current state */
 target_el = target_el_table[is64][scr][rw][hcr][secure][cur_el];
 
-- 
2.17.1




[Qemu-devel] [PATCH v3 27/34] target/arm: Install asids for E2&0 translation regime

2019-08-03 Thread Richard Henderson
When clearing HCR_E2H, this involves re-installing the EL1&0 asid.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 38 ++
 1 file changed, 34 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index b8c45eb484..9d74162bbd 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3518,10 +3518,29 @@ static void vmsa_ttbr_el1_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 }
 }
 
+static void update_el2_asid(CPUARMState *env)
+{
+CPUState *cs = env_cpu(env);
+uint64_t ttbr0, ttbr1, ttcr;
+int asid, idxmask;
+
+ttbr0 = env->cp15.ttbr0_el[2];
+ttbr1 = env->cp15.ttbr1_el[2];
+ttcr = env->cp15.tcr_el[2].raw_tcr;
+idxmask = ARMMMUIdxBit_EL20_2 | ARMMMUIdxBit_EL20_0;
+asid = extract64(ttcr & TTBCR_A1 ? ttbr1 : ttbr0, 48, 16);
+
+tlb_set_asid_for_mmuidx(cs, asid, idxmask, 0);
+}
+
 static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
 uint64_t value)
 {
 raw_write(env, ri, value);
+if (arm_hcr_el2_eff(env) & HCR_E2H) {
+/* We are running with EL2&0 regime and the ASID is active.  */
+update_el2_asid(env);
+}
 }
 
 static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4670,6 +4689,7 @@ static void hcr_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 ARMCPU *cpu = env_archcpu(env);
 /* Begin with bits defined in base ARMv8.0.  */
 uint64_t valid_mask = MAKE_64BIT_MASK(0, 34);
+uint64_t old_value;
 
 if (arm_feature(env, ARM_FEATURE_EL3)) {
 valid_mask &= ~HCR_HCD;
@@ -4696,15 +4716,25 @@ static void hcr_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 /* Clear RES0 bits.  */
 value &= valid_mask;
 
-/* These bits change the MMU setup:
+old_value = env->cp15.hcr_el2;
+env->cp15.hcr_el2 = value;
+
+/*
+ * These bits change the MMU setup:
  * HCR_VM enables stage 2 translation
  * HCR_PTW forbids certain page-table setups
- * HCR_DC Disables stage1 and enables stage2 translation
+ * HCR_DC disables stage1 and enables stage2 translation
+ * HCR_E2H enables E2&0 translation regime.
  */
-if ((env->cp15.hcr_el2 ^ value) & (HCR_VM | HCR_PTW | HCR_DC)) {
+if ((old_value ^ value) & (HCR_VM | HCR_PTW | HCR_DC | HCR_E2H)) {
 tlb_flush(CPU(cpu));
+/* Also install the correct ASID for the regime.  */
+if (value & HCR_E2H) {
+update_el2_asid(env);
+} else {
+update_lpae_el1_asid(env, false);
+}
 }
-env->cp15.hcr_el2 = value;
 
 /*
  * Updates to VI and VF require us to update the status of
-- 
2.17.1




[Qemu-devel] [PATCH v3 24/34] target/arm: Update arm_sctlr for VHE

2019-08-03 Thread Richard Henderson
Use the correct sctlr for EL2&0 regime.  Due to header ordering,
and where arm_mmu_idx is declared, we need to move the function
out of line.  Use the function in many more places in order to
select the correct control.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
v5: Use arm_mmu_idx() to avoid incorrectly replicating the el2&0
condition therein.  Drop the change to cpu_get_dump_info, as
that needs a more significant rethink of hard-coded oddness.
---
 target/arm/cpu.h  | 11 +--
 target/arm/helper-a64.c   |  2 +-
 target/arm/helper.c   | 14 --
 target/arm/pauth_helper.c |  9 +
 4 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 8d90a4fc4d..d7c5a123a3 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3096,16 +3096,7 @@ static inline bool arm_sctlr_b(CPUARMState *env)
 (env->cp15.sctlr_el[1] & SCTLR_B) != 0;
 }
 
-static inline uint64_t arm_sctlr(CPUARMState *env, int el)
-{
-if (el == 0) {
-/* FIXME: ARMv8.1-VHE S2 translation regime.  */
-return env->cp15.sctlr_el[1];
-} else {
-return env->cp15.sctlr_el[el];
-}
-}
-
+uint64_t arm_sctlr(CPUARMState *env, int el);
 
 /* Return true if the processor is in big-endian mode. */
 static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index 060699b901..3bf1b731e7 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -70,7 +70,7 @@ static void daif_check(CPUARMState *env, uint32_t op,
uint32_t imm, uintptr_t ra)
 {
 /* DAIF update to PSTATE. This is OK from EL0 only if UMA is set.  */
-if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
+if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) {
 raise_exception_ra(env, EXCP_UDEF,
syn_aa64_sysregtrap(0, extract32(op, 0, 3),
extract32(op, 3, 3), 4,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a570d43232..9e9d2ce99b 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3867,7 +3867,7 @@ static void aa64_fpsr_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 static CPAccessResult aa64_daif_access(CPUARMState *env, const ARMCPRegInfo 
*ri,
bool isread)
 {
-if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
+if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) {
 return CP_ACCESS_TRAP;
 }
 return CP_ACCESS_OK;
@@ -3886,7 +3886,7 @@ static CPAccessResult aa64_cacheop_access(CPUARMState 
*env,
 /* Cache invalidate/clean: NOP, but EL0 must UNDEF unless
  * SCTLR_EL1.UCI is set.
  */
-if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UCI)) {
+if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UCI)) {
 return CP_ACCESS_TRAP;
 }
 return CP_ACCESS_OK;
@@ -8718,6 +8718,16 @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx 
mmu_idx)
 }
 }
 
+uint64_t arm_sctlr(CPUARMState *env, int el)
+{
+/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
+if (el == 0) {
+ARMMMUIdx mmu_idx = arm_mmu_idx(env);
+el = (mmu_idx == ARMMMUIdx_EL20_0 ? 2 : 1);
+}
+return env->cp15.sctlr_el[el];
+}
+
 #ifndef CONFIG_USER_ONLY
 
 /* Return the SCTLR value which controls this address translation regime */
diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index d3194f2043..42c9141bb7 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -386,14 +386,7 @@ static void pauth_check_trap(CPUARMState *env, int el, 
uintptr_t ra)
 
 static bool pauth_key_enabled(CPUARMState *env, int el, uint32_t bit)
 {
-uint32_t sctlr;
-if (el == 0) {
-/* FIXME: ARMv8.1-VHE S2 translation regime.  */
-sctlr = env->cp15.sctlr_el[1];
-} else {
-sctlr = env->cp15.sctlr_el[el];
-}
-return (sctlr & bit) != 0;
+return (arm_sctlr(env, el) & bit) != 0;
 }
 
 uint64_t HELPER(pacia)(CPUARMState *env, uint64_t x, uint64_t y)
-- 
2.17.1




[Qemu-devel] [PATCH v3 26/34] target/arm: Update ctr_el0_access for EL2

2019-08-03 Thread Richard Henderson
Update to include checks against HCR_EL2.TID2.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 26 +-
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 37c881baab..b8c45eb484 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5361,11 +5361,27 @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU 
*cpu)
 static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
  bool isread)
 {
-/* Only accessible in EL0 if SCTLR.UCT is set (and only in AArch64,
- * but the AArch32 CTR has its own reginfo struct)
- */
-if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UCT)) {
-return CP_ACCESS_TRAP;
+int cur_el = arm_current_el(env);
+
+if (cur_el < 2) {
+uint64_t hcr = arm_hcr_el2_eff(env);
+
+if (cur_el == 0) {
+if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+if (!(env->cp15.sctlr_el[2] & SCTLR_UCT)) {
+return CP_ACCESS_TRAP_EL2;
+}
+} else {
+if (!(env->cp15.sctlr_el[1] & SCTLR_UCT)) {
+return CP_ACCESS_TRAP;
+}
+if (hcr & HCR_TID2) {
+return CP_ACCESS_TRAP_EL2;
+}
+}
+} else if (hcr & HCR_TID2) {
+return CP_ACCESS_TRAP_EL2;
+}
 }
 return CP_ACCESS_OK;
 }
-- 
2.17.1




[Qemu-devel] [PATCH v3 22/34] target/arm: Add regime_has_2_ranges

2019-08-03 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/internals.h | 16 
 target/arm/helper.c| 22 +-
 target/arm/translate-a64.c |  3 +--
 3 files changed, 22 insertions(+), 19 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index dd0bc4377f..1b64ceeda6 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -824,6 +824,22 @@ static inline void arm_call_el_change_hook(ARMCPU *cpu)
 }
 }
 
+/* Return true if this address translation regime has two ranges.  */
+static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
+{
+switch (mmu_idx) {
+case ARMMMUIdx_Stage1_E0:
+case ARMMMUIdx_Stage1_E1:
+case ARMMMUIdx_EL10_0:
+case ARMMMUIdx_EL10_1:
+case ARMMMUIdx_EL20_0:
+case ARMMMUIdx_EL20_2:
+return true;
+default:
+return false;
+}
+}
+
 /* Return true if this address translation regime is secure */
 static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 9c2c81c434..5472424179 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9006,15 +9006,8 @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx 
mmu_idx, bool is_aa64,
 }
 
 if (is_aa64) {
-switch (regime_el(env, mmu_idx)) {
-case 1:
-if (!is_user) {
-xn = pxn || (user_rw & PAGE_WRITE);
-}
-break;
-case 2:
-case 3:
-break;
+if (regime_has_2_ranges(mmu_idx) && !is_user) {
+xn = pxn || (user_rw & PAGE_WRITE);
 }
 } else if (arm_feature(env, ARM_FEATURE_V7)) {
 switch (regime_el(env, mmu_idx)) {
@@ -9548,7 +9541,6 @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, 
uint64_t va,
 ARMMMUIdx mmu_idx)
 {
 uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
-uint32_t el = regime_el(env, mmu_idx);
 bool tbi, tbid, epd, hpd, using16k, using64k;
 int select, tsz;
 
@@ -9558,7 +9550,7 @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, 
uint64_t va,
  */
 select = extract64(va, 55, 1);
 
-if (el > 1) {
+if (!regime_has_2_ranges(mmu_idx)) {
 tsz = extract32(tcr, 0, 6);
 using64k = extract32(tcr, 14, 1);
 using16k = extract32(tcr, 15, 1);
@@ -9714,10 +9706,7 @@ static bool get_phys_addr_lpae(CPUARMState *env, 
target_ulong address,
 param = aa64_va_parameters(env, address, mmu_idx,
access_type != MMU_INST_FETCH);
 level = 0;
-/* If we are in 64-bit EL2 or EL3 then there is no TTBR1, so mark it
- * invalid.
- */
-ttbr1_valid = (el < 2);
+ttbr1_valid = regime_has_2_ranges(mmu_idx);
 addrsize = 64 - 8 * param.tbi;
 inputsize = 64 - param.tsz;
 } else {
@@ -11368,8 +11357,7 @@ void cpu_get_tb_cpu_state(CPUARMState *env, 
target_ulong *pc,
 ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
 int tbii, tbid;
 
-/* FIXME: ARMv8.1-VHE S2 translation regime.  */
-if (regime_el(env, stage1) < 2) {
+if (regime_has_2_ranges(mmu_idx)) {
 ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
 tbid = (p1.tbi << 1) | p0.tbi;
 tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index dbe2189e51..06ff3a7f2e 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -175,8 +175,7 @@ static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 
dst,
 if (tbi == 0) {
 /* Load unmodified address */
 tcg_gen_mov_i64(dst, src);
-} else if (s->current_el >= 2) {
-/* FIXME: ARMv8.1-VHE S2 translation regime.  */
+} else if (!regime_has_2_ranges(s->mmu_idx)) {
 /* Force tag byte to all zero */
 tcg_gen_extract_i64(dst, src, 0, 56);
 } else {
-- 
2.17.1




[Qemu-devel] [PATCH v3 31/34] target/arm: Update {fp, sve}_exception_el for VHE

2019-08-03 Thread Richard Henderson
When TGE+E2H are both set, CPACR_EL1 is ignored.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 53 -
 1 file changed, 28 insertions(+), 25 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index d481716b97..2939454c8a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5539,7 +5539,9 @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
 int sve_exception_el(CPUARMState *env, int el)
 {
 #ifndef CONFIG_USER_ONLY
-if (el <= 1) {
+uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+
+if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
 bool disabled = false;
 
 /* The CPACR.ZEN controls traps to EL1:
@@ -5554,8 +5556,7 @@ int sve_exception_el(CPUARMState *env, int el)
 }
 if (disabled) {
 /* route_to_el2 */
-return (arm_feature(env, ARM_FEATURE_EL2)
-&& (arm_hcr_el2_eff(env) & HCR_TGE) ? 2 : 1);
+return hcr_el2 & HCR_TGE ? 2 : 1;
 }
 
 /* Check CPACR.FPEN.  */
@@ -11263,8 +11264,6 @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, 
uint32_t bytes)
 int fp_exception_el(CPUARMState *env, int cur_el)
 {
 #ifndef CONFIG_USER_ONLY
-int fpen;
-
 /* CPACR and the CPTR registers don't exist before v6, so FP is
  * always accessible
  */
@@ -11292,30 +11291,34 @@ int fp_exception_el(CPUARMState *env, int cur_el)
  * 0, 2 : trap EL0 and EL1/PL1 accesses
  * 1: trap only EL0 accesses
  * 3: trap no accesses
+ * This register is ignored if E2H+TGE are both set.
  */
-fpen = extract32(env->cp15.cpacr_el1, 20, 2);
-switch (fpen) {
-case 0:
-case 2:
-if (cur_el == 0 || cur_el == 1) {
-/* Trap to PL1, which might be EL1 or EL3 */
-if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+int fpen = extract32(env->cp15.cpacr_el1, 20, 2);
+
+switch (fpen) {
+case 0:
+case 2:
+if (cur_el == 0 || cur_el == 1) {
+/* Trap to PL1, which might be EL1 or EL3 */
+if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+return 3;
+}
+return 1;
+}
+if (cur_el == 3 && !is_a64(env)) {
+/* Secure PL1 running at EL3 */
 return 3;
 }
-return 1;
+break;
+case 1:
+if (cur_el == 0) {
+return 1;
+}
+break;
+case 3:
+break;
 }
-if (cur_el == 3 && !is_a64(env)) {
-/* Secure PL1 running at EL3 */
-return 3;
-}
-break;
-case 1:
-if (cur_el == 0) {
-return 1;
-}
-break;
-case 3:
-break;
 }
 
 /*
-- 
2.17.1




[Qemu-devel] [PATCH v3 23/34] target/arm: Update arm_mmu_idx for VHE

2019-08-03 Thread Richard Henderson
Return the indexes for the EL2&0 regime when the appropriate bits
are set within HCR_EL2.  This happens for initial generation in
arm_mmu_idx, and reconstruction in core_to_arm_mmu_idx.

In order to make this reliable, we also need a bit in TBFLAGS.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
v5: Consistently check E2H & TGE & ELUsingAArch32(EL2).
---
 target/arm/cpu.h|  2 ++
 target/arm/helper.c | 51 -
 2 files changed, 39 insertions(+), 14 deletions(-)
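The resulting index selection can be summarised as (derived from the patch):

    /*
     * e2h = EL2 is AArch64 && HCR_EL2.{E2H,TGE} == {1,1}
     *
     * EL0: e2h ? ARMMMUIdx_EL20_0 : ARMMMUIdx_EL10_0
     * EL1:                          ARMMMUIdx_EL10_1
     * EL2: e2h ? ARMMMUIdx_EL20_2 : ARMMMUIdx_E2
     * EL3 or Secure below EL3:      ARMMMUIdx_SE0 + el
     */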

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index b5300f9014..8d90a4fc4d 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3153,6 +3153,8 @@ FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)
 /* Target EL if we take a floating-point-disabled exception */
 FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
 FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
+/* For A profile only, if EL2 is AA64 and HCR_EL2.{E2H,TGE} == {1,1} */
+FIELD(TBFLAG_ANY, E2H_TGE, 22, 1)
 
 /* Bit usage when in AArch32 state: */
 FIELD(TBFLAG_A32, THUMB, 0, 1)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 5472424179..a570d43232 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11257,21 +11257,31 @@ int fp_exception_el(CPUARMState *env, int cur_el)
 
 ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
 {
+bool e2h;
+
 if (arm_feature(env, ARM_FEATURE_M)) {
 return mmu_idx | ARM_MMU_IDX_M;
 }
 
 mmu_idx |= ARM_MMU_IDX_A;
+if (mmu_idx & ARM_MMU_IDX_S) {
+return mmu_idx;
+}
+
+/*
+ * All remaining states are non-secure, so we can directly
+ * access hcr_el2 for these two bits.
+ */
+e2h = (env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
+  && arm_el_is_aa64(env, 2);
+
 switch (mmu_idx) {
 case 0 | ARM_MMU_IDX_A:
-return ARMMMUIdx_EL10_0;
+return e2h ? ARMMMUIdx_EL20_0 : ARMMMUIdx_EL10_0;
 case 1 | ARM_MMU_IDX_A:
 return ARMMMUIdx_EL10_1;
 case ARMMMUIdx_E2:
-case ARMMMUIdx_SE0:
-case ARMMMUIdx_SE1:
-case ARMMMUIdx_SE3:
-return mmu_idx;
+return e2h ? ARMMMUIdx_EL20_2 : ARMMMUIdx_E2;
 default:
 g_assert_not_reached();
 }
@@ -11300,25 +11310,27 @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState 
*env, bool secstate)
 ARMMMUIdx arm_mmu_idx(CPUARMState *env)
 {
 int el;
+bool e2h;
 
 if (arm_feature(env, ARM_FEATURE_M)) {
 return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
 }
 
 el = arm_current_el(env);
+if (el == 3 || arm_is_secure_below_el3(env)) {
+return ARMMMUIdx_SE0 + el;
+}
+
+e2h = (env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
+  && arm_el_is_aa64(env, 2);
+
 switch (el) {
 case 0:
-/* TODO: ARMv8.1-VHE */
+return e2h ? ARMMMUIdx_EL20_0 : ARMMMUIdx_EL10_0;
 case 1:
-return (arm_is_secure_below_el3(env)
-? ARMMMUIdx_SE0 + el
-: ARMMMUIdx_EL10_0 + el);
+return ARMMMUIdx_EL10_1;
 case 2:
-/* TODO: ARMv8.1-VHE */
-/* TODO: ARMv8.4-SecEL2 */
-return ARMMMUIdx_E2;
-case 3:
-return ARMMMUIdx_SE3;
+return e2h ? ARMMMUIdx_EL20_2 : ARMMMUIdx_E2;
 default:
 g_assert_not_reached();
 }
@@ -11428,6 +11440,17 @@ void cpu_get_tb_cpu_state(CPUARMState *env, 
target_ulong *pc,
 
 flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, 
arm_to_core_mmu_idx(mmu_idx));
 
+/*
+ * Include E2H in TBFLAGS so that core_to_arm_mmu_idx can
+ * reliably determine EL1&0 vs EL2&0 regimes.
+ */
+if (arm_el_is_aa64(env, 2)) {
+uint64_t hcr = arm_hcr_el2_eff(env);
+if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+flags = FIELD_DP32(flags, TBFLAG_ANY, E2H_TGE, 1);
+}
+}
+
 /* The SS_ACTIVE and PSTATE_SS bits correspond to the state machine
  * states defined in the ARM ARM for software singlestep:
  *  SS_ACTIVE   PSTATE.SS   State
-- 
2.17.1




[Qemu-devel] [PATCH v3 21/34] target/arm: Reorganize ARMMMUIdx

2019-08-03 Thread Richard Henderson
Prepare for, but do not yet implement, the EL2&0 regime
and the Secure EL2 regime.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 173 -
 target/arm/internals.h |  44 +--
 target/arm/helper.c|  60 --
 target/arm/m_helper.c  |   6 +-
 target/arm/translate.c |   1 -
 5 files changed, 165 insertions(+), 119 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 552269daad..b5300f9014 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2764,7 +2764,10 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
  *  + NonSecure EL1 & 0 stage 1
  *  + NonSecure EL1 & 0 stage 2
  *  + NonSecure EL2
- *  + Secure EL1 & EL0
+ *  + NonSecure EL2 & 0   (ARMv8.1-VHE)
+ *  + Secure EL0
+ *  + Secure EL1
+ *  + Secure EL2  (ARMv8.4-SecEL2)
  *  + Secure EL3
  * If EL3 is 32-bit:
  *  + NonSecure PL1 & 0 stage 1
@@ -2774,8 +2777,9 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
  * (reminder: for 32 bit EL3, Secure PL1 is *EL3*, not EL1.)
  *
  * For QEMU, an mmu_idx is not quite the same as a translation regime because:
- *  1. we need to split the "EL1 & 0" regimes into two mmu_idxes, because they
- * may differ in access permissions even if the VA->PA map is the same
+ *  1. we need to split the "EL1 & 0" and "EL2 & 0" regimes into two mmu_idxes,
+ * because they may differ in access permissions even if the VA->PA map is
+ * the same
  *  2. we want to cache in our TLB the full VA->IPA->PA lookup for a stage 1+2
  * translation, which means that we have one mmu_idx that deals with two
  * concatenated translation regimes [this sort of combined s1+2 TLB is
@@ -2787,19 +2791,26 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
  *  4. we can also safely fold together the "32 bit EL3" and "64 bit EL3"
  * translation regimes, because they map reasonably well to each other
  * and they can't both be active at the same time.
- * This gives us the following list of mmu_idx values:
+ *  5. we want to be able to use the TLB for accesses done as part of a
+ * stage1 page table walk, rather than having to walk the stage2 page
+ * table over and over.
  *
- * NS EL0 (aka NS PL0) stage 1+2
- * NS EL1 (aka NS PL1) stage 1+2
+ * This gives us the following list of cases:
+ *
+ * NS EL0 (aka NS PL0) EL1&0 stage 1+2
+ * NS EL1 (aka NS PL1) EL1&0 stage 1+2
+ * NS EL0 (aka NS PL0) EL2&0
+ * NS EL2 (aka NS PL2) EL2&0
  * NS EL2 (aka NS PL2)
- * S EL3 (aka S PL1)
  * S EL0 (aka S PL0)
  * S EL1 (not used if EL3 is 32 bit)
- * NS EL0+1 stage 2
+ * S EL2 (not used if EL3 is 32 bit)
+ * S EL3 (aka S PL1)
+ * NS EL0&1 stage 2
  *
- * (The last of these is an mmu_idx because we want to be able to use the TLB
- * for the accesses done as part of a stage 1 page table walk, rather than
- * having to walk the stage 2 page table over and over.)
+ * We then merge the two NS EL0 cases, and two NS EL2 cases to produce
+ * 8 different mmu_idx.  We retain separate symbols for these four cases
+ * in order to simplify distinguishing them in the code.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
@@ -2837,62 +2848,88 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
  * For M profile we arrange them to have a bit for priv, a bit for negpri
  * and a bit for secure.
  */
-#define ARM_MMU_IDX_A 0x10 /* A profile */
-#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
-#define ARM_MMU_IDX_M 0x40 /* M profile */
+#define ARM_MMU_IDX_S 0x04  /* Secure */
+#define ARM_MMU_IDX_A 0x10  /* A profile */
+#define ARM_MMU_IDX_M 0x20  /* M profile */
+#define ARM_MMU_IDX_NOTLB 0x100 /* does not have a TLB */
 
-/* meanings of the bits for M profile mmu idx values */
-#define ARM_MMU_IDX_M_PRIV 0x1
+/* Meanings of the bits for A profile mmu idx values */
+#define ARM_MMU_IDX_A_PRIV   0x3
+#define ARM_MMU_IDX_A_EL10   0x40
+#define ARM_MMU_IDX_A_EL20   0x80
+
+/* Meanings of the bits for M profile mmu idx values */
+#define ARM_MMU_IDX_M_PRIV   0x1
 #define ARM_MMU_IDX_M_NEGPRI 0x2
-#define ARM_MMU_IDX_M_S 0x4
 
-#define ARM_MMU_IDX_TYPE_MASK (~0x7)
+#define ARM_MMU_IDX_TYPE_MASK(ARM_MMU_IDX_A | ARM_MMU_IDX_M)
 #define ARM_MMU_IDX_COREIDX_MASK 0x7
 
 typedef enum ARMMMUIdx {
-ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A,
-ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
+ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A | ARM_MMU_IDX_A_EL10,
+ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A | ARM_MMU_IDX_A_EL10,
+ARMMMUIdx_EL20_0 = 0 | ARM_MMU_IDX_A | ARM_MMU_IDX_A_EL20,
+ARMMMUIdx_EL20_2 = 2 | ARM_MMU_IDX_A | ARM_MMU_IDX_A_EL20,
+
 ARMMMUIdx_E2 = 2 | ARM_MMU_IDX_A,
-ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
-ARMMMUIdx_SE0 = 4 | ARM_MMU_IDX_A,
-ARMMMUIdx_SE1 = 5 | ARM_MMU_IDX_A,
-ARMMMUIdx_Stage2 = 6 | 

[Qemu-devel] [PATCH v3 19/34] target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3

2019-08-03 Thread Richard Henderson
This is part of a reorganization of the set of mmu_idx.
The EL3 regime has only a single-stage translation and
is always secure.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  4 ++--
 target/arm/internals.h |  2 +-
 target/arm/helper.c| 18 +-
 target/arm/translate.c |  2 +-
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c7ce8a4da5..94337b2fb0 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2853,7 +2853,7 @@ typedef enum ARMMMUIdx {
 ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A,
 ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
-ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
+ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
 ARMMMUIdx_SE0 = 4 | ARM_MMU_IDX_A,
 ARMMMUIdx_SE1 = 5 | ARM_MMU_IDX_A,
 ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
@@ -2879,7 +2879,7 @@ typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_EL10_0 = 1 << 0,
 ARMMMUIdxBit_EL10_1 = 1 << 1,
 ARMMMUIdxBit_S1E2 = 1 << 2,
-ARMMMUIdxBit_S1E3 = 1 << 3,
+ARMMMUIdxBit_SE3 = 1 << 3,
 ARMMMUIdxBit_SE0 = 1 << 4,
 ARMMMUIdxBit_SE1 = 1 << 5,
 ARMMMUIdxBit_Stage2 = 1 << 6,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index c505cae30c..dbb46da549 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -819,7 +819,7 @@ static inline bool regime_is_secure(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_MPriv:
 case ARMMMUIdx_MUser:
 return false;
-case ARMMMUIdx_S1E3:
+case ARMMMUIdx_SE3:
 case ARMMMUIdx_SE0:
 case ARMMMUIdx_SE1:
 case ARMMMUIdx_MSPrivNegPri:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e0d4f33026..e5b07b4770 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3116,7 +3116,7 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 /* stage 1 current state PL1: ATS1CPR, ATS1CPW */
 switch (el) {
 case 3:
-mmu_idx = ARMMMUIdx_S1E3;
+mmu_idx = ARMMMUIdx_SE3;
 break;
 case 2:
 mmu_idx = ARMMMUIdx_Stage1_E1;
@@ -3198,7 +3198,7 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 mmu_idx = ARMMMUIdx_S1E2;
 break;
 case 6: /* AT S1E3R, AT S1E3W */
-mmu_idx = ARMMMUIdx_S1E3;
+mmu_idx = ARMMMUIdx_SE3;
 break;
 default:
 g_assert_not_reached();
@@ -3422,9 +3422,9 @@ static void update_lpae_el1_asid(CPUARMState *env, int 
secure)
 ttbr0 = env->cp15.ttbr0_s;
 ttbr1 = env->cp15.ttbr1_s;
 ttcr = env->cp15.tcr_el[3].raw_tcr;
-/* Note that cp15.ttbr0_s == cp15.ttbr0_el[3], so S1E3 is affected.  */
+/* Note that cp15.ttbr0_s == cp15.ttbr0_el[3], so SE3 is affected.  */
 /* ??? Secure EL3 really using the ASID field?  Doesn't make sense.  */
-idxmask = ARMMMUIdxBit_SE1 | ARMMMUIdxBit_SE0 | ARMMMUIdxBit_S1E3;
+idxmask = ARMMMUIdxBit_SE1 | ARMMMUIdxBit_SE0 | ARMMMUIdxBit_SE3;
 break;
 case ARM_CP_SECSTATE_NS:
 ttbr0 = env->cp15.ttbr0_ns;
@@ -3967,7 +3967,7 @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 ARMCPU *cpu = env_archcpu(env);
 CPUState *cs = CPU(cpu);
 
-tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E3);
+tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
 }
 
 static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3992,7 +3992,7 @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 {
 CPUState *cs = env_cpu(env);
 
-tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
+tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
 }
 
 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4020,7 +4020,7 @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 CPUState *cs = CPU(cpu);
 uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E3);
+tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
 }
 
 static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4069,7 +4069,7 @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
 tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_S1E3);
+ ARMMMUIdxBit_SE3);
 }
 
 static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -8693,7 +8693,7 @@ static inline uint32_t regime_el(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_Stage2:
 case ARMMMUIdx_S1E2:
 return 2;
-case ARMMMUIdx_S1E3:
+case ARMMMUIdx_SE3:
 return 3;
 case ARMMMUIdx_SE0:
 return 

[Qemu-devel] [PATCH v3 30/34] target/arm: Update regime_is_user for EL2&0

2019-08-03 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index a0969b78bf..d481716b97 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -8936,6 +8936,7 @@ static inline bool regime_is_user(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 {
 switch (mmu_idx) {
 case ARMMMUIdx_SE0:
+case ARMMMUIdx_EL20_0:
 case ARMMMUIdx_Stage1_E0:
 case ARMMMUIdx_MUser:
 case ARMMMUIdx_MSUser:
-- 
2.17.1




[Qemu-devel] [PATCH v3 18/34] target/arm: Rename ARMMMUIdx_S1SE* to ARMMMUIdx_SE*

2019-08-03 Thread Richard Henderson
This is part of a reorganization to the set of mmu_idx.
The Secure regimes all have a single-stage translation;
there is no point in pointing out that the idx is for stage 1.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  8 
 target/arm/internals.h |  4 ++--
 target/arm/translate.h |  2 +-
 target/arm/helper.c| 30 +++---
 target/arm/translate-a64.c |  4 ++--
 target/arm/translate.c |  6 +++---
 6 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index ade558f63c..c7ce8a4da5 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2854,8 +2854,8 @@ typedef enum ARMMMUIdx {
 ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
-ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
-ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
+ARMMMUIdx_SE0 = 4 | ARM_MMU_IDX_A,
+ARMMMUIdx_SE1 = 5 | ARM_MMU_IDX_A,
 ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
 ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
 ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
@@ -2880,8 +2880,8 @@ typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_EL10_1 = 1 << 1,
 ARMMMUIdxBit_S1E2 = 1 << 2,
 ARMMMUIdxBit_S1E3 = 1 << 3,
-ARMMMUIdxBit_S1SE0 = 1 << 4,
-ARMMMUIdxBit_S1SE1 = 1 << 5,
+ARMMMUIdxBit_SE0 = 1 << 4,
+ARMMMUIdxBit_SE1 = 1 << 5,
 ARMMMUIdxBit_Stage2 = 1 << 6,
 ARMMMUIdxBit_MUser = 1 << 0,
 ARMMMUIdxBit_MPriv = 1 << 1,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index cd9b1acb20..c505cae30c 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -820,8 +820,8 @@ static inline bool regime_is_secure(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_MUser:
 return false;
 case ARMMMUIdx_S1E3:
-case ARMMMUIdx_S1SE0:
-case ARMMMUIdx_S1SE1:
+case ARMMMUIdx_SE0:
+case ARMMMUIdx_SE1:
 case ARMMMUIdx_MSPrivNegPri:
 case ARMMMUIdx_MSUserNegPri:
 case ARMMMUIdx_MSPriv:
diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2056..715fa08e3b 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -122,7 +122,7 @@ static inline int default_exception_el(DisasContext *s)
  * exceptions can only be routed to ELs above 1, so we target the higher of
  * 1 or the current EL.
  */
-return (s->mmu_idx == ARMMMUIdx_S1SE0 && s->secure_routed_to_el3)
+return (s->mmu_idx == ARMMMUIdx_SE0 && s->secure_routed_to_el3)
 ? 3 : MAX(1, s->current_el);
 }
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 4c0c314c1a..e0d4f33026 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -566,7 +566,7 @@ static void contextidr_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 
 switch (ri->secure) {
 case ARM_CP_SECSTATE_S:
-idxmask = ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+idxmask = ARMMMUIdxBit_SE1 | ARMMMUIdxBit_SE0;
 break;
 case ARM_CP_SECSTATE_NS:
 idxmask = ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0;
@@ -3122,7 +3122,7 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 mmu_idx = ARMMMUIdx_Stage1_E1;
 break;
 case 1:
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
+mmu_idx = secure ? ARMMMUIdx_SE1 : ARMMMUIdx_Stage1_E1;
 break;
 default:
 g_assert_not_reached();
@@ -3132,13 +3132,13 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 /* stage 1 current state PL0: ATS1CUR, ATS1CUW */
 switch (el) {
 case 3:
-mmu_idx = ARMMMUIdx_S1SE0;
+mmu_idx = ARMMMUIdx_SE0;
 break;
 case 2:
 mmu_idx = ARMMMUIdx_Stage1_E0;
 break;
 case 1:
-mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
+mmu_idx = secure ? ARMMMUIdx_SE0 : ARMMMUIdx_Stage1_E0;
 break;
 default:
 g_assert_not_reached();
@@ -3192,7 +3192,7 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 case 0:
 switch (ri->opc1) {
 case 0: /* AT S1E1R, AT S1E1W */
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
+mmu_idx = secure ? ARMMMUIdx_SE1 : ARMMMUIdx_Stage1_E1;
 break;
 case 4: /* AT S1E2R, AT S1E2W */
 mmu_idx = ARMMMUIdx_S1E2;
@@ -3205,13 +3205,13 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 }
 break;
 case 2: /* AT S1E0R, AT S1E0W */
-mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
+mmu_idx = secure ? ARMMMUIdx_SE0 : ARMMMUIdx_Stage1_E0;
 break;
 case 4: /* AT S12E1R, AT S12E1W */
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_EL10_1;
+mmu_idx = secure ? ARMMMUIdx_SE1 : 

[Qemu-devel] [PATCH v3 20/34] target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2

2019-08-03 Thread Richard Henderson
This is part of a reorganization of the set of mmu_idx.
The non-secure EL2 regime has only a single-stage translation;
there is no point in pointing out that the idx is for stage 1.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  4 ++--
 target/arm/internals.h |  2 +-
 target/arm/helper.c| 24 
 target/arm/translate.c |  2 +-
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 94337b2fb0..552269daad 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2852,7 +2852,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
 typedef enum ARMMMUIdx {
 ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A,
 ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
-ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
+ARMMMUIdx_E2 = 2 | ARM_MMU_IDX_A,
 ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
 ARMMMUIdx_SE0 = 4 | ARM_MMU_IDX_A,
 ARMMMUIdx_SE1 = 5 | ARM_MMU_IDX_A,
@@ -2878,7 +2878,7 @@ typedef enum ARMMMUIdx {
 typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_EL10_0 = 1 << 0,
 ARMMMUIdxBit_EL10_1 = 1 << 1,
-ARMMMUIdxBit_S1E2 = 1 << 2,
+ARMMMUIdxBit_E2 = 1 << 2,
 ARMMMUIdxBit_SE3 = 1 << 3,
 ARMMMUIdxBit_SE0 = 1 << 4,
 ARMMMUIdxBit_SE1 = 1 << 5,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index dbb46da549..027878516f 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -812,7 +812,7 @@ static inline bool regime_is_secure(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_EL10_1:
 case ARMMMUIdx_Stage1_E0:
 case ARMMMUIdx_Stage1_E1:
-case ARMMMUIdx_S1E2:
+case ARMMMUIdx_E2:
 case ARMMMUIdx_Stage2:
 case ARMMMUIdx_MPrivNegPri:
 case ARMMMUIdx_MUserNegPri:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e5b07b4770..69c913d824 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -740,7 +740,7 @@ static void tlbiall_hyp_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 {
 CPUState *cs = env_cpu(env);
 
-tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
+tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
 }
 
 static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -748,7 +748,7 @@ static void tlbiall_hyp_is_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 {
 CPUState *cs = env_cpu(env);
 
-tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
+tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
 }
 
 static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -757,7 +757,7 @@ static void tlbimva_hyp_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 CPUState *cs = env_cpu(env);
 uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
 
-tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2);
+tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
 }
 
 static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -767,7 +767,7 @@ static void tlbimva_hyp_is_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
 
 tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_S1E2);
+ ARMMMUIdxBit_E2);
 }
 
 static const ARMCPRegInfo cp_reginfo[] = {
@@ -3167,7 +3167,7 @@ static void ats1h_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
 uint64_t par64;
 
-par64 = do_ats_write(env, value, access_type, ARMMMUIdx_S1E2);
+par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
 
 A32_BANKED_CURRENT_REG_SET(env, par, par64);
 }
@@ -3195,7 +3195,7 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 mmu_idx = secure ? ARMMMUIdx_SE1 : ARMMMUIdx_Stage1_E1;
 break;
 case 4: /* AT S1E2R, AT S1E2W */
-mmu_idx = ARMMMUIdx_S1E2;
+mmu_idx = ARMMMUIdx_E2;
 break;
 case 6: /* AT S1E3R, AT S1E3W */
 mmu_idx = ARMMMUIdx_SE3;
@@ -3958,7 +3958,7 @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 ARMCPU *cpu = env_archcpu(env);
 CPUState *cs = CPU(cpu);
 
-tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
+tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
 }
 
 static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3984,7 +3984,7 @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 {
 CPUState *cs = env_cpu(env);
 
-tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
+tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
 }
 
 static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4006,7 +4006,7 @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 CPUState *cs = CPU(cpu);
 uint64_t pageaddr = sextract64(value << 12, 0, 56);
 

[Qemu-devel] [PATCH v3 28/34] target/arm: Flush tlbs for E2&0 translation regime

2019-08-03 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 31 ---
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 9d74162bbd..984a441cc4 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3917,8 +3917,11 @@ static CPAccessResult aa64_cacheop_access(CPUARMState 
*env,
 
 static int vae1_tlbmask(CPUARMState *env)
 {
+/* Since we exclude secure first, we may read HCR_EL2 directly. */
 if (arm_is_secure_below_el3(env)) {
 return ARMMMUIdxBit_SE1 | ARMMMUIdxBit_SE0;
+} else if (env->cp15.hcr_el2 & HCR_E2H) {
+return ARMMMUIdxBit_EL20_2 | ARMMMUIdxBit_EL10_0;
 } else {
 return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0;
 }
@@ -3956,6 +3959,10 @@ static int vmalle1_tlbmask(CPUARMState *env)
 if (arm_is_secure_below_el3(env)) {
 return ARMMMUIdxBit_SE1 | ARMMMUIdxBit_SE0;
 } else if (arm_feature(env, ARM_FEATURE_EL2)) {
+/* Since we exclude secure first, we may read HCR_EL2 directly. */
+if (env->cp15.hcr_el2 & HCR_E2H) {
+return ARMMMUIdxBit_EL20_2 | ARMMMUIdxBit_EL20_0;
+}
 return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0 | ARMMMUIdxBit_Stage2;
 } else {
 return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0;
@@ -3971,13 +3978,22 @@ static void tlbi_aa64_alle1_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 tlb_flush_by_mmuidx(cs, mask);
 }
 
+static int vae2_tlbmask(CPUARMState *env)
+{
+if (arm_hcr_el2_eff(env) & HCR_E2H) {
+return ARMMMUIdxBit_EL20_0 | ARMMMUIdxBit_EL20_2;
+} else {
+return ARMMMUIdxBit_E2;
+}
+}
+
 static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
   uint64_t value)
 {
-ARMCPU *cpu = env_archcpu(env);
-CPUState *cs = CPU(cpu);
+CPUState *cs = env_cpu(env);
+int mask = vae2_tlbmask(env);
 
-tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
+tlb_flush_by_mmuidx(cs, mask);
 }
 
 static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4002,8 +4018,9 @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 uint64_t value)
 {
 CPUState *cs = env_cpu(env);
+int mask = vae2_tlbmask(env);
 
-tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
+tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
 }
 
 static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4021,11 +4038,11 @@ static void tlbi_aa64_vae2_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
  * Currently handles both VAE2 and VALE2, since we don't support
  * flush-last-level-only.
  */
-ARMCPU *cpu = env_archcpu(env);
-CPUState *cs = CPU(cpu);
+CPUState *cs = env_cpu(env);
+int mask = vae2_tlbmask(env);
 uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
+tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
 }
 
 static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.17.1




[Qemu-devel] [PATCH v3 17/34] target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*

2019-08-03 Thread Richard Henderson
This is part of a reorganization of the set of mmu_idx.
The EL1&0 regime is the only one that uses 2-stage translation.
Spelling out Stage avoids confusion with Secure.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  4 ++--
 target/arm/internals.h |  6 +++---
 target/arm/helper.c| 24 
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 14730d29c6..ade558f63c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2868,8 +2868,8 @@ typedef enum ARMMMUIdx {
 /* Indexes below here don't have TLBs and are used only for AT system
  * instructions or for the first stage of an S12 page table walk.
  */
-ARMMMUIdx_S1NSE0 = 0 | ARM_MMU_IDX_NOTLB,
-ARMMMUIdx_S1NSE1 = 1 | ARM_MMU_IDX_NOTLB,
+ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
+ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
 } ARMMMUIdx;
 
 /* Bit macros for the core-mmu-index values for each index,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 1caa15e7e0..cd9b1acb20 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -810,8 +810,8 @@ static inline bool regime_is_secure(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 switch (mmu_idx) {
 case ARMMMUIdx_EL10_0:
 case ARMMMUIdx_EL10_1:
-case ARMMMUIdx_S1NSE0:
-case ARMMMUIdx_S1NSE1:
+case ARMMMUIdx_Stage1_E0:
+case ARMMMUIdx_Stage1_E1:
 case ARMMMUIdx_S1E2:
 case ARMMMUIdx_Stage2:
 case ARMMMUIdx_MPrivNegPri:
@@ -966,7 +966,7 @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env);
 #ifdef CONFIG_USER_ONLY
 static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 {
-return ARMMMUIdx_S1NSE0;
+return ARMMMUIdx_Stage1_E0;
 }
 #else
 ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 6c8eddfdf4..4c0c314c1a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3119,10 +3119,10 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 mmu_idx = ARMMMUIdx_S1E3;
 break;
 case 2:
-mmu_idx = ARMMMUIdx_S1NSE1;
+mmu_idx = ARMMMUIdx_Stage1_E1;
 break;
 case 1:
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S1NSE1;
+mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
 break;
 default:
 g_assert_not_reached();
@@ -3135,10 +3135,10 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 mmu_idx = ARMMMUIdx_S1SE0;
 break;
 case 2:
-mmu_idx = ARMMMUIdx_S1NSE0;
+mmu_idx = ARMMMUIdx_Stage1_E0;
 break;
 case 1:
-mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
+mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
 break;
 default:
 g_assert_not_reached();
@@ -3192,7 +3192,7 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 case 0:
 switch (ri->opc1) {
 case 0: /* AT S1E1R, AT S1E1W */
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S1NSE1;
+mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
 break;
 case 4: /* AT S1E2R, AT S1E2W */
 mmu_idx = ARMMMUIdx_S1E2;
@@ -3205,7 +3205,7 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 }
 break;
 case 2: /* AT S1E0R, AT S1E0W */
-mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
+mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
 break;
 case 4: /* AT S12E1R, AT S12E1W */
 mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_EL10_1;
@@ -8698,8 +8698,8 @@ static inline uint32_t regime_el(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_S1SE0:
 return arm_el_is_aa64(env, 3) ? 1 : 3;
 case ARMMMUIdx_S1SE1:
-case ARMMMUIdx_S1NSE0:
-case ARMMMUIdx_S1NSE1:
+case ARMMMUIdx_Stage1_E0:
+case ARMMMUIdx_Stage1_E1:
 case ARMMMUIdx_MPrivNegPri:
 case ARMMMUIdx_MUserNegPri:
 case ARMMMUIdx_MPriv:
@@ -8757,7 +8757,7 @@ static inline bool 
regime_translation_disabled(CPUARMState *env,
 }
 
 if ((env->cp15.hcr_el2 & HCR_DC) &&
-(mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
+(mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1)) {
 /* HCR.DC means SCTLR_EL1.M behaves as 0 */
 return true;
 }
@@ -8802,7 +8802,7 @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx 
mmu_idx)
 static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
 {
 if (mmu_idx == ARMMMUIdx_EL10_0 || mmu_idx == ARMMMUIdx_EL10_1) {
-mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_EL10_0);
+mmu_idx += (ARMMMUIdx_Stage1_E0 - ARMMMUIdx_EL10_0);
 }
 return mmu_idx;
 }
@@ -8837,7 +8837,7 @@ static inline bool 

[Qemu-devel] [PATCH v3 11/34] target/arm: Add the hypervisor virtual counter

2019-08-03 Thread Richard Henderson
Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/cpu-qom.h |  1 +
 target/arm/cpu.h | 11 +
 target/arm/cpu.c |  2 ++
 target/arm/helper.c  | 57 
 4 files changed, 66 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index 2049fa9612..43fc8296db 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -76,6 +76,7 @@ void arm_gt_ptimer_cb(void *opaque);
 void arm_gt_vtimer_cb(void *opaque);
 void arm_gt_htimer_cb(void *opaque);
 void arm_gt_stimer_cb(void *opaque);
+void arm_gt_hvtimer_cb(void *opaque);
 
 #define ARM_AFF0_SHIFT 0
 #define ARM_AFF0_MASK  (0xFFULL << ARM_AFF0_SHIFT)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e37008a4f7..bba4e1f984 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -144,11 +144,12 @@ typedef struct ARMGenericTimer {
 uint64_t ctl; /* Timer Control register */
 } ARMGenericTimer;
 
-#define GTIMER_PHYS 0
-#define GTIMER_VIRT 1
-#define GTIMER_HYP  2
-#define GTIMER_SEC  3
-#define NUM_GTIMERS 4
+#define GTIMER_PHYS 0
+#define GTIMER_VIRT 1
+#define GTIMER_HYP  2
+#define GTIMER_SEC  3
+#define GTIMER_HYPVIRT  4
+#define NUM_GTIMERS 5
 
 typedef struct {
 uint64_t raw_tcr;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index ec2ab95dbe..4431330c2e 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1226,6 +1226,8 @@ static void arm_cpu_realizefn(DeviceState *dev, Error 
**errp)
   arm_gt_htimer_cb, cpu);
 cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
   arm_gt_stimer_cb, cpu);
+cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, GTIMER_SCALE,
+  arm_gt_hvtimer_cb, cpu);
 #endif
 
 cpu_exec_realizefn(cs, _err);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e2fcb03da5..e0f5627218 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2527,6 +2527,7 @@ static uint64_t gt_tval_read(CPUARMState *env, const 
ARMCPRegInfo *ri,
 
 switch (timeridx) {
 case GTIMER_VIRT:
+case GTIMER_HYPVIRT:
 offset = gt_virt_cnt_offset(env);
 break;
 }
@@ -2543,6 +2544,7 @@ static void gt_tval_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 
 switch (timeridx) {
 case GTIMER_VIRT:
+case GTIMER_HYPVIRT:
 offset = gt_virt_cnt_offset(env);
 break;
 }
@@ -2698,6 +2700,34 @@ static void gt_sec_ctl_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 gt_ctl_write(env, ri, GTIMER_SEC, value);
 }
 
+static void gt_hv_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+gt_timer_reset(env, ri, GTIMER_HYPVIRT);
+}
+
+static void gt_hv_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+ uint64_t value)
+{
+gt_cval_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
+static uint64_t gt_hv_tval_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+return gt_tval_read(env, ri, GTIMER_HYPVIRT);
+}
+
+static void gt_hv_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+ uint64_t value)
+{
+gt_tval_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
+static void gt_hv_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
+uint64_t value)
+{
+gt_ctl_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
 void arm_gt_ptimer_cb(void *opaque)
 {
 ARMCPU *cpu = opaque;
@@ -2726,6 +2756,13 @@ void arm_gt_stimer_cb(void *opaque)
 gt_recalc_timer(cpu, GTIMER_SEC);
 }
 
+void arm_gt_hvtimer_cb(void *opaque)
+{
+ARMCPU *cpu = opaque;
+
+gt_recalc_timer(cpu, GTIMER_HYPVIRT);
+}
+
 static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
 /* Note that CNTFRQ is purely reads-as-written for the benefit
  * of software; writing it doesn't actually change the timer frequency.
@@ -6849,6 +6886,26 @@ void register_cp_regs_for_features(ARMCPU *cpu)
   .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
   .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
   .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) },
+#ifndef CONFIG_USER_ONLY
+{ .name = "CNTHV_CVAL_EL2", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 2,
+  .fieldoffset =
+offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].cval),
+  .type = ARM_CP_IO, .access = PL2_RW,
+  .writefn = gt_hv_cval_write, .raw_writefn = raw_write },
+{ .name = "CNTHV_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
+  .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 0,
+  .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW,
+  .resetfn = gt_hv_timer_reset,
+  .readfn = gt_hv_tval_read, .writefn = gt_hv_tval_write },
+{ .name = 

[Qemu-devel] [PATCH v3 12/34] target/arm: Add VHE system register redirection and aliasing

2019-08-03 Thread Richard Henderson
Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set.  Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated.  Install common dispatch routines to all of the
relevant registers.

The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.
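
As an illustration (not part of the patch), here is a minimal
stand-alone sketch of the dispatch idea described above.  The struct
and helper names (Reg, e2h_active, e2h_dispatch_read) are invented for
this sketch; in the real code the roles are played by ARMCPRegInfo,
arm_hcr_el2_eff() and el2_e2h_read()/el2_e2h_write(), with the EL2 twin
linked via .opaque.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Reg Reg;
    typedef uint64_t (*ReadFn)(const Reg *r);

    struct Reg {
        const char *name;
        uint64_t value;
        ReadFn readfn;        /* dispatch routine installed on the EL1 reg */
        ReadFn orig_readfn;   /* saved original accessor */
        const Reg *el2_twin;  /* link to the EL2 version (QEMU uses .opaque) */
    };

    static bool e2h_active;   /* stands in for "at EL2 with HCR_EL2.E2H set" */

    static uint64_t raw_read(const Reg *r)
    {
        return r->value;
    }

    /* Common dispatch routine: forward to the EL2 twin when redirecting. */
    static uint64_t e2h_dispatch_read(const Reg *r)
    {
        if (e2h_active) {
            r = r->el2_twin;
            return r->readfn ? r->readfn(r) : raw_read(r);
        }
        return r->orig_readfn ? r->orig_readfn(r) : raw_read(r);
    }

    int main(void)
    {
        Reg ttbr0_el2 = { "TTBR0_EL2", 0x2000, raw_read, NULL, NULL };
        Reg ttbr0_el1 = { "TTBR0_EL1", 0x1000, NULL, raw_read, &ttbr0_el2 };

        ttbr0_el1.readfn = e2h_dispatch_read;

        e2h_active = false;   /* access goes to the original EL1 register */
        printf("%#x\n", (unsigned)ttbr0_el1.readfn(&ttbr0_el1));
        e2h_active = true;    /* same access is redirected to TTBR0_EL2 */
        printf("%#x\n", (unsigned)ttbr0_el1.readfn(&ttbr0_el1));
        return 0;
    }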

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  44 +++
 target/arm/helper.c | 175 
 2 files changed, 206 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index bba4e1f984..a0f10b60eb 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2455,19 +2455,6 @@ struct ARMCPRegInfo {
  */
 ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
 
-/* Offsets of the secure and non-secure fields in CPUARMState for the
- * register if it is banked.  These fields are only used during the static
- * registration of a register.  During hashing the bank associated
- * with a given security state is copied to fieldoffset which is used from
- * there on out.
- *
- * It is expected that register definitions use either fieldoffset or
- * bank_fieldoffsets in the definition but not both.  It is also expected
- * that both bank offsets are set when defining a banked register.  This
- * use indicates that a register is banked.
- */
-ptrdiff_t bank_fieldoffsets[2];
-
 /* Function for making any access checks for this register in addition to
  * those specified by the 'access' permissions bits. If NULL, no extra
  * checks required. The access check is performed at runtime, not at
@@ -2502,6 +2489,37 @@ struct ARMCPRegInfo {
  * fieldoffset is 0 then no reset will be done.
  */
 CPResetFn *resetfn;
+
+union {
+/*
+ * Offsets of the secure and non-secure fields in CPUARMState for
+ * the register if it is banked.  These fields are only used during
+ * the static registration of a register.  During hashing the bank
+ * associated with a given security state is copied to fieldoffset
+ * which is used from there on out.
+ *
+ * It is expected that register definitions use either fieldoffset
+ * or bank_fieldoffsets in the definition but not both.  It is also
+ * expected that both bank offsets are set when defining a banked
+ * register.  This use indicates that a register is banked.
+ */
+ptrdiff_t bank_fieldoffsets[2];
+
+/*
+ * "Original" writefn and readfn.
+ * For ARMv8.1-VHE register aliases, we overwrite the read/write
+ * accessor functions of various EL1/EL0 to perform the runtime
+ * check for which sysreg should actually be modified, and then
+ * forwards the operation.  Before overwriting the accessors,
+ * the original function is copied here, so that accesses that
+ * really do go to the EL1/EL0 version proceed normally.
+ * (The corresponding EL2 register is linked via opaque.)
+ */
+struct {
+CPReadFn *orig_readfn;
+CPWriteFn *orig_writefn;
+};
+};
 };
 
 /* Macros which are lvalues for the field in CPUARMState for the
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e0f5627218..e9f4cae5e8 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5225,6 +5225,171 @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
 REGINFO_SENTINEL
 };
 
+#ifndef CONFIG_USER_ONLY
+/* Test if system register redirection is to occur in the current state.  */
+static bool redirect_for_e2h(CPUARMState *env)
+{
+return arm_current_el(env) == 2 && (arm_hcr_el2_eff(env) & HCR_E2H);
+}
+
+static uint64_t el2_e2h_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+CPReadFn *readfn;
+
+if (redirect_for_e2h(env)) {
+/* Switch to the saved EL2 version of the register.  */
+ri = ri->opaque;
+readfn = ri->readfn;
+} else {
+readfn = ri->orig_readfn;
+}
+if (readfn == NULL) {
+readfn = raw_read;
+}
+return readfn(env, ri);
+}
+
+static void el2_e2h_write(CPUARMState *env, const ARMCPRegInfo *ri,
+  uint64_t value)
+{
+CPWriteFn *writefn;
+
+if (redirect_for_e2h(env)) {
+/* Switch to the saved EL2 version of the register.  */
+ri = ri->opaque;
+writefn = ri->writefn;
+} else {
+writefn = ri->orig_writefn;
+}
+if (writefn == NULL) {
+writefn = raw_write;
+}
+writefn(env, ri, value);
+}
+
+static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
+{
+struct E2HAlias {
+uint32_t src_key, dst_key, new_key;
+const char *src_name, *dst_name, *new_name;
+bool (*feature)(const ARMISARegisters *id);

[Qemu-devel] [PATCH v3 16/34] target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2

2019-08-03 Thread Richard Henderson
The EL1&0 regime is the only one that uses 2-stage translation.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  4 +--
 target/arm/internals.h |  2 +-
 target/arm/helper.c| 54 +++---
 target/arm/translate-a64.c |  2 +-
 target/arm/translate.c |  2 +-
 5 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 8a3f61bc2c..14730d29c6 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2856,7 +2856,7 @@ typedef enum ARMMMUIdx {
 ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
-ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
+ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
 ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
 ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
 ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
@@ -2882,7 +2882,7 @@ typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_S1E3 = 1 << 3,
 ARMMMUIdxBit_S1SE0 = 1 << 4,
 ARMMMUIdxBit_S1SE1 = 1 << 5,
-ARMMMUIdxBit_S2NS = 1 << 6,
+ARMMMUIdxBit_Stage2 = 1 << 6,
 ARMMMUIdxBit_MUser = 1 << 0,
 ARMMMUIdxBit_MPriv = 1 << 1,
 ARMMMUIdxBit_MUserNegPri = 1 << 2,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index fafefdc59e..1caa15e7e0 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -813,7 +813,7 @@ static inline bool regime_is_secure(CPUARMState *env, 
ARMMMUIdx mmu_idx)
 case ARMMMUIdx_S1NSE0:
 case ARMMMUIdx_S1NSE1:
 case ARMMMUIdx_S1E2:
-case ARMMMUIdx_S2NS:
+case ARMMMUIdx_Stage2:
 case ARMMMUIdx_MPrivNegPri:
 case ARMMMUIdx_MUserNegPri:
 case ARMMMUIdx_MPriv:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index e391654638..6c8eddfdf4 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -684,7 +684,7 @@ static void tlbiall_nsnh_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 tlb_flush_by_mmuidx(cs,
 ARMMMUIdxBit_EL10_1 |
 ARMMMUIdxBit_EL10_0 |
-ARMMMUIdxBit_S2NS);
+ARMMMUIdxBit_Stage2);
 }
 
 static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -695,7 +695,7 @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 tlb_flush_by_mmuidx_all_cpus_synced(cs,
 ARMMMUIdxBit_EL10_1 |
 ARMMMUIdxBit_EL10_0 |
-ARMMMUIdxBit_S2NS);
+ARMMMUIdxBit_Stage2);
 }
 
 static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -716,7 +716,7 @@ static void tlbiipas2_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 
 pageaddr = sextract64(value << 12, 0, 40);
 
-tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
+tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
 }
 
 static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -732,7 +732,7 @@ static void tlbiipas2_is_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 pageaddr = sextract64(value << 12, 0, 40);
 
 tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 }
 
 static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3539,10 +3539,10 @@ static void vttbr_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 vmid = extract64(value, 48, 8);
 
 /*
- * A change in VMID to the stage2 page table (S2NS) invalidates
+ * A change in VMID to the stage2 page table (Stage2) invalidates
  * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
  */
-tlb_set_asid_for_mmuidx(cs, vmid, ARMMMUIdxBit_S2NS,
+tlb_set_asid_for_mmuidx(cs, vmid, ARMMMUIdxBit_Stage2,
 ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0);
 }
 
@@ -3937,7 +3937,7 @@ static int vmalle1_tlbmask(CPUARMState *env)
 if (arm_is_secure_below_el3(env)) {
 return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
 } else if (arm_feature(env, ARM_FEATURE_EL2)) {
-return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0 | ARMMMUIdxBit_S2NS;
+return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0 | ARMMMUIdxBit_Stage2;
 } else {
 return ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0;
 }
@@ -4091,7 +4091,7 @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 
 pageaddr = sextract64(value << 12, 0, 48);
 
-tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
+tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
 }
 
 static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -4107,7 +4107,7 @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 pageaddr = sextract64(value << 12, 0, 48);
 
 

[Qemu-devel] [PATCH v3 15/34] target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*

2019-08-03 Thread Richard Henderson
This is part of a reorganization of the set of mmu_idx.
The new names emphasize that these indexes apply to the EL1&0 regime.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  8 +++
 target/arm/internals.h |  4 ++--
 target/arm/helper.c| 44 +++---
 target/arm/translate-a64.c |  4 ++--
 target/arm/translate.c |  6 +++---
 5 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index a0f10b60eb..8a3f61bc2c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2850,8 +2850,8 @@ static inline bool arm_excp_unmasked(CPUState *cs, 
unsigned int excp_idx,
 #define ARM_MMU_IDX_COREIDX_MASK 0x7
 
 typedef enum ARMMMUIdx {
-ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A,
-ARMMMUIdx_S12NSE1 = 1 | ARM_MMU_IDX_A,
+ARMMMUIdx_EL10_0 = 0 | ARM_MMU_IDX_A,
+ARMMMUIdx_EL10_1 = 1 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
@@ -2876,8 +2876,8 @@ typedef enum ARMMMUIdx {
  * for use when calling tlb_flush_by_mmuidx() and friends.
  */
 typedef enum ARMMMUIdxBit {
-ARMMMUIdxBit_S12NSE0 = 1 << 0,
-ARMMMUIdxBit_S12NSE1 = 1 << 1,
+ARMMMUIdxBit_EL10_0 = 1 << 0,
+ARMMMUIdxBit_EL10_1 = 1 << 1,
 ARMMMUIdxBit_S1E2 = 1 << 2,
 ARMMMUIdxBit_S1E3 = 1 << 3,
 ARMMMUIdxBit_S1SE0 = 1 << 4,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 232d963875..fafefdc59e 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -808,8 +808,8 @@ static inline void arm_call_el_change_hook(ARMCPU *cpu)
 static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
 switch (mmu_idx) {
-case ARMMMUIdx_S12NSE0:
-case ARMMMUIdx_S12NSE1:
+case ARMMMUIdx_EL10_0:
+case ARMMMUIdx_EL10_1:
 case ARMMMUIdx_S1NSE0:
 case ARMMMUIdx_S1NSE1:
 case ARMMMUIdx_S1E2:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 185f5e4aea..e391654638 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -569,7 +569,7 @@ static void contextidr_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 idxmask = ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
 break;
 case ARM_CP_SECSTATE_NS:
-idxmask = ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
+idxmask = ARMMMUIdxBit_EL10_1 | ARMMMUIdxBit_EL10_0;
 break;
 default:
 g_assert_not_reached();
@@ -682,8 +682,8 @@ static void tlbiall_nsnh_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 CPUState *cs = env_cpu(env);
 
 tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0 |
+ARMMMUIdxBit_EL10_1 |
+ARMMMUIdxBit_EL10_0 |
 ARMMMUIdxBit_S2NS);
 }
 
@@ -693,8 +693,8 @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 CPUState *cs = env_cpu(env);
 
 tlb_flush_by_mmuidx_all_cpus_synced(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0 |
+ARMMMUIdxBit_EL10_1 |
+ARMMMUIdxBit_EL10_0 |
 ARMMMUIdxBit_S2NS);
 }
 
@@ -3047,7 +3047,7 @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t 
value,
 format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
 if (arm_feature(env, ARM_FEATURE_EL2)) {
-if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
+if (mmu_idx == ARMMMUIdx_EL10_0 || mmu_idx == ARMMMUIdx_EL10_1) {
 format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
 } else {
 format64 |= arm_current_el(env) == 2;
@@ -3146,11 +3146,11 @@ static void ats_write(CPUARMState *env, const 
ARMCPRegInfo *ri, uint64_t value)
 break;
 case 4:
 /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
-mmu_idx = ARMMMUIdx_S12NSE1;
+mmu_idx = ARMMMUIdx_EL10_1;
 break;
 case 6:
 /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
-mmu_idx = ARMMMUIdx_S12NSE0;
+mmu_idx = ARMMMUIdx_EL10_0;
 break;
 default:
 g_assert_not_reached();
@@ -3208,10 +3208,10 @@ static void ats_write64(CPUARMState *env, const 
ARMCPRegInfo *ri,
 mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
 break;
 case 4: /* AT S12E1R, AT S12E1W */
-mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S12NSE1;
+mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_EL10_1;
 break;
 case 6: /* AT S12E0R, AT S12E0W */
-mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S12NSE0;
+mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_EL10_0;
 break;
 default:
 g_assert_not_reached();
@@ 

[Qemu-devel] [PATCH v3 10/34] target/arm: Update CNTVCT_EL0 for VHE

2019-08-03 Thread Richard Henderson
The virtual counter offset may be 0 depending on the current EL and the
E2H and TGE settings.
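
As an illustration (not part of the patch), the offset selection rule
can be written as a small stand-alone function.  The bit constants and
the virt_cnt_offset() helper below are invented for this sketch; the
patch implements the same rule in gt_virt_cnt_offset() using
arm_hcr_el2_eff().

    #include <assert.h>
    #include <stdint.h>

    #define E2H (1u << 0)     /* stand-in for HCR_EL2.E2H */
    #define TGE (1u << 1)     /* stand-in for HCR_EL2.TGE */

    static uint64_t virt_cnt_offset(int current_el, unsigned hcr,
                                    uint64_t cntvoff_el2)
    {
        switch (current_el) {
        case 2:
            if (hcr & E2H) {
                return 0;     /* EL2&0 regime: virtual count == physical */
            }
            break;
        case 0:
            if ((hcr & (E2H | TGE)) == (E2H | TGE)) {
                return 0;     /* EL0 under a VHE host: no virtual offset */
            }
            break;
        }
        return cntvoff_el2;   /* everything else applies CNTVOFF_EL2 */
    }

    int main(void)
    {
        assert(virt_cnt_offset(1, E2H | TGE, 100) == 100);
        assert(virt_cnt_offset(2, E2H, 100) == 0);
        assert(virt_cnt_offset(0, E2H | TGE, 100) == 0);
        assert(virt_cnt_offset(0, E2H, 100) == 100);
        return 0;
    }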

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 40 +---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 8d8b3cc40e..e2fcb03da5 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2484,9 +2484,31 @@ static uint64_t gt_cnt_read(CPUARMState *env, const 
ARMCPRegInfo *ri)
 return gt_get_countervalue(env);
 }
 
+static uint64_t gt_virt_cnt_offset(CPUARMState *env)
+{
+uint64_t hcr;
+
+switch (arm_current_el(env)) {
+case 2:
+hcr = arm_hcr_el2_eff(env);
+if (hcr & HCR_E2H) {
+return 0;
+}
+break;
+case 0:
+hcr = arm_hcr_el2_eff(env);
+if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+return 0;
+}
+break;
+}
+
+return env->cp15.cntvoff_el2;
+}
+
 static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
-return gt_get_countervalue(env) - env->cp15.cntvoff_el2;
+return gt_get_countervalue(env) - gt_virt_cnt_offset(env);
 }
 
 static void gt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -2501,7 +2523,13 @@ static void gt_cval_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri,
  int timeridx)
 {
-uint64_t offset = timeridx == GTIMER_VIRT ? env->cp15.cntvoff_el2 : 0;
+uint64_t offset = 0;
+
+switch (timeridx) {
+case GTIMER_VIRT:
+offset = gt_virt_cnt_offset(env);
+break;
+}
 
 return (uint32_t)(env->cp15.c14_timer[timeridx].cval -
   (gt_get_countervalue(env) - offset));
@@ -2511,7 +2539,13 @@ static void gt_tval_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
   int timeridx,
   uint64_t value)
 {
-uint64_t offset = timeridx == GTIMER_VIRT ? env->cp15.cntvoff_el2 : 0;
+uint64_t offset = 0;
+
+switch (timeridx) {
+case GTIMER_VIRT:
+offset = gt_virt_cnt_offset(env);
+break;
+}
 
 trace_arm_gt_tval_write(timeridx, value);
 env->cp15.c14_timer[timeridx].cval = gt_get_countervalue(env) - offset +
-- 
2.17.1




[Qemu-devel] [PATCH v3 08/34] target/arm: Add CONTEXTIDR_EL2

2019-08-03 Thread Richard Henderson
Not all of the breakpoint types are supported, but those that
only examine contextidr are extended to support the new register.
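
As an illustration (not part of the patch), the choice of which
CONTEXTIDR register a linked context-ID breakpoint compares against can
be summarized in a small stand-alone helper.  The enum and function
names below are invented for this sketch; the patch makes the same
selection inside linked_bp_matches() using arm_hcr_el2_eff().

    #include <assert.h>
    #include <stdbool.h>

    enum ctx_source { CTX_NONE, CTX_EL1, CTX_EL2 };

    static enum ctx_source linked_ctx_source(int current_el, bool e2h, bool tge)
    {
        switch (current_el) {
        case 2:
            return e2h ? CTX_EL2 : CTX_NONE;  /* never fires without E2H */
        case 1:
            return CTX_EL1;
        case 0:
            return (e2h && tge) ? CTX_EL2 : CTX_EL1;
        default:
            return CTX_NONE;                  /* AArch64 EL3: never fires */
        }
    }

    int main(void)
    {
        assert(linked_ctx_source(3, false, false) == CTX_NONE);
        assert(linked_ctx_source(2, true, false) == CTX_EL2);
        assert(linked_ctx_source(1, true, true) == CTX_EL1);
        assert(linked_ctx_source(0, true, true) == CTX_EL2);
        assert(linked_ctx_source(0, false, false) == CTX_EL1);
        return 0;
    }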

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/debug_helper.c | 50 +--
 target/arm/helper.c   | 11 +
 2 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index dde80273ff..2e3e90c6a5 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -20,6 +20,7 @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
 int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
 int bt;
 uint32_t contextidr;
+uint64_t hcr_el2;
 
 /*
  * Links to unimplemented or non-context aware breakpoints are
@@ -40,24 +41,44 @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
 }
 
 bt = extract64(bcr, 20, 4);
-
-/*
- * We match the whole register even if this is AArch32 using the
- * short descriptor format (in which case it holds both PROCID and ASID),
- * since we don't implement the optional v7 context ID masking.
- */
-contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
+hcr_el2 = arm_hcr_el2_eff(env);
 
 switch (bt) {
 case 3: /* linked context ID match */
-if (arm_current_el(env) > 1) {
-/* Context matches never fire in EL2 or (AArch64) EL3 */
+switch (arm_current_el(env)) {
+default:
+/* Context matches never fire in AArch64 EL3 */
 return false;
+case 2:
+if (!(hcr_el2 & HCR_E2H)) {
+/* Context matches never fire in EL2 without E2H enabled. */
+return false;
+}
+contextidr = env->cp15.contextidr_el[2];
+break;
+case 1:
+contextidr = env->cp15.contextidr_el[1];
+break;
+case 0:
+if ((hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+contextidr = env->cp15.contextidr_el[2];
+} else {
+contextidr = env->cp15.contextidr_el[1];
+}
+break;
 }
-return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
-case 5: /* linked address mismatch (reserved in AArch64) */
+break;
+
+case 7:  /* linked contextidr_el1 match */
+contextidr = env->cp15.contextidr_el[1];
+break;
+case 13: /* linked contextidr_el2 match */
+contextidr = env->cp15.contextidr_el[2];
+break;
+
 case 9: /* linked VMID match (reserved if no EL2) */
 case 11: /* linked context ID and VMID match (reserved if no EL2) */
+case 15: /* linked full context ID match */
 default:
 /*
  * Links to Unlinked context breakpoints must generate no
@@ -66,7 +87,12 @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
 return false;
 }
 
-return false;
+/*
+ * We match the whole register even if this is AArch32 using the
+ * short descriptor format (in which case it holds both PROCID and ASID),
+ * since we don't implement the optional v7 context ID masking.
+ */
+return contextidr == (uint32_t)env->cp15.dbgbvr[lbn];
 }
 
 static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 9a18ecf8f6..8baeb3f319 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6801,6 +6801,17 @@ void register_cp_regs_for_features(ARMCPU *cpu)
 define_arm_cp_regs(cpu, lor_reginfo);
 }
 
+if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
+static const ARMCPRegInfo vhe_reginfo[] = {
+{ .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
+  .access = PL2_RW,
+  .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
+REGINFO_SENTINEL
+};
+define_arm_cp_regs(cpu, vhe_reginfo);
+}
+
 if (cpu_isar_feature(aa64_sve, cpu)) {
 define_one_arm_cp_reg(cpu, _el1_reginfo);
 if (arm_feature(env, ARM_FEATURE_EL2)) {
-- 
2.17.1




[Qemu-devel] [PATCH v3 09/34] target/arm: Add TTBR1_EL2

2019-08-03 Thread Richard Henderson
At the same time, add a writefn to TTBR0_EL2 and TCR_EL2.
A later patch will use it to update any ASID therein.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 8baeb3f319..8d8b3cc40e 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3449,6 +3449,12 @@ static void vmsa_ttbr_el1_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 }
 }
 
+static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
+uint64_t value)
+{
+raw_write(env, ri, value);
+}
+
 static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 uint64_t value)
 {
@@ -4844,10 +4850,8 @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
   .resetvalue = 0 },
 { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
   .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
-  .access = PL2_RW,
-  /* no .writefn needed as this can't cause an ASID change;
-   * no .raw_writefn or .resetfn needed as we never use mask/base_mask
-   */
+  .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
+  /* no .raw_writefn or .resetfn needed as we never use mask/base_mask */
   .fieldoffset = offsetof(CPUARMState, cp15.tcr_el[2]) },
 { .name = "VTCR", .state = ARM_CP_STATE_AA32,
   .cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
@@ -4881,7 +4885,7 @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
   .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[2]) },
 { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
   .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
-  .access = PL2_RW, .resetvalue = 0,
+  .access = PL2_RW, .resetvalue = 0, .writefn = vmsa_tcr_ttbr_el2_write,
   .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
 { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
   .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
@@ -6807,6 +6811,10 @@ void register_cp_regs_for_features(ARMCPU *cpu)
   .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
   .access = PL2_RW,
   .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
+{ .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
+  .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
+  .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) },
 REGINFO_SENTINEL
 };
 define_arm_cp_regs(cpu, vhe_reginfo);
-- 
2.17.1




[Qemu-devel] [PATCH v3 14/34] target/arm: Simplify tlb_force_broadcast alternatives

2019-08-03 Thread Richard Henderson
Rather than calling a separate function and re-computing the
parameters for the flush, simply use the correct flush
function directly.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 52 +
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7ecaacb276..185f5e4aea 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -626,56 +626,54 @@ static void tlbiall_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
   uint64_t value)
 {
 /* Invalidate all (TLBIALL) */
-ARMCPU *cpu = env_archcpu(env);
+CPUState *cs = env_cpu(env);
 
 if (tlb_force_broadcast(env)) {
-tlbiall_is_write(env, NULL, value);
-return;
+tlb_flush_all_cpus_synced(cs);
+} else {
+tlb_flush(cs);
 }
-
-tlb_flush(CPU(cpu));
 }
 
 static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
   uint64_t value)
 {
 /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-ARMCPU *cpu = env_archcpu(env);
+CPUState *cs = env_cpu(env);
 
+value &= TARGET_PAGE_MASK;
 if (tlb_force_broadcast(env)) {
-tlbimva_is_write(env, NULL, value);
-return;
+tlb_flush_page_all_cpus_synced(cs, value);
+} else {
+tlb_flush_page(cs, value);
 }
-
-tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
 }
 
 static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
 /* Invalidate by ASID (TLBIASID) */
-ARMCPU *cpu = env_archcpu(env);
+CPUState *cs = env_cpu(env);
 
 if (tlb_force_broadcast(env)) {
-tlbiasid_is_write(env, NULL, value);
-return;
+tlb_flush_all_cpus_synced(cs);
+} else {
+tlb_flush(cs);
 }
-
-tlb_flush(CPU(cpu));
 }
 
 static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
 /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-ARMCPU *cpu = env_archcpu(env);
+CPUState *cs = env_cpu(env);
 
+value &= TARGET_PAGE_MASK;
 if (tlb_force_broadcast(env)) {
-tlbimvaa_is_write(env, NULL, value);
-return;
+tlb_flush_page_all_cpus_synced(cs, value);
+} else {
+tlb_flush_page(cs, value);
 }
-
-tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
 }
 
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3923,11 +3921,10 @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 int mask = vae1_tlbmask(env);
 
 if (tlb_force_broadcast(env)) {
-tlbi_aa64_vmalle1is_write(env, NULL, value);
-return;
+tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
+} else {
+tlb_flush_by_mmuidx(cs, mask);
 }
-
-tlb_flush_by_mmuidx(cs, mask);
 }
 
 static int vmalle1_tlbmask(CPUARMState *env)
@@ -4049,11 +4046,10 @@ static void tlbi_aa64_vae1_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
 if (tlb_force_broadcast(env)) {
-tlbi_aa64_vae1is_write(env, NULL, value);
-return;
+tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
+} else {
+tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
 }
-
-tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
 }
 
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.17.1




[Qemu-devel] [PATCH v3 13/34] target/arm: Split out vae1_tlbmask, vmalle1_tlbmask

2019-08-03 Thread Richard Henderson
No functional change; this just unifies the code sequences.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 118 ++--
 1 file changed, 37 insertions(+), 81 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index e9f4cae5e8..7ecaacb276 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3898,70 +3898,61 @@ static CPAccessResult aa64_cacheop_access(CPUARMState 
*env,
  * Page D4-1736 (DDI0487A.b)
  */
 
+static int vae1_tlbmask(CPUARMState *env)
+{
+if (arm_is_secure_below_el3(env)) {
+return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+} else {
+return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
+}
+}
+
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
   uint64_t value)
 {
 CPUState *cs = env_cpu(env);
-bool sec = arm_is_secure_below_el3(env);
+int mask = vae1_tlbmask(env);
 
-if (sec) {
-tlb_flush_by_mmuidx_all_cpus_synced(cs,
-ARMMMUIdxBit_S1SE1 |
-ARMMMUIdxBit_S1SE0);
-} else {
-tlb_flush_by_mmuidx_all_cpus_synced(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0);
-}
+tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
 }
 
 static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 uint64_t value)
 {
 CPUState *cs = env_cpu(env);
+int mask = vae1_tlbmask(env);
 
 if (tlb_force_broadcast(env)) {
 tlbi_aa64_vmalle1is_write(env, NULL, value);
 return;
 }
 
+tlb_flush_by_mmuidx(cs, mask);
+}
+
+static int vmalle1_tlbmask(CPUARMState *env)
+{
+/*
+ * Note that the 'ALL' scope must invalidate both stage 1 and
+ * stage 2 translations, whereas most other scopes only invalidate
+ * stage 1 translations.
+ */
 if (arm_is_secure_below_el3(env)) {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S1SE1 |
-ARMMMUIdxBit_S1SE0);
+return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+} else if (arm_feature(env, ARM_FEATURE_EL2)) {
+return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0 | ARMMMUIdxBit_S2NS;
 } else {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0);
+return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
 }
 }
 
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
   uint64_t value)
 {
-/* Note that the 'ALL' scope must invalidate both stage 1 and
- * stage 2 translations, whereas most other scopes only invalidate
- * stage 1 translations.
- */
-ARMCPU *cpu = env_archcpu(env);
-CPUState *cs = CPU(cpu);
+CPUState *cs = env_cpu(env);
+int mask = vmalle1_tlbmask(env);
 
-if (arm_is_secure_below_el3(env)) {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S1SE1 |
-ARMMMUIdxBit_S1SE0);
-} else {
-if (arm_feature(env, ARM_FEATURE_EL2)) {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0 |
-ARMMMUIdxBit_S2NS);
-} else {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0);
-}
-}
+tlb_flush_by_mmuidx(cs, mask);
 }
 
 static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3985,28 +3976,10 @@ static void tlbi_aa64_alle3_write(CPUARMState *env, 
const ARMCPRegInfo *ri,
 static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 uint64_t value)
 {
-/* Note that the 'ALL' scope must invalidate both stage 1 and
- * stage 2 translations, whereas most other scopes only invalidate
- * stage 1 translations.
- */
 CPUState *cs = env_cpu(env);
-bool sec = arm_is_secure_below_el3(env);
-bool has_el2 = arm_feature(env, ARM_FEATURE_EL2);
+int mask = vmalle1_tlbmask(env);
 
-if (sec) {
-tlb_flush_by_mmuidx_all_cpus_synced(cs,
-ARMMMUIdxBit_S1SE1 |
-ARMMMUIdxBit_S1SE0);
-} else if (has_el2) {
-tlb_flush_by_mmuidx_all_cpus_synced(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0 |
-ARMMMUIdxBit_S2NS);
-} else {
-  tlb_flush_by_mmuidx_all_cpus_synced(cs,
-

[Qemu-devel] [PATCH v3 07/34] target/arm: Enable HCR_E2H for VHE

2019-08-03 Thread Richard Henderson
Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h| 7 ---
 target/arm/helper.c | 6 +-
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e6a76d14c6..e37008a4f7 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1366,13 +1366,6 @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HCR_ATA   (1ULL << 56)
 #define HCR_DCT   (1ULL << 57)
 
-/*
- * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
- * HCR_MASK and then clear it again if the feature bit is not set in
- * hcr_write().
- */
-#define HCR_MASK  ((1ULL << 34) - 1)
-
 #define SCR_NS(1U << 0)
 #define SCR_IRQ   (1U << 1)
 #define SCR_FIQ   (1U << 2)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 65e3ffbb43..9a18ecf8f6 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4623,7 +4623,8 @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
 ARMCPU *cpu = env_archcpu(env);
-uint64_t valid_mask = HCR_MASK;
+/* Begin with bits defined in base ARMv8.0.  */
+uint64_t valid_mask = MAKE_64BIT_MASK(0, 34);
 
 if (arm_feature(env, ARM_FEATURE_EL3)) {
 valid_mask &= ~HCR_HCD;
@@ -4637,6 +4638,9 @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
  */
 valid_mask &= ~HCR_TSC;
 }
+if (cpu_isar_feature(aa64_vh, cpu)) {
+valid_mask |= HCR_E2H;
+}
 if (cpu_isar_feature(aa64_lor, cpu)) {
 valid_mask |= HCR_TLOR;
 }
-- 
2.17.1




[Qemu-devel] [PATCH v3 06/34] target/arm: Define isar_feature_aa64_vh

2019-08-03 Thread Richard Henderson
Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 94c990cddb..e6a76d14c6 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3573,6 +3573,11 @@ static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
 }
 
+static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
+}
+
 static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
-- 
2.17.1




[Qemu-devel] [PATCH v3 04/34] target/arm: Install ASIDs for short-form from EL1

2019-08-03 Thread Richard Henderson
This is less complex than the LPAE case, but we still now avoid the
flush when it is only the PROCID field that is changing.
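
As a side note on why that works (assuming the usual short-descriptor
layout, with the ASID in CONTEXTIDR[7:0] and the PROCID in CONTEXTIDR[31:8]):
a write that only touches the PROCID bits leaves the low byte unchanged, so
the set-ASID call in the new code degenerates into a no-op instead of a
flush.  A rough, illustrative sketch of that observation:

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: assumes ASID in CONTEXTIDR bits [7:0], PROCID in bits [31:8]. */
static bool contextidr_asid_changed(uint32_t old_val, uint32_t new_val)
{
    /* A PROCID-only update compares equal here and needs no TLB flush. */
    return (old_val & 0xff) != (new_val & 0xff);
}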

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 34 --
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 2a65f4127e..c0dc76ed41 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -551,17 +551,31 @@ static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
  uint64_t value)
 {
-ARMCPU *cpu = env_archcpu(env);
-
-if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
-&& !extended_addresses_enabled(env)) {
-/* For VMSA (when not using the LPAE long descriptor page table
- * format) this register includes the ASID, so do a TLB flush.
- * For PMSA it is purely a process ID and no action is needed.
- */
-tlb_flush(CPU(cpu));
-}
 raw_write(env, ri, value);
+
+/*
+ * For VMSA (when not using the LPAE long descriptor page table format)
+ * this register includes the ASID.  For PMSA it is purely a process ID
+ * and no action is needed.
+ */
+if (!arm_feature(env, ARM_FEATURE_PMSA) &&
+!extended_addresses_enabled(env)) {
+CPUState *cs = env_cpu(env);
+int asid = extract32(value, 0, 8);
+int idxmask;
+
+switch (ri->secure) {
+case ARM_CP_SECSTATE_S:
+idxmask = ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+break;
+case ARM_CP_SECSTATE_NS:
+idxmask = ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
+break;
+default:
+g_assert_not_reached();
+}
+tlb_set_asid_for_mmuidx(cs, asid, idxmask, 0);
+}
 }
 
 /* IS variants of TLB operations must affect all cores */
-- 
2.17.1




[Qemu-devel] [PATCH v3 03/34] target/arm: Install ASIDs for long-form from EL1

2019-08-03 Thread Richard Henderson
In addition to providing the core with the current ASID, this minimizes
both the number of flushes due to an unchanged ASID and the set of
mmu_idx that are affected by each flush.

In particular, updates to the secure mode registers flush only the
relevant secure mode mmu_idx's, and similarly non-secure updates only
affect non-secure mmu_idx's.
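
For reference, the selection performed by the new update_lpae_el1_asid()
helper can be summarized as: TTBCR/TCR_EL1.A1 picks which TTBR carries the
current ASID, and the ASID sits in bits [63:48] of that TTBR.  A minimal
sketch of just that selection (bit positions assumed from the architecture,
not taken from this patch):

#include <stdint.h>

#define TTBCR_A1 (1u << 22)   /* assumed A1 bit position in TTBCR/TCR_EL1 */

static uint16_t lpae_current_asid(uint64_t ttbcr, uint64_t ttbr0, uint64_t ttbr1)
{
    /* A1 == 0: ASID from TTBR0[63:48]; A1 == 1: ASID from TTBR1[63:48]. */
    uint64_t ttbr = (ttbcr & TTBCR_A1) ? ttbr1 : ttbr0;
    return (uint16_t)(ttbr >> 48);
}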

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 73 +
 1 file changed, 48 insertions(+), 25 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index b74c23a9bc..2a65f4127e 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3327,6 +3327,36 @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
 REGINFO_SENTINEL
 };
 
+/* Called after a change to any of TTBR*_EL1 or TTBCR_EL1.  */
+static void update_lpae_el1_asid(CPUARMState *env, int secure)
+{
+CPUState *cs = env_cpu(env);
+uint64_t ttbr0, ttbr1, ttcr;
+int asid, idxmask;
+
+switch (secure) {
+case ARM_CP_SECSTATE_S:
+ttbr0 = env->cp15.ttbr0_s;
+ttbr1 = env->cp15.ttbr1_s;
+ttcr = env->cp15.tcr_el[3].raw_tcr;
+/* Note that cp15.ttbr0_s == cp15.ttbr0_el[3], so S1E3 is affected.  */
+/* ??? Secure EL3 really using the ASID field?  Doesn't make sense.  */
+idxmask = ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0 | ARMMMUIdxBit_S1E3;
+break;
+case ARM_CP_SECSTATE_NS:
+ttbr0 = env->cp15.ttbr0_ns;
+ttbr1 = env->cp15.ttbr1_ns;
+ttcr = env->cp15.tcr_el[1].raw_tcr;
+idxmask = ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
+break;
+default:
+g_assert_not_reached();
+}
+asid = extract64(ttcr & TTBCR_A1 ? ttbr1 : ttbr0, 48, 16);
+
+tlb_set_asid_for_mmuidx(cs, asid, idxmask, 0);
+}
+
 static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
  uint64_t value)
 {
@@ -3363,18 +3393,16 @@ static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
  uint64_t value)
 {
-ARMCPU *cpu = env_archcpu(env);
 TCR *tcr = raw_ptr(env, ri);
 
-if (arm_feature(env, ARM_FEATURE_LPAE)) {
-/* With LPAE the TTBCR could result in a change of ASID
- * via the TTBCR.A1 bit, so do a TLB flush.
- */
-tlb_flush(CPU(cpu));
-}
 /* Preserve the high half of TCR_EL1, set via TTBCR2.  */
 value = deposit64(tcr->raw_tcr, 0, 32, value);
 vmsa_ttbcr_raw_write(env, ri, value);
+
+if (arm_feature(env, ARM_FEATURE_LPAE)) {
+/* The A1 bit controls which ASID is active.  */
+update_lpae_el1_asid(env, ri->secure);
+}
 }
 
 static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -3392,24 +3420,19 @@ static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
 static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
-ARMCPU *cpu = env_archcpu(env);
-TCR *tcr = raw_ptr(env, ri);
-
-/* For AArch64 the A1 bit could result in a change of ASID, so TLB flush. */
-tlb_flush(CPU(cpu));
-tcr->raw_tcr = value;
+raw_write(env, ri, value);
+/* The A1 bit controls which ASID is active.  */
+update_lpae_el1_asid(env, ri->secure);
 }
 
-static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-uint64_t value)
+static void vmsa_ttbr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+uint64_t value)
 {
-/* If the ASID changes (with a 64-bit write), we must flush the TLB.  */
-if (cpreg_field_is_64bit(ri) &&
-extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
-ARMCPU *cpu = env_archcpu(env);
-tlb_flush(CPU(cpu));
-}
 raw_write(env, ri, value);
+if (cpreg_field_is_64bit(ri)) {
+/* The LPAE format (64-bit write) contains an ASID field.  */
+update_lpae_el1_asid(env, ri->secure);
+}
 }
 
 static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -3455,12 +3478,12 @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
   .fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, },
 { .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
   .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
-  .access = PL1_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
+  .access = PL1_RW, .writefn = vmsa_ttbr_el1_write, .resetvalue = 0,
   .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s),
  offsetof(CPUARMState, cp15.ttbr0_ns) } },
 { .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH,
   .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1,
-  .access = PL1_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
+  .access = PL1_RW, .writefn = 

[Qemu-devel] [PATCH v3 01/34] cputlb: Add tlb_set_asid_for_mmuidx

2019-08-03 Thread Richard Henderson
Although we can't do much with ASIDs except remember them, this
will allow cleanups within target/ that should make things clearer.
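
To make the intended calling convention concrete, here is a hypothetical
target-side user of the new hook (the mmu index numbers are made up for the
example; only tlb_set_asid_for_mmuidx() itself comes from this patch):

/* Illustrative caller: record the ASID for the indexes this register
 * affects and let the core decide whether a flush is actually needed. */
static void example_ttb_write(CPUState *cs, uint64_t value)
{
    uint32_t asid = value >> 48;                    /* target-specific field */
    uint16_t idxmap = (1 << 0) | (1 << 1);          /* made-up mmu indexes   */

    tlb_set_asid_for_mmuidx(cs, asid, idxmap, 0);   /* no dependent indexes  */
}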

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
v2: Assert cpu_is_self; only flush idx w/ asid mismatch.
v3: Improve asid comment.
---
 include/exec/cpu-all.h  | 11 +++
 include/exec/cpu-defs.h |  2 ++
 include/exec/exec-all.h | 19 +++
 accel/tcg/cputlb.c  | 26 ++
 4 files changed, 58 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58f81..40b140cbba 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -439,4 +439,15 @@ static inline CPUTLB *env_tlb(CPUArchState *env)
 return &env_neg(env)->tlb;
 }
 
+/**
+ * cpu_tlb(cpu)
+ * @cpu: The generic CPUState
+ *
+ * Return the CPUTLB state associated with the cpu.
+ */
+static inline CPUTLB *cpu_tlb(CPUState *cpu)
+{
+return &cpu_neg(cpu)->tlb;
+}
+
 #endif /* CPU_ALL_H */
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index 9bc713a70b..b42986d822 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -169,6 +169,8 @@ typedef struct CPUTLBDesc {
 size_t n_used_entries;
 /* The next index to use in the tlb victim table.  */
 size_t vindex;
+/* The current ASID for this tlb, if used; otherwise ignored.  */
+uint32_t asid;
 /* The tlb victim table, in two parts.  */
 CPUTLBEntry vtable[CPU_VTLB_SIZE];
 CPUIOTLBEntry viotlb[CPU_VTLB_SIZE];
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 16034ee651..9c77aa5bf9 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -225,6 +225,21 @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap);
  * depend on when the guests translation ends the TB.
  */
 void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
+/**
+ * tlb_set_asid_for_mmuidx:
+ * @cpu: Originating cpu
+ * @asid: Address Space Identifier
+ * @idxmap: bitmap of MMU indexes to set to @asid
+ * @depmap: bitmap of dependent MMU indexes
+ *
+ * Set an ASID for all of @idxmap.  If any previous ASID was different,
+ * then we will flush the mmu idx.  If a flush is required, then also flush
+ * all dependent mmu indices in @depmap.  This latter is typically used
+ * for secondary page resolution, for implementing virtualization within
+ * the guest.
+ */
+void tlb_set_asid_for_mmuidx(CPUState *cpu, uint32_t asid,
+ uint16_t idxmap, uint16_t dep_idxmap);
 /**
  * tlb_set_page_with_attrs:
  * @cpu: CPU to add this TLB entry for
@@ -310,6 +325,10 @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
uint16_t idxmap)
 {
 }
+static inline void tlb_set_asid_for_mmuidx(CPUState *cpu, uint32_t asid,
+   uint16_t idxmap, uint16_t depmap)
+{
+}
 #endif
 
#define CODE_GEN_ALIGN   16 /* must be >= of the size of a icache line */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b25a..c68f57755b 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -540,6 +540,32 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
 tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }
 
+void tlb_set_asid_for_mmuidx(CPUState *cpu, uint32_t asid, uint16_t idxmap,
+ uint16_t depmap)
+{
+CPUTLB *tlb = cpu_tlb(cpu);
+uint16_t work, to_flush = 0;
+
+/* It doesn't make sense to set context across cpus.  */
+assert_cpu_is_self(cpu);
+
+/*
+ * We don't support ASIDs except for trivially.
+ * If there is any change, then we must flush the TLB.
+ */
+for (work = idxmap; work != 0; work &= work - 1) {
+int mmu_idx = ctz32(work);
+if (tlb->d[mmu_idx].asid != asid) {
+tlb->d[mmu_idx].asid = asid;
+to_flush |= 1 << mmu_idx;
+}
+}
+if (to_flush) {
+to_flush |= depmap;
+tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(to_flush));
+}
+}
+
 /* update the TLBs so that writes to code in the virtual page 'addr'
can be detected */
 void tlb_protect_code(ram_addr_t ram_addr)
-- 
2.17.1




[Qemu-devel] [PATCH v3 02/34] cputlb: Add tlb_flush_asid_by_mmuidx and friends

2019-08-03 Thread Richard Henderson
Since we have remembered ASIDs, we can further minimize flushing
by comparing against the one we want to flush.
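
A hypothetical user, to show the intended pattern (only the three new
tlb_flush_asid_by_mmuidx* entry points come from this patch; everything else
in the sketch is made up):

/* Illustrative TLBI-by-ASID handler: only mmu indexes whose remembered ASID
 * matches the requested one are flushed, locally or with broadcast. */
static void example_tlbi_asid(CPUState *cs, uint32_t asid,
                              uint16_t idxmap, bool broadcast)
{
    if (broadcast) {
        tlb_flush_asid_by_mmuidx_all_cpus_synced(cs, asid, idxmap);
    } else {
        tlb_flush_asid_by_mmuidx(cs, asid, idxmap);
    }
}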

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 include/exec/exec-all.h | 16 
 include/qom/cpu.h   |  2 ++
 accel/tcg/cputlb.c  | 55 +
 3 files changed, 73 insertions(+)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 9c77aa5bf9..0d890e1e60 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -240,6 +240,22 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
  */
 void tlb_set_asid_for_mmuidx(CPUState *cpu, uint32_t asid,
  uint16_t idxmap, uint16_t dep_idxmap);
+/**
+ * tlb_flush_asid_by_mmuidx:
+ * @cpu: Originating CPU of the flush
+ * @asid: Address Space Identifier
+ * @idxmap: bitmap of MMU indexes to flush if asid matches
+ *
+ * For each mmu index, if @asid matches the value previously saved via
+ * tlb_set_asid_for_mmuidx, flush the index.
+ */
+void tlb_flush_asid_by_mmuidx(CPUState *cpu, uint32_t asid, uint16_t idxmap);
+/* Similarly, broadcasting to all cpus. */
+void tlb_flush_asid_by_mmuidx_all_cpus(CPUState *cpu, uint32_t asid,
+   uint16_t idxmap);
+/* Similarly, waiting for the broadcast to complete.  */
+void tlb_flush_asid_by_mmuidx_all_cpus_synced(CPUState *cpu, uint32_t asid,
+  uint16_t idxmap);
 /**
  * tlb_set_page_with_attrs:
  * @cpu: CPU to add this TLB entry for
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 5ee0046b62..c072dd4c47 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -285,12 +285,14 @@ typedef union {
 unsigned long host_ulong;
 void *host_ptr;
 vaddr target_ptr;
+uint64_t  uint64;
 } run_on_cpu_data;
 
 #define RUN_ON_CPU_HOST_PTR(p)((run_on_cpu_data){.host_ptr = (p)})
 #define RUN_ON_CPU_HOST_INT(i)((run_on_cpu_data){.host_int = (i)})
 #define RUN_ON_CPU_HOST_ULONG(ul) ((run_on_cpu_data){.host_ulong = (ul)})
 #define RUN_ON_CPU_TARGET_PTR(v)  ((run_on_cpu_data){.target_ptr = (v)})
+#define RUN_ON_CPU_UINT64(i)  ((run_on_cpu_data){.uint64 = (i)})
 #define RUN_ON_CPU_NULL   RUN_ON_CPU_HOST_PTR(NULL)
 
 typedef void (*run_on_cpu_func)(CPUState *cpu, run_on_cpu_data data);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index c68f57755b..62baaa9ca6 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -540,6 +540,61 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
 tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }
 
+static void tlb_flush_asid_by_mmuidx_async_work(CPUState *cpu,
+run_on_cpu_data data)
+{
+CPUTLB *tlb = cpu_tlb(cpu);
+uint32_t asid = data.uint64;
+uint16_t idxmap = data.uint64 >> 32;
+uint16_t to_flush = 0, work;
+
+assert_cpu_is_self(cpu);
+
+for (work = idxmap; work != 0; work &= work - 1) {
+int mmu_idx = ctz32(work);
+if (tlb->d[mmu_idx].asid == asid) {
+to_flush |= 1 << mmu_idx;
+}
+}
+
+if (to_flush) {
+tlb_flush_by_mmuidx_async_work(cpu, RUN_ON_CPU_HOST_INT(to_flush));
+}
+}
+
+void tlb_flush_asid_by_mmuidx(CPUState *cpu, uint32_t asid, uint16_t idxmap)
+{
+uint64_t asid_idx = deposit64(asid, 32, 32, idxmap);
+
+if (cpu->created && !qemu_cpu_is_self(cpu)) {
+async_run_on_cpu(cpu, tlb_flush_asid_by_mmuidx_async_work,
+ RUN_ON_CPU_UINT64(asid_idx));
+} else {
+tlb_flush_asid_by_mmuidx_async_work(cpu, RUN_ON_CPU_UINT64(asid_idx));
+}
+}
+
+void tlb_flush_asid_by_mmuidx_all_cpus(CPUState *src_cpu,
+   uint32_t asid, uint16_t idxmap)
+{
+uint64_t asid_idx = deposit64(asid, 32, 32, idxmap);
+
+flush_all_helper(src_cpu, tlb_flush_asid_by_mmuidx_async_work,
+ RUN_ON_CPU_UINT64(asid_idx));
+tlb_flush_asid_by_mmuidx_async_work(src_cpu, RUN_ON_CPU_UINT64(asid_idx));
+}
+
+void tlb_flush_asid_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
+  uint32_t asid, uint16_t idxmap)
+{
+uint64_t asid_idx = deposit64(asid, 32, 32, idxmap);
+
+flush_all_helper(src_cpu, tlb_flush_asid_by_mmuidx_async_work,
+ RUN_ON_CPU_UINT64(asid_idx));
+async_safe_run_on_cpu(src_cpu, tlb_flush_asid_by_mmuidx_async_work,
+  RUN_ON_CPU_UINT64(asid_idx));
+}
+
 void tlb_set_asid_for_mmuidx(CPUState *cpu, uint32_t asid, uint16_t idxmap,
  uint16_t depmap)
 {
-- 
2.17.1




[Qemu-devel] [PATCH v3 05/34] target/arm: Install ASIDs for EL2

2019-08-03 Thread Richard Henderson
The VMID is the ASID for the 2nd stage page lookup.

Reviewed-by: Alex Bennée 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 26 --
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index c0dc76ed41..65e3ffbb43 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3452,17 +3452,23 @@ static void vmsa_ttbr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 uint64_t value)
 {
-ARMCPU *cpu = env_archcpu(env);
-CPUState *cs = CPU(cpu);
+CPUState *cs = env_cpu(env);
+int vmid;
 
-/* Accesses to VTTBR may change the VMID so we must flush the TLB.  */
-if (raw_read(env, ri) != value) {
-tlb_flush_by_mmuidx(cs,
-ARMMMUIdxBit_S12NSE1 |
-ARMMMUIdxBit_S12NSE0 |
-ARMMMUIdxBit_S2NS);
-raw_write(env, ri, value);
-}
+raw_write(env, ri, value);
+
+/*
+ * TODO: with ARMv8.1-VMID16, aarch64 must examine VTCR.VS
+ * (re-evaluating with changes to VTCR) then use bits [63:48].
+ */
+vmid = extract64(value, 48, 8);
+
+/*
+ * A change in VMID to the stage2 page table (S2NS) invalidates
+ * the combined stage 1&2 tlbs (S12NSE1 and S12NSE0).
+ */
+tlb_set_asid_for_mmuidx(cs, vmid, ARMMMUIdxBit_S2NS,
+ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0);
 }
 
 static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
-- 
2.17.1




[Qemu-devel] [PATCH v3 00/34] target/arm: Implement ARMv8.1-VHE

2019-08-03 Thread Richard Henderson
About half of this patch set is cleanup of the qemu tlb handling
leading up to the actual implementation of VHE, and the biggest
piece of that: The EL2&0 translation regime.

Changes since v2:
  * arm_mmu_idx was incomplete; test TGE+E2H not just E2H.
  * arm_sctlr was incomplete; now uses arm_mmu_idx to avoid
duplication of tests.
  * Update aa64_zva_access and ctr_el0_access for EL2.

Changes since v1:
  * Merge feedback from AJB.
  * Split out 7 renaming patches from "Reorganize ARMMMUIdx".
  * Alex's MIDR patch keeps the nested KVM from spitting warnings.

I have tested 

  qemu-system-aarch64 -accel kvm -cpu host -M virt,gic-version=host \
-m 512 -bios /usr/share/edk2/aarch64/QEMU_EFI.fd -nographic

with fedora 30 system qemu, itself booted with

  ../bld/aarch64-softmmu/qemu-system-aarch64 \
-cpu max -M virt,gic-version=3,virtualization=on \
-drive if=virtio,file=./f30.q,format=qcow2 \
-m 4G -nographic

It took a while, but eventually the nested bios arrived at the
pxe boot sequence.  Thankfully (?), the f30 shipped bios has
debug enabled, so there's some sense of progress in the meantime.


r~


Alex Bennée (2):
  target/arm: check TGE and E2H flags for EL0 pauth traps
  target/arm: generate a custom MIDR for -cpu max

Richard Henderson (32):
  cputlb: Add tlb_set_asid_for_mmuidx
  cputlb: Add tlb_flush_asid_by_mmuidx and friends
  target/arm: Install ASIDs for long-form from EL1
  target/arm: Install ASIDs for short-form from EL1
  target/arm: Install ASIDs for EL2
  target/arm: Define isar_feature_aa64_vh
  target/arm: Enable HCR_E2H for VHE
  target/arm: Add CONTEXTIDR_EL2
  target/arm: Add TTBR1_EL2
  target/arm: Update CNTVCT_EL0 for VHE
  target/arm: Add the hypervisor virtual counter
  target/arm: Add VHE system register redirection and aliasing
  target/arm: Split out vae1_tlbmask, vmalle1_tlbmask
  target/arm: Simplify tlb_force_broadcast alternatives
  target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*
  target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2
  target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*
  target/arm: Rename ARMMMUIdx_S1SE* to ARMMMUIdx_SE*
  target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3
  target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2
  target/arm: Reorganize ARMMMUIdx
  target/arm: Add regime_has_2_ranges
  target/arm: Update arm_mmu_idx for VHE
  target/arm: Update arm_sctlr for VHE
  target/arm: Update aa64_zva_access for EL2
  target/arm: Update ctr_el0_access for EL2
  target/arm: Install asids for E2&0 translation regime
  target/arm: Flush tlbs for E2&0 translation regime
  target/arm: Update arm_phys_excp_target_el for TGE
  target/arm: Update regime_is_user for EL2&0
  target/arm: Update {fp,sve}_exception_el for VHE
  target/arm: Enable ARMv8.1-VHE in -cpu max

 include/exec/cpu-all.h |   11 +
 include/exec/cpu-defs.h|2 +
 include/exec/exec-all.h|   35 ++
 include/qom/cpu.h  |2 +
 target/arm/cpu-qom.h   |1 +
 target/arm/cpu.h   |  261 -
 target/arm/internals.h |   62 ++-
 target/arm/translate.h |2 +-
 accel/tcg/cputlb.c |   81 +++
 target/arm/cpu.c   |2 +
 target/arm/cpu64.c |   20 +
 target/arm/debug_helper.c  |   50 +-
 target/arm/helper-a64.c|2 +-
 target/arm/helper.c| 1042 +---
 target/arm/m_helper.c  |6 +-
 target/arm/pauth_helper.c  |   13 +-
 target/arm/translate-a64.c |   13 +-
 target/arm/translate.c |   17 +-
 18 files changed, 1134 insertions(+), 488 deletions(-)

-- 
2.17.1




[Qemu-devel] [PATCH v2] ivshmem-server: Terminate also on SIGINT

2019-08-03 Thread Jan Kiszka
From: Jan Kiszka 

Allows shutting down a foreground session via Ctrl-C.

Signed-off-by: Jan Kiszka 
---

Changes in v2:
 - adjust error message

 contrib/ivshmem-server/main.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/contrib/ivshmem-server/main.c b/contrib/ivshmem-server/main.c
index 197c79c57e..e4cd35f74c 100644
--- a/contrib/ivshmem-server/main.c
+++ b/contrib/ivshmem-server/main.c
@@ -223,8 +223,9 @@ main(int argc, char *argv[])
 sa_quit.sa_handler = ivshmem_server_quit_cb;
 sa_quit.sa_flags = 0;
 if (sigemptyset(&sa_quit.sa_mask) == -1 ||
-sigaction(SIGTERM, &sa_quit, 0) == -1) {
-perror("failed to add SIGTERM handler; sigaction");
+sigaction(SIGTERM, &sa_quit, 0) == -1 ||
+sigaction(SIGINT, &sa_quit, 0) == -1) {
+perror("failed to add signal handler; sigaction");
 goto err;
 }

--
2.16.4



Re: [Qemu-devel] qemu-ga -- virtio driver version reporting

2019-08-03 Thread Marc-André Lureau
Hi

On Fri, Aug 2, 2019 at 5:12 PM Tomáš Golembiovský  wrote:
>
> Hi,
>
> I would like to add version reporting of Windows virtio drivers to qemu-ga.
> Obviously this is specific to Windows, as for POSIX systems it correlates
> with the kernel version. I would appreciate your ideas on a few topics.
>
> Does it make sense to add this information as a new (optional) field in the
> result of 'guest-get-osinfo'? Or would it be better to add a whole new
> command? I expect

If the information is cheap to retrieve, I think it is fine as part of
get-osinfo.

> the result to look something like this:
>
> "component-versions": [
> {
> "name": "VirtIO Balloon Driver",
> "version": "03/10/2019,62.77.104.16900"
> },
> {
> "name": "QEMU PVPanic Device",
> "version": "06/11/2018,62.76.104.15400"
> },
> ...
> ]

I am not a Windows expert, but I can imagine drivers have a more
uniquely identifiable ID than a human string.

>
> Alternatively we could report all available versions of the specific
> driver instead of just the latest. Note that this does not say much
> about which version is in use or if a device is available in the
> system.

What's the goal of this version reporting, btw? To audit the VM? Isn't
there another mechanism to keep Windows systems up to date and alert
management layers? Perhaps the Windows business/enterprise
solutions for that are too expensive though, and we want something more
specific to qemu VMs.

>
>
> I have checked the available drivers and the names vary quite a bit. I guess we'll
> need to list and match the complete name and not just some substring (like
> "VirtIO"). See the following list:
>
> QEMU FWCfg Device
> QEMU PVPanic Device
> QEMU Serial PCI Card
> Red Hat Q35 SM Bus driver
> Red Hat QXL controller
> Red Hat VirtIO Ethernet Adapter
> Red Hat VirtIO SCSI controller
> Red Hat VirtIO SCSI controller
> Red Hat VirtIO SCSI pass-through controller
> VirtIO Balloon Driver
> VirtIO Input Driver
> VirtIO RNG Device
> VirtIO Serial Driver
> VirtIO-Serial Driver
>
> Is it OK to hardcode the list in qemu-ga source? Is there already any support
> for dealing with regexes or tries in qemu source tree?

glib has GRegex.
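
A minimal sketch of what that route could look like, with a purely
illustrative pattern (the names listed above suggest full-name matching may
still be needed in the end):

#include <glib.h>

/* Sketch only: the pattern is illustrative, not a proposed final list. */
static gboolean looks_like_virtio_driver(const gchar *name)
{
    GRegex *re = g_regex_new("^(Red Hat |QEMU |VirtIO[- ])", 0, 0, NULL);
    gboolean ret = re && g_regex_match(re, name, 0, NULL);

    if (re) {
        g_regex_unref(re);
    }
    return ret;
}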

>
> Any other ideas, concerns?
>
> Tomas
>
> --
> Tomáš Golembiovský 



Re: [Qemu-devel] [PATCH] ivshmem-server: Terminate also on SIGINT

2019-08-03 Thread Claudio Fontana
On 8/3/19 1:48 PM, Jan Kiszka wrote:
> From: Jan Kiszka 
> 
> Allows shutting down a foreground session via Ctrl-C.
> 
> Signed-off-by: Jan Kiszka 
> ---
>  contrib/ivshmem-server/main.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/contrib/ivshmem-server/main.c b/contrib/ivshmem-server/main.c
> index 197c79c57e..8a81cdb04c 100644
> --- a/contrib/ivshmem-server/main.c
> +++ b/contrib/ivshmem-server/main.c
> @@ -223,7 +223,8 @@ main(int argc, char *argv[])
>  sa_quit.sa_handler = ivshmem_server_quit_cb;
>  sa_quit.sa_flags = 0;
>  if (sigemptyset(&sa_quit.sa_mask) == -1 ||
> -sigaction(SIGTERM, &sa_quit, 0) == -1) {
> +sigaction(SIGTERM, &sa_quit, 0) == -1 ||
> +sigaction(SIGINT, &sa_quit, 0) == -1) {
>  perror("failed to add SIGTERM handler; sigaction");

I guess the error string should not mention SIGTERM specifically anymore:

perror("failed to add signal handler; sigaction");

>  goto err;
>  }
> --
> 2.16.4
> 
> 

Ciao,

Claudio



[Qemu-devel] [PATCH] ivshmem-server: Terminate also on SIGINT

2019-08-03 Thread Jan Kiszka
From: Jan Kiszka 

Allows shutting down a foreground session via Ctrl-C.

Signed-off-by: Jan Kiszka 
---
 contrib/ivshmem-server/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/contrib/ivshmem-server/main.c b/contrib/ivshmem-server/main.c
index 197c79c57e..8a81cdb04c 100644
--- a/contrib/ivshmem-server/main.c
+++ b/contrib/ivshmem-server/main.c
@@ -223,7 +223,8 @@ main(int argc, char *argv[])
 sa_quit.sa_handler = ivshmem_server_quit_cb;
 sa_quit.sa_flags = 0;
 if (sigemptyset(&sa_quit.sa_mask) == -1 ||
-sigaction(SIGTERM, &sa_quit, 0) == -1) {
+sigaction(SIGTERM, &sa_quit, 0) == -1 ||
+sigaction(SIGINT, &sa_quit, 0) == -1) {
 perror("failed to add SIGTERM handler; sigaction");
 goto err;
 }
--
2.16.4



[Qemu-devel] [PATCH] ivshmem-server: Clean up shmem on shutdown

2019-08-03 Thread Jan Kiszka
From: Jan Kiszka 

So far, the server leaves the posix shared memory object behind when
terminating, requiring the user to explicitly remove it in order to
start a new instance.

Signed-off-by: Jan Kiszka 
---
 contrib/ivshmem-server/ivshmem-server.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/ivshmem-server/ivshmem-server.c b/contrib/ivshmem-server/ivshmem-server.c
index 77f97b209c..9b9dbc87ec 100644
--- a/contrib/ivshmem-server/ivshmem-server.c
+++ b/contrib/ivshmem-server/ivshmem-server.c
@@ -370,6 +370,7 @@ ivshmem_server_close(IvshmemServer *server)
 }

 unlink(server->unix_sock_path);
+shm_unlink(server->shm_path);
 close(server->sock_fd);
 close(server->shm_fd);
 server->sock_fd = -1;
--
2.16.4



Re: [Qemu-devel] [PULL 0/1] EDK2 firmware patches

2019-08-03 Thread Peter Maydell
On Sat, 3 Aug 2019 at 09:26, Philippe Mathieu-Daudé  wrote:
>
> The following changes since commit 9bcf2dfa163f67b0fec6ee0fe88ad5dc5d69dc59:
>
>   Merge remote-tracking branch 
> 'remotes/elmarco/tags/slirp-CVE-2019-14378-pull-request' into staging 
> (2019-08-02 13:06:03 +0100)
>
> are available in the Git repository at:
>
>   https://gitlab.com/philmd/qemu.git tags/edk2-next-20190803
>
> for you to fetch changes up to 177cd674d6203d3c1a98e170ea56c5a904ac4ce8:
>
>   Makefile: remove DESTDIR from firmware file content (2019-08-03 09:52:32 
> +0200)
>
> 
> A harmless build-sys patch that fixes a regression affecting Linux
> distributions packaging QEMU.
>
> 
>
> Olaf Hering (1):
>   Makefile: remove DESTDIR from firmware file content

Is this pullreq intended for 4.1 ?

thanks
-- PMM



[Qemu-devel] [PULL 1/1] Makefile: remove DESTDIR from firmware file content

2019-08-03 Thread Philippe Mathieu-Daudé
From: Olaf Hering 

The resulting firmware files should only contain the runtime path.
Fixes commit 26ce90fde5c ("Makefile: install the edk2 firmware images
and their descriptors")

Signed-off-by: Olaf Hering 
Reviewed-by: Daniel P. Berrangé 
Reviewed-by: Philippe Mathieu-Daudé 
Reviewed-by: Laszlo Ersek 
Message-Id: <20190530192812.17637-1-o...@aepfle.de>
Fixes: https://bugs.launchpad.net/qemu/+bug/1838703
Signed-off-by: Philippe Mathieu-Daudé 
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index cfab1561b9..85862fb81a 100644
--- a/Makefile
+++ b/Makefile
@@ -881,7 +881,7 @@ ifneq ($(DESCS),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/firmware"
set -e; tmpf=$$(mktemp); trap 'rm -f -- "$$tmpf"' EXIT; \
for x in $(DESCS); do \
-   sed -e 's,@DATADIR@,$(DESTDIR)$(qemu_datadir),' \
+   sed -e 's,@DATADIR@,$(qemu_datadir),' \
"$(SRC_PATH)/pc-bios/descriptors/$$x" > "$$tmpf"; \
$(INSTALL_DATA) "$$tmpf" \
"$(DESTDIR)$(qemu_datadir)/firmware/$$x"; \
-- 
2.20.1




[Qemu-devel] [PULL 0/1] EDK2 firmware patches

2019-08-03 Thread Philippe Mathieu-Daudé
The following changes since commit 9bcf2dfa163f67b0fec6ee0fe88ad5dc5d69dc59:

  Merge remote-tracking branch 
'remotes/elmarco/tags/slirp-CVE-2019-14378-pull-request' into staging 
(2019-08-02 13:06:03 +0100)

are available in the Git repository at:

  https://gitlab.com/philmd/qemu.git tags/edk2-next-20190803

for you to fetch changes up to 177cd674d6203d3c1a98e170ea56c5a904ac4ce8:

  Makefile: remove DESTDIR from firmware file content (2019-08-03 09:52:32 
+0200)


A harmless build-sys patch that fixes a regression affecting Linux
distributions packaging QEMU.



Olaf Hering (1):
  Makefile: remove DESTDIR from firmware file content

 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.20.1




[Qemu-devel] [FOR 4.1 PATCH] riscv: roms: Fix make rules for building sifive_u bios

2019-08-03 Thread Bin Meng
Currently the make rules wrongly use the qemu/virt opensbi image
for the sifive_u machine. Correct it.

Signed-off-by: Bin Meng 

---

 roms/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/roms/Makefile b/roms/Makefile
index dc70fb5..775c963 100644
--- a/roms/Makefile
+++ b/roms/Makefile
@@ -183,7 +183,7 @@ opensbi64-sifive_u:
$(MAKE) -C opensbi \
CROSS_COMPILE=$(riscv64_cross_prefix) \
PLATFORM="qemu/sifive_u"
-   cp opensbi/build/platform/qemu/virt/firmware/fw_jump.bin ../pc-bios/opensbi-riscv64-sifive_u-fw_jump.bin
+   cp opensbi/build/platform/qemu/sifive_u/firmware/fw_jump.bin ../pc-bios/opensbi-riscv64-sifive_u-fw_jump.bin
 
 clean:
rm -rf seabios/.config seabios/out seabios/builds
-- 
2.7.4