[Xen-devel] [PATCH 3/4] x86emul: correct handling of FPU insns faulting on memory write
When an FPU instruction with a memory destination fails during the memory write, it should not affect FPU register state. Due to the way we emulate FPU (and SIMD) instructions, we can only guarantee this by
- backing out changes to the FPU register state in such a case, or
- doing a descriptor read and/or page walk up front, perhaps with the stubs accessing the actual memory location then.
The latter would require a significant change in how the emulator does its guest memory accesses, so for now the former variant is being chosen.

Signed-off-by: Jan Beulich
---
Note that the state save overhead (unless state hadn't been loaded at all before, which should only be possible if a guest is fiddling with the instruction stream under emulation) is taken for every FPU insn hitting the emulator. We could reduce this to just the ones writing to memory, but that would involve quite a few further changes and resulting code where even more code paths need to match up with one another.

--- a/tools/fuzz/x86_instruction_emulator/x86-insn-emulator-fuzzer.c
+++ b/tools/fuzz/x86_instruction_emulator/x86-insn-emulator-fuzzer.c
@@ -433,6 +433,7 @@ static struct x86_emulate_ops fuzz_emulo
     SET(wbinvd),
     SET(invlpg),
     .get_fpu = emul_test_get_fpu,
+    .put_fpu = emul_test_put_fpu,
     .cpuid   = emul_test_cpuid,
 };
 #undef SET
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -293,6 +293,7 @@ static struct x86_emulate_ops emulops =
     .read_cr   = emul_test_read_cr,
     .read_msr  = read_msr,
     .get_fpu   = emul_test_get_fpu,
+    .put_fpu   = emul_test_put_fpu,
 };

 int main(int argc, char **argv)
--- a/tools/tests/x86_emulator/x86_emulate.c
+++ b/tools/tests/x86_emulator/x86_emulate.c
@@ -138,4 +138,11 @@ int emul_test_get_fpu(
     return X86EMUL_OKAY;
 }

+void emul_test_put_fpu(
+    struct x86_emulate_ctxt *ctxt,
+    enum x86_emulate_fpu_type backout)
+{
+    /* TBD */
+}
+
 #include "x86_emulate/x86_emulate.c"
--- a/tools/tests/x86_emulator/x86_emulate.h
+++ b/tools/tests/x86_emulator/x86_emulate.h
@@ -178,3 +178,7 @@ int emul_test_get_fpu(
     void *exception_callback_arg,
     enum x86_emulate_fpu_type type,
     struct x86_emulate_ctxt *ctxt);
+
+void emul_test_put_fpu(
+    struct x86_emulate_ctxt *ctxt,
+    enum x86_emulate_fpu_type backout);
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1619,6 +1620,35 @@ static int hvmemul_get_fpu(
     if ( !curr->fpu_dirtied )
         hvm_funcs.fpu_dirty_intercept();
+    else if ( type == X86EMUL_FPU_fpu )
+    {
+        const typeof(curr->arch.xsave_area->fpu_sse) *fpu_ctxt =
+            curr->arch.fpu_ctxt;
+
+        /*
+         * Latch current register state so that we can back out changes
+         * if needed (namely when a memory write fails after register state
+         * has already been updated).
+         * NB: We don't really need the "enable" part of the called function
+         * (->fpu_dirtied set implies CR0.TS clear), but the additional
+         * overhead should be low enough to not warrant introduction of yet
+         * another slightly different function. However, we need to undo the
+         * ->fpu_dirtied clearing the function does as well as the possible
+         * masking of all exceptions by FNSTENV.)
+         */
+        save_fpu_enable();
+        curr->fpu_dirtied = true;
+        if ( (fpu_ctxt->fcw & 0x3f) != 0x3f )
+        {
+            uint16_t fcw;
+
+            asm ( "fnstcw %0" : "=m" (fcw) );
+            if ( (fcw & 0x3f) == 0x3f )
+                asm ( "fldcw %0" :: "m" (fpu_ctxt->fcw) );
+            else
+                ASSERT(fcw == fpu_ctxt->fcw);
+        }
+    }

     curr->arch.hvm_vcpu.fpu_exception_callback = exception_callback;
     curr->arch.hvm_vcpu.fpu_exception_callback_arg = exception_callback_arg;
@@ -1627,10 +1657,24 @@ static int hvmemul_get_fpu(
 }

 static void hvmemul_put_fpu(
-    struct x86_emulate_ctxt *ctxt)
+    struct x86_emulate_ctxt *ctxt,
+    enum x86_emulate_fpu_type backout)
 {
     struct vcpu *curr = current;
+
     curr->arch.hvm_vcpu.fpu_exception_callback = NULL;
+
+    if ( backout == X86EMUL_FPU_fpu )
+    {
+        /*
+         * To back out changes to the register file simply adjust state such
+         * that upon next FPU insn use by the guest we'll reload the state
+         * saved (or freshly loaded) by hvmemul_get_fpu().
+         */
+        curr->fpu_dirtied = false;
+        stts();
+        hvm_funcs.fpu_leave(curr);
+    }
 }

 static int hvmemul_invlpg(
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2268,6 +2268,7 @@ static struct hvm_function_table __initd
     .update_guest_cr     = svm_update_guest_cr,
     .update_guest_efer   = svm_update_guest_efer,
     .update_guest_vendor =
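The latch-and-back-out approach above can be illustrated with a stand-alone model. This is not Xen code — all names here are hypothetical — but it shows the contract the patch establishes: state is latched when the FPU is acquired, and restored in put_fpu() if the emulated memory write failed, so a faulting store leaves the register file untouched.

```c
#include <stdbool.h>

/* Hypothetical miniature model of hvmemul_get_fpu()/hvmemul_put_fpu(). */
struct toy_fpu {
    int st0;                      /* stand-in for the x87 register file */
};

static struct toy_fpu live, latched;

static void toy_get_fpu(void)
{
    latched = live;               /* latch state (cf. save_fpu_enable()) */
}

static void toy_put_fpu(bool write_ok)
{
    if ( !write_ok )
        live = latched;           /* back out the register-state changes */
}

/* Emulate one insn that updates a register and then stores to memory. */
static int toy_emulate(bool write_ok)
{
    toy_get_fpu();
    live.st0 += 1;                /* register side effect of the insn */
    toy_put_fpu(write_ok);        /* backed out iff the write faulted */
    return live.st0;
}
```

A failing write thus leaves the visible register state exactly as it was before the instruction, which is the architectural behaviour the patch is after.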
[Xen-devel] [PATCH 2/4] x86emul: centralize put_fpu() invocations
..., splitting parts of it into check_*() macros. This is in preparation for making ->put_fpu() do further adjustments to register state. (Some of the check_xmm() invocations could be avoided, as in some of the cases no insns handled there can actually raise #XM, but I think we're better off keeping them to avoid later additions of further insn patterns rendering the lack of the check a bug.)

Signed-off-by: Jan Beulich
---
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -937,6 +937,7 @@ do {
 struct fpu_insn_ctxt {
     uint8_t insn_bytes;
+    uint8_t type;
     int8_t exn_raised;
 };
@@ -956,8 +957,6 @@ static int _get_fpu(
 {
     int rc;

-    fic->exn_raised = -1;
-
     fail_if(!ops->get_fpu);
     rc = ops->get_fpu(fpu_handle_exception, fic, type, ctxt);
@@ -965,6 +964,8 @@ static int _get_fpu(
     {
         unsigned long cr0;

+        fic->type = type;
+
         fail_if(!ops->read_cr);
         if ( type >= X86EMUL_FPU_xmm )
         {
@@ -1006,22 +1007,31 @@ do {
     rc = _get_fpu(_type, _fic, ctxt, ops); \
     if ( rc ) goto done;                   \
 } while (0)
-#define _put_fpu()                         \
+
+#define check_fpu(_fic)                    \
 do {                                       \
-    if ( ops->put_fpu != NULL )            \
-        (ops->put_fpu)(ctxt);              \
+    generate_exception_if((_fic)->exn_raised >= 0, \
+                          (_fic)->exn_raised);     \
 } while (0)
-#define put_fpu(_fic)                      \
+
+#define check_xmm(_fic)                    \
 do {                                       \
-    _put_fpu();                            \
     if ( (_fic)->exn_raised == EXC_XM && ops->read_cr && \
          ops->read_cr(4, , ctxt) == X86EMUL_OKAY &&      \
          !(cr4 & X86_CR4_OSXMMEXCPT) )                   \
         (_fic)->exn_raised = EXC_UD;                     \
-    generate_exception_if((_fic)->exn_raised >= 0,       \
-                          (_fic)->exn_raised);           \
+    check_fpu(_fic);                                     \
 } while (0)
+
+static void put_fpu(
+    struct fpu_insn_ctxt *fic,
+    struct x86_emulate_ctxt *ctxt,
+    const struct x86_emulate_ops *ops)
+{
+    if ( fic->type != X86EMUL_FPU_none && ops->put_fpu )
+        ops->put_fpu(ctxt);
+}
+
 static inline bool fpu_check_write(void)
 {
     uint16_t fsw;
@@ -3015,7 +3025,7 @@ x86_emulate(
     struct operand dst = { .reg = PTR_POISON };
     unsigned long cr4;
     enum x86_swint_type swint_type;
-    struct fpu_insn_ctxt fic;
+    struct fpu_insn_ctxt fic = { .type = X86EMUL_FPU_none, .exn_raised = -1 };
     struct x86_emulate_stub stub = {};
     DECLARE_ALIGNED(mmval_t, mmval);
@@ -3708,7 +3718,7 @@ x86_emulate(
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_wait, );
         asm volatile ( "fwait" ::: "memory" );
-        put_fpu();
+        check_fpu();
         break;

     case 0x9c: /* pushf */
@@ -4153,7 +4163,7 @@ x86_emulate(
             break;
         }
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xd9: /* FPU 0xd9 */
@@ -4242,7 +4252,7 @@ x86_emulate(
             if ( dst.type == OP_MEM && dst.bytes == 4 && !fpu_check_write() )
                 dst.type = OP_NONE;
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xda: /* FPU 0xda */
@@ -4293,7 +4303,7 @@ x86_emulate(
             break;
         }
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xdb: /* FPU 0xdb */
@@ -4365,7 +4375,7 @@ x86_emulate(
             if ( dst.type == OP_MEM && !fpu_check_write() )
                 dst.type = OP_NONE;
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xdc: /* FPU 0xdc */
@@ -4416,7 +4426,7 @@ x86_emulate(
             break;
         }
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xdd: /* FPU 0xdd */
@@ -4475,7 +4485,7 @@ x86_emulate(
             if ( dst.type == OP_MEM && dst.bytes == 8 && !fpu_check_write() )
                 dst.type = OP_NONE;
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xde: /* FPU 0xde */
@@ -4523,7 +4533,7 @@ x86_emulate(
             break;
         }
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xdf: /* FPU 0xdf */
@@ -4605,7 +4615,7 @@ x86_emulate(
             if ( dst.type == OP_MEM && !fpu_check_write() )
                 dst.type = OP_NONE;
         }
-        put_fpu();
+        check_fpu();
         break;

     case 0xe0 ... 0xe2: /*
[Xen-devel] [PATCH 1/4] x86emul: fold exit paths
Move "cannot_emulate" and make it go through the common (error) exit path.

Signed-off-by: Jan Beulich
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7762,7 +7762,9 @@ x86_emulate(
     }

     default:
-        goto cannot_emulate;
+    cannot_emulate:
+        rc = X86EMUL_UNHANDLEABLE;
+        goto done;
     }

     if ( state->simd_size )
@@ -7906,11 +7908,6 @@ x86_emulate(
     _put_fpu();
     put_stub(stub);
     return rc;
-
- cannot_emulate:
-    _put_fpu();
-    put_stub(stub);
-    return X86EMUL_UNHANDLEABLE;
 #undef state
 }

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
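The shape the patch moves to — every failure path setting `rc` and jumping to a single cleanup label — can be sketched generically. This is a hypothetical stand-alone example of the idiom, not the emulator itself; the point is that cleanup (the `_put_fpu()`/`put_stub()` equivalent) now runs exactly once on every path.

```c
/* Hypothetical sketch of the single-exit-path idiom: instead of duplicating
 * cleanup under a separate "cannot_emulate" label, failure cases set rc and
 * share the one "done" label with the success path. */
enum { EMUL_OKAY = 0, EMUL_UNHANDLEABLE = 1 };

static int cleanups;                /* counts how often cleanup ran */

static int emulate_one(int opcode)
{
    int rc = EMUL_OKAY;

    switch ( opcode )
    {
    case 0x90:                      /* a recognised insn (nop here) */
        break;

    default:                        /* unrecognised: share the common exit */
        rc = EMUL_UNHANDLEABLE;
        goto done;
    }

    /* ... main emulation work would go here ... */

 done:
    cleanups++;                     /* cleanup runs once on every path */
    return rc;
}
```

With the duplicated exit removed, later changes to the cleanup sequence only need to be made in one place, which is exactly what the follow-up patches in this series rely on.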
Re: [Xen-devel] Xen 4.6.5 released
>>> On 13.03.17 at 11:29, wrote:
> On 13/03/17 09:24, Jan Beulich wrote:
>> On 10.03.17 at 18:22, wrote:
>>> On 08.03.2017 13:54, Jan Beulich wrote:
>>>> All,
>>>>
>>>> I am pleased to announce the release of Xen 4.6.5. This is available
>>>> immediately from its git repository
>>>> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.6
>>>> (tag RELEASE-4.6.5) or from the XenProject download page
>>>> http://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-465.html
>>>> (where a list of changes can also be found).
>>>>
>>>> We recommend all users of the 4.6 stable series to update to this
>>>> latest point release.
>>>
>>> This does not seem to compile for me (x86_64) without the attached
>>> (admittedly brutish) change.
>>
>> I guess it's the emulator test code which has a problem here (I
>> did notice this myself), but that doesn't get built by default (and
>> I see no reason why anyone would want to build it when putting
>> together packages for people to consume - this is purely a dev
>> tool). Please clarify.
>
> These tools are all built automatically.

If so, how come osstest didn't notice the issue (long ago)?

Jan
Re: [Xen-devel] [PATCH v1 1/3] x86/vvmx: add mov-ss blocking check to vmentry
On 13/03/17 10:51, Sergey Dyasli wrote:
> Intel SDM states that if there is a current VMCS and there is MOV-SS
> blocking, VMFailValid occurs and control passes to the next instruction.
>
> Implement such behaviour for nested vmlaunch and vmresume.
>
> Signed-off-by: Sergey Dyasli

The content here looks correct, so Reviewed-by: Andrew Cooper

I am wondering however whether we can start introducing transparent
unions and bitfields for the controls, like I did with ept_qual_t.

~Andrew
[Xen-devel] [PATCH 15/18] xen/arm: Introduce a helper to synchronize SError
We may have to isolate the SError between the context switch of 2 vCPUs, or may have to prevent a hypervisor SError from slipping to the guest. So we need a helper to synchronize SErrors before context switching or returning to the guest.

This function will be used by later patches in this series; we use "#if 0" to disable it temporarily to avoid a compiler warning.

Signed-off-by: Wei Chen
---
 xen/arch/arm/traps.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44a0281..ee7865b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2899,6 +2899,17 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
     }
 }

+#if 0
+static void synchronize_serror(void)
+{
+    /* Synchronize against in-flight ld/st. */
+    dsb(sy);
+
+    /* A single instruction exception window */
+    isb();
+}
+#endif
+
 asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
 {
     enter_hypervisor_head(regs);
-- 
2.7.4
[Xen-devel] [PATCH 13/18] xen/arm: Replace do_trap_guest_serror with new helpers
We have introduced two helpers to handle guest/hyp SErrors: do_trap_guest_serror and do_trap_hyp_serror. These handlers can take over the role of do_trap_guest_error and reduce the assembly code at the same time. So we use these two helpers to replace it, and drop it now.

Signed-off-by: Wei Chen
---
 xen/arch/arm/arm32/traps.c      |  5 +
 xen/arch/arm/arm64/entry.S      | 36 +++-
 xen/arch/arm/traps.c            | 15 ---
 xen/include/asm-arm/processor.h |  2 --
 4 files changed, 4 insertions(+), 54 deletions(-)

diff --git a/xen/arch/arm/arm32/traps.c b/xen/arch/arm/arm32/traps.c
index 4176f0e..5bc5f64 100644
--- a/xen/arch/arm/arm32/traps.c
+++ b/xen/arch/arm/arm32/traps.c
@@ -62,10 +62,7 @@ asmlinkage void do_trap_prefetch_abort(struct cpu_user_regs *regs)

 asmlinkage void do_trap_data_abort(struct cpu_user_regs *regs)
 {
-    if ( VABORT_GEN_BY_GUEST(regs) )
-        do_trap_guest_error(regs);
-    else
-        do_unexpected_trap("Data Abort", regs);
+    do_trap_hyp_serror(regs);
 }

 /*
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 113e1c3..8d5a890 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -178,40 +178,10 @@ hyp_error_invalid:
         invalid BAD_ERROR

 hyp_error:
-        /*
-         * Only two possibilities:
-         * 1) Either we come from the exit path, having just unmasked
-         *    PSTATE.A: change the return code to an EL2 fault, and
-         *    carry on, as we're already in a sane state to handle it.
-         * 2) Or we come from anywhere else, and that's a bug: we panic.
-         */
         entry   hyp=1
         msr     daifclr, #2
-
-        /*
-         * The ELR_EL2 may be modified by an interrupt, so we have to use the
-         * saved value in cpu_user_regs to check whether we come from 1) or
-         * not.
-         */
-        ldr     x0, [sp, #UREGS_PC]
-        adr     x1, abort_guest_exit_start
-        cmp     x0, x1
-        adr     x1, abort_guest_exit_end
-        ccmp    x0, x1, #4, ne
         mov     x0, sp
-        mov     x1, #BAD_ERROR
-
-        /*
-         * Not equal, the exception come from 2). It's a bug, we have to
-         * panic the hypervisor.
-         */
-        b.ne    do_bad_mode
-
-        /*
-         * Otherwise, the exception come from 1). It happened because of
-         * the guest. Crash this guest.
-         */
-        bl      do_trap_guest_error
+        bl      do_trap_hyp_serror
         exit    hyp=1

 /* Traps taken in Current EL with SP_ELx */
@@ -267,7 +237,7 @@ guest_error:
         entry   hyp=0, compat=0
         msr     daifclr, #2
         mov     x0, sp
-        bl      do_trap_guest_error
+        bl      do_trap_guest_serror
         exit    hyp=0, compat=0

 guest_sync_compat:
@@ -309,7 +279,7 @@ guest_error_compat:
         entry   hyp=0, compat=1
         msr     daifclr, #2
         mov     x0, sp
-        bl      do_trap_guest_error
+        bl      do_trap_guest_serror
         exit    hyp=0, compat=1

 ENTRY(return_to_new_vcpu32)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 48cfc8e..44a0281 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2899,21 +2899,6 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
     }
 }

-asmlinkage void do_trap_guest_error(struct cpu_user_regs *regs)
-{
-    enter_hypervisor_head(regs);
-
-    /*
-     * Currently, to ensure hypervisor safety, when we received a
-     * guest-generated vSerror/vAbort, we just crash the guest to protect
-     * the hypervisor. In future we can better handle this by injecting
-     * a vSerror/vAbort to the guest.
-     */
-    gdprintk(XENLOG_WARNING, "Guest(Dom-%u) will be crashed by vSError\n",
-             current->domain->domain_id);
-    domain_crash_synchronous();
-}
-
 asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
 {
     enter_hypervisor_head(regs);
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 885dbca..afad78c 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -707,8 +707,6 @@ void vcpu_regs_user_to_hyp(struct vcpu *vcpu,
 int call_smc(register_t function_id, register_t arg0, register_t arg1,
              register_t arg2);

-void do_trap_guest_error(struct cpu_user_regs *regs);
-
 void do_trap_hyp_serror(struct cpu_user_regs *regs);

 void do_trap_guest_serror(struct cpu_user_regs *regs);
-- 
2.7.4

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
[Xen-devel] [PATCH 18/18] xen/arm: Handle guest external abort as guest SError
The guest generated external data/instruction aborts can be treated as guest SErrors. We already have a handler to handle the SErrors, so we can reuse this handler to handle guest external aborts. Signed-off-by: Wei Chen--- xen/arch/arm/traps.c | 14 ++ 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 3b84e80..24511e5 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2558,12 +2558,12 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs, /* * If this bit has been set, it means that this instruction abort is caused - * by a guest external abort. Currently we crash the guest to protect the - * hypervisor. In future one can better handle this by injecting a virtual - * abort to the guest. + * by a guest external abort. We can handle this instruction abort as guest + * SError. */ if ( hsr.iabt.eat ) -domain_crash_synchronous(); +return __do_trap_serror(regs, true); + if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) ) gpa = get_faulting_ipa(gva); @@ -2661,12 +2661,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs, /* * If this bit has been set, it means that this data abort is caused - * by a guest external abort. Currently we crash the guest to protect the - * hypervisor. In future one can better handle this by injecting a virtual - * abort to the guest. + * by a guest external abort. We treat this data abort as guest SError. */ if ( dabt.eat ) -domain_crash_synchronous(); +return __do_trap_serror(regs, true); info.dabt = dabt; #ifdef CONFIG_ARM_32 -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH 17/18] xen/arm: Prevent slipping hypervisor SError to guest
If there is a pending SError while we're returning from a trap and the SError handling option is "DIVERSE", we have to prevent this hypervisor SError from slipping to the guest. So we use dsb/isb to guarantee that a pending hypervisor SError is caught in the hypervisor before returning to the guest.

Signed-off-by: Wei Chen
---
 xen/arch/arm/traps.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8c8389..3b84e80 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2953,6 +2953,16 @@ asmlinkage void leave_hypervisor_tail(void)
         local_irq_disable();
         if (!softirq_pending(smp_processor_id())) {
             gic_inject();
+
+            /*
+             * If the SErrors handle option is "DIVERSE", we have to prevent
+             * slipping the hypervisor SError to guest. So before returning
+             * from trap, we use the synchronize_serror to guarantee that the
+             * pending SError would be caught in hypervisor.
+             */
+            if ( serrors_op == SERRORS_DIVERSE )
+                synchronize_serror();
+
             WRITE_SYSREG(current->arch.hcr_el2, HCR_EL2);
             return;
         }
-- 
2.7.4
[Xen-devel] [PATCH 12/18] xen/arm: Introduce new helpers to handle guest/hyp SErrors
Currently, ARM32 and ARM64 have different SError exception handlers. These handlers include lots of code to check the SError handling options and to distinguish guest-generated SErrors from hypervisor SErrors.

The new helpers, do_trap_guest_serror and do_trap_hyp_serror, are wrappers of __do_trap_serror with constant guest/hyp parameters. __do_trap_serror moves the option checking code and SError checking code from assembly to C source. This makes the code more readable and avoids placing checking code in too many places.

These two helpers only handle the following 3 types of SErrors:
1) Guest-generated SErrors that had been delivered in EL1 and then been forwarded to EL2.
2) Guest-generated SErrors that hadn't been delivered in EL1 before trapping to EL2. Such an SError would be caught in EL2 as soon as we unmask the PSTATE.A bit.
3) Hypervisor-generated native SErrors, which would be a bug.

In the new helpers we use the function inject_vabt_exception, which was disabled by "#if 0" before. Now we can remove the "#if 0" to make this function available.

Signed-off-by: Wei Chen
---
 xen/arch/arm/traps.c            | 69 +++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/processor.h |  4 +++
 2 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 053b7fc..48cfc8e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -646,7 +646,6 @@ static void inject_dabt_exception(struct cpu_user_regs *regs,
 #endif
 }

-#if 0
 /* Inject a virtual Abort/SError into the guest. */
 static void inject_vabt_exception(struct cpu_user_regs *regs)
 {
@@ -676,7 +675,59 @@ static void inject_vabt_exception(struct cpu_user_regs *regs)
     current->arch.hcr_el2 |= HCR_VA;
 }
-#endif
+
+/*
+ * SError exception handler. We only handle the following 3 types of SErrors:
+ * 1) Guest-generated SError and had been delivered in EL1 and then
+ *    been forwarded to EL2.
+ * 2) Guest-generated SError but hadn't been delivered in EL1 before
+ *    trapping to EL2. This SError would be caught in EL2 as soon as
+ *    we just unmasked the PSTATE.A bit.
+ * 3) Hypervisor generated native SError, that would be a bug.
+ *
+ * A true parameter "guest" means that the SError is type#1 or type#2.
+ */
+static void __do_trap_serror(struct cpu_user_regs *regs, bool guest)
+{
+    /*
+     * Only "DIVERSE" option needs to distinguish the guest-generated SErrors
+     * from hypervisor SErrors.
+     */
+    if ( serrors_op == SERRORS_DIVERSE )
+    {
+        /* Forward the type#1 and type#2 SErrors to guests. */
+        if ( guest )
+            return inject_vabt_exception(regs);
+
+        /* Type#3 SErrors will panic the whole system */
+        goto crash_system;
+    }
+
+    /*
+     * The "FORWARD" option will forward all SErrors to the guests, except
+     * idle domain generated SErrors.
+     */
+    if ( serrors_op == SERRORS_FORWARD )
+    {
+        /*
+         * Because the idle domain doesn't have the ability to handle the
+         * SErrors, we have to crash the whole system while we get a SError
+         * generated by idle domain.
+         */
+        if ( is_idle_vcpu(current) )
+            goto crash_system;
+
+        return inject_vabt_exception(regs);
+    }
+
+crash_system:
+    /*
+     * Three possibilities to crash the whole system:
+     * 1) "DIVERSE" option with Hypervisor generated SErrors.
+     * 2) "FORWARD" option with Idle Domain generated SErrors.
+     * 3) "PANIC" option with all SErrors.
+     */
+    do_unexpected_trap("SError", regs);
+}

 struct reg_ctxt {
     /* Guest-side state */
@@ -2863,6 +2914,20 @@ asmlinkage void do_trap_guest_error(struct cpu_user_regs *regs)
     domain_crash_synchronous();
 }

+asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
+{
+    enter_hypervisor_head(regs);
+
+    __do_trap_serror(regs, VABORT_GEN_BY_GUEST(regs));
+}
+
+asmlinkage void do_trap_guest_serror(struct cpu_user_regs *regs)
+{
+    enter_hypervisor_head(regs);
+
+    __do_trap_serror(regs, true);
+}
+
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
     enter_hypervisor_head(regs);
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 148cc6f..885dbca 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -709,6 +709,10 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1,

 void do_trap_guest_error(struct cpu_user_regs *regs);

+void do_trap_hyp_serror(struct cpu_user_regs *regs);
+
+void do_trap_guest_serror(struct cpu_user_regs *regs);
+
 /* Functions for pending virtual abort checking window. */
 void abort_guest_exit_start(void);
 void abort_guest_exit_end(void);
-- 
2.7.4
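The decision logic of __do_trap_serror can be factored as a pure function, which makes the three-way option handling easy to check in isolation. The sketch below is a hypothetical stand-alone model (enum and function names invented for illustration), returning an action instead of performing it:

```c
/* Hypothetical model of __do_trap_serror()'s dispatch over the serrors
 * option, the origin of the SError, and whether the idle vCPU is running. */
enum serrors_op_t  { SERRORS_DIVERSE, SERRORS_FORWARD, SERRORS_PANIC };
enum serror_action { ACT_FORWARD_TO_GUEST, ACT_CRASH_SYSTEM };

static enum serror_action
classify_serror(enum serrors_op_t op, int from_guest, int idle_vcpu)
{
    if ( op == SERRORS_DIVERSE )
        /* Only guest-generated SErrors (type#1/#2) are forwarded;
         * a native hypervisor SError (type#3) is fatal. */
        return from_guest ? ACT_FORWARD_TO_GUEST : ACT_CRASH_SYSTEM;

    if ( op == SERRORS_FORWARD )
        /* Everything is forwarded, except while the idle vCPU runs,
         * since the idle domain cannot handle a vSError. */
        return idle_vcpu ? ACT_CRASH_SYSTEM : ACT_FORWARD_TO_GUEST;

    return ACT_CRASH_SYSTEM;    /* "PANIC": all SErrors crash the system */
}
```

Writing the policy this way also documents the crash cases the patch lists in its closing comment: DIVERSE with a hypervisor SError, FORWARD on the idle vCPU, and PANIC always.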
[Xen-devel] [PATCH 09/18] xen/arm64: Use alternative to skip the check of pending serrors
We have provided an option to administrator to determine how to handle the SErrors. In order to skip the check of pending SError, in conventional way, we have to read the option every time before we try to check the pending SError. This will add overhead to check the option at every trap. The ARM64 supports the alternative patching feature. We can use an ALTERNATIVE to avoid checking option at every trap. We added a new cpufeature named "SKIP_CHECK_PENDING_VSERROR". This feature will be enabled when the option is not diverse. Signed-off-by: Wei Chen--- xen/arch/arm/arm64/entry.S | 41 + 1 file changed, 25 insertions(+), 16 deletions(-) diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S index 02802c0..4baa3cb 100644 --- a/xen/arch/arm/arm64/entry.S +++ b/xen/arch/arm/arm64/entry.S @@ -1,5 +1,6 @@ #include #include +#include #include /* @@ -229,12 +230,14 @@ hyp_irq: guest_sync: entry hyp=0, compat=0 -bl check_pending_vserror /* - * If x0 is Non-zero, a vSError took place, the initial exception - * doesn't have any significance to be handled. Exit ASAP + * The vSError will be checked while SKIP_CHECK_PENDING_VSERROR is + * not set. If a vSError took place, the initial exception will be + * skipped. Exit ASAP */ -cbnzx0, 1f +ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f", +"nop; nop", +SKIP_CHECK_PENDING_VSERROR) msr daifclr, #2 mov x0, sp bl do_trap_hypervisor @@ -243,12 +246,14 @@ guest_sync: guest_irq: entry hyp=0, compat=0 -bl check_pending_vserror /* - * If x0 is Non-zero, a vSError took place, the initial exception - * doesn't have any significance to be handled. Exit ASAP + * The vSError will be checked while SKIP_CHECK_PENDING_VSERROR is + * not set. If a vSError took place, the initial exception will be + * skipped. 
Exit ASAP */ -cbnzx0, 1f +ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f", +"nop; nop", +SKIP_CHECK_PENDING_VSERROR) mov x0, sp bl do_trap_irq 1: @@ -267,12 +272,14 @@ guest_error: guest_sync_compat: entry hyp=0, compat=1 -bl check_pending_vserror /* - * If x0 is Non-zero, a vSError took place, the initial exception - * doesn't have any significance to be handled. Exit ASAP + * The vSError will be checked while SKIP_CHECK_PENDING_VSERROR is + * not set. If a vSError took place, the initial exception will be + * skipped. Exit ASAP */ -cbnzx0, 1f +ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f", +"nop; nop", +SKIP_CHECK_PENDING_VSERROR) msr daifclr, #2 mov x0, sp bl do_trap_hypervisor @@ -281,12 +288,14 @@ guest_sync_compat: guest_irq_compat: entry hyp=0, compat=1 -bl check_pending_vserror /* - * If x0 is Non-zero, a vSError took place, the initial exception - * doesn't have any significance to be handled. Exit ASAP + * The vSError will be checked while SKIP_CHECK_PENDING_VSERROR is + * not set. If a vSError took place, the initial exception will be + * skipped. Exit ASAP */ -cbnzx0, 1f +ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f", +"nop; nop", +SKIP_CHECK_PENDING_VSERROR) mov x0, sp bl do_trap_irq 1: -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
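What the ALTERNATIVE buys can be modelled in plain C: the option is evaluated once (at "patch time", analogous to setting the SKIP_CHECK_PENDING_VSERROR cpufeature during boot), so no per-trap conditional on the option remains. This is a hypothetical userspace analogue, not Xen code:

```c
#include <stdbool.h>

/* Hypothetical analogue of boot-time alternative patching: the serrors
 * option is consulted once, and the per-trap code path is fixed from then
 * on, instead of re-reading the option on every trap. */
static int run_traps(bool serrors_diverse, int ntraps)
{
    int checks = 0;
    /* Decided once, like enabling the cpufeature at boot. */
    bool skip_check = !serrors_diverse;

    for ( int i = 0; i < ntraps; i++ )
        if ( !skip_check )      /* patched to nops when the feature is set */
            checks++;           /* cf. "bl check_pending_vserror" */
    return checks;
}
```

In the real patch the "decision" is literal instruction patching, so the non-diverse case executes two nops rather than a branch and a call.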
[Xen-devel] [PATCH 16/18] xen/arm: Isolate the SError between the context switch of 2 vCPUs
If there is a pending SError while we are doing a context switch and the SError handling option is "FORWARD", we have to guarantee that this SError is caught by the current vCPU; otherwise it will be caught by the next vCPU and be forwarded to that wrong vCPU.

We don't want to export serror_op accesses to other source files and use serror_op in every place, so we add a helper to synchronize SErrors for context switching. synchronize_serror is now used by this helper, so the "#if 0" can be removed.

Signed-off-by: Wei Chen
---
 xen/arch/arm/domain.c           |  2 ++
 xen/arch/arm/traps.c            | 14 --
 xen/include/asm-arm/processor.h |  2 ++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 69c2854..a547fcd 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -312,6 +312,8 @@ void context_switch(struct vcpu *prev, struct vcpu *next)

     local_irq_disable();

+    prevent_forward_serror_to_next_vcpu();
+
     set_current(next);

     prev = __context_switch(prev, next);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ee7865b..b8c8389 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2899,7 +2899,6 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
     }
 }

-#if 0
 static void synchronize_serror(void)
 {
     /* Synchronize against in-flight ld/st. */
@@ -2908,7 +2907,18 @@ static void synchronize_serror(void)
     /* A single instruction exception window */
     isb();
 }
-#endif
+
+/*
+ * If the SErrors option is "FORWARD", we have to prevent forwarding
+ * serror to wrong vCPU. So before context switch, we have to use the
+ * synchronize_serror to guarantee that the pending serror would be
+ * caught by current vCPU.
+ */
+void prevent_forward_serror_to_next_vcpu(void)
+{
+    if ( serrors_op == SERRORS_FORWARD )
+        synchronize_serror();
+}

 asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
 {
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index afad78c..3b43234 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -711,6 +711,8 @@ void do_trap_hyp_serror(struct cpu_user_regs *regs);

 void do_trap_guest_serror(struct cpu_user_regs *regs);

+void prevent_forward_serror_to_next_vcpu(void);
+
 /* Functions for pending virtual abort checking window. */
 void abort_guest_exit_start(void);
 void abort_guest_exit_end(void);
-- 
2.7.4

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
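The attribution problem the patch fixes can be captured in a tiny model: without the barrier, a pending SError raised by the outgoing vCPU is observed after `set_current(next)` and thus charged to the wrong vCPU. This is an illustrative abstraction only (vCPUs reduced to ids, the dsb/isb pair reduced to "consume now"); none of these names exist in Xen:

```c
#include <stdbool.h>

/* Hypothetical model: returns the id of the vCPU that observes a pending
 * SError across a context switch, or -1 if none was pending. With the
 * "FORWARD" option the synchronize step consumes it on the outgoing vCPU. */
static int serror_observer(int prev, int next, bool pending, bool forward_opt)
{
    if ( !pending )
        return -1;

    /* prevent_forward_serror_to_next_vcpu(): dsb/isb before set_current() */
    if ( forward_opt )
        return prev;        /* caught while prev is still current */

    return next;            /* otherwise it surfaces after the switch */
}
```

The real mechanism is of course the dsb(sy)/isb() pair, which forces any in-flight SError to be taken while the outgoing vCPU is still `current`.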
[Xen-devel] [PATCH 14/18] xen/arm: Unmask the Abort/SError bit in the exception entries
Currently, we mask the Abort/SError bit in Xen's exception entries, so Xen cannot capture any Abort/SError while it's running. Now that Xen has the ability to handle Abort/SError, we should unmask the Abort/SError bit by default to let Xen capture Abort/SError while it's running.

But in order to avoid receiving nested asynchronous aborts, we don't unmask the Abort/SError bit in hyp_error and trap_data_abort.

Signed-off-by: Wei Chen
---
We haven't done this before, so I don't know how this change will affect Xen. If an IRQ and an Abort take place at the same time, how can we handle them? If an abort takes place while we're handling the IRQ, the program jumps to the abort exception and then enables the IRQ. In this case, what will happen? So I think I need more discussion from the community.
---
 xen/arch/arm/arm32/entry.S | 15 ++-
 xen/arch/arm/arm64/entry.S | 13 -
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 79929ca..4d46239 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -125,6 +125,7 @@ abort_guest_exit_end:
 trap_##trap:                                                            \
         SAVE_ALL;                                                       \
         cpsie i;        /* local_irq_enable */                          \
+        cpsie a;        /* asynchronous abort enable */                 \
         adr lr, return_from_trap;                                       \
         mov r0, sp;                                                     \
         mov r11, sp;                                                    \
@@ -135,6 +136,18 @@ trap_##trap:                                       \
         ALIGN;                                                          \
 trap_##trap:                                                            \
         SAVE_ALL;                                                       \
+        cpsie a;        /* asynchronous abort enable */                 \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
+
+#define DEFINE_TRAP_ENTRY_NOABORT(trap)                                 \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        cpsie i;        /* local_irq_enable */                          \
         adr lr, return_from_trap;                                       \
         mov r0, sp;                                                     \
         mov r11, sp;                                                    \
@@ -155,10 +168,10 @@ GLOBAL(hyp_traps_vector)
         DEFINE_TRAP_ENTRY(undefined_instruction)
         DEFINE_TRAP_ENTRY(supervisor_call)
         DEFINE_TRAP_ENTRY(prefetch_abort)
-        DEFINE_TRAP_ENTRY(data_abort)
         DEFINE_TRAP_ENTRY(hypervisor)
         DEFINE_TRAP_ENTRY_NOIRQ(irq)
         DEFINE_TRAP_ENTRY_NOIRQ(fiq)
+        DEFINE_TRAP_ENTRY_NOABORT(data_abort)

 return_from_trap:
         mov sp, r11
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 8d5a890..0401a41 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -187,13 +187,14 @@ hyp_error:
 /* Traps taken in Current EL with SP_ELx */
 hyp_sync:
         entry   hyp=1
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_hypervisor
         exit    hyp=1

 hyp_irq:
         entry   hyp=1
+        msr     daifclr, #4
         mov     x0, sp
         bl      do_trap_irq
         exit    hyp=1
@@ -208,7 +209,7 @@ guest_sync:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_hypervisor
 1:
@@ -224,6 +225,7 @@ guest_irq:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
+        msr     daifclr, #4
         mov     x0, sp
         bl      do_trap_irq
 1:
@@ -235,7 +237,7 @@ guest_fiq_invalid:

 guest_error:
         entry   hyp=0, compat=0
-        msr     daifclr, #2
+        msr     daifclr, #6
         mov     x0, sp
         bl      do_trap_guest_serror
         exit    hyp=0, compat=0
@@ -250,7 +252,7 @@ guest_sync_compat:
         ALTERNATIVE("bl check_pending_vserror; cbnz x0, 1f",
                     "nop; nop",
                     SKIP_CHECK_PENDING_VSERROR)
-        msr     daifclr, #2
+        msr     daifclr, #6
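The immediates in the `msr daifclr, #imm` changes above are 4-bit masks over the PSTATE D/A/I/F bits (D=8, A=4, I=2, F=1), so `#2` unmasks IRQs only, `#6` additionally unmasks asynchronous aborts, and `#4` unmasks aborts alone. A small helper (hypothetical, not Xen code) makes the values self-describing:

```c
/* DAIF bit values as used in the "msr daifclr/daifset, #imm" immediates
 * on AArch64: D = debug, A = asynchronous abort (SError), I = IRQ, F = FIQ. */
#define PSTATE_D 8
#define PSTATE_A 4
#define PSTATE_I 2
#define PSTATE_F 1

static int daif_imm(int bits)
{
    return bits & 0xf;          /* the daifclr/daifset immediate is 4 bits */
}
```

This is why the sync/error entries move from `#2` to `#6` (IRQ plus SError unmasked) while the IRQ entries gain a separate `#4` (SError only, IRQs already handled by the entry path).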
[Xen-devel] [PATCH 0/4] x86emul: FPU handling corrections
1: fold exit paths
2: centralize put_fpu() invocations
3: correct handling of FPU insns faulting on memory write
4: correct FPU code/data pointers and opcode handling

XTF: add FPU/SIMD register state test

Signed-off-by: Jan Beulich

___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH 11/18] xen/arm: Move macro VABORT_GEN_BY_GUEST to common header
We want to move part of SErrors checking code from hyp_error assembly code to a function. This new function will use this macro to distinguish the guest SErrors from hypervisor SErrors. So we have to move this macro to common header. Signed-off-by: Wei Chen--- xen/arch/arm/arm64/entry.S| 2 ++ xen/include/asm-arm/arm32/processor.h | 10 -- xen/include/asm-arm/processor.h | 10 ++ 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S index 4baa3cb..113e1c3 100644 --- a/xen/arch/arm/arm64/entry.S +++ b/xen/arch/arm/arm64/entry.S @@ -380,10 +380,12 @@ check_pending_vserror: * exception handler, and the elr_el2 will be set to * abort_guest_exit_start or abort_guest_exit_end. */ +.global abort_guest_exit_start abort_guest_exit_start: isb +.global abort_guest_exit_end abort_guest_exit_end: /* Mask PSTATE asynchronous abort bit, close the checking window. */ msr daifset, #4 diff --git a/xen/include/asm-arm/arm32/processor.h b/xen/include/asm-arm/arm32/processor.h index f6d5df3..68cc821 100644 --- a/xen/include/asm-arm/arm32/processor.h +++ b/xen/include/asm-arm/arm32/processor.h @@ -56,16 +56,6 @@ struct cpu_user_regs uint32_t pad1; /* Doubleword-align the user half of the frame */ }; -/* Functions for pending virtual abort checking window. 
*/ -void abort_guest_exit_start(void); -void abort_guest_exit_end(void); - -#define VABORT_GEN_BY_GUEST(r) \ -( \ -( (unsigned long)abort_guest_exit_start == (r)->pc ) || \ -( (unsigned long)abort_guest_exit_end == (r)->pc ) \ -) - #endif /* Layout as used in assembly, with src/dest registers mixed in */ diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h index d7b0711..148cc6f 100644 --- a/xen/include/asm-arm/processor.h +++ b/xen/include/asm-arm/processor.h @@ -709,6 +709,16 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1, void do_trap_guest_error(struct cpu_user_regs *regs); +/* Functions for pending virtual abort checking window. */ +void abort_guest_exit_start(void); +void abort_guest_exit_end(void); + +#define VABORT_GEN_BY_GUEST(r) \ +( \ +( (unsigned long)abort_guest_exit_start == (r)->pc ) || \ +( (unsigned long)abort_guest_exit_end == (r)->pc ) \ +) + register_t get_default_hcr_flags(void); #endif /* __ASSEMBLY__ */ -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
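The effect of `VABORT_GEN_BY_GUEST` can be sketched in plain C: an SError is attributed to the guest only when the interrupted PC sits on one of the two labels bounding the exit-path checking window. The addresses below are stand-ins, not real label addresses:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical addresses standing in for the abort_guest_exit_start/end labels. */
static const unsigned long abort_guest_exit_start = 0x1000;
static const unsigned long abort_guest_exit_end   = 0x1004;

struct cpu_user_regs { unsigned long pc; };

/* Mirrors VABORT_GEN_BY_GUEST(r): the SError was generated by the guest only
 * if it was taken inside the small checking window on the guest-exit path. */
static bool vabort_gen_by_guest(const struct cpu_user_regs *r)
{
    return r->pc == abort_guest_exit_start || r->pc == abort_guest_exit_end;
}
```

Moving the macro to the common header lets the upcoming C-level SError categorization function perform exactly this test instead of open-coding it in hyp_error assembly.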
[Xen-devel] [PATCH 06/18] xen/arm: Introduce a virtual abort injection helper
When a guest triggers asynchronous aborts, on most platforms such aborts will be routed to the hypervisor. But we don't want the hypervisor to handle such aborts, so we have to route them back to the guest. This helper uses the HCR_EL2.VSE (HCR.VA for aarch32) bit to route such aborts back to the guest. If the guest PC had been advanced by an SVC/HVC/SMC instruction before we caught the SError in the hypervisor, we have to adjust the guest PC back to the exact address at which the SError was generated. About HSR_EC_SVC32/64: even though we don't trap SVC32/64 today, we would like them to be handled here. This would be useful when VM introspection gains support for SVC32/64 trapping. This helper will be used by later patches in this series; we use #if 0 to disable it in this patch temporarily, to suppress the compiler's unused-function warning. Signed-off-by: Wei Chen--- xen/arch/arm/traps.c| 32 xen/include/asm-arm/processor.h | 1 + 2 files changed, 33 insertions(+) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index c11359d..e425832 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -618,6 +618,38 @@ static void inject_dabt_exception(struct cpu_user_regs *regs, #endif } +#if 0 +/* Inject a virtual Abort/SError into the guest. */ +static void inject_vabt_exception(struct cpu_user_regs *regs) +{ +const union hsr hsr = { .bits = regs->hsr }; + +/* + * SVC/HVC/SMC already have an adjusted PC (See ARM ARM DDI 0487A.j + * D1.10.1 for more details), which we need to correct in order to + * return to after having injected the SError. + */ +switch ( hsr.ec ) +{ +case HSR_EC_SVC32: +case HSR_EC_HVC32: +case HSR_EC_SMC32: +#ifdef CONFIG_ARM_64 +case HSR_EC_SVC64: +case HSR_EC_HVC64: +case HSR_EC_SMC64: +#endif +regs->pc -= hsr.len ? 
4 : 2; +break; + +default: +break; +} + +current->arch.hcr_el2 |= HCR_VA; +} +#endif + struct reg_ctxt { /* Guest-side state */ uint32_t sctlr_el1; diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h index 4b6338b..d7b0711 100644 --- a/xen/include/asm-arm/processor.h +++ b/xen/include/asm-arm/processor.h @@ -252,6 +252,7 @@ #define HSR_EC_HVC320x12 #define HSR_EC_SMC320x13 #ifdef CONFIG_ARM_64 +#define HSR_EC_SVC640x15 #define HSR_EC_HVC640x16 #define HSR_EC_SMC640x17 #define HSR_EC_SYSREG 0x18 -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
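The PC adjustment in `inject_vabt_exception` can be modelled on the host: for a completed SVC/HVC/SMC, the HSR/ESR instruction-length (`IL`) bit says whether the trapped instruction was 32-bit (`IL=1`, subtract 4) or 16-bit Thumb (`IL=0`, subtract 2). A sketch with the register frame reduced to the one field involved:

```c
#include <assert.h>
#include <stdint.h>

struct regs { uint64_t pc; };

/* SVC/HVC/SMC leave the PC pointing past the instruction; rewind it so the
 * guest re-executes from the trapping instruction once the virtual SError
 * (pended via HCR_EL2.VSE / HCR.VA) has been delivered. */
static void rewind_trapped_insn(struct regs *r, unsigned int il)
{
    r->pc -= il ? 4 : 2;   /* il is the HSR/ESR instruction-length bit */
}
```

This matches the `regs->pc -= hsr.len ? 4 : 2;` line in the patch; exception classes other than SVC/HVC/SMC fall through without any rewind.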
[Xen-devel] [PATCH 03/18] xen/arm: Avoid setting/clearing HCR_RW at every context switch
The HCR_EL2 flags for 64-bit and 32-bit domains are different. But when we initialize HCR_EL2 for vcpu0 of Dom0 and all vCPUs of a DomU in vcpu_initialise, we don't yet know the domain's address size. We had to use compatible flags to initialize HCR_EL2, and then set HCR_RW for a 64-bit domain or clear HCR_RW for a 32-bit domain at every context switch. Now that we have added HCR_EL2 to the vCPU's context, this behaviour seems rather clumsy. We can update the HCR_RW bit in the vCPU's context as soon as we know the domain's address size, and so avoid setting/clearing HCR_RW at every context switch. Signed-off-by: Wei Chen--- xen/arch/arm/arm64/domctl.c | 6 ++ xen/arch/arm/domain.c| 5 + xen/arch/arm/domain_build.c | 7 +++ xen/arch/arm/p2m.c | 5 - xen/include/asm-arm/domain.h | 1 + 5 files changed, 19 insertions(+), 5 deletions(-) diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c index 44e1e7b..ab8781f 100644 --- a/xen/arch/arm/arm64/domctl.c +++ b/xen/arch/arm/arm64/domctl.c @@ -14,6 +14,8 @@ static long switch_mode(struct domain *d, enum domain_type type) { +struct vcpu *v; + if ( d == NULL ) return -EINVAL; if ( d->tot_pages != 0 ) @@ -23,6 +25,10 @@ static long switch_mode(struct domain *d, enum domain_type type) d->arch.type = type; +if ( is_64bit_domain(d) ) +for_each_vcpu(d, v) +vcpu_switch_to_aarch64_mode(v); + return 0; } diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 5d18bb0..69c2854 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -537,6 +537,11 @@ void vcpu_destroy(struct vcpu *v) free_xenheap_pages(v->arch.stack, STACK_ORDER); } +void vcpu_switch_to_aarch64_mode(struct vcpu *v) +{ +v->arch.hcr_el2 |= HCR_RW; +} + int arch_domain_create(struct domain *d, unsigned int domcr_flags, struct xen_arch_domainconfig *config) { diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index de59e5f..3abacc0 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -2148,6 +2148,10 @@ 
int construct_dom0(struct domain *d) return -EINVAL; } d->arch.type = kinfo.type; + +if ( is_64bit_domain(d) ) +vcpu_switch_to_aarch64_mode(v); + #endif allocate_memory(d, ); @@ -2240,6 +2244,9 @@ int construct_dom0(struct domain *d) printk("Failed to allocate dom0 vcpu %d on pcpu %d\n", i, cpu); break; } + +if ( is_64bit_domain(d) ) +vcpu_switch_to_aarch64_mode(d->vcpu[i]); } return 0; diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index c49bfa6..1cba0d0 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -136,11 +136,6 @@ void p2m_restore_state(struct vcpu *n) WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2); isb(); -if ( is_32bit_domain(n->domain) ) -n->arch.hcr_el2 &= ~HCR_RW; -else -n->arch.hcr_el2 |= HCR_RW; - WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1); isb(); diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 7b1dacc..68185e2 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -268,6 +268,7 @@ struct arch_vcpu void vcpu_show_execution_state(struct vcpu *); void vcpu_show_registers(const struct vcpu *); +void vcpu_switch_to_aarch64_mode(struct vcpu *); unsigned int domain_max_vcpus(const struct domain *); -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
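The one-time mode switch is simple to model: once HCR_EL2 lives in the vCPU context, making a domain 64-bit is a single OR of HCR_RW when the address size becomes known, rather than a conditional on every context switch. A sketch with an illustrative struct (HCR_RW is bit 31 of HCR_EL2):

```c
#include <assert.h>
#include <stdint.h>

#define HCR_RW (UINT64_C(1) << 31)   /* EL1 executes in AArch64 when set */

struct vcpu { uint64_t hcr_el2; };

/* One-time update performed when the domain's address size becomes known,
 * replacing the per-switch set/clear previously done in p2m_restore_state. */
static void vcpu_switch_to_aarch64_mode(struct vcpu *v)
{
    v->hcr_el2 |= HCR_RW;
}
```

32-bit domains simply never call this, so their context keeps HCR_RW clear from the default flags onward.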
[Xen-devel] [PATCH 10/18] xen/arm32: Use cpu_hwcaps to skip the check of pending serrors
We have provided an option to the administrator to determine how to handle SErrors. To skip the check of pending SErrors in the conventional way, we would have to read the option every time before checking for a pending SError. Currently, we haven't exported the option to other source files. But, in the previous patch, we set "SKIP_CHECK_PENDING_VSERROR" in cpu_hwcaps when the option doesn't require checking the SErrors. So we can check cpu_hwcaps instead of checking the option directly. Signed-off-by: Wei Chen--- This is a temporary solution; it will have to be dropped as soon as ARM32 gains support for alternative patching, to avoid potential misuse. The alternative patching support patches for ARM32 are still in the review stage. --- xen/arch/arm/arm32/entry.S | 19 +++ 1 file changed, 19 insertions(+) diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S index 2187226..79929ca 100644 --- a/xen/arch/arm/arm32/entry.S +++ b/xen/arch/arm/arm32/entry.S @@ -1,5 +1,6 @@ #include #include +#include #include #define SAVE_ONE_BANKED(reg)mrs r11, reg; str r11, [sp, #UREGS_##reg] @@ -11,6 +12,21 @@ #define RESTORE_BANKED(mode) \ RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode) +/* + * If the SKIP_CHECK_PENDING_VSERROR has been set in the cpu feature, + * the checking of pending SErrors will be skipped. + * + * As it is a temporary solution, we are assuming that + * SKIP_CHECK_PENDING_VSERROR will always be in the first word for + * cpu_hwcaps. This would have to be dropped as soon as ARM32 gain + * support of alternative. 
+ */ +#define SKIP_VSERROR_CHECK \ +ldr r1, =cpu_hwcaps;\ +ldr r1, [r1]; \ +tst r1, #SKIP_CHECK_PENDING_VSERROR;\ +moveq pc, lr + #define SAVE_ALL\ sub sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */ \ push {r0-r12}; /* Save R0-R12 */\ @@ -44,6 +60,9 @@ save_guest_regs: SAVE_BANKED(fiq) SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq) SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq); + +SKIP_VSERROR_CHECK + /* * Start to check pending virtual abort in the gap of Guest -> HYP * world switch. -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
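The shortcut relies on the capability sitting in the first word of the cpu_hwcaps bitmap, as the comment in the patch warns. A host-side C model of that lookup, treating `SKIP_CHECK_PENDING_VSERROR` as a bit number (which is how `cpus_set_cap` stores capabilities):

```c
#include <assert.h>
#include <stdint.h>

#define SKIP_CHECK_PENDING_VSERROR 5   /* capability number from patch 08/18 */

/* The arm32 fast path loads only cpu_hwcaps[0], so this model likewise
 * assumes the capability lives in the first 32-bit word of the bitmap. */
static uint32_t cpu_hwcaps[2];

static int skip_vserror_check(void)
{
    return (cpu_hwcaps[0] >> SKIP_CHECK_PENDING_VSERROR) & 1u;
}
```

Once ARM32 gains alternative patching, this runtime load-and-test can be replaced by patching the check out of the entry path entirely, which is why the assembly macro is explicitly marked temporary.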
[Xen-devel] [PATCH 08/18] xen/arm: Introduce an initcall to update cpu_hwcaps by serror_op
In later patches of this series, we want to use the alternative patching framework to avoid checking serror_op on every entry. So we define a new cpu feature "SKIP_CHECK_PENDING_VSERROR" for serror_op. When serror_op is not equal to SERRORS_DIVERSE, this feature will be set in cpu_hwcaps. But we could not update cpu_hwcaps directly in the serror parameter parsing function, because if the serror parameter is not placed on the command line, the parsing function is never invoked. So we introduce this initcall to guarantee that cpu_hwcaps is updated whether or not the serror parameter is present on the command line. Signed-off-by: Wei Chen--- xen/arch/arm/traps.c | 9 + xen/include/asm-arm/cpufeature.h | 3 ++- 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 5e31699..053b7fc 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -134,6 +134,15 @@ static void __init parse_serrors_behavior(const char *str) } custom_param("serrors", parse_serrors_behavior); +static __init int update_serrors_cpu_caps(void) +{ +if ( serrors_op != SERRORS_DIVERSE ) +cpus_set_cap(SKIP_CHECK_PENDING_VSERROR); + +return 0; +} +__initcall(update_serrors_cpu_caps); + register_t get_default_hcr_flags(void) { return (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM| diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h index c0a25ae..ec3f9e5 100644 --- a/xen/include/asm-arm/cpufeature.h +++ b/xen/include/asm-arm/cpufeature.h @@ -40,8 +40,9 @@ #define ARM32_WORKAROUND_766422 2 #define ARM64_WORKAROUND_834220 3 #define LIVEPATCH_FEATURE 4 +#define SKIP_CHECK_PENDING_VSERROR 5 -#define ARM_NCAPS 5 +#define ARM_NCAPS 6 #ifndef __ASSEMBLY__ -- 2.7.4
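The interaction described here — the parser alone never runs when `serrors=` is absent, so a separate boot-time hook must translate the (possibly default) option into a capability bit — can be sketched as:

```c
#include <assert.h>

enum serrors_mode { SERRORS_DIVERSE, SERRORS_FORWARD, SERRORS_PANIC };
static enum serrors_mode serrors_op;          /* zero-initialized: SERRORS_DIVERSE */

#define SKIP_CHECK_PENDING_VSERROR 5
static unsigned long cpu_hwcaps;

static void cpus_set_cap(unsigned int num) { cpu_hwcaps |= 1ul << num; }

/* Runs once at boot regardless of whether "serrors=" was on the command
 * line, so the capability is always derived from the effective option. */
static int update_serrors_cpu_caps(void)
{
    if (serrors_op != SERRORS_DIVERSE)
        cpus_set_cap(SKIP_CHECK_PENDING_VSERROR);
    return 0;
}
```

In Xen the function is registered with `__initcall()`; here it is called directly for illustration.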
[Xen-devel] [PATCH 07/18] xen/arm: Introduce a command line parameter for SErrors/Aborts
In order to distinguish guest-generated SErrors from hypervisor-generated SErrors, we have to place SError checking code on every EL1 -> EL2 path. That adds an overhead on entries, caused by dsb/isb. But not all platforms want to categorize SErrors. For example, on a host that is running with trusted guests, the administrator can confirm that none of the guests running on the host will trigger such SErrors. In this scenario, we should provide options to the administrator to avoid categorizing SErrors, and thereby reduce the overhead of dsb/isb. We provide the following 3 options to the administrator to determine how to handle SErrors:

* `diverse`: The hypervisor will distinguish guest SErrors from hypervisor SErrors. Guest-generated SErrors will be forwarded to guests; hypervisor-generated SErrors will crash the whole system. It requires: 1. Place dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors correctly. 2. Place dsb/isb on EL2 -> EL1 return paths to prevent hypervisor SErrors slipping through to guests. 3. Place dsb/isb in the context switch to isolate SErrors between 2 vCPUs.

* `forward`: The hypervisor will not distinguish guest SErrors from hypervisor SErrors. All SErrors will be forwarded to guests, except SErrors generated while the idle vCPU is running. The idle domain doesn't have the ability to handle SErrors, so we have to crash the whole system when we get an SError with the idle vCPU. This option avoids most of the dsb/isb overhead, except the dsb/isb in the context switch, which is used to isolate SErrors between 2 vCPUs.

* `panic`: The hypervisor will not distinguish guest SErrors from hypervisor SErrors. All SErrors will crash the whole system. This option avoids all of the dsb/isb overhead.

Signed-off-by: Wei Chen--- Regarding adding dsb/isb to prevent hypervisor SErrors slipping through to guests when the selected option is "diverse": some hypervisor SErrors cannot be avoided by software, for example ECC errors. 
But I don't know whether it's worth adding the overhead by default. --- docs/misc/xen-command-line.markdown | 44 + xen/arch/arm/traps.c| 19 2 files changed, 63 insertions(+) diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown index 4daf5b5..79554ce 100644 --- a/docs/misc/xen-command-line.markdown +++ b/docs/misc/xen-command-line.markdown @@ -1481,6 +1481,50 @@ enabling more sockets and cores to go into deeper sleep states. Set the serial transmit buffer size. +### serrors (ARM) +> `= diverse | forward | panic` + +> Default: `diverse` + +This parameter is provided to administrator to determine how to handle the +SErrors. + +In order to distinguish guest-generated SErrors from hypervisor-generated +SErrors. We have to place SError checking code in every EL1 -> EL2 paths. +That will be an overhead on entries caused by dsb/isb. But not all platforms +need to categorize the SErrors. For example, a host that is running with +trusted guests. The administrator can confirm that all guests that are +running on the host will not trigger such SErrors. In this case, the +administrator can use this parameter to skip categorizing the SErrors. And +reduce the overhead of dsb/isb. + +We provided following 3 options to administrator to determine how to handle +the SErrors: + +* `diverse`: + The hypervisor will distinguish guest SErrors from hypervisor SErrors. + The guest generated SErrors will be forwarded to guests, the hypervisor + generated SErrors will cause the whole system crash. + It requires: + 1. Place dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors + correctly. + 2. Place dsb/isb on EL2 -> EL1 return paths to prevent slipping hypervisor + SErrors to guests. + 3. Place dsb/isb in context switch to isolate the SErrors between 2 vCPUs. + +* `forward`: + The hypervisor will not distinguish guest SErrors from hypervisor SErrors. + All SErrors will be forwarded to guests, except the SErrors generated when + idle vCPU is running. 
The idle domain doesn't have the ability to handle the + SErrors, so we have to crash the whole system when we get SErrors with idle + vCPU. This option will avoid most overhead of the dsb/isb, except the dsb/isb + in context switch which is used to isolate the SErrors between 2 vCPUs. + +* `panic`: + The hypervisor will not distinguish guest SErrors from hypervisor SErrors. + All SErrors will crash the whole system. This option will avoid all overhead + of the dsb/isb. + ### smap > `= | hvm` diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index e425832..5e31699 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -115,6 +115,25 @@ static void __init parse_vwfi(const char *s) } custom_param("vwfi", parse_vwfi); +static enum { +SERRORS_DIVERSE, +SERRORS_FORWARD, +SERRORS_PANIC, +} serrors_op; +
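The diff is cut off just after the `serrors_op` enum, before the body of `parse_serrors_behavior`. A plausible sketch of such a parser — an assumption, since the real body isn't shown in this excerpt — maps the three documented keywords onto the enum and falls back to the default:

```c
#include <assert.h>
#include <string.h>

enum { SERRORS_DIVERSE, SERRORS_FORWARD, SERRORS_PANIC };
static int serrors_op = SERRORS_DIVERSE;

/* Hypothetical handler for the "serrors=" command line parameter; unknown
 * values fall back to the default `diverse` behaviour. */
static void parse_serrors_behavior(const char *str)
{
    if (strcmp(str, "forward") == 0)
        serrors_op = SERRORS_FORWARD;
    else if (strcmp(str, "panic") == 0)
        serrors_op = SERRORS_PANIC;
    else
        serrors_op = SERRORS_DIVERSE;
}
```

Whatever the exact body, the important property is that it only runs when `serrors=` is present, which is why a later patch adds an initcall to act on the effective value.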
Re: [Xen-devel] [RFC PATCH] mm, hotplug: get rid of auto_online_blocks
On Thu, 9 Mar 2017 13:54:00 +0100 Michal Hocko wrote: [...] > > It's a major regression if you remove auto online in kernels that > > run on top of x86 kvm/vmware hypervisors, making API cleanups > > while breaking useful functionality doesn't make sense. > > > > I would ACK config option removal if auto online keeps working > > for all x86 hypervisors (hyperv/xen isn't the only who needs it) > > and keep kernel CLI option to override default. > > > > That doesn't mean that others will agree with flipping default, > > that's why config option has been added. > > > > Now to sum up what's been discussed on this thread, there were 2 > > different issues discussed: > > 1) memory hotplug: remove in kernel auto online for all > > except of hyperv/xen > > > >- suggested RFC is not acceptable from virt point of view > > as it regresses guests on top of x86 kvm/vmware which > > both use ACPI based memory hotplug. > > > >- udev/userspace solution doesn't work in practice as it's > > too slow and unreliable when system is under load which > > is quite common in virt usecase. That's why auto online > > has been introduced in the first place. > > Please try to be more specific why "too slow" is a problem. Also how > much slower are we talking about? In the virt case on a host with lots of VMs, userspace handler processing could be scheduled late enough to trigger a race between (guest memory going away/OOM handler) and memory coming online. > > > 2) memory unplug: online memory as movable > > > >- doesn't work currently with udev rule due to kernel > > issues https://bugzilla.redhat.com/show_bug.cgi?id=1314306#c7 > > These should be fixed > > >- could be fixed both for in kernel auto online and udev > > with following patch: > > https://bugzilla.redhat.com/attachment.cgi?id=1146332 > > but fixing it this way exposes zone disbalance issues, > > which are not present in current kernel as blocks are > > onlined in Zone Normal. So this is area to work and > > improve on. 
> > > >- currently if one wants to use online_movable, > > one has to either > >* disable auto online in kernel OR > > which might not just work because an unmovable allocation could have > made the memblock pinned. With memhp_default_state=offline on the kernel CLI there won't be any unmovable allocation as hotplugged memory won't be onlined and the user can online it manually. So it works for the non-default usecase of playing with memory hot-unplug. > >* remove udev rule that distro ships > > AND write custom daemon that will be able to online > > block in right zone/order. So currently whole > > online_movable thing isn't working by default > > regardless of who onlines memory. > > my experience with onlining full nodes as movable shows this works just > fine (with all the limitations of the movable zones but that is a > separate thing). I haven't played with configurations where movable > zones are sharing the node with other zones. I don't have access to such a baremetal configuration to play with anymore. > > I'm in favor of implementing that in kernel as it keeps > > kernel internals inside kernel and doesn't need > > kernel API to be involved (memory blocks in sysfs, > > online_kernel, online_movable) > > There would be no need in userspace which would have to > > deal with kernel zoo and maintain that as well. > > The kernel is supposed to provide a proper API and that is sysfs > currently. I am not entirely happy about it either but pulling a lot of > code into the kernel is not the right thing to do. Especially when > different usecases require different treatment. If it could be done from the kernel side alone, it looks like a better way to me not to involve userspace at all. And for ACPI based x86/ARM it's possible to implement without adding a lot of kernel code. 
That's one more reason to keep CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE, so we could continue improving kernel-only auto-onlining and fixing current memory hot(un)plug issues without affecting other platforms/users that are not interested in it. (PS: I don't care much about a sysfs knob for setting auto-onlining, as the kernel CLI override with memhp_default_state seems sufficient to me)
[Xen-devel] [PATCH 05/18] xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check
Xen performs exception syndrome checks when certain types of exception are taken in EL2. The syndrome check code reads the ESR_EL2 register directly, but in some situations this register may be overridden by a nested exception. For example, if we re-enable IRQs before reading ESR_EL2, Xen can take an IRQ exception, and the processor returns with a clobbered ESR_EL2 (see ARM ARM DDI 0487A.j D7.2.25). In this case the guest exception syndrome has been overridden, and we would check the syndrome of a guest sync exception against a mismatched ESR_EL2 value. So we want to save ESR_EL2 into cpu_user_regs as soon as the exception is taken in EL2, to avoid using a mismatched syndrome value. Signed-off-by: Wei Chen--- xen/arch/arm/arm32/asm-offsets.c | 1 + xen/arch/arm/arm32/entry.S| 3 +++ xen/arch/arm/arm64/asm-offsets.c | 1 + xen/arch/arm/arm64/entry.S| 13 + xen/arch/arm/traps.c | 2 +- xen/include/asm-arm/arm32/processor.h | 2 +- xen/include/asm-arm/arm64/processor.h | 10 -- 7 files changed, 24 insertions(+), 8 deletions(-) diff --git a/xen/arch/arm/arm32/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c index f8e6b53..5b543ab 100644 --- a/xen/arch/arm/arm32/asm-offsets.c +++ b/xen/arch/arm/arm32/asm-offsets.c @@ -26,6 +26,7 @@ void __dummy__(void) OFFSET(UREGS_lr, struct cpu_user_regs, lr); OFFSET(UREGS_pc, struct cpu_user_regs, pc); OFFSET(UREGS_cpsr, struct cpu_user_regs, cpsr); + OFFSET(UREGS_hsr, struct cpu_user_regs, hsr); OFFSET(UREGS_LR_usr, struct cpu_user_regs, lr_usr); OFFSET(UREGS_SP_usr, struct cpu_user_regs, sp_usr); diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S index 2a6f4f0..2187226 100644 --- a/xen/arch/arm/arm32/entry.S +++ b/xen/arch/arm/arm32/entry.S @@ -23,6 +23,9 @@ add r11, sp, #UREGS_kernel_sizeof+4;\ str r11, [sp, #UREGS_sp]; \ \ +mrc CP32(r11, HSR); /* Save exception syndrome */ \ +str r11, [sp, #UREGS_hsr]; \ +\ mrs r11, SPSR_hyp; \ str r11, [sp, #UREGS_cpsr]; \ and r11, #PSR_MODE_MASK;\ diff --git 
a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c index 69ea92a..ce24e44 100644 --- a/xen/arch/arm/arm64/asm-offsets.c +++ b/xen/arch/arm/arm64/asm-offsets.c @@ -27,6 +27,7 @@ void __dummy__(void) OFFSET(UREGS_SP, struct cpu_user_regs, sp); OFFSET(UREGS_PC, struct cpu_user_regs, pc); OFFSET(UREGS_CPSR, struct cpu_user_regs, cpsr); + OFFSET(UREGS_ESR_el2, struct cpu_user_regs, hsr); OFFSET(UREGS_SPSR_el1, struct cpu_user_regs, spsr_el1); diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S index c181b5e..02802c0 100644 --- a/xen/arch/arm/arm64/entry.S +++ b/xen/arch/arm/arm64/entry.S @@ -121,9 +121,13 @@ lr .reqx30 // link register stp lr, x21, [sp, #UREGS_LR] -mrs x22, elr_el2 -mrs x23, spsr_el2 -stp x22, x23, [sp, #UREGS_PC] +mrs x21, elr_el2 +str x21, [sp, #UREGS_PC] + +add x21, sp, #UREGS_CPSR +mrs x22, spsr_el2 +mrs x23, esr_el2 +stp w22, w23, [x21] .endm @@ -307,7 +311,8 @@ ENTRY(return_to_new_vcpu64) return_from_trap: msr daifset, #2 /* Mask interrupts */ -ldp x21, x22, [sp, #UREGS_PC] // load ELR, SPSR +ldr x21, [sp, #UREGS_PC]// load ELR +ldr w22, [sp, #UREGS_CPSR] // load SPSR pop x0, x1 pop x2, x3 diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 476e2be..c11359d 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2657,7 +2657,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs) asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs) { -const union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) }; +const union hsr hsr = { .bits = regs->hsr }; enter_hypervisor_head(regs); diff --git a/xen/include/asm-arm/arm32/processor.h b/xen/include/asm-arm/arm32/processor.h index db3b17b..f6d5df3 100644 --- a/xen/include/asm-arm/arm32/processor.h +++ b/xen/include/asm-arm/arm32/processor.h @@ -37,7 +37,7 @@ struct cpu_user_regs uint32_t pc, pc32; }; uint32_t cpsr; /* Return mode */ -uint32_t pad0; /* Doubleword-align the kernel half of the frame */ +uint32_t hsr; /* Exception Syndrome */ /* 
Outer guest frame only from here on... */ diff --git a/xen/include/asm-arm/arm64/processor.h
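The hazard this patch closes can be demonstrated with a tiny host-side model: the entry path snapshots the syndrome into `cpu_user_regs` before interrupts are re-enabled, so a nested exception that clobbers the live register no longer corrupts the value the trap handler sees. Names here are illustrative, not the Xen definitions:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t esr_el2;                     /* stand-in for the live register */

struct cpu_user_regs { uint32_t hsr; };

/* Entry path: save the syndrome before anything can overwrite ESR_EL2. */
static void save_syndrome(struct cpu_user_regs *regs) { regs->hsr = esr_el2; }

/* A nested IRQ taken after interrupts are re-enabled clobbers ESR_EL2. */
static void nested_irq(void) { esr_el2 = 0; }

/* Handlers read the saved copy, as do_trap_hypervisor does after the patch. */
static uint32_t syndrome(const struct cpu_user_regs *regs) { return regs->hsr; }
```

This mirrors the change from `READ_SYSREG32(ESR_EL2)` to `regs->hsr` in `do_trap_hypervisor`, with the actual snapshot done in the entry.S macros.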
[Xen-devel] [PATCH 01/18] xen/arm: Introduce a helper to get default HCR_EL2 flags
We want to add HCR_EL2 register to Xen context switch. And each copy of HCR_EL2 in vcpu structure will be initialized with the same set of trap flags as the HCR_EL2 register. We introduce a helper here to represent these flags to be reused easily. Signed-off-by: Wei Chen--- xen/arch/arm/traps.c| 11 --- xen/include/asm-arm/processor.h | 2 ++ 2 files changed, 10 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 614501f..d343c66 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -115,6 +115,13 @@ static void __init parse_vwfi(const char *s) } custom_param("vwfi", parse_vwfi); +register_t get_default_hcr_flags(void) +{ +return (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM| + (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) | + HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB); +} + void init_traps(void) { /* Setup Hyp vector base */ @@ -139,9 +146,7 @@ void init_traps(void) CPTR_EL2); /* Setup hypervisor traps */ -WRITE_SYSREG(HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM| - (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) | - HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB,HCR_EL2); +WRITE_SYSREG(get_default_hcr_flags(), HCR_EL2); isb(); } diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h index afc0e9a..4b6338b 100644 --- a/xen/include/asm-arm/processor.h +++ b/xen/include/asm-arm/processor.h @@ -708,6 +708,8 @@ int call_smc(register_t function_id, register_t arg0, register_t arg1, void do_trap_guest_error(struct cpu_user_regs *regs); +register_t get_default_hcr_flags(void); + #endif /* __ASSEMBLY__ */ #endif /* __ASM_ARM_PROCESSOR_H */ /* -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
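The helper centralizes the trap-flag composition so that init_traps and the upcoming per-vCPU initialization share one definition. A reduced model showing just the vwfi-dependent part (only a subset of flags; the full set lives in asm-arm/processor.h):

```c
#include <assert.h>
#include <stdint.h>

#define HCR_VM  (UINT64_C(1) << 0)    /* stage-2 translation enable */
#define HCR_TWI (UINT64_C(1) << 13)   /* trap WFI */
#define HCR_TWE (UINT64_C(1) << 14)   /* trap WFE */

static enum { NATIVE, TRAP } vwfi = TRAP;

/* Reduced get_default_hcr_flags(): the WFI/WFE traps are included only
 * when the "vwfi" option is not set to native execution. */
static uint64_t get_default_hcr_flags(void)
{
    return HCR_VM | (vwfi != NATIVE ? (HCR_TWI | HCR_TWE) : 0);
}
```

Because the same expression previously lived inline in init_traps, any future caller (such as vcpu_initialise in the next patch) would otherwise have had to duplicate it.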
[Xen-devel] [PATCH 02/18] xen/arm: Restore HCR_EL2 register
Different domains may have different HCR_EL2 flags. For example, a 64-bit domain needs the HCR_RW flag while a 32-bit domain does not. So we give each domain a default HCR_EL2 value and save it in the vCPU's context. The HCR_EL2 register has only one bit that can be updated automatically without an explicit write (HCR_VSE). But we don't use this bit currently, so we can consider that the HCR_EL2 register is not modified while the guest is running. Thus saving HCR_EL2 when the guest exits to the hypervisor is not necessary; we just have to restore this register for each vCPU when leaving the hypervisor. We prefer to restore HCR_EL2 in leave_hypervisor_tail rather than in ctxt_switch_to, because leave_hypervisor_tail is the closest place to the exception return. In this case, we don't need to guarantee that HCR_EL2 is unchanged between ctxt_switch_to and the exception return. Even though we restore HCR_EL2 in leave_hypervisor_tail, we still have to keep the write to HCR_EL2 in p2m_restore_state. That is because p2m_restore_state can be used to switch between two p2ms and possibly to do address translation using hardware. For instance, when building the hardware domain, we perform such a translation before copying data. During the translation, some bits of base registers (such as SCTLR and HCR) can be cached in the TLB and used for the translation. We had some issues in the past (see commit b3cbe129d "xen: arm: Ensure HCR_EL2.RW is set correctly when building dom0"), so we should probably keep the write to HCR_EL2 in p2m_restore_state. 
Signed-off-by: wei chen--- xen/arch/arm/domain.c| 2 ++ xen/arch/arm/p2m.c | 15 +-- xen/arch/arm/traps.c | 1 + xen/include/asm-arm/domain.h | 3 +++ 4 files changed, 15 insertions(+), 6 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index bb327da..5d18bb0 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -513,6 +513,8 @@ int vcpu_initialise(struct vcpu *v) v->arch.actlr = READ_SYSREG32(ACTLR_EL1); +v->arch.hcr_el2 = get_default_hcr_flags(); + processor_vcpu_initialise(v); if ( (rc = vcpu_vgic_init(v)) != 0 ) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 1fc6ca3..c49bfa6 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -128,26 +128,29 @@ void p2m_save_state(struct vcpu *p) void p2m_restore_state(struct vcpu *n) { -register_t hcr; struct p2m_domain *p2m = >domain->arch.p2m; if ( is_idle_vcpu(n) ) return; -hcr = READ_SYSREG(HCR_EL2); - WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2); isb(); if ( is_32bit_domain(n->domain) ) -hcr &= ~HCR_RW; +n->arch.hcr_el2 &= ~HCR_RW; else -hcr |= HCR_RW; +n->arch.hcr_el2 |= HCR_RW; WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1); isb(); -WRITE_SYSREG(hcr, HCR_EL2); +/* + * p2m_restore_state could be used to switch between two p2m and possibly + * to do address translation using hardware. And these operations may + * happen during the interval between enter/leave hypervior, so we should + * probably keep the write to HCR_EL2 here. 
+ */ +WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2); isb(); } diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index d343c66..9792d02 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2811,6 +2811,7 @@ asmlinkage void leave_hypervisor_tail(void) local_irq_disable(); if (!softirq_pending(smp_processor_id())) { gic_inject(); +WRITE_SYSREG(current->arch.hcr_el2, HCR_EL2); return; } local_irq_enable(); diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 09fe502..7b1dacc 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -204,6 +204,9 @@ struct arch_vcpu register_t tpidr_el1; register_t tpidrro_el0; +/* HYP configuration */ +register_t hcr_el2; + uint32_t teecr, teehbr; /* ThumbEE, 32-bit guests only */ #ifdef CONFIG_ARM_32 /* -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
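Because HCR_EL2 cannot change behind Xen's back while the guest runs (the only self-updating bit, HCR_VSE, is unused), the exit path needs no save at all; a restore on the way back to the guest suffices. A minimal model of that restore-only discipline, with a plain variable standing in for the hardware register:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t hcr_el2_hw;              /* stand-in for the hardware register */

struct vcpu { uint64_t hcr_el2; };

/* Modelled on leave_hypervisor_tail: restore the per-vCPU copy just before
 * the exception return, so nothing that runs between ctxt_switch_to and the
 * return can leave a stale value in the register. */
static void restore_hcr_on_exit(const struct vcpu *current)
{
    hcr_el2_hw = current->hcr_el2;
}
```

The write kept in p2m_restore_state is the one exception: it is needed whenever hardware address translation is performed on behalf of another p2m before the next exception return.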
[Xen-devel] [PATCH 00/18] Provide a command line option to choose how to handle SErrors
From XSA-201 (see [1]), we know that a guest could trigger SErrors when accessing memory-mapped HW in a non-conventional way. In the patches for XSA-201, we crash the guest when we capture such asynchronous aborts, to avoid data corruption. In order to distinguish guest-generated SErrors from hypervisor-generated SErrors, we have to place SError-checking code on every EL1 -> EL2 path. The dsb/isb barriers this requires add overhead to those entries. But not all platforms want to categorize SErrors. For example, on a host that runs only trusted guests, the administrator can confirm that none of the guests will trigger such SErrors. For this use case, we should give the administrator options to skip categorizing SErrors and thereby avoid the dsb/isb overhead. We provide the following 3 options for the administrator to determine how SErrors are handled: * `diverse`: The hypervisor will distinguish guest SErrors from hypervisor SErrors. Guest-generated SErrors will be forwarded to guests; hypervisor-generated SErrors will crash the whole system. It requires: 1. Place dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors correctly. 2. Place dsb/isb on EL2 -> EL1 return paths to prevent hypervisor SErrors from slipping to guests. 3. Place dsb/isb in the context switch to isolate SErrors between 2 vCPUs. * `forward`: The hypervisor will not distinguish guest SErrors from hypervisor SErrors. All SErrors will be forwarded to guests, except SErrors generated while the idle vCPU is running. The idle domain doesn't have the ability to handle SErrors, so we have to crash the whole system when we get an SError on the idle vCPU. This option avoids most of the dsb/isb overhead, except the dsb/isb in the context switch, which is used to isolate SErrors between 2 vCPUs. * `panic`: The hypervisor will not distinguish guest SErrors from hypervisor SErrors. All SErrors will crash the whole system. 
This option will avoid all overhead of the dsb/isb. Wei Chen (18): xen/arm: Introduce a helper to get default HCR_EL2 flags xen/arm: Restore HCR_EL2 register xen/arm: Avoid setting/clearing HCR_RW at every context switch xen/arm: Save HCR_EL2 when a guest took the SError xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome check xen/arm: Introduce a virtual abort injection helper xen/arm: Introduce a command line parameter for SErrors/Aborts xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op xen/arm64: Use alternative to skip the check of pending serrors xen/arm32: Use cpu_hwcaps to skip the check of pending serrors xen/arm: Move macro VABORT_GEN_BY_GUEST to common header xen/arm: Introduce new helpers to handle guest/hyp SErrors xen/arm: Replace do_trap_guest_serror with new helpers xen/arm: Unmask the Abort/SError bit in the exception entries xen/arm: Introduce a helper to synchronize SError xen/arm: Isolate the SError between the context switch of 2 vCPUs xen/arm: Prevent slipping hypervisor SError to guest xen/arm: Handle guest external abort as guest SError docs/misc/xen-command-line.markdown | 44 xen/arch/arm/arm32/asm-offsets.c | 1 + xen/arch/arm/arm32/entry.S| 37 ++- xen/arch/arm/arm32/traps.c| 5 +- xen/arch/arm/arm64/asm-offsets.c | 1 + xen/arch/arm/arm64/domctl.c | 6 + xen/arch/arm/arm64/entry.S| 105 -- xen/arch/arm/domain.c | 9 ++ xen/arch/arm/domain_build.c | 7 ++ xen/arch/arm/p2m.c| 16 ++- xen/arch/arm/traps.c | 200 ++ xen/include/asm-arm/arm32/processor.h | 12 +- xen/include/asm-arm/arm64/processor.h | 10 +- xen/include/asm-arm/cpufeature.h | 3 +- xen/include/asm-arm/domain.h | 4 + xen/include/asm-arm/processor.h | 19 +++- 16 files changed, 370 insertions(+), 109 deletions(-) -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH 04/18] xen/arm: Save HCR_EL2 when a guest took the SError
The HCR_EL2.VSE (HCR.VA for aarch32) bit can be used to generate a virtual abort to a guest. The HCR_EL2.VSE bit has a peculiar feature of getting cleared when the guest has taken the abort (it is the only bit in the HCR_EL2 register that behaves this way). This means that if we set the HCR_EL2.VSE bit to signal such an abort, we must preserve it in the guest context until it disappears from HCR_EL2, at which point it must be cleared from the context. This is achieved by reading back from HCR_EL2 until the guest takes the fault. If we preserved a pending VSE in the guest context, we have to restore it to HCR_EL2 when context switching to this guest. This is achieved by writing the saved HCR_EL2 value from the guest context back to the HCR_EL2 register before returning to the guest, which was done by the earlier patch "Restore HCR_EL2 register". Signed-off-by: Wei Chen --- xen/arch/arm/traps.c | 11 +++ 1 file changed, 11 insertions(+) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 9792d02..476e2be 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2641,7 +2641,18 @@ static void do_trap_smc(struct cpu_user_regs *regs, const union hsr hsr) static void enter_hypervisor_head(struct cpu_user_regs *regs) { if ( guest_mode(regs) ) +{ +/* + * If we pended a virtual abort, preserve it until it gets cleared. + * See ARM ARM DDI 0487A.j D1.14.3 (Virtual Interrupts) for details, + * but the crucial bit is "On taking a vSError interrupt, HCR_EL2.VSE + * (alias of HCR.VA) is cleared to 0." + */ +if ( current->arch.hcr_el2 & HCR_VA ) +current->arch.hcr_el2 = READ_SYSREG(HCR_EL2); + gic_clear_lrs(current); +} } asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs) -- 2.7.4
Re: [Xen-devel] [GSoC 2017] Rust bindings for libxl
On Mon, Mar 13, 2017 at 10:47:08AM +, Wei Liu wrote: > Hello Saurav > > On Mon, Mar 06, 2017 at 03:50:37PM +, Saurav Sachidanand wrote: > > Hello, > > > > I'm Saurav Sachidanand, and I'm a CS sophomore studying in India. For > > more than year I've been programming in Rust and have published some > > personal projects in it (few involving the Rust-C FFI) and have > > contributed a some code to Servo (github.com/saurvs). I've also > > played around a bit with kernel modules in NetBSD. > > > > I'm interested in Xen's project for creating Rust bindings for libxl. > > Since I'm new to Xen, I'll spend time reading the docs, building and > > testing out Xen, and researching on the how to go about the > > implementing the bindings. > > > > Yeah, the first step would be to install and play with Xen for a bit. > > > I'd greatly appreciate any guidance and pointers you can give > > regarding this project. And if you could point me to some small coding > > tasks, I can start working it to get familiar with Xen's code base. > > > > From my point of view, this project needs to achieve several goals: > > 1. generate bindings systematically and automatically; > 2. be committed in tree (xen.git) -- see also tools/python directory; > 3. can be tested in project's CI infrastructure (osstest). > > Doug might have more points to add. > > As a small exercise, please try to implement a program in Rust so that > we can see (more or less) the same information as you would see when > calling "xl info", and provide building instructions so that we can test > it. Bonus point: do it in the form of a patch against xen.git so that we > can build it in-tree. > > And then you can come up with some ideas on how to achieve the goals. > Forgot to say: feel free to ask questions if you find it difficult to navigate xen source code. The code you want to check out at this stage is tools/libxl, tools/xl. Please also have a look at tools/python for existing in-tree bindings. Wei. > Wei. 
> > > > Thanks, > > Saurav
[Xen-devel] [PATCH v1 3/3] x86/vvmx: add a shadow vmcs check to vmlaunch
The Intel SDM states that if the current VMCS is a shadow VMCS, VMFailInvalid occurs and control passes to the next instruction. Implement such behaviour for nested vmlaunch. Signed-off-by: Sergey Dyasli --- xen/arch/x86/hvm/vmx/vvmx.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 3017849..173ec74 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1630,6 +1630,13 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs) return X86EMUL_OKAY; } +/* Check that the guest is not using a shadow VMCS for vmentry */ +if ( nvmx->shadow_vmcs ) +{ +vmfail_invalid(regs); +return X86EMUL_OKAY; +} + __vmread(GUEST_INTERRUPTIBILITY_INFO, &intr_shadow); if ( intr_shadow & VMX_INTR_SHADOW_MOV_SS ) { -- 2.9.3
[Xen-devel] [PATCH v1 0/3] x86/vvmx: fixes for mov-ss and shadow vmcs handling
This series includes 2 more checks for nested vmentry and a fix for handling a nested shadow vmcs. Sergey Dyasli (3): x86/vvmx: add mov-ss blocking check to vmentry x86/vvmx: correct nested shadow VMCS handling x86/vvmx: add a shadow vmcs check to vmlaunch xen/arch/x86/hvm/vmx/vvmx.c| 45 ++ xen/include/asm-x86/hvm/vmx/vmcs.h | 1 + xen/include/asm-x86/hvm/vmx/vvmx.h | 1 + 3 files changed, 43 insertions(+), 4 deletions(-) -- 2.9.3 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH v1 2/3] x86/vvmx: correct nested shadow VMCS handling
Currently Xen always sets the shadow VMCS-indicator bit on nested vmptrld and always clears it on nested vmclear. This behavior is wrong when the guest loads a shadow VMCS: the shadow bit will be lost on nested vmclear. Fix this by checking whether the guest has provided a shadow VMCS. Signed-off-by: Sergey Dyasli --- xen/arch/x86/hvm/vmx/vvmx.c| 22 ++ xen/include/asm-x86/hvm/vmx/vvmx.h | 1 + 2 files changed, 19 insertions(+), 4 deletions(-) diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 09e4250..3017849 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1119,10 +1119,19 @@ static bool_t nvmx_vpid_enabled(const struct vcpu *v) static void nvmx_set_vmcs_pointer(struct vcpu *v, struct vmcs_struct *vvmcs) { +struct nestedvmx *nvmx = &vcpu_2_nvmx(v); paddr_t vvmcs_maddr = v->arch.hvm_vmx.vmcs_shadow_maddr; __vmpclear(vvmcs_maddr); -vvmcs->vmcs_revision_id |= VMCS_RID_TYPE_MASK; +if ( !nvmx->shadow_vmcs ) +{ +/* + * We must set the shadow VMCS-indicator in order for the next vmentry + * to succeed with a newly set up link pointer in vmcs01. + * Note: the guest can see that this bit was set. 
+ */ +vvmcs->vmcs_revision_id |= VMCS_RID_TYPE_MASK; +} __vmwrite(VMCS_LINK_POINTER, vvmcs_maddr); __vmwrite(VMREAD_BITMAP, page_to_maddr(v->arch.hvm_vmx.vmread_bitmap)); __vmwrite(VMWRITE_BITMAP, page_to_maddr(v->arch.hvm_vmx.vmwrite_bitmap)); @@ -1130,10 +1139,13 @@ static void nvmx_clear_vmcs_pointer(struct vcpu *v, struct vmcs_struct *vvmcs) { +struct nestedvmx *nvmx = &vcpu_2_nvmx(v); paddr_t vvmcs_maddr = v->arch.hvm_vmx.vmcs_shadow_maddr; __vmpclear(vvmcs_maddr); -vvmcs->vmcs_revision_id &= ~VMCS_RID_TYPE_MASK; +if ( !nvmx->shadow_vmcs ) +vvmcs->vmcs_revision_id &= ~VMCS_RID_TYPE_MASK; +nvmx->shadow_vmcs = false; __vmwrite(VMCS_LINK_POINTER, ~0ul); __vmwrite(VMREAD_BITMAP, 0); __vmwrite(VMWRITE_BITMAP, 0); @@ -1674,12 +1686,14 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs) { if ( writable ) { +struct nestedvmx *nvmx = &vcpu_2_nvmx(v); struct vmcs_struct *vvmcs = vvmcx; +nvmx->shadow_vmcs = +vvmcs->vmcs_revision_id & ~VMX_BASIC_REVISION_MASK; if ( ((vvmcs->vmcs_revision_id ^ vmx_basic_msr) & VMX_BASIC_REVISION_MASK) || - (!cpu_has_vmx_vmcs_shadowing && - (vvmcs->vmcs_revision_id & ~VMX_BASIC_REVISION_MASK)) ) + (!cpu_has_vmx_vmcs_shadowing && nvmx->shadow_vmcs) ) { hvm_unmap_guest_frame(vvmcx, 1); vmfail(regs, VMX_INSN_VMPTRLD_INCORRECT_VMCS_ID); diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h index ca2fb25..9a65218 100644 --- a/xen/include/asm-x86/hvm/vmx/vvmx.h +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h @@ -51,6 +51,7 @@ struct nestedvmx { } ept; uint32_t guest_vpid; struct list_head launched_list; +bool shadow_vmcs; }; #define vcpu_2_nvmx(v) (vcpu_nestedhvm(v).u.nvmx) -- 2.9.3
[Xen-devel] [PATCH v1 1/3] x86/vvmx: add mov-ss blocking check to vmentry
The Intel SDM states that if there is a current VMCS and there is MOV-SS blocking, VMFailValid occurs and control passes to the next instruction. Implement such behaviour for nested vmlaunch and vmresume. Signed-off-by: Sergey Dyasli --- xen/arch/x86/hvm/vmx/vvmx.c| 16 xen/include/asm-x86/hvm/vmx/vmcs.h | 1 + 2 files changed, 17 insertions(+) diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index e2c0951..09e4250 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1572,6 +1572,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs) bool_t launched; struct vcpu *v = current; struct nestedvmx *nvmx = &vcpu_2_nvmx(v); +unsigned long intr_shadow; int rc = vmx_inst_check_privilege(regs, 0); if ( rc != X86EMUL_OKAY ) @@ -1583,6 +1584,13 @@ return X86EMUL_OKAY; } +__vmread(GUEST_INTERRUPTIBILITY_INFO, &intr_shadow); +if ( intr_shadow & VMX_INTR_SHADOW_MOV_SS ) +{ +vmfail_valid(regs, VMX_INSN_VMENTRY_BLOCKED_BY_MOV_SS); +return X86EMUL_OKAY; +} + launched = vvmcs_launched(&nvmx->launched_list, PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr)); if ( !launched ) @@ -1598,6 +1606,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs) bool_t launched; struct vcpu *v = current; struct nestedvmx *nvmx = &vcpu_2_nvmx(v); +unsigned long intr_shadow; int rc = vmx_inst_check_privilege(regs, 0); if ( rc != X86EMUL_OKAY ) @@ -1609,6 +1618,13 @@ return X86EMUL_OKAY; } +__vmread(GUEST_INTERRUPTIBILITY_INFO, &intr_shadow); +if ( intr_shadow & VMX_INTR_SHADOW_MOV_SS ) +{ +vmfail_valid(regs, VMX_INSN_VMENTRY_BLOCKED_BY_MOV_SS); +return X86EMUL_OKAY; +} + launched = vvmcs_launched(&nvmx->launched_list, PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr)); if ( launched ) diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h index f465fff..dc5d91f 100644 --- a/xen/include/asm-x86/hvm/vmx/vmcs.h +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h @@ -515,6 +515,7 
@@ enum vmx_insn_errno VMX_INSN_VMPTRLD_INCORRECT_VMCS_ID = 11, VMX_INSN_UNSUPPORTED_VMCS_COMPONENT= 12, VMX_INSN_VMXON_IN_VMX_ROOT = 15, +VMX_INSN_VMENTRY_BLOCKED_BY_MOV_SS = 26, VMX_INSN_FAIL_INVALID = ~0, }; -- 2.9.3
Re: [Xen-devel] Xen 4.6.5 released
On 13.03.2017 11:29, Andrew Cooper wrote: > On 13/03/17 09:24, Jan Beulich wrote: > On 10.03.17 at 18:22,wrote: >>> On 08.03.2017 13:54, Jan Beulich wrote: All, I am pleased to announce the release of Xen 4.6.5. This is available immediately from its git repository http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.6 (tag RELEASE-4.6.5) or from the XenProject download page http://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-465.html (where a list of changes can also be found). We recommend all users of the 4.6 stable series to update to this latest point release. >>> This does not seem to compile for me (x86_64) without the attached >>> (admittedly >>> brutish) change. >> I guess it's the emulator test code which has a problem here (I >> did notice this myself), but that doesn't get built by default (and >> I see no reason why anyone would want to build it when putting >> together packages for people to consume - this is purely a dev >> tool). Please clarify. > > These tools are all built automatically. Therefore, build fixes should > be backported. > > To avoid building them, you need override CONFIG_TESTS := n in the root > .config file to override the default in Config.mk Thanks Andrew, I was not sure but I did not do anything special except replacing the orig tarballs. The rest of the build is as we share it with Debian. So for a minor release / stable release update I would rather not change the environment. For the patch I just copied the definition from lib.h because gcc seems to be called without access to hypervisor includes (probably adapting the Makefile plus adding an include would be the better path but it was late'ish on a Friday and I wanted something compiling quickly). -Stefan signature.asc Description: OpenPGP digital signature ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [GSoC 2017] Rust bindings for libxl
Hello Saurav On Mon, Mar 06, 2017 at 03:50:37PM +, Saurav Sachidanand wrote: > Hello, > > I'm Saurav Sachidanand, and I'm a CS sophomore studying in India. For > more than year I've been programming in Rust and have published some > personal projects in it (few involving the Rust-C FFI) and have > contributed a some code to Servo (github.com/saurvs). I've also > played around a bit with kernel modules in NetBSD. > > I'm interested in Xen's project for creating Rust bindings for libxl. > Since I'm new to Xen, I'll spend time reading the docs, building and > testing out Xen, and researching on the how to go about the > implementing the bindings. > Yeah, the first step would be to install and play with Xen for a bit. > I'd greatly appreciate any guidance and pointers you can give > regarding this project. And if you could point me to some small coding > tasks, I can start working it to get familiar with Xen's code base. > From my point of view, this project needs to achieve several goals: 1. generate bindings systematically and automatically; 2. be committed in tree (xen.git) -- see also tools/python directory; 3. can be tested in project's CI infrastructure (osstest). Doug might have more points to add. As a small exercise, please try to implement a program in Rust so that we can see (more or less) the same information as you would see when calling "xl info", and provide building instructions so that we can test it. Bonus point: do it in the form of a patch against xen.git so that we can build it in-tree. And then you can come up with some ideas on how to achieve the goals. Wei. > Thanks, > Saurav ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] WTH is going on with memory hotplug sysf interface (was: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks)
On Mon 13-03-17 11:31:10, Igor Mammedov wrote: > On Fri, 10 Mar 2017 14:58:07 +0100 [...] > > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x-0x0009] > > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x0010-0x3fff] > > [0.00] ACPI: SRAT: Node 1 PXM 1 [mem 0x4000-0x7fff] > > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x1-0x27fff] > > hotplug > > [0.00] NUMA: Node 0 [mem 0x-0x0009] + [mem > > 0x0010-0x3fff] -> [mem 0x-0x3fff] > > [0.00] NODE_DATA(0) allocated [mem 0x3fffc000-0x3fff] > > [0.00] NODE_DATA(1) allocated [mem 0x7ffdc000-0x7ffd] > > [0.00] Zone ranges: > > [0.00] DMA [mem 0x1000-0x00ff] > > [0.00] DMA32[mem 0x0100-0x7ffd] > > [0.00] Normal empty > > [0.00] Movable zone start for each node > > [0.00] Early memory node ranges > > [0.00] node 0: [mem 0x1000-0x0009efff] > > [0.00] node 0: [mem 0x0010-0x3fff] > > [0.00] node 1: [mem 0x4000-0x7ffd] > > > > so there is neither any normal zone nor movable one at the boot time. > it could be if hotpluggable memory were present at boot time in the E820 table > (if I remember right when running on hyperv there is movable zone at boot > time), > > but in qemu hotpluggable memory isn't put into E820, > so the zone is allocated later when memory is enumerated > by the ACPI subsystem and onlined. > It causes fewer issues wrt the movable zone and works for > different versions of linux/windows as well. > > That's where in-kernel auto-onlining could also be useful, > since the user would be able to start up with small > non-removable memory plus several removable DIMMs > and have all the memory onlined/available by the time > initrd is loaded. (The missing piece here is onlining > removable memory as movable by default.) Why should we even care to online that memory that early rather than making it available via e820? 
> > device_add pc-dimm,id=dimm1,memdev=mem1,node=1 Thanks for the tip. > > unfortunately the memory didn't show up automatically and I got > > [ 116.375781] acpi PNP0C80:00: Enumeration failure > it should work, > do you have CONFIG_ACPI_HOTPLUG_MEMORY enabled? No I didn't. Thanks, good to know! -- Michal Hocko SUSE Labs
[Xen-devel] [PATCH] tools/Rules.mk: libxlutil should use $(XEN_XLUTIL)
A typo was made in 7a6de259f. Currently libxlutil lives in the same directory as libxl, so fixing this issue causes no functional change. Signed-off-by: Wei Liu --- tools/Rules.mk | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/Rules.mk b/tools/Rules.mk index e676c6b665..3e49370f3d 100644 --- a/tools/Rules.mk +++ b/tools/Rules.mk @@ -182,7 +182,7 @@ SHLIB_libxenlight = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT) CFLAGS_libxlutil = -I$(XEN_XLUTIL) SHDEPS_libxlutil = $(SHLIB_libxenlight) -LDLIBS_libxlutil = $(SHDEPS_libxlutil) $(XEN_XENLIGHT)/libxlutil$(libextension) +LDLIBS_libxlutil = $(SHDEPS_libxlutil) $(XEN_XLUTIL)/libxlutil$(libextension) SHLIB_libxlutil = $(SHDEPS_libxlutil) -Wl,-rpath-link=$(XEN_XLUTIL) CFLAGS += -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -- 2.11.0
Re: [Xen-devel] WTH is going on with memory hotplug sysf interface (was: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks)
On Fri, 10 Mar 2017 14:58:07 +0100 Michal Hockowrote: > Let's CC people touching this logic. A short summary is that onlining > memory via udev is currently unusable for online_movable because blocks > are added from lower addresses while movable blocks are allowed from > last blocks. More below. > > On Thu 09-03-17 13:54:00, Michal Hocko wrote: > > On Tue 07-03-17 13:40:04, Igor Mammedov wrote: > > > On Mon, 6 Mar 2017 15:54:17 +0100 > > > Michal Hocko wrote: > > > > > > > On Fri 03-03-17 18:34:22, Igor Mammedov wrote: > > [...] > > > > > in current mainline kernel it triggers following code path: > > > > > > > > > > online_pages() > > > > > ... > > > > >if (online_type == MMOP_ONLINE_KERNEL) { > > > > > > > > > > if (!zone_can_shift(pfn, nr_pages, ZONE_NORMAL, > > > > > _shift)) > > > > > return -EINVAL; > > > > > > > > Are you sure? I would expect MMOP_ONLINE_MOVABLE here > > > pretty much, reproducer is above so try and see for yourself > > > > I will play with this... > > OK so I did with -m 2G,slots=4,maxmem=4G -numa node,mem=1G -numa node,mem=1G > which generated 'mem' here distributes boot memory specified by "-m 2G" and does not include memory specified by -device pc-dimm. > [...] > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x-0x0009] > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x0010-0x3fff] > [0.00] ACPI: SRAT: Node 1 PXM 1 [mem 0x4000-0x7fff] > [0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x1-0x27fff] hotplug > [0.00] NUMA: Node 0 [mem 0x-0x0009] + [mem > 0x0010-0x3fff] -> [mem 0x-0x3fff] > [0.00] NODE_DATA(0) allocated [mem 0x3fffc000-0x3fff] > [0.00] NODE_DATA(1) allocated [mem 0x7ffdc000-0x7ffd] > [0.00] Zone ranges: > [0.00] DMA [mem 0x1000-0x00ff] > [0.00] DMA32[mem 0x0100-0x7ffd] > [0.00] Normal empty > [0.00] Movable zone start for each node > [0.00] Early memory node ranges > [0.00] node 0: [mem 0x1000-0x0009efff] > [0.00] node 0: [mem 0x0010-0x3fff] > [0.00] node 1: [mem 0x4000-0x7ffd] > > so there is neither any normal zone nor movable one at the boot time. 
it could be if hotpluggable memory were present at boot time in the E820 table (if I remember right when running on hyperv there is a movable zone at boot time), but in qemu hotpluggable memory isn't put into E820, so the zone is allocated later when memory is enumerated by the ACPI subsystem and onlined. It causes fewer issues wrt the movable zone and works for different versions of linux/windows as well. That's where in-kernel auto-onlining could also be useful, since the user would be able to start up with small non-removable memory plus several removable DIMMs and have all the memory onlined/available by the time initrd is loaded. (The missing piece here is onlining removable memory as movable by default.) > Then I hotplugged 1G slot > (qemu) object_add memory-backend-ram,id=mem1,size=1G > (qemu) device_add pc-dimm,id=dimm1,memdev=mem1 You can also specify the node a pc-dimm goes to with the 'node' property if it should go to a node other than node 0. device_add pc-dimm,id=dimm1,memdev=mem1,node=1 > unfortunately the memory didn't show up automatically and I got > [ 116.375781] acpi PNP0C80:00: Enumeration failure it should work, do you have CONFIG_ACPI_HOTPLUG_MEMORY enabled? > so I had to probe it manually (probably the BIOS my qemu uses doesn't > support auto probing - I haven't really dug further). Anyway the SRAT > table printed during the boot told that we should start at 0x1
Re: [Xen-devel] Xen 4.6.5 released
On 13/03/17 09:24, Jan Beulich wrote: On 10.03.17 at 18:22,wrote: >> On 08.03.2017 13:54, Jan Beulich wrote: >>> All, >>> >>> I am pleased to announce the release of Xen 4.6.5. This is >>> available immediately from its git repository >>> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.6 >>> (tag RELEASE-4.6.5) or from the XenProject download page >>> http://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-465.html >>> (where a list of changes can also be found). >>> >>> We recommend all users of the 4.6 stable series to update to this >>> latest point release. >> This does not seem to compile for me (x86_64) without the attached >> (admittedly >> brutish) change. > I guess it's the emulator test code which has a problem here (I > did notice this myself), but that doesn't get built by default (and > I see no reason why anyone would want to build it when putting > together packages for people to consume - this is purely a dev > tool). Please clarify. These tools are all built automatically. Therefore, build fixes should be backported. To avoid building them, you need to set CONFIG_TESTS := n in the root .config file to override the default in Config.mk ~Andrew
[Xen-devel] [ovmf test] 106629: regressions - FAIL
flight 106629 ovmf real [real] http://logs.test-lab.xenproject.org/osstest/logs/106629/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 105963 test-amd64-i386-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 105963 version targeted for testing: ovmf e5735b98c2da8b4eeed36edfbec58a55ca3d236b baseline version: ovmf e0307a7dad02aa8c0cd8b3b0b9edce8ddb3fef2e Last test of basis 105963 2017-02-21 21:43:31 Z 19 days Failing since105980 2017-02-22 10:03:53 Z 19 days 53 attempts Testing same since 106629 2017-03-13 04:26:54 Z0 days1 attempts People who touched revisions under test: Anthony PERARDArd Biesheuvel Bi, Dandan Brijesh Singh Chao Zhang Chen A Chen Dandan Bi edk2-devel On Behalf Of rthomaiy <[mailto:edk2-devel-boun...@lists.01.org]> Fu Siyuan Hao Wu Hegde Nagaraj P Hess Chen Jeff Fan Jiaxin Wu Jiewen Yao Laszlo Ersek Leo Duran Paolo Bonzini Qin Long Richard Thomaiyar Ruiyu Ni Star Zeng Wu Jiaxin Yonghong Zhu Zhang Lubo Zhang, Chao B jobs: build-amd64-xsm pass build-i386-xsm pass build-amd64 pass build-i386 pass build-amd64-libvirt pass build-i386-libvirt pass build-amd64-pvopspass build-i386-pvops pass test-amd64-amd64-xl-qemuu-ovmf-amd64 fail test-amd64-i386-xl-qemuu-ovmf-amd64 fail sg-report-flight on osstest.test-lab.xenproject.org logs: /home/logs/logs images: /home/logs/images Logs, config files, etc. are available at http://logs.test-lab.xenproject.org/osstest/logs Explanation of these reports, and of osstest in general, is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master Test harness code can be found at http://xenbits.xen.org/gitweb?p=osstest.git;a=summary Not pushing. (No revision log; it would be 3847 lines long.) ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v2 21/21] x86/xen: rename some PV-only functions in smp_pv.c
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > After code split between PV and HVM some functions in xen_smp_ops have > xen_pv_ prefix and some only xen_ which makes them look like they're > common for both PV and HVM while they're not. Rename all the rest to > have xen_pv_ prefix. > > Signed-off-by: Vitaly Kuznetsov > --- > - This patch is rather a matter of taste and it makes code archeology > slightly harder, we may consider dropping it from the series. I'm fine with this change. Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 20/21] x86/xen: enable PVHVM-only builds
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > Now everything is in place and we can move PV-only code under > CONFIG_XEN_PV. CONFIG_XEN_PV_SMP is created to support the change. > > Signed-off-by: Vitaly Kuznetsov Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 19/21] xen: create xen_create/destroy_contiguous_region() stubs for PVHVM only builds
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > xen_create_contiguous_region()/xen_destroy_contiguous_region() are PV-only, > they both contain a xen_feature(XENFEAT_auto_translated_physmap) check and > bail in the very beginning. > > Signed-off-by: Vitaly Kuznetsov Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 18/21] xen/balloon: decorate PV-only parts with #ifdef CONFIG_XEN_PV
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > The balloon driver uses several PV-only concepts (xen_start_info, > xen_extra_mem, ...) and it seems the simplest solution to make the HVM-only > build happy is to decorate these parts with #ifdefs. > > Signed-off-by: Vitaly Kuznetsov Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 17/21] x86/xen: create stubs for HVM-only builds in page.h
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > __pfn_to_mfn() is only used from PV code (mmu_pv.c, p2m.c) and from > page.h where all functions calling it check for > xen_feature(XENFEAT_auto_translated_physmap) first so we can replace > it with any stub to make the build happy. > > set_foreign_p2m_mapping()/clear_foreign_p2m_mapping() are used from > grant-table.c but only if !xen_feature(XENFEAT_auto_translated_physmap). > > Signed-off-by: Vitaly Kuznetsov Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 16/21] x86/xen: define startup_xen for XEN PV only
On 02/03/17 18:53, Vitaly Kuznetsov wrote: > startup_xen references PV-only code, decorate it with #ifdef CONFIG_XEN_PV > to make PV-free builds possible. > > Signed-off-by: Vitaly Kuznetsov Reviewed-by: Juergen Gross Juergen
Re: [Xen-devel] [PATCH v2 13/21] x86/xen: split off mmu_pv.c
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Basically, mmu.c is renamed to mmu_pv.c and some code moved out to common
> mmu.c.
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] [PATCH v2 12/21] x86/xen: split off mmu_hvm.c
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Move PVHVM related code to mmu_hvm.c.
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] [PATCH v2 11/21] x86/xen: split off smp_pv.c
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Basically, smp.c is renamed to smp_pv.c and some code moved out to common
> smp.c. The struct xen_common_irq declaration ended up in smp.h.
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] Xen 4.6.5 released
>>> On 10.03.17 at 18:22, wrote:
> On 08.03.2017 13:54, Jan Beulich wrote:
>> All,
>>
>> I am pleased to announce the release of Xen 4.6.5. This is
>> available immediately from its git repository
>> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.6
>> (tag RELEASE-4.6.5) or from the XenProject download page
>> http://www.xenproject.org/downloads/xen-archives/xen-46-series/xen-465.html
>> (where a list of changes can also be found).
>>
>> We recommend all users of the 4.6 stable series to update to this
>> latest point release.
>
> This does not seem to compile for me (x86_64) without the attached
> (admittedly brutish) change.

I guess it's the emulator test code which has a problem here (I did notice
this myself), but that doesn't get built by default (and I see no reason why
anyone would want to build it when putting together packages for people to
consume - this is purely a dev tool). Please clarify.

Jan
Re: [Xen-devel] [OSSTEST PATCH] ts-xtf-run: Understand ./xtf-runner returning CRASH
On Tue, Mar 07, 2017 at 03:26:52PM +, Andrew Cooper wrote:
> ./xtf-runner wants to distinguish between clean and unclean exits of the
> test. OSSTest doesn't care, so map unclean exit to failure.
>
> Signed-off-by: Andrew Cooper

Reviewed-by: Wei Liu
Re: [Xen-devel] WTH is going on with memory hotplug sysf interface (was: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks)
On Fri 10-03-17 13:00:37, Reza Arbab wrote:
> On Fri, Mar 10, 2017 at 04:53:33PM +0100, Michal Hocko wrote:
> >OK, so while I was playing with this setup some more I probably got why
> >this is done this way. All new memblocks are added to the zone Normal
> >where they are accounted as spanned but not present.
>
> It's not always zone Normal. See zone_for_memory(). This leads to a
> workaround for having to do online_movable in descending block order.
> Instead of this:
>
> 1. probe block 34, probe block 33, probe block 32, ...
> 2. online_movable 34, online_movable 33, online_movable 32, ...
>
> you can online_movable the first block before adding the rest:

How do I enforce that behavior when the probe happens automagically?

> 1. probe block 32, online_movable 32
> 2. probe block 33, probe block 34, ...
>    - zone_for_memory() will cause these to start Movable
> 3. online 33, online 34, ...
>    - they're already in Movable, so online_movable is equivalent
>
> I agree with your general sentiment that this stuff is very nonintuitive.

My criterion for nonintuitive is probably different, because I would call
this _completely_unusable_. Sorry for being so loud about this, but the more
I look into this area the more WTF code I see. This has seen close to zero
review and seems to be building up more single-usecase code on top of the
previous. We need to change this, seriously!
--
Michal Hocko
SUSE Labs
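The ordering workaround Reza describes can be written out as a shell session. This is a hypothetical sketch: the block numbers and physical addresses are made up for illustration, and the real probe address comes from the machine's SRAT hotplug range.

```shell
# Probe the first new block and online it movable by hand.
echo 0x100000000 > /sys/devices/system/memory/probe
echo online_movable > /sys/devices/system/memory/memory32/state

# Later probes now start out in ZONE_MOVABLE via zone_for_memory(),
# so a plain "online" is equivalent to "online_movable":
echo 0x108000000 > /sys/devices/system/memory/probe
echo online > /sys/devices/system/memory/memory33/state
```

Michal's objection stands, though: nothing enforces this ordering when udev onlines blocks automatically as they are probed.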
Re: [Xen-devel] WTH is going on with memory hotplug sysf interface
On Fri 10-03-17 12:39:27, Yasuaki Ishimatsu wrote: > On 03/10/2017 08:58 AM, Michal Hocko wrote: [...] > >OK so I did with -m 2G,slots=4,maxmem=4G -numa node,mem=1G -numa node,mem=1G > >which generated > >[...] > >[0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x-0x0009] > >[0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x0010-0x3fff] > >[0.00] ACPI: SRAT: Node 1 PXM 1 [mem 0x4000-0x7fff] > >[0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x1-0x27fff] hotplug > >[0.00] NUMA: Node 0 [mem 0x-0x0009] + [mem > >0x0010-0x3fff] -> [mem 0x-0x3fff] > >[0.00] NODE_DATA(0) allocated [mem 0x3fffc000-0x3fff] > >[0.00] NODE_DATA(1) allocated [mem 0x7ffdc000-0x7ffd] > >[0.00] Zone ranges: > >[0.00] DMA [mem 0x1000-0x00ff] > >[0.00] DMA32[mem 0x0100-0x7ffd] > >[0.00] Normal empty > >[0.00] Movable zone start for each node > >[0.00] Early memory node ranges > >[0.00] node 0: [mem 0x1000-0x0009efff] > >[0.00] node 0: [mem 0x0010-0x3fff] > >[0.00] node 1: [mem 0x4000-0x7ffd] > > > >so there is neither any normal zone nor movable one at the boot time. > >Then I hotplugged 1G slot > >(qemu) object_add memory-backend-ram,id=mem1,size=1G > >(qemu) device_add pc-dimm,id=dimm1,memdev=mem1 > > > >unfortunatelly the memory didn't show up automatically and I got > >[ 116.375781] acpi PNP0C80:00: Enumeration failure > > > >so I had to probe it manually (prbably the BIOS my qemu uses doesn't > >support auto probing - I haven't really dug further). Anyway the SRAT > >table printed during the boot told that we should start at 0x1 > > > ># echo 0x1 > /sys/devices/system/memory/probe > ># grep . /sys/devices/system/memory/memory32/valid_zones > >Normal Movable > > > >which looks reasonably right? Both Normal and Movable zones are allowed > > > ># echo $((0x1+(128<<20))) > /sys/devices/system/memory/probe > ># grep . 
/sys/devices/system/memory/memory3?/valid_zones
> >/sys/devices/system/memory/memory32/valid_zones:Normal
> >/sys/devices/system/memory/memory33/valid_zones:Normal Movable
> >
> >Huh, so our valid_zones have changed under our feet...
> >
> ># echo $((0x1+2*(128<<20))) > /sys/devices/system/memory/probe
> ># grep . /sys/devices/system/memory/memory3?/valid_zones
> >/sys/devices/system/memory/memory32/valid_zones:Normal
> >/sys/devices/system/memory/memory33/valid_zones:Normal
> >/sys/devices/system/memory/memory34/valid_zones:Normal Movable
> >
> >and again. So only the last memblock is considered movable. Let's try to
> >online them now.
> >
> ># echo online_movable > /sys/devices/system/memory/memory34/state
> ># grep . /sys/devices/system/memory/memory3?/valid_zones
> >/sys/devices/system/memory/memory32/valid_zones:Normal
> >/sys/devices/system/memory/memory33/valid_zones:Normal Movable
> >/sys/devices/system/memory/memory34/valid_zones:Movable Normal
>
> I think there is no strong reason for the kernel to have this restriction.
> By setting the restrictions, it seems to have made management of
> these zone structs simple.

Could you be more specific please? How could this make management any easier
when udev is basically racing with the physical hotplug and the result is
basically undefined?
--
Michal Hocko
SUSE Labs
Re: [Xen-devel] [PATCH v2] xen: don't save/restore the physmap on VM save/restore
> -Original Message- > From: Igor Druzhinin > Sent: 10 March 2017 20:07 > To: sstabell...@kernel.org; Anthony Perard> Cc: Paul Durrant ; qemu-de...@nongnu.org; xen- > de...@lists.xenproject.org; Igor Druzhinin > Subject: [PATCH v2] xen: don't save/restore the physmap on VM > save/restore > > Saving/restoring the physmap to/from xenstore was introduced to > QEMU majorly in order to cover up the VRAM region restore issue. > The sequence of restore operations implies that we should know > the effective guest VRAM address *before* we have the VRAM region > restored (which happens later). Unfortunately, in Xen environment > VRAM memory does actually belong to a guest - not QEMU itself - > which means the position of this region is unknown beforehand and > can't be mapped into QEMU address space immediately. > > Previously, recreating xenstore keys, holding the physmap, by the > toolstack helped to get this information in place at the right > moment ready to be consumed by QEMU to map the region properly. > > The extraneous complexity of having those keys transferred by the > toolstack and unnecessary redundancy prompted us to propose a > solution which doesn't require any extra data in xenstore. The idea > is to defer the VRAM region mapping till the point we actually know > the effective address and able to map it. To that end, we initially > only register the pointer to the framebuffer without actual mapping. > Then, during the memory region restore phase, we perform the mapping > of the known address and update the VRAM region metadata (including > previously registered pointer) accordingly. 
> > Signed-off-by: Igor Druzhinin > --- > v2: > * Fix some building and coding style issues > --- > exec.c | 3 ++ > hw/display/vga.c | 2 +- > include/hw/xen/xen.h | 2 +- > xen-hvm-stub.c | 2 +- > xen-hvm.c| 114 > --- > 5 files changed, 33 insertions(+), 90 deletions(-) > > diff --git a/exec.c b/exec.c > index aabb035..5f2809e 100644 > --- a/exec.c > +++ b/exec.c > @@ -2008,6 +2008,9 @@ void *qemu_map_ram_ptr(RAMBlock *ram_block, > ram_addr_t addr) > } > > block->host = xen_map_cache(block->offset, block->max_length, 1); > +if (block->host == NULL) { > +return NULL; > +} I don't think this is right. Callers of this function do not seem to ever expect it to fail. Specifically the call to memory_region_get_ram_ptr() made by vga_common_init() just stashes the pointer as the VRAM base and never checks its validity. Anyway, if you modify do this code, you should cc the appropriate maintainers (Guest CPU cores) and justify why you need to. > } > return ramblock_ptr(block, addr); > } > diff --git a/hw/display/vga.c b/hw/display/vga.c > index 69c3e1d..be554c2 100644 > --- a/hw/display/vga.c > +++ b/hw/display/vga.c > @@ -2163,7 +2163,7 @@ void vga_common_init(VGACommonState *s, > Object *obj, bool global_vmstate) > memory_region_init_ram(>vram, obj, "vga.vram", s->vram_size, > _fatal); > vmstate_register_ram(>vram, global_vmstate ? 
NULL : DEVICE(obj)); > -xen_register_framebuffer(>vram); > +xen_register_framebuffer(>vram, >vram_ptr); > s->vram_ptr = memory_region_get_ram_ptr(>vram); > s->get_bpp = vga_get_bpp; > s->get_offsets = vga_get_offsets; > diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h > index 09c2ce5..3831843 100644 > --- a/include/hw/xen/xen.h > +++ b/include/hw/xen/xen.h > @@ -45,6 +45,6 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t > size, > struct MemoryRegion *mr, Error **errp); > void xen_modified_memory(ram_addr_t start, ram_addr_t length); > > -void xen_register_framebuffer(struct MemoryRegion *mr); > +void xen_register_framebuffer(struct MemoryRegion *mr, uint8_t **ptr); > > #endif /* QEMU_HW_XEN_H */ > diff --git a/xen-hvm-stub.c b/xen-hvm-stub.c > index c500325..c89065e 100644 > --- a/xen-hvm-stub.c > +++ b/xen-hvm-stub.c > @@ -46,7 +46,7 @@ qemu_irq *xen_interrupt_controller_init(void) > return NULL; > } > > -void xen_register_framebuffer(MemoryRegion *mr) > +void xen_register_framebuffer(MemoryRegion *mr, uint8_t **ptr) > { > } > > diff --git a/xen-hvm.c b/xen-hvm.c > index 5043beb..270cd99 100644 > --- a/xen-hvm.c > +++ b/xen-hvm.c > @@ -41,6 +41,7 @@ > > static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi; > static MemoryRegion *framebuffer; > +static uint8_t **framebuffer_ptr; > static bool xen_in_migration; > > /* Compatibility with older version */ > @@ -302,7 +303,6 @@ static hwaddr xen_phys_offset_to_gaddr(hwaddr > start_addr, > return physmap->start_addr; > } > } > - Pure whitespace fix. Needs to either be called out in the commit comment or separated. The rest of the
[Xen-devel] [libvirt test] 106628: regressions - FAIL
flight 106628 libvirt real [real] http://logs.test-lab.xenproject.org/osstest/logs/106628/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-armhf-armhf-libvirt-xsm 5 xen-install fail REGR. vs. 106608 Regressions which are regarded as allowable (not blocking): test-armhf-armhf-libvirt 13 saverestore-support-checkfail like 106608 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail like 106608 Tests which did not succeed, but are not blocking: test-arm64-arm64-libvirt-xsm 1 build-check(1) blocked n/a build-arm64-libvirt 1 build-check(1) blocked n/a test-arm64-arm64-libvirt-qcow2 1 build-check(1) blocked n/a test-arm64-arm64-libvirt 1 build-check(1) blocked n/a build-arm64-pvops 5 kernel-build fail never pass test-amd64-i386-libvirt 12 migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 12 migrate-support-checkfail never pass test-amd64-amd64-libvirt 12 migrate-support-checkfail never pass test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail never pass build-arm64-xsm 5 xen-buildfail never pass build-arm64 5 xen-buildfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail never pass test-armhf-armhf-libvirt 12 migrate-support-checkfail never pass test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail never pass version targeted for testing: libvirt b17bb828380d19bf57e280c91b71e2813256b8c7 baseline version: libvirt 321ff4087cd731b2a1eddff38f9ef288d6922201 Last test of basis 106608 2017-03-12 04:20:49 Z1 days Testing same since 106628 2017-03-13 04:23:43 Z0 days1 attempts People who touched revisions under test: Fabian FreyerRoman Bogorodskiy jobs: build-amd64-xsm pass build-arm64-xsm fail build-armhf-xsm pass build-i386-xsm pass build-amd64 pass build-arm64 fail build-armhf pass 
build-i386 pass build-amd64-libvirt pass build-arm64-libvirt blocked build-armhf-libvirt pass build-i386-libvirt pass build-amd64-pvopspass build-arm64-pvopsfail build-armhf-pvopspass build-i386-pvops pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass test-amd64-amd64-libvirt-xsm pass test-arm64-arm64-libvirt-xsm blocked test-armhf-armhf-libvirt-xsm fail test-amd64-i386-libvirt-xsm pass test-amd64-amd64-libvirt pass test-arm64-arm64-libvirt blocked test-armhf-armhf-libvirt pass test-amd64-i386-libvirt pass test-amd64-amd64-libvirt-pairpass test-amd64-i386-libvirt-pair pass test-arm64-arm64-libvirt-qcow2 blocked test-armhf-armhf-libvirt-raw pass test-amd64-amd64-libvirt-vhd pass sg-report-flight on osstest.test-lab.xenproject.org logs: /home/logs/logs images: /home/logs/images Logs, config files, etc. are available at http://logs.test-lab.xenproject.org/osstest/logs Explanation of these reports, and of osstest in general, is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
[Xen-devel] [linux-linus test] 106625: regressions - FAIL
flight 106625 linux-linus real [real] http://logs.test-lab.xenproject.org/osstest/logs/106625/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-armhf-armhf-xl 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-xsm 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-libvirt-xsm 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-cubietruck 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-libvirt 11 guest-start fail REGR. vs. 59254 test-amd64-amd64-xl-pvh-intel 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-arndale 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-credit2 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-multivcpu 11 guest-start fail REGR. vs. 59254 Regressions which are regarded as allowable (not blocking): test-amd64-amd64-xl-rtds 9 debian-installfail REGR. vs. 59254 test-armhf-armhf-xl-rtds 11 guest-start fail REGR. vs. 59254 test-armhf-armhf-xl-vhd 9 debian-di-install fail baseline untested test-armhf-armhf-libvirt-raw 9 debian-di-install fail baseline untested test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 59254 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 59254 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 59254 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 59254 Tests which did not succeed, but are not blocking: test-arm64-arm64-libvirt-xsm 1 build-check(1) blocked n/a test-arm64-arm64-xl 1 build-check(1) blocked n/a build-arm64-libvirt 1 build-check(1) blocked n/a test-arm64-arm64-libvirt-qcow2 1 build-check(1) blocked n/a test-arm64-arm64-libvirt 1 build-check(1) blocked n/a test-arm64-arm64-xl-credit2 1 build-check(1) blocked n/a test-arm64-arm64-xl-rtds 1 build-check(1) blocked n/a test-arm64-arm64-xl-multivcpu 1 build-check(1) blocked n/a test-arm64-arm64-xl-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvh-amd 11 guest-start fail never pass test-amd64-i386-libvirt 12 
migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 12 migrate-support-checkfail never pass build-arm64-xsm 5 xen-buildfail never pass test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail never pass test-amd64-amd64-libvirt 12 migrate-support-checkfail never pass build-arm64 5 xen-buildfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail never pass test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass version targeted for testing: linux56b24d1bbcff213dc9e1625eea5b8e13bb50feb8 baseline version: linux45820c294fe1b1a9df495d57f40585ef2d069a39 Last test of basis59254 2015-07-09 04:20:48 Z 613 days Failing since 59348 2015-07-10 04:24:05 Z 612 days 332 attempts Testing same since 106625 2017-03-13 00:18:13 Z0 days1 attempts 8063 people touched revisions under test, not listing them all jobs: build-amd64-xsm pass build-arm64-xsm fail build-armhf-xsm pass build-i386-xsm pass build-amd64 pass build-arm64 fail build-armhf pass build-i386 pass build-amd64-libvirt pass build-arm64-libvirt blocked build-armhf-libvirt pass build-i386-libvirt pass build-amd64-pvopspass build-arm64-pvopspass build-armhf-pvopspass build-i386-pvops pass build-amd64-rumprun pass
Re: [Xen-devel] [PATCH v2 10/21] x86/xen: split off smp_hvm.c
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Move PVHVM related code to smp_hvm.c. Drop the 'static' qualifier from
> xen_smp_send_reschedule(), xen_smp_send_call_function_ipi() and
> xen_smp_send_call_function_single_ipi(); these functions will be moved to
> common smp code when smp_pv.c is split.
>
> Signed-off-by: Vitaly Kuznetsov

One nit below, with this addressed:

Reviewed-by: Juergen Gross

> ---
> arch/x86/xen/Kconfig | 4
> arch/x86/xen/Makefile | 1 +
> arch/x86/xen/smp.c | 57 +++--
> arch/x86/xen/smp.h | 3 +++
> arch/x86/xen/smp_hvm.c | 58 ++
> 5 files changed, 69 insertions(+), 54 deletions(-)
> create mode 100644 arch/x86/xen/smp_hvm.c
>
> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
> index a059adb..bf36e79 100644
> --- a/arch/x86/xen/smp.h
> +++ b/arch/x86/xen/smp.h
> @@ -14,6 +14,9 @@ extern void xen_smp_intr_free(unsigned int cpu);
> extern int xen_smp_intr_init_pv(unsigned int cpu);
> extern void xen_smp_intr_free_pv(unsigned int cpu);
>
> +extern void xen_smp_send_reschedule(int cpu);
> +extern void xen_smp_send_call_function_ipi(const struct cpumask *mask);
> +extern void xen_smp_send_call_function_single_ipi(int cpu);

Could you please drop the "extern" qualifier when adding new function
prototypes? I know this just follows the style of the file, but I'd prefer
not to add new instances.

Juergen
Re: [Xen-devel] [PATCH v2 09/21] x86/xen: split xen_cpu_die()
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Split xen_cpu_die() into xen_pv_cpu_die() and xen_hvm_cpu_die() to support
> further splitting of smp.c.
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] [PATCH v2 08/21] x86/xen: split xen_smp_prepare_boot_cpu()
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> Split xen_smp_prepare_boot_cpu() into xen_pv_smp_prepare_boot_cpu() and
> xen_hvm_smp_prepare_boot_cpu() to support further splitting of smp.c.
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] [PATCH v2 07/21] x86/xen: split xen_smp_intr_init()/xen_smp_intr_free()
On 02/03/17 18:53, Vitaly Kuznetsov wrote:
> xen_smp_intr_init() and xen_smp_intr_free() have PV-specific code and as
> a preparatory change to splitting smp.c we need to split these functions.
> Create xen_smp_intr_init_pv()/xen_smp_intr_free_pv().
>
> Signed-off-by: Vitaly Kuznetsov

Reviewed-by: Juergen Gross

Juergen
Re: [Xen-devel] [PATCH net v4] xen-netback: fix race condition on XenBus disconnect
From: Igor Druzhinin
Date: Fri, 10 Mar 2017 21:36:22 +

> In some cases during XenBus disconnect event handling and subsequent
> queue resource release there may be some TX handlers active on
> other processors. Use RCU in order to synchronize with them.
>
> Signed-off-by: Igor Druzhinin

Applied, thanks.
[Xen-devel] [Question] About the behavior of HLT in VMX guest mode
Hi guys,

I'm confused about the behavior of the HLT instruction in VMX guest mode. I
set the "HLT exiting" bit to 0 in the VMCS, and the vCPU didn't vmexit when
executing HLT, as expected. However, when I used powertop/cpupower on the
host to watch the pCPU's C-states, it seems the pCPU didn't enter the C1/C1E
state during this period.

I searched the Intel SDM vol. 3 and only found that a guest MWAIT won't
enter a low-power sleep state under certain conditions (ch 25.3), but HLT is
not mentioned. My questions are:

1) Does executing the HLT instruction in guest mode not enter the C1/C1E
state?
2) If it doesn't, will it still release the hardware resources shared with
the other hyper-thread?

Any suggestion would be greatly appreciated, thanks!
--
Regards,
Longpeng(Mike)