> On 02/05/2019 09:14, Jan Beulich wrote:
> On 01.05.19 at 13:17, wrote:
>>> We appear to have implemented a memcpy() in the low-memory trampoline
>>> which we then call into from __start_xen(), for no adequately defined
>>> reason.
>> May I suggest that in cases like this you look at the
Argh, that's the first version again. Sorry. The fixed version is in
http://git.infradead.org/users/dwmw2/xen.git/shortlog/refs/heads/bootcleanup
but I won't post the whole series again right now.
with no valid reason for reloc() to be running this
early, so I may well kill it with fire too. I just need to find a
safe location for the 16-bit boot code.
v2: Fix wake code. Thanks Andy for testing.
David Woodhouse (7):
x86/wakeup: Stop using %fs for lidt/lgdt
x86/boot: Remove gratuitous call
From: David Woodhouse
Ditch the bootsym() access from C code for the variables populated by
16-bit boot code. As well as being cleaner this also paves the way for
not having the 16-bit boot code in low memory for no-real-mode or EFI
loader boots at all.
Signed-off-by: David Woodhouse
---
xen
From: David Woodhouse
If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/head.S | 10 ++
xen/arch/x86/boot/trampoline.S | 4
2 files changed, 10
From: David Woodhouse
As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into
three separate groups.
• bootsym() is now used only at boot time when no-real-mode isn't set.
• bootdatasym
From: David Woodhouse
We appear to have implemented a memcpy() in the low-memory trampoline
which we then call into from __start_xen(), for no adequately defined
reason.
Kill it with fire.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/mem.S | 27 +--
xen
From: David Woodhouse
In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most are boot-only, so do
the plain search/replace that way first; a subsequent patch will then
extract the permanent trampoline code.
Signed-off-by: David
From: David Woodhouse
Where booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image.
For now, the boot code (including the EFI loader path) still
From: David Woodhouse
The wakeup code is now relocated alongside the trampoline code, so %ds
is just fine here.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/wakeup.S | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch
On Wed, 2019-05-01 at 17:09 +0100, Andrew Cooper wrote:
> I'm afraid testing says no. S3 works fine without this change, and
> resets with it.
Thanks for testing. That's obvious in retrospect — although the wakeup
code is relocated alongside the trampoline code, it still runs in real
mode with its own segment setup. We'd need to pick a location for the
trampoline, though, which doesn't already contain anything that the
bootloader created for us.
In fact, isn't there already a chance that head.S will choose a location
for the trampoline which is already part of a module or contains one of
the Multiboot breadcrumbs?
David Woodhouse (7):
x86/wakeup: Stop
> On 27/04/2019 07:15, David Woodhouse wrote:
>> I've been looking at kexec into Xen, and from Xen.
>>
>> Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
>> image as relocatable. So it loads it at address zero, which causes lots
>> of amusement
On Sat, 2019-04-27 at 08:15 +0200, David Woodhouse wrote:
> I've been looking at kexec into Xen, and from Xen.
>
> Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
> image as relocatable. So it loads it at address zero, which causes lots
> of amusement:
>
I've been looking at kexec into Xen, and from Xen.
Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
image as relocatable. So it loads it at address zero, which causes lots
of amusement:
Firstly, head.S trusts the low memory limit found in the BDA, which has
been scribbled on.
On Mon, 2016-08-08 at 18:54 +, Trammell Hudson wrote:
> Keir Fraser replied to Ward's follow up question:
>
> > > Is there a significant difference between booting 3.1.4 and
> > > 3.2.1 with kexec in terms of BIOS requirements?
> >
> > If you specify no-real-mode in both cases then there
> >
On Mon, 2019-03-04 at 15:46 +, Wei Liu wrote:
> To me it is just a bit weird to guard with cur_depth -- if you really
> want to continue at all cost, why don't you make it really continue at
> all cost?
There isn't another early exit from the loop. It really does continue
at all costs.
The
On Mon, 2019-03-04 at 15:51 +0100, Juergen Gross wrote:
> On 04/03/2019 15:31, David Woodhouse wrote:
> > On Mon, 2019-03-04 at 14:18 +, Wei Liu wrote:
> > > CC Ian as well.
> > >
> > > It would be better if you run ./scripts/get_maintainers.pl on
> >
On Mon, 2019-03-04 at 14:18 +, Wei Liu wrote:
> CC Ian as well.
>
> It would be better if you run ./scripts/get_maintainers.pl on your
> patches in the future to CC the correct people.
Will do; thanks.
> On Fri, Mar 01, 2019 at 12:16:56PM +, David Woodhouse wrote:
From: David Woodhouse
When recursing, a node sometimes disappears. Deal with it and move on
instead of aborting and failing to print the rest of what was
requested.
Signed-off-by: David Woodhouse
---
And thus did an extremely sporadic "not going to delete that device
because it al
st to zero the user %gs in the multicall too.
Signed-off-by: David Woodhouse
---
v2: Don't accidentally remove the call to xen_mc_batch().
arch/x86/include/asm/xen/hypercall.h | 11
arch/x86/xen/enlighten_pv.c | 40 ++--
2 files changed, 43 insertions(+), 8
st to zero the user %gs in the multicall too.
Signed-off-by: David Woodhouse
---
arch/x86/include/asm/xen/hypercall.h | 11
arch/x86/xen/enlighten_pv.c | 42 +---
2 files changed, 43 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/xen/hype
On Fri, 2018-12-07 at 12:18 +, David Woodhouse wrote:
>
> > #else
> > + struct multicall_space mc = __xen_mc_entry(0);
> > + MULTI_set_segment_base(mc.mc, SEGBASE_GS_USER_SEL, 0);
> > +
> > loadsegment(fs, 0);
> >
On Thu, 2018-12-06 at 20:27 +, David Woodhouse wrote:
> On Thu, 2018-12-06 at 10:49 -0800, Andy Lutomirski wrote:
> > > On Dec 6, 2018, at 9:36 AM, Andrew Cooper <
> > > andrew.coop...@citrix.com> wrote:
> > > Basically - what is happening is that
On Thu, 2018-12-06 at 10:49 -0800, Andy Lutomirski wrote:
> > On Dec 6, 2018, at 9:36 AM, Andrew Cooper wrote:
> > Basically - what is happening is that xen_load_tls() is invalidating the
> > %gs selector while %gs is still non-NUL.
> >
> > If this happens to intersect with a vcpu reschedule,
On Wed, 2018-11-28 at 08:44 -0800, Andy Lutomirski wrote:
> > Can we assume it's always from kernel? The Xen code definitely seems to
> > handle invoking this from both kernel and userspace contexts.
>
> I learned that my comment here was wrong shortly after the patch landed :(
Turns out the
On Wed, 2018-08-22 at 09:19 +0200, gre...@linuxfoundation.org wrote:
> This is a note to let you know that I've just added the patch titled
>
> x86/entry/64: Remove %ebx handling from error_entry/exit
>
> to the 4.9-stable tree which can be found at:
>
>
On Mon, 2018-11-19 at 08:05 +0100, Juergen Gross wrote:
> On 15/11/2018 00:22, David Woodhouse wrote:
> > On Thu, 2018-11-08 at 11:18 +0100, Juergen Gross wrote:
> > > Oh, sorry. Of course it does. Dereferencing a percpu variable
> > > directly
On Thu, 2018-11-08 at 11:18 +0100, Juergen Gross wrote:
> Oh, sorry. Of course it does. Dereferencing a percpu variable
> directly can't work. How silly of me.
>
> The attached variant should repair that. Tested to not break booting.
Strictly speaking, shouldn't you have an atomic_init() in
> The Xen HV is doing it right. It is blocking the vcpu in do_poll() and
> any interrupt will unblock it.
Great. Thanks for the confirmation.
--
dwmw2
On Wed, 2018-10-10 at 14:30 +0200, Thomas Gleixner wrote:
> On Wed, 10 Oct 2018, David Woodhouse wrote:
>
> > On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> > > - /* If irq pending already clear it and return. */
> > > +
On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> - /* If irq pending already clear it and return. */
> + /* Guard against reentry. */
> + local_irq_save(flags);
> +
> + /* If irq pending already clear it. */
> if (xen_test_irq_pending(irq)) {
>
On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> The Xen specific queue spinlock wait function has two issues which
> could result in a hanging system.
>
> They have a similar root cause of clearing a pending wakeup of a
> waiting vcpu and later going to sleep waiting for the just
On Wed, 2018-09-05 at 10:40 +, Paul Durrant wrote:
>
> Actually the neatest approach would be to get information into the
> vlapic code as to whether APIC assist is suitable for the given
> vector so that the code there can selectively enable it, and then Xen
> would know it was safe to avoid
On Wed, 2018-09-05 at 09:36 +, Paul Durrant wrote:
>
> I see. Given that Windows has used APIC assist to circumvent its EOI
> then I wonder whether we can get away with essentially doing the
> same. I.e. for a completed APIC assist found in
> vlapic_has_pending_irq() we simply clear the APIC
On Mon, 2018-09-03 at 10:12 +, Paul Durrant wrote:
>
> I believe APIC assist is intended for fully synthetic interrupts.
Hm, if by 'fully synthetic interrupts' you mean
vlapic_virtual_intr_delivery_enabled(), then no I think APIC assist
doesn't get used in that case at all.
> Is it
On Thu, 2018-01-18 at 10:10 -0500, Paul Durrant wrote:
> Lastly the previous code did not properly emulate an EOI if a missed EOI
> was discovered in vlapic_has_pending_irq(); it merely cleared the bit in
> the ISR. The new code instead calls vlapic_EOI_set().
Hm, this *halves* my observed
On Wed, 2018-08-08 at 10:35 -0700, Sarah Newman wrote:
> commit b3681dd548d06deb2e1573890829dff4b15abf46 upstream.
>
> This version applies to v4.9.
I think you can kill the 'xorl %ebx,%ebx' from error_entry too but yes,
this does want to go to 4.9 and earlier because the 'Fixes:' tag is a
bit
On Mon, 2018-05-21 at 14:10 +0200, Roger Pau Monné wrote:
>
> Hm, I think I might have fixed this issue, see:
>
> https://git.qemu.org/?p=qemu.git;a=commit;h=a8036336609d2e184fc3543a4c439c0ba7d7f3a2
Hm, thanks. We'll look at porting that change to qemu-traditional which
still doesn't do it.
On Tue, 2016-01-26 at 09:34 +0800, Jianzhong,Chang wrote:
> There are some problems when msi guest_masked is set to 1 by default.
> When guest os is windows 2008 r2 server,
> the device(eg X540-AT2 vf) is not initialized correctly.
> Host will always receive message like this :"VF Reset msg
On Fri, 2018-01-12 at 18:00 +, Andrew Cooper wrote:
> +#ifdef CONFIG_INDIRECT_THUNK
> + /* callq __x86_indirect_thunk_rcx */
> + ctxt->io_emul_stub[10] = 0xe8;
> + *(int32_t *)&ctxt->io_emul_stub[11] =
> + (unsigned long)__x86_indirect_thunk_rcx - (stub_va + 11 + 4);
> +
> +#else
Is
On Wed, 2018-01-24 at 13:49 +, Andrew Cooper wrote:
> On 24/01/18 13:34, Woodhouse, David wrote:
> > I am loath to suggest *more* tweakables, but given the IBPB cost is
> > there any merit in having a mode which does it only if the *domain* is
> > different, regardless of vcpu_id?
>
> This
On Mon, 2018-01-22 at 10:18 +, Andrew Cooper wrote:
> On 22/01/2018 10:04, David Woodhouse wrote:
> >
> > On Thu, 2018-01-04 at 00:15 +, Andrew Cooper wrote:
> > >
> > > --- a/xen/include/asm-x86/asm_defns.h
> > > +++ b/xen/include/asm-x86/asm_
On Thu, 2018-01-04 at 00:15 +, Andrew Cooper wrote:
>
> --- a/xen/include/asm-x86/asm_defns.h
> +++ b/xen/include/asm-x86/asm_defns.h
> @@ -217,22 +217,34 @@ static always_inline void stac(void)
> addq $-(UREGS_error_code-UREGS_r15), %rsp
> cld
> movq
On Fri, 2018-01-19 at 08:07 -0700, Jan Beulich wrote:
> > > > On 19.01.18 at 15:48, wrote:
> > vcpu pointers are rather more susceptible to false aliasing in the case
> > that the 4k memory allocation behind struct vcpu gets reused.
> >
> > The risks are admittedly
On Wed, 2018-01-17 at 18:26 +0100, David Woodhouse wrote:
>
> > In both switching to idle, and back to the vCPU, we should hit this
> > case and not the 'else' case that does the IBPB:
> >
> > 1710 if ( (per_cpu(curr_vcpu, cpu) == next) ||
> > 17
On Mon, 2018-01-15 at 22:39 +0100, David Woodhouse wrote:
> On Mon, 2018-01-15 at 14:23 +0100, David Woodhouse wrote:
> >
> >
> > >
> > > >
> > > >
> > > > Also... if you're doing that in context_switch() does it do the right
&
On Fri, 2018-01-12 at 18:00 +, Andrew Cooper wrote:
>
> @@ -152,14 +163,38 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr,
> uint64_t val)
> {
> const struct vcpu *curr = current;
> struct domain *d = v->domain;
> + const struct cpuid_policy *cp = d->arch.cpuid;
> struct
On Mon, 2018-01-15 at 14:23 +0100, David Woodhouse wrote:
>
> > >
> > > Also... if you're doing that in context_switch() does it do the right
> > > thing with idle? If a CPU switches to the idle domain and then back
> > > again to the same vCPU, does
On Mon, 2018-01-15 at 13:02 +, Andrew Cooper wrote:
> On 15/01/18 12:54, David Woodhouse wrote:
> >
> > On Fri, 2018-01-12 at 18:01 +, Andrew Cooper wrote:
> > >
> > > @@ -1736,6 +1736,9 @@ void context_switch(struct vc
On Fri, 2018-01-12 at 18:01 +, Andrew Cooper wrote:
>
> @@ -1736,6 +1736,9 @@ void context_switch(struct vcpu *prev, struct
> vcpu *next)
> }
>
> ctxt_switch_levelling(next);
> +
> + if ( opt_ibpb )
> + wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
> }
>
If
On Fri, 2018-01-12 at 18:00 +, Andrew Cooper wrote:
>
> +.macro IND_THUNK_RETPOLINE reg:req
> + call 2f
> +1:
Linux and GCC have now settled on 'pause;lfence;jmp' here.
> + lfence
> + jmp 1b
> +2:
> + mov %\reg, (%rsp)
> + ret
> +.endm
> +
On Thu, 2018-01-11 at 13:41 +, Andrew Cooper wrote:
> On 11/01/18 13:03, David Woodhouse wrote:
> >
> > On Thu, 2018-01-04 at 00:15 +, Andrew Cooper wrote:
> > >
> > > + * We've got no usable stack so can't use a RETPOLINE thunk, and
> &g
On Thu, 2018-01-04 at 00:15 +, Andrew Cooper wrote:
> + * We've got no usable stack so can't use a RETPOLINE thunk, and are
> + * further than +- 2G from the high mappings so couldn't use
> JUMP_THUNK
> + * even if it was a non-RETPOLINE thunk. Furthermore, an LFENCE