... and always zero the LDT for HVM contexts. This causes erroneous execution
which manages to reference the LDT to fail with a straight #GP fault, rather
than possibly finding a stale LDT still loaded and wandering into the #PF
handler.
Future changes will cause the loading of the LDT to be lazy, at which point
Windows is the only OS which pages out kernel data structures, so chances are
good that this is a vestigial remnant of the PV Windows XP experiment.
Furthermore, the implementation is incomplete; it only functions for a present
=> not-present transition, rather than a present => read/write
The existing translation area claims to be 2 frames and a guard page, but is
actually 4 frames with no guard page at all.
Allocate 2 frames in the percpu area, which actually has unmapped frames on
either side.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/smpboot.c
flight 117601 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117601/
Failures and problems with tests :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-libvirt-xsm broken
flight 117597 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117597/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemuu-win10-i386 broken
test-amd64-amd64-xl-credit2
On Thu, Jan 04, 2018 at 12:15:44AM +, Andrew Cooper wrote:
> Contemporary processors are gaining Indirect Branch Controls via microcode
> updates. Intel are introducing one bit to indicate IBRS and IBPB support, and
> a second bit for STIBP. AMD are introducing IBPB only, so enumerate it
The PERCPU linear range lives in slot 257, and all later slots slide along to
make room. The size of the directmap is reduced by one slot temporarily.
Later changes will remove the PERDOMAIN slot, at which point the latter slots
will slide back to fill the hole, and end up where they are now.
Like the mapcache region, we need an L1e which is modifiable in the context
switch code.
The Xen-reserved GDT frames are proactively mapped for the benefit of future
changes to the AP boot path.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/smpboot.c | 21
This change also introduces _alter_percpu_mappings(), a helper for creating
and modifying percpu mappings. The code will be extended with extra actions
in later patches.
The existing IDT heap allocation and idt_tables[] array are kept, although the
allocation logic is simplified as an IDT is
A number of hypercalls and softirq tasks pass small stack buffers via IPI.
These operate sequentially on a single CPU, so introduce a shared PER_CPU
buffer for use. Access to the buffer is via get_smp_ipi_buf(), which performs
a range check at compile time.
Signed-off-by: Andrew Cooper
On 01/04/2018 06:52 AM, Anthony PERARD wrote:
> On Wed, Jan 03, 2018 at 05:10:54PM -0600, Kevin Stange wrote:
>> On 01/03/2018 11:57 AM, Anthony PERARD wrote:
>>> On Wed, Dec 20, 2017 at 11:40:03AM -0600, Kevin Stange wrote:
Hi,
I've been working on transitioning a number of Windows
On 01/04/2018 07:26 AM, Paul Durrant wrote:
>> -Original Message-
>> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
>> Of Anthony PERARD
>> Sent: 04 January 2018 12:52
>> To: Kevin Stange
>> Cc: George Dunlap ; xen-
Future changes will alter the conditions under which we expect to take faults.
One adjustment however is to exclude the use of this fixup path for non-PV
guests. Well-formed code shouldn't reference the LDT while in HVM vcpu
context, but currently on a context switch from PV to HVM context,
There are two reasons:
1) To stop using the per-domain range for the mapcache
2) To make map_domain_page() safe to use during context switches
The new implementation is entirely percpu and rather more simple. See the
comment at the top of domain_page.c for a description of the algorithm.
This will be used to remove the mapcache override/current vcpu mechanism when
reworking map_domain_page() to be safe in the middle of context switches.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/mm.c| 11 +++
xen/arch/x86/setup.c | 2 ++
With the mapcache, xlat and GDT/LDT moved over to the PERCPU mappings, there
are no remaining users of the PERDOMAIN mappings. Drop the whole PERDOMAIN
infrastructure, and remove the PERDOMAIN slot in the virtual address layout.
Slide each of the subsequent slots back by one, and extend the
Keyhandlers for the following:
'1' - Walk idle_pg_table[]
'2' - Walk each percpu_mappings
'3' - Dump PT shadow stats
---
xen/arch/x86/hvm/save.c| 4 -
xen/arch/x86/mm/p2m-ept.c | 5 +-
xen/arch/x86/pv/pt-shadow.c| 19
xen/arch/x86/traps.c |
This is unfortunately quite invasive, because of the impact on the context
switch path.
PV vcpus gain an array of ldt and gdt ptes (replacing gdt_frames[]), which map
the frames loaded by HYPERCALL_set_gdt, or faulted in for the LDT. Each
present PTE here which isn't a read-only mapping of
Signed-off-by: Andrew Cooper
---
v3:
* Switch to using a single structure per cpu, rather than multiple fields.
---
xen/arch/x86/pv/Makefile | 1 +
xen/arch/x86/pv/pt-shadow.c| 86 ++
xen/arch/x86/smpboot.c
Pagetables are allocated and freed along with the other smp data structures,
and the root of the pagetables is stored in the percpu_mappings variable.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/smpboot.c | 91 ++
This improves the shadowing performance substantially. In particular, system
calls for 64bit PV guests (which switch between the user and kernel
pagetables) no longer suffer a 4K copy hit in both directions.
See the code comments for reasoning and the algorithm description.
Signed-off-by:
The percpu fixmap range was introduced to allow opencoding of
map_domain_page() in the middle of a context switch.
The new implementation of map_domain_page() is safe to use in a context
switch, so drop the percpu fixmap infrastructure.
This removes the temporary build-time restriction on
With all CPUs using the same virtual stack mapping, the TSS rsp0/ist[0..2]
values are compile-time constant. Therefore, we can use a single read-only
TSS for the whole system.
To facilitate this, a new .rodata.page_aligned section needs introducing.
Signed-off-by: Andrew Cooper
With percpu stacks, it will not be safe to pass stack pointers. The logic in
machine_restart(), time_calibration() and set_mtrr() is singleton, so switch
to using static variables.
The set_mtrr_data is protected under the mtrr_mutex, which requires
mtrr_ap_init() and mtrr_aps_sync_end() to hold
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
xen/arch/x86/pv/mm.h | 19 ---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/pv/mm.h b/xen/arch/x86/pv/mm.h
index 7502d53..a10b09a 100644
---
This is very easy for the APs. __high_start() is modified to switch stacks
before entering C. The BSP however is more complicated, and needs to stay on
cpu0_stack[] until setup is complete.
The end of __start_xen() is modified to copy the top-of-stack data to the
percpu stack immediately before
This is required to implement an opencoded version of map_domain_page() during
context switch. It must fit within l1_fixmap[], which imposes an upper limit
on NR_CPUS.
The limit is currently 509, but will be lifted after later changes.
Signed-off-by: Andrew Cooper
TSS and IST settings are only required for safety when running userspace code.
Until we start executing dom0, the boot path is perfectly capable of handling
exceptions and interrupts without a loaded TSS.
Deferring the TSS setup is necessary to facilitate moving the BSP onto a
percpu stack, which
Signed-off-by: Andrew Cooper
---
xen/arch/x86/mm.c| 19 ++-
xen/arch/x86/setup.c | 1 +
xen/include/asm-x86/mm.h | 6 +-
3 files changed, 24 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index
There are some addresses which are not safe to pass as IPI parameters, as they
are not mapped on other cpus (or worse, mapped to something else). Introduce
an arch-specific audit hook which is used to check the parameter.
ARM has this stubbed to true, whereas x86 now excludes pointers in the
Construction of the TSS is the final action remaining in load_system_tables(),
and is lifted to early_switch_to_idle(). As a single global TSS is in use,
the per_cpu init_tss variable is dropped.
The setting of HOST_TR_BASE is now a constant, so moves to construct_vmcs().
This means that
This involves allocating a total of 5 frames, which need not come from a
single order-3 allocation, and unconditionally puts guard pages in place
against a primary stack overflow.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/smpboot.c | 27 ++-
Xen will need to track which %cr3 it is running on. Propagate a
tlb_maintenance parameter down into write_ptbase(), so toggle_guest_mode() can
retain its optimisation of not flushing global mappings and not ticking the
TLB clock.
Signed-off-by: Andrew Cooper
---
... and assert that it isn't changing under our feet. early_switch_to_idle()
is adjusted to set the shadow initially, when switching off idle_pg_table[].
EFI Runtime Service handling happens synchronously and under lock, so it
doesn't interact with this path.
Signed-off-by: Andrew Cooper
idle_pg_table[] needs all slots populated before it is copied to create the
vcpu idle pagetables. One missing slot is for MMCFG, which is now allocated
early.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/setup.c | 4 ++--
xen/arch/x86/x86_64/mm.c | 15
Move the existing stub allocation into the new function, and call it before
initialising the idle domain; eventually it will allocate the pagetables for
the idle vcpu to use.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
This work was developed as an SP3 mitigation, but shelved when it became clear
that it wasn't viable to get done in the timeframe.
To protect against SP3 attacks, most mappings need to be flushed while in
user context. However, to protect against all cross-VM attacks, it is
necessary to ensure
flight 117595 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117595/
Failures and problems with tests :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-i386-pvgrub broken
test-amd64-i386-xl
00] node 0: [mem 0x0010-0x7fff]
>> [0.00] Initmem setup node 0 [mem
>> 0x1000-0x7fff]
>> [0.00] On node 0 totalpages: 524181
>> [0.00] DMA zone: 64 pages used for memmap
>> [0.00
DMA-ing to the stack is generally considered bad practice. In this case, if a
timeout occurs because of a sluggish device which is processing the request,
the completion notification will corrupt the stack of a subsequent deeper call
tree.
Place the poll_slot in a percpu area and DMA to that
All alteration of IST settings (other than the crash path) happen in an
identical triple. Introduce helpers to keep the triple in sync, and reduce
the risk of opencoded mistakes.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/cpu/common.c | 4 +---
and move it into pv/descriptor-tables.c beside its GDT counterpart. Reduce
the !in_irq() check from a BUG_ON() to ASSERT().
Signed-off-by: Andrew Cooper
---
v2:
* New
---
xen/arch/x86/mm.c | 51 -
Introduce early_switch_to_idle() to replace the opencoded switching to idle
context in the BSP and AP boot paths, and extend it to switch away from
idle_pg_table[] as well.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
do_mca() makes several IPIs with huge parameter blocks. All operations are
control-plane, and for debugging/development purposes, so restrict them to
being serialised. This allows the hypercall parameter block to safely be
static.
Signed-off-by: Andrew Cooper
---
The ACPI idle driver uses an IPI to retrieve cpuid_ecx(5). This is
problematic because it uses a stack pointer, but also wasteful at runtime.
Introduce X86_FEATURE_XEN_MONITOR as a synthetic feature bit meaning MONITOR
&& EXTENSIONS && INTERRUPT_BREAK, and calculate it when a cpu comes up rather
The loading of IDTR is moved out of load_system_tables() and into
early_switch_to_idle().
One complication for the BSP is that IST references still need to remain
uninitialised until reinit_bsp_stack(). Therefore, early_switch_to_idle() is
extended to take a bsp boolean.
For VT-x guests,
Ensure the pagetables we are switching to have the correct percpu mappings in
them. The _PGC_inuse_pgtable check ensures that the pagetables we edit aren't
in use elsewhere.
One complication however is context switching between two vcpus which both
require shadowing. See the code comment for
When booting Xen via UEFI the Xen config file can contain multiple sections
each describing different boot options. It is currently only possible to choose
which section to boot with if the buffer contains a string. UEFI provides a
different standard to pass optional arguments to an application,
Greetings,
I am trying to modify Xen 4.8 to have it print out the opcode as well as
some registers of an HVM domU as it runs. I tried to modify
xen/arch/x86/hvm/emulate.c 's hvmemul_insn_fetch to output the content in
hvmemul_ctxt->insn_buf with printk. In hvmemul_insn_fetch, it seems that a
lot
flight 117609 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117609/
Failures and problems with tests :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-libvirt broken
flight 117607 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117607/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf broken
build-armhf 4
flight 117613 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117613/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-xtf-amd64-amd64-5 broken
test-amd64-amd64-xl-qemut-win10-i386
On 04/01/18 21:21, Andrew Cooper wrote:
> This work was developed as an SP3 mitigation, but shelved when it became clear
> that it wasn't viable to get done in the timeframe.
>
> To protect against SP3 attacks, most mappings need to be flushed while in
> user context. However, to protect
>>> On 04.01.18 at 01:15, wrote:
> Save all GPRs on entry to Xen.
>
> The entry_int82() path is via a DPL1 gate, only usable by 32bit PV guests, so
> can get away with only saving the 32bit registers. All other entrypoints can
> be reached from 32 or 64bit contexts.
>
>>> On 04.01.18 at 01:15, wrote:
> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -153,8 +153,28 @@ trampoline_protmode_entry:
> .code64
> start64:
> /* Jump to high mappings. */
> -movabs $__high_start,%rax
>>> On 04.01.18 at 01:15, wrote:
> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -32,7 +32,7 @@ enum ind_thunk {
> THUNK_LFENCE,
> THUNK_JMP,
> } opt_thunk __initdata = THUNK_DEFAULT;
> -int opt_ibrs __initdata = -1;
> +int opt_ibrs
>>> On 03.01.18 at 17:53, wrote:
> On Wed, Jan 3, 2018 at 9:36 AM, Jan Beulich wrote:
> On 03.01.18 at 17:04, wrote:
>>> On Wed, Jan 3, 2018 at 4:20 AM, Jan Beulich wrote:
>>> On 02.01.18 at 16:56,
>>> On 04.01.18 at 01:15, wrote:
> Use -mindirect-branch=thunk-extern/-mindirect-branch-register when available.
> To begin with, use the retpoline thunk. Later work will add alternative
> thunks which can be selected at boot time.
>
> Signed-off-by: Andrew Cooper
>>> On 04.01.18 at 01:15, wrote:
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -558,8 +558,41 @@ static void init_amd(struct cpuinfo_x86 *c)
> wrmsr_amd_safe(0xc001100d, l, h & ~1);
> }
>
> + /*
> + * Attempt
>>> On 04.01.18 at 01:15, wrote:
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -246,7 +246,7 @@ enough. Setting this to a high value may cause boot
> failure, particularly if
> the NMI watchdog is also enabled.
>
> ###
>>> On 04.01.18 at 01:15, wrote:
> @@ -31,6 +33,38 @@ static inline void init_shadow_spec_ctrl_state(void)
> info->shadow_spec_ctrl = info->use_shadow_spec_ctrl = 0;
> }
>
> +/* WARNING! `ret`, `call *`, `jmp *` not safe after this call. */
> +static
On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
> >>> On 28.12.17 at 13:57, wrote:
> > In case the vCPU has pending events to inject. This fixes a bug that
> > happened if the guest mapped the vcpu info area using
> > VCPUOP_register_vcpu_info without having
On Wed, Jan 03, 2018 at 10:00:51AM -0700, Jan Beulich wrote:
> >>> On 03.01.18 at 09:26, wrote:
> > @@ -7741,6 +7752,16 @@ x86_emulate(
> > op_bytes = 16;
> > goto simd_0f3a_common;
> >
> > +case X86EMUL_OPC_66(0x0f3a, 0xce): /* gf2p8affineqb
> >
>>> On 04.01.18 at 01:15, wrote:
> Nothing very interesting at the moment, but the logic will grow as new
> mitigations are added.
>
> Signed-off-by: Andrew Cooper
Acked-by: Jan Beulich
init_speculation_mitigations()
>>> On 04.01.18 at 01:15, wrote:
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -245,6 +245,20 @@ and not running softirqs. Reduce this if softirqs are
> not being run frequently
> enough. Setting this to a high value may
>>> On 04.01.18 at 01:15, wrote:
> @@ -31,11 +32,12 @@ enum ind_thunk {
> THUNK_LFENCE,
> THUNK_JMP,
> } opt_thunk __initdata = THUNK_DEFAULT;
> +int opt_ibrs __initdata = -1;
static
> @@ -147,6 +230,18 @@ void __init init_speculation_mitigations(void)
>
>>> On 04.01.18 at 01:15, wrote:
> Signed-off-by: Andrew Cooper
Fundamentally (as before)
Reviewed-by: Jan Beulich
However:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2027,6 +2027,25 @@ int
> -Original Message-
> From: Christoph Moench-Tegeder [mailto:c...@burggraben.net]
> Sent: 03 January 2018 20:34
> To: Paul Durrant
> Cc: 'Alex Braunegg' ; 'Michael Collins'
> ; 'Juergen Gross' ; xen-
>
>>> On 04.01.18 at 01:15, wrote:
> On contemporary hardware, setting IBRS/STIBP has a performance impact on
> adjacent hyperthreads. It is therefore recommended to clear the setting
> before becoming idle, to avoid an idle core preventing adjacent userspace
> execution
flight 117634 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117634/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass
test-arm64-arm64-xl-xsm
On Thu, Jan 4, 2018 at 8:00 AM, Jan Beulich wrote:
On 04.01.18 at 15:39, wrote:
>> On Thu, Jan 4, 2018 at 3:43 AM, Jan Beulich wrote:
>>> Just looking at the low bit of the first
>>> byte before assuming this could be a load option
>>> On 04.01.18 at 17:16, wrote:
> On Thu, Jan 4, 2018 at 8:00 AM, Jan Beulich wrote:
> On 04.01.18 at 15:39, wrote:
>>> On Thu, Jan 4, 2018 at 3:43 AM, Jan Beulich wrote:
Just looking at the low bit of
On Thu, Jan 4, 2018 at 9:25 AM, Jan Beulich wrote:
On 04.01.18 at 17:16, wrote:
>> On Thu, Jan 4, 2018 at 8:00 AM, Jan Beulich wrote:
>> On 04.01.18 at 15:39, wrote:
On Thu, Jan 4, 2018 at 3:43 AM,
>>> On 04.01.18 at 17:35, wrote:
> On Thu, Jan 4, 2018 at 9:25 AM, Jan Beulich wrote:
> On 04.01.18 at 17:16, wrote:
>>> On Thu, Jan 4, 2018 at 8:00 AM, Jan Beulich wrote:
>>> On 04.01.18 at 15:39,
On 12/26/2017 10:22 PM, David Miller wrote:
> From: Joao Martins
> Date: Thu, 21 Dec 2017 17:24:28 +
>
>> Commit eb1723a29b9a ("xen-netback: refactor guest rx") refactored Rx
>> handling and as a result decreased max grant copy ops from 4352 to 64.
>> Before this
>>> On 04.01.18 at 10:20, wrote:
> As for the test case for those insns, i am writing those related test
> cases in tools/tests/x86_emulator.
>
> How many test cases will you need ? One test case for one CPU
> feature(vaes,gfni and vpclmulqdq)?
My rule of thumb is
>>> On 04.01.18 at 11:49, wrote:
>> -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 04 January 2018 10:47
>> To: Paul Durrant
>> Cc: JulienGrall ; Andrew Cooper
>>
On Thu, Jan 04, 2018 at 03:53:39AM -0700, Jan Beulich wrote:
> >>> On 04.01.18 at 10:13, wrote:
> > On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
> >> >>> On 28.12.17 at 13:57, wrote:
> >> > In case the vCPU has pending events to inject.
>>> On 03.01.18 at 17:06, wrote:
>> -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 03 January 2018 15:48
>> To: Paul Durrant
>> Cc: JulienGrall ; Andrew Cooper
>>
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 04 January 2018 10:47
> To: Paul Durrant
> Cc: JulienGrall ; Andrew Cooper
> ; George Dunlap
> ; Ian Jackson
>>> On 04.01.18 at 10:13, wrote:
> On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
>> >>> On 28.12.17 at 13:57, wrote:
>> > In case the vCPU has pending events to inject. This fixes a bug that
>> > happened if the guest mapped the vcpu
] DMA32 zone: 8128 pages used for memmap
> [0.00] DMA32 zone: 520192 pages, LIFO batch:31
> [0.00] BUG: unable to handle kernel NULL pointer dereference at
> (null)
> [0.00] IP: zero_resv_unavail+0x8e/0xe1
> [0.00] PGD 0 P4D 0
> [0.00
On 01/03/2018 10:30 PM, Xen.org security team wrote:
> VULNERABLE SYSTEMS
> ==
>
> Systems running all versions of Xen are affected.
>
> For SP1 and SP2, both Intel and AMD are vulnerable.
>
> For SP3, only Intel processors are vulnerable. Furthermore, only
> 64-bit PV guests
>>> On 04.01.18 at 13:15, wrote:
> On Thu, Jan 04, 2018 at 05:10:52AM -0700, Jan Beulich wrote:
>> >>> On 04.01.18 at 12:37, wrote:
>> > On Thu, Jan 04, 2018 at 03:53:39AM -0700, Jan Beulich wrote:
>> >> >>> On 04.01.18 at 10:13,
In case the vCPU has pending events to inject. This fixes a bug that
happened if the guest mapped the vcpu info area using
VCPUOP_register_vcpu_info without having setup the event channel
upcall, and then setup the upcall vector.
In this scenario the guest would not receive any upcalls, because
>>> On 04.01.18 at 12:37, wrote:
> On Thu, Jan 04, 2018 at 03:53:39AM -0700, Jan Beulich wrote:
>> >>> On 04.01.18 at 10:13, wrote:
>> > On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
>> >> >>> On 28.12.17 at 13:57,
On Thu, Jan 04, 2018 at 05:10:52AM -0700, Jan Beulich wrote:
> >>> On 04.01.18 at 12:37, wrote:
> > On Thu, Jan 04, 2018 at 03:53:39AM -0700, Jan Beulich wrote:
> >> >>> On 04.01.18 at 10:13, wrote:
> >> > On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan
>>> On 04.01.18 at 13:11, wrote:
> In case the vCPU has pending events to inject. This fixes a bug that
> happened if the guest mapped the vcpu info area using
> VCPUOP_register_vcpu_info without having setup the event channel
> upcall, and then setup the upcall vector.
>
>
Hi all
This is a patch series to run PV guest inside a PVH container. The series is
still in a very RFC state. We're aware that some code is not very clean yet and
in the process of cleaning things up.
The series can be found at:
https://xenbits.xen.org/git-http/people/liuw/xen.git
Signed-off-by: Wei Liu
Reviewed-by: Andrew Cooper
---
tools/libxc/xc_dom_hvmloader.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/libxc/xc_dom_hvmloader.c b/tools/libxc/xc_dom_hvmloader.c
index 59f94e51e5..02c3eaef38 100644
---
From: Andrew Cooper
With CPUID Faulting offered to SVM guests, move Xen's faulting code to being
common rather than Intel specific.
This is necessary for nested Xen (inc. pv-shim mode) to prevent PV guests from
finding the outer HVM Xen leaves via native cpuid.
From: Andrew Cooper
This reduces the amount of line wrapping from guests; Xen in particular likes
to print lines longer than 80 characters.
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
---
xen/include/xen/sched.h
From: Andrew Cooper
CPUID Faulting can be virtualised for HVM guests without hardware support,
meaning it can be offered to SVM guests.
Signed-off-by: Andrew Cooper
---
xen/arch/x86/hvm/svm/svm.c | 6 ++
xen/arch/x86/msr.c | 3
From: George Dunlap
libxl will look for LIBXL_PVSHIM_PATH and LIBXL_PVSHIM_CMDLINE
environment variables. If the first is present, it will boot with the
shim and the existing kernel / ramdisk. (That is, the shim as the "kernel" and
the
kernel and ramdisk both as
flight 117590 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117590/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl broken
From: Ian Jackson
** NOTE: I intend to change the config names from "pvhshim" to "pvshim" **
Signed-off-by: Ian Jackson
---
docs/man/xl.cfg.pod.5.in | 28
tools/xl/xl_parse.c | 11 +++
2 files
From: Roger Pau Monne
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/pv/shim.c| 110 ++
xen/common/memory.c | 14 ++
xen/include/asm-x86/pv/shim.h | 10
3 files changed, 134
Signed-off-by: Wei Liu
Signed-off-by: Andrew Cooper
---
xen/arch/x86/Makefile| 1 +
xen/arch/x86/boot/head.S | 40 +++-
xen/arch/x86/boot/x86_64.S | 2 +-
xen/arch/x86/guest/Makefile |
From: Andrew Cooper
Signed-off-by: Andrew Cooper
---
docs/misc/xen-command-line.markdown | 11 ++
xen/arch/x86/Kconfig| 22 +++
xen/arch/x86/pv/Makefile| 1 +
xen/arch/x86/pv/shim.c
From: Roger Pau Monne
Note that the unmask and the virq operations are handled by the shim
itself, and that FIFO event channels are not exposed to the guest.
Signed-off-by: Anthony Liguori
Signed-off-by: Roger Pau Monné