memory addresses and
> sizes, including those that reproduce the edge case bug:
>
> * 4K granule and 0 min_align_mask
> * 4K granule and 0xFFF min_align_mask (4K - 1)
> * 16K granule and 0xFFF min_align_mask
> * 64K granule and 0xFFF min_align_mask
> * 64K granule and 0x3FFF min_al
From: Petr Tesařík Sent: Monday, April 15, 2024 5:50 AM
>
> On Mon, 15 Apr 2024 12:23:22 +0000
> Michael Kelley wrote:
>
> > From: Petr Tesařík Sent: Monday, April 15, 2024 4:46 AM
> > >
> > > Hi Michael,
> > >
> > > sorry for
From: Petr Tesařík Sent: Monday, April 15, 2024 4:46 AM
>
> Hi Michael,
>
> sorry for taking so long to answer. Yes, there was no agreement on the
> removal of the "dir" parameter, but I'm not sure it's because of
> symmetry with swiotlb_sync_*(), because the topic was not really
> discussed.
>
From: Petr Tesařík Sent: Monday, July 10, 2023 2:36 AM
>
> On Sat, 8 Jul 2023 15:18:32 +0000
> "Michael Kelley (LINUX)" wrote:
>
> > From: Petr Tesařík Sent: Friday, July 7, 2023 3:22 AM
> > >
> > > On Fri, 7 Jul 2023 10:29:00 +0100
> > &g
From: Petr Tesařík Sent: Friday, July 7, 2023 3:22 AM
>
> On Fri, 7 Jul 2023 10:29:00 +0100
> Greg Kroah-Hartman wrote:
>
> > On Thu, Jul 06, 2023 at 02:22:50PM +0000, Michael Kelley (LINUX) wrote:
> > > From: Greg Kroah-Hartman Sent: Thursday,
From: Greg Kroah-Hartman Sent: Thursday, July 6,
2023 1:07 AM
>
> On Thu, Jul 06, 2023 at 03:50:55AM +, Michael Kelley (LINUX) wrote:
> > From: Petr Tesarik Sent: Tuesday, June 27,
> > 2023
> 2:54 AM
> > >
> > > Try to allocate a transient memory
From: Petr Tesarik Sent: Tuesday, June 27, 2023
2:54 AM
>
> Try to allocate a transient memory pool if no suitable slots can be found,
> except when allocating from a restricted pool. The transient pool is just
> big enough for this one bounce buffer. It is inserted into a per-device
> list of
From: Thomas Gleixner Sent: Saturday, May 6, 2023 9:23 AM
>
> On Sat, May 06 2023 at 00:53, Michael Kelley wrote:
> > From: Thomas Gleixner Sent: Thursday, May 4, 2023
> > 12:03 PM
> > [snip]
> >
> >> @@ -934,10 +961,10 @@ static void announce_cpu(
pu() fall apart.
>
> [ mikelley: Reported the announce_cpu() fallout
>
> Originally-by: David Woodhouse
> Signed-off-by: Thomas Gleixner
> ---
> V2: Fixup announce_cpu() - Michael Kelley
> ---
> arch/x86/Kconfig |3 +
> arch/x86/kernel/cpu/common.c |
I've tested the full v6 series in a normal Hyper-V guest and in an SEV-SNP
guest.
In the SNP guest, the page attributes in /sys/kernel/debug/x86/pat_memtype_list
are "write-back" in the expected cases. The "mtrr" x86 feature no longer
appears
in the "flags" output of "lscpu" or /proc/cpuinfo. /proc/mtrr does not exist,
again
as expected.
In a normal VM, the "mtrr" x86 feature appears in the flags, and /proc/mtrr
shows expected values. The boot option mtrr=debug works as expected.
Tested-by: Michael Kelley
CPUs
[0.467036] smpboot: Max logical packages: 1
[0.468035] smpboot: Total of 32 processors activated (153240.06 BogoMIPS)
The function announce_cpu() is specifically testing for CPU #1 to output the
"Booting SMP configuration" message. In a Hyper-V guest, CPU #1 is the second
SMT thread in a core, so it isn't started until all the even-numbered CPUs are
started.
I don't know if this cosmetic issue is worth fixing, but I thought I'd point it
out.
In any case,
Tested-by: Michael Kelley
| 52 +++
> 13 files changed, 454 insertions(+), 252 deletions(-)
>
> --
> 2.35.3
I've tested a Linux 6.2 kernel plus this series in a normal Hyper-V
guest and in a Hyper-V guest using SEV-SNP with vTOM. MMIO
memory is correctly mapped as WB or UC- depending on the
request, which fixes the original problem introduced for Hyper-V
by the Xen-specific change.
Tested-by: Michael Kelley
the following registration fail.
>*/
> - hv_ctl_table_hdr = register_sysctl_table(hv_root_table);
> + hv_ctl_table_hdr = register_sysctl("kernel", hv_ctl_table);
> if (!hv_ctl_table_hdr)
> pr_err("Hyper-V: sysctl table register error");
>
> --
> 2.39.1
Reviewed-by: Michael Kelley
I've come across a case with a VM running on Hyper-V that doesn't get
MTRRs, but the PAT is functional. (This is a Confidential VM using
AMD's SEV-SNP encryption technology with the vTOM option.) In this
case, the changes in commit 72cbc8f04fe2 ("x86/PAT: Have pat_enabled()
properly reflect
u32 slot, u32 vector, u8 vector_count)
> {
> int cpu;
> @@ -1697,7 +1697,7 @@ static void hv_compose_msi_msg(struct irq_data *data,
> struct msi_msg *msg)
> struct hv_pci_dev *hpdev;
> struct pci_bus *pbus;
> struct pci_dev *pdev;
> - struc
From: Guilherme G. Piccoli Sent: Friday, April 29, 2022
3:35 PM
>
> Hi Michael, first of all thanks for the great review, much appreciated.
> Some comments inline below:
>
> On 29/04/2022 14:16, Michael Kelley (LINUX) wrote:
> > [...]
> >> hypervisor I/O completio
From: Guilherme G. Piccoli Sent: Friday, April 29, 2022
11:04 AM
>
> On 29/04/2022 14:30, Michael Kelley (LINUX) wrote:
> > From: Guilherme G. Piccoli Sent: Wednesday, April 27,
> > 2022
> 3:49 PM
> >> [...]
> >>
> >> @@ -2843
From: Guilherme G. Piccoli Sent: Friday, April 29, 2022
1:38 PM
>
> On 29/04/2022 14:53, Michael Kelley (LINUX) wrote:
> > From: Guilherme G. Piccoli Sent: Wednesday, April 27,
> > 2022
> 3:49 PM
> >> [...]
> >> + panic_notifiers_level=
> >>
From: Guilherme G. Piccoli Sent: Wednesday, April 27,
2022 3:49 PM
>
> The panic() function is somewhat convoluted - a lot of changes were
> made over the years, adding comments that might be misleading/outdated
> now, it has a code structure that is a bit complex to follow, with
> lots of
S. Miller"
> Cc: Dexuan Cui
> Cc: Doug Berger
> Cc: Evan Green
> Cc: Florian Fainelli
> Cc: Haiyang Zhang
> Cc: Hari Bathini
> Cc: Heiko Carstens
> Cc: Julius Werner
> Cc: Justin Chen
> Cc: "K. Y. Srinivasan"
> Cc: Lee Jones
> Cc: Markus
is a
dependency on Patch 14 of your series where PANIC_NOTIFIER is
introduced.
> Cc: Andrea Parri (Microsoft)
> Cc: Dexuan Cui
> Cc: Haiyang Zhang
> Cc: "K. Y. Srinivasan"
> Cc: Michael Kelley
> Cc: Stephen Hemminger
> Cc: Tianyu Lan
> Cc: Wei Liu
> Test
From: Guilherme G. Piccoli Sent: Wednesday, April 27,
2022 3:49 PM
>
> Currently we have a debug infrastructure in the notifiers file, but
> it's very simple/limited. This patch extends it by:
>
> (a) Showing all registered/unregistered notifiers' callback names;
>
> (b) Adding a dynamic
From: Guilherme G. Piccoli Sent: Wednesday, April 27,
2022 3:49 PM
>
> Currently the regular CPU shutdown path for ARM disables IRQs/FIQs
> in the secondary CPUs - smp_send_stop() calls ipi_cpu_stop(), which
> is responsible for that. This makes sense, since we're turning off
> such CPUs,
From: Christoph Hellwig Sent: Sunday, April 3, 2022 10:06 PM
>
> Pass a bool to pass if swiotlb needs to be enabled based on the
Wording problems. I'm not sure what you meant to say.
> addressing needs and replace the verbose argument with a set of
> flags, including one to force enable
From: Dongli Zhang Sent: Friday, March 4, 2022 10:28
AM
>
> Hi Michael,
>
> On 3/4/22 10:12 AM, Michael Kelley (LINUX) wrote:
> > From: Christoph Hellwig Sent: Tuesday, March 1, 2022 2:53 AM
> >>
> >> Power SVM wants to allocate a swiotlb buffer that i
From: Christoph Hellwig Sent: Tuesday, March 1, 2022 2:53 AM
>
> Power SVM wants to allocate a swiotlb buffer that is not restricted to low
> memory for
> the trusted hypervisor scheme. Consolidate the support for this into the
> swiotlb_init
> interface by adding a new flag.
Hyper-V
From: Christoph Hellwig Sent: Monday, February 28, 2022 3:31 AM
>
> On Mon, Feb 28, 2022 at 02:53:39AM +, Michael Kelley (LINUX) wrote:
> > From: Christoph Hellwig Sent: Sunday, February 27, 2022 6:31
> > AM
> > >
> > > Pass a bool to pass
From: Christoph Hellwig Sent: Sunday, February 27, 2022 6:31 AM
>
> Pass a bool to pass if swiotlb needs to be enabled based on the
> addressing needs and replace the verbose argument with a set of
> flags, including one to force enable bounce buffering.
>
> Note that this patch removes the
One
is a Generation 1 VM that has legacy PCI devices and one is a Generation 2
VM with no legacy PCI devices. Tested hot add and remove of Mellanox
CX-3 and CX-4 SR-IOV NIC virtual functions that are directly mapped into the
VM. Also tested local NVMe devices directly mapped into one of the VMs.
No issues encountered. So for Azure/Hyper-V specifically,
Tested-by: Michael Kelley
From: Tianyu Lan Sent: Wednesday, December 1, 2021 8:03 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer
From: Tianyu Lan Sent: Wednesday, December 1, 2021 8:03 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() stills need
From: Tianyu Lan Sent: Tuesday, November 23, 2021 6:31 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() stills need
From: Tianyu Lan Sent: Tuesday, November 23, 2021 6:31 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() stills need
From: Tianyu Lan Sent: Tuesday, November 23, 2021 6:31 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs
From: Tianyu Lan Sent: Tuesday, November 23, 2021 6:31 AM
>
> In Isolation VM with AMD SEV, bounce buffer needs to be accessed via
> extra address space which is above shared_gpa_boundary (E.G 39 bit
> address line) reported by Hyper-V CPUID ISOLATION_CONFIG. The access
> physical address will
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39
AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() stills
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> mpb_desc() still needs
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> In Isolation VM with AMD SEV, bounce buffer needs to be accessed via
> extra address space which is above shared_gpa_boundary
> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
> The access physical address will
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with AMD SEV SNP, the access
> address should be in
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> Hyperv provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV SNP
> and these registers are emulated by hypervisor directly.
> Hyperv requires to write SINTx MSR registers twice.
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v4
> * Change gpadl handle in netvsc and uio driver from u32 to
> struct
From: Tianyu Lan Sent: Thursday, September 2, 2021 6:36 AM
>
> On 9/2/2021 8:23 AM, Michael Kelley wrote:
> >> + } else {
> >> + pages_wraparound = kcalloc(page_cnt * 2 - 1,
> >> +
From: Christoph Hellwig Sent: Thursday, September 2, 2021 1:00 AM
>
> On Tue, Aug 31, 2021 at 05:16:19PM +, Michael Kelley wrote:
> > As a quick overview, I think there are four places where the
> > shared_gpa_boundary must be applied to adjust the guest physical
> &
From: Michael Kelley Sent: Wednesday, September 1,
2021 7:34 PM
[snip]
> > +int netvsc_dma_map(struct hv_device *hv_dev,
> > + struct hv_netvsc_packet *packet,
> > + struct hv_page_buffer *pb)
> > +{
> > + u32 pa
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV SNP
> and these registers are emulated by hypervisor directly.
> Hyperv requires to write SINTx MSR registers twice.
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_
> pagebuffer() stills need to
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Per previous comment, the Subject line tag should be "scsi: storvsc: "
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject tag should be "Drivers: hv: vmbus: "
> VMbus ring buffer are shared with host and it's need to
> be accessed via extra address space of Isolation VM with
> AMD SNP support. This patch is to map the ring buffer
> address in extra
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should be "Drivers: hv: vmbus:"
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should probably be "x86/hyperv:" since the majority
of the code added is under arch/x86.
> hyperv provides ghcb hvcall to handle VMBus
> HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
> msg in SNP Isolation VM. Add such
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v3:
>* Change vmbus_teardown_gpadl() parameter and put gpadl handle,
>buffer
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Add new hvcall guest address host visibility support to mark
> memory visible to host. Call it inside set_memory_decrypted
> /encrypted(). Add HYPERVISOR feature check in the
> hv_is_isolation_supported() to optimize in
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyper-V exposes shared memory boundary via cpuid
> HYPERV_CPUID_ISOLATION_CONFIG and store it in the
> shared_gpa_boundary of ms_hyperv struct. This prepares
> to share memory with host for SNP guest.
>
> Signed-off-by: Tianyu Lan
>
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv exposes GHCB page via SEV ES GHCB MSR for SNP guest
> to communicate with hypervisor. Map GHCB page for all
> cpus to read/write MSR register and submit hvcall request
> via ghcb page.
>
> Signed-off-by: Tianyu Lan
> ---
>
From: Christoph Hellwig Sent: Monday, August 30, 2021 5:01 AM
>
> Sorry for the delayed answer, but I look at the vmap_pfn usage in the
> previous version and tried to come up with a better version. This
> mostly untested branch:
>
>
From: Tianyu Lan Sent: Friday, August 20, 2021 11:04 AM
>
> On 8/21/2021 12:08 AM, Michael Kelley wrote:
> >>>> }
> >>> The whole approach here is to do dma remapping on each individual page
> >>> of the I/O buffer. But wouldn't it be p
From: Tianyu Lan Sent: Friday, August 20, 2021 8:20 AM
>
> On 8/20/2021 2:17 AM, Michael Kelley wrote:
> > From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> >
> > I'm not clear on why payload->range.offset needs to be set again.
> > Even after the dma ma
From: h...@lst.de Sent: Thursday, August 19, 2021 9:33 PM
>
> On Thu, Aug 19, 2021 at 06:17:40PM +, Michael Kelley wrote:
> > >
> > > @@ -1824,6 +1848,13 @@ static int storvsc_queuecommand(struct Scsi_Host
> > > *host, struct scsi_cmnd *scmnd)
> >
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
Subject line tag should be "scsi: storvsc:"
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring buffer. The page buffer used by
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
The Subject line tag should be "hv_netvsc:".
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> VMbus ring buffer are shared with host and it's need to
s/it's need/it needs/
> be accessed via extra address space of Isolation VM with
> SNP support. This patch is to map the ring buffer
> address in extra address space via
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V provides two kinds of Isolation VMs. VBS(Virtualization-based
> security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
> is to add support for these Isolation VM support in Linux.
>
A general comment about this
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with AMD SEV SNP, the access
> address should be in the
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V provides ghcb hvcall to handle VMBus
> HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
> msg in SNP Isolation VM. Add such support.
>
> Signed-off-by: Tianyu Lan
> ---
> arch/x86/hyperv/ivm.c | 43
From: Michael Kelley Sent: Friday, August 13, 2021
12:31 PM
> To: Tianyu Lan ; KY Srinivasan ;
> Haiyang Zhang ;
> Stephen Hemminger ; wei@kernel.org; Dexuan Cui
> ;
> t...@linutronix.de; mi...@redhat.com; b...@alien8.de; x...@kernel.org;
> h...@zytor.com; dave.han...@li
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 05/13] HV: Add Write/Read MSR registers via ghcb page
See previous comments about tag in the Subject line.
> Hyper-V provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 04/13] HV: Mark vmbus ring buffer visible to host in
> Isolation VM
>
Use tag "Drivers: hv: vmbus:" in the Subject line.
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
>
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
[snip]
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index ad8a5c586a35..1e4a0882820a 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -29,6 +29,8 @@
> #include
>
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host
> visibility support
Use "x86/hyperv:" tag in the Subject line.
>
> From: Tianyu Lan
>
> Add new hvcall guest address host visibility support to mark
> memory visible
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 02/13] x86/HV: Initialize shared memory boundary in the
> Isolation VM.
As with Patch 1, use the "x86/hyperv:" tag in the Subject line.
>
> From: Tianyu Lan
>
> Hyper-V exposes shared memory boundary via cpuid
>
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 01/13] x86/HV: Initialize GHCB page in Isolation VM
The subject line tag on patches under arch/x86/hyperv is generally
"x86/hyperv:".
There's some variation in the spelling of "hyperv", but let's go with the all
ome simplification of the pvops implementation.
>
> Signed-off-by: Juergen Gross
> ---
> V4:
> - drop paravirt_time.h again
> - don't move Hyper-V code (Michael Kelley)
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/mshyperv.h
From: Juergen Gross Sent: Thursday, December 17, 2020 1:31 AM
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
>
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification
From: Wei Liu On Behalf Of Wei Liu
[snip]
> diff --git a/xen/arch/x86/guest/hyperv/util.c
> b/xen/arch/x86/guest/hyperv/util.c
> new file mode 100644
> index 00..0abb37b05f
> --- /dev/null
> +++ b/xen/arch/x86/guest/hyperv/util.c
> @@ -0,0 +1,74 @@
>
From: Wei Liu On Behalf Of Wei Liu Sent: Friday,
February 14, 2020 4:35 AM
>
> Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
> of several hypercalls:
>
> * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
> * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
> *
From: Jan Beulich Sent: Thursday, January 23, 2020 3:19 AM
>
> On 22.01.2020 21:23, Wei Liu wrote:
> > --- a/xen/arch/x86/e820.c
> > +++ b/xen/arch/x86/e820.c
> > @@ -36,6 +36,22 @@ boolean_param("e820-verbose", e820_verbose);
> > struct e820map e820;
> > struct e820map __initdata e820_raw;
>
From: Wei Liu On Behalf Of Wei Liu Sent: Wednesday,
January 22, 2020 12:24 PM
>
> Use the top-most addressable page for that purpose. Adjust e820 code
> accordingly.
>
> We also need to register Xen's guest OS ID to Hyper-V. Use 0x300 as the
> OS type.
>
> Signed-off-by: Wei Liu
> ---
> XXX
From: Wei Liu Sent: Tuesday, January 7, 2020 8:34 AM
>
> On Mon, Jan 06, 2020 at 11:27:18AM +0100, Jan Beulich wrote:
> > On 05.01.2020 17:47, Wei Liu wrote:
> > > Hyper-V's input / output argument must be 8 bytes aligned an not cross
> > > page boundary. The easiest way to satisfy those
From: Wei Liu On Behalf Of Wei Liu Sent: Sunday,
December 29, 2019 10:34 AM
>
> VP assist page is rather important as we need to toggle some bits in
> that page such that L1 guest can make hypercalls directly to L0 Hyper-V.
>
> Preemptively split out set_vp_assist page which will be used in
From: Wei Liu On Behalf Of Wei Liu Sent: Sunday,
December 29, 2019 10:34 AM
>
> Signed-off-by: Wei Liu
> ---
> xen/arch/x86/guest/hyperv/hyperv.c | 41 +++---
> 1 file changed, 38 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c
>
From: Durrant, Paul Sent: Wednesday, December 18, 2019
7:24 AM
> > From: Wei Liu On Behalf Of Wei Liu
> > Sent: 18 December 2019 14:43
[snip]
> > +
> > +static inline uint64_t read_hyperv_timer(void)
> > +{
> > +uint64_t scale, offset, ret, tsc;
> > +uint32_t seq;
> > +const
trace/hyperv.h | 2 +-
> arch/x86/kernel/kvm.c | 11 +--
> arch/x86/kernel/paravirt.c| 2 +-
> arch/x86/mm/tlb.c | 47 ++-
> arch/x86/xen/mmu_pv.c | 11 +++
> inc