Re: [PATCH v3] vhost-vdpa: fix page pinning leakage in error path (rework)

2020-11-25 Thread Michael S. Tsirkin
On Thu, Nov 05, 2020 at 06:26:33PM -0500, Si-Wei Liu wrote:
> Pinned pages are not properly accounted, particularly when a
> mapping error occurs on IOTLB update. Clean up dangling
> pinned pages for the error path.
> 
> The memory usage for bookkeeping pinned pages is reverted
> to what it was before: only a single free page is needed.
> This helps reduce the host memory demand for a VM with a
> large amount of memory, or when the host is running short
> of free memory.
> 
> Fixes: 4c8cf31885f6 ("vhost: introduce vDPA-based backend")
> Signed-off-by: Si-Wei Liu 


Not sure which tree this is against, I had to apply this with
minor tweaks. Pls take a look at the vhost tree and
let me know whether it looks ok to you.
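
For reference, a simplified standalone sketch of the pinning pattern the patch
switches to: pin the range in page-list sized chunks, hand each full chunk to
the mapping code, and on a short or failed pin release only the pages of the
partial chunk (chunks already handed out are torn down by the caller together
with their IOTLB ranges). The helper name is made up and the IOTLB mapping
step is elided; this is an illustration, not the patch code.

	static long pin_range_or_unwind(unsigned long uaddr, unsigned long npages,
					unsigned int gup_flags, struct page **page_list)
	{
		unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
		unsigned long done = 0;
		long pinned;

		while (done < npages) {
			unsigned long sz2pin = min_t(unsigned long, npages - done,
						     list_size);

			pinned = pin_user_pages(uaddr + (done << PAGE_SHIFT), sz2pin,
						gup_flags, page_list, NULL);
			if (pinned != sz2pin) {
				/*
				 * Release the partial chunk; earlier chunks were
				 * already consumed (mapped) and must be unmapped
				 * and unpinned by the caller.
				 */
				if (pinned > 0)
					unpin_user_pages(page_list, pinned);
				return pinned < 0 ? pinned : -ENOMEM;
			}
			/* ... map page_list into the IOTLB before reusing it ... */
			done += pinned;
		}
		return done;
	}

Note also how the hunk above moves the pinned_vm accounting to after a
successful vhost_vdpa_map(), so the error path never has to undo it.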

> ---
> Changes in v3:
> - Turn explicit last_pfn check to a WARN_ON() (Jason)
> 
> Changes in v2:
> - Drop the reversion patch
> - Fix unhandled page leak towards the end of page_list
> 
>  drivers/vhost/vdpa.c | 80 
> 
>  1 file changed, 62 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index b6d9016..5b13dfd 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -560,6 +560,8 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
>  
>   if (r)
>   vhost_iotlb_del_range(dev->iotlb, iova, iova + size - 1);
> + else
> + atomic64_add(size >> PAGE_SHIFT, &dev->mm->pinned_vm);
>  
>   return r;
>  }
> @@ -591,14 +593,16 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
>   unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
>   unsigned int gup_flags = FOLL_LONGTERM;
>   unsigned long npages, cur_base, map_pfn, last_pfn = 0;
> - unsigned long locked, lock_limit, pinned, i;
> + unsigned long lock_limit, sz2pin, nchunks, i;
>   u64 iova = msg->iova;
> + long pinned;
>   int ret = 0;
>  
>   if (vhost_iotlb_itree_first(iotlb, msg->iova,
>   msg->iova + msg->size - 1))
>   return -EEXIST;
>  
> + /* Limit the use of memory for bookkeeping */
>   page_list = (struct page **) __get_free_page(GFP_KERNEL);
>   if (!page_list)
>   return -ENOMEM;
> @@ -607,52 +611,75 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
>   gup_flags |= FOLL_WRITE;
>  
>   npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
> - if (!npages)
> - return -EINVAL;
> + if (!npages) {
> + ret = -EINVAL;
> + goto free;
> + }
>  
>   mmap_read_lock(dev->mm);
>  
> - locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
>   lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> -
> - if (locked > lock_limit) {
> + if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
>   ret = -ENOMEM;
> - goto out;
> + goto unlock;
>   }
>  
>   cur_base = msg->uaddr & PAGE_MASK;
>   iova &= PAGE_MASK;
> + nchunks = 0;
>  
>   while (npages) {
> - pinned = min_t(unsigned long, npages, list_size);
> - ret = pin_user_pages(cur_base, pinned,
> -  gup_flags, page_list, NULL);
> - if (ret != pinned)
> + sz2pin = min_t(unsigned long, npages, list_size);
> + pinned = pin_user_pages(cur_base, sz2pin,
> + gup_flags, page_list, NULL);
> + if (sz2pin != pinned) {
> + if (pinned < 0) {
> + ret = pinned;
> + } else {
> + unpin_user_pages(page_list, pinned);
> + ret = -ENOMEM;
> + }
>   goto out;
> + }
> + nchunks++;
>  
>   if (!last_pfn)
>   map_pfn = page_to_pfn(page_list[0]);
>  
> - for (i = 0; i < ret; i++) {
> + for (i = 0; i < pinned; i++) {
>   unsigned long this_pfn = page_to_pfn(page_list[i]);
>   u64 csize;
>  
>   if (last_pfn && (this_pfn != last_pfn + 1)) {
>   /* Pin a contiguous chunk of memory */
>   csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
> - if (vhost_vdpa_map(v, iova, csize,
> -map_pfn << PAGE_SHIFT,
> -msg->perm))
> + ret = vhost_vdpa_map(v, iova, csize,
> +  map_pfn << PAGE_SHIFT,
> +  msg->perm);
> + if (ret) {
> + /*
> +  * Un

Re: [PATCH v4] i2c: virtio: add a virtio i2c frontend driver

2020-11-25 Thread Michael S. Tsirkin
On Mon, Oct 12, 2020 at 09:55:55AM +0800, Jie Deng wrote:
> Add an I2C bus driver for virtio para-virtualization.
> 
> The controller can be emulated by the backend driver in
> any device model software by following the virtio protocol.
> 
> This driver communicates with the backend driver through a
> virtio I2C message structure which includes following parts:
> 
> - Header: i2c_msg addr, flags, len.
> - Data buffer: the pointer to the I2C msg data.
> - Status: the processing result from the backend.
> 
> People may implement different backend drivers to emulate
> different controllers according to their needs. A backend
> example can be found in the device model of the open source
> project ACRN. For more information, please refer to
> https://projectacrn.org.
> 
> The virtio device ID 34 is used for this I2C adapter since IDs
> before 34 have been reserved by other virtio devices.
> 
> Co-developed-by: Conghui Chen 
> Signed-off-by: Conghui Chen 
> Signed-off-by: Jie Deng 
> Reviewed-by: Shuo Liu 
> Reviewed-by: Andy Shevchenko 

I assume this will be updated once the specification is acked
by the virtio tc. Holding off on this one for now since
we know there will be host/guest ABI changes.
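
For reference, a minimal sketch of how a frontend of this kind typically
queues a single request on the virtqueue: one device-readable buffer for the
header, one data buffer whose direction depends on I2C_M_RD, and one
device-writable status byte, followed by a kick and a wait on the completion
signalled from the virtqueue callback. The header field names (addr/flags/len)
follow the commit message; everything else (helper name, endianness, error
mapping, the assumption that vmsg->buf is already allocated) is illustrative
only and not taken from the driver below.

	static int virtio_i2c_send_one(struct virtio_i2c *vi, struct i2c_msg *msg)
	{
		struct virtio_i2c_msg *vmsg = &vi->vmsg;
		struct scatterlist hdr_sg, buf_sg, status_sg, *sgs[3];
		int outcnt = 0, incnt = 0, err;

		vmsg->hdr.addr  = cpu_to_le16(msg->addr);
		vmsg->hdr.flags = cpu_to_le32(msg->flags);
		vmsg->hdr.len   = cpu_to_le32(msg->len);

		if (!(msg->flags & I2C_M_RD))
			memcpy(vmsg->buf, msg->buf, msg->len);

		sg_init_one(&hdr_sg, &vmsg->hdr, sizeof(vmsg->hdr));
		sgs[outcnt++] = &hdr_sg;

		sg_init_one(&buf_sg, vmsg->buf, msg->len);
		if (msg->flags & I2C_M_RD)
			sgs[outcnt + incnt++] = &buf_sg;	/* device fills it */
		else
			sgs[outcnt++] = &buf_sg;		/* device reads it */

		sg_init_one(&status_sg, &vmsg->status, sizeof(vmsg->status));
		sgs[outcnt + incnt++] = &status_sg;

		reinit_completion(&vi->completion);
		err = virtqueue_add_sgs(vi->vq, sgs, outcnt, incnt, vmsg, GFP_KERNEL);
		if (err)
			return err;

		virtqueue_kick(vi->vq);
		wait_for_completion(&vi->completion);	/* vq callback completes this */

		if (!vmsg->status && (msg->flags & I2C_M_RD))
			memcpy(msg->buf, vmsg->buf, msg->len);

		return vmsg->status ? -EIO : 0;
	}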

> ---
> The device ID request:
> https://github.com/oasis-tcs/virtio-spec/issues/85
> 
> The specification:
>   
> https://lists.oasis-open.org/archives/virtio-comment/202009/msg00021.html
> 
> Changes in v4:
>   - Use (!(vmsg && vmsg == &vi->vmsg)) instead of ((!vmsg) || (vmsg != &vi->vmsg))
> 
> Changes in v3:
> - Move the interface into uAPI according to Jason.
> - Fix issues reported by Dan Carpenter.
>   - Fix typo reported by Randy.
> 
> Changes in v2:
> - Addressed comments received from Michael, Andy and Jason.
> 
>  drivers/i2c/busses/Kconfig  |  11 ++
>  drivers/i2c/busses/Makefile |   3 +
>  drivers/i2c/busses/i2c-virtio.c | 256 
> 
>  include/uapi/linux/virtio_i2c.h |  31 +
>  include/uapi/linux/virtio_ids.h |   1 +
>  5 files changed, 302 insertions(+)
>  create mode 100644 drivers/i2c/busses/i2c-virtio.c
>  create mode 100644 include/uapi/linux/virtio_i2c.h
> 
> diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
> index 293e7a0..f2f6543 100644
> --- a/drivers/i2c/busses/Kconfig
> +++ b/drivers/i2c/busses/Kconfig
> @@ -21,6 +21,17 @@ config I2C_ALI1535
> This driver can also be built as a module.  If so, the module
> will be called i2c-ali1535.
>  
> +config I2C_VIRTIO
> + tristate "Virtio I2C Adapter"
> + depends on VIRTIO
> + help
> +   If you say yes to this option, support will be included for the virtio
> +   I2C adapter driver. The hardware can be emulated by any device model
> +   software according to the virtio protocol.
> +
> +   This driver can also be built as a module. If so, the module
> +   will be called i2c-virtio.
> +
>  config I2C_ALI1563
>   tristate "ALI 1563"
>   depends on PCI
> diff --git a/drivers/i2c/busses/Makefile b/drivers/i2c/busses/Makefile
> index 19aff0e..821acfa 100644
> --- a/drivers/i2c/busses/Makefile
> +++ b/drivers/i2c/busses/Makefile
> @@ -6,6 +6,9 @@
>  # ACPI drivers
>  obj-$(CONFIG_I2C_SCMI)   += i2c-scmi.o
>  
> +# VIRTIO I2C host controller driver
> +obj-$(CONFIG_I2C_VIRTIO) += i2c-virtio.o
> +
>  # PC SMBus host controller drivers
>  obj-$(CONFIG_I2C_ALI1535)+= i2c-ali1535.o
>  obj-$(CONFIG_I2C_ALI1563)+= i2c-ali1563.o
> diff --git a/drivers/i2c/busses/i2c-virtio.c b/drivers/i2c/busses/i2c-virtio.c
> new file mode 100644
> index 000..36d8c68
> --- /dev/null
> +++ b/drivers/i2c/busses/i2c-virtio.c
> @@ -0,0 +1,256 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Virtio I2C Bus Driver
> + *
> + * Copyright (c) 2020 Intel Corporation. All rights reserved.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +#include 
> +
> +/**
> + * struct virtio_i2c_msg - the virtio I2C message structure
> + * @hdr: the virtio I2C message header
> + * @buf: virtio I2C message data buffer
> + * @status: the processing result from the backend
> + */
> +struct virtio_i2c_msg {
> + struct virtio_i2c_hdr hdr;
> + u8 *buf;
> + u8 status;
> +};
> +
> +/**
> + * struct virtio_i2c - virtio I2C data
> + * @vdev: virtio device for this controller
> + * @completion: completion of virtio I2C message
> + * @vmsg: the virtio I2C message for communication
> + * @adap: I2C adapter for this controller
> + * @i2c_lock: lock for virtqueue processing
> + * @vq: the virtio virtqueue for communication
> + */
> +struct virtio_i2c {
> + struct virtio_device *vdev;
> + struct completion completion;
> + struct virtio_i2c_msg vmsg;
> + struct i2c_adapter adap;
> + struct mutex i2c_lock;
> + struct virtqueue *vq;
> +};
> +
> +static void virtio_i2c_msg_

Re: [PATCH v3] virtio-rng: return available data with O_NONBLOCK

2020-11-25 Thread Michael S. Tsirkin
On Tue, Sep 08, 2020 at 05:33:40PM +0200, Martin Wilck wrote:
> On Tue, 2020-09-08 at 10:14 -0400, Michael S. Tsirkin wrote:
> > On Mon, Aug 31, 2020 at 02:37:26PM +0200, Laurent Vivier wrote:
> > > On 28/08/2020 23:34, Martin Wilck wrote:
> > > > On Wed, 2020-08-26 at 08:26 -0400, Michael S. Tsirkin wrote:
> > > > > On Tue, Aug 11, 2020 at 04:42:32PM +0200, Laurent Vivier wrote:
> > > > > > On 11/08/2020 16:28, mwi...@suse.com wrote:
> > > > > > > From: Martin Wilck 
> > > > > > > 
> > > > > > > If a program opens /dev/hwrng with O_NONBLOCK and uses
> > > > > > > poll() and
> > > > > > > non-blocking read() to retrieve random data, it ends up in
> > > > > > > a
> > > > > > > tight
> > > > > > > loop with poll() always returning POLLIN and read()
> > > > > > > returning
> > > > > > > EAGAIN.
> > > > > > > This repeats forever until some process makes a blocking
> > > > > > > read()
> > > > > > > call.
> > > > > > > The reason is that virtio_read() always returns 0 in non-
> > > > > > > blocking 
> > > > > > > mode,
> > > > > > > even if data is available. Worse, it fetches random data
> > > > > > > from the
> > > > > > > hypervisor after every non-blocking call, without ever
> > > > > > > using this
> > > > > > > data.
> > > > > > > 
> > > > > > > The following test program illustrates the behavior and can
> > > > > > > be
> > > > > > > used
> > > > > > > for testing and experiments. The problem will only be seen
> > > > > > > if all
> > > > > > > tasks use non-blocking access; otherwise the blocking reads
> > > > > > > will
> > > > > > > "recharge" the random pool and cause other, non-blocking
> > > > > > > reads to
> > > > > > > succeed at least sometimes.
> > > > > > > 
> > > > > > > /* Whether to use non-blocking mode in a task, problem
> > > > > > > occurs if
> > > > > > > CONDITION is 1 */
> > > > > > > //#define CONDITION (getpid() % 2 != 0)
> > > > > > > 
> > > > > > > static volatile sig_atomic_t stop;
> > > > > > > static void handler(int sig __attribute__((unused))) { stop
> > > > > > > = 1;
> > > > > > > }
> > > > > > > 
> > > > > > > static void loop(int fd, int sec)
> > > > > > > {
> > > > > > >   struct pollfd pfd = { .fd = fd, .events  = POLLIN, };
> > > > > > >   unsigned long errors = 0, eagains = 0, bytes = 0, succ
> > > > > > > = 0;
> > > > > > >   int size, rc, rd;
> > > > > > > 
> > > > > > >   srandom(getpid());
> > > > > > >   if (CONDITION && fcntl(fd, F_SETFL, fcntl(fd, F_GETFL)
> > > > > > > |
> > > > > > > O_NONBLOCK) == -1)
> > > > > > >   perror("fcntl");
> > > > > > >   size = MINBUFSIZ + random() % (MAXBUFSIZ - MINBUFSIZ +
> > > > > > > 1);
> > > > > > > 
> > > > > > >   for(;;) {
> > > > > > >   char buf[size];
> > > > > > > 
> > > > > > >   if (stop)
> > > > > > >   break;
> > > > > > >   rc = poll(&pfd, 1, sec);
> > > > > > >   if (rc > 0) {
> > > > > > >   rd = read(fd, buf, sizeof(buf));
> > > > > > >   if (rd == -1 && errno == EAGAIN)
> > > > > > >   eagains++;
> > > > > > >   else if (rd == -1)
> > > > > > >   errors++;
> > > > > > >   else {
> > > > > > >   succ++;
> > > > > > >   bytes += rd;
> > > > > > >   write(1, buf, sizeof(buf));
> > > > > > >   }
> > > > > > >   } else if (rc == -1) {
> > > > > > >   if (errno != EINTR)
> > > > > > >   perror("poll");
> > > > > > >   break;
> > > > > > >   } else
> > > > > > >   fprintf(stderr, "poll: timeout\n");
> > > > > > >   }
> > > > > > >   fprintf(stderr,
> > > > > > >   "pid %d %sblocking, bufsize %d, %d seconds, %lu
> > > > > > > bytes
> > > > > > > read, %lu success, %lu eagain, %lu errors\n",
> > > > > > >   getpid(), CONDITION ? "non-" : "", size, sec,
> > > > > > > bytes,
> > > > > > > succ, eagains, errors);
> > > > > > > }
> > > > > > > 
> > > > > > > int main(void)
> > > > > > > {
> > > > > > >   int fd;
> > > > > > > 
> > > > > > >   fork(); fork();
> > > > > > >   fd = open("/dev/hwrng", O_RDONLY);
> > > > > > >   if (fd == -1) {
> > > > > > >   perror("open");
> > > > > > >   return 1;
> > > > > > >   };
> > > > > > >   signal(SIGALRM, handler);
> > > > > > >   alarm(SECONDS);
> > > > > > >   loop(fd, SECONDS);
> > > > > > >   close(fd);
> > > > > > >   wait(NULL);
> > > > > > >   return 0;
> > > > > > > }
> > > > > > > 
> > > > > > > void loop(int fd)
> > > > > > > {
> > > > > > > struct pollfd pfd0 = { .fd = fd, .events  = POLLIN,
> > > > > > > };
> > > > > > > int rc;
> > > > > > > unsigned int n;
> > > > > > > 
> > > > > > > for (n = LOOPS; n > 0; n--) {
> > > > > > > struct pollfd pfd = pfd0;
> > > > > > > char buf[SIZE];
> > > > > > > 
> > > > > > > rc 

[PATCH v10 05/81] KVM: x86: add kvm_arch_vcpu_get_regs() and kvm_arch_vcpu_get_sregs()

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

These functions are used by the VM introspection code
(for the KVMI_VCPU_GET_REGISTERS command and all events sending the vCPU
registers to the introspection tool).

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c   | 10 ++
 include/linux/kvm_host.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a3fdc16cfd6f..540e42341435 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9375,6 +9375,11 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
return 0;
 }
 
+void kvm_arch_vcpu_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+   __get_regs(vcpu, regs);
+}
+
 static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
vcpu->arch.emulate_regs_need_sync_from_vcpu = true;
@@ -9470,6 +9475,11 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
return 0;
 }
 
+void kvm_arch_vcpu_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+   __get_sregs(vcpu, sregs);
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
struct kvm_mp_state *mp_state)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cd6ac3a43c9a..13c6b806477b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -902,9 +902,12 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
struct kvm_translation *tr);
 
 int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
+void kvm_arch_vcpu_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
 int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
 int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
  struct kvm_sregs *sregs);
+void kvm_arch_vcpu_get_sregs(struct kvm_vcpu *vcpu,
+ struct kvm_sregs *sregs);
 int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
  struct kvm_sregs *sregs);
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,

[PATCH v10 19/81] KVM: x86: save the error code during EPT/NPF exits handling

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This is needed for kvm_page_track_emulation_failure().

When the introspection tool {read,write,exec}-protects a guest memory
page, it is notified via the read/write/fetch callbacks used by
the KVM emulator. If the emulation fails, it is possible that the
read/write callbacks were not used. In such cases, the emulator will
call kvm_page_track_emulation_failure() to ensure that the introspection
tool is notified of the read/write #PF (based on this saved error code),
which in turn can emulate the instruction or unprotect the memory page
(and let the guest execute the instruction).
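
For context, the value saved here is a #PF-style error code (SVM's
exit_info_1 already has that layout, and the VMX hunk below builds it with
PFERR_* bits), so a consumer such as kvm_page_track_emulation_failure() can
recover the access type roughly as follows (hypothetical helper, shown only
to illustrate what the saved value encodes):

	/* Hypothetical: recover the access type from the saved error code. */
	static void kvmi_access_from_error_code(u64 error_code,
						bool *write, bool *fetch)
	{
		*write = !!(error_code & PFERR_WRITE_MASK);
		*fetch = !!(error_code & PFERR_FETCH_MASK);
		/* neither bit set means the faulting access was a read */
	}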

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 3 +++
 arch/x86/kvm/svm/svm.c  | 2 ++
 arch/x86/kvm/vmx/vmx.c  | 1 +
 3 files changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 01853453a659..86048037da23 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -813,6 +813,9 @@ struct kvm_vcpu_arch {
 */
bool enforce;
} pv_cpuid;
+
+   /* #PF translated error code from EPT/NPT exit reason */
+   u64 error_code;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2bfefcfbddd7..43a2e4ec6178 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1916,6 +1916,8 @@ static int npf_interception(struct vcpu_svm *svm)
u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
u64 error_code = svm->vmcb->control.exit_info_1;
 
+   svm->vcpu.arch.error_code = error_code;
+
trace_kvm_page_fault(fault_address, error_code);
return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code,
static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a7d2bab38233..d5d4203378d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5390,6 +5390,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
  ? PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
vcpu->arch.exit_qualification = exit_qualification;
+   vcpu->arch.error_code = error_code;
 
/*
 * Check that the GPA doesn't exceed physical memory limits, as that is

[PATCH v10 44/81] KVM: introspection: add a jobs list to every introspected vCPU

2020-11-25 Thread Adalbert Lazăr
Every vCPU has a lock-protected list in which the receiving thread
places the jobs that have to be done by the vCPU thread
once it is kicked out of the guest (KVM_REQ_INTROSPECTION).
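
For reference, a sketch of how such a per-vCPU list is normally drained on the
vCPU side once KVM_REQ_INTROSPECTION is handled: pop one job under the lock,
run it outside the lock, free it, repeat until empty. The function name is
made up; the actual consumer is presumably added later in the series, but it
would follow this shape given the structures below.

	static void kvmi_run_jobs_sketch(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
		struct kvmi_job *job;

		for (;;) {
			spin_lock(&vcpui->job_lock);
			job = list_first_entry_or_null(&vcpui->job_list,
						       struct kvmi_job, link);
			if (job)
				list_del(&job->link);
			spin_unlock(&vcpui->job_lock);

			if (!job)
				break;

			job->fct(vcpu, job->ctx);	/* run outside the lock */
			kvmi_free_job(job);
		}
	}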

Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 include/linux/kvmi_host.h | 10 +
 virt/kvm/introspection/kvmi.c | 72 ++-
 virt/kvm/introspection/kvmi_int.h |  1 +
 3 files changed, 81 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 9b0008c66321..b3874419511d 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -6,8 +6,18 @@
 
 #include 
 
+struct kvmi_job {
+   struct list_head link;
+   void *ctx;
+   void (*fct)(struct kvm_vcpu *vcpu, void *ctx);
+   void (*free_fct)(void *ctx);
+};
+
 struct kvm_vcpu_introspection {
struct kvm_vcpu_arch_introspection arch;
+
+   struct list_head job_list;
+   spinlock_t job_lock;
 };
 
 struct kvm_introspection {
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 358dc6c2a969..cdb4175aecff 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -23,6 +23,7 @@ static DECLARE_BITMAP(Kvmi_known_vm_events, KVMI_NUM_EVENTS);
 static DECLARE_BITMAP(Kvmi_known_vcpu_events, KVMI_NUM_EVENTS);
 
 static struct kmem_cache *msg_cache;
+static struct kmem_cache *job_cache;
 
 void *kvmi_msg_alloc(void)
 {
@@ -39,14 +40,19 @@ static void kvmi_cache_destroy(void)
 {
kmem_cache_destroy(msg_cache);
msg_cache = NULL;
+   kmem_cache_destroy(job_cache);
+   job_cache = NULL;
 }
 
 static int kvmi_cache_create(void)
 {
msg_cache = kmem_cache_create("kvmi_msg", KVMI_MSG_SIZE_ALLOC,
  4096, SLAB_ACCOUNT, NULL);
+   job_cache = kmem_cache_create("kvmi_job",
+ sizeof(struct kvmi_job),
+ 0, SLAB_ACCOUNT, NULL);
 
-   if (!msg_cache) {
+   if (!msg_cache || !job_cache) {
kvmi_cache_destroy();
 
return -1;
@@ -118,6 +124,48 @@ void kvmi_uninit(void)
kvmi_cache_destroy();
 }
 
+static int __kvmi_add_job(struct kvm_vcpu *vcpu,
+ void (*fct)(struct kvm_vcpu *vcpu, void *ctx),
+ void *ctx, void (*free_fct)(void *ctx))
+{
+   struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+   struct kvmi_job *job;
+
+   job = kmem_cache_zalloc(job_cache, GFP_KERNEL);
+   if (unlikely(!job))
+   return -ENOMEM;
+
+   INIT_LIST_HEAD(&job->link);
+   job->fct = fct;
+   job->ctx = ctx;
+   job->free_fct = free_fct;
+
+   spin_lock(&vcpui->job_lock);
+   list_add_tail(&job->link, &vcpui->job_list);
+   spin_unlock(&vcpui->job_lock);
+
+   return 0;
+}
+
+int kvmi_add_job(struct kvm_vcpu *vcpu,
+void (*fct)(struct kvm_vcpu *vcpu, void *ctx),
+void *ctx, void (*free_fct)(void *ctx))
+{
+   int err;
+
+   err = __kvmi_add_job(vcpu, fct, ctx, free_fct);
+
+   return err;
+}
+
+static void kvmi_free_job(struct kvmi_job *job)
+{
+   if (job->free_fct)
+   job->free_fct(job->ctx);
+
+   kmem_cache_free(job_cache, job);
+}
+
 static bool kvmi_alloc_vcpui(struct kvm_vcpu *vcpu)
 {
struct kvm_vcpu_introspection *vcpui;
@@ -126,6 +174,9 @@ static bool kvmi_alloc_vcpui(struct kvm_vcpu *vcpu)
if (!vcpui)
return false;
 
+   INIT_LIST_HEAD(&vcpui->job_list);
+   spin_lock_init(&vcpui->job_lock);
+
vcpu->kvmi = vcpui;
 
return true;
@@ -139,9 +190,26 @@ static int kvmi_create_vcpui(struct kvm_vcpu *vcpu)
return 0;
 }
 
+static void kvmi_free_vcpu_jobs(struct kvm_vcpu_introspection *vcpui)
+{
+   struct kvmi_job *cur, *next;
+
+   list_for_each_entry_safe(cur, next, &vcpui->job_list, link) {
+   list_del(&cur->link);
+   kvmi_free_job(cur);
+   }
+}
+
 static void kvmi_free_vcpui(struct kvm_vcpu *vcpu)
 {
-   kfree(vcpu->kvmi);
+   struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+   if (!vcpui)
+   return;
+
+   kvmi_free_vcpu_jobs(vcpui);
+
+   kfree(vcpui);
vcpu->kvmi = NULL;
 }
 
diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h
index b7c8730e7e6d..c3aa12554c2b 100644
--- a/virt/kvm/introspection/kvmi_int.h
+++ b/virt/kvm/introspection/kvmi_int.h
@@ -7,6 +7,7 @@
 #include 
 
 #define KVMI(kvm) ((kvm)->kvmi)
+#define VCPUI(vcpu) ((vcpu)->kvmi)
 /*
  * This limit is used to accommodate the largest known fixed-length
  * message.

[PATCH v10 50/81] KVM: introspection: add KVMI_VCPU_EVENT_PAUSE

2020-11-25 Thread Adalbert Lazăr
This event is sent by the vCPU thread as a response to the
KVMI_VM_PAUSE_VCPU command, but it has a lower priority, being sent
after any other introspection event and when no other introspection
command is queued.

The number of KVMI_VCPU_EVENT_PAUSE events will match the number of
successful KVMI_VM_PAUSE_VCPU commands.
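
From the introspection tool's point of view, every successful pause command
must therefore eventually be matched by consuming (and replying to) one pause
event. Using the selftest helpers introduced below, the idea is simply the
following illustrative loop (n_successful_pause_cmds is a placeholder, not
part of the patch):

	struct kvmi_msg_hdr hdr;
	struct vcpu_event ev;
	struct vcpu_reply rpl = {};
	int i;

	for (i = 0; i < n_successful_pause_cmds; i++) {
		receive_vcpu_event(&hdr, &ev, sizeof(ev), KVMI_VCPU_EVENT_PAUSE);
		reply_to_event(&hdr, &ev, KVMI_EVENT_ACTION_CONTINUE,
			       &rpl, sizeof(rpl));
	}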

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 26 
 include/uapi/linux/kvmi.h |  2 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 65 ++-
 virt/kvm/introspection/kvmi.c | 26 +++-
 virt/kvm/introspection/kvmi_int.h |  1 +
 virt/kvm/introspection/kvmi_msg.c | 18 +
 6 files changed, 136 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 5e99baf7e2f3..c86c83566c3d 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -596,3 +596,29 @@ the guest (see **Unhooking**) and the introspection has been enabled for
 this event (see **KVMI_VM_CONTROL_EVENTS**). The introspection tool has
 a chance to unhook and close the introspection socket (signaling that
 the operation can proceed).
+
+2. KVMI_VCPU_EVENT_PAUSE
+------------------------
+
+:Architectures: all
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_event_hdr;
+   struct kvmi_vcpu_event;
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent in response to a *KVMI_VCPU_PAUSE* command and
+cannot be controlled with *KVMI_VCPU_CONTROL_EVENTS*.
+Because it has a low priority, it will be sent after any other vCPU
+introspection event and when no other vCPU introspection command is
+queued.
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 6a57efb5664d..757d4b84f473 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -50,6 +50,8 @@ enum {
 };
 
 enum {
+   KVMI_VCPU_EVENT_PAUSE = KVMI_VCPU_EVENT_ID(0),
+
KVMI_NEXT_VCPU_EVENT
 };
 
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 52765ca3f9c8..4c9dc6560ad9 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -34,6 +34,17 @@ static vm_paddr_t test_gpa;
 
 static int page_size;
 
+struct vcpu_event {
+   struct kvmi_event_hdr hdr;
+   struct kvmi_vcpu_event common;
+};
+
+struct vcpu_reply {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_event_reply reply;
+};
+
 struct vcpu_worker_data {
struct kvm_vm *vm;
int vcpu_id;
@@ -690,14 +701,66 @@ static void pause_vcpu(void)
cmd_vcpu_pause(1, 0);
 }
 
+static void reply_to_event(struct kvmi_msg_hdr *ev_hdr, struct vcpu_event *ev,
+  __u8 action, struct vcpu_reply *rpl, size_t rpl_size)
+{
+   ssize_t r;
+
+   rpl->hdr.id = ev_hdr->id;
+   rpl->hdr.seq = ev_hdr->seq;
+   rpl->hdr.size = rpl_size - sizeof(rpl->hdr);
+
+   rpl->vcpu_hdr.vcpu = ev->common.vcpu;
+
+   rpl->reply.action = action;
+   rpl->reply.event = ev->hdr.event;
+
+   r = send(Userspace_socket, rpl, rpl_size, 0);
+   TEST_ASSERT(r == rpl_size,
+   "send() failed, sending %zd, result %zd, errno %d (%s)\n",
+   rpl_size, r, errno, strerror(errno));
+}
+
+static void receive_vcpu_event(struct kvmi_msg_hdr *msg_hdr,
+  struct vcpu_event *ev,
+  size_t ev_size, u16 ev_id)
+{
+   receive_event(msg_hdr, KVMI_VCPU_EVENT,
+ &ev->hdr, ev_id, ev_size);
+}
+
+static void discard_pause_event(struct kvm_vm *vm)
+{
+   struct vcpu_worker_data data = {.vm = vm, .vcpu_id = VCPU_ID};
+   struct vcpu_reply rpl = {};
+   struct kvmi_msg_hdr hdr;
+   pthread_t vcpu_thread;
+   struct vcpu_event ev;
+
+   vcpu_thread = start_vcpu_worker(&data);
+
+   receive_vcpu_event(&hdr, &ev, sizeof(ev), KVMI_VCPU_EVENT_PAUSE);
+
+   reply_to_event(&hdr, &ev, KVMI_EVENT_ACTION_CONTINUE,
+   &rpl, sizeof(rpl));
+
+   wait_vcpu_worker(vcpu_thread);
+}
+
 static void test_pause(struct kvm_vm *vm)
 {
-   __u8 wait = 1, wait_inval = 2;
+   __u8 no_wait = 0, wait = 1, wait_inval = 2;
 
pause_vcpu();
+   discard_pause_event(vm);
 
cmd_vcpu_pause(wait, 0);
+   discard_pause_event(vm);
cmd_vcpu_pause(wait_inval, -KVM_EINVAL);
+
+   disallow_event(vm, KVMI_VCPU_EVENT_PAUSE);
+   cmd_vcpu_pause(no_wait, -KVM_EPERM);
+   allow_event(vm, KVMI_VCPU_EVENT_PAUSE);
 }
 
 static void test_introspection(struct kvm_vm *vm)
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 771e6b545698..3d26a7319fb7 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection

[PATCH v10 12/81] KVM: svm: add support for descriptor-table VM-exits

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function is needed for the KVMI_VCPU_EVENT_DESCRIPTOR event.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/svm/svm.c | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f3ee6bad0db5..00bda794609c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2335,6 +2335,13 @@ static int rsm_interception(struct vcpu_svm *svm)
return kvm_emulate_instruction_from_buffer(&svm->vcpu, rsm_ins_bytes, 2);
 }
 
+static int descriptor_access_interception(struct vcpu_svm *svm)
+{
+   struct kvm_vcpu *vcpu = &svm->vcpu;
+
+   return kvm_emulate_instruction(vcpu, 0);
+}
+
 static int rdpmc_interception(struct vcpu_svm *svm)
 {
int err;
@@ -2959,6 +2966,14 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
	[SVM_EXIT_RSM]  = rsm_interception,
	[SVM_EXIT_AVIC_INCOMPLETE_IPI]  = avic_incomplete_ipi_interception,
	[SVM_EXIT_AVIC_UNACCELERATED_ACCESS]= avic_unaccelerated_access_interception,
+   [SVM_EXIT_IDTR_READ]= descriptor_access_interception,
+   [SVM_EXIT_GDTR_READ]= descriptor_access_interception,
+   [SVM_EXIT_LDTR_READ]= descriptor_access_interception,
+   [SVM_EXIT_TR_READ]  = descriptor_access_interception,
+   [SVM_EXIT_IDTR_WRITE]   = descriptor_access_interception,
+   [SVM_EXIT_GDTR_WRITE]   = descriptor_access_interception,
+   [SVM_EXIT_LDTR_WRITE]   = descriptor_access_interception,
+   [SVM_EXIT_TR_WRITE] = descriptor_access_interception,
 };
 
 static void dump_vmcb(struct kvm_vcpu *vcpu)

[PATCH v10 32/81] KVM: introduce VM introspection

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

The KVM introspection subsystem provides a facility for applications
to control the execution of any running VMs (pause, resume, shutdown),
query the state of the vCPUs (GPRs, MSRs etc.), alter the page access bits
in the shadow page tables and receive notifications when events of interest
have taken place (shadow page table level faults, key MSR writes,
hypercalls etc.). Some notifications can be responded to with an action
(like preventing an MSR from being written), others are merely informative
(like breakpoint events which can be used for execution tracing).

Signed-off-by: Mihai Donțu 
Co-developed-by: Marian Rotariu 
Signed-off-by: Marian Rotariu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 139 ++
 arch/x86/include/asm/kvm_host.h   |   2 +
 arch/x86/kvm/Kconfig  |   9 ++
 arch/x86/kvm/Makefile |   2 +
 arch/x86/kvm/x86.c|   3 +
 include/linux/kvmi_host.h |  21 +
 virt/kvm/introspection/kvmi.c |  25 ++
 virt/kvm/introspection/kvmi_int.h |   7 ++
 virt/kvm/kvm_main.c   |  12 +++
 9 files changed, 220 insertions(+)
 create mode 100644 Documentation/virt/kvm/kvmi.rst
 create mode 100644 include/linux/kvmi_host.h
 create mode 100644 virt/kvm/introspection/kvmi.c
 create mode 100644 virt/kvm/introspection/kvmi_int.h

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
new file mode 100644
index ..59cc33a39f9f
--- /dev/null
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -0,0 +1,139 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================================================
+KVMI - The kernel virtual machine introspection subsystem
+==========================================================
+
+The KVM introspection subsystem provides a facility for applications running
+on the host or in a separate VM, to control the execution of any running VMs
+(pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs etc.),
+alter the page access bits in the shadow page tables (only for the hardware
+backed ones, eg. Intel's EPT) and receive notifications when events of
+interest have taken place (shadow page table level faults, key MSR writes,
+hypercalls etc.). Some notifications can be responded to with an action
+(like preventing an MSR from being written), others are merely informative
+(like breakpoint events which can be used for execution tracing).
+With few exceptions, all events are optional. An application using this
+subsystem will explicitly register for them.
+
+The use case that gave rise to the creation of this subsystem is to monitor
+the guest OS and as such the ABI/API is highly influenced by how the guest
+software (kernel, applications) sees the world. For example, some events
+provide information specific to the host CPU architecture
+(eg. MSR_IA32_SYSENTER_EIP) merely because it's leveraged by guest software
+to implement a critical feature (fast system calls).
+
+At the moment, the target audience for KVMI are security software authors
+that wish to perform forensics on newly discovered threats (exploits) or
+to implement another layer of security like preventing a large set of
+kernel rootkits simply by "locking" the kernel image in the shadow page
+tables (ie. enforce .text r-x, .rodata rw- etc.). It's the latter case that
+made KVMI a separate subsystem, even though many of these features are
+available in the device manager (eg. QEMU). The ability to build a security
+application that does not interfere (in terms of performance) with the
+guest software asks for a specialized interface that is designed for minimum
+overhead.
+
+API/ABI
+=======
+
+This chapter describes the VMI interface used to monitor and control local
+guests from a user application.
+
+Overview
+--------
+
+The interface is socket based, one connection for every VM. One end is in the
+host kernel while the other is held by the user application (introspection
+tool).
+
+The initial connection is established by an application running on the
+host (eg. QEMU) that connects to the introspection tool and, after a
+handshake, passes the file descriptor to the host kernel, making all
+further communication take place between it and the introspection tool.
+
+The socket protocol allows for commands and events to be multiplexed over
+the same connection. As such, it is possible for the introspection tool to
+receive an event while waiting for the result of a command. Also, it can
+send a command while the host kernel is waiting for a reply to an event.
+
+The kernel side of the socket communication is blocking and will wait
+for an answer from its peer indefinitely or until the guest is powered
+off (killed), restarted or the peer goes away, at which point it will
+wake up and properly cleanup as if the introspection subsystem has never
+been used on that guest (if requested). Obviously, whether the guest can
+really continue normal execution

[PATCH v10 38/81] KVM: introspection: add KVMI_VM_GET_INFO

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command returns the number of online vCPUs.

The introspection tool uses the vCPU index to specify the vCPU to
which an introspection command applies.
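
A minimal tool-side usage sketch, mirroring the selftest below: the command
carries no payload beyond the message header, and the reply is a
kvmi_vm_get_info_reply. The send helper name is hypothetical.

	struct kvmi_vm_get_info_reply rpl;
	struct kvmi_msg_hdr req;

	/* hypothetical helper: send the command and read back the reply */
	send_vm_command(KVMI_VM_GET_INFO, &req, sizeof(req), &rpl, sizeof(rpl));
	printf("online vCPUs: %u\n", rpl.vcpu_count);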

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 18 ++
 include/uapi/linux/kvmi.h |  6 
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 35 +--
 virt/kvm/introspection/kvmi_msg.c | 13 +++
 4 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 13169575f75f..6f8583d4aeb2 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -312,3 +312,21 @@ This command is always allowed.
 * -KVM_ENOENT - the event specified by ``id`` is unsupported
 * -KVM_EPERM - the event specified by ``id`` is disallowed
 * -KVM_EINVAL - the padding is not zero
+
+4. KVMI_VM_GET_INFO
+-------------------
+
+:Architectures: all
+:Versions: >= 1
+:Parameters: none
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vm_get_info_reply {
+   __u32 vcpu_count;
+   __u32 padding[3];
+   };
+
+Returns the number of online vCPUs.
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 0c2d0cedde6f..e06a7b80d4d9 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -20,6 +20,7 @@ enum {
KVMI_GET_VERSION  = KVMI_VM_MESSAGE_ID(1),
KVMI_VM_CHECK_COMMAND = KVMI_VM_MESSAGE_ID(2),
KVMI_VM_CHECK_EVENT   = KVMI_VM_MESSAGE_ID(3),
+   KVMI_VM_GET_INFO  = KVMI_VM_MESSAGE_ID(4),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -67,4 +68,9 @@ struct kvmi_vm_check_event {
__u32 padding2;
 };
 
+struct kvmi_vm_get_info_reply {
+   __u32 vcpu_count;
+   __u32 padding[3];
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index cd8f16a3ce3a..d60ee23fa833 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -80,6 +80,16 @@ static void set_command_perm(struct kvm_vm *vm, __s32 id, __u32 allow,
 "KVM_INTROSPECTION_COMMAND");
 }
 
+static void disallow_command(struct kvm_vm *vm, __s32 id)
+{
+   set_command_perm(vm, id, 0, 0);
+}
+
+static void allow_command(struct kvm_vm *vm, __s32 id)
+{
+   set_command_perm(vm, id, 1, 0);
+}
+
 static void hook_introspection(struct kvm_vm *vm)
 {
__u32 allow = 1, disallow = 0, allow_inval = 2;
@@ -256,12 +266,16 @@ static void cmd_vm_check_command(__u16 id, int expected_err)
expected_err);
 }
 
-static void test_cmd_vm_check_command(void)
+static void test_cmd_vm_check_command(struct kvm_vm *vm)
 {
-   __u16 valid_id = KVMI_GET_VERSION, invalid_id = 0x;
+   __u16 valid_id = KVMI_VM_GET_INFO, invalid_id = 0x;
 
cmd_vm_check_command(valid_id, 0);
cmd_vm_check_command(invalid_id, -KVM_ENOENT);
+
+   disallow_command(vm, valid_id);
+   cmd_vm_check_command(valid_id, -KVM_EPERM);
+   allow_command(vm, valid_id);
 }
 
 static void cmd_vm_check_event(__u16 id, int expected_err)
@@ -284,6 +298,20 @@ static void test_cmd_vm_check_event(void)
cmd_vm_check_event(invalid_id, -KVM_ENOENT);
 }
 
+static void test_cmd_vm_get_info(void)
+{
+   struct kvmi_vm_get_info_reply rpl;
+   struct kvmi_msg_hdr req;
+
+   test_vm_command(KVMI_VM_GET_INFO, &req, sizeof(req), &rpl,
+   sizeof(rpl), 0);
+   TEST_ASSERT(rpl.vcpu_count == 1,
+   "Unexpected number of vCPU count %u\n",
+   rpl.vcpu_count);
+
+   pr_debug("vcpu count: %u\n", rpl.vcpu_count);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
setup_socket();
@@ -291,8 +319,9 @@ static void test_introspection(struct kvm_vm *vm)
 
test_cmd_invalid();
test_cmd_get_version();
-   test_cmd_vm_check_command();
+   test_cmd_vm_check_command(vm);
test_cmd_vm_check_event();
+   test_cmd_vm_get_info();
 
unhook_introspection(vm);
 }
diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c
index 6538c7af710a..f0f5058403dd 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -150,6 +150,18 @@ static int handle_vm_check_event(struct kvm_introspection *kvmi,
return kvmi_msg_vm_reply(kvmi, msg, ec, NULL, 0);
 }
 
+static int handle_vm_get_info(struct kvm_introspection *kvmi,
+ const struct kvmi_msg_hdr *msg,
+ const void *req)
+{
+   struct kvmi_vm_get_info_reply rpl;
+
+   memset(&rpl, 0, sizeof(rpl));
+   rpl.vcpu_count = atomic_read(&kvmi->kvm->online_vcpus);
+
+   return kvmi_msg_vm_reply(kvmi, msg, 0, &rpl, sizeof(rpl));
+}
+
 /*
  * Th

[PATCH v10 68/81] KVM: introspection: add KVMI_VCPU_SET_XSAVE

2020-11-25 Thread Adalbert Lazăr
This can be used by the introspection tool to emulate SSE instructions.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 28 +++
 arch/x86/include/uapi/asm/kvmi.h  |  4 +++
 arch/x86/kvm/kvmi_msg.c   | 21 ++
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 27 ++
 5 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index c1ac47def4e9..56efeeb38980 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -859,6 +859,34 @@ Returns a buffer containing the XSAVE area.
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 * -KVM_ENOMEM - there is not enough memory to allocate the reply
 
+20. KVMI_VCPU_SET_XSAVE
+-----------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_set_xsave {
+   struct kvm_xsave xsave;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+
+Modifies the XSAVE area.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 0d3696c52d88..6ec290b69b46 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -115,4 +115,8 @@ struct kvmi_vcpu_get_xsave_reply {
struct kvm_xsave xsave;
 };
 
+struct kvmi_vcpu_set_xsave {
+   struct kvm_xsave xsave;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 77c753cd9705..c1b3bd56a42c 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -213,6 +213,26 @@ static int handle_vcpu_set_xsave(const struct kvmi_vcpu_msg_job *job,
return err;
 }
 
+static int handle_vcpu_set_xsave(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *req)
+{
+   size_t req_size, msg_size = msg->size;
+   int ec = 0;
+
+   if (check_sub_overflow(msg_size, sizeof(struct kvmi_vcpu_hdr),
+  &req_size))
+   return -EINVAL;
+
+   if (req_size < sizeof(struct kvm_xsave))
+   ec = -KVM_EINVAL;
+   else if (kvm_vcpu_ioctl_x86_set_xsave(job->vcpu,
+ (struct kvm_xsave *) req))
+   ec = -KVM_EINVAL;
+
+   return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_CONTROL_CR]   = handle_vcpu_control_cr,
[KVMI_VCPU_GET_CPUID]= handle_vcpu_get_cpuid,
@@ -222,6 +242,7 @@ static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_GET_XSAVE]= handle_vcpu_get_xsave,
[KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception,
[KVMI_VCPU_SET_REGISTERS]= handle_vcpu_set_registers,
+   [KVMI_VCPU_SET_XSAVE]= handle_vcpu_set_xsave,
 };
 
 kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id)
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index e47c4ce0f8ed..3baf5c7842bb 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -46,6 +46,7 @@ enum {
KVMI_VCPU_INJECT_EXCEPTION = KVMI_VCPU_MESSAGE_ID(7),
KVMI_VCPU_GET_XCR  = KVMI_VCPU_MESSAGE_ID(8),
KVMI_VCPU_GET_XSAVE= KVMI_VCPU_MESSAGE_ID(9),
+   KVMI_VCPU_SET_XSAVE= KVMI_VCPU_MESSAGE_ID(10),
 
KVMI_NEXT_VCPU_MESSAGE
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 277b1061410b..45c1f3132a3c 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1448,21 +1448,35 @@ static void test_cmd_vcpu_get_xcr(struct kvm_vm *vm)
cmd_vcpu_get_xcr(vm, xcr1, &value, -KVM_EINVAL);
 }
 
-static void cmd_vcpu_get_xsave(struct kvm_vm *vm)
+static void cmd_vcpu_get_xsave(struct kvm_vm *vm, struct kvm_xsave *rpl)
 {
struct {
struct kvmi_msg_hdr hdr;
struct kvmi_vcpu_hdr vcpu_hdr;
} req = {};
-   struct kvm_xsave rpl;
 
test_vcpu0_command(vm, KVMI_VCPU_GET_XSAVE, &req.hdr, sizeof(req),
-  &rpl, sizeof(rpl), 0);
+  rpl, sizeof(*rpl), 0);
 }
 
-static void test_cmd_vcpu_get_xsave(struct kvm_vm *vm)
+static void cmd_vcpu_set_xsave(struct kvm_vm *vm, struct kvm_xsave *rpl)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvm_xsave xsave;
+   } req = {};
+
+   memcpy(&req.xsave, rpl, 

[PATCH v10 13/81] KVM: x86: add kvm_x86_ops.control_desc_intercept()

2020-11-25 Thread Adalbert Lazăr
This function is needed to intercept descriptor-table registers access.

Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/svm.c  | 26 ++
 arch/x86/kvm/vmx/vmx.c  | 15 +--
 3 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1e9cb521324e..730429cd2e3d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1131,6 +1131,7 @@ struct kvm_x86_ops {
void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
bool (*desc_ctrl_supported)(void);
+   void (*control_desc_intercept)(struct kvm_vcpu *vcpu, bool enable);
void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 00bda794609c..c8e56ad9cbb1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1635,6 +1635,31 @@ static bool svm_desc_ctrl_supported(void)
return true;
 }
 
+static void svm_control_desc_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   struct vcpu_svm *svm = to_svm(vcpu);
+
+   if (enable) {
+   svm_set_intercept(svm, INTERCEPT_STORE_IDTR);
+   svm_set_intercept(svm, INTERCEPT_STORE_GDTR);
+   svm_set_intercept(svm, INTERCEPT_STORE_LDTR);
+   svm_set_intercept(svm, INTERCEPT_STORE_TR);
+   svm_set_intercept(svm, INTERCEPT_LOAD_IDTR);
+   svm_set_intercept(svm, INTERCEPT_LOAD_GDTR);
+   svm_set_intercept(svm, INTERCEPT_LOAD_LDTR);
+   svm_set_intercept(svm, INTERCEPT_LOAD_TR);
+   } else {
+   svm_clr_intercept(svm, INTERCEPT_STORE_IDTR);
+   svm_clr_intercept(svm, INTERCEPT_STORE_GDTR);
+   svm_clr_intercept(svm, INTERCEPT_STORE_LDTR);
+   svm_clr_intercept(svm, INTERCEPT_STORE_TR);
+   svm_clr_intercept(svm, INTERCEPT_LOAD_IDTR);
+   svm_clr_intercept(svm, INTERCEPT_LOAD_GDTR);
+   svm_clr_intercept(svm, INTERCEPT_LOAD_LDTR);
+   svm_clr_intercept(svm, INTERCEPT_LOAD_TR);
+   }
+}
+
 static void update_cr0_intercept(struct vcpu_svm *svm)
 {
ulong gcr0 = svm->vcpu.arch.cr0;
@@ -4281,6 +4306,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.get_gdt = svm_get_gdt,
.set_gdt = svm_set_gdt,
.desc_ctrl_supported = svm_desc_ctrl_supported,
+   .control_desc_intercept = svm_control_desc_intercept,
.set_dr7 = svm_set_dr7,
.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
.cache_reg = svm_cache_reg,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a5e1f61d2622..20351e027898 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3120,6 +3120,16 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd,
vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
+static void vmx_control_desc_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+   if (enable)
+   secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_DESC);
+   else
+   secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_DESC);
+}
+
 static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
/*
@@ -3157,11 +3167,11 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 
if (!boot_cpu_has(X86_FEATURE_UMIP) && vmx_umip_emulated()) {
if (cr4 & X86_CR4_UMIP) {
-   secondary_exec_controls_setbit(vmx, SECONDARY_EXEC_DESC);
+   vmx_control_desc_intercept(vcpu, true);
hw_cr4 &= ~X86_CR4_UMIP;
} else if (!is_guest_mode(vcpu) ||
   !nested_cpu_has2(get_vmcs12(vcpu), SECONDARY_EXEC_DESC)) {
-   secondary_exec_controls_clearbit(vmx, SECONDARY_EXEC_DESC);
+   vmx_control_desc_intercept(vcpu, false);
}
}
 
@@ -7657,6 +7667,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.get_gdt = vmx_get_gdt,
.set_gdt = vmx_set_gdt,
.desc_ctrl_supported = vmx_desc_ctrl_supported,
+   .control_desc_intercept = vmx_control_desc_intercept,
.set_dr7 = vmx_set_dr7,
.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
.cache_reg = vmx_cache_reg,

[PATCH v10 34/81] KVM: introspection: add permission access ioctls

2020-11-25 Thread Adalbert Lazăr
KVM_INTROSPECTION_COMMAND and KVM_INTROSPECTION_EVENTS ioctls are used
by the device manager to allow/disallow access to specific (or all)
introspection commands and events. The introspection tool will get the
KVM_EPERM error code on any attempt to use a disallowed command.

By default, all events and almost all commands are disallowed.
Some commands are always allowed (those querying the introspection
capabilities).
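
A short usage sketch from the device manager side (assuming the uapi headers
from this series and a VM that has already been hooked with
KVM_INTROSPECTION_HOOK; error handling trimmed):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static void allow_everything(int vm_fd)
	{
		struct kvm_introspection_feature feat;

		feat.allow = 1;
		feat.id = -1;	/* -1 selects all command/event IDs */

		ioctl(vm_fd, KVM_INTROSPECTION_COMMAND, &feat);
		ioctl(vm_fd, KVM_INTROSPECTION_EVENT, &feat);
	}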

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/api.rst|  68 ++
 include/linux/kvmi_host.h |   7 +
 include/uapi/linux/kvm.h  |   8 ++
 include/uapi/linux/kvmi.h |  22 
 .../testing/selftests/kvm/x86_64/kvmi_test.c  |  49 +++
 virt/kvm/introspection/kvmi.c | 122 ++
 virt/kvm/kvm_main.c   |  18 +++
 7 files changed, 294 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 9b48be90ae7b..f3698413ddab 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4878,6 +4878,74 @@ Errors:
 This ioctl is used to free all introspection structures
 related to this VM.
 
+4.129 KVM_INTROSPECTION_COMMAND
+-------------------------------
+
+:Capability: KVM_CAP_INTROSPECTION
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_introspection_feature (in)
+:Returns: 0 on success, a negative value on error
+
+Errors:
+
+  == ===
+  EFAULT the VM is not introspected yet (use KVM_INTROSPECTION_HOOK)
+  EINVAL the command is unknown
+  EPERM  the command can't be disallowed (e.g. KVMI_GET_VERSION)
+  EPERM  the introspection is disabled (kvm.introspection=0)
+  == ===
+
+This ioctl is used to allow or disallow introspection commands
+for the current VM. By default, almost all commands are disallowed
+except for those used to query the API features.
+
+::
+
+  struct kvm_introspection_feature {
+   __u32 allow;
+   __s32 id;
+  };
+
+If allow is 1, the command specified by id is allowed. If allow is 0,
+the command is disallowed.
+
+Unless set to -1 (meaning all commands), id must be a command ID
+(e.g. KVMI_GET_VERSION)
+
+4.130 KVM_INTROSPECTION_EVENT
+-----------------------------
+
+:Capability: KVM_CAP_INTROSPECTION
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_introspection_feature (in)
+:Returns: 0 on success, a negative value on error
+
+Errors:
+
+  == ===
+  EFAULT the VM is not introspected yet (use KVM_INTROSPECTION_HOOK)
+  EINVAL the event is unknown
+  EPERM  the introspection is disabled (kvm.introspection=0)
+  == ===
+
+This ioctl is used to allow or disallow introspection events
+for the current VM. By default, all events are disallowed.
+
+::
+
+  struct kvm_introspection_feature {
+   __u32 allow;
+   __s32 id;
+  };
+
+If allow is 1, the event specified by id is allowed. If allow is 0,
+the event is disallowed.
+
+Unless set to -1 (meaning all events), id must be an event ID
+(e.g. KVMI_VM_EVENT_UNHOOK, KVMI_VCPU_EVENT_CR, etc.)
+
 5. The kvm_run structure
 
 
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 8574b9688736..a5ede07686b9 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -14,6 +14,9 @@ struct kvm_introspection {
 
struct socket *sock;
struct task_struct *recv;
+
+   unsigned long *cmd_allow_mask;
+   unsigned long *event_allow_mask;
 };
 
 int kvmi_version(void);
@@ -25,6 +28,10 @@ void kvmi_destroy_vm(struct kvm *kvm);
 int kvmi_ioctl_hook(struct kvm *kvm,
const struct kvm_introspection_hook *hook);
 int kvmi_ioctl_unhook(struct kvm *kvm);
+int kvmi_ioctl_command(struct kvm *kvm,
+  const struct kvm_introspection_feature *feat);
+int kvmi_ioctl_event(struct kvm *kvm,
+const struct kvm_introspection_feature *feat);
 
 #else
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a0be0ea5b13f..c69140893f68 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1653,6 +1653,14 @@ struct kvm_introspection_hook {
#define KVM_INTROSPECTION_HOOK  _IOW(KVMIO, 0xc8, struct kvm_introspection_hook)
 #define KVM_INTROSPECTION_UNHOOK  _IO(KVMIO, 0xc9)
 
+struct kvm_introspection_feature {
+   __u32 allow;
+   __s32 id;
+};
+
+#define KVM_INTROSPECTION_COMMAND _IOW(KVMIO, 0xca, struct kvm_introspection_feature)
+#define KVM_INTROSPECTION_EVENT   _IOW(KVMIO, 0xcb, struct kvm_introspection_feature)
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX   (1 << 2)
d

[PATCH v10 76/81] KVM: introspection: extend KVMI_GET_VERSION with struct kvmi_features

2020-11-25 Thread Adalbert Lazăr
This is used by the introspection tool to check the hardware support
for the single step feature.
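
On the tool side this simply gates single-step usage on the advertised bit
(trivial illustration; the function name is made up):

	static bool host_supports_singlestep(const struct kvmi_get_version_reply *rpl)
	{
		return rpl->features.singlestep != 0;
	}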

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst| 15 +--
 arch/x86/include/uapi/asm/kvmi.h   |  5 +
 arch/x86/kvm/kvmi.c|  5 +
 include/uapi/linux/kvmi.h  |  1 +
 tools/testing/selftests/kvm/x86_64/kvmi_test.c |  6 ++
 virt/kvm/introspection/kvmi_int.h  |  1 +
 virt/kvm/introspection/kvmi_msg.c  |  2 ++
 7 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index bdcc9066ae28..991922897f1d 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -243,10 +243,21 @@ The vCPU commands start with::
struct kvmi_get_version_reply {
__u32 version;
__u32 max_msg_size;
+   struct kvmi_features features;
};
 
-Returns the introspection API version and the largest accepted message
-size (useful for variable length messages).
+For x86
+
+::
+
+   struct kvmi_features {
+   __u8 singlestep;
+   __u8 padding[7];
+   };
+
+Returns the introspection API version, the largest accepted message size
+(useful for variable length messages) and some of the hardware supported
+features.
 
 This command is always allowed and successful.
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 705e90c2137a..54d21e3c18c4 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -159,4 +159,9 @@ struct kvmi_vcpu_event_msr_reply {
__u64 new_val;
 };
 
+struct kvmi_features {
+   __u8 singlestep;
+   __u8 padding[7];
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index cd64762643d6..e0302883aec5 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -1081,3 +1081,8 @@ static void kvmi_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
 
kvmi_put(kvm);
 }
+
+void kvmi_arch_features(struct kvmi_features *feat)
+{
+   feat->singlestep = !!kvm_x86_ops.control_singlestep;
+}
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 3b432b37b17c..43631ed2b06c 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -103,6 +103,7 @@ struct kvmi_error_code {
 struct kvmi_get_version_reply {
__u32 version;
__u32 max_msg_size;
+   struct kvmi_features features;
 };
 
 struct kvmi_vm_check_command {
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index e36b574c264e..9984b0247ae9 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -59,6 +59,8 @@ struct vcpu_worker_data {
bool restart_on_shutdown;
 };
 
+static struct kvmi_features features;
+
 typedef void (*fct_pf_event)(struct kvm_vm *vm, struct kvmi_msg_hdr *hdr,
struct pf_ev *ev,
struct vcpu_reply *rpl);
@@ -443,6 +445,10 @@ static void test_cmd_get_version(void)
 
pr_debug("KVMI version: %u\n", rpl.version);
pr_debug("Max message size: %u\n", rpl.max_msg_size);
+
+   features = rpl.features;
+
+   pr_debug("singlestep support: %u\n", features.singlestep);
 }
 
 static void cmd_vm_check_command(__u16 id, int expected_err)
diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h
index bf6545e66425..a51e7e4ed511 100644
--- a/virt/kvm/introspection/kvmi_int.h
+++ b/virt/kvm/introspection/kvmi_int.h
@@ -121,5 +121,6 @@ void kvmi_arch_update_page_tracking(struct kvm *kvm,
struct kvmi_mem_access *m);
 void kvmi_arch_hook(struct kvm *kvm);
 void kvmi_arch_unhook(struct kvm *kvm);
+void kvmi_arch_features(struct kvmi_features *feat);
 
 #endif
diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c
index 276b898912fd..ee887ade62cb 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -134,6 +134,8 @@ static int handle_get_version(struct kvm_introspection *kvmi,
rpl.version = kvmi_version();
rpl.max_msg_size = KVMI_MAX_MSG_SIZE;
 
+   kvmi_arch_features(&rpl.features);
+
return kvmi_msg_vm_reply(kvmi, msg, 0, &rpl, sizeof(rpl));
 }
 

[PATCH v10 01/81] KVM: UAPI: add error codes used by the VM introspection code

2020-11-25 Thread Adalbert Lazăr
These new error codes help the introspection tool identify the cause
of an introspection command failure, recover from some error cases,
or give more information to the user.

Signed-off-by: Adalbert Lazăr 
---
 include/uapi/linux/kvm_para.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 8b86609849b9..3ce388249682 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -17,6 +17,10 @@
 #define KVM_E2BIG  E2BIG
 #define KVM_EPERM  EPERM
 #define KVM_EOPNOTSUPP 95
+#define KVM_EAGAIN 11
+#define KVM_ENOENT ENOENT
+#define KVM_ENOMEM ENOMEM
+#define KVM_EBUSY  EBUSY
 
 #define KVM_HC_VAPIC_POLL_IRQ  1
 #define KVM_HC_MMU_OP  2

[PATCH v10 80/81] KVM: introspection: emulate a guest page table walk on SPT violations due to A/D bit updates

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

On SPT page faults caused by guest page table walks, use the existing
guest page table walk code to make the necessary adjustments to the A/D
bits and return to the guest. This effectively bypasses the x86 emulator,
which was making the wrong modifications and leading one OS (Windows 8.1
x64) to triple-fault very early in the boot process with introspection
enabled.

With introspection disabled, these faults are handled by simply removing
the protection from the affected guest page and returning to the guest.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h |  2 ++
 arch/x86/kvm/kvmi.c  | 30 ++
 arch/x86/kvm/mmu/mmu.c   | 12 ++--
 include/linux/kvmi_host.h|  3 +++
 virt/kvm/introspection/kvmi.c| 26 ++
 5 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 31500d3ff69d..0502293bd0c9 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -77,6 +77,7 @@ bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 
descriptor, bool write);
 bool kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr);
 bool kvmi_monitor_msrw_intercept(struct kvm_vcpu *vcpu, u32 msr, bool enable);
 bool kvmi_msrw_intercept_originator(struct kvm_vcpu *vcpu);
+bool kvmi_update_ad_flags(struct kvm_vcpu *vcpu);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -102,6 +103,7 @@ static inline bool kvmi_monitor_msrw_intercept(struct 
kvm_vcpu *vcpu, u32 msr,
   bool enable) { return false; }
 static inline bool kvmi_msrw_intercept_originator(struct kvm_vcpu *vcpu)
{ return false; }
+static inline bool kvmi_update_ad_flags(struct kvm_vcpu *vcpu) { return false; 
}
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index b010d2369756..6dc5df59f274 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -1099,3 +1099,33 @@ void kvmi_arch_stop_singlestep(struct kvm_vcpu *vcpu)
 {
kvm_x86_ops.control_singlestep(vcpu, false);
 }
+
+bool kvmi_update_ad_flags(struct kvm_vcpu *vcpu)
+{
+   struct kvm_introspection *kvmi;
+   bool ret = false;
+   gva_t gva;
+   gpa_t gpa;
+
+   kvmi = kvmi_get(vcpu->kvm);
+   if (!kvmi)
+   return false;
+
+   gva = kvm_x86_ops.fault_gla(vcpu);
+   if (gva == ~0ull)
+   goto out;
+
+   gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, PFERR_WRITE_MASK, NULL);
+   if (gpa == UNMAPPED_GVA) {
+   struct x86_exception exception = { };
+
+   gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, 0, &exception);
+   }
+
+   ret = (gpa != UNMAPPED_GVA);
+
+out:
+   kvmi_put(vcpu->kvm);
+
+   return ret;
+}
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f79cf58a27dc..204e44d4e465 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -43,6 +43,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -5184,8 +5185,15 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t 
cr2_or_gpa, u64 error_code,
 */
if (vcpu->arch.mmu->direct_map &&
(error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) {
-   kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
-   return 1;
+   gfn_t gfn = gpa_to_gfn(cr2_or_gpa);
+
+   if (kvmi_tracked_gfn(vcpu, gfn)) {
+   if (kvmi_update_ad_flags(vcpu))
+   return 1;
+   } else {
+   kvm_mmu_unprotect_page(vcpu->kvm, gfn);
+   return 1;
+   }
}
 
/*
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index ec38e434c8e9..90647bb2a570 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -83,6 +83,7 @@ bool kvmi_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 
insn_len);
 bool kvmi_vcpu_running_singlestep(struct kvm_vcpu *vcpu);
 void kvmi_singlestep_done(struct kvm_vcpu *vcpu);
 void kvmi_singlestep_failed(struct kvm_vcpu *vcpu);
+bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 #else
 
@@ -101,6 +102,8 @@ static inline bool kvmi_vcpu_running_singlestep(struct 
kvm_vcpu *vcpu)
{ return false; }
 static inline void kvmi_singlestep_done(struct kvm_vcpu *vcpu) { }
 static inline void kvmi_singlestep_failed(struct kvm_vcpu *vcpu) { }
+static inline bool kvmi_tracked_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
+   { return false; }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 3057f9f343c0..7816af9709d8 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -1235,3 +1235,2

[PATCH v10 54/81] KVM: introspection: add KVMI_VCPU_SET_REGISTERS

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

During an introspection event, the introspection tool might need to
change the vCPU state, for example, to skip the current instruction.

This command is allowed only during vCPU events and the registers will
be set when the reply has been received.
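
For illustration, a hedged sketch of the intended use from inside a vCPU
event handler; the send helper and the kvmi_vcpu_hdr field name are
assumptions, only the message layout (header + vCPU header + kvm_regs)
follows this patch:

    static void skip_current_insn(int fd, __u16 vcpu_idx,
                                  const struct kvm_regs *ev_regs,
                                  unsigned int insn_len)
    {
            struct {
                    struct kvmi_msg_hdr hdr;
                    struct kvmi_vcpu_hdr vcpu_hdr;
                    struct kvm_regs regs;
            } req = {};

            req.vcpu_hdr.vcpu = vcpu_idx;
            req.regs = *ev_regs;            /* start from the event's GPRs */
            req.regs.rip += insn_len;       /* skip the current instruction */

            kvmi_send_cmd(fd, KVMI_VCPU_SET_REGISTERS, &req.hdr, sizeof(req));
            /* the new values take effect once the event reply is sent */
    }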

Signed-off-by: Mihai Donțu 
Co-developed-by: Mircea Cîrjaliu 
Signed-off-by: Mircea Cîrjaliu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 29 +++
 arch/x86/include/asm/kvmi_host.h  |  2 +
 arch/x86/kvm/kvmi.c   | 22 +
 arch/x86/kvm/kvmi.h   |  2 +
 arch/x86/kvm/kvmi_msg.c   | 18 
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 83 +++
 virt/kvm/introspection/kvmi_int.h |  1 +
 virt/kvm/introspection/kvmi_msg.c |  6 +-
 9 files changed, 162 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index dbaedbee9dee..178832304458 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -601,6 +601,35 @@ registers, the special registers and the requested set of 
MSRs.
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 * -KVM_ENOMEM - there is not enough memory to allocate the reply
 
+12. KVMI_VCPU_SET_REGISTERS
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvm_regs;
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Sets the general purpose registers for the given vCPU. The changes become
+visible to other threads accessing the KVM vCPU structure after the event
+currently being handled is replied to.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EOPNOTSUPP - the command hasn't been received during an introspection 
event
+
 Events
 ==
 
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 05ade3a16b24..cc945151cb36 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -5,6 +5,8 @@
 #include 
 
 struct kvm_vcpu_arch_introspection {
+   struct kvm_regs delayed_regs;
+   bool have_delayed_regs;
 };
 
 struct kvm_arch_introspection {
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index fa9b20277dad..39638af7757e 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -118,3 +118,25 @@ int kvmi_arch_cmd_vcpu_get_registers(struct kvm_vcpu *vcpu,
 
return err ? -KVM_EINVAL : 0;
 }
+
+void kvmi_arch_cmd_vcpu_set_registers(struct kvm_vcpu *vcpu,
+ const struct kvm_regs *regs)
+{
+   struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+   struct kvm_regs *dest = &vcpui->arch.delayed_regs;
+
+   memcpy(dest, regs, sizeof(*dest));
+
+   vcpui->arch.have_delayed_regs = true;
+}
+
+void kvmi_arch_post_reply(struct kvm_vcpu *vcpu)
+{
+   struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+   if (!vcpui->arch.have_delayed_regs)
+   return;
+
+   kvm_arch_vcpu_set_regs(vcpu, &vcpui->arch.delayed_regs, false);
+   vcpui->arch.have_delayed_regs = false;
+}
diff --git a/arch/x86/kvm/kvmi.h b/arch/x86/kvm/kvmi.h
index 7aab4aaabcda..4eeb0c900083 100644
--- a/arch/x86/kvm/kvmi.h
+++ b/arch/x86/kvm/kvmi.h
@@ -5,5 +5,7 @@
 int kvmi_arch_cmd_vcpu_get_registers(struct kvm_vcpu *vcpu,
const struct kvmi_vcpu_get_registers *req,
struct kvmi_vcpu_get_registers_reply *rpl);
+void kvmi_arch_cmd_vcpu_set_registers(struct kvm_vcpu *vcpu,
+ const struct kvm_regs *regs);
 
 #endif
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index fd837c241340..4046a5c4d306 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -90,9 +90,27 @@ static int handle_vcpu_get_registers(const struct 
kvmi_vcpu_msg_job *job,
return err;
 }
 
+static int handle_vcpu_set_registers(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *req)
+{
+   const struct kvm_regs *regs = req;
+   int ec = 0;
+
+   if (msg->size < sizeof(*regs))
+   ec = -KVM_EINVAL;
+   else if (!VCPUI(job->vcpu)->waiting_for_reply)
+   ec = -KVM_EOPNOTSUPP;
+   else
+   kvmi_arch_cmd_vcpu_set_registers(job->vcpu, regs);
+
+   return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_GET_INFO]  = handle_vcpu_get_info,
[KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers,
+   [KVMI_VCPU_SET_REGISTERS] = handle_vcpu_

[PATCH v10 22/81] KVM: x86: export kvm_arch_vcpu_set_guest_debug()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function is needed in order to notify the introspection tool of
guest breakpoints through KVMI_VCPU_EVENT_BP events.
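
As an illustration only, a hedged sketch of the kind of caller this
export enables (the helper below is hypothetical): the introspection
side can toggle #BP interception on a vCPU that is already loaded,
without the vcpu_load()/vcpu_put() done by the ioctl wrapper.

    static int kvmi_set_bp_intercept(struct kvm_vcpu *vcpu, bool enable)
    {
            struct kvm_guest_debug dbg = {};

            if (enable)
                    dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;

            return kvm_arch_vcpu_set_guest_debug(vcpu, &dbg);
    }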

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c   | 18 +-
 include/linux/kvm_host.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 816801d6c95d..00ab76366868 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9684,14 +9684,12 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return ret;
 }
 
-int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
-   struct kvm_guest_debug *dbg)
+int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
+ struct kvm_guest_debug *dbg)
 {
unsigned long rflags;
int i, r;
 
-   vcpu_load(vcpu);
-
if (dbg->control & (KVM_GUESTDBG_INJECT_DB | KVM_GUESTDBG_INJECT_BP)) {
r = -EBUSY;
if (vcpu->arch.exception.pending)
@@ -9737,10 +9735,20 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu 
*vcpu,
r = 0;
 
 out:
-   vcpu_put(vcpu);
return r;
 }
 
+int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
+   struct kvm_guest_debug *dbg)
+{
+   int ret;
+
+   vcpu_load(vcpu);
+   ret = kvm_arch_vcpu_set_guest_debug(vcpu, dbg);
+   vcpu_put(vcpu);
+   return ret;
+}
+
 /*
  * Translate a guest virtual address to a guest physical address.
  */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d622d8bd339..2c640ea9d7ba 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -919,6 +919,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg);
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
+int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
+ struct kvm_guest_debug *dbg);
 
 int kvm_arch_init(void *opaque);
 void kvm_arch_exit(void);

[PATCH v10 71/81] KVM: introspection: restore the state of descriptor-table register interception on unhook

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This commit also ensures that the introspection tool and userspace do
not disable the descriptor-table access VM-exit for each other.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h |  4 +++
 arch/x86/kvm/kvmi.c  | 45 
 arch/x86/kvm/svm/svm.c   |  3 +++
 arch/x86/kvm/vmx/vmx.c   |  3 +++
 4 files changed, 55 insertions(+)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index a24ba87036f7..a872277eba67 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -17,6 +17,7 @@ struct kvmi_interception {
bool restore_interception;
struct kvmi_monitor_interception breakpoint;
struct kvmi_monitor_interception cr3w;
+   struct kvmi_monitor_interception descriptor;
 };
 
 struct kvm_vcpu_arch_introspection {
@@ -48,6 +49,7 @@ bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool 
enable);
 void kvmi_enter_guest(struct kvm_vcpu *vcpu);
 void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
   u64 old_value, u64 new_value);
+bool kvmi_monitor_desc_intercept(struct kvm_vcpu *vcpu, bool enable);
 bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, bool write);
 
 #else /* CONFIG_KVM_INTROSPECTION */
@@ -64,6 +66,8 @@ static inline bool kvmi_monitor_cr3w_intercept(struct 
kvm_vcpu *vcpu,
 static inline void kvmi_enter_guest(struct kvm_vcpu *vcpu) { }
 static inline void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
u64 old_value, u64 new_value) { }
+static inline bool kvmi_monitor_desc_intercept(struct kvm_vcpu *vcpu,
+  bool enable) { return false; }
 static inline bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor,
 bool write) { return true; }
 
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 3d5b041de634..4106ae63a115 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -286,12 +286,52 @@ static void kvmi_arch_disable_cr3w_intercept(struct 
kvm_vcpu *vcpu)
vcpu->arch.kvmi->cr3w.kvm_intercepted = false;
 }
 
+/*
+ * Returns true if one side (kvm or kvmi) tries to disable the descriptor
+ * interception while the other side is still tracking it.
+ */
+bool kvmi_monitor_desc_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   struct kvmi_interception *arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
+
+   return (arch_vcpui && arch_vcpui->descriptor.monitor_fct(vcpu, enable));
+}
+EXPORT_SYMBOL(kvmi_monitor_desc_intercept);
+
+static bool monitor_desc_fct_kvmi(struct kvm_vcpu *vcpu, bool enable)
+{
+   vcpu->arch.kvmi->descriptor.kvmi_intercepted = enable;
+
+   if (enable)
+   vcpu->arch.kvmi->descriptor.kvm_intercepted =
+   kvm_x86_ops.desc_intercepted(vcpu);
+   else if (vcpu->arch.kvmi->descriptor.kvm_intercepted)
+   return true;
+
+   return false;
+}
+
+static bool monitor_desc_fct_kvm(struct kvm_vcpu *vcpu, bool enable)
+{
+   if (!vcpu->arch.kvmi->descriptor.kvmi_intercepted)
+   return false;
+
+   vcpu->arch.kvmi->descriptor.kvm_intercepted = enable;
+
+   if (!enable)
+   return true;
+
+   return false;
+}
+
 static int kvmi_control_desc_intercept(struct kvm_vcpu *vcpu, bool enable)
 {
if (!kvm_x86_ops.desc_ctrl_supported())
return -KVM_EOPNOTSUPP;
 
+   vcpu->arch.kvmi->descriptor.monitor_fct = monitor_desc_fct_kvmi;
kvm_x86_ops.control_desc_intercept(vcpu, enable);
+   vcpu->arch.kvmi->descriptor.monitor_fct = monitor_desc_fct_kvm;
 
return 0;
 }
@@ -299,6 +339,9 @@ static int kvmi_control_desc_intercept(struct kvm_vcpu 
*vcpu, bool enable)
 static void kvmi_arch_disable_desc_intercept(struct kvm_vcpu *vcpu)
 {
kvmi_control_desc_intercept(vcpu, false);
+
+   vcpu->arch.kvmi->descriptor.kvmi_intercepted = false;
+   vcpu->arch.kvmi->descriptor.kvm_intercepted = false;
 }
 
 int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
@@ -370,11 +413,13 @@ bool kvmi_arch_vcpu_alloc_interception(struct kvm_vcpu 
*vcpu)
 
arch_vcpui->breakpoint.monitor_fct = monitor_bp_fct_kvm;
arch_vcpui->cr3w.monitor_fct = monitor_cr3w_fct_kvm;
+   arch_vcpui->descriptor.monitor_fct = monitor_desc_fct_kvm;
 
/*
 * paired with:
 *  - kvmi_monitor_bp_intercept()
 *  - kvmi_monitor_cr3w_intercept()
+*  - kvmi_monitor_desc_intercept()
 */
smp_wmb();
WRITE_ONCE(vcpu->arch.kvmi, arch_vcpui);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5b689d3fe3e4..834e4b6c4112 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1670,6 +1670,9 @@ static void svm_control_desc_intercept(struct kvm_vcpu 
*vcpu, bool enable)
 {
struc

[PATCH v10 28/81] KVM: x86: page track: add track_create_slot() callback

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This is used to add page access notifications as soon as a slot appears
or when a slot is moved.
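
A hedged sketch of a consumer hooking the new callback (the callback
body is illustrative; registration uses the existing
kvm_page_track_register_notifier()):

    static void my_track_create_slot(struct kvm *kvm,
                                     struct kvm_memory_slot *slot,
                                     unsigned long npages,
                                     struct kvm_page_track_notifier_node *node)
    {
            /* e.g. re-apply the page access restrictions for the GFNs
             * of interest that fall inside the created/moved slot */
    }

    static struct kvm_page_track_notifier_node my_node = {
            .track_create_slot = my_track_create_slot,
    };

    static void my_hook(struct kvm *kvm)
    {
            kvm_page_track_register_notifier(kvm, &my_node);
    }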

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_page_track.h | 13 -
 arch/x86/kvm/mmu/page_track.c | 17 -
 arch/x86/kvm/x86.c|  7 ---
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h 
b/arch/x86/include/asm/kvm_page_track.h
index 9a261e463eb3..00a66c4d4d3c 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -36,6 +36,17 @@ struct kvm_page_track_notifier_node {
void (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
const u8 *new, int bytes,
struct kvm_page_track_notifier_node *node);
+   /*
+* It is called when a memory slot is being created
+*
+* @kvm: the kvm where the memory slot is being created
+* @slot: the memory slot being created
+* @npages: the number of pages
+* @node: this node
+*/
+   void (*track_create_slot)(struct kvm *kvm, struct kvm_memory_slot *slot,
+ unsigned long npages,
+ struct kvm_page_track_notifier_node *node);
/*
 * It is called when memory slot is being moved or removed
 * users can drop write-protection for the pages in that memory slot
@@ -52,7 +63,7 @@ void kvm_page_track_init(struct kvm *kvm);
 void kvm_page_track_cleanup(struct kvm *kvm);
 
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
-int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
+int kvm_page_track_create_memslot(struct kvm *kvm, struct kvm_memory_slot 
*slot,
  unsigned long npages);
 
 void kvm_slot_page_track_add_page(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index d7a591a85af8..27a8d1a02e84 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -28,9 +28,12 @@ void kvm_page_track_free_memslot(struct kvm_memory_slot 
*slot)
}
 }
 
-int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
+int kvm_page_track_create_memslot(struct kvm *kvm, struct kvm_memory_slot 
*slot,
  unsigned long npages)
 {
+   struct kvm_page_track_notifier_head *head;
+   struct kvm_page_track_notifier_node *n;
+   int idx;
int  i;
 
for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
@@ -41,6 +44,18 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot 
*slot,
goto track_free;
}
 
+   head = &kvm->arch.track_notifier_head;
+
+   if (hlist_empty(&head->track_notifier_list))
+   return 0;
+
+   idx = srcu_read_lock(&head->track_srcu);
+   hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
+   srcu_read_lock_held(&head->track_srcu))
+   if (n->track_create_slot)
+   n->track_create_slot(kvm, slot, npages, n);
+   srcu_read_unlock(&head->track_srcu, idx);
+
return 0;
 
 track_free:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c2f13a275448..4d19da016c12 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10524,7 +10524,8 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct 
kvm_memory_slot *slot)
kvm_page_track_free_memslot(slot);
 }
 
-static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
+static int kvm_alloc_memslot_metadata(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
  unsigned long npages)
 {
int i;
@@ -10576,7 +10577,7 @@ static int kvm_alloc_memslot_metadata(struct 
kvm_memory_slot *slot,
}
}
 
-   if (kvm_page_track_create_memslot(slot, npages))
+   if (kvm_page_track_create_memslot(kvm, slot, npages))
goto out_free;
 
return 0;
@@ -10616,7 +10617,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
enum kvm_mr_change change)
 {
if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-   return kvm_alloc_memslot_metadata(memslot,
+   return kvm_alloc_memslot_metadata(kvm, memslot,
  mem->memory_size >> 
PAGE_SHIFT);
return 0;
 }

[PATCH v10 42/81] KVM: introspection: add KVMI_VM_READ_PHYSICAL/KVMI_VM_WRITE_PHYSICAL

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

These commands allow the introspection tool to read/write from/to
the guest memory.
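
As an illustration, a hedged sketch of reading one page from the
introspection tool's side; the send/receive helpers are assumptions,
while the message layout follows the selftests added by this patch:

    static int read_guest_page(int fd, __u64 gpa, void *dest, __u16 size)
    {
            struct {
                    struct kvmi_msg_hdr hdr;
                    struct kvmi_vm_read_physical cmd;
            } req = {};

            req.cmd.gpa = gpa;
            req.cmd.size = size;    /* offset + size must stay in one page */

            if (kvmi_send_cmd(fd, KVMI_VM_READ_PHYSICAL, &req.hdr, sizeof(req)))
                    return -1;

            /* the reply carries kvmi_error_code followed by 'size' bytes */
            return kvmi_recv_data(fd, dest, size);
    }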

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  68 ++
 include/uapi/linux/kvmi.h |  17 +++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 124 ++
 virt/kvm/introspection/kvmi.c |  98 ++
 virt/kvm/introspection/kvmi_int.h |   7 +
 virt/kvm/introspection/kvmi_msg.c |  44 +++
 6 files changed, 358 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index b4ce7db32150..7812d62240c0 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -365,6 +365,74 @@ the following events::
 * -KVM_EINVAL - the event ID is unknown (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
 
+6. KVMI_VM_READ_PHYSICAL
+
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_read_physical {
+   __u64 gpa;
+   __u16 size;
+   __u16 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   __u8 data[0];
+
+Reads from the guest memory.
+
+Currently, the size must be non-zero and the read must be restricted to
+one page (offset + size <= PAGE_SIZE).
+
+:Errors:
+
+* -KVM_ENOENT - the guest page doesn't exist
+* -KVM_EINVAL - the specified gpa/size pair is invalid
+* -KVM_EINVAL - the padding is not zero
+
+7. KVMI_VM_WRITE_PHYSICAL
+-
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_write_physical {
+   __u64 gpa;
+   __u16 size;
+   __u16 padding1;
+   __u32 padding2;
+   __u8  data[0];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Writes into the guest memory.
+
+Currently, the size must be non-zero and the write must be restricted to
+one page (offset + size <= PAGE_SIZE).
+
+:Errors:
+
+* -KVM_ENOENT - the guest page doesn't exist
+* -KVM_EINVAL - the specified gpa/size pair is invalid
+* -KVM_EINVAL - the padding is not zero
+
 Events
 ==
 
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 9a10ef2cd890..048afad01be6 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -24,6 +24,8 @@ enum {
KVMI_VM_CHECK_EVENT= KVMI_VM_MESSAGE_ID(3),
KVMI_VM_GET_INFO   = KVMI_VM_MESSAGE_ID(4),
KVMI_VM_CONTROL_EVENTS = KVMI_VM_MESSAGE_ID(5),
+   KVMI_VM_READ_PHYSICAL  = KVMI_VM_MESSAGE_ID(6),
+   KVMI_VM_WRITE_PHYSICAL = KVMI_VM_MESSAGE_ID(7),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -90,4 +92,19 @@ struct kvmi_vm_control_events {
__u32 padding2;
 };
 
+struct kvmi_vm_read_physical {
+   __u64 gpa;
+   __u16 size;
+   __u16 padding1;
+   __u32 padding2;
+};
+
+struct kvmi_vm_write_physical {
+   __u64 gpa;
+   __u16 size;
+   __u16 padding1;
+   __u32 padding2;
+   __u8  data[0];
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 430685a3371e..b493edb534b0 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -8,6 +8,7 @@
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include 
 #include 
+#include 
 
 #include "test_util.h"
 
@@ -24,6 +25,12 @@ static int socket_pair[2];
 #define Kvm_socket   socket_pair[0]
 #define Userspace_socket socket_pair[1]
 
+static vm_vaddr_t test_gva;
+static void *test_hva;
+static vm_paddr_t test_gpa;
+
+static int page_size;
+
 void setup_socket(void)
 {
int r;
@@ -420,8 +427,112 @@ static void test_cmd_vm_control_events(struct kvm_vm *vm)
allow_event(vm, id);
 }
 
+static void cmd_vm_write_page(__u64 gpa, __u64 size, void *p,
+ int expected_err)
+{
+   struct kvmi_vm_write_physical *cmd;
+   struct kvmi_msg_hdr *req;
+   size_t req_size;
+
+   req_size = sizeof(*req) + sizeof(*cmd) + size;
+   req = calloc(1, req_size);
+
+   cmd = (struct kvmi_vm_write_physical *)(req + 1);
+   cmd->gpa = gpa;
+   cmd->size = size;
+
+   memcpy(cmd + 1, p, size);
+
+   test_vm_command(KVMI_VM_WRITE_PHYSICAL, req, req_size, NULL, 0,
+   expected_err);
+
+   free(req);
+}
+
+static void write_guest_page(__u64 gpa, void *p)
+{
+   cmd_vm_write_page(gpa, page_size, p, 0);
+}
+
+static void write_with_invalid_arguments(__u64 gpa, __u64 size, void *p)
+{
+   cmd_vm_write_page(gpa, size, p, -KVM_EINVAL);
+}
+
+static void write_invalid_guest_page(struct kvm_vm *vm, void *p)
+{
+   __u64 gpa = vm->max_gfn << v

[PATCH v10 08/81] KVM: x86: add kvm_x86_ops.bp_intercepted()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

Both the introspection tool and the device manager can request #BP
interception. This function will be used to check if this interception
is already enabled by either side.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/svm.c  | 8 
 arch/x86/kvm/svm/svm.h  | 7 +++
 arch/x86/kvm/vmx/vmx.c  | 6 ++
 4 files changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f002cdb13a0b..e46fee59d4ed 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1104,6 +1104,7 @@ struct kvm_x86_ops {
void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
void (*vcpu_put)(struct kvm_vcpu *vcpu);
 
+   bool (*bp_intercepted)(struct kvm_vcpu *vcpu);
void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6dc337b9c231..95c7072cde8e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1738,6 +1738,13 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
vmcb_mark_dirty(svm->vmcb, VMCB_SEG);
 }
 
+static bool svm_bp_intercepted(struct kvm_vcpu *vcpu)
+{
+   struct vcpu_svm *svm = to_svm(vcpu);
+
+   return get_exception_intercept(svm, BP_VECTOR);
+}
+
 static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 {
struct vcpu_svm *svm = to_svm(vcpu);
@@ -4213,6 +4220,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.vcpu_blocking = svm_vcpu_blocking,
.vcpu_unblocking = svm_vcpu_unblocking,
 
+   .bp_intercepted = svm_bp_intercepted,
.update_exception_bitmap = update_exception_bitmap,
.get_msr_feature = svm_get_msr_feature,
.get_msr = svm_get_msr,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index fdff76eb6ceb..dca2dfe2e30d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -294,6 +294,13 @@ static inline void clr_exception_intercept(struct vcpu_svm 
*svm, u32 bit)
recalc_intercepts(svm);
 }
 
+static inline bool get_exception_intercept(struct vcpu_svm *svm, int bit)
+{
+   struct vmcb *vmcb = get_host_vmcb(svm);
+
+   return vmcb_is_intercept(&vmcb->control, INTERCEPT_EXCEPTION_OFFSET + 
bit);
+}
+
 static inline void svm_set_intercept(struct vcpu_svm *svm, int bit)
 {
struct vmcb *vmcb = get_host_vmcb(svm);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c3441e7e5a87..93a97aa3d847 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -807,6 +807,11 @@ static u32 vmx_read_guest_seg_ar(struct vcpu_vmx *vmx, 
unsigned seg)
return *p;
 }
 
+static bool vmx_bp_intercepted(struct kvm_vcpu *vcpu)
+{
+   return (vmcs_read32(EXCEPTION_BITMAP) & (1u << BP_VECTOR));
+}
+
 void update_exception_bitmap(struct kvm_vcpu *vcpu)
 {
u32 eb;
@@ -7611,6 +7616,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.vcpu_load = vmx_vcpu_load,
.vcpu_put = vmx_vcpu_put,
 
+   .bp_intercepted = vmx_bp_intercepted,
.update_exception_bitmap = update_exception_bitmap,
.get_msr_feature = vmx_get_msr_feature,
.get_msr = vmx_get_msr,

[PATCH v10 43/81] KVM: introspection: add vCPU related data

2020-11-25 Thread Adalbert Lazăr
From: Mircea Cîrjaliu 

Add an introspection structure to all vCPUs when the VM is hooked.

Signed-off-by: Mircea Cîrjaliu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h |  3 ++
 include/linux/kvm_host.h |  1 +
 include/linux/kvmi_host.h|  6 
 virt/kvm/introspection/kvmi.c| 51 
 virt/kvm/kvm_main.c  |  2 ++
 5 files changed, 63 insertions(+)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 38c398262913..360a57dd9019 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -2,6 +2,9 @@
 #ifndef _ASM_X86_KVMI_HOST_H
 #define _ASM_X86_KVMI_HOST_H
 
+struct kvm_vcpu_arch_introspection {
+};
+
 struct kvm_arch_introspection {
 };
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 51e6a4d7e5c9..60347c3a0e95 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -321,6 +321,7 @@ struct kvm_vcpu {
bool ready;
struct kvm_vcpu_arch arch;
struct kvm_dirty_ring dirty_ring;
+   struct kvm_vcpu_introspection *kvmi;
 };
 
 static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index a59307dac6bf..9b0008c66321 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -6,6 +6,10 @@
 
 #include 
 
+struct kvm_vcpu_introspection {
+   struct kvm_vcpu_arch_introspection arch;
+};
+
 struct kvm_introspection {
struct kvm_arch_introspection arch;
struct kvm *kvm;
@@ -28,6 +32,7 @@ int kvmi_init(void);
 void kvmi_uninit(void);
 void kvmi_create_vm(struct kvm *kvm);
 void kvmi_destroy_vm(struct kvm *kvm);
+void kvmi_vcpu_uninit(struct kvm_vcpu *vcpu);
 
 int kvmi_ioctl_hook(struct kvm *kvm,
const struct kvm_introspection_hook *hook);
@@ -45,6 +50,7 @@ static inline int kvmi_init(void) { return 0; }
 static inline void kvmi_uninit(void) { }
 static inline void kvmi_create_vm(struct kvm *kvm) { }
 static inline void kvmi_destroy_vm(struct kvm *kvm) { }
+static inline void kvmi_vcpu_uninit(struct kvm_vcpu *vcpu) { }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 2bff4707cc57..358dc6c2a969 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -118,8 +118,41 @@ void kvmi_uninit(void)
kvmi_cache_destroy();
 }
 
+static bool kvmi_alloc_vcpui(struct kvm_vcpu *vcpu)
+{
+   struct kvm_vcpu_introspection *vcpui;
+
+   vcpui = kzalloc(sizeof(*vcpui), GFP_KERNEL);
+   if (!vcpui)
+   return false;
+
+   vcpu->kvmi = vcpui;
+
+   return true;
+}
+
+static int kvmi_create_vcpui(struct kvm_vcpu *vcpu)
+{
+   if (!kvmi_alloc_vcpui(vcpu))
+   return -ENOMEM;
+
+   return 0;
+}
+
+static void kvmi_free_vcpui(struct kvm_vcpu *vcpu)
+{
+   kfree(vcpu->kvmi);
+   vcpu->kvmi = NULL;
+}
+
 static void kvmi_free(struct kvm *kvm)
 {
+   struct kvm_vcpu *vcpu;
+   int i;
+
+   kvm_for_each_vcpu(i, vcpu, kvm)
+   kvmi_free_vcpui(vcpu);
+
bitmap_free(kvm->kvmi->cmd_allow_mask);
bitmap_free(kvm->kvmi->event_allow_mask);
bitmap_free(kvm->kvmi->vm_event_enable_mask);
@@ -128,10 +161,19 @@ static void kvmi_free(struct kvm *kvm)
kvm->kvmi = NULL;
 }
 
+void kvmi_vcpu_uninit(struct kvm_vcpu *vcpu)
+{
+   mutex_lock(&vcpu->kvm->kvmi_lock);
+   kvmi_free_vcpui(vcpu);
+   mutex_unlock(&vcpu->kvm->kvmi_lock);
+}
+
 static struct kvm_introspection *
 kvmi_alloc(struct kvm *kvm, const struct kvm_introspection_hook *hook)
 {
struct kvm_introspection *kvmi;
+   struct kvm_vcpu *vcpu;
+   int i;
 
kvmi = kzalloc(sizeof(*kvmi), GFP_KERNEL);
if (!kvmi)
@@ -157,6 +199,15 @@ kvmi_alloc(struct kvm *kvm, const struct 
kvm_introspection_hook *hook)
 
atomic_set(&kvmi->ev_seq, 0);
 
+   kvm_for_each_vcpu(i, vcpu, kvm) {
+   int err = kvmi_create_vcpui(vcpu);
+
+   if (err) {
+   kvmi_free(kvm);
+   return NULL;
+   }
+   }
+
kvmi->kvm = kvm;
 
return kvmi;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f30d1bd9495a..55017a3f6283 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -418,6 +418,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm 
*kvm, unsigned id)
 
 void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
+   kvmi_vcpu_uninit(vcpu);
kvm_dirty_ring_free(&vcpu->dirty_ring);
kvm_arch_vcpu_destroy(vcpu);
 
@@ -3250,6 +3251,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 
id)
 
 unlock_vcpu_destroy:
mutex_unlock(&kvm->lock);
+   kvmi_vcpu_uninit(vcpu);
kvm_dirty_ring_free(&vcpu->dirty_ring);
 arch_vcpu_destroy:
kvm_arch_vcpu_destroy(vcpu);

[PATCH v10 14/81] KVM: x86: add kvm_x86_ops.desc_intercepted()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function will be used to test if descriptor-table register access
is already tracked by userspace.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/svm.c  | 15 +++
 arch/x86/kvm/vmx/vmx.c  |  8 
 3 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 730429cd2e3d..0e9144e23ce6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1132,6 +1132,7 @@ struct kvm_x86_ops {
void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
bool (*desc_ctrl_supported)(void);
void (*control_desc_intercept)(struct kvm_vcpu *vcpu, bool enable);
+   bool (*desc_intercepted)(struct kvm_vcpu *vcpu);
void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c8e56ad9cbb1..86f0dcf9fecd 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1660,6 +1660,20 @@ static void svm_control_desc_intercept(struct kvm_vcpu 
*vcpu, bool enable)
}
 }
 
+static inline bool svm_desc_intercepted(struct kvm_vcpu *vcpu)
+{
+   struct vcpu_svm *svm = to_svm(vcpu);
+
+   return (svm_is_intercept(svm, INTERCEPT_STORE_IDTR) ||
+   svm_is_intercept(svm, INTERCEPT_STORE_GDTR) ||
+   svm_is_intercept(svm, INTERCEPT_STORE_LDTR) ||
+   svm_is_intercept(svm, INTERCEPT_STORE_TR) ||
+   svm_is_intercept(svm, INTERCEPT_LOAD_IDTR) ||
+   svm_is_intercept(svm, INTERCEPT_LOAD_GDTR) ||
+   svm_is_intercept(svm, INTERCEPT_LOAD_LDTR) ||
+   svm_is_intercept(svm, INTERCEPT_LOAD_TR));
+}
+
 static void update_cr0_intercept(struct vcpu_svm *svm)
 {
ulong gcr0 = svm->vcpu.arch.cr0;
@@ -4307,6 +4321,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.set_gdt = svm_set_gdt,
.desc_ctrl_supported = svm_desc_ctrl_supported,
.control_desc_intercept = svm_control_desc_intercept,
+   .desc_intercepted = svm_desc_intercepted,
.set_dr7 = svm_set_dr7,
.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
.cache_reg = svm_cache_reg,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 20351e027898..5bd6a4add27e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3361,6 +3361,13 @@ static void vmx_set_gdt(struct kvm_vcpu *vcpu, struct 
desc_ptr *dt)
vmcs_writel(GUEST_GDTR_BASE, dt->address);
 }
 
+static bool vmx_desc_intercepted(struct kvm_vcpu *vcpu)
+{
+   struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+   return !!(secondary_exec_controls_get(vmx) & SECONDARY_EXEC_DESC);
+}
+
 static bool rmode_segment_valid(struct kvm_vcpu *vcpu, int seg)
 {
struct kvm_segment var;
@@ -7668,6 +7675,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.set_gdt = vmx_set_gdt,
.desc_ctrl_supported = vmx_desc_ctrl_supported,
.control_desc_intercept = vmx_control_desc_intercept,
+   .desc_intercepted = vmx_desc_intercepted,
.set_dr7 = vmx_set_dr7,
.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
.cache_reg = vmx_cache_reg,

[PATCH v10 62/81] KVM: introspection: restore the state of CR3 interception on unhook

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This commit also ensures that the introspection tool and userspace do
not disable the CR3-write VM-exit for each other.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h |  4 ++
 arch/x86/kvm/kvmi.c  | 67 ++--
 arch/x86/kvm/kvmi.h  |  4 +-
 arch/x86/kvm/kvmi_msg.c  |  4 +-
 arch/x86/kvm/svm/svm.c   |  5 +++
 arch/x86/kvm/vmx/vmx.c   |  5 +++
 6 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 7613088d0ae2..edbedf031467 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -16,6 +16,7 @@ struct kvmi_interception {
bool cleanup;
bool restore_interception;
struct kvmi_monitor_interception breakpoint;
+   struct kvmi_monitor_interception cr3w;
 };
 
 struct kvm_vcpu_arch_introspection {
@@ -34,6 +35,7 @@ bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 
dbg);
 bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
   unsigned long old_value, unsigned long *new_value);
 bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu);
+bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool enable);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -44,6 +46,8 @@ static inline bool kvmi_cr_event(struct kvm_vcpu *vcpu, 
unsigned int cr,
 unsigned long *new_value)
{ return true; }
 static inline bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu) { return false; 
}
+static inline bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu,
+   bool enable) { return false; }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 2bb6b4bb932b..8ad3698e5988 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -230,6 +230,59 @@ static void kvmi_arch_disable_bp_intercept(struct kvm_vcpu 
*vcpu)
vcpu->arch.kvmi->breakpoint.kvm_intercepted = false;
 }
 
+static bool monitor_cr3w_fct_kvmi(struct kvm_vcpu *vcpu, bool enable)
+{
+   vcpu->arch.kvmi->cr3w.kvmi_intercepted = enable;
+
+   if (enable)
+   vcpu->arch.kvmi->cr3w.kvm_intercepted =
+   kvm_x86_ops.cr3_write_intercepted(vcpu);
+   else if (vcpu->arch.kvmi->cr3w.kvm_intercepted)
+   return true;
+
+   return false;
+}
+
+static bool monitor_cr3w_fct_kvm(struct kvm_vcpu *vcpu, bool enable)
+{
+   if (!vcpu->arch.kvmi->cr3w.kvmi_intercepted)
+   return false;
+
+   vcpu->arch.kvmi->cr3w.kvm_intercepted = enable;
+
+   if (!enable)
+   return true;
+
+   return false;
+}
+
+/*
+ * Returns true if one side (kvm or kvmi) tries to disable the CR3 write
+ * interception while the other side is still tracking it.
+ */
+bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   struct kvmi_interception *arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
+
+   return (arch_vcpui && arch_vcpui->cr3w.monitor_fct(vcpu, enable));
+}
+EXPORT_SYMBOL(kvmi_monitor_cr3w_intercept);
+
+static void kvmi_control_cr3w_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   vcpu->arch.kvmi->cr3w.monitor_fct = monitor_cr3w_fct_kvmi;
+   kvm_x86_ops.control_cr3_intercept(vcpu, CR_TYPE_W, enable);
+   vcpu->arch.kvmi->cr3w.monitor_fct = monitor_cr3w_fct_kvm;
+}
+
+static void kvmi_arch_disable_cr3w_intercept(struct kvm_vcpu *vcpu)
+{
+   kvmi_control_cr3w_intercept(vcpu, false);
+
+   vcpu->arch.kvmi->cr3w.kvmi_intercepted = false;
+   vcpu->arch.kvmi->cr3w.kvm_intercepted = false;
+}
+
 int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
unsigned int event_id, bool enable)
 {
@@ -269,6 +322,7 @@ void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 
gva, u8 insn_len)
 static void kvmi_arch_restore_interception(struct kvm_vcpu *vcpu)
 {
kvmi_arch_disable_bp_intercept(vcpu);
+   kvmi_arch_disable_cr3w_intercept(vcpu);
 }
 
 bool kvmi_arch_clean_up_interception(struct kvm_vcpu *vcpu)
@@ -293,8 +347,13 @@ bool kvmi_arch_vcpu_alloc_interception(struct kvm_vcpu 
*vcpu)
return false;
 
arch_vcpui->breakpoint.monitor_fct = monitor_bp_fct_kvm;
+   arch_vcpui->cr3w.monitor_fct = monitor_cr3w_fct_kvm;
 
-   /* pair with kvmi_monitor_bp_intercept() */
+   /*
+* paired with:
+*  - kvmi_monitor_bp_intercept()
+*  - kvmi_monitor_cr3w_intercept()
+*/
smp_wmb();
WRITE_ONCE(vcpu->arch.kvmi, arch_vcpui);
 
@@ -326,7 +385,7 @@ void kvmi_arch_request_interception_cleanup(struct kvm_vcpu 
*vcpu,
 int kvmi_arch_cmd_vcpu_control_cr(struct kvm_vcpu *vcpu, int cr, bool enable)
 {
if (cr == 3)
-   kvm_x86_ops.control_cr3_intercept(vcpu, CR_TYPE_W, enable);
+

[PATCH v10 39/81] KVM: introspection: add KVM_INTROSPECTION_PREUNHOOK

2020-11-25 Thread Adalbert Lazăr
In certain situations (when the guest has to be paused, suspended,
migrated, etc.), the device manager will use this new ioctl in order to
trigger the KVMI_VM_EVENT_UNHOOK event. If the event is sent successfully
(the VM has an active introspection channel), the device manager should
delay the action (pause/suspend/...) to give the introspection tool the
chance to remove its hooks (e.g. breakpoints) while the guest is still
running. Once a timeout is reached or the introspection tool has closed
the socket, the device manager should resume the action.
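
A hedged sketch of the device-manager sequence described above; only the
two introspection ioctls come from this series, the wait helper is
illustrative:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static void prepare_to_pause(int vm_fd)
    {
            if (ioctl(vm_fd, KVM_INTROSPECTION_PREUNHOOK) == 0) {
                    /* the unhook event was sent; give the introspection
                     * tool a few seconds to remove its hooks (it closes
                     * its end of the socket when done) */
                    wait_for_socket_close_or_timeout();
            }

            /* free the introspection structures in any case */
            ioctl(vm_fd, KVM_INTROSPECTION_UNHOOK);

            /* ... proceed with the pause/suspend/migration ... */
    }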

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/api.rst| 28 
 Documentation/virt/kvm/kvmi.rst   |  7 ---
 include/linux/kvmi_host.h |  1 +
 include/uapi/linux/kvm.h  |  2 ++
 virt/kvm/introspection/kvmi.c | 30 ++
 virt/kvm/introspection/kvmi_int.h |  1 +
 virt/kvm/introspection/kvmi_msg.c |  5 +
 virt/kvm/kvm_main.c   |  5 +
 8 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index f3698413ddab..e6544d94e040 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4946,6 +4946,34 @@ the event is disallowed.
 Unless set to -1 (meaning all events), id must be a event ID
 (e.g. KVMI_VM_EVENT_UNHOOK, KVMI_VCPU_EVENT_CR, etc.)
 
+4.131 KVM_INTROSPECTION_PREUNHOOK
+-
+
+:Capability: KVM_CAP_INTROSPECTION
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: none
+:Returns: 0 on success, a negative value on error
+
+Errors:
+
+  == 
+  EFAULT the VM is not introspected yet (use KVM_INTROSPECTION_HOOK)
+  ENOENT the socket (passed with KVM_INTROSPECTION_HOOK) had an error
+  ENOENT the introspection tool didn't subscribe
+ to this type of introspection event (unhook)
+  == 
+
+This ioctl is used to inform KVM that the current VM is about to be
+paused/suspended/migrated/etc.
+
+KVM should send an 'unhook' introspection event to the introspection tool.
+
+If this ioctl is successful, the userspace should give the
+introspection tool a chance to unhook the VM and then it should use
+KVM_INTROSPECTION_UNHOOK to make sure all the introspection structures
+are freed.
+
 5. The kvm_run structure
 
 
diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 6f8583d4aeb2..33490bc9d1c1 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -183,9 +183,10 @@ becomes necessary to remove them before the guest is 
suspended, moved
 (migrated) or a snapshot with memory is created.
 
 The actions are normally performed by the device manager. In the case
-of QEMU, it will use another ioctl to notify the introspection tool and
-wait for a limited amount of time (a few seconds) for a confirmation that
-is OK to proceed (the introspection tool will close the connection).
+of QEMU, it will use the *KVM_INTROSPECTION_PREUNHOOK* ioctl to trigger
+the *KVMI_VM_EVENT_UNHOOK* event and wait for a limited amount of time (a
+few seconds) for a confirmation that it is OK to proceed. The introspection
+tool will close the connection to signal this.
 
 Live migrations
 ---
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index a5ede07686b9..81eac9f53a3f 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -32,6 +32,7 @@ int kvmi_ioctl_command(struct kvm *kvm,
   const struct kvm_introspection_feature *feat);
 int kvmi_ioctl_event(struct kvm *kvm,
 const struct kvm_introspection_feature *feat);
+int kvmi_ioctl_preunhook(struct kvm *kvm);
 
 #else
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c69140893f68..a29fbdf93b84 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1661,6 +1661,8 @@ struct kvm_introspection_feature {
 #define KVM_INTROSPECTION_COMMAND _IOW(KVMIO, 0xca, struct 
kvm_introspection_feature)
 #define KVM_INTROSPECTION_EVENT   _IOW(KVMIO, 0xcb, struct 
kvm_introspection_feature)
 
+#define KVM_INTROSPECTION_PREUNHOOK  _IO(KVMIO, 0xcc)
+
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
 #define KVM_DEV_ASSIGN_MASK_INTX   (1 << 2)
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 9125e6c92ded..72dd41915048 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -383,3 +383,33 @@ int kvmi_ioctl_command(struct kvm *kvm,
mutex_unlock(&kvm->kvmi_lock);
return err;
 }
+
+static bool kvmi_unhook_event(struct kvm_introspection *kvmi)
+{
+   int err;
+
+   err = kvmi_msg_send_unhook(kvmi);
+
+   return !err;
+}
+
+int kvmi_ioctl_preunhook(struct kvm *kvm)
+

[PATCH v10 41/81] KVM: introspection: add KVMI_VM_CONTROL_EVENTS

2020-11-25 Thread Adalbert Lazăr
By default, all introspection VM events are disabled. The introspection
tool must explicitly enable the VM events it wants to receive. With this
command it can enable/disable any VM event (e.g. KVMI_VM_EVENT_UNHOOK)
if allowed by the device manager.
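
A hedged sketch of the tool-side usage (the send helper is an
assumption): since no VM event is delivered by default, a tool that
wants the unhook notification enables it right after the introspection
channel is established.

    static int subscribe_unhook(int fd)
    {
            struct {
                    struct kvmi_msg_hdr hdr;
                    struct kvmi_vm_control_events cmd;
            } req = {};

            req.cmd.event_id = KVMI_VM_EVENT_UNHOOK;
            req.cmd.enable = 1;

            return kvmi_send_cmd(fd, KVMI_VM_CONTROL_EVENTS, &req.hdr,
                                 sizeof(req));
    }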

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 42 ++--
 include/linux/kvmi_host.h |  2 +
 include/uapi/linux/kvmi.h | 16 +--
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 48 +++
 virt/kvm/introspection/kvmi.c | 30 +++-
 virt/kvm/introspection/kvmi_int.h |  3 ++
 virt/kvm/introspection/kvmi_msg.c | 29 +--
 7 files changed, 158 insertions(+), 12 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index e9c40c7ae154..b4ce7db32150 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -332,10 +332,44 @@ This command is always allowed.
 
 Returns the number of online vCPUs.
 
+5. KVMI_VM_CONTROL_EVENTS
+-
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_control_events {
+   __u16 event_id;
+   __u8 enable;
+   __u8 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Enables/disables VM introspection events. This command can be used with
+the following events::
+
+   KVMI_VM_EVENT_UNHOOK
+
+:Errors:
+
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the event ID is unknown (use *KVMI_VM_CHECK_EVENT* first)
+* -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
+
 Events
 ==
 
 The VM introspection events are sent using the KVMI_VM_EVENT message id.
+No event is sent unless it is explicitly enabled.
 The message data begins with a common structure having the event id::
 
struct kvmi_event_hdr {
@@ -359,6 +393,8 @@ Specific event data can follow this common structure.
 
 :Returns: none
 
-This event is sent when the device manager has to pause/stop/migrate the
-guest (see **Unhooking**).  The introspection tool has a chance to unhook
-and close the KVMI channel (signaling that the operation can proceed).
+This event is sent when the device manager has to pause/stop/migrate
+the guest (see **Unhooking**) and the introspection has been enabled for
+this event (see **KVMI_VM_CONTROL_EVENTS**). The introspection tool has
+a chance to unhook and close the introspection socket (signaling that
+the operation can proceed).
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 6476c7d6a4d3..a59307dac6bf 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -18,6 +18,8 @@ struct kvm_introspection {
unsigned long *cmd_allow_mask;
unsigned long *event_allow_mask;
 
+   unsigned long *vm_event_enable_mask;
+
atomic_t ev_seq;
 };
 
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 18fb51078d48..9a10ef2cd890 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -19,10 +19,11 @@ enum {
 enum {
KVMI_VM_EVENT = KVMI_VM_MESSAGE_ID(0),
 
-   KVMI_GET_VERSION  = KVMI_VM_MESSAGE_ID(1),
-   KVMI_VM_CHECK_COMMAND = KVMI_VM_MESSAGE_ID(2),
-   KVMI_VM_CHECK_EVENT   = KVMI_VM_MESSAGE_ID(3),
-   KVMI_VM_GET_INFO  = KVMI_VM_MESSAGE_ID(4),
+   KVMI_GET_VERSION   = KVMI_VM_MESSAGE_ID(1),
+   KVMI_VM_CHECK_COMMAND  = KVMI_VM_MESSAGE_ID(2),
+   KVMI_VM_CHECK_EVENT= KVMI_VM_MESSAGE_ID(3),
+   KVMI_VM_GET_INFO   = KVMI_VM_MESSAGE_ID(4),
+   KVMI_VM_CONTROL_EVENTS = KVMI_VM_MESSAGE_ID(5),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -82,4 +83,11 @@ struct kvmi_event_hdr {
__u16 padding[3];
 };
 
+struct kvmi_vm_control_events {
+   __u16 event_id;
+   __u8 enable;
+   __u8 padding1;
+   __u32 padding2;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 01b260379c2a..430685a3371e 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -332,6 +332,31 @@ static void trigger_event_unhook_notification(struct 
kvm_vm *vm)
errno, strerror(errno));
 }
 
+static void cmd_vm_control_events(__u16 event_id, __u8 enable,
+ int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vm_control_events cmd;
+   } req = {};
+
+   req.cmd.event_id = event_id;
+   req.cmd.enable = enable;
+
+   test_vm_command(KVMI_VM_CONTROL_EVENTS, &req.hdr, sizeof(req),
+   NULL, 0, expected_err);
+}
+
+static void enable_vm_event(__u16 event_id)
+{
+   cmd_vm_control_events(event_id, 1, 0);
+}
+
+static void disable_vm_event

[PATCH v10 07/81] KVM: x86: avoid injecting #PF when emulate the VMCALL instruction

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

It can happen that we end up emulating the VMCALL instruction as a
result of handling an EPT write fault. In this situation, the emulator
will try to unconditionally patch the correct hypercall opcode bytes
using emulator_write_emulated(). However, this last call uses the fault
GPA (if available) or otherwise walks the guest page tables at RIP. The
trouble begins when using VM introspection, where we forbid the use of
the fault GPA and fall back to the guest page table walk: in Windows
(8.1 and newer) the page that we try to write into is marked
read-execute, so emulator_write_emulated() fails and we inject a write
#PF, leading to a guest crash.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5951458408fb..816801d6c95d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8144,11 +8144,15 @@ static int emulator_fix_hypercall(struct 
x86_emulate_ctxt *ctxt)
struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
char instruction[3];
unsigned long rip = kvm_rip_read(vcpu);
+   int err;
 
kvm_x86_ops.patch_hypercall(vcpu, instruction);
 
-   return emulator_write_emulated(ctxt, rip, instruction, 3,
+   err = emulator_write_emulated(ctxt, rip, instruction, 3,
&ctxt->exception);
+   if (err == X86EMUL_PROPAGATE_FAULT)
+   err = X86EMUL_CONTINUE;
+   return err;
 }
 
 static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu)

[PATCH v10 52/81] KVM: introspection: add KVMI_VCPU_CONTROL_EVENTS

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

By default, all introspection events are disabled. The introspection tool
must explicitly enable the events it wants to receive. With this command
(KVMI_VCPU_CONTROL_EVENTS) it can enable/disable any vCPU event allowed
by the device manager.

Some vCPU events don't have to be explicitly enabled (and can't be
disabled) with this command because they are implicitly enabled or
requested by the use of certain commands. For example, if the
introspection tool uses the KVMI_VM_PAUSE_VCPU command, it expects to
receive a KVMI_VCPU_EVENT_PAUSE event.

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 48 +++
 include/linux/kvmi_host.h |  2 +
 include/uapi/linux/kvmi.h | 10 +++-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 46 ++
 virt/kvm/introspection/kvmi.c | 26 ++
 virt/kvm/introspection/kvmi_int.h |  3 ++
 virt/kvm/introspection/kvmi_msg.c | 24 +-
 7 files changed, 157 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index c86c83566c3d..a502cf9baead 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -367,6 +367,9 @@ the following events::
 
KVMI_VM_EVENT_UNHOOK
 
+The vCPU events (e.g. *KVMI_VCPU_EVENT_PAUSE*) are controlled with
+the *KVMI_VCPU_CONTROL_EVENTS* command.
+
 :Errors:
 
 * -KVM_EINVAL - the padding is not zero
@@ -509,6 +512,51 @@ command) before returning to guest.
 *KVMI_VCPU_EVENT_PAUSE* events
 * -KVM_EPERM  - the *KVMI_VCPU_EVENT_PAUSE* event is disallowed
 
+10. KVMI_VCPU_CONTROL_EVENTS
+
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_control_events {
+   __u16 event_id;
+   __u8 enable;
+   __u8 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Enables/disables vCPU introspection events.
+
+When an event is enabled, the introspection tool is notified and
+must reply with: continue, retry, crash, etc. (see **Events** below).
+
+The following vCPU events don't have to be enabled and can't be disabled,
+because these are sent as a result of certain commands (but they can be
+disallowed by the device manager) ::
+
+   KVMI_VCPU_EVENT_PAUSE
+
+The VM events (e.g. *KVMI_VM_EVENT_UNHOOK*) are controlled with
+the *KVMI_VM_CONTROL_EVENTS* command.
+
+:Errors:
+
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the event ID is unknown (use *KVMI_VM_CHECK_EVENT* first)
+* -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 4a43e51a44c9..5e5d255e5a2c 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -31,6 +31,8 @@ struct kvm_vcpu_introspection {
 
struct kvmi_vcpu_reply reply;
bool waiting_for_reply;
+
+   unsigned long *ev_enable_mask;
 };
 
 struct kvm_introspection {
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 757d4b84f473..acd00e883dc9 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -35,7 +35,8 @@ enum {
 enum {
KVMI_VCPU_EVENT = KVMI_VCPU_MESSAGE_ID(0),
 
-   KVMI_VCPU_GET_INFO = KVMI_VCPU_MESSAGE_ID(1),
+   KVMI_VCPU_GET_INFO   = KVMI_VCPU_MESSAGE_ID(1),
+   KVMI_VCPU_CONTROL_EVENTS = KVMI_VCPU_MESSAGE_ID(2),
 
KVMI_NEXT_VCPU_MESSAGE
 };
@@ -148,4 +149,11 @@ struct kvmi_vcpu_event_reply {
__u32 padding2;
 };
 
+struct kvmi_vcpu_control_events {
+   __u16 event_id;
+   __u8 enable;
+   __u8 padding1;
+   __u32 padding2;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 4c9dc6560ad9..5948f9b79ed0 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -763,6 +763,51 @@ static void test_pause(struct kvm_vm *vm)
allow_event(vm, KVMI_VCPU_EVENT_PAUSE);
 }
 
+static void cmd_vcpu_control_event(struct kvm_vm *vm, __u16 event_id,
+  __u8 enable, int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_control_events cmd;
+   } req = {};
+
+   req.cmd.event_id = event_id;
+   req.cmd.enable = enable;
+
+   test_vcpu0_command(vm, KVMI_VCPU_CONTROL_EVENTS,
+  &req.hdr, sizeof(req), NULL, 0,
+  expected_err);
+}
+

[PATCH v10 74/81] KVM: introspection: add KVMI_VM_SET_PAGE_ACCESS

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command sets the spte access bits (rwx) for an array of guest
physical addresses (through the page tracking subsystem).

These GPAs, with the requested access bits, are also kept in a radix
tree in order to filter out the #PF events which are of no interest to
the introspection tool and to reapply the settings when a memory slot
is moved.
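
For illustration, a hedged sketch that write-protects a single guest
page, mirroring the allocation pattern used by the selftests in this
series (the send helper is an assumption):

    #include <stdlib.h>

    static int write_protect_gpa(int fd, __u64 gpa)
    {
            struct kvmi_vm_set_page_access *cmd;
            struct kvmi_msg_hdr *req;
            size_t req_size;
            int err;

            req_size = sizeof(*req) + sizeof(*cmd) + sizeof(cmd->entries[0]);
            req = calloc(1, req_size);
            if (!req)
                    return -1;

            cmd = (struct kvmi_vm_set_page_access *)(req + 1);
            cmd->count = 1;
            cmd->entries[0].gpa = gpa;
            /* clear 'w' but keep 'r' and 'x'; setting all three bits
             * would make the introspection side forget this address */
            cmd->entries[0].access = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_X;

            err = kvmi_send_cmd(fd, KVMI_VM_SET_PAGE_ACCESS, req, req_size);
            free(req);
            return err;
    }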

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  59 +
 arch/x86/include/asm/kvm_host.h   |   2 +
 arch/x86/include/asm/kvmi_host.h  |   7 ++
 arch/x86/kvm/kvmi.c   |  40 ++
 include/linux/kvmi_host.h |   3 +
 include/uapi/linux/kvmi.h |  20 +++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  |  50 
 virt/kvm/introspection/kvmi.c | 119 +-
 virt/kvm/introspection/kvmi_int.h |  10 ++
 virt/kvm/introspection/kvmi_msg.c |  59 +
 10 files changed, 368 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 3466db72e5e8..1540f75c4462 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -965,6 +965,65 @@ to control events for any other register will fail with 
-KVM_EINVAL::
 * -KVM_EPERM  - the interception of the selected MSR is disallowed
 from userspace (KVM_X86_SET_MSR_FILTER)
 
+23. KVMI_VM_SET_PAGE_ACCESS
+---------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_set_page_access {
+   __u16 count;
+   __u16 padding1;
+   __u32 padding2;
+   struct kvmi_page_access_entry entries[0];
+   };
+
+where::
+
+   struct kvmi_page_access_entry {
+   __u64 gpa;
+   __u8 access;
+   __u8 padding[7];
+   };
+
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Sets the access bits (rwx) for an array of ``count`` guest physical
+addresses (``gpa``).
+
+The valid access bits are::
+
+   KVMI_PAGE_ACCESS_R
+   KVMI_PAGE_ACCESS_W
+   KVMI_PAGE_ACCESS_X
+
+
+The command will fail with -KVM_EINVAL if any of the specified combination
+of access bits is not supported or the address (``gpa``) is not valid
+(visible).
+
+The command will try to apply all changes and return the first error if
+some failed. The introspection tool should handle the rollback.
+
+In order to 'forget' an address, all three bits ('rwx') must be set.
+
+:Errors:
+
+* -KVM_EINVAL - the specified access bits combination is invalid
+* -KVM_EINVAL - the address is not valid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the message size is invalid
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_ENOMEM - there is not enough memory to add the page tracking structures
+
 Events
 ======
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d4e2fe493419..27406462aa05 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -45,6 +45,8 @@
 #define KVM_PRIVATE_MEM_SLOTS 3
 #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)
 
+#include 
+
 #define KVM_HALT_POLL_NS_DEFAULT 20
 
 #define KVM_IRQCHIP_NUM_PINS  KVM_IOAPIC_NUM_PINS
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 8822f0310156..420358c4a9ae 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_KVMI_HOST_H
 #define _ASM_X86_KVMI_HOST_H
 
+#include 
 #include 
 
 struct msr_data;
@@ -54,6 +55,12 @@ struct kvm_vcpu_arch_introspection {
 struct kvm_arch_introspection {
 };
 
+#define SLOTS_SIZE BITS_TO_LONGS(KVM_MEM_SLOTS_NUM)
+
+struct kvmi_arch_mem_access {
+   unsigned long active[KVM_PAGE_TRACK_MAX][SLOTS_SIZE];
+};
+
 #ifdef CONFIG_KVM_INTROSPECTION
 
 bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg);
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index e325dad88dbb..acd4756e0d78 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -919,3 +919,43 @@ bool kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data 
*msr)
 
return ret;
 }
+
+static const struct {
+   unsigned int allow_bit;
+   enum kvm_page_track_mode track_mode;
+} track_modes[] = {
+   { KVMI_PAGE_ACCESS_R, KVM_PAGE_TRACK_PREREAD },
+   { KVMI_PAGE_ACCESS_W, KVM_PAGE_TRACK_PREWRITE },
+   { KVMI_PAGE_ACCESS_X, KVM_PAGE_TRACK_PREEXEC },
+};
+
+void kvmi_arch_update_page_tracking(struct kvm *kvm,
+   struct kvm_memory_slot *slot,
+   struct kvmi_mem_access *m)
+{
+   struct kvmi_arch_mem_access *arch = &m->arch;
+   int i;
+
+   if (!slot) {
+   slot = gfn_to_memslot(kvm, m->gfn);
+  
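
To illustrate the variable-length request described in the documentation
above, an introspection tool might build it roughly like this (a sketch
only: the connected socket ``fd`` and the helper are assumptions, and the
``kvmi_msg_hdr`` layout is the one used throughout the series)::

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <linux/types.h>
    #include <linux/kvmi.h>

    /* Write-protect one guest page: leave only the R and X bits set. */
    static int vm_write_protect_gpa(int fd, __u32 seq, __u64 gpa)
    {
        struct kvmi_msg_hdr *hdr;
        struct kvmi_vm_set_page_access *cmd;
        struct kvmi_page_access_entry *entry;
        size_t sz = sizeof(*hdr) + sizeof(*cmd) + sizeof(*entry);
        int ret;

        hdr = calloc(1, sz);
        if (!hdr)
            return -1;

        hdr->id = KVMI_VM_SET_PAGE_ACCESS;
        hdr->seq = seq;
        hdr->size = sz - sizeof(*hdr);

        cmd = (struct kvmi_vm_set_page_access *)(hdr + 1);
        cmd->count = 1;

        entry = &cmd->entries[0];
        entry->gpa = gpa;
        entry->access = KVMI_PAGE_ACCESS_R | KVMI_PAGE_ACCESS_X;

        ret = write(fd, hdr, sz) == (ssize_t)sz ? 0 : -1;
        free(hdr);
        return ret;
    }

Clearing the restriction later means sending the same entry again with all
three bits set, per the 'forget' rule in the documentation above.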

[PATCH v10 65/81] KVM: introspection: add KVMI_VCPU_EVENT_XSETBV

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This event is sent when an extended control register XCR is going to
be changed.

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 34 
 arch/x86/include/asm/kvmi_host.h  |  4 +
 arch/x86/include/uapi/asm/kvmi.h  |  7 ++
 arch/x86/kvm/kvmi.c   | 30 +++
 arch/x86/kvm/kvmi.h   |  2 +
 arch/x86/kvm/kvmi_msg.c   | 20 +
 arch/x86/kvm/x86.c|  6 ++
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 84 +++
 9 files changed, 188 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index ecf4207b42d0..24dc1867c1f1 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -541,6 +541,7 @@ the following events::
KVMI_VCPU_EVENT_BREAKPOINT
KVMI_VCPU_EVENT_CR
KVMI_VCPU_EVENT_HYPERCALL
+   KVMI_VCPU_EVENT_XSETBV
 
 When an event is enabled, the introspection tool is notified and
 must reply with: continue, retry, crash, etc. (see **Events** below).
@@ -1061,3 +1062,36 @@ other vCPU introspection event.
 (``nr``), exception code (``error_code``) and ``address`` are sent to
 the introspection tool, which should check if its exception has been
 injected or overridden.
+
+7. KVMI_VCPU_EVENT_XSETBV
+-------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_xsetbv {
+   __u8 xcr;
+   __u8 padding[7];
+   __u64 old_value;
+   __u64 new_value;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent when an extended control register XCR is going
+to be changed and the introspection has been enabled for this event
+(see *KVMI_VCPU_CONTROL_EVENTS*).
+
+``kvmi_vcpu_event`` (with the vCPU state), the extended control register
+number (``xcr``), the old value (``old_value``) and the new value
+(``new_value``) are sent to the introspection tool.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 97f5b1a01c9e..d66349208a6b 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -46,6 +46,8 @@ bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
 bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu);
 bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool enable);
 void kvmi_enter_guest(struct kvm_vcpu *vcpu);
+void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
+  u64 old_value, u64 new_value);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -59,6 +61,8 @@ static inline bool kvmi_cr3_intercepted(struct kvm_vcpu 
*vcpu) { return false; }
 static inline bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu,
bool enable) { return false; }
 static inline void kvmi_enter_guest(struct kvm_vcpu *vcpu) { }
+static inline void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
+   u64 old_value, u64 new_value) { }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index aa991fbab473..604a8b3d4ac2 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -95,4 +95,11 @@ struct kvmi_vcpu_inject_exception {
__u64 address;
 };
 
+struct kvmi_vcpu_event_xsetbv {
+   __u8 xcr;
+   __u8 padding[7];
+   __u64 old_value;
+   __u64 new_value;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 52b46d56ebb5..5219b6faf4b5 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -16,6 +16,7 @@ void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
set_bit(KVMI_VCPU_EVENT_CR, supported);
set_bit(KVMI_VCPU_EVENT_HYPERCALL, supported);
set_bit(KVMI_VCPU_EVENT_TRAP, supported);
+   set_bit(KVMI_VCPU_EVENT_XSETBV, supported);
 }
 
 static unsigned int kvmi_vcpu_mode(const struct kvm_vcpu *vcpu,
@@ -567,3 +568,32 @@ void kvmi_arch_send_pending_event(struct kvm_vcpu *vcpu)
kvmi_send_trap_event(vcpu);
}
 }
+
+static void __kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
+   u64 old_value, u64 new_value)
+{
+   u32 action;
+
+   action = kvmi_msg_send_vcpu_xsetbv(vcpu, xcr, old_value, new_value);
+   switch (action) {
+   case KVMI_EVENT_ACTION_CONTINUE:
+   break;
+   default:
+   kvmi_handle_common_event_actions(vcpu, action);
+   }
+}
+
+void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
+  
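
On the tool side, once enabled, this arrives as an ordinary vCPU event whose
payload ends with the structure above. A rough sketch of extracting it from
an already-received event, using the ``size`` field of the common
``kvmi_vcpu_event`` structure to locate the event-specific data (the helper
is illustrative; the field names follow the structures quoted in this
series)::

    #include <stdio.h>
    #include <linux/kvmi.h>
    #include <asm/kvmi.h>

    /* 'ev' points at the common kvmi_vcpu_event part of a received event. */
    static void handle_xsetbv(const struct kvmi_vcpu_event *ev)
    {
        const struct kvmi_vcpu_event_xsetbv *x =
            (const void *)((const char *)ev + ev->size);

        printf("vcpu%u: XCR%u changed 0x%llx -> 0x%llx\n",
               (unsigned)ev->vcpu, (unsigned)x->xcr,
               (unsigned long long)x->old_value,
               (unsigned long long)x->new_value);
    }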

[PATCH v10 72/81] KVM: introspection: add KVMI_VCPU_CONTROL_MSR and KVMI_VCPU_EVENT_MSR

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command is used to enable/disable introspection for a specific
MSR. The KVMI_VCPU_EVENT_MSR event is sent when the tracked MSR is going
to be changed. The introspection tool can respond by allowing the guest
to continue with normal execution or by discarding the change.

This is meant to prevent malicious changes to MSRs
such as MSR_IA32_SYSENTER_EIP.

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  79 +++
 arch/x86/include/asm/kvmi_host.h  |  12 ++
 arch/x86/include/uapi/asm/kvmi.h  |  18 +++
 arch/x86/kvm/kvmi.c   | 125 ++
 arch/x86/kvm/kvmi.h   |   3 +
 arch/x86/kvm/kvmi_msg.c   |  52 
 arch/x86/kvm/x86.c|   3 +
 include/uapi/linux/kvmi.h |   2 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 111 
 9 files changed, 405 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 649a679a485b..3466db72e5e8 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -542,6 +542,7 @@ the following events::
KVMI_VCPU_EVENT_CR
KVMI_VCPU_EVENT_DESCRIPTOR
KVMI_VCPU_EVENT_HYPERCALL
+   KVMI_VCPU_EVENT_MSR
KVMI_VCPU_EVENT_XSETBV
 
 When an event is enabled, the introspection tool is notified and
@@ -922,6 +923,48 @@ Returns the guest memory type for a specific guest 
physical address (``gpa``).
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+22. KVMI_VCPU_CONTROL_MSR
+-------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_control_msr {
+   __u8 enable;
+   __u8 padding1;
+   __u16 padding2;
+   __u32 msr;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Enables/disables introspection for a specific MSR and must be used
+in addition to *KVMI_VCPU_CONTROL_EVENTS* with the *KVMI_VCPU_EVENT_MSR*
+ID set.
+
+Currently, only MSRs within the following two ranges are supported. Trying
+to control events for any other register will fail with -KVM_EINVAL::
+
+   0  ... 0x1fff
+   0xc0000000 ... 0xc0001fff
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the specified MSR is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EPERM  - the interception of the selected MSR is disallowed
+from userspace (KVM_X86_SET_MSR_FILTER)
+
 Events
 ======
 
@@ -1260,3 +1303,39 @@ introspection tool.
KVMI_DESC_TR
 
 ``write`` is 1 if the descriptor was written, 0 otherwise.
+
+9. KVMI_VCPU_EVENT_MSR
+----------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_msr {
+   __u32 msr;
+   __u32 padding;
+   __u64 old_value;
+   __u64 new_value;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+   struct kvmi_vcpu_event_msr_reply {
+   __u64 new_val;
+   };
+
+This event is sent when a model specific register is going to be changed
+and the introspection has been enabled for this event and for this specific
+register (see **KVMI_VCPU_CONTROL_EVENTS**).
+
+``kvmi_vcpu_event`` (with the vCPU state), the MSR number (``msr``),
+the old value (``old_value``) and the new value (``new_value``) are sent
+to the introspection tool. The *CONTINUE* action will set the ``new_val``.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index a872277eba67..5a4fc5b80907 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -4,7 +4,10 @@
 
 #include 
 
+struct msr_data;
+
 #define KVMI_NUM_CR 5
+#define KVMI_NUM_MSR 0x2000
 
 struct kvmi_monitor_interception {
bool kvmi_intercepted;
@@ -18,6 +21,12 @@ struct kvmi_interception {
struct kvmi_monitor_interception breakpoint;
struct kvmi_monitor_interception cr3w;
struct kvmi_monitor_interception descriptor;
+   struct {
+   struct {
+   DECLARE_BITMAP(low, KVMI_NUM_MSR);
+   DECLARE_BITMAP(high, KVMI_NUM_MSR);
+   } kvmi_mask;
+   } msrw;
 };
 
 struct kvm_vcpu_arch_introspection {
@@ -51,6 +60,7 @@ void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
   u64 old_value, u64 new_value);
 bool kvmi_monitor_desc_intercept(struct kvm_vcpu *vcpu, bool enable);
 bool kvmi_descriptor_event(struct kvm_
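
From the tool's point of view the two pieces compose as follows:
KVMI_VCPU_EVENT_MSR is enabled once with KVMI_VCPU_CONTROL_EVENTS, then each
register of interest is selected with the KVMI_VCPU_CONTROL_MSR command
documented above. A minimal sketch of the latter (socket ``fd``, sequence
handling and the reply read are assumptions)::

    #include <string.h>
    #include <unistd.h>
    #include <linux/types.h>
    #include <linux/kvmi.h>
    #include <asm/kvmi.h>

    /* Start (or stop) intercepting one MSR, e.g. 0x176 (MSR_IA32_SYSENTER_EIP). */
    static int vcpu_control_msr(int fd, __u32 seq, __u16 vcpu,
                                __u32 msr, __u8 enable)
    {
        struct {
            struct kvmi_msg_hdr hdr;
            struct kvmi_vcpu_hdr vcpu_hdr;
            struct kvmi_vcpu_control_msr cmd;
        } req;

        memset(&req, 0, sizeof(req));
        req.hdr.id = KVMI_VCPU_CONTROL_MSR;
        req.hdr.seq = seq;
        req.hdr.size = sizeof(req) - sizeof(req.hdr);
        req.vcpu_hdr.vcpu = vcpu;
        req.cmd.enable = enable;
        req.cmd.msr = msr;

        return write(fd, &req, sizeof(req)) == sizeof(req) ? 0 : -1;
    }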

[PATCH v10 29/81] KVM: x86: page_track: add support for preread, prewrite and preexec

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

An access to a tracked memory page leads to one of two actions from the
introspection tool: either the access is allowed (maybe with different
data for the source operand) or the vCPU should re-enter the guest
(the page is not tracked anymore, the instruction was skipped/emulated by
the introspection tool, etc.). The new callbacks must return 'true'
for the first case and 'false' for the second.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_page_track.h |  48 +-
 arch/x86/kvm/mmu/mmu.c|  81 +
 arch/x86/kvm/mmu/mmu_internal.h   |   4 +
 arch/x86/kvm/mmu/page_track.c | 123 --
 4 files changed, 246 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h 
b/arch/x86/include/asm/kvm_page_track.h
index 00a66c4d4d3c..c10f0f65c77a 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -3,7 +3,10 @@
 #define _ASM_X86_KVM_PAGE_TRACK_H
 
 enum kvm_page_track_mode {
+   KVM_PAGE_TRACK_PREREAD,
+   KVM_PAGE_TRACK_PREWRITE,
KVM_PAGE_TRACK_WRITE,
+   KVM_PAGE_TRACK_PREEXEC,
KVM_PAGE_TRACK_MAX,
 };
 
@@ -22,6 +25,33 @@ struct kvm_page_track_notifier_head {
 struct kvm_page_track_notifier_node {
struct hlist_node node;
 
+   /*
+* It is called when guest is reading the read-tracked page
+* and the read emulation is about to happen.
+*
+* @vcpu: the vcpu where the read access happened.
+* @gpa: the physical address read by guest.
+* @gva: the virtual address read by guest.
+* @bytes: the read length.
+* @node: this node.
+*/
+   bool (*track_preread)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+ int bytes,
+ struct kvm_page_track_notifier_node *node);
+   /*
+* It is called when guest is writing the write-tracked page
+* and the write emulation hasn't happened yet.
+*
+* @vcpu: the vcpu where the write access happened.
+* @gpa: the physical address written by guest.
+* @gva: the virtual address written by guest.
+* @new: the data to be written to the address.
+* @bytes: the written length.
+* @node: this node
+*/
+   bool (*track_prewrite)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+  const u8 *new, int bytes,
+  struct kvm_page_track_notifier_node *node);
/*
 * It is called when guest is writing the write-tracked page
 * and write emulation is finished at that time.
@@ -36,6 +66,17 @@ struct kvm_page_track_notifier_node {
void (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
const u8 *new, int bytes,
struct kvm_page_track_notifier_node *node);
+   /*
+* It is called when guest is fetching from an exec-tracked page
+* and the fetch emulation is about to happen.
+*
+* @vcpu: the vcpu where the fetch access happened.
+* @gpa: the physical address fetched by guest.
+* @gva: the virtual address fetched by guest.
+* @node: this node.
+*/
+   bool (*track_preexec)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+ struct kvm_page_track_notifier_node *node);
/*
 * It is called when memory slot is being created
 *
@@ -49,7 +90,7 @@ struct kvm_page_track_notifier_node {
  struct kvm_page_track_notifier_node *node);
/*
 * It is called when memory slot is being moved or removed
-* users can drop write-protection for the pages in that memory slot
+* users can drop active protection for the pages in that memory slot
 *
 * @kvm: the kvm where memory slot being moved or removed
 * @slot: the memory slot being moved or removed
@@ -81,7 +122,12 @@ kvm_page_track_register_notifier(struct kvm *kvm,
 void
 kvm_page_track_unregister_notifier(struct kvm *kvm,
   struct kvm_page_track_notifier_node *n);
+bool kvm_page_track_preread(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+   int bytes);
+bool kvm_page_track_prewrite(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+const u8 *new, int bytes);
 void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
  const u8 *new, int bytes);
+bool kvm_page_track_preexec(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva);
 void kvm_page_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot);
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1631e2367085..36add6fb712f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1124,6 +
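
The new hooks are consumed the same way as the existing ``track_write`` one:
a user of the page tracking subsystem registers a notifier node and decides,
per access, whether emulation may proceed. A minimal kernel-side sketch (the
consumer, its node and its policy are made up for illustration; only the
notifier API extended by this patch is real)::

    #include <linux/kvm_host.h>
    #include <asm/kvm_page_track.h>

    static bool demo_track_prewrite(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
                                    const u8 *new, int bytes,
                                    struct kvm_page_track_notifier_node *node)
    {
        /*
         * Return true to let the write emulation continue, false to make
         * the vCPU re-enter the guest (e.g. the page was untracked or the
         * instruction was handled by the introspection tool).
         */
        return true;
    }

    static struct kvm_page_track_notifier_node demo_node = {
        .track_prewrite = demo_track_prewrite,
    };

    static void demo_hook(struct kvm *kvm)
    {
        /*
         * Pages still have to be tracked in KVM_PAGE_TRACK_PREWRITE mode
         * (via the slot page-track API) for the callback to fire.
         */
        kvm_page_track_register_notifier(kvm, &demo_node);
    }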

[PATCH v10 24/81] KVM: x86: export kvm_inject_pending_exception()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function is needed for the KVMI_VCPU_INJECT_EXCEPTION command.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c  | 52 +++--
 2 files changed, 31 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3a06a7799571..7dc1ebac8d91 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1514,6 +1514,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu);
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 bool kvm_rdpmc(struct kvm_vcpu *vcpu);
 
+bool kvm_inject_pending_exception(struct kvm_vcpu *vcpu);
 void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr);
 void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
 void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, unsigned long 
payload);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8eda5c3bd244..741505f405b1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8200,6 +8200,35 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
kvm_x86_ops.update_cr8_intercept(vcpu, tpr, max_irr);
 }
 
+bool kvm_inject_pending_exception(struct kvm_vcpu *vcpu)
+{
+   if (vcpu->arch.exception.pending) {
+   trace_kvm_inj_exception(vcpu->arch.exception.nr,
+   vcpu->arch.exception.has_error_code,
+   vcpu->arch.exception.error_code);
+
+   vcpu->arch.exception.pending = false;
+   vcpu->arch.exception.injected = true;
+
+   if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
+   __kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
+X86_EFLAGS_RF);
+
+   if (vcpu->arch.exception.nr == DB_VECTOR) {
+   kvm_deliver_exception_payload(vcpu);
+   if (vcpu->arch.dr7 & DR7_GD) {
+   vcpu->arch.dr7 &= ~DR7_GD;
+   kvm_update_dr7(vcpu);
+   }
+   }
+
+   kvm_x86_ops.queue_exception(vcpu);
+   return true;
+   }
+
+   return false;
+}
+
 static void inject_pending_event(struct kvm_vcpu *vcpu, bool 
*req_immediate_exit)
 {
int r;
@@ -8251,29 +8280,8 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, 
bool *req_immediate_exit
}
 
/* try to inject new event if pending */
-   if (vcpu->arch.exception.pending) {
-   trace_kvm_inj_exception(vcpu->arch.exception.nr,
-   vcpu->arch.exception.has_error_code,
-   vcpu->arch.exception.error_code);
-
-   vcpu->arch.exception.pending = false;
-   vcpu->arch.exception.injected = true;
-
-   if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT)
-   __kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
-X86_EFLAGS_RF);
-
-   if (vcpu->arch.exception.nr == DB_VECTOR) {
-   kvm_deliver_exception_payload(vcpu);
-   if (vcpu->arch.dr7 & DR7_GD) {
-   vcpu->arch.dr7 &= ~DR7_GD;
-   kvm_update_dr7(vcpu);
-   }
-   }
-
-   kvm_x86_ops.queue_exception(vcpu);
+   if (kvm_inject_pending_exception(vcpu))
can_inject = false;
-   }
 
/*
 * Finally, inject interrupt events.  If an event cannot be injected

[PATCH v10 21/81] KVM: x86: add kvm_x86_ops.control_singlestep()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function is needed for KVMI_VCPU_CONTROL_SINGLESTEP.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/vmx.c  | 11 +++
 2 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 45c72af05fa2..c2da5c24e825 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1305,6 +1305,7 @@ struct kvm_x86_ops {
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
 
u64 (*fault_gla)(struct kvm_vcpu *vcpu);
+   void (*control_singlestep)(struct kvm_vcpu *vcpu, bool enable);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 41ea1ee9d419..1c8fbd6209ce 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7648,6 +7648,16 @@ static u64 vmx_fault_gla(struct kvm_vcpu *vcpu)
return ~0ull;
 }
 
+static void vmx_control_singlestep(struct kvm_vcpu *vcpu, bool enable)
+{
+   if (enable)
+   exec_controls_setbit(to_vmx(vcpu),
+ CPU_BASED_MONITOR_TRAP_FLAG);
+   else
+   exec_controls_clearbit(to_vmx(vcpu),
+   CPU_BASED_MONITOR_TRAP_FLAG);
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
.hardware_unsetup = hardware_unsetup,
 
@@ -7788,6 +7798,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.cpu_dirty_log_size = vmx_cpu_dirty_log_size,
 
.fault_gla = vmx_fault_gla,
+   .control_singlestep = vmx_control_singlestep,
 };
 
 static __init int hardware_setup(void)

[PATCH v10 59/81] KVM: introspection: restore the state of #BP interception on unhook

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This commit also ensures that the #BP interception is controlled
exclusively by either userspace or the introspection tool at any one time.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h  | 18 ++
 arch/x86/kvm/kvmi.c   | 60 +++
 arch/x86/kvm/x86.c|  5 ++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 14 +
 4 files changed, 97 insertions(+)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index b776be4bb49f..e008662f91a5 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -4,8 +4,15 @@
 
 #include 
 
+struct kvmi_monitor_interception {
+   bool kvmi_intercepted;
+   bool kvm_intercepted;
+   bool (*monitor_fct)(struct kvm_vcpu *vcpu, bool enable);
+};
+
 struct kvmi_interception {
bool restore_interception;
+   struct kvmi_monitor_interception breakpoint;
 };
 
 struct kvm_vcpu_arch_introspection {
@@ -16,4 +23,15 @@ struct kvm_vcpu_arch_introspection {
 struct kvm_arch_introspection {
 };
 
+#ifdef CONFIG_KVM_INTROSPECTION
+
+bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg);
+
+#else /* CONFIG_KVM_INTROSPECTION */
+
+static inline bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg)
+   { return false; }
+
+#endif /* CONFIG_KVM_INTROSPECTION */
+
 #endif /* _ASM_X86_KVMI_HOST_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index b4a7d581f68c..3fd73087276e 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -162,19 +162,72 @@ bool kvmi_arch_is_agent_hypercall(struct kvm_vcpu *vcpu)
&& subfunc2 == 0);
 }
 
+/*
+ * Returns true if one side (kvm or kvmi) tries to enable/disable the 
breakpoint
+ * interception while the other side is still tracking it.
+ */
+bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg)
+{
+   struct kvmi_interception *arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
+   u32 bp_mask = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;
+   bool enable = false;
+
+   if ((dbg & bp_mask) == bp_mask)
+   enable = true;
+
+   return (arch_vcpui && arch_vcpui->breakpoint.monitor_fct(vcpu, enable));
+}
+EXPORT_SYMBOL(kvmi_monitor_bp_intercept);
+
+static bool monitor_bp_fct_kvmi(struct kvm_vcpu *vcpu, bool enable)
+{
+   if (enable) {
+   if (kvm_x86_ops.bp_intercepted(vcpu))
+   return true;
+   } else if (!vcpu->arch.kvmi->breakpoint.kvmi_intercepted)
+   return true;
+
+   vcpu->arch.kvmi->breakpoint.kvmi_intercepted = enable;
+
+   return false;
+}
+
+static bool monitor_bp_fct_kvm(struct kvm_vcpu *vcpu, bool enable)
+{
+   if (enable) {
+   if (kvm_x86_ops.bp_intercepted(vcpu))
+   return true;
+   } else if (!vcpu->arch.kvmi->breakpoint.kvm_intercepted)
+   return true;
+
+   vcpu->arch.kvmi->breakpoint.kvm_intercepted = enable;
+
+   return false;
+}
+
 static int kvmi_control_bp_intercept(struct kvm_vcpu *vcpu, bool enable)
 {
struct kvm_guest_debug dbg = {};
int err = 0;
 
+   vcpu->arch.kvmi->breakpoint.monitor_fct = monitor_bp_fct_kvmi;
if (enable)
dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;
 
err = kvm_arch_vcpu_set_guest_debug(vcpu, &dbg);
+   vcpu->arch.kvmi->breakpoint.monitor_fct = monitor_bp_fct_kvm;
 
return err;
 }
 
+static void kvmi_arch_disable_bp_intercept(struct kvm_vcpu *vcpu)
+{
+   kvmi_control_bp_intercept(vcpu, false);
+
+   vcpu->arch.kvmi->breakpoint.kvmi_intercepted = false;
+   vcpu->arch.kvmi->breakpoint.kvm_intercepted = false;
+}
+
 int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
unsigned int event_id, bool enable)
 {
@@ -213,6 +266,7 @@ void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 
gva, u8 insn_len)
 
 static void kvmi_arch_restore_interception(struct kvm_vcpu *vcpu)
 {
+   kvmi_arch_disable_bp_intercept(vcpu);
 }
 
 bool kvmi_arch_clean_up_interception(struct kvm_vcpu *vcpu)
@@ -238,6 +292,12 @@ bool kvmi_arch_vcpu_alloc_interception(struct kvm_vcpu 
*vcpu)
if (!arch_vcpui)
return false;
 
+   arch_vcpui->breakpoint.monitor_fct = monitor_bp_fct_kvm;
+
+   /* pair with kvmi_monitor_bp_intercept() */
+   smp_wmb();
+   WRITE_ONCE(vcpu->arch.kvmi, arch_vcpui);
+
return true;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index aad12df742df..824d9d20a6ea 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9756,6 +9756,11 @@ int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
kvm_queue_exception(vcpu, BP_VECTOR);
}
 
+   if (kvmi_monitor_bp_intercept(vcpu, dbg->control)) {
+   r = -EBUSY;
+   goto out;
+   }
+
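
From the device manager's side the exclusivity is visible through the
existing KVM_SET_GUEST_DEBUG ioctl, which now fails with EBUSY while the
introspection tool owns the #BP interception. A rough sketch of what that
looks like (the vCPU file descriptor and the error handling are
assumptions)::

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int try_sw_breakpoints(int vcpu_fd)
    {
        struct kvm_guest_debug dbg;

        memset(&dbg, 0, sizeof(dbg));
        dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;

        if (ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg) < 0) {
            /* EBUSY: the introspection tool still intercepts #BP. */
            perror("KVM_SET_GUEST_DEBUG");
            return -errno;
        }
        return 0;
    }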
   

[PATCH v10 17/81] KVM: x86: add kvm_x86_ops.control_msr_intercept()

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This is needed for the KVMI_VCPU_EVENT_MSR event, which is used to notify
the introspection tool about any change made to an MSR of interest.

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/svm/svm.c  | 11 +++
 arch/x86/kvm/vmx/vmx.c  |  7 +++
 3 files changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8586c9f4feba..01853453a659 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1116,6 +1116,8 @@ struct kvm_x86_ops {
void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
+   void (*control_msr_intercept)(struct kvm_vcpu *vcpu, unsigned int msr,
+ int type, bool enable);
bool (*msr_write_intercepted)(struct kvm_vcpu *vcpu, u32 msr);
u64 (*get_segment_base)(struct kvm_vcpu *vcpu, int seg);
void (*get_segment)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8d662ccf5b62..2bfefcfbddd7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -677,6 +677,16 @@ static void set_msr_interception(struct kvm_vcpu *vcpu, 
u32 *msrpm, u32 msr,
set_msr_interception_bitmap(vcpu, msrpm, msr, type, value);
 }
 
+static void svm_control_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
+ int type, bool enable)
+{
+   const struct vcpu_svm *svm = to_svm(vcpu);
+   u32 *msrpm = is_guest_mode(vcpu) ? svm->nested.msrpm :
+  svm->msrpm;
+
+   set_msr_interception(vcpu, msrpm, msr, type, enable);
+}
+
 u32 *svm_vcpu_alloc_msrpm(void)
 {
struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
@@ -4328,6 +4338,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.get_msr_feature = svm_get_msr_feature,
.get_msr = svm_get_msr,
.set_msr = svm_set_msr,
+   .control_msr_intercept = svm_control_msr_intercept,
.msr_write_intercepted = msr_write_intercepted,
.get_segment_base = svm_get_segment_base,
.get_segment = svm_get_segment,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d4833d3bf966..c1497b8e506c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3826,6 +3826,12 @@ static __always_inline void 
vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu,
vmx_disable_intercept_for_msr(vcpu, msr, type);
 }
 
+static void vmx_control_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
+ int type, bool enable)
+{
+   vmx_set_intercept_for_msr(vcpu, msr, type, enable);
+}
+
 static u8 vmx_msr_bitmap_mode(struct kvm_vcpu *vcpu)
 {
u8 mode = 0;
@@ -7658,6 +7664,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.get_msr_feature = vmx_get_msr_feature,
.get_msr = vmx_get_msr,
.set_msr = vmx_set_msr,
+   .control_msr_intercept = vmx_control_msr_intercept,
.msr_write_intercepted = msr_write_intercepted,
.get_segment_base = vmx_get_segment_base,
.get_segment = vmx_get_segment,

[PATCH v10 61/81] KVM: introspection: add KVMI_VCPU_CONTROL_CR and KVMI_VCPU_EVENT_CR

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

Using the KVMI_VCPU_CONTROL_CR command, the introspection tool subscribes
to KVMI_VCPU_EVENT_CR events that will be sent when a control register
(CR0, CR3 or CR4) is going to be changed.

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  73 +
 arch/x86/include/asm/kvmi_host.h  |  12 +++
 arch/x86/include/uapi/asm/kvmi.h  |  18 
 arch/x86/kvm/kvmi.c   |  78 ++
 arch/x86/kvm/kvmi.h   |   4 +
 arch/x86/kvm/kvmi_msg.c   |  44 
 arch/x86/kvm/vmx/vmx.c|   6 +-
 arch/x86/kvm/x86.c|  12 ++-
 include/uapi/linux/kvmi.h |   2 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 100 ++
 virt/kvm/introspection/kvmi_int.h |   2 +
 11 files changed, 348 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index f9c10d27ce14..85e14b82aa2f 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -539,6 +539,7 @@ Enables/disables vCPU introspection events. This command 
can be used with
 the following events::
 
KVMI_VCPU_EVENT_BREAKPOINT
+   KVMI_VCPU_EVENT_CR
KVMI_VCPU_EVENT_HYPERCALL
 
 When an event is enabled, the introspection tool is notified and
@@ -701,6 +702,40 @@ interceptions). By default it is enabled.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EINVAL - ``enable`` is not 1 or 0
 
+15. KVMI_VCPU_CONTROL_CR
+------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_control_cr {
+   __u8 cr;
+   __u8 enable;
+   __u16 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Enables/disables introspection for a specific control register and must
+be used in addition to *KVMI_VCPU_CONTROL_EVENTS* with the *KVMI_VCPU_EVENT_CR*
+ID set.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the specified control register is not CR0, CR3 or CR4
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ======
 
@@ -893,3 +928,41 @@ before returning this action.
 
 The *CONTINUE* action will cause the breakpoint exception to be reinjected
 (the OS will handle it).
+
+5. KVMI_VCPU_EVENT_CR
+---------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_event_hdr;
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_cr {
+   __u8 cr;
+   __u8 padding[7];
+   __u64 old_value;
+   __u64 new_value;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+   struct kvmi_vcpu_event_cr_reply {
+   __u64 new_val;
+   };
+
+This event is sent when a control register is going to be changed and the
+introspection has been enabled for this event and for this specific
+register (see **KVMI_VCPU_CONTROL_EVENTS**).
+
+``kvmi_vcpu_event`` (with the vCPU state), the control register number
+(``cr``), the old value (``old_value``) and the new value (``new_value``)
+are sent to the introspection tool. The *CONTINUE* action will set the
+``new_val``.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 161d1ae5a7cf..7613088d0ae2 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -4,6 +4,8 @@
 
 #include 
 
+#define KVMI_NUM_CR 5
+
 struct kvmi_monitor_interception {
bool kvmi_intercepted;
bool kvm_intercepted;
@@ -19,6 +21,8 @@ struct kvmi_interception {
 struct kvm_vcpu_arch_introspection {
struct kvm_regs delayed_regs;
bool have_delayed_regs;
+
+   DECLARE_BITMAP(cr_mask, KVMI_NUM_CR);
 };
 
 struct kvm_arch_introspection {
@@ -27,11 +31,19 @@ struct kvm_arch_introspection {
 #ifdef CONFIG_KVM_INTROSPECTION
 
 bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg);
+bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
+  unsigned long old_value, unsigned long *new_value);
+bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
 static inline bool kvmi_monitor_bp_intercept(struct kvm_vcpu *vcpu, u32 dbg)
{ return false; }
+static inline bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
+unsigned long old_value,
+unsigned long *new_value)
+   { return true; }
+static inline bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu) { return false; 
}
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/a
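
The CR event is the first one in this series whose reply carries data back
into KVM: the tool can rewrite the value about to be loaded. A sketch of
building such a reply, assuming the common reply layout described elsewhere
in this series and that the reply reuses the event message's id and
sequence number (the helper and the socket ``fd`` are illustrative)::

    #include <string.h>
    #include <unistd.h>
    #include <linux/types.h>
    #include <linux/kvmi.h>
    #include <asm/kvmi.h>

    /* Let a CR write continue, optionally with a value chosen by the tool. */
    static int reply_cr_event(int fd, __u32 event_seq, __u16 vcpu, __u64 new_value)
    {
        struct {
            struct kvmi_msg_hdr hdr;
            struct kvmi_vcpu_hdr vcpu_hdr;
            struct kvmi_vcpu_event_reply common;
            struct kvmi_vcpu_event_cr_reply cr;
        } rpl;

        memset(&rpl, 0, sizeof(rpl));
        rpl.hdr.id = KVMI_VCPU_EVENT;      /* same id as the event message */
        rpl.hdr.seq = event_seq;           /* same sequence as the event */
        rpl.hdr.size = sizeof(rpl) - sizeof(rpl.hdr);
        rpl.vcpu_hdr.vcpu = vcpu;
        rpl.common.action = KVMI_EVENT_ACTION_CONTINUE;
        rpl.common.event = KVMI_VCPU_EVENT_CR;
        rpl.cr.new_val = new_value;

        return write(fd, &rpl, sizeof(rpl)) == sizeof(rpl) ? 0 : -1;
    }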

[PATCH v10 78/81] KVM: introspection: add KVMI_VCPU_EVENT_SINGLESTEP

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This event is sent after each instruction while singlestep is enabled
for a vCPU.

Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 31 +++
 arch/x86/kvm/kvmi.c   |  1 +
 arch/x86/kvm/kvmi_msg.c   |  6 +++
 arch/x86/kvm/vmx/vmx.c|  6 +++
 include/linux/kvmi_host.h |  4 ++
 include/uapi/linux/kvmi.h |  6 +++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 54 +--
 virt/kvm/introspection/kvmi.c | 43 +++
 virt/kvm/introspection/kvmi_int.h |  1 +
 virt/kvm/introspection/kvmi_msg.c | 17 ++
 10 files changed, 166 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index c8822912d8c8..4b2e7809f052 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -565,6 +565,7 @@ because these are sent as a result of certain commands (but 
they can be
 disallowed by the device manager) ::
 
KVMI_VCPU_EVENT_PAUSE
+   KVMI_VCPU_EVENT_SINGLESTEP
KVMI_VCPU_EVENT_TRAP
 
 The VM events (e.g. *KVMI_VM_EVENT_UNHOOK*) are controlled with
@@ -1063,8 +1064,12 @@ Enables/disables singlestep for the selected vCPU.
 The introspection tool should use *KVMI_GET_VERSION* to check
 if the hardware supports singlestep (see **KVMI_GET_VERSION**).
 
+After every instruction, a *KVMI_VCPU_EVENT_SINGLESTEP* event is sent
+to the introspection tool.
+
 :Errors:
 
+* -KVM_EPERM  - the *KVMI_VCPU_EVENT_SINGLESTEP* event is disallowed
 * -KVM_EOPNOTSUPP - the hardware doesn't support singlestep
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
@@ -1508,3 +1513,29 @@ emulation).
 The *RETRY* action is used by the introspection tool to retry the
 execution of the current instruction, usually because it changed the
 instruction pointer or the page restrictions.
+
+11. KVMI_VCPU_EVENT_SINGLESTEP
+------------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+   struct kvmi_vcpu_event_singlestep {
+   __u8 failed;
+   __u8 padding[7];
+   };
+
+This event is sent after each instruction, as long as the singlestep is
+enabled for the current vCPU (see **KVMI_VCPU_CONTROL_SINGLESTEP**).
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 31a2de24de29..b010d2369756 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -18,6 +18,7 @@ void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
set_bit(KVMI_VCPU_EVENT_DESCRIPTOR, supported);
set_bit(KVMI_VCPU_EVENT_MSR, supported);
set_bit(KVMI_VCPU_EVENT_PF, supported);
+   set_bit(KVMI_VCPU_EVENT_SINGLESTEP, supported);
set_bit(KVMI_VCPU_EVENT_TRAP, supported);
set_bit(KVMI_VCPU_EVENT_XSETBV, supported);
 }
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 8b59f9f73c5d..c4b43b3b7b92 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -285,6 +285,12 @@ static int handle_vcpu_control_singlestep(const struct 
kvmi_vcpu_msg_job *job,
struct kvm_vcpu *vcpu = job->vcpu;
int ec = 0;
 
+   if (!kvmi_is_event_allowed(KVMI(vcpu->kvm),
+  KVMI_VCPU_EVENT_SINGLESTEP)) {
+   ec = -KVM_EPERM;
+   goto reply;
+   }
+
if (non_zero_padding(req->padding, ARRAY_SIZE(req->padding)) ||
req->enable > 1) {
ec = -KVM_EINVAL;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 01d18c9243bc..4804eaa012de 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5614,6 +5614,7 @@ static int handle_invalid_op(struct kvm_vcpu *vcpu)
 
 static int handle_monitor_trap(struct kvm_vcpu *vcpu)
 {
+   kvmi_singlestep_done(vcpu);
return 1;
 }
 
@@ -6142,6 +6143,11 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, 
fastpath_t exit_fastpath)
}
}
 
+   if (kvmi_vcpu_running_singlestep(vcpu) &&
+   exit_reason != EXIT_REASON_EPT_VIOLATION &&
+   exit_reason != EXIT_REASON_MONITOR_TRAP_FLAG)
+   kvmi_singlestep_failed(vcpu);
+
if (exit_fastpath != EXIT_FASTPATH_NONE)
return 1;
 
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index e2103ab9d0d5..ec38e434c8e9 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -81,6 +81,8 @@ void kvmi_handle_requests(struct kvm_vcpu *vcpu);
 bool kvmi_hypercall_event(struct kvm_vcpu *vcpu);
 bool kvmi_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len);
 bool kvmi

[PATCH v10 53/81] KVM: introspection: add KVMI_VCPU_GET_REGISTERS

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command is used to get kvm_regs and kvm_sregs structures,
plus a list of struct kvm_msrs from a specific vCPU.

While the kvm_regs and kvm_sregs structures are included with every
event, this command allows reading any MSR.

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 44 
 arch/x86/include/uapi/asm/kvmi.h  | 15 
 arch/x86/kvm/kvmi.c   | 25 +++
 arch/x86/kvm/kvmi.h   |  9 +++
 arch/x86/kvm/kvmi_msg.c   | 72 ++-
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 59 +++
 7 files changed, 224 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/kvmi.h

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index a502cf9baead..dbaedbee9dee 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -557,6 +557,50 @@ the *KVMI_VM_CONTROL_EVENTS* command.
 * -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+11. KVMI_VCPU_GET_REGISTERS
+---------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_get_registers {
+   __u16 nmsrs;
+   __u16 padding1;
+   __u32 padding2;
+   __u32 msrs_idx[0];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_registers_reply {
+   __u32 mode;
+   __u32 padding;
+   struct kvm_regs regs;
+   struct kvm_sregs sregs;
+   struct kvm_msrs msrs;
+   };
+
+For the given vCPU and the ``nmsrs``-sized array of MSR indices,
+returns the current vCPU mode (in bytes: 2, 4 or 8), the general purpose
+registers, the special registers and the requested set of MSRs.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - one of the indicated MSRs is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the reply size is larger than
+kvmi_get_version_reply.max_msg_size (too many MSRs)
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_ENOMEM - there is not enough memory to allocate the reply
+
 Events
 ======
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 9d9df09d381a..11835bf9bdc6 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -30,4 +30,19 @@ struct kvmi_vcpu_event_arch {
} msrs;
 };
 
+struct kvmi_vcpu_get_registers {
+   __u16 nmsrs;
+   __u16 padding1;
+   __u32 padding2;
+   __u32 msrs_idx[0];
+};
+
+struct kvmi_vcpu_get_registers_reply {
+   __u32 mode;
+   __u32 padding;
+   struct kvm_regs regs;
+   struct kvm_sregs sregs;
+   struct kvm_msrs msrs;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 383b19dcf054..fa9b20277dad 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -93,3 +93,28 @@ void kvmi_arch_setup_vcpu_event(struct kvm_vcpu *vcpu,
ev->arch.mode = kvmi_vcpu_mode(vcpu, &event->sregs);
kvmi_get_msrs(vcpu, event);
 }
+
+int kvmi_arch_cmd_vcpu_get_registers(struct kvm_vcpu *vcpu,
+   const struct kvmi_vcpu_get_registers *req,
+   struct kvmi_vcpu_get_registers_reply *rpl)
+{
+   struct msr_data m = {.host_initiated = true};
+   int k, err = 0;
+
+   kvm_arch_vcpu_get_regs(vcpu, &rpl->regs);
+   kvm_arch_vcpu_get_sregs(vcpu, &rpl->sregs);
+   rpl->mode = kvmi_vcpu_mode(vcpu, &rpl->sregs);
+   rpl->msrs.nmsrs = req->nmsrs;
+
+   for (k = 0; k < req->nmsrs && !err; k++) {
+   m.index = req->msrs_idx[k];
+
+   err = kvm_x86_ops.get_msr(vcpu, &m);
+   if (!err) {
+   rpl->msrs.entries[k].index = m.index;
+   rpl->msrs.entries[k].data = m.data;
+   }
+   }
+
+   return err ? -KVM_EINVAL : 0;
+}
diff --git a/arch/x86/kvm/kvmi.h b/arch/x86/kvm/kvmi.h
new file mode 100644
index ..7aab4aaabcda
--- /dev/null
+++ b/arch/x86/kvm/kvmi.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ARCH_X86_KVM_KVMI_H
+#define ARCH_X86_KVM_KVMI_H
+
+int kvmi_arch_cmd_vcpu_get_registers(struct kvm_vcpu *vcpu,
+   const struct kvmi_vcpu_get_registers *req,
+   struct kvmi_vcpu_get_registers_reply *rpl);
+
+#endif
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 77552bf50984..fd837c241340 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -7,6 +7,7 @@
  */
 
 #include "..

[PATCH v10 77/81] KVM: introspection: add KVMI_VCPU_CONTROL_SINGLESTEP

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

The next commit that adds the KVMI_VCPU_EVENT_SINGLESTEP event will make
this command more useful.

Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 33 +++
 arch/x86/kvm/kvmi.c   | 14 -
 arch/x86/kvm/kvmi_msg.c   | 56 +++
 arch/x86/kvm/x86.c| 12 +++-
 include/linux/kvmi_host.h |  7 +++
 include/uapi/linux/kvmi.h | 30 ++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 39 +
 virt/kvm/introspection/kvmi.c | 22 
 virt/kvm/introspection/kvmi_int.h |  2 +
 9 files changed, 187 insertions(+), 28 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 991922897f1d..c8822912d8c8 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -791,6 +791,7 @@ exception.
 * -KVM_EINVAL - the selected vCPU is invalid
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY - the vCPU is switched in singlestep mode 
(*KVMI_VCPU_CONTROL_SINGLESTEP*)
 * -KVM_EBUSY - another *KVMI_VCPU_INJECT_EXCEPTION*-*KVMI_VCPU_EVENT_TRAP*
pair is in progress
 
@@ -1036,6 +1037,38 @@ In order to 'forget' an address, all three bits ('rwx') 
must be set.
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 * -KVM_ENOMEM - there is not enough memory to add the page tracking structures
 
+24. KVMI_VCPU_CONTROL_SINGLESTEP
+--------------------------------
+
+:Architectures: x86 (vmx)
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_control_singlestep {
+   __u8 enable;
+   __u8 padding[7];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+
+Enables/disables singlestep for the selected vCPU.
+
+The introspection tool should use *KVMI_GET_VERSION* to check
+if the hardware supports singlestep (see **KVMI_GET_VERSION**).
+
+:Errors:
+
+* -KVM_EOPNOTSUPP - the hardware doesn't support singlestep
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ======
 
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index e0302883aec5..31a2de24de29 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -776,7 +776,9 @@ void kvmi_enter_guest(struct kvm_vcpu *vcpu)
if (kvmi) {
vcpui = VCPUI(vcpu);
 
-   if (vcpui->arch.exception.pending)
+   if (vcpui->singlestep.loop)
+   kvmi_arch_start_singlestep(vcpu);
+   else if (vcpui->arch.exception.pending)
kvmi_inject_pending_exception(vcpu);
 
kvmi_put(vcpu->kvm);
@@ -1086,3 +1088,13 @@ void kvmi_arch_features(struct kvmi_features *feat)
 {
feat->singlestep = !!kvm_x86_ops.control_singlestep;
 }
+
+void kvmi_arch_start_singlestep(struct kvm_vcpu *vcpu)
+{
+   kvm_x86_ops.control_singlestep(vcpu, true);
+}
+
+void kvmi_arch_stop_singlestep(struct kvm_vcpu *vcpu)
+{
+   kvm_x86_ops.control_singlestep(vcpu, false);
+}
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index c961c5367a13..8b59f9f73c5d 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -166,7 +166,8 @@ static int handle_vcpu_inject_exception(const struct 
kvmi_vcpu_msg_job *job,
else if (req->padding1 || req->padding2)
ec = -KVM_EINVAL;
else if (VCPUI(vcpu)->arch.exception.pending ||
-   VCPUI(vcpu)->arch.exception.send_event)
+   VCPUI(vcpu)->arch.exception.send_event ||
+   VCPUI(vcpu)->singlestep.loop)
ec = -KVM_EBUSY;
else
ec = kvmi_arch_cmd_vcpu_inject_exception(vcpu, req);
@@ -276,18 +277,49 @@ static int handle_vcpu_control_msr(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+static int handle_vcpu_control_singlestep(const struct kvmi_vcpu_msg_job *job,
+ const struct kvmi_msg_hdr *msg,
+ const void *_req)
+{
+   const struct kvmi_vcpu_control_singlestep *req = _req;
+   struct kvm_vcpu *vcpu = job->vcpu;
+   int ec = 0;
+
+   if (non_zero_padding(req->padding, ARRAY_SIZE(req->padding)) ||
+   req->enable > 1) {
+   ec = -KVM_EINVAL;
+   goto reply;
+   }
+
+   if (!kvm_x86_ops.control_singlestep) {
+   ec = -KVM_EOPNOTSUPP;
+   goto reply;
+   }
+
+   if (req->enable)
+   kvmi_arch_start_singlestep(vcpu);
+   else
+   kvmi_arch_stop_singlestep(vcpu);
+
+   VCPUI(vcpu)->singlestep.loop = !!req
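
Together with the event added by the next patch, the intended use on the
tool side is a small loop: enable singlestep, consume the resulting events,
then disable it once the region of interest has been stepped over. A sketch
of just the command half (socket ``fd`` and sequence handling are
assumptions)::

    #include <string.h>
    #include <unistd.h>
    #include <linux/types.h>
    #include <linux/kvmi.h>

    static int vcpu_control_singlestep(int fd, __u32 seq, __u16 vcpu, __u8 enable)
    {
        struct {
            struct kvmi_msg_hdr hdr;
            struct kvmi_vcpu_hdr vcpu_hdr;
            struct kvmi_vcpu_control_singlestep cmd;
        } req;

        memset(&req, 0, sizeof(req));
        req.hdr.id = KVMI_VCPU_CONTROL_SINGLESTEP;
        req.hdr.seq = seq;
        req.hdr.size = sizeof(req) - sizeof(req.hdr);
        req.vcpu_hdr.vcpu = vcpu;
        req.cmd.enable = enable;

        return write(fd, &req, sizeof(req)) == sizeof(req) ? 0 : -1;
    }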

[PATCH v10 49/81] KVM: introspection: add support for vCPU events

2020-11-25 Thread Adalbert Lazăr
This is the common code used by vCPU threads to send events and wait for
replies (received and dispatched by the receiving thread). While waiting
for an event reply, the vCPU thread will handle any introspection command
already queued or received during this period.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  56 ++-
 arch/x86/include/uapi/asm/kvmi.h  |  20 
 arch/x86/kvm/kvmi.c   |  85 
 include/linux/kvmi_host.h |  11 +++
 include/uapi/linux/kvmi.h |  23 +
 virt/kvm/introspection/kvmi.c |   1 +
 virt/kvm/introspection/kvmi_int.h |   6 ++
 virt/kvm/introspection/kvmi_msg.c | 156 +-
 8 files changed, 354 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index a71fb78d546e..5e99baf7e2f3 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -521,7 +521,61 @@ The message data begins with a common structure having the 
event id::
__u16 padding[3];
};
 
-Specific event data can follow this common structure.
+The vCPU introspection events are sent using the KVMI_VCPU_EVENT message id.
+No event is sent unless it is explicitly enabled or requested
+(e.g. *KVMI_VCPU_EVENT_PAUSE*).
+A vCPU event begins with a common structure having the size of the
+structure and the vCPU index::
+
+   struct kvmi_vcpu_event {
+   __u16 size;
+   __u16 vcpu;
+   __u32 padding;
+   struct kvmi_vcpu_event_arch arch;
+   };
+
+On x86::
+
+   struct kvmi_vcpu_event_arch {
+   __u8 mode;
+   __u8 padding[7];
+   struct kvm_regs regs;
+   struct kvm_sregs sregs;
+   struct {
+   __u64 sysenter_cs;
+   __u64 sysenter_esp;
+   __u64 sysenter_eip;
+   __u64 efer;
+   __u64 star;
+   __u64 lstar;
+   __u64 cstar;
+   __u64 pat;
+   __u64 shadow_gs;
+   } msrs;
+   };
+
+It contains information about the vCPU state at the time of the event.
+
+A vCPU event reply begins with two common structures::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply {
+   __u8 action;
+   __u8 event;
+   __u16 padding1;
+   __u32 padding2;
+   };
+
+All events accept the KVMI_EVENT_ACTION_CRASH action, which stops the
+guest ungracefully, but as soon as possible.
+
+Most events accept the KVMI_EVENT_ACTION_CONTINUE action, which
+means that KVM will continue handling the event.
+
+Some events accept the KVMI_EVENT_ACTION_RETRY action, which means that
+KVM will stop handling the event and re-enter the guest.
+
+Specific event data can follow these common structures.
 
 1. KVMI_VM_EVENT_UNHOOK
 -----------------------
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 2b6192e1a9a4..9d9df09d381a 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -6,8 +6,28 @@
  * KVM introspection - x86 specific structures and definitions
  */
 
+#include 
+
 struct kvmi_vcpu_get_info_reply {
__u64 tsc_speed;
 };
 
+struct kvmi_vcpu_event_arch {
+   __u8 mode;  /* 2, 4 or 8 */
+   __u8 padding[7];
+   struct kvm_regs regs;
+   struct kvm_sregs sregs;
+   struct {
+   __u64 sysenter_cs;
+   __u64 sysenter_esp;
+   __u64 sysenter_eip;
+   __u64 efer;
+   __u64 star;
+   __u64 lstar;
+   __u64 cstar;
+   __u64 pat;
+   __u64 shadow_gs;
+   } msrs;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 35742d927be5..383b19dcf054 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -5,6 +5,91 @@
  * Copyright (C) 2019-2020 Bitdefender S.R.L.
  */
 
+#include "linux/kvm_host.h"
+#include "x86.h"
+#include "../../../virt/kvm/introspection/kvmi_int.h"
+
 void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
 {
 }
+
+static unsigned int kvmi_vcpu_mode(const struct kvm_vcpu *vcpu,
+  const struct kvm_sregs *sregs)
+{
+   unsigned int mode = 0;
+
+   if (is_long_mode((struct kvm_vcpu *) vcpu)) {
+   if (sregs->cs.l)
+   mode = 8;
+   else if (!sregs->cs.db)
+   mode = 2;
+   else
+   mode = 4;
+   } else if (sregs->cr0 & X86_CR0_PE) {
+   if (!sregs->cs.db)
+   mode = 2;
+   else
+   mode = 4;
+   } else if (!sregs->cs.db) {
+   mode = 2;
+   } else {
+   mode = 4;
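
In practice the receiving side of this protocol is a small loop: read a
``kvmi_msg_hdr``, read the payload, and for KVMI_VCPU_EVENT messages send
back a reply built from the two common reply structures. A condensed sketch
of the reply half, assuming the reply reuses the event's message id and
sequence number and that ``kvmi_event_hdr`` carries the event id (the helper
names and the buffer size are illustrative)::

    #include <string.h>
    #include <unistd.h>
    #include <linux/types.h>
    #include <linux/kvmi.h>
    #include <asm/kvmi.h>

    struct event_msg {
        struct kvmi_msg_hdr hdr;
        struct kvmi_event_hdr ev_hdr;
        struct kvmi_vcpu_event common;
        unsigned char specific[256];   /* event-specific data, if any */
    };

    static int reply_continue(int fd, const struct event_msg *ev)
    {
        struct {
            struct kvmi_msg_hdr hdr;
            struct kvmi_vcpu_hdr vcpu_hdr;
            struct kvmi_vcpu_event_reply common;
        } rpl;

        memset(&rpl, 0, sizeof(rpl));
        rpl.hdr.id = ev->hdr.id;       /* same id ... */
        rpl.hdr.seq = ev->hdr.seq;     /* ... and sequence as the event */
        rpl.hdr.size = sizeof(rpl) - sizeof(rpl.hdr);
        rpl.vcpu_hdr.vcpu = ev->common.vcpu;
        rpl.common.action = KVMI_EVENT_ACTION_CONTINUE;
        rpl.common.event = ev->ev_hdr.event;

        return write(fd, &rpl, sizeof(rpl)) == sizeof(rpl) ? 0 : -1;
    }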

[PATCH v10 60/81] KVM: introspection: add KVMI_VM_CONTROL_CLEANUP

2020-11-25 Thread Adalbert Lazăr
This command will allow more control over the guest state on
unhook.  However, the memory restrictions (e.g. those set with
KVMI_VM_SET_PAGE_ACCESS) will be removed on unhook.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 28 +++
 arch/x86/include/asm/kvmi_host.h  |  1 +
 arch/x86/kvm/kvmi.c   | 17 +-
 include/linux/kvmi_host.h |  2 ++
 include/uapi/linux/kvmi.h | 22 +++-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 24 +
 virt/kvm/introspection/kvmi.c | 18 +++---
 virt/kvm/introspection/kvmi_int.h | 12 ++-
 virt/kvm/introspection/kvmi_msg.c | 34 ++-
 9 files changed, 129 insertions(+), 29 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index c89f383e48f9..f9c10d27ce14 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -673,6 +673,34 @@ Returns a CPUID leaf (as seen by the guest OS).
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 * -KVM_ENOENT - the selected leaf is not present or is invalid
 
+14. KVMI_VM_CONTROL_CLEANUP
+---------------------------
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_control_cleanup {
+   __u8 enable;
+   __u8 padding[7];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Enables/disables the automatic cleanup of the changes made by
+the introspection tool at the hypervisor level (e.g. CR/MSR/BP
+interceptions). By default it is enabled.
+
+:Errors:
+
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - ``enable`` is not 1 or 0
+
 Events
 ======
 
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index e008662f91a5..161d1ae5a7cf 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -11,6 +11,7 @@ struct kvmi_monitor_interception {
 };
 
 struct kvmi_interception {
+   bool cleanup;
bool restore_interception;
struct kvmi_monitor_interception breakpoint;
 };
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 3fd73087276e..e7a4ef48ed61 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -273,13 +273,11 @@ bool kvmi_arch_clean_up_interception(struct kvm_vcpu 
*vcpu)
 {
struct kvmi_interception *arch_vcpui = vcpu->arch.kvmi;
 
-   if (!arch_vcpui)
+   if (!arch_vcpui || !arch_vcpui->cleanup)
return false;
 
-   if (!arch_vcpui->restore_interception)
-   return false;
-
-   kvmi_arch_restore_interception(vcpu);
+   if (arch_vcpui->restore_interception)
+   kvmi_arch_restore_interception(vcpu);
 
return true;
 }
@@ -312,10 +310,13 @@ bool kvmi_arch_vcpu_introspected(struct kvm_vcpu *vcpu)
return !!READ_ONCE(vcpu->arch.kvmi);
 }
 
-void kvmi_arch_request_interception_cleanup(struct kvm_vcpu *vcpu)
+void kvmi_arch_request_interception_cleanup(struct kvm_vcpu *vcpu,
+   bool restore_interception)
 {
struct kvmi_interception *arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
 
-   if (arch_vcpui)
-   arch_vcpui->restore_interception = true;
+   if (arch_vcpui) {
+   arch_vcpui->restore_interception = restore_interception;
+   arch_vcpui->cleanup = true;
+   }
 }
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 30b7269468dd..7a7360306812 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -50,6 +50,8 @@ struct kvm_introspection {
unsigned long *vm_event_enable_mask;
 
atomic_t ev_seq;
+
+   bool restore_on_unhook;
 };
 
 int kvmi_version(void);
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index ea66f3f803e7..9e28961a8387 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -20,14 +20,15 @@ enum {
 enum {
KVMI_VM_EVENT = KVMI_VM_MESSAGE_ID(0),
 
-   KVMI_GET_VERSION   = KVMI_VM_MESSAGE_ID(1),
-   KVMI_VM_CHECK_COMMAND  = KVMI_VM_MESSAGE_ID(2),
-   KVMI_VM_CHECK_EVENT= KVMI_VM_MESSAGE_ID(3),
-   KVMI_VM_GET_INFO   = KVMI_VM_MESSAGE_ID(4),
-   KVMI_VM_CONTROL_EVENTS = KVMI_VM_MESSAGE_ID(5),
-   KVMI_VM_READ_PHYSICAL  = KVMI_VM_MESSAGE_ID(6),
-   KVMI_VM_WRITE_PHYSICAL = KVMI_VM_MESSAGE_ID(7),
-   KVMI_VM_PAUSE_VCPU = KVMI_VM_MESSAGE_ID(8),
+   KVMI_GET_VERSION= KVMI_VM_MESSAGE_ID(1),
+   KVMI_VM_CHECK_COMMAND   = KVMI_VM_MESSAGE_ID(2),
+   KVMI_VM_CHECK_EVENT = KVMI_VM_MESSAGE_ID(3),
+   KVMI_VM_GET_INFO= KVMI_VM_MESSAGE_ID(4),
+   KVMI_VM_CONTROL_EVENTS  = KVMI_VM_MESSAGE_ID(5),
+   KVMI_VM_READ_PHYSICAL   = KVMI_VM_MESSAGE_ID(6),
+   KVMI_VM_WRITE_PHYSICAL  = KVMI_VM_MESSAGE_ID(7),
+   KVMI_VM_PAUSE_VCPU  = KVMI_V
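
For reference, a minimal sketch (not part of the patch) of how the new command could be exercised in the kvmi selftest style, assuming the KVMI_VM_CONTROL_CLEANUP message ID and struct land in include/uapi/linux/kvmi.h as documented above; test_vm_command() is the existing selftest helper that sends the message over the introspection socket and checks the returned error code.

/* Sketch only: toggle the automatic cleanup, selftest style. */
static void cmd_vm_control_cleanup(__u8 enable, int expected_err)
{
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vm_control_cleanup cmd;
	} req = {};

	req.cmd.enable = enable;

	test_vm_command(KVMI_VM_CONTROL_CLEANUP, &req.hdr, sizeof(req),
			NULL, 0, expected_err);
}

static void test_cmd_vm_control_cleanup(void)
{
	__u8 enable = 1, disable = 0, enable_inval = 2;

	cmd_vm_control_cleanup(enable_inval, -KVM_EINVAL);
	cmd_vm_control_cleanup(enable, 0);
	cmd_vm_control_cleanup(disable, 0);
}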

[PATCH v10 36/81] KVM: introspection: add KVMI_GET_VERSION

2020-11-25 Thread Adalbert Lazăr
When handling introspection commands from tools built with older or
newer versions of the introspection API, the receiving thread silently
accepts smaller/larger messages, but it replies with messages related to
current/kernel version. Smaller introspection event replies are accepted
too. However, larger messages for event replies are not allowed.

Even if an introspection tool can use the API version returned by the
KVMI_GET_VERSION command to check the supported features, the most
important use of this command is to avoid sending newer (larger) event
replies that the kernel side doesn't know about, because such messages
cause the introspection socket to be closed.

Any attempt from the device manager to explicitly disallow this command
through the KVM_INTROSPECTION_COMMAND ioctl will get -EPERM, unless all
commands are disallowed (using id=-1), in which case KVMI_GET_VERSION
is silently allowed, without error.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 38 +++
 include/uapi/linux/kvmi.h | 10 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 35 +
 virt/kvm/introspection/kvmi.c | 27 +++--
 virt/kvm/introspection/kvmi_msg.c | 13 +++
 5 files changed, 119 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index ae6bbf37aef3..d3d672a07872 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -212,3 +212,41 @@ device-specific memory (DMA, emulated MMIO, reserved by a 
passthrough
 device etc.). It is up to the user to determine, using the guest operating
 system data structures, the areas that are safe to access (code, stack, heap
 etc.).
+
+Commands
+
+
+The following C structures are meant to be used directly when communicating
+over the wire. The peer that detects any size mismatch should simply close
+the connection and report the error.
+
+1. KVMI_GET_VERSION
+---
+
+:Architectures: all
+:Versions: >= 1
+:Parameters: none
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_get_version_reply {
+   __u32 version;
+   __u32 max_msg_size;
+   };
+
+Returns the introspection API version and the largest accepted message
+size (useful for variable length messages).
+
+This command is always allowed and successful.
+
+The messages used for introspection commands/events might be extended
+in future versions and while the kernel will accept commands with
+shorter messages (older versions) or larger messages (newer versions,
+ignoring the extra information), it will not accept event replies with
+larger messages.
+
+The introspection tool should use this command to identify the features
+supported by the kernel side and what messages must be used for event
+replies.
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 2b37eee82c52..77dd727dfe18 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -6,6 +6,9 @@
  * KVMI structures and definitions
  */
 
+#include 
+#include 
+
 enum {
KVMI_VERSION = 0x0001
 };
@@ -14,6 +17,8 @@ enum {
 #define KVMI_VCPU_MESSAGE_ID(id) (((id) << 1) | 1)
 
 enum {
+   KVMI_GET_VERSION = KVMI_VM_MESSAGE_ID(1),
+
KVMI_NEXT_VM_MESSAGE
 };
 
@@ -43,4 +48,9 @@ struct kvmi_error_code {
__u32 padding;
 };
 
+struct kvmi_get_version_reply {
+   __u32 version;
+   __u32 max_msg_size;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 95bd0a60eb47..30acd3a2d030 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -92,6 +92,7 @@ static void hook_introspection(struct kvm_vm *vm)
do_hook_ioctl(vm, Kvm_socket, 0);
do_hook_ioctl(vm, Kvm_socket, EEXIST);
 
+   set_command_perm(vm, KVMI_GET_VERSION, disallow, EPERM);
set_command_perm(vm, all_IDs, allow_inval, EINVAL);
set_command_perm(vm, all_IDs, disallow, 0);
set_command_perm(vm, all_IDs, allow, 0);
@@ -207,12 +208,46 @@ static void test_cmd_invalid(void)
-r, kvm_strerror(-r));
 }
 
+static void test_vm_command(int cmd_id, struct kvmi_msg_hdr *req,
+   size_t req_size, void *rpl, size_t rpl_size,
+   int expected_err)
+{
+   int r;
+
+   r = do_command(cmd_id, req, req_size, rpl, rpl_size);
+   TEST_ASSERT(r == expected_err,
+   "Command %d failed, error %d (%s) instead of %d (%s)\n",
+   cmd_id, -r, kvm_strerror(-r),
+   expected_err, kvm_strerror(expected_err));
+}
+
+static void cmd_vm_get_version(struct kvmi_get_version_reply *ver)
+{
+   struct kvmi_msg_hdr req;
+
+   test_vm_command(KVMI_GET_VERSION, &req, sizeof(req), ver, sizeof(*ver), 
0)
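
Building on the helper above, a short sketch of how a tool might act on the reply; the handshake() wrapper is illustrative only, while TEST_ASSERT() and pr_debug() are the selftest's own facilities.

/* Sketch: act on the KVMI_GET_VERSION reply (illustrative wrapper). */
static void handshake(void)
{
	struct kvmi_get_version_reply ver;

	cmd_vm_get_version(&ver);

	/* Refuse to talk to an older kernel-side API. */
	TEST_ASSERT(ver.version >= KVMI_VERSION,
		    "Kernel KVMI version %u older than expected %u\n",
		    ver.version, KVMI_VERSION);

	/* Event replies larger than this would close the socket. */
	pr_debug("KVMI version %u, max message size %u\n",
		 ver.version, ver.max_msg_size);
}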

[PATCH v10 66/81] KVM: introspection: add KVMI_VCPU_GET_XCR

2020-11-25 Thread Adalbert Lazăr
This can be used by the introspection tool to emulate SSE instructions.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 33 +++
 arch/x86/include/uapi/asm/kvmi.h  |  9 +
 arch/x86/kvm/kvmi_msg.c   | 21 
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 33 +++
 5 files changed, 97 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 24dc1867c1f1..008c7c73a46f 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -797,6 +797,39 @@ Provides the maximum GFN allocated to the VM by walking 
through all
 memory slots. Stricly speaking, the returned value refers to the first
 inaccessible GFN, next to the maximum accessible GFN.
 
+18. KVMI_VCPU_GET_XCR
+-
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_get_xcr {
+   __u8 xcr;
+   __u8 padding[7];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_xcr_reply {
+   u64 value;
+   };
+
+Returns the value of an extended control register XCR.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the specified control register is not XCR0
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 604a8b3d4ac2..5ca6190d85ec 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -102,4 +102,13 @@ struct kvmi_vcpu_event_xsetbv {
__u64 new_value;
 };
 
+struct kvmi_vcpu_get_xcr {
+   __u8 xcr;
+   __u8 padding[7];
+};
+
+struct kvmi_vcpu_get_xcr_reply {
+   u64 value;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index d0dc917118b5..596f607296b5 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -174,11 +174,32 @@ static int handle_vcpu_inject_exception(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+static int handle_vcpu_get_xcr(const struct kvmi_vcpu_msg_job *job,
+  const struct kvmi_msg_hdr *msg,
+  const void *_req)
+{
+   const struct kvmi_vcpu_get_xcr *req = _req;
+   struct kvmi_vcpu_get_xcr_reply rpl;
+   int ec = 0;
+
+   memset(&rpl, 0, sizeof(rpl));
+
+   if (non_zero_padding(req->padding, ARRAY_SIZE(req->padding)))
+   ec = -KVM_EINVAL;
+   else if (req->xcr != 0)
+   ec = -KVM_EINVAL;
+   else
+   rpl.value = job->vcpu->arch.xcr0;
+
+   return kvmi_msg_vcpu_reply(job, msg, ec, &rpl, sizeof(rpl));
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_CONTROL_CR]   = handle_vcpu_control_cr,
[KVMI_VCPU_GET_CPUID]= handle_vcpu_get_cpuid,
[KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
[KVMI_VCPU_GET_REGISTERS]= handle_vcpu_get_registers,
+   [KVMI_VCPU_GET_XCR]  = handle_vcpu_get_xcr,
[KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception,
[KVMI_VCPU_SET_REGISTERS]= handle_vcpu_set_registers,
 };
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index d503e15baf60..07b6d383641a 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -44,6 +44,7 @@ enum {
KVMI_VCPU_GET_CPUID= KVMI_VCPU_MESSAGE_ID(5),
KVMI_VCPU_CONTROL_CR   = KVMI_VCPU_MESSAGE_ID(6),
KVMI_VCPU_INJECT_EXCEPTION = KVMI_VCPU_MESSAGE_ID(7),
+   KVMI_VCPU_GET_XCR  = KVMI_VCPU_MESSAGE_ID(8),
 
KVMI_NEXT_VCPU_MESSAGE
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index f73dbfe1407d..da90c6a8d535 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1416,6 +1416,38 @@ static void test_event_xsetbv(struct kvm_vm *vm)
disable_vcpu_event(vm, event_id);
 }
 
+static void cmd_vcpu_get_xcr(struct kvm_vm *vm, u8 xcr, u64 *value,
+int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_get_xcr cmd;
+   } req = { 0 };
+   struct kvmi_vcpu_get_xcr_reply rpl = { 0 };
+   int r;
+
+   req.cmd.xcr = xcr;
+
+   r = do_vcpu0_command(vm, KVMI_VCPU_GET_XCR, &req.hdr, sizeof(req),
+&rpl, sizeof(rpl));
+   TEST_ASSERT(r == expected_err,
+   "KVMI_VCPU_GET_XCR failed, error %d (%s), expected %d\n",
+   -r, kvm_str
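
Since only XCR0 can be queried for now, the typical consumer simply decodes its feature bits; a brief sketch follows, where the bit positions come from the x86 XSAVE architecture rather than from this patch.

/* Sketch: interpret the XCR0 value returned by KVMI_VCPU_GET_XCR. */
static void report_guest_xcr0(__u64 xcr0)
{
	pr_debug("guest XCR0 0x%llx: x87 %d, SSE %d, AVX %d\n",
		 (unsigned long long)xcr0,
		 !!(xcr0 & (1ull << 0)),  /* x87 state */
		 !!(xcr0 & (1ull << 1)),  /* SSE state */
		 !!(xcr0 & (1ull << 2))); /* AVX state */
}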

[PATCH v10 48/81] KVM: introspection: add KVMI_VM_PAUSE_VCPU

2020-11-25 Thread Adalbert Lazăr
This command increments a pause requests counter for a vCPU and kicks
it out of guest.

The introspection tool can pause a VM by sending this command for all
vCPUs. If it sets 'wait=1', it can consider that the VM is paused when
it receives the reply for the last KVMI_VM_PAUSE_VCPU command.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 39 +++
 include/linux/kvmi_host.h |  2 +
 include/uapi/linux/kvmi.h |  8 
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 30 
 virt/kvm/introspection/kvmi.c | 47 +--
 virt/kvm/introspection/kvmi_int.h |  1 +
 virt/kvm/introspection/kvmi_msg.c | 24 ++
 7 files changed, 147 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 902ced4dd0c4..a71fb78d546e 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -470,6 +470,45 @@ Returns the TSC frequency (in HZ) for the specified vCPU 
if available
 * -KVM_EINVAL - the selected vCPU is invalid
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+9. KVMI_VM_PAUSE_VCPU
+-
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_pause_vcpu {
+   __u16 vcpu;
+   __u8 wait;
+   __u8 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+
+Kicks the vCPU out of guest.
+
+If `wait` is 1, the command will wait for vCPU to acknowledge the IPI.
+
+The vCPU will handle the pending commands/events and send the
+*KVMI_VCPU_EVENT_PAUSE* event (one for every successful *KVMI_VM_PAUSE_VCPU*
+command) before returning to guest.
+
+:Errors:
+
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY  - the selected vCPU has too many queued
+*KVMI_VCPU_EVENT_PAUSE* events
+* -KVM_EPERM  - the *KVMI_VCPU_EVENT_PAUSE* event is disallowed
+
 Events
 ==
 
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 736edb400c05..59e645d9ea34 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -18,6 +18,8 @@ struct kvm_vcpu_introspection {
 
struct list_head job_list;
spinlock_t job_lock;
+
+   atomic_t pause_requests;
 };
 
 struct kvm_introspection {
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index da766427231e..bb90d03f059b 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -26,6 +26,7 @@ enum {
KVMI_VM_CONTROL_EVENTS = KVMI_VM_MESSAGE_ID(5),
KVMI_VM_READ_PHYSICAL  = KVMI_VM_MESSAGE_ID(6),
KVMI_VM_WRITE_PHYSICAL = KVMI_VM_MESSAGE_ID(7),
+   KVMI_VM_PAUSE_VCPU = KVMI_VM_MESSAGE_ID(8),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -115,4 +116,11 @@ struct kvmi_vcpu_hdr {
__u32 padding2;
 };
 
+struct kvmi_vm_pause_vcpu {
+   __u16 vcpu;
+   __u8 wait;
+   __u8 padding1;
+   __u32 padding2;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 9350ba8b7f9b..52765ca3f9c8 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -671,6 +671,35 @@ static void test_cmd_vcpu_get_info(struct kvm_vm *vm)
&rpl, sizeof(rpl), -KVM_EINVAL);
 }
 
+static void cmd_vcpu_pause(__u8 wait, int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vm_pause_vcpu cmd;
+   } req = {};
+   __u16 vcpu_idx = 0;
+
+   req.cmd.wait = wait;
+   req.cmd.vcpu = vcpu_idx;
+
+   test_vm_command(KVMI_VM_PAUSE_VCPU, &req.hdr, sizeof(req), NULL, 0, 
expected_err);
+}
+
+static void pause_vcpu(void)
+{
+   cmd_vcpu_pause(1, 0);
+}
+
+static void test_pause(struct kvm_vm *vm)
+{
+   __u8 wait = 1, wait_inval = 2;
+
+   pause_vcpu();
+
+   cmd_vcpu_pause(wait, 0);
+   cmd_vcpu_pause(wait_inval, -KVM_EINVAL);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
srandom(time(0));
@@ -686,6 +715,7 @@ static void test_introspection(struct kvm_vm *vm)
test_cmd_vm_control_events(vm);
test_memory_access(vm);
test_cmd_vcpu_get_info(vm);
+   test_pause(vm);
 
unhook_introspection(vm);
 }
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index b608751780fc..4999132a65bc 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -17,6 +17,8 @@
 
 #define KVMI_MSG_SIZE_ALLOC (sizeof(struct kvmi_msg_hdr) + KVMI_MAX_MSG_SIZE)
 
+#define MAX_PAUSE_REQUESTS 1001
+
 static DECLARE_BITMAP(Kvmi_always_allowed_commands, KVMI_NUM_COMMANDS);
 static DECLARE_BITMAP(Kvmi_known_e
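
As noted in the commit message, pausing the whole VM means issuing the command once per vCPU with wait=1; a sketch in the selftest style, where the vcpu_count loop is illustrative (the selftest itself only drives vCPU 0):

/* Sketch: pause every vCPU; once the last reply arrives the VM can be
 * considered paused. Each vCPU will send a KVMI_VCPU_EVENT_PAUSE event
 * before re-entering the guest.
 */
static void pause_vm(__u16 vcpu_count)
{
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vm_pause_vcpu cmd;
	} req = {};
	__u16 vcpu_idx;

	for (vcpu_idx = 0; vcpu_idx < vcpu_count; vcpu_idx++) {
		req.cmd.vcpu = vcpu_idx;
		req.cmd.wait = 1;

		test_vm_command(KVMI_VM_PAUSE_VCPU, &req.hdr, sizeof(req),
				NULL, 0, 0);
	}
}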

[PATCH v10 23/81] KVM: x86: extend kvm_mmu_gva_to_gpa_system() with the 'access' parameter

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This is needed for kvmi_update_ad_flags() to emulate a guest page
table walk on SPT violations due to A/D bit updates.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/x86.c  | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c2da5c24e825..3a06a7799571 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1568,7 +1568,7 @@ gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, 
gva_t gva,
 gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
   struct x86_exception *exception);
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
-   struct x86_exception *exception);
+   u32 access, struct x86_exception *exception);
 
 bool kvm_apicv_activated(struct kvm *kvm);
 void kvm_apicv_init(struct kvm *kvm, bool enable);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 00ab76366868..8eda5c3bd244 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5890,9 +5890,9 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, 
gva_t gva,
 
 /* uses this to access any guest's mapped memory without checking CPL */
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
-   struct x86_exception *exception)
+   u32 access, struct x86_exception *exception)
 {
-   return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, 0, exception);
+   return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
 
 static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int 
bytes,
@@ -9762,7 +9762,7 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
vcpu_load(vcpu);
 
idx = srcu_read_lock(&vcpu->kvm->srcu);
-   gpa = kvm_mmu_gva_to_gpa_system(vcpu, vaddr, NULL);
+   gpa = kvm_mmu_gva_to_gpa_system(vcpu, vaddr, 0, NULL);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
tr->physical_address = gpa;
tr->valid = gpa != UNMAPPED_GVA;
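
A hedged sketch of the kind of caller this change enables: a write-style walk that also updates the guest A/D bits. PFERR_WRITE_MASK and UNMAPPED_GVA are existing KVM definitions; the wrapper itself is illustrative.

/* Sketch: translate a GVA as a write access would, so the A/D bits of
 * the guest page tables are updated during the walk.
 */
static gpa_t translate_for_write(struct kvm_vcpu *vcpu, gva_t gva)
{
	struct x86_exception exception = { };
	gpa_t gpa;

	gpa = kvm_mmu_gva_to_gpa_system(vcpu, gva, PFERR_WRITE_MASK,
					&exception);
	if (gpa == UNMAPPED_GVA)
		return UNMAPPED_GVA; /* not mapped or not writable */

	return gpa;
}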

[PATCH v10 30/81] KVM: x86: wire in the preread/prewrite/preexec page trackers

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

These are needed in order to notify the introspection tool when
read/write/execute access happens on one of the tracked memory pages.

Also, this patch handles the case when the introspection tool requests
that the vCPU re-enter the guest (and abort the emulation of the current
instruction).

Signed-off-by: Mihai Donțu 
Co-developed-by: Marian Rotariu 
Signed-off-by: Marian Rotariu 
Co-developed-by: Stefan Sicleru 
Signed-off-by: Stefan Sicleru 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/emulate.c |  4 
 arch/x86/kvm/kvm_emulate.h |  1 +
 arch/x86/kvm/mmu/mmu.c | 38 +---
 arch/x86/kvm/mmu/spte.c| 17 ++
 arch/x86/kvm/x86.c | 45 ++
 5 files changed, 83 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 56cae1ff9e3f..ad32632d4dcb 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -5474,6 +5474,8 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void 
*insn, int insn_len)
ctxt->memopp->addr.mem.ea + ctxt->_eip);
 
 done:
+   if (rc == X86EMUL_RETRY_INSTR)
+   return EMULATION_RETRY_INSTR;
if (rc == X86EMUL_PROPAGATE_FAULT)
ctxt->have_exception = true;
return (rc != X86EMUL_CONTINUE) ? EMULATION_FAILED : EMULATION_OK;
@@ -5845,6 +5847,8 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
if (rc == X86EMUL_INTERCEPTED)
return EMULATION_INTERCEPTED;
 
+   if (rc == X86EMUL_RETRY_INSTR)
+   return EMULATION_RETRY_INSTR;
if (rc == X86EMUL_CONTINUE)
writeback_registers(ctxt);
 
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 43c93ffa76ed..5bfab8d65cd1 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -496,6 +496,7 @@ bool x86_page_table_writing_insn(struct x86_emulate_ctxt 
*ctxt);
 #define EMULATION_OK 0
 #define EMULATION_RESTART 1
 #define EMULATION_INTERCEPTED 2
+#define EMULATION_RETRY_INSTR 3
 void init_decode_cache(struct x86_emulate_ctxt *ctxt);
 int x86_emulate_insn(struct x86_emulate_ctxt *ctxt);
 int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 36add6fb712f..23b72532cd18 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -769,9 +769,13 @@ static void account_shadowed(struct kvm *kvm, struct 
kvm_mmu_page *sp)
slot = __gfn_to_memslot(slots, gfn);
 
/* the non-leaf shadow pages are keeping readonly. */
-   if (sp->role.level > PG_LEVEL_4K)
-   return kvm_slot_page_track_add_page(kvm, slot, gfn,
-   KVM_PAGE_TRACK_WRITE);
+   if (sp->role.level > PG_LEVEL_4K) {
+   kvm_slot_page_track_add_page(kvm, slot, gfn,
+KVM_PAGE_TRACK_PREWRITE);
+   kvm_slot_page_track_add_page(kvm, slot, gfn,
+KVM_PAGE_TRACK_WRITE);
+   return;
+   }
 
kvm_mmu_gfn_disallow_lpage(slot, gfn);
 }
@@ -797,9 +801,13 @@ static void unaccount_shadowed(struct kvm *kvm, struct 
kvm_mmu_page *sp)
gfn = sp->gfn;
slots = kvm_memslots_for_spte_role(kvm, sp->role);
slot = __gfn_to_memslot(slots, gfn);
-   if (sp->role.level > PG_LEVEL_4K)
-   return kvm_slot_page_track_remove_page(kvm, slot, gfn,
-  KVM_PAGE_TRACK_WRITE);
+   if (sp->role.level > PG_LEVEL_4K) {
+   kvm_slot_page_track_remove_page(kvm, slot, gfn,
+   KVM_PAGE_TRACK_PREWRITE);
+   kvm_slot_page_track_remove_page(kvm, slot, gfn,
+   KVM_PAGE_TRACK_WRITE);
+   return;
+   }
 
kvm_mmu_gfn_allow_lpage(slot, gfn);
 }
@@ -2601,7 +2609,8 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t 
gfn,
 {
struct kvm_mmu_page *sp;
 
-   if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
+   if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_PREWRITE) ||
+   kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
return true;
 
for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
@@ -3689,15 +3698,18 @@ static bool page_fault_handle_page_track(struct 
kvm_vcpu *vcpu,
if (unlikely(error_code & PFERR_RSVD_MASK))
return false;
 
-   if (!(error_code & PFERR_PRESENT_MASK) ||
- !(error_code & PFERR_WRITE_MASK))
-   return false;
-
/*
-* guest is writing the page which is write tracked which can
+* guest is reading/writing/fetching the page which is
+* read/write/execute tracked which can
 * not be fixed by page fault

[PATCH v10 47/81] KVM: introspection: add KVMI_VCPU_GET_INFO

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command returns the TSC frequency (in HZ) for the specified
vCPU if available (otherwise it returns zero).

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  29 
 arch/x86/include/asm/kvmi_host.h  |   2 +
 arch/x86/include/uapi/asm/kvmi.h  |  13 ++
 arch/x86/kvm/kvmi_msg.c   |  14 ++
 include/uapi/linux/kvmi.h |   2 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 144 +-
 virt/kvm/introspection/kvmi_int.h |   3 +
 virt/kvm/introspection/kvmi_msg.c |   9 ++
 8 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/uapi/asm/kvmi.h

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 4d340528d2f4..902ced4dd0c4 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -441,6 +441,35 @@ one page (offset + size <= PAGE_SIZE).
 * -KVM_EINVAL - the specified gpa/size pair is invalid
 * -KVM_EINVAL - the padding is not zero
 
+8. KVMI_VCPU_GET_INFO
+-
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_info_reply {
+   __u64 tsc_speed;
+   };
+
+Returns the TSC frequency (in HZ) for the specified vCPU if available
+(otherwise it returns zero).
+
+:Errors:
+
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 360a57dd9019..05ade3a16b24 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_KVMI_HOST_H
 #define _ASM_X86_KVMI_HOST_H
 
+#include 
+
 struct kvm_vcpu_arch_introspection {
 };
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
new file mode 100644
index ..2b6192e1a9a4
--- /dev/null
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_ASM_X86_KVMI_H
+#define _UAPI_ASM_X86_KVMI_H
+
+/*
+ * KVM introspection - x86 specific structures and definitions
+ */
+
+struct kvmi_vcpu_get_info_reply {
+   __u64 tsc_speed;
+};
+
+#endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 0f4717ca5fa8..77552bf50984 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -8,7 +8,21 @@
 
 #include "../../../virt/kvm/introspection/kvmi_int.h"
 
+static int handle_vcpu_get_info(const struct kvmi_vcpu_msg_job *job,
+   const struct kvmi_msg_hdr *msg,
+   const void *req)
+{
+   struct kvmi_vcpu_get_info_reply rpl;
+
+   memset(&rpl, 0, sizeof(rpl));
+   if (kvm_has_tsc_control)
+   rpl.tsc_speed = 1000ul * job->vcpu->arch.virtual_tsc_khz;
+
+   return kvmi_msg_vcpu_reply(job, msg, 0, &rpl, sizeof(rpl));
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
+   [KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
 };
 
 kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id)
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 7ba1c8758aba..da766427231e 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -31,6 +31,8 @@ enum {
 };
 
 enum {
+   KVMI_VCPU_GET_INFO = KVMI_VCPU_MESSAGE_ID(1),
+
KVMI_NEXT_VCPU_MESSAGE
 };
 
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index b493edb534b0..9350ba8b7f9b 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "test_util.h"
 
@@ -18,6 +19,7 @@
 
 #include "linux/kvm_para.h"
 #include "linux/kvmi.h"
+#include "asm/kvmi.h"
 
 #define VCPU_ID 1
 
@@ -25,12 +27,46 @@ static int socket_pair[2];
 #define Kvm_socket   socket_pair[0]
 #define Userspace_socket socket_pair[1]
 
+static int test_id;
 static vm_vaddr_t test_gva;
 static void *test_hva;
 static vm_paddr_t test_gpa;
 
 static int page_size;
 
+struct vcpu_worker_data {
+   struct kvm_vm *vm;
+   int vcpu_id;
+   int test_id;
+};
+
+enum {
+   GUEST_TEST_NOOP = 0,
+};
+
+#define GUEST_REQUEST_TEST() GUEST_SYNC(0)
+#define GUEST_SIGNAL_TEST_DONE() GUEST_SYNC(1)
+
+#define HOST_SEND_TEST(uc)   (uc.cmd == UCALL_SYNC && uc.args[1] == 0)
+#define HOST_TEST_DONE(uc)   (uc.cmd == UCALL_SYNC && uc.args[1] == 1)
+
+static int guest_test_id(void)
+{
+   GUEST_REQUEST_TEST();
+   return READ_ONCE(test_id);
+}
+
+static void guest_code(void)
+{
+   while (true) {

[PATCH v10 27/81] KVM: x86: page track: provide all callbacks with the guest virtual address

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This is needed because the emulator calls the page tracking code
irrespective of the current VM-exit reason or available information.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h   |  2 +-
 arch/x86/include/asm/kvm_page_track.h | 10 ++
 arch/x86/kvm/mmu/mmu.c|  2 +-
 arch/x86/kvm/mmu/page_track.c |  6 +++---
 arch/x86/kvm/x86.c| 16 
 drivers/gpu/drm/i915/gvt/kvmgt.c  |  2 +-
 6 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7dc1ebac8d91..0342835a79d2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1396,7 +1396,7 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned 
long kvm_nr_mmu_pages);
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
 bool pdptrs_changed(struct kvm_vcpu *vcpu);
 
-int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
+int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
  const void *val, int bytes);
 
 struct kvm_irq_mask_notifier {
diff --git a/arch/x86/include/asm/kvm_page_track.h 
b/arch/x86/include/asm/kvm_page_track.h
index 87bd6025d91d..9a261e463eb3 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -28,12 +28,14 @@ struct kvm_page_track_notifier_node {
 *
 * @vcpu: the vcpu where the write access happened.
 * @gpa: the physical address written by guest.
+* @gva: the virtual address written by guest.
 * @new: the data was written to the address.
 * @bytes: the written length.
 * @node: this node
 */
-   void (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
-   int bytes, struct kvm_page_track_notifier_node 
*node);
+   void (*track_write)(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+   const u8 *new, int bytes,
+   struct kvm_page_track_notifier_node *node);
/*
 * It is called when memory slot is being moved or removed
 * users can drop write-protection for the pages in that memory slot
@@ -68,7 +70,7 @@ kvm_page_track_register_notifier(struct kvm *kvm,
 void
 kvm_page_track_unregister_notifier(struct kvm *kvm,
   struct kvm_page_track_notifier_node *n);
-void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
- int bytes);
+void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+ const u8 *new, int bytes);
 void kvm_page_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot);
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5dfe0ede0e81..1631e2367085 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4963,7 +4963,7 @@ static const union kvm_mmu_page_role role_ign = {
.invalid = 0x1,
 };
 
-static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
  const u8 *new, int bytes,
  struct kvm_page_track_notifier_node *node)
 {
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 8443a675715b..d7a591a85af8 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -216,8 +216,8 @@ EXPORT_SYMBOL_GPL(kvm_page_track_unregister_notifier);
  * The node should figure out if the written page is the one that node is
  * interested in by itself.
  */
-void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
- int bytes)
+void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
+ const u8 *new, int bytes)
 {
struct kvm_page_track_notifier_head *head;
struct kvm_page_track_notifier_node *n;
@@ -232,7 +232,7 @@ void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, 
const u8 *new,
hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
srcu_read_lock_held(&head->track_srcu))
if (n->track_write)
-   n->track_write(vcpu, gpa, new, bytes, n);
+   n->track_write(vcpu, gpa, gva, new, bytes, n);
srcu_read_unlock(&head->track_srcu, idx);
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f48603c8e44d..c2f13a275448 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6115,7 +6115,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, 
unsigned long gva,
return vcpu_is_mmio_gpa(vcpu, gva, *gpa, write);
 }
 
-int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
+int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t g
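
For reference, a consumer of the updated hook now looks roughly like this; the notifier below is a sketch, and only the track_write() prototype change comes from the patch.

/* Sketch: a page-track consumer using the extended callback. */
static void sample_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,
			       const u8 *new, int bytes,
			       struct kvm_page_track_notifier_node *node)
{
	/* The faulting GVA is now available directly, so the consumer no
	 * longer has to reconstruct it from the VM-exit information.
	 */
}

static struct kvm_page_track_notifier_node sample_node = {
	.track_write = sample_track_write,
};

static void sample_register(struct kvm *kvm)
{
	kvm_page_track_register_notifier(kvm, &sample_node);
}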

[PATCH v10 06/81] KVM: x86: add kvm_arch_vcpu_set_regs()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This is needed for the KVMI_VCPU_SET_REGISTERS command, which allows
an introspection tool to override the kvm_regs structure for a specific
vCPU without clearing the pending exception. In most cases this is used
to increment the program counter.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c   | 21 ++---
 include/linux/kvm_host.h |  2 ++
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 540e42341435..5951458408fb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9406,16 +9406,23 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct 
kvm_regs *regs)
 
kvm_rip_write(vcpu, regs->rip);
kvm_set_rflags(vcpu, regs->rflags | X86_EFLAGS_FIXED);
-
-   vcpu->arch.exception.pending = false;
-
-   kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
-int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+void kvm_arch_vcpu_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs,
+   bool clear_exception)
 {
-   vcpu_load(vcpu);
__set_regs(vcpu, regs);
+
+   if (clear_exception)
+   vcpu->arch.exception.pending = false;
+
+   kvm_make_request(KVM_REQ_EVENT, vcpu);
+}
+
+int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+   vcpu_load(vcpu);
+   kvm_arch_vcpu_set_regs(vcpu, regs, true);
vcpu_put(vcpu);
return 0;
 }
@@ -9816,7 +9823,7 @@ static int sync_regs(struct kvm_vcpu *vcpu)
return -EINVAL;
 
if (vcpu->run->kvm_dirty_regs & KVM_SYNC_X86_REGS) {
-   __set_regs(vcpu, &vcpu->run->s.regs.regs);
+   kvm_arch_vcpu_set_regs(vcpu, &vcpu->run->s.regs.regs, true);
vcpu->run->kvm_dirty_regs &= ~KVM_SYNC_X86_REGS;
}
if (vcpu->run->kvm_dirty_regs & KVM_SYNC_X86_SREGS) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 13c6b806477b..6d622d8bd339 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -904,6 +904,8 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
 void kvm_arch_vcpu_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
 int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs);
+void kvm_arch_vcpu_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs,
+   bool clear_exception);
 int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
  struct kvm_sregs *sregs);
 void kvm_arch_vcpu_get_sregs(struct kvm_vcpu *vcpu,
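
A sketch of the intended KVMI-side use, per the commit message: advancing the program counter without dropping a pending exception. The helper name and the insn_len parameter are illustrative.

/* Sketch: skip the current instruction from introspection code while
 * keeping any pending exception intact (clear_exception == false).
 */
static void skip_instruction(struct kvm_vcpu *vcpu, unsigned int insn_len)
{
	struct kvm_regs regs;

	kvm_arch_vcpu_get_regs(vcpu, &regs);
	regs.rip += insn_len;
	kvm_arch_vcpu_set_regs(vcpu, &regs, false);
}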

[PATCH v10 55/81] KVM: introspection: add KVMI_VCPU_GET_CPUID

2020-11-25 Thread Adalbert Lazăr
From: Marian Rotariu 

This command returns a CPUID leaf (as seen by the guest OS).

Signed-off-by: Marian Rotariu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 36 +++
 arch/x86/include/uapi/asm/kvmi.h  | 12 +++
 arch/x86/kvm/kvmi_msg.c   | 26 ++
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 30 
 5 files changed, 105 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 178832304458..10966430621c 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -630,6 +630,42 @@ currently being handled is replied to.
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 * -KVM_EOPNOTSUPP - the command hasn't been received during an introspection 
event
 
+13. KVMI_VCPU_GET_CPUID
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_get_cpuid {
+   __u32 function;
+   __u32 index;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_cpuid_reply {
+   __u32 eax;
+   __u32 ebx;
+   __u32 ecx;
+   __u32 edx;
+   };
+
+Returns a CPUID leaf (as seen by the guest OS).
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_ENOENT - the selected leaf is not present or is invalid
+
 Events
 ==
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 11835bf9bdc6..3631da9eef8c 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -45,4 +45,16 @@ struct kvmi_vcpu_get_registers_reply {
struct kvm_msrs msrs;
 };
 
+struct kvmi_vcpu_get_cpuid {
+   __u32 function;
+   __u32 index;
+};
+
+struct kvmi_vcpu_get_cpuid_reply {
+   __u32 eax;
+   __u32 ebx;
+   __u32 ecx;
+   __u32 edx;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 4046a5c4d306..1651ef877e3e 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -6,6 +6,7 @@
  *
  */
 
+#include "cpuid.h"
 #include "../../../virt/kvm/introspection/kvmi_int.h"
 #include "kvmi.h"
 
@@ -107,7 +108,32 @@ static int handle_vcpu_set_registers(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+static int handle_vcpu_get_cpuid(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *_req)
+{
+   const struct kvmi_vcpu_get_cpuid *req = _req;
+   struct kvmi_vcpu_get_cpuid_reply rpl;
+   struct kvm_cpuid_entry2 *entry;
+   int ec = 0;
+
+   entry = kvm_find_cpuid_entry(job->vcpu, req->function, req->index);
+   if (!entry) {
+   ec = -KVM_ENOENT;
+   } else {
+   memset(&rpl, 0, sizeof(rpl));
+
+   rpl.eax = entry->eax;
+   rpl.ebx = entry->ebx;
+   rpl.ecx = entry->ecx;
+   rpl.edx = entry->edx;
+   }
+
+   return kvmi_msg_vcpu_reply(job, msg, ec, &rpl, sizeof(rpl));
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
+   [KVMI_VCPU_GET_CPUID] = handle_vcpu_get_cpuid,
[KVMI_VCPU_GET_INFO]  = handle_vcpu_get_info,
[KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers,
[KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers,
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 4b756d388ad3..2c93a36bfa43 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -39,6 +39,7 @@ enum {
KVMI_VCPU_CONTROL_EVENTS = KVMI_VCPU_MESSAGE_ID(2),
KVMI_VCPU_GET_REGISTERS  = KVMI_VCPU_MESSAGE_ID(3),
KVMI_VCPU_SET_REGISTERS  = KVMI_VCPU_MESSAGE_ID(4),
+   KVMI_VCPU_GET_CPUID  = KVMI_VCPU_MESSAGE_ID(5),
 
KVMI_NEXT_VCPU_MESSAGE
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 311a050c26c1..542b59466d12 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -948,6 +948,35 @@ static void test_cmd_vcpu_set_registers(struct kvm_vm *vm)
wait_vcpu_worker(vcpu_thread);
 }
 
+static void cmd_vcpu_get_cpuid(struct kvm_vm *vm,
+  __u32 function, __u32 index,
+  struct kvmi_vcpu_get_cpuid_reply *rpl)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_get_cpuid cmd;
+   } req = {};
+
+   req.cmd.function
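
Once the helper above completes, the reply can be used to test guest-visible feature bits; a sketch, where the XSAVE bit position (CPUID.1:ECX[26]) is architectural rather than something defined by this patch.

/* Sketch: check a guest-visible CPUID feature via KVMI_VCPU_GET_CPUID. */
static bool guest_has_xsave(struct kvm_vm *vm)
{
	struct kvmi_vcpu_get_cpuid_reply rpl = {};

	cmd_vcpu_get_cpuid(vm, 1, 0, &rpl);

	return !!(rpl.ecx & (1u << 26)); /* CPUID.1:ECX[26] = XSAVE */
}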

[PATCH v10 03/81] KVM: add kvm_get_max_gfn()

2020-11-25 Thread Adalbert Lazăr
From: Ștefan Șicleru 

This function is needed for the KVMI_VM_GET_MAX_GFN command.

Signed-off-by: Ștefan Șicleru 
Signed-off-by: Adalbert Lazăr 
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c  | 25 +
 2 files changed, 26 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1bbb07b87d1a..cd6ac3a43c9a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -807,6 +807,7 @@ bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t 
gfn);
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot, 
gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
+gfn_t kvm_get_max_gfn(struct kvm *kvm);
 
 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t 
gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 069668b8afc2..e19dd6f92709 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1410,6 +1410,31 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm 
*kvm,
return kvm_set_memory_region(kvm, mem);
 }
 
+gfn_t kvm_get_max_gfn(struct kvm *kvm)
+{
+   u32 skip_mask = KVM_MEM_READONLY | KVM_MEMSLOT_INVALID;
+   struct kvm_memory_slot *memslot;
+   struct kvm_memslots *slots;
+   gfn_t max_gfn = 0;
+   int idx;
+
+   idx = srcu_read_lock(&kvm->srcu);
+   spin_lock(&kvm->mmu_lock);
+
+   slots = kvm_memslots(kvm);
+   kvm_for_each_memslot(memslot, slots)
+   if (memslot->id < KVM_USER_MEM_SLOTS &&
+  (memslot->flags & skip_mask) == 0 &&
+  memslot->npages)
+   max_gfn = max(max_gfn, memslot->base_gfn
+   + memslot->npages);
+
+   spin_unlock(&kvm->mmu_lock);
+   srcu_read_unlock(&kvm->srcu, idx);
+
+   return max_gfn;
+}
+
 #ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 /**
  * kvm_get_dirty_log - get a snapshot of dirty pages
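
The returned GFN is the first one past the last usable slot, so a caller can derive the first unbacked guest-physical address from it; a one-line sketch (the wrapper name is illustrative):

/* Sketch: first guest-physical address not backed by any memslot. */
static gpa_t first_unbacked_gpa(struct kvm *kvm)
{
	return (gpa_t)kvm_get_max_gfn(kvm) << PAGE_SHIFT;
}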

[PATCH v10 67/81] KVM: introspection: add KVMI_VCPU_GET_XSAVE

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This vCPU command is used to get the XSAVE area.

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 29 +++
 arch/x86/include/uapi/asm/kvmi.h  |  4 +++
 arch/x86/kvm/kvmi_msg.c   | 20 +
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 26 +
 5 files changed, 80 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 008c7c73a46f..c1ac47def4e9 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -830,6 +830,35 @@ Returns the value of an extended control register XCR.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+19. KVMI_VCPU_GET_XSAVE
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_xsave_reply {
+   struct kvm_xsave xsave;
+   };
+
+Returns a buffer containing the XSAVE area.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_ENOMEM - there is not enough memory to allocate the reply
+
 Events
 ==
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 5ca6190d85ec..0d3696c52d88 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -111,4 +111,8 @@ struct kvmi_vcpu_get_xcr_reply {
u64 value;
 };
 
+struct kvmi_vcpu_get_xsave_reply {
+   struct kvm_xsave xsave;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 596f607296b5..77c753cd9705 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -194,12 +194,32 @@ static int handle_vcpu_get_xcr(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, &rpl, sizeof(rpl));
 }
 
+static int handle_vcpu_get_xsave(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *req)
+{
+   struct kvmi_vcpu_get_xsave_reply *rpl;
+   int err, ec = 0;
+
+   rpl = kvmi_msg_alloc();
+   if (!rpl)
+   ec = -KVM_ENOMEM;
+   else
+   kvm_vcpu_ioctl_x86_get_xsave(job->vcpu, &rpl->xsave);
+
+   err = kvmi_msg_vcpu_reply(job, msg, 0, rpl, sizeof(*rpl));
+
+   kvmi_msg_free(rpl);
+   return err;
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_CONTROL_CR]   = handle_vcpu_control_cr,
[KVMI_VCPU_GET_CPUID]= handle_vcpu_get_cpuid,
[KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
[KVMI_VCPU_GET_REGISTERS]= handle_vcpu_get_registers,
[KVMI_VCPU_GET_XCR]  = handle_vcpu_get_xcr,
+   [KVMI_VCPU_GET_XSAVE]= handle_vcpu_get_xsave,
[KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception,
[KVMI_VCPU_SET_REGISTERS]= handle_vcpu_set_registers,
 };
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 07b6d383641a..e47c4ce0f8ed 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -45,6 +45,7 @@ enum {
KVMI_VCPU_CONTROL_CR   = KVMI_VCPU_MESSAGE_ID(6),
KVMI_VCPU_INJECT_EXCEPTION = KVMI_VCPU_MESSAGE_ID(7),
KVMI_VCPU_GET_XCR  = KVMI_VCPU_MESSAGE_ID(8),
+   KVMI_VCPU_GET_XSAVE= KVMI_VCPU_MESSAGE_ID(9),
 
KVMI_NEXT_VCPU_MESSAGE
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index da90c6a8d535..277b1061410b 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1448,6 +1448,31 @@ static void test_cmd_vcpu_get_xcr(struct kvm_vm *vm)
cmd_vcpu_get_xcr(vm, xcr1, &value, -KVM_EINVAL);
 }
 
+static void cmd_vcpu_get_xsave(struct kvm_vm *vm)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   } req = {};
+   struct kvm_xsave rpl;
+
+   test_vcpu0_command(vm, KVMI_VCPU_GET_XSAVE, &req.hdr, sizeof(req),
+  &rpl, sizeof(rpl), 0);
+}
+
+static void test_cmd_vcpu_get_xsave(struct kvm_vm *vm)
+{
+   struct kvm_cpuid_entry2 *entry;
+
+   entry = kvm_get_supported_cpuid_entry(1);
+   if (!(entry->ecx & X86_FEATURE_XSAVE)) {
+   print_skip("XSAVE not supported, ecx 0x%x", entry->ecx);
+   return;
+   }
+
+   cmd_vcpu_get_xsave(vm);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
srandom(time(0));
@@ -1476,6 +1501,7 @@ static void test_introsp

[PATCH v10 79/81] KVM: introspection: add KVMI_VCPU_TRANSLATE_GVA

2020-11-25 Thread Adalbert Lazăr
This helps the introspection tool with the GVA to GPA translations
without the need to read or monitor the guest page tables.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 32 +++
 arch/x86/kvm/kvmi_msg.c   | 15 +
 include/uapi/linux/kvmi.h |  9 ++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 30 +
 4 files changed, 86 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 4b2e7809f052..1a33d009ad49 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -1074,6 +1074,38 @@ to the introspection tool.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+25. KVMI_VCPU_TRANSLATE_GVA
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_translate_gva {
+   __u64 gva;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_translate_gva_reply {
+   __u64 gpa;
+   };
+
+Translates a guest virtual address (``gva``) to a guest physical address
+(``gpa``) or ~0 if the address cannot be translated.
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index c4b43b3b7b92..a2b37ef3cf2c 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -313,6 +313,20 @@ static int handle_vcpu_control_singlestep(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+int handle_vcpu_translate_gva(const struct kvmi_vcpu_msg_job *job,
+ const struct kvmi_msg_hdr *msg,
+ const void *_req)
+{
+   const struct kvmi_vcpu_translate_gva *req = _req;
+   struct kvmi_vcpu_translate_gva_reply rpl;
+
+   memset(&rpl, 0, sizeof(rpl));
+
+   rpl.gpa = kvm_mmu_gva_to_gpa_system(job->vcpu, req->gva, 0, NULL);
+
+   return kvmi_msg_vcpu_reply(job, msg, 0, &rpl, sizeof(rpl));
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_CONTROL_CR] = handle_vcpu_control_cr,
[KVMI_VCPU_CONTROL_MSR]= handle_vcpu_control_msr,
@@ -326,6 +340,7 @@ static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_INJECT_EXCEPTION]   = handle_vcpu_inject_exception,
[KVMI_VCPU_SET_REGISTERS]  = handle_vcpu_set_registers,
[KVMI_VCPU_SET_XSAVE]  = handle_vcpu_set_xsave,
+   [KVMI_VCPU_TRANSLATE_GVA]  = handle_vcpu_translate_gva,
 };
 
 kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id)
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 9c646229a25a..a78d42dd6415 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -51,6 +51,7 @@ enum {
KVMI_VCPU_GET_MTRR_TYPE  = KVMI_VCPU_MESSAGE_ID(11),
KVMI_VCPU_CONTROL_MSR= KVMI_VCPU_MESSAGE_ID(12),
KVMI_VCPU_CONTROL_SINGLESTEP = KVMI_VCPU_MESSAGE_ID(13),
+   KVMI_VCPU_TRANSLATE_GVA  = KVMI_VCPU_MESSAGE_ID(14),
 
KVMI_NEXT_VCPU_MESSAGE
 };
@@ -233,4 +234,12 @@ struct kvmi_vcpu_event_singlestep {
__u8 padding[7];
 };
 
+struct kvmi_vcpu_translate_gva {
+   __u64 gva;
+};
+
+struct kvmi_vcpu_translate_gva_reply {
+   __u64 gpa;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index c6f41f54f9b0..6deaf8dee610 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1905,6 +1905,35 @@ static void test_cmd_vcpu_control_singlestep(struct 
kvm_vm *vm)
test_unsupported_singlestep(vm);
 }
 
+static void cmd_translate_gva(struct kvm_vm *vm, vm_vaddr_t gva,
+ vm_paddr_t expected_gpa)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_translate_gva cmd;
+   } req = { 0 };
+   struct kvmi_vcpu_translate_gva_reply rpl;
+
+   req.cmd.gva = gva;
+
+   test_vcpu0_command(vm, KVMI_VCPU_TRANSLATE_GVA, &req.hdr, sizeof(req),
+ &rpl, sizeof(rpl), 0);
+   TEST_ASSERT(rpl.gpa == expected_gpa,
+   "Translation failed for gva 0x%lx -> gpa 0x%llx instead of 
0x%lx\n",
+   gva, rpl.gpa, expected_gpa);
+}
+
+static void test_cmd_translate_gva(struct kvm_vm *vm)
+{
+   cmd_translate_gva(vm, test_gva, test_gpa);
+   pr_info("Tested gva 0x%lx to gpa 0x%lx\n", test_gva, test_gpa);
+
+   cmd_translate_gva(vm, -1, ~0);
+   pr_info("Tested gva 0x%lx to gpa 0x%lx\n",
+ 

[PATCH v10 10/81] KVM: x86: add kvm_x86_ops.cr3_write_intercepted()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function will be used to allow the introspection tool to disable the
CR3-write interception when it is no longer interested in these events,
but only if nothing else depends on these VM-exits.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/svm.c  | 8 
 arch/x86/kvm/vmx/vmx.c  | 8 
 3 files changed, 17 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0eeb1d829a1d..a402384a9326 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1124,6 +1124,7 @@ struct kvm_x86_ops {
void (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
void (*control_cr3_intercept)(struct kvm_vcpu *vcpu, int type,
  bool enable);
+   bool (*cr3_write_intercepted)(struct kvm_vcpu *vcpu);
int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4f28fa035048..5000ee25545b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1720,6 +1720,13 @@ static void svm_control_cr3_intercept(struct kvm_vcpu 
*vcpu, int type,
 svm_clr_intercept(svm, INTERCEPT_CR3_WRITE);
 }
 
+static bool svm_cr3_write_intercepted(struct kvm_vcpu *vcpu)
+{
+   struct vcpu_svm *svm = to_svm(vcpu);
+
+   return svm_is_intercept(svm, INTERCEPT_CR3_WRITE);
+}
+
 static void svm_set_segment(struct kvm_vcpu *vcpu,
struct kvm_segment *var, int seg)
 {
@@ -4247,6 +4254,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.is_valid_cr4 = svm_is_valid_cr4,
.set_cr4 = svm_set_cr4,
.control_cr3_intercept = svm_control_cr3_intercept,
+   .cr3_write_intercepted = svm_cr3_write_intercepted,
.set_efer = svm_set_efer,
.get_idt = svm_get_idt,
.set_idt = svm_set_idt,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c5a53642d1c0..7b2a60cd7a76 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2995,6 +2995,13 @@ static void vmx_control_cr3_intercept(struct kvm_vcpu 
*vcpu, int type,
exec_controls_clearbit(vmx, cr3_exec_control);
 }
 
+static bool vmx_cr3_write_intercepted(struct kvm_vcpu *vcpu)
+{
+   struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+   return !!(exec_controls_get(vmx) & CPU_BASED_CR3_LOAD_EXITING);
+}
+
 static void ept_update_paging_mode_cr0(unsigned long *hw_cr0,
unsigned long cr0,
struct kvm_vcpu *vcpu)
@@ -7643,6 +7650,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.is_valid_cr4 = vmx_is_valid_cr4,
.set_cr4 = vmx_set_cr4,
.control_cr3_intercept = vmx_control_cr3_intercept,
+   .cr3_write_intercepted = vmx_cr3_write_intercepted,
.set_efer = vmx_set_efer,
.get_idt = vmx_get_idt,
.set_idt = vmx_set_idt,
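
A sketch of the caller the commit message describes: the introspection side can now check whether the CR3-write intercept is still in place before clearing it; the wrapper and its flag are illustrative.

/* Sketch: clear the CR3-write intercept only when introspection enabled
 * it and nothing else still relies on the VM-exit.
 */
static void sketch_disable_cr3_write_events(struct kvm_vcpu *vcpu,
					    bool enabled_by_introspection)
{
	if (!kvm_x86_ops.cr3_write_intercepted(vcpu))
		return; /* already disabled */

	if (enabled_by_introspection)
		kvm_x86_ops.control_cr3_intercept(vcpu, CR_TYPE_W, false);
}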

[PATCH v10 51/81] KVM: introspection: add the crash action handling on the event reply

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This action is used in extreme cases such as blocking the spread of
malware as fast as possible.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 virt/kvm/introspection/kvmi.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 3d26a7319fb7..d25b83dce8ed 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -751,6 +751,10 @@ void kvmi_handle_common_event_actions(struct kvm_vcpu 
*vcpu, u32 action)
struct kvm *kvm = vcpu->kvm;
 
switch (action) {
+   case KVMI_EVENT_ACTION_CRASH:
+   vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
+   break;
+
default:
kvmi_handle_unsupported_event_action(kvm);
}

[PATCH v10 37/81] KVM: introspection: add KVMI_VM_CHECK_COMMAND and KVMI_VM_CHECK_EVENT

2020-11-25 Thread Adalbert Lazăr
These commands are used to check what introspection commands and events
are supported (kernel) and allowed (device manager).

These are alternatives to KVMI_GET_VERSION for checking whether the
introspection subsystem supports a specific command/event.

As with the KVMI_GET_VERSION command, these two commands can never be
disallowed by the device manager.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 62 +++
 include/uapi/linux/kvmi.h | 16 -
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 45 ++
 virt/kvm/introspection/kvmi.c | 19 ++
 virt/kvm/introspection/kvmi_int.h |  2 +
 virt/kvm/introspection/kvmi_msg.c | 40 +++-
 6 files changed, 182 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index d3d672a07872..13169575f75f 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -250,3 +250,65 @@ larger messages.
 The introspection tool should use this command to identify the features
 supported by the kernel side and what messages must be used for event
 replies.
+
+2. KVMI_VM_CHECK_COMMAND
+
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_check_command {
+   __u16 id;
+   __u16 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+
+Checks if the command specified by ``id`` is supported and allowed.
+
+This command is always allowed.
+
+:Errors:
+
+* -KVM_ENOENT - the command specified by ``id`` is unsupported
+* -KVM_EPERM - the command specified by ``id`` is disallowed
+* -KVM_EINVAL - the padding is not zero
+
+3. KVMI_VM_CHECK_EVENT
+--
+
+:Architectures: all
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vm_check_event {
+   __u16 id;
+   __u16 padding1;
+   __u32 padding2;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+
+Checks if the event specified by ``id`` is supported and allowed.
+
+This command is always allowed.
+
+:Errors:
+
+* -KVM_ENOENT - the event specified by ``id`` is unsupported
+* -KVM_EPERM - the event specified by ``id`` is disallowed
+* -KVM_EINVAL - the padding is not zero
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 77dd727dfe18..0c2d0cedde6f 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -17,7 +17,9 @@ enum {
 #define KVMI_VCPU_MESSAGE_ID(id) (((id) << 1) | 1)
 
 enum {
-   KVMI_GET_VERSION = KVMI_VM_MESSAGE_ID(1),
+   KVMI_GET_VERSION  = KVMI_VM_MESSAGE_ID(1),
+   KVMI_VM_CHECK_COMMAND = KVMI_VM_MESSAGE_ID(2),
+   KVMI_VM_CHECK_EVENT   = KVMI_VM_MESSAGE_ID(3),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -53,4 +55,16 @@ struct kvmi_get_version_reply {
__u32 max_msg_size;
 };
 
+struct kvmi_vm_check_command {
+   __u16 id;
+   __u16 padding1;
+   __u32 padding2;
+};
+
+struct kvmi_vm_check_event {
+   __u16 id;
+   __u16 padding1;
+   __u32 padding2;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 30acd3a2d030..cd8f16a3ce3a 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -93,6 +93,8 @@ static void hook_introspection(struct kvm_vm *vm)
do_hook_ioctl(vm, Kvm_socket, EEXIST);
 
set_command_perm(vm, KVMI_GET_VERSION, disallow, EPERM);
+   set_command_perm(vm, KVMI_VM_CHECK_COMMAND, disallow, EPERM);
+   set_command_perm(vm, KVMI_VM_CHECK_EVENT, disallow, EPERM);
set_command_perm(vm, all_IDs, allow_inval, EINVAL);
set_command_perm(vm, all_IDs, disallow, 0);
set_command_perm(vm, all_IDs, allow, 0);
@@ -241,6 +243,47 @@ static void test_cmd_get_version(void)
pr_debug("Max message size: %u\n", rpl.max_msg_size);
 }
 
+static void cmd_vm_check_command(__u16 id, int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vm_check_command cmd;
+   } req = {};
+
+   req.cmd.id = id;
+
+   test_vm_command(KVMI_VM_CHECK_COMMAND, &req.hdr, sizeof(req), NULL, 0,
+   expected_err);
+}
+
+static void test_cmd_vm_check_command(void)
+{
+   __u16 valid_id = KVMI_GET_VERSION, invalid_id = 0x;
+
+   cmd_vm_check_command(valid_id, 0);
+   cmd_vm_check_command(invalid_id, -KVM_ENOENT);
+}
+
+static void cmd_vm_check_event(__u16 id, int expected_err)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vm_check_event cmd;
+   } req = {};
+
+   req.cmd.id = id;
+
+   test_vm_command(KVMI_VM_CHECK_EVENT, &req.hdr, sizeof(req), NULL, 0,
+   expected_err);
+}
+
+static void test_cmd
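
A sketch of run-time feature probing with the new command, as an alternative to version checks; do_command() is the selftest helper above and the probed ID is only an example.

/* Sketch: probe for an optional command before relying on it. */
static bool have_pause_vcpu(void)
{
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vm_check_command cmd;
	} req = {};
	int r;

	req.cmd.id = KVMI_VM_PAUSE_VCPU;

	r = do_command(KVMI_VM_CHECK_COMMAND, &req.hdr, sizeof(req), NULL, 0);

	return r == 0; /* -KVM_ENOENT/-KVM_EPERM mean unsupported/disallowed */
}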

[PATCH v10 16/81] KVM: x86: svm: use the vmx convention to control the MSR interception

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This is a preparatory patch in order to use a common interface to
enable/disable the MSR interception.

Also, it will allow the read and write interceptions to be controlled
independently, as sketched below.
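For illustration only (this snippet is not part of the patch, and
MSR_IA32_SYSENTER_EIP is merely an example from the direct-access list), a
caller that used the old read/write pair would now pass a type mask and a
single value:

    /* before: allow (do not intercept) both reads and writes */
    set_msr_interception(vcpu, msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);

    /* after: the same request with the vmx-style convention */
    set_msr_interception(vcpu, msrpm, MSR_IA32_SYSENTER_EIP,
                         MSR_TYPE_RW, true);

    /* write interception can now be toggled without touching reads */
    set_msr_interception(vcpu, msrpm, MSR_IA32_SYSENTER_EIP,
                         MSR_TYPE_W, false);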

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  4 ++
 arch/x86/kvm/svm/svm.c  | 88 +
 arch/x86/kvm/vmx/vmx.h  |  4 --
 3 files changed, 60 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5236008d231f..8586c9f4feba 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -141,6 +141,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t 
base_gfn, int level)
 #define CR_TYPE_W  2
 #define CR_TYPE_RW 3
 
+#define MSR_TYPE_R 1
+#define MSR_TYPE_W 2
+#define MSR_TYPE_RW3
+
 #define ASYNC_PF_PER_VCPU 64
 
 enum kvm_reg {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4478942f10a5..8d662ccf5b62 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -584,8 +584,8 @@ static int direct_access_msr_slot(u32 msr)
return -ENOENT;
 }
 
-static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
-int write)
+static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr,
+int type, bool value)
 {
struct vcpu_svm *svm = to_svm(vcpu);
int slot = direct_access_msr_slot(msr);
@@ -594,15 +594,19 @@ static void set_shadow_msr_intercept(struct kvm_vcpu 
*vcpu, u32 msr, int read,
return;
 
/* Set the shadow bitmaps to the desired intercept states */
-   if (read)
-   set_bit(slot, svm->shadow_msr_intercept.read);
-   else
-   clear_bit(slot, svm->shadow_msr_intercept.read);
+   if (type & MSR_TYPE_R) {
+   if (value)
+   set_bit(slot, svm->shadow_msr_intercept.read);
+   else
+   clear_bit(slot, svm->shadow_msr_intercept.read);
+   }
 
-   if (write)
-   set_bit(slot, svm->shadow_msr_intercept.write);
-   else
-   clear_bit(slot, svm->shadow_msr_intercept.write);
+   if (type & MSR_TYPE_W) {
+   if (value)
+   set_bit(slot, svm->shadow_msr_intercept.write);
+   else
+   clear_bit(slot, svm->shadow_msr_intercept.write);
+   }
 }
 
 static bool valid_msr_intercept(u32 index)
@@ -630,7 +634,7 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, 
u32 msr)
 }
 
 static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
-   u32 msr, int read, int write)
+   u32 msr, int type, bool value)
 {
u8 bit_read, bit_write;
unsigned long tmp;
@@ -643,11 +647,13 @@ static void set_msr_interception_bitmap(struct kvm_vcpu 
*vcpu, u32 *msrpm,
WARN_ON(!valid_msr_intercept(msr));
 
/* Enforce non allowed MSRs to trap */
-   if (read && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ))
-   read = 0;
+   if (value && (type & MSR_TYPE_R) &&
+   !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ))
+   type &= ~MSR_TYPE_R;
 
-   if (write && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE))
-   write = 0;
+   if (value && (type & MSR_TYPE_W) &&
+   !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE))
+   type &= ~MSR_TYPE_W;
 
offset= svm_msrpm_offset(msr);
bit_read  = 2 * (msr & 0x0f);
@@ -656,17 +662,19 @@ static void set_msr_interception_bitmap(struct kvm_vcpu 
*vcpu, u32 *msrpm,
 
BUG_ON(offset == MSR_INVALID);
 
-   read  ? clear_bit(bit_read,  &tmp) : set_bit(bit_read,  &tmp);
-   write ? clear_bit(bit_write, &tmp) : set_bit(bit_write, &tmp);
+   if (type & MSR_TYPE_R)
+   value  ? clear_bit(bit_read,  &tmp) : set_bit(bit_read,  &tmp);
+   if (type & MSR_TYPE_W)
+   value  ? clear_bit(bit_write, &tmp) : set_bit(bit_write, &tmp);
 
msrpm[offset] = tmp;
 }
 
 static void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
-int read, int write)
+int type, bool value)
 {
-   set_shadow_msr_intercept(vcpu, msr, read, write);
-   set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
+   set_shadow_msr_intercept(vcpu, msr, type, value);
+   set_msr_interception_bitmap(vcpu, msrpm, msr, type, value);
 }
 
 u32 *svm_vcpu_alloc_msrpm(void)
@@ -690,7 +698,8 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm)
for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
if (!direct_access_msrs[i].always)
continue;
-   set_ms

[PATCH v10 35/81] KVM: introspection: add the read/dispatch message function

2020-11-25 Thread Adalbert Lazăr
Based on the common header (struct kvmi_msg_hdr), the receiving thread
will read/validate all messages, execute the VM introspection commands
(eg. KVMI_VM_GET_INFO) and dispatch the vCPU introspection commands
(eg. KVMI_VCPU_GET_REGISTERS) to the vCPU threads.

The vCPU threads will reply to vCPU introspection commands without
the help of the receiving thread. Same for sending vCPU events, but
the vCPU thread will wait for the receiving thread to get the event
reply. Meanwhile, it will execute any queued vCPU introspection command.

The receiving thread will end when the socket is closed or on the first
API error (eg. wrong message size).

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  75 
 include/uapi/linux/kvmi.h |  11 ++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 100 +++
 virt/kvm/introspection/kvmi.c |  43 -
 virt/kvm/introspection/kvmi_int.h |  10 ++
 virt/kvm/introspection/kvmi_msg.c | 161 +-
 6 files changed, 398 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 59cc33a39f9f..ae6bbf37aef3 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -65,6 +65,74 @@ been used on that guest (if requested). Obviously, whether 
the guest can
 really continue normal execution depends on whether the introspection
 tool has made any modifications that require an active KVMI channel.
 
+All messages (commands or events) have a common header::
+
+   struct kvmi_msg_hdr {
+   __u16 id;
+   __u16 size;
+   __u32 seq;
+   };
+
+The replies have the same header, with the sequence number (``seq``)
+and message id (``id``) matching the command/event.
+
+After ``kvmi_msg_hdr``, ``id`` specific data of ``size`` bytes will
+follow.
+
+The message header and its data must be sent with one ``sendmsg()`` call
+to the socket. This simplifies the receiver loop and avoids
+the reconstruction of messages on the other side.
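A sender-side sketch (not part of the patch) that satisfies this
requirement, assuming the header structure above plus <sys/socket.h> and
<sys/uio.h>:

    /* Send one complete message (header plus id-specific data) with a
     * single sendmsg() call. */
    static int kvmi_send_msg(int fd, struct kvmi_msg_hdr *hdr, void *data)
    {
            struct iovec iov[] = {
                    { .iov_base = hdr,  .iov_len = sizeof(*hdr) },
                    { .iov_base = data, .iov_len = hdr->size },
            };
            struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 2 };
            ssize_t n = sendmsg(fd, &msg, 0);

            return n == (ssize_t)(sizeof(*hdr) + hdr->size) ? 0 : -1;
    }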
+
+The wire protocol uses the host native byte-order. The introspection tool
+must check this during the handshake and do the necessary conversion.
+
+A command reply begins with::
+
+   struct kvmi_error_code {
+   __s32 err;
+   __u32 padding;
+   }
+
+followed by the command specific data if the error code ``err`` is zero.
+
+The error code -KVM_ENOSYS is returned for unsupported commands.
+
+The error code -KVM_EPERM is returned for disallowed commands (see 
**Hooking**).
+
+Other error codes can be returned during message handling, but for
+some errors (incomplete messages, wrong sequence numbers, socket errors
+etc.) the socket will be closed. The device manager should reconnect.
+
+When a vCPU thread sends an introspection event, it will wait (and handle
+any related introspection command) until it gets the event reply::
+
+   Host kernel   Introspection tool
+   ---   --
+   event 1 ->
+ <- command 1
+   command 1 reply ->
+ <- command 2
+   command 2 reply ->
+ <- event 1 reply
+
+As can be seen below, the wire protocol specifies occasional padding. This
+is to permit working with the data directly through C structures, or to round
+the structure size to a multiple of 8 bytes (64bit) to speed up the copy
+operations that happen during ``recvmsg()`` or ``sendmsg()``. The members
+should have the native alignment of the host. All padding must be
+initialized with zero, otherwise the respective command will fail with
+-KVM_EINVAL.
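Since the layout is fixed, an introspection tool can check these
expectations at build time; a small sketch (not part of the patch):

    /* Both structures are expected to be exactly 8 bytes, with no
     * compiler-inserted padding. */
    _Static_assert(sizeof(struct kvmi_msg_hdr) == 8,
                   "kvmi_msg_hdr must be 8 bytes");
    _Static_assert(sizeof(struct kvmi_error_code) == 8,
                   "kvmi_error_code must be 8 bytes");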
+
+To describe the commands/events, we reuse some conventions from api.rst:
+
+  - Architectures: which instruction set architectures provide this 
command/event
+
+  - Versions: which versions provide this command/event
+
+  - Parameters: incoming message data
+
+  - Returns: outgoing/reply message data
+
 Handshake
 -
 
@@ -99,6 +167,13 @@ In the end, the device manager will pass the file 
descriptor (plus
 the allowed commands/events) to KVM. It will detect when the socket is
 shutdown and it will reinitiate the handshake.
 
+Once the file descriptor reaches KVM, the introspection tool should
+use the *KVMI_GET_VERSION* command to get the API version and/or the
+*KVMI_VM_CHECK_COMMAND* and *KVMI_VM_CHECK_EVENT* commands to see which
+commands/events are allowed for this guest. The error code -KVM_EPERM
+will be returned if the introspection tool uses a command or tries to
+enable an event which is disallowed.
+
 Unhooking
 -
 
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 85f8622ddf95..2b37eee82c52 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -32,4 +32,15 @@ enum {
KVMI_NEXT_VCPU_EVENT
 };
 
+struct kvmi_msg_hdr {
+   __u16 id;
+   __u16 size;
+   

[PATCH v10 25/81] KVM: x86: export kvm_vcpu_ioctl_x86_get_xsave()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function is needed for the KVMI_VCPU_GET_XSAVE command.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c   | 4 ++--
 include/linux/kvm_host.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 741505f405b1..4fadd1ab20ae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4497,8 +4497,8 @@ static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
}
 }
 
-static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
-struct kvm_xsave *guest_xsave)
+void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+ struct kvm_xsave *guest_xsave)
 {
if (boot_cpu_has(X86_FEATURE_XSAVE)) {
memset(guest_xsave, 0, sizeof(struct kvm_xsave));
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2c640ea9d7ba..6eec75f77d7e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -921,6 +921,8 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu 
*vcpu,
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
  struct kvm_guest_debug *dbg);
+void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+ struct kvm_xsave *guest_xsave);
 
 int kvm_arch_init(void *opaque);
 void kvm_arch_exit(void);

[PATCH v10 20/81] KVM: x86: add kvm_x86_ops.fault_gla()

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This function is needed for kvmi_update_ad_flags()
and kvm_page_track_emulation_failure().

kvmi_update_ad_flags() uses the existing guest page table walk code to
update the A/D bits and return to guest (when the introspection tool
write-protects the guest page tables).

kvm_page_track_emulation_failure() calls the page tracking code, that
can trigger an event for the introspection tool (which might need the
GVA in addition to the GPA).

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/include/asm/vmx.h  | 2 ++
 arch/x86/kvm/svm/svm.c  | 9 +
 arch/x86/kvm/vmx/vmx.c  | 9 +
 4 files changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 86048037da23..45c72af05fa2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1303,6 +1303,8 @@ struct kvm_x86_ops {
 
void (*migrate_timers)(struct kvm_vcpu *vcpu);
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
+
+   u64 (*fault_gla)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 38ca445a8429..5543332292b5 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -544,6 +544,7 @@ enum vm_entry_failure_code {
 #define EPT_VIOLATION_READABLE_BIT 3
 #define EPT_VIOLATION_WRITABLE_BIT 4
 #define EPT_VIOLATION_EXECUTABLE_BIT   5
+#define EPT_VIOLATION_GLA_VALID_BIT7
 #define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
 #define EPT_VIOLATION_ACC_READ (1 << EPT_VIOLATION_ACC_READ_BIT)
 #define EPT_VIOLATION_ACC_WRITE(1 << 
EPT_VIOLATION_ACC_WRITE_BIT)
@@ -551,6 +552,7 @@ enum vm_entry_failure_code {
 #define EPT_VIOLATION_READABLE (1 << EPT_VIOLATION_READABLE_BIT)
 #define EPT_VIOLATION_WRITABLE (1 << EPT_VIOLATION_WRITABLE_BIT)
 #define EPT_VIOLATION_EXECUTABLE   (1 << EPT_VIOLATION_EXECUTABLE_BIT)
+#define EPT_VIOLATION_GLA_VALID(1 << 
EPT_VIOLATION_GLA_VALID_BIT)
 #define EPT_VIOLATION_GVA_TRANSLATED   (1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
 
 /*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 43a2e4ec6178..c6730ec39c58 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4314,6 +4314,13 @@ static int svm_vm_init(struct kvm *kvm)
return 0;
 }
 
+static u64 svm_fault_gla(struct kvm_vcpu *vcpu)
+{
+   const struct vcpu_svm *svm = to_svm(vcpu);
+
+   return svm->vcpu.arch.cr2 ? svm->vcpu.arch.cr2 : ~0ull;
+}
+
 static struct kvm_x86_ops svm_x86_ops __initdata = {
.hardware_unsetup = svm_hardware_teardown,
.hardware_enable = svm_hardware_enable,
@@ -4442,6 +4449,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
 
.msr_filter_changed = svm_msr_filter_changed,
+
+   .fault_gla = svm_fault_gla,
 };
 
 static struct kvm_x86_init_ops svm_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d5d4203378d3..41ea1ee9d419 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7641,6 +7641,13 @@ static int vmx_cpu_dirty_log_size(void)
return enable_pml ? PML_ENTITY_NUM : 0;
 }
 
+static u64 vmx_fault_gla(struct kvm_vcpu *vcpu)
+{
+   if (vcpu->arch.exit_qualification & EPT_VIOLATION_GLA_VALID)
+   return vmcs_readl(GUEST_LINEAR_ADDRESS);
+   return ~0ull;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
.hardware_unsetup = hardware_unsetup,
 
@@ -7779,6 +7786,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 
.msr_filter_changed = vmx_msr_filter_changed,
.cpu_dirty_log_size = vmx_cpu_dirty_log_size,
+
+   .fault_gla = vmx_fault_gla,
 };
 
 static __init int hardware_setup(void)

[PATCH v10 56/81] KVM: introspection: add KVMI_VCPU_EVENT_HYPERCALL

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This event is sent on a specific hypercall.

It is used by the code residing inside the introspected guest to call the
introspection tool and to report certain details about its operation.
For example, a classic antimalware remediation tool can report
what it has found during a scan.

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/hypercalls.rst | 35 
 Documentation/virt/kvm/kvmi.rst   | 40 +-
 arch/x86/include/uapi/asm/kvmi.h  |  4 ++
 arch/x86/kvm/kvmi.c   | 20 +
 arch/x86/kvm/x86.c| 18 ++--
 include/linux/kvmi_host.h |  2 +
 include/uapi/linux/kvm_para.h |  1 +
 include/uapi/linux/kvmi.h |  3 +-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 42 +++
 virt/kvm/introspection/kvmi.c | 38 +
 virt/kvm/introspection/kvmi_int.h |  8 
 virt/kvm/introspection/kvmi_msg.c | 13 ++
 12 files changed, 218 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/hypercalls.rst 
b/Documentation/virt/kvm/hypercalls.rst
index 70e77c66b64c..abfbff96b9e3 100644
--- a/Documentation/virt/kvm/hypercalls.rst
+++ b/Documentation/virt/kvm/hypercalls.rst
@@ -169,3 +169,38 @@ a0: destination APIC ID
 
 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
any of the IPI target vCPUs was preempted.
+
+9. KVM_HC_XEN_HVM_OP
+
+
+:Architecture: x86
+:Status: active
+:Purpose: To enable communication between a guest agent and a VMI application
+
+Usage:
+
+An event will be sent to the VMI application (see kvmi.rst) if the following
+registers, which differ between 32bit and 64bit, have the following values:
+
+   ======== ======= ==========================================
+   32bit    64bit   value
+   ======== ======= ==========================================
+   ebx (a0) rdi     KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT
+   ecx (a1) rsi     0
+   ======== ======= ==========================================
+
+This specification copies Xen's { __HYPERVISOR_hvm_op,
+HVMOP_guest_request_vm_event } hypercall and can originate from kernel or
+userspace.
+
+It returns 0 if successful, or a negative POSIX.1 error code if it fails. The
+absence of an active VMI application is not signaled in any way.
+
+The following registers are clobbered:
+
+  * 32bit: edx, esi, edi, ebp
+  * 64bit: rdx, r10, r8, r9
+
+In particular, for KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT, the last two
+registers can be poisoned deliberately and cannot be used for passing
+information.
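For a 64-bit guest, the convention above translates into roughly the
following sketch (not part of the patch; on AMD the instruction would be
``vmmcall``, and the constants come from the headers added by this series):

    /* Guest-side sketch: raise the introspection event described above.
     * The hypercall number goes in rax, a0 in rdi, a1 in rsi; rdx, r10,
     * r8 and r9 may be clobbered by the host. */
    static long kvmi_guest_request_vm_event(void)
    {
            long ret;

            asm volatile("vmcall"
                         : "=a"(ret)
                         : "a"((long)KVM_HC_XEN_HVM_OP),
                           "D"((long)KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT),
                           "S"(0L)
                         : "rdx", "r10", "r8", "r9", "memory");

            return ret;     /* 0 on success, negative POSIX.1 error code */
    }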
diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 10966430621c..023c885638af 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -535,7 +535,10 @@ command) before returning to guest.
 
struct kvmi_error_code
 
-Enables/disables vCPU introspection events.
+Enables/disables vCPU introspection events. This command can be used with
+the following events::
+
+   KVMI_VCPU_EVENT_HYPERCALL
 
 When an event is enabled, the introspection tool is notified and
 must reply with: continue, retry, crash, etc. (see **Events** below).
@@ -779,3 +782,38 @@ cannot be controlled with *KVMI_VCPU_CONTROL_EVENTS*.
 Because it has a low priority, it will be sent after any other vCPU
 introspection event and when no other vCPU introspection command is
 queued.
+
+3. KVMI_VCPU_EVENT_HYPERCALL
+
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_event_hdr;
+   struct kvmi_vcpu_event;
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent on a specific user hypercall when the introspection has
+been enabled for this event (see *KVMI_VCPU_CONTROL_EVENTS*).
+
+The hypercall number must be ``KVM_HC_XEN_HVM_OP`` with the
+``KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT`` sub-function
+(see hypercalls.rst).
+
+It is used by the code residing inside the introspected guest to call the
+introspection tool and to report certain details about its operation. For
+example, a classic antimalware remediation tool can report what it has
+found during a scan.
+
+The most useful registers describing the vCPU state can be read from
+``kvmi_vcpu_event.arch.regs``.
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 3631da9eef8c..a442ba4d2190 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -8,6 +8,10 @@
 
 #include 
 
+enum {
+   KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT = 24,
+};
+
 struct kvmi_vcpu_get_info_reply {
__u64 tsc_speed;
 };
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 39638af7757e..5f08cf0d19bc 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86

[PATCH v10 57/81] KVM: introspection: add KVMI_VCPU_EVENT_BREAKPOINT

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This event is sent when a breakpoint was reached.

The introspection tool can place breakpoints and use them as notification
for when the OS or an application has reached a certain state or is
trying to perform a certain operation (eg. create a process).

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 48 ++
 arch/x86/kvm/kvmi.c   | 50 +++
 arch/x86/kvm/svm/svm.c| 34 +
 arch/x86/kvm/vmx/vmx.c| 17 +--
 include/linux/kvmi_host.h |  3 ++
 include/uapi/linux/kvmi.h | 11 +++-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 46 +
 virt/kvm/introspection/kvmi.c | 25 ++
 virt/kvm/introspection/kvmi_int.h |  4 ++
 virt/kvm/introspection/kvmi_msg.c | 18 +++
 10 files changed, 250 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 023c885638af..c89f383e48f9 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -538,6 +538,7 @@ command) before returning to guest.
 Enables/disables vCPU introspection events. This command can be used with
 the following events::
 
+   KVMI_VCPU_EVENT_BREAKPOINT
KVMI_VCPU_EVENT_HYPERCALL
 
 When an event is enabled, the introspection tool is notified and
@@ -559,6 +560,9 @@ the *KVMI_VM_CONTROL_EVENTS* command.
 * -KVM_EINVAL - the event ID is unknown (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY - the event can't be intercepted right now
+   (e.g. KVMI_VCPU_EVENT_BREAKPOINT if the #BP event
+is already intercepted by userspace)
 
 11. KVMI_VCPU_GET_REGISTERS
 ---
@@ -817,3 +821,47 @@ found during a scan.
 
 The most useful registers describing the vCPU state can be read from
 ``kvmi_vcpu_event.arch.regs``.
+
+4. KVMI_VCPU_EVENT_BREAKPOINT
+-
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH, RETRY
+:Parameters:
+
+::
+
+   struct kvmi_event_hdr;
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_breakpoint {
+   __u64 gpa;
+   __u8 insn_len;
+   __u8 padding[7];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent when a breakpoint was reached and the introspection has
+been enabled for this event (see *KVMI_VCPU_CONTROL_EVENTS*).
+
+Some of these breakpoints could have been injected by the introspection tool,
+placed in the slack space of various functions and used as notification
+for when the OS or an application has reached a certain state or is
+trying to perform a certain operation (like creating a process).
+
+``kvmi_vcpu_event`` (with the vCPU state), the guest physical address
+(``gpa``) where the breakpoint instruction is placed and the breakpoint
+instruction length (``insn_len``) are sent to the introspection tool.
+
+The *RETRY* action is used by the introspection tool for its own
+breakpoints. In most cases, the tool will change the instruction pointer
+before returning this action.
+
+The *CONTINUE* action will cause the breakpoint exception to be reinjected
+(the OS will handle it).
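A sketch of how an introspection tool might react to this event (the helper
names, the ``ev`` layout and the action constants are made up for
illustration; only the CONTINUE/RETRY semantics come from the text above):

    /* Tool-side sketch: resume over our own INT3s, reinject the rest. */
    if (is_our_breakpoint(ev->bp.gpa)) {
            /* skip the injected INT3 and let the guest continue */
            set_vcpu_rip(fd, ev->vcpu, ev->regs.rip + ev->bp.insn_len);
            reply_event(fd, ev, KVMI_EVENT_ACTION_RETRY);
    } else {
            /* guest-owned breakpoint: have KVM reinject #BP */
            reply_event(fd, ev, KVMI_EVENT_ACTION_CONTINUE);
    }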
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 5f08cf0d19bc..0bb6f38f1213 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -11,6 +11,7 @@
 
 void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
 {
+   set_bit(KVMI_VCPU_EVENT_BREAKPOINT, supported);
set_bit(KVMI_VCPU_EVENT_HYPERCALL, supported);
 }
 
@@ -160,3 +161,52 @@ bool kvmi_arch_is_agent_hypercall(struct kvm_vcpu *vcpu)
return (subfunc1 == KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT
&& subfunc2 == 0);
 }
+
+static int kvmi_control_bp_intercept(struct kvm_vcpu *vcpu, bool enable)
+{
+   struct kvm_guest_debug dbg = {};
+   int err = 0;
+
+   if (enable)
+   dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;
+
+   err = kvm_arch_vcpu_set_guest_debug(vcpu, &dbg);
+
+   return err;
+}
+
+int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
+   unsigned int event_id, bool enable)
+{
+   int err = 0;
+
+   switch (event_id) {
+   case KVMI_VCPU_EVENT_BREAKPOINT:
+   err = kvmi_control_bp_intercept(vcpu, enable);
+   break;
+   default:
+   break;
+   }
+
+   return err;
+}
+
+void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len)
+{
+   u32 action;
+   u6

[PATCH v10 18/81] KVM: x86: vmx: use a symbolic constant when checking the exit qualifications

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This should make the code more readable.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c1497b8e506c..a7d2bab38233 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5386,8 +5386,8 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
EPT_VIOLATION_EXECUTABLE))
  ? PFERR_PRESENT_MASK : 0;
 
-   error_code |= (exit_qualification & 0x100) != 0 ?
-  PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+   error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED)
+ ? PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
vcpu->arch.exit_qualification = exit_qualification;
 

[PATCH v10 31/81] KVM: x86: disable gpa_available optimization for fetch and page-walk SPT violations

2020-11-25 Thread Adalbert Lazăr
From: Mircea Cîrjaliu 

This change is needed because the introspection tool can write-protect
guest page tables or exec-protect heap/stack pages.

Signed-off-by: Mircea Cîrjaliu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 5 +
 arch/x86/kvm/mmu/mmu.c  | 7 +++
 arch/x86/kvm/x86.c  | 2 +-
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0342835a79d2..46849b92f937 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1448,6 +1448,10 @@ extern u64 kvm_mce_cap_supported;
  *  retry native execution under certain conditions,
  *  Can only be set in conjunction with EMULTYPE_PF.
  *
+ * EMULTYPE_GPA_AVAILABLE_PF - Set when the emulator can avoid a page walk
+ *   to get the GPA.
+ *   Can only be set in conjunction with EMULTYPE_PF.
+ *
  * EMULTYPE_TRAP_UD_FORCED - Set when emulating an intercepted #UD that was
  *  triggered by KVM's magic "force emulation" prefix,
  *  which is opt in via module param (off by default).
@@ -1470,6 +1474,7 @@ extern u64 kvm_mce_cap_supported;
 #define EMULTYPE_TRAP_UD_FORCED(1 << 4)
 #define EMULTYPE_VMWARE_GP (1 << 5)
 #define EMULTYPE_PF(1 << 6)
+#define EMULTYPE_GPA_AVAILABLE_PF   (1 << 7)
 
 int kvm_emulate_instruction(struct kvm_vcpu *vcpu, int emulation_type);
 int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 23b72532cd18..f79cf58a27dc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5148,6 +5148,13 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t 
cr2_or_gpa, u64 error_code,
 
if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
return RET_PF_RETRY;
+   /*
+* With shadow page tables, fault_address contains a GVA or nGPA.
+* On a fetch fault, fault_address contains the instruction pointer.
+*/
+   if (direct && likely(!(error_code & PFERR_FETCH_MASK)) &&
+   (error_code & PFERR_GUEST_FINAL_MASK))
+   emulation_type |= EMULTYPE_GPA_AVAILABLE_PF;
 
r = RET_PF_INVALID;
if (unlikely(error_code & PFERR_RSVD_MASK)) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a82db6b30aee..d9b1034465c8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7406,7 +7406,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t 
cr2_or_gpa,
ctxt->exception.address = cr2_or_gpa;
 
/* With shadow page tables, cr2 contains a GVA or nGPA. */
-   if (vcpu->arch.mmu->direct_map) {
+   if (emulation_type & EMULTYPE_GPA_AVAILABLE_PF) {
ctxt->gpa_available = true;
ctxt->gpa_val = cr2_or_gpa;
}

[PATCH v10 15/81] KVM: x86: add kvm_x86_ops.msr_write_intercepted()

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This function will be used to check if the write access for a specific
MSR is already intercepted. The information will be used to restore the
interception status when the introspection tool is no longer interested
in that MSR.

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/svm.c  | 1 +
 arch/x86/kvm/vmx/vmx.c  | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0e9144e23ce6..5236008d231f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1112,6 +1112,7 @@ struct kvm_x86_ops {
void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
+   bool (*msr_write_intercepted)(struct kvm_vcpu *vcpu, u32 msr);
u64 (*get_segment_base)(struct kvm_vcpu *vcpu, int seg);
void (*get_segment)(struct kvm_vcpu *vcpu,
struct kvm_segment *var, int seg);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 86f0dcf9fecd..4478942f10a5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4304,6 +4304,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.get_msr_feature = svm_get_msr_feature,
.get_msr = svm_get_msr,
.set_msr = svm_set_msr,
+   .msr_write_intercepted = msr_write_intercepted,
.get_segment_base = svm_get_segment_base,
.get_segment = svm_get_segment,
.set_segment = svm_set_segment,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5bd6a4add27e..d4833d3bf966 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7658,6 +7658,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.get_msr_feature = vmx_get_msr_feature,
.get_msr = vmx_get_msr,
.set_msr = vmx_set_msr,
+   .msr_write_intercepted = msr_write_intercepted,
.get_segment_base = vmx_get_segment_base,
.get_segment = vmx_get_segment,
.set_segment = vmx_set_segment,

[PATCH v10 11/81] KVM: x86: add kvm_x86_ops.desc_ctrl_supported()

2020-11-25 Thread Adalbert Lazăr
When the introspection tool tries to enable the KVMI_VCPU_EVENT_DESCRIPTOR
event, this function is used to check if the control of VM-exits caused
by descriptor-table registers access is supported.

Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm/svm.c  | 6 ++
 arch/x86/kvm/vmx/capabilities.h | 7 ++-
 arch/x86/kvm/vmx/vmx.c  | 1 +
 4 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a402384a9326..1e9cb521324e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1130,6 +1130,7 @@ struct kvm_x86_ops {
void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+   bool (*desc_ctrl_supported)(void);
void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5000ee25545b..f3ee6bad0db5 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1630,6 +1630,11 @@ static void svm_set_gdt(struct kvm_vcpu *vcpu, struct 
desc_ptr *dt)
vmcb_mark_dirty(svm->vmcb, VMCB_DT);
 }
 
+static bool svm_desc_ctrl_supported(void)
+{
+   return true;
+}
+
 static void update_cr0_intercept(struct vcpu_svm *svm)
 {
ulong gcr0 = svm->vcpu.arch.cr0;
@@ -4260,6 +4265,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.set_idt = svm_set_idt,
.get_gdt = svm_get_gdt,
.set_gdt = svm_set_gdt,
+   .desc_ctrl_supported = svm_desc_ctrl_supported,
.set_dr7 = svm_set_dr7,
.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
.cache_reg = svm_cache_reg,
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 3a1861403d73..6695b061bae4 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -142,12 +142,17 @@ static inline bool cpu_has_vmx_ept(void)
SECONDARY_EXEC_ENABLE_EPT;
 }
 
-static inline bool vmx_umip_emulated(void)
+static inline bool vmx_desc_ctrl_supported(void)
 {
return vmcs_config.cpu_based_2nd_exec_ctrl &
SECONDARY_EXEC_DESC;
 }
 
+static inline bool vmx_umip_emulated(void)
+{
+   return vmx_desc_ctrl_supported();
+}
+
 static inline bool cpu_has_vmx_rdtscp(void)
 {
return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7b2a60cd7a76..a5e1f61d2622 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7656,6 +7656,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.set_idt = vmx_set_idt,
.get_gdt = vmx_get_gdt,
.set_gdt = vmx_set_gdt,
+   .desc_ctrl_supported = vmx_desc_ctrl_supported,
.set_dr7 = vmx_set_dr7,
.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
.cache_reg = vmx_cache_reg,

[PATCH v10 02/81] KVM: add kvm_vcpu_kick_and_wait()

2020-11-25 Thread Adalbert Lazăr
This function is needed for the KVMI_VM_PAUSE_VCPU command, which sets
the introspection request flag, kicks the vCPU out of guest and returns
a success error code (0). The vCPU will send the KVMI_VCPU_EVENT_PAUSE
event as soon as possible. Once the introspection tool receives the event,
it knows that the vCPU doesn't run guest code and can handle introspection
commands (until the reply for the pause event is sent).

To implement the "pause VM" command, the introspection tool will send
a KVMI_VM_PAUSE_VCPU command for every vCPU. To know when the VM is
paused, userspace has to receive and "parse" all events. For example,
with a 4 vCPU VM, if "pause VM" was sent by userspace while handling
an event from vCPU0 and at the same time a new vCPU was hot-plugged
(which could send another event for vCPU4), the "pause VM" command has
to receive and check all events until it gets the pause events for vCPU1,
vCPU2 and vCPU3 before returning to the upper layer.

In order to make it easier for userspace to implement the "pause VM"
command, KVMI_VM_PAUSE_VCPU has an optional 'wait' parameter. If this is
set, kvm_vcpu_kick_and_wait() will be used instead of kvm_vcpu_kick().
Once a sequence of KVMI_VM_PAUSE_VCPU commands with the 'wait' flag set
is handled, the introspection tool can consider the VM paused, without
the need to wait and check events.
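A sketch of the resulting userspace pattern (``pause_vcpu()`` is a
hypothetical helper that sends one KVMI_VM_PAUSE_VCPU command):

    /* With the wait flag set, the VM can be considered paused as soon as
     * every command has been acknowledged; no event parsing is needed. */
    for (i = 0; i < nr_vcpus; i++)
            if (pause_vcpu(fd, i, /* wait = */ true))
                    return -1;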

Signed-off-by: Adalbert Lazăr 
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c  | 10 ++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f3b1013fb22c..1bbb07b87d1a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -841,6 +841,7 @@ void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
+void kvm_vcpu_kick_and_wait(struct kvm_vcpu *vcpu);
 int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3abcb2ce5b7d..069668b8afc2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2887,6 +2887,16 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
+void kvm_vcpu_kick_and_wait(struct kvm_vcpu *vcpu)
+{
+   if (kvm_vcpu_wake_up(vcpu))
+   return;
+
+   if (kvm_request_needs_ipi(vcpu, KVM_REQUEST_WAIT))
+   smp_call_function_single(vcpu->cpu, ack_flush, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_kick_and_wait);
+
 int kvm_vcpu_yield_to(struct kvm_vcpu *target)
 {
struct pid *pid;

[PATCH v10 04/81] KVM: doc: fix the hypercalls numbering

2020-11-25 Thread Adalbert Lazăr
The next hypercalls will be correctly numbered.

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/hypercalls.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/hypercalls.rst 
b/Documentation/virt/kvm/hypercalls.rst
index ed4fddd364ea..70e77c66b64c 100644
--- a/Documentation/virt/kvm/hypercalls.rst
+++ b/Documentation/virt/kvm/hypercalls.rst
@@ -137,7 +137,7 @@ compute the CLOCK_REALTIME for its clock, at the same 
instant.
 Returns KVM_EOPNOTSUPP if the host does not use TSC clocksource,
 or if clock type is different than KVM_CLOCK_PAIRING_WALLCLOCK.
 
-6. KVM_HC_SEND_IPI
+7. KVM_HC_SEND_IPI
 --
 
 :Architecture: x86
@@ -158,7 +158,7 @@ corresponds to the APIC ID a2+1, and so on.
 
 Returns the number of CPUs to which the IPIs were delivered successfully.
 
-7. KVM_HC_SCHED_YIELD
+8. KVM_HC_SCHED_YIELD
 -
 
 :Architecture: x86

[PATCH v10 63/81] KVM: introspection: add KVMI_VCPU_INJECT_EXCEPTION + KVMI_VCPU_EVENT_TRAP

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

The KVMI_VCPU_INJECT_EXCEPTION command is used by the introspection tool
to inject exceptions, for example, to get a page from swap.

The exception is injected right before entering the guest unless there is
already an exception pending. The introspection tool is notified with
a KVMI_VCPU_EVENT_TRAP event about the success of the injection. In
case of failure, the introspection tool is expected to try again later.

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  76 +++
 arch/x86/include/asm/kvmi_host.h  |  11 ++
 arch/x86/include/uapi/asm/kvmi.h  |  16 +++
 arch/x86/kvm/kvmi.c   | 110 
 arch/x86/kvm/kvmi.h   |   3 +
 arch/x86/kvm/kvmi_msg.c   |  55 +++-
 arch/x86/kvm/x86.c|   2 +
 include/uapi/linux/kvmi.h |  14 +-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 124 ++
 virt/kvm/introspection/kvmi.c |   2 +
 virt/kvm/introspection/kvmi_int.h |   4 +
 virt/kvm/introspection/kvmi_msg.c |  16 ++-
 12 files changed, 419 insertions(+), 14 deletions(-)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 85e14b82aa2f..e688ac387faf 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -550,6 +550,7 @@ because these are sent as a result of certain commands (but 
they can be
 disallowed by the device manager) ::
 
KVMI_VCPU_EVENT_PAUSE
+   KVMI_VCPU_EVENT_TRAP
 
 The VM events (e.g. *KVMI_VM_EVENT_UNHOOK*) are controlled with
 the *KVMI_VM_CONTROL_EVENTS* command.
@@ -736,6 +737,46 @@ ID set.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+16. KVMI_VCPU_INJECT_EXCEPTION
+--
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_inject_exception {
+   __u8 nr;
+   __u8 padding1;
+   __u16 padding2;
+   __u32 error_code;
+   __u64 address;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code
+
+Injects a vCPU exception (``nr``) with or without an error code 
(``error_code``).
+For page fault exceptions, the guest virtual address (``address``)
+has to be specified too.
+
+The *KVMI_VCPU_EVENT_TRAP* event will be sent with the effective injected
+exception.
+
+:Errors:
+
+* -KVM_EPERM  - the *KVMI_VCPU_EVENT_TRAP* event is disallowed
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY - another *KVMI_VCPU_INJECT_EXCEPTION*-*KVMI_VCPU_EVENT_TRAP*
+   pair is in progress
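For illustration (not part of the patch), a request asking vCPU 0 to take a
page fault for a swapped-out address could be built as below; the
``kvmi_vcpu_hdr`` field name is an assumption:

    struct {
            struct kvmi_msg_hdr hdr;
            struct kvmi_vcpu_hdr vcpu;              /* selects the vCPU */
            struct kvmi_vcpu_inject_exception cmd;
    } req = {};

    req.hdr.id = KVMI_VCPU_INJECT_EXCEPTION;
    req.hdr.size = sizeof(req) - sizeof(req.hdr);
    req.vcpu.vcpu = 0;                      /* assumed field name */
    req.cmd.nr = 14;                        /* #PF */
    req.cmd.error_code = 0;                 /* error code to inject */
    req.cmd.address = gva;                  /* guest virtual address */
    /* send with one sendmsg(), then wait for KVMI_VCPU_EVENT_TRAP */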
+
 Events
 ==
 
@@ -966,3 +1007,38 @@ register (see **KVMI_VCPU_CONTROL_EVENTS**).
 (``cr``), the old value (``old_value``) and the new value (``new_value``)
 are sent to the introspection tool. The *CONTINUE* action will set the
 ``new_val``.
+
+6. KVMI_VCPU_EVENT_TRAP
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_trap {
+   __u8 nr;
+   __u8 padding1;
+   __u16 padding2;
+   __u32 error_code;
+   __u64 address;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent if a previous *KVMI_VCPU_INJECT_EXCEPTION* command
+took place. Because it has a high priority, it will be sent before any
+other vCPU introspection event.
+
+``kvmi_vcpu_event`` (with the vCPU state), exception/interrupt number
+(``nr``), exception code (``error_code``) and ``address`` are sent to
+the introspection tool, which should check if its exception has been
+injected or overridden.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index edbedf031467..97f5b1a01c9e 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -24,6 +24,15 @@ struct kvm_vcpu_arch_introspection {
bool have_delayed_regs;
 
DECLARE_BITMAP(cr_mask, KVMI_NUM_CR);
+
+   struct {
+   u8 nr;
+   u32 error_code;
+   bool error_code_valid;
+   u64 address;
+   bool pending;
+   bool send_event;
+   } exception;
 };
 
 struct kvm_arch_introspection {
@@ -36,6 +45,7 @@ bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
   unsigned long old_value, unsigned long *new_value);
 bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu);
 bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, 

[PATCH v10 81/81] KVM: x86: call the page tracking code on emulation failure

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

The information we can provide this way is incomplete, but current users
of the page tracking code can work with it.

Signed-off-by: Mihai Donțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c | 49 ++
 1 file changed, 49 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc7292ee3b2d..c4de25778942 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7328,6 +7328,51 @@ static bool is_vmware_backdoor_opcode(struct 
x86_emulate_ctxt *ctxt)
return false;
 }
 
+/*
+ * With introspection enabled, emulation failures translate into events being
+ * missed because the read/write callbacks are not invoked. All we have is
+ * the fetch event (kvm_page_track_preexec). Below we use the EPT/NPT VMEXIT
+ * information to generate the events, but without providing accurate
+ * data and size (the emulator would have computed those). If an instruction
+ * would happen to read and write in the same page, the second event will
+ * initially be missed and we rely on the page tracking mechanism to bring
+ * us back here to send it.
+ */
+static bool kvm_page_track_emulation_failure(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+   u64 error_code = vcpu->arch.error_code;
+   u8 data = 0;
+   gva_t gva;
+   bool ret;
+
+   /* MMIO emulation failures should be treated the normal way */
+   if (unlikely(error_code & PFERR_RSVD_MASK))
+   return true;
+
+   /* EPT/NPT must be enabled */
+   if (unlikely(!vcpu->arch.mmu->direct_map))
+   return true;
+
+   /*
+* The A/D bit emulation should make this test unneeded, but just
+* in case
+*/
+   if (unlikely((error_code & PFERR_NESTED_GUEST_PAGE) ==
+PFERR_NESTED_GUEST_PAGE))
+   return true;
+
+   gva = kvm_x86_ops.fault_gla(vcpu);
+
+   if (error_code & PFERR_WRITE_MASK)
+   ret = kvm_page_track_prewrite(vcpu, gpa, gva, &data, 0);
+   else if (error_code & PFERR_USER_MASK)
+   ret = kvm_page_track_preread(vcpu, gpa, gva, 0);
+   else
+   ret = true;
+
+   return ret;
+}
+
 int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len)
 {
@@ -7381,6 +7426,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t 
cr2_or_gpa,
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
+   if (!kvm_page_track_emulation_failure(vcpu, cr2_or_gpa))
+   return 1;
if (reexecute_instruction(vcpu, cr2_or_gpa,
  write_fault_to_spt,
  emulation_type))
@@ -7450,6 +7497,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t 
cr2_or_gpa,
return 1;
 
if (r == EMULATION_FAILED) {
+   if (!kvm_page_track_emulation_failure(vcpu, cr2_or_gpa))
+   return 1;
if (reexecute_instruction(vcpu, cr2_or_gpa, write_fault_to_spt,
emulation_type))
return 1;

[PATCH v10 09/81] KVM: x86: add kvm_x86_ops.control_cr3_intercept()

2020-11-25 Thread Adalbert Lazăr
This function is needed for the KVMI_VCPU_CONTROL_CR command, when the
introspection tool has to intercept the read/write access to CR3.

Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h |  6 ++
 arch/x86/kvm/svm/svm.c  | 14 ++
 arch/x86/kvm/vmx/vmx.c  | 26 --
 3 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e46fee59d4ed..0eeb1d829a1d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -137,6 +137,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t 
base_gfn, int level)
 #define KVM_NR_FIXED_MTRR_REGION 88
 #define KVM_NR_VAR_MTRR 8
 
+#define CR_TYPE_R  1
+#define CR_TYPE_W  2
+#define CR_TYPE_RW 3
+
 #define ASYNC_PF_PER_VCPU 64
 
 enum kvm_reg {
@@ -1118,6 +1122,8 @@ struct kvm_x86_ops {
void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
bool (*is_valid_cr4)(struct kvm_vcpu *vcpu, unsigned long cr0);
void (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
+   void (*control_cr3_intercept)(struct kvm_vcpu *vcpu, int type,
+ bool enable);
int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 95c7072cde8e..4f28fa035048 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1707,6 +1707,19 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long 
cr4)
kvm_update_cpuid_runtime(vcpu);
 }
 
+static void svm_control_cr3_intercept(struct kvm_vcpu *vcpu, int type,
+ bool enable)
+{
+   struct vcpu_svm *svm = to_svm(vcpu);
+
+   if (type & CR_TYPE_R)
+   enable ? svm_set_intercept(svm, INTERCEPT_CR3_READ) :
+svm_clr_intercept(svm, INTERCEPT_CR3_READ);
+   if (type & CR_TYPE_W)
+   enable ? svm_set_intercept(svm, INTERCEPT_CR3_WRITE) :
+svm_clr_intercept(svm, INTERCEPT_CR3_WRITE);
+}
+
 static void svm_set_segment(struct kvm_vcpu *vcpu,
struct kvm_segment *var, int seg)
 {
@@ -4233,6 +4246,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.set_cr0 = svm_set_cr0,
.is_valid_cr4 = svm_is_valid_cr4,
.set_cr4 = svm_set_cr4,
+   .control_cr3_intercept = svm_control_cr3_intercept,
.set_efer = svm_set_efer,
.get_idt = svm_get_idt,
.set_idt = svm_set_idt,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 93a97aa3d847..c5a53642d1c0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2978,24 +2978,37 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu)
kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
 }
 
+static void vmx_control_cr3_intercept(struct kvm_vcpu *vcpu, int type,
+ bool enable)
+{
+   struct vcpu_vmx *vmx = to_vmx(vcpu);
+   u32 cr3_exec_control = 0;
+
+   if (type & CR_TYPE_R)
+   cr3_exec_control |= CPU_BASED_CR3_STORE_EXITING;
+   if (type & CR_TYPE_W)
+   cr3_exec_control |= CPU_BASED_CR3_LOAD_EXITING;
+
+   if (enable)
+   exec_controls_setbit(vmx, cr3_exec_control);
+   else
+   exec_controls_clearbit(vmx, cr3_exec_control);
+}
+
 static void ept_update_paging_mode_cr0(unsigned long *hw_cr0,
unsigned long cr0,
struct kvm_vcpu *vcpu)
 {
-   struct vcpu_vmx *vmx = to_vmx(vcpu);
-
if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
vmx_cache_reg(vcpu, VCPU_EXREG_CR3);
if (!(cr0 & X86_CR0_PG)) {
/* From paging/starting to nonpaging */
-   exec_controls_setbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
- CPU_BASED_CR3_STORE_EXITING);
+   vmx_control_cr3_intercept(vcpu, CR_TYPE_RW, true);
vcpu->arch.cr0 = cr0;
vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
} else if (!is_paging(vcpu)) {
/* From nonpaging to paging */
-   exec_controls_clearbit(vmx, CPU_BASED_CR3_LOAD_EXITING |
-   CPU_BASED_CR3_STORE_EXITING);
+   vmx_control_cr3_intercept(vcpu, CR_TYPE_RW, false);
vcpu->arch.cr0 = cr0;
vmx_set_cr4(vcpu, kvm_read_cr4(vcpu));
}
@@ -7629,6 +7642,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.set_cr0 = vmx_set_cr0,
.is_valid_cr4 = vmx_is_valid_cr4,
.set_cr4 = vmx_set_cr4,
+   .control_cr3_intercept = vmx_control_cr3_intercept,

[PATCH v10 00/81] VM introspection

2020-11-25 Thread Adalbert Lazăr
The KVM introspection subsystem provides a facility for applications
running on the host or in a separate VM, to control the execution of
other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs,
MSRs etc.), alter the page access bits in the shadow page tables (only
for the hardware backed ones, eg. Intel's EPT) and receive notifications
when events of interest have taken place (shadow page table level faults,
key MSR writes, hypercalls etc.). Some notifications can be responded
to with an action (like preventing an MSR from being written), others
are merely informative (like breakpoint events which can be used for
execution tracing).  With few exceptions, all events are optional. An
application using this subsystem will explicitly register for them.

The use case that led to the creation of this subsystem is to
monitor the guest OS and as such the ABI/API is highly influenced by how
the guest software (kernel, applications) sees the world. For example,
some events provide information specific for the host CPU architecture
(eg. MSR_IA32_SYSENTER_EIP) merely because it is leveraged by guest software
to implement a critical feature (fast system calls).

At the moment, the target audience for KVMI are security software authors
that wish to perform forensics on newly discovered threats (exploits)
or to implement another layer of security like preventing a large set
of kernel rootkits simply by "locking" the kernel image in the shadow
page tables (ie. enforce .text r-x, .rodata rw- etc.). It's the latter
case that made KVMI a separate subsystem, even though many of these
features are available in the device manager. The ability to build a
security application that does not interfere (in terms of performance)
with the guest software asks for a specialized interface that is designed
for minimum overhead.

This patch series is based on kvm/next,
commit dc924b062488 ("KVM: SVM: check CR4 changes against vcpu->arch").

The previous version (v9) can be read here:

  https://lore.kernel.org/kvm/20200721210922.7646-1-ala...@bitdefender.com/

Patches 1-31: make preparatory changes

Patches 32-79: add basic introspection capabilities

Patch 80: support introspection tools that write-protect guest page tables

Patch 81: notify the introspection tool even on emulation failures
  (when the read/write callbacks used by the emulator,
   kvm_page_preread/kvm_page_prewrite, are not invoked)

Changes since v9:
  - rebase on 5.10 from 5.8
  - complete the split of x86 and arch-independent code
  - split the VM and vCPU events
  - clean up the interface headers (VM vs vCPU messages/events)
  - clean up the tests
  - add a new exit code (for the CRASH action) instead of killing
the vCPU threads [Christoph]
  - other small changes (code refactoring, message validation, etc.).

Adalbert Lazăr (24):
  KVM: UAPI: add error codes used by the VM introspection code
  KVM: add kvm_vcpu_kick_and_wait()
  KVM: doc: fix the hypercalls numbering
  KVM: x86: add kvm_x86_ops.control_cr3_intercept()
  KVM: x86: add kvm_x86_ops.desc_ctrl_supported()
  KVM: x86: add kvm_x86_ops.control_desc_intercept()
  KVM: x86: export kvm_vcpu_ioctl_x86_set_xsave()
  KVM: introspection: add hook/unhook ioctls
  KVM: introspection: add permission access ioctls
  KVM: introspection: add the read/dispatch message function
  KVM: introspection: add KVMI_GET_VERSION
  KVM: introspection: add KVMI_VM_CHECK_COMMAND and KVMI_VM_CHECK_EVENT
  KVM: introspection: add KVM_INTROSPECTION_PREUNHOOK
  KVM: introspection: add KVMI_VM_EVENT_UNHOOK
  KVM: introspection: add KVMI_VM_CONTROL_EVENTS
  KVM: introspection: add a jobs list to every introspected vCPU
  KVM: introspection: add KVMI_VM_PAUSE_VCPU
  KVM: introspection: add support for vCPU events
  KVM: introspection: add KVMI_VCPU_EVENT_PAUSE
  KVM: introspection: add KVMI_VM_CONTROL_CLEANUP
  KVM: introspection: add KVMI_VCPU_GET_XCR
  KVM: introspection: add KVMI_VCPU_SET_XSAVE
  KVM: introspection: extend KVMI_GET_VERSION with struct kvmi_features
  KVM: introspection: add KVMI_VCPU_TRANSLATE_GVA

Marian Rotariu (1):
  KVM: introspection: add KVMI_VCPU_GET_CPUID

Mihai Donțu (33):
  KVM: x86: add kvm_arch_vcpu_get_regs() and kvm_arch_vcpu_get_sregs()
  KVM: x86: avoid injecting #PF when emulate the VMCALL instruction
  KVM: x86: add kvm_x86_ops.control_msr_intercept()
  KVM: x86: vmx: use a symbolic constant when checking the exit
qualifications
  KVM: x86: save the error code during EPT/NPF exits handling
  KVM: x86: add kvm_x86_ops.fault_gla()
  KVM: x86: extend kvm_mmu_gva_to_gpa_system() with the 'access'
parameter
  KVM: x86: page track: provide all callbacks with the guest virtual
address
  KVM: x86: page track: add track_create_slot() callback
  KVM: x86: page_track: add support for preread, prewrite and preexec
  KVM: x86: wire in the preread/prewrite/preexec page trackers
  KVM: introduce VM introspection
  KVM: introspection: add KVMI_VM_GET_INFO
  KVM: introsp

[PATCH v10 45/81] KVM: introspection: handle vCPU introspection requests

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

The receiving thread dispatches the vCPU introspection commands by
adding them to the vCPU's jobs list and kicking the vCPU. Before
entering the guest, the vCPU thread checks the introspection request
(KVM_REQ_INTROSPECTION) and runs its queued jobs.

Signed-off-by: Mihai Donțu 
Co-developed-by: Mircea Cîrjaliu 
Signed-off-by: Mircea Cîrjaliu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c|  3 ++
 include/linux/kvm_host.h  |  1 +
 include/linux/kvmi_host.h |  4 ++
 virt/kvm/introspection/kvmi.c | 73 +++
 virt/kvm/kvm_main.c   |  2 +
 5 files changed, 83 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fcf7e68cb6c8..4e91d6794b5e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9147,6 +9147,9 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
vcpu->arch.l1tf_flush_l1d = true;
 
for (;;) {
+   if (kvm_check_request(KVM_REQ_INTROSPECTION, vcpu))
+   kvmi_handle_requests(vcpu);
+
if (kvm_vcpu_running(vcpu)) {
r = vcpu_enter_guest(vcpu);
} else {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 60347c3a0e95..4d02f682782a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -147,6 +147,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_MMU_RELOAD(1 | KVM_REQUEST_WAIT | 
KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_PENDING_TIMER 2
 #define KVM_REQ_UNHALT3
+#define KVM_REQ_INTROSPECTION 4
 #define KVM_REQUEST_ARCH_BASE 8
 
 #define KVM_ARCH_REQ_FLAGS(nr, flags) ({ \
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index b3874419511d..736edb400c05 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -53,6 +53,8 @@ int kvmi_ioctl_event(struct kvm *kvm,
 const struct kvm_introspection_feature *feat);
 int kvmi_ioctl_preunhook(struct kvm *kvm);
 
+void kvmi_handle_requests(struct kvm_vcpu *vcpu);
+
 #else
 
 static inline int kvmi_version(void) { return 0; }
@@ -62,6 +64,8 @@ static inline void kvmi_create_vm(struct kvm *kvm) { }
 static inline void kvmi_destroy_vm(struct kvm *kvm) { }
 static inline void kvmi_vcpu_uninit(struct kvm_vcpu *vcpu) { }
 
+static inline void kvmi_handle_requests(struct kvm_vcpu *vcpu) { }
+
 #endif /* CONFIG_KVM_INTROSPECTION */
 
 #endif
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index cdb4175aecff..b608751780fc 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -124,6 +124,12 @@ void kvmi_uninit(void)
kvmi_cache_destroy();
 }
 
+static void kvmi_make_request(struct kvm_vcpu *vcpu)
+{
+   kvm_make_request(KVM_REQ_INTROSPECTION, vcpu);
+   kvm_vcpu_kick(vcpu);
+}
+
 static int __kvmi_add_job(struct kvm_vcpu *vcpu,
  void (*fct)(struct kvm_vcpu *vcpu, void *ctx),
  void *ctx, void (*free_fct)(void *ctx))
@@ -155,6 +161,9 @@ int kvmi_add_job(struct kvm_vcpu *vcpu,
 
err = __kvmi_add_job(vcpu, fct, ctx, free_fct);
 
+   if (!err)
+   kvmi_make_request(vcpu);
+
return err;
 }
 
@@ -323,6 +332,14 @@ int kvmi_ioctl_unhook(struct kvm *kvm)
return 0;
 }
 
+struct kvm_introspection * __must_check kvmi_get(struct kvm *kvm)
+{
+   if (refcount_inc_not_zero(&kvm->kvmi_ref))
+   return kvm->kvmi;
+
+   return NULL;
+}
+
 void kvmi_put(struct kvm *kvm)
 {
if (refcount_dec_and_test(&kvm->kvmi_ref))
@@ -340,6 +357,19 @@ static int __kvmi_hook(struct kvm *kvm,
return 0;
 }
 
+static void kvmi_job_release_vcpu(struct kvm_vcpu *vcpu, void *ctx)
+{
+}
+
+static void kvmi_release_vcpus(struct kvm *kvm)
+{
+   struct kvm_vcpu *vcpu;
+   int i;
+
+   kvm_for_each_vcpu(i, vcpu, kvm)
+   kvmi_add_job(vcpu, kvmi_job_release_vcpu, NULL, NULL);
+}
+
 static int kvmi_recv_thread(void *arg)
 {
struct kvm_introspection *kvmi = arg;
@@ -350,6 +380,8 @@ static int kvmi_recv_thread(void *arg)
/* Signal userspace and prevent the vCPUs from sending events. */
kvmi_sock_shutdown(kvmi);
 
+   kvmi_release_vcpus(kvmi->kvm);
+
kvmi_put(kvmi->kvm);
return 0;
 }
@@ -381,6 +413,10 @@ int kvmi_hook(struct kvm *kvm, const struct 
kvm_introspection_hook *hook)
init_completion(&kvm->kvmi_complete);
 
refcount_set(&kvm->kvmi_ref, 1);
+   /*
+* Paired with refcount_inc_not_zero() from kvmi_get().
+*/
+   smp_wmb();
 
kvmi->recv = kthread_run(kvmi_recv_thread, kvmi, "kvmi-recv");
if (IS_ERR(kvmi->recv)) {
@@ -669,3 +705,40 @@ int kvmi_cmd_write_physical(struct kvm *kvm, u64 gpa, 
size_t size,
 
return ec;
 }
+
+static struct kvmi_job *kvmi_pull_job(struct kvm_vcpu_introspection *vcpui)
+{
+   struct kvmi_job 

[PATCH v10 70/81] KVM: introspection: add KVMI_VCPU_EVENT_DESCRIPTOR

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This event is sent when IDTR, GDTR, LDTR or TR are accessed.

These could be used to implement a tiny agent which runs in the context
of an introspected guest and uses virtualized exceptions (#VE) and
alternate EPT views (VMFUNC #0) to filter converted VMEXITs. The events
of interest will be suppressed (after some appropriate guest-side
handling) while the rest will be sent to the introspector via a VMCALL.

Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 43 +++
 arch/x86/include/asm/kvmi_host.h  |  3 +
 arch/x86/include/uapi/asm/kvmi.h  | 13 
 arch/x86/kvm/kvmi.c   | 58 ++
 arch/x86/kvm/kvmi.h   |  1 +
 arch/x86/kvm/kvmi_msg.c   | 19 +
 arch/x86/kvm/svm/svm.c| 33 
 arch/x86/kvm/vmx/vmx.c| 23 ++
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 75 +++
 10 files changed, 269 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 58b50464b5f6..649a679a485b 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -540,6 +540,7 @@ the following events::
 
KVMI_VCPU_EVENT_BREAKPOINT
KVMI_VCPU_EVENT_CR
+   KVMI_VCPU_EVENT_DESCRIPTOR
KVMI_VCPU_EVENT_HYPERCALL
KVMI_VCPU_EVENT_XSETBV
 
@@ -563,6 +564,8 @@ the *KVMI_VM_CONTROL_EVENTS* command.
 * -KVM_EINVAL - the event ID is unknown (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EPERM - the access is disallowed (use *KVMI_VM_CHECK_EVENT* first)
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EOPNOTSUPP - the event can't be intercepted in the current setup
+(e.g. KVMI_VCPU_EVENT_DESCRIPTOR with AMD)
 * -KVM_EBUSY - the event can't be intercepted right now
(e.g. KVMI_VCPU_EVENT_BREAKPOINT if the #BP event
 is already intercepted by userspace)
@@ -1217,3 +1220,43 @@ to be changed and the introspection has been enabled for 
this event
 ``kvmi_vcpu_event`` (with the vCPU state), the extended control register
 number (``xcr``), the old value (``old_value``) and the new value
 (``new_value``) are sent to the introspection tool.
+
+8. KVMI_VCPU_EVENT_DESCRIPTOR
+-
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, RETRY, CRASH
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_descriptor {
+   __u8 descriptor;
+   __u8 write;
+   __u8 padding[6];
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent when a descriptor table register is accessed and the
+introspection has been enabled for this event (see 
**KVMI_VCPU_CONTROL_EVENTS**).
+
+``kvmi_vcpu_event`` (with the vCPU state), the descriptor-table register
+(``descriptor``) and the access type (``write``) are sent to the
+introspection tool.
+
+``descriptor`` can be one of::
+
+   KVMI_DESC_IDTR
+   KVMI_DESC_GDTR
+   KVMI_DESC_LDTR
+   KVMI_DESC_TR
+
+``write`` is 1 if the descriptor was written, 0 otherwise.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index d66349208a6b..a24ba87036f7 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -48,6 +48,7 @@ bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool 
enable);
 void kvmi_enter_guest(struct kvm_vcpu *vcpu);
 void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
   u64 old_value, u64 new_value);
+bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, bool write);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -63,6 +64,8 @@ static inline bool kvmi_monitor_cr3w_intercept(struct 
kvm_vcpu *vcpu,
 static inline void kvmi_enter_guest(struct kvm_vcpu *vcpu) { }
 static inline void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
u64 old_value, u64 new_value) { }
+static inline bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor,
+bool write) { return true; }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 998878215078..db01c56a95ff 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -128,4 +128,17 @@ struct kvmi_vcpu_get_mtrr_type_reply {
__u8 padding[7];
 };
 
+enum {
+   KVMI_DESC_IDTR = 1,
+   KVMI_DESC_GDTR = 2,
+   KVMI_DESC_LDTR = 3,
+   KVMI_DESC_TR   = 4,
+};
+
+struct kvmi_vcpu_event_descriptor {
+   __u8 descriptor;
+   __u8 write;
+   __u8 padding[6];
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H *

[PATCH v10 33/81] KVM: introspection: add hook/unhook ioctls

2020-11-25 Thread Adalbert Lazăr
On hook, a new thread is created to handle the messages coming from the
introspection tool (commands or event replies). The VM related commands
are handled by this thread, while the vCPU commands and events replies
are dispatched to the vCPU threads.

On unhook, the socket is shut down, which signals the receiving
thread to quit (because it might be blocked in recvmsg()) and the
introspection tool to clean up.

The mutex is used to protect the 'kvm->kvmi' pointer when accessed
through ioctls.

The reference counter is incremented by the receiving thread (for
its entire lifetime) and by the vCPU threads while sending events or
handling commands.

The completion object is signaled when the reference counter reaches
zero, allowing the unhook process to continue and free the introspection
structures.
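
The lifetime pattern boils down to something like the sketch below
(a simplification, not the exact code from this patch):

/* Sketch: the last kvmi_put() wakes up the unhook path. */
void kvmi_put(struct kvm *kvm)
{
    if (refcount_dec_and_test(&kvm->kvmi_ref))
        complete(&kvm->kvmi_complete);
}

static void kvmi_wait_for_users(struct kvm *kvm)
{
    kvmi_put(kvm);    /* drop the reference taken at hook time */
    wait_for_completion(&kvm->kvmi_complete);
    /* no receiving thread or vCPU uses kvm->kvmi past this point */
}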

Co-developed-by: Mircea Cîrjaliu 
Signed-off-by: Mircea Cîrjaliu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/api.rst|  63 +++
 arch/x86/include/asm/kvmi_host.h  |   8 +
 arch/x86/kvm/Makefile |   2 +-
 arch/x86/kvm/x86.c|   5 +
 include/linux/kvm_host.h  |   5 +
 include/linux/kvmi_host.h |  18 ++
 include/uapi/linux/kvm.h  |  10 ++
 include/uapi/linux/kvmi.h |  13 ++
 tools/testing/selftests/kvm/Makefile  |   1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  |  87 ++
 virt/kvm/introspection/kvmi.c | 159 ++
 virt/kvm/introspection/kvmi_int.h |  10 ++
 virt/kvm/introspection/kvmi_msg.c |  39 +
 virt/kvm/kvm_main.c   |  21 +++
 14 files changed, 440 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/asm/kvmi_host.h
 create mode 100644 include/uapi/linux/kvmi.h
 create mode 100644 tools/testing/selftests/kvm/x86_64/kvmi_test.c
 create mode 100644 virt/kvm/introspection/kvmi_msg.c

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 70254eaa5229..9b48be90ae7b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4825,6 +4825,58 @@ into user space.
 If a vCPU is in running state while this ioctl is invoked, the vCPU may
 experience inconsistent filtering behavior on MSR accesses.
 
+4.127 KVM_INTROSPECTION_HOOK
+
+
+:Capability: KVM_CAP_INTROSPECTION
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: struct kvm_introspection (in)
+:Returns: 0 on success, a negative value on error
+
+Errors:
+
+  == ==
+  ENOMEM the memory allocation failed
+  EEXIST the VM is already introspected
+  EINVAL the file descriptor doesn't correspond to an active socket
+  EINVAL the padding is not zero
+  EPERM  the introspection is disabled (kvm.introspection=0)
+  == ==
+
+This ioctl is used to enable the introspection of the current VM.
+
+::
+
+  struct kvm_introspection {
+   __s32 fd;
+   __u32 padding;
+   __u8 uuid[16];
+  };
+
+fd is the file descriptor of a socket connected to the introspection tool,
+
+padding must be zero (it might be used in the future),
+
+uuid is used for debug and error messages.
+
+4.128 KVM_INTROSPECTION_UNHOOK
+--
+
+:Capability: KVM_CAP_INTROSPECTION
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: none
+:Returns: 0 on success, a negative value on error
+
+Errors:
+
+  == ==
+  EPERM  the introspection is disabled (kvm.introspection=0)
+  == ==
+
+This ioctl is used to free all introspection structures
+related to this VM.
 
 5. The kvm_run structure
 
@@ -6496,3 +6548,14 @@ KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG.  After 
enabling
 KVM_CAP_DIRTY_LOG_RING with an acceptable dirty ring size, the virtual
 machine will switch to ring-buffer dirty page tracking and further
 KVM_GET_DIRTY_LOG or KVM_CLEAR_DIRTY_LOG ioctls will fail.
+
+8.30 KVM_CAP_INTROSPECTION
+--
+
+:Architectures: x86
+
+This capability indicates that KVM supports VM introspection
+and it is enabled.
+
+The KVM_CHECK_EXTENSION ioctl returns the introspection API version
+(a number larger than 0).
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
new file mode 100644
index ..38c398262913
--- /dev/null
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_KVMI_HOST_H
+#define _ASM_X86_KVMI_HOST_H
+
+struct kvm_arch_introspection {
+};
+
+#endif /* _ASM_X86_KVMI_HOST_H */
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
inde

[PATCH v10 26/81] KVM: x86: export kvm_vcpu_ioctl_x86_set_xsave()

2020-11-25 Thread Adalbert Lazăr
This function is needed for the KVMI_VCPU_SET_XSAVE command.

Signed-off-by: Adalbert Lazăr 
---
 arch/x86/kvm/x86.c   | 4 ++--
 include/linux/kvm_host.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4fadd1ab20ae..f48603c8e44d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4514,8 +4514,8 @@ void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 
 #define XSAVE_MXCSR_OFFSET 24
 
-static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
-   struct kvm_xsave *guest_xsave)
+int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
+struct kvm_xsave *guest_xsave)
 {
u64 xstate_bv =
*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6eec75f77d7e..db04dab23013 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -923,6 +923,8 @@ int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
  struct kvm_guest_debug *dbg);
 void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
  struct kvm_xsave *guest_xsave);
+int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
+struct kvm_xsave *guest_xsave);
 
 int kvm_arch_init(void *opaque);
 void kvm_arch_exit(void);

[PATCH v10 58/81] KVM: introspection: add cleanup support for vCPUs

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

On unhook the introspection channel is closed. This will signal the
receiving thread to call kvmi_put() and exit. There might be vCPU threads
handling introspection commands or waiting for event replies. These
will also call kvmi_put() and re-enter the guest. Once the reference
counter reaches zero, the structures keeping the introspection data
(kvm_introspection and kvm_vcpu_introspection) will be freed.

In order to restore the interception of CRs, MSRs, BP and descriptor-table
registers from all vCPUs (some of which might run from userspace),
we keep the needed information in another structure (kvmi_interception)
which will be used and freed by each of them before re-entering the guest.
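
Roughly, before going back into the guest, each vCPU does something along
these lines (a sketch only; the exact call site is an assumption, the two
helpers are the ones added by this patch):

/* Sketch: run from the vCPU thread once the unhook has been requested. */
static void kvmi_cleanup_if_requested(struct kvm_vcpu *vcpu)
{
    /* restores CR/MSR/BP/descriptor interception if requested ... */
    if (kvmi_arch_clean_up_interception(vcpu))
        /* ... then drops the per-vCPU kvmi_interception structure */
        kvmi_arch_vcpu_free_interception(vcpu);
}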

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvm_host.h   |  3 ++
 arch/x86/include/asm/kvmi_host.h  |  4 +++
 arch/x86/kvm/kvmi.c   | 49 +++
 virt/kvm/introspection/kvmi.c | 32 ++--
 virt/kvm/introspection/kvmi_int.h |  5 
 5 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7d1e865193a9..d4e2fe493419 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -816,6 +816,9 @@ struct kvm_vcpu_arch {
 
/* #PF translated error code from EPT/NPT exit reason */
u64 error_code;
+
+   /* Control the interception of MSRs/CRs/BP... */
+   struct kvmi_interception *kvmi;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index cc945151cb36..b776be4bb49f 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -4,6 +4,10 @@
 
 #include 
 
+struct kvmi_interception {
+   bool restore_interception;
+};
+
 struct kvm_vcpu_arch_introspection {
struct kvm_regs delayed_regs;
bool have_delayed_regs;
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index 0bb6f38f1213..b4a7d581f68c 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -210,3 +210,52 @@ void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 
gva, u8 insn_len)
kvmi_handle_common_event_actions(vcpu, action);
}
 }
+
+static void kvmi_arch_restore_interception(struct kvm_vcpu *vcpu)
+{
+}
+
+bool kvmi_arch_clean_up_interception(struct kvm_vcpu *vcpu)
+{
+   struct kvmi_interception *arch_vcpui = vcpu->arch.kvmi;
+
+   if (!arch_vcpui)
+   return false;
+
+   if (!arch_vcpui->restore_interception)
+   return false;
+
+   kvmi_arch_restore_interception(vcpu);
+
+   return true;
+}
+
+bool kvmi_arch_vcpu_alloc_interception(struct kvm_vcpu *vcpu)
+{
+   struct kvmi_interception *arch_vcpui;
+
+   arch_vcpui = kzalloc(sizeof(*arch_vcpui), GFP_KERNEL);
+   if (!arch_vcpui)
+   return false;
+
+   return true;
+}
+
+void kvmi_arch_vcpu_free_interception(struct kvm_vcpu *vcpu)
+{
+   kfree(vcpu->arch.kvmi);
+   WRITE_ONCE(vcpu->arch.kvmi, NULL);
+}
+
+bool kvmi_arch_vcpu_introspected(struct kvm_vcpu *vcpu)
+{
+   return !!READ_ONCE(vcpu->arch.kvmi);
+}
+
+void kvmi_arch_request_interception_cleanup(struct kvm_vcpu *vcpu)
+{
+   struct kvmi_interception *arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
+
+   if (arch_vcpui)
+   arch_vcpui->restore_interception = true;
+}
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index 476af6dd8bf1..a0cd98839944 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -206,7 +206,7 @@ static bool kvmi_alloc_vcpui(struct kvm_vcpu *vcpu)
 
vcpu->kvmi = vcpui;
 
-   return true;
+   return kvmi_arch_vcpu_alloc_interception(vcpu);
 }
 
 static int kvmi_create_vcpui(struct kvm_vcpu *vcpu)
@@ -240,6 +240,9 @@ static void kvmi_free_vcpui(struct kvm_vcpu *vcpu)
 
kfree(vcpui);
vcpu->kvmi = NULL;
+
+   kvmi_arch_request_interception_cleanup(vcpu);
+   kvmi_make_request(vcpu, false);
 }
 
 static void kvmi_free(struct kvm *kvm)
@@ -262,6 +265,7 @@ void kvmi_vcpu_uninit(struct kvm_vcpu *vcpu)
 {
mutex_lock(&vcpu->kvm->kvmi_lock);
kvmi_free_vcpui(vcpu);
+   kvmi_arch_vcpu_free_interception(vcpu);
mutex_unlock(&vcpu->kvm->kvmi_lock);
 }
 
@@ -410,6 +414,21 @@ static int kvmi_recv_thread(void *arg)
return 0;
 }
 
+static bool ready_to_hook(struct kvm *kvm)
+{
+   struct kvm_vcpu *vcpu;
+   int i;
+
+   if (kvm->kvmi)
+   return false;
+
+   kvm_for_each_vcpu(i, vcpu, kvm)
+   if (kvmi_arch_vcpu_introspected(vcpu))
+   return false;
+
+   return true;
+}
+
 int kvmi_hook(struct kvm *kvm, const struct kvm_introspection_hook *hook)
 {
struct kvm_introspection *kvmi;
@@ -417,7 +436,7 @@ int kvmi_hook(struct kvm *kvm, const struct 
kvm_introspection_hook *hook)
 
m

[PATCH v10 73/81] KVM: introspection: restore the state of MSR interception on unhook

2020-11-25 Thread Adalbert Lazăr
From: Nicușor Cîțu 

This commit also ensures that the introspection tool and userspace
do not disable the MSR access VM-exit for each other.
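
In other words, the vendor code is expected to ask the introspection side
before dropping a write intercept, along the lines of this sketch (the
call site below is illustrative, not the exact svm.c/vmx.c hunk):

/* Sketch: keep the intercept if kvmi still tracks this MSR. */
static void example_set_msr_write_intercept(struct kvm_vcpu *vcpu,
                                            u32 msr, bool enable)
{
    /*
     * Returns true when the other side (kvm or kvmi) would be broken
     * by this change, in which case the intercept is left untouched.
     */
    if (kvmi_monitor_msrw_intercept(vcpu, msr, enable))
        return;

    /* ... program the real MSR permission bitmap here ... */
}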

Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 arch/x86/include/asm/kvmi_host.h |  12 +++
 arch/x86/kvm/kvmi.c  | 124 +++
 arch/x86/kvm/svm/svm.c   |  10 +++
 arch/x86/kvm/vmx/vmx.c   |  11 +++
 4 files changed, 142 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 5a4fc5b80907..8822f0310156 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -26,6 +26,12 @@ struct kvmi_interception {
DECLARE_BITMAP(low, KVMI_NUM_MSR);
DECLARE_BITMAP(high, KVMI_NUM_MSR);
} kvmi_mask;
+   struct {
+   DECLARE_BITMAP(low, KVMI_NUM_MSR);
+   DECLARE_BITMAP(high, KVMI_NUM_MSR);
+   } kvm_mask;
+   bool (*monitor_fct)(struct kvm_vcpu *vcpu, u32 msr,
+   bool enable);
} msrw;
 };
 
@@ -61,6 +67,8 @@ void kvmi_xsetbv_event(struct kvm_vcpu *vcpu, u8 xcr,
 bool kvmi_monitor_desc_intercept(struct kvm_vcpu *vcpu, bool enable);
 bool kvmi_descriptor_event(struct kvm_vcpu *vcpu, u8 descriptor, bool write);
 bool kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr);
+bool kvmi_monitor_msrw_intercept(struct kvm_vcpu *vcpu, u32 msr, bool enable);
+bool kvmi_msrw_intercept_originator(struct kvm_vcpu *vcpu);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -82,6 +90,10 @@ static inline bool kvmi_descriptor_event(struct kvm_vcpu 
*vcpu, u8 descriptor,
 bool write) { return true; }
 static inline bool kvmi_msr_event(struct kvm_vcpu *vcpu, struct msr_data *msr)
{ return true; }
+static inline bool kvmi_monitor_msrw_intercept(struct kvm_vcpu *vcpu, u32 msr,
+  bool enable) { return false; }
+static inline bool kvmi_msrw_intercept_originator(struct kvm_vcpu *vcpu)
+   { return false; }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index ce29e01ba7a6..e325dad88dbb 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -345,22 +345,25 @@ static void kvmi_arch_disable_desc_intercept(struct 
kvm_vcpu *vcpu)
vcpu->arch.kvmi->descriptor.kvm_intercepted = false;
 }
 
-static unsigned long *msr_mask(struct kvm_vcpu *vcpu, unsigned int *msr)
+static unsigned long *msr_mask(struct kvm_vcpu *vcpu, unsigned int *msr,
+  bool kvmi)
 {
switch (*msr) {
case 0 ... 0x1fff:
-   return vcpu->arch.kvmi->msrw.kvmi_mask.low;
+   return kvmi ? vcpu->arch.kvmi->msrw.kvmi_mask.low :
+ vcpu->arch.kvmi->msrw.kvm_mask.low;
case 0xc000 ... 0xc0001fff:
*msr &= 0x1fff;
-   return vcpu->arch.kvmi->msrw.kvmi_mask.high;
+   return kvmi ? vcpu->arch.kvmi->msrw.kvmi_mask.high :
+ vcpu->arch.kvmi->msrw.kvm_mask.high;
}
 
return NULL;
 }
 
-static bool test_msr_mask(struct kvm_vcpu *vcpu, unsigned int msr)
+static bool test_msr_mask(struct kvm_vcpu *vcpu, unsigned int msr, bool kvmi)
 {
-   unsigned long *mask = msr_mask(vcpu, &msr);
+   unsigned long *mask = msr_mask(vcpu, &msr, kvmi);
 
if (!mask)
return false;
@@ -368,9 +371,27 @@ static bool test_msr_mask(struct kvm_vcpu *vcpu, unsigned 
int msr)
return !!test_bit(msr, mask);
 }
 
-static bool msr_control(struct kvm_vcpu *vcpu, unsigned int msr, bool enable)
+/*
+ * Returns true if one side (kvm or kvmi) tries to disable the MSR write
+ * interception while the other side is still tracking it.
+ */
+bool kvmi_monitor_msrw_intercept(struct kvm_vcpu *vcpu, u32 msr, bool enable)
 {
-   unsigned long *mask = msr_mask(vcpu, &msr);
+   struct kvmi_interception *arch_vcpui;
+
+   if (!vcpu)
+   return false;
+
+   arch_vcpui = READ_ONCE(vcpu->arch.kvmi);
+
+   return (arch_vcpui && arch_vcpui->msrw.monitor_fct(vcpu, msr, enable));
+}
+EXPORT_SYMBOL(kvmi_monitor_msrw_intercept);
+
+static bool msr_control(struct kvm_vcpu *vcpu, unsigned int msr, bool enable,
+   bool kvmi)
+{
+   unsigned long *mask = msr_mask(vcpu, &msr, kvmi);
 
if (!mask)
return false;
@@ -383,6 +404,63 @@ static bool msr_control(struct kvm_vcpu *vcpu, unsigned 
int msr, bool enable)
return true;
 }
 
+static bool msr_intercepted_by_kvmi(struct kvm_vcpu *vcpu, u32 msr)
+{
+   return test_msr_mask(vcpu, msr, true);
+}
+
+static bool msr_intercepted_by_kvm(struct kvm_vcpu *vcpu, u32 msr)
+{
+   return test_msr_mask(vcpu, msr, false);
+}
+
+stati

[PATCH v10 69/81] KVM: introspection: add KVMI_VCPU_GET_MTRR_TYPE

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This command returns the memory type for a guest physical address.

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 32 +++
 arch/x86/include/uapi/asm/kvmi.h  |  9 ++
 arch/x86/kvm/kvmi_msg.c   | 17 ++
 include/uapi/linux/kvmi.h |  1 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 18 +++
 5 files changed, 77 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 56efeeb38980..58b50464b5f6 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -887,6 +887,38 @@ Modifies the XSAVE area.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+21. KVMI_VCPU_GET_MTRR_TYPE
+---
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_get_mtrr_type {
+   __u64 gpa;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_error_code;
+   struct kvmi_vcpu_get_mtrr_type_reply {
+   __u8 type;
+   __u8 padding[7];
+   };
+
+Returns the guest memory type for a specific guest physical address (``gpa``).
+
+:Errors:
+
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+
 Events
 ==
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 6ec290b69b46..998878215078 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -119,4 +119,13 @@ struct kvmi_vcpu_set_xsave {
struct kvm_xsave xsave;
 };
 
+struct kvmi_vcpu_get_mtrr_type {
+   __u64 gpa;
+};
+
+struct kvmi_vcpu_get_mtrr_type_reply {
+   __u8 type;
+   __u8 padding[7];
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index c1b3bd56a42c..fc4ee6acce4a 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -233,10 +233,27 @@ static int handle_vcpu_set_xsave(const struct 
kvmi_vcpu_msg_job *job,
return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+static int handle_vcpu_get_mtrr_type(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *_req)
+{
+   const struct kvmi_vcpu_get_mtrr_type *req = _req;
+   struct kvmi_vcpu_get_mtrr_type_reply rpl;
+   gfn_t gfn;
+
+   gfn = gpa_to_gfn(req->gpa);
+
+   memset(&rpl, 0, sizeof(rpl));
+   rpl.type = kvm_mtrr_get_guest_memory_type(job->vcpu, gfn);
+
+   return kvmi_msg_vcpu_reply(job, msg, 0, &rpl, sizeof(rpl));
+}
+
 static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
[KVMI_VCPU_CONTROL_CR]   = handle_vcpu_control_cr,
[KVMI_VCPU_GET_CPUID]= handle_vcpu_get_cpuid,
[KVMI_VCPU_GET_INFO] = handle_vcpu_get_info,
+   [KVMI_VCPU_GET_MTRR_TYPE]= handle_vcpu_get_mtrr_type,
[KVMI_VCPU_GET_REGISTERS]= handle_vcpu_get_registers,
[KVMI_VCPU_GET_XCR]  = handle_vcpu_get_xcr,
[KVMI_VCPU_GET_XSAVE]= handle_vcpu_get_xsave,
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 3baf5c7842bb..8d7c6027f12c 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -47,6 +47,7 @@ enum {
KVMI_VCPU_GET_XCR  = KVMI_VCPU_MESSAGE_ID(8),
KVMI_VCPU_GET_XSAVE= KVMI_VCPU_MESSAGE_ID(9),
KVMI_VCPU_SET_XSAVE= KVMI_VCPU_MESSAGE_ID(10),
+   KVMI_VCPU_GET_MTRR_TYPE= KVMI_VCPU_MESSAGE_ID(11),
 
KVMI_NEXT_VCPU_MESSAGE
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 45c1f3132a3c..b0906c7fb954 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1488,6 +1488,23 @@ static void test_cmd_vcpu_xsave(struct kvm_vm *vm)
cmd_vcpu_set_xsave(vm, &xsave);
 }
 
+static void test_cmd_vcpu_get_mtrr_type(struct kvm_vm *vm)
+{
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   struct kvmi_vcpu_get_mtrr_type cmd;
+   } req = {};
+   struct kvmi_vcpu_get_mtrr_type_reply rpl;
+
+   req.cmd.gpa = test_gpa;
+
+   test_vcpu0_command(vm, KVMI_VCPU_GET_MTRR_TYPE,
+  &req.hdr, sizeof(req), &rpl, sizeof(rpl), 0);
+
+   pr_debug("mtrr_type: gpa 0x%lx type 0x%x\n", test_gpa, rpl.type);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
srandom(time(0));
@@ -1517,6 +1534,7 @@ static void test_introspection(struct kvm_vm *vm)
test_event_xsetbv(vm);
test_cmd_vcpu_g

[PATCH v10 64/81] KVM: introspection: add KVMI_VM_GET_MAX_GFN

2020-11-25 Thread Adalbert Lazăr
From: Ștefan Șicleru 

The introspection tool will use this command to get the memory address
range for which it can set access restrictions.
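
kvm_get_max_gfn() itself is outside the hunks below; as a sketch of what
the description implies, it walks all memslots and takes the highest end
GFN (treat the locking and iteration details as assumptions):

/* Sketch only: the real helper is not part of the hunks shown here. */
u64 kvm_get_max_gfn(struct kvm *kvm)
{
    struct kvm_memory_slot *slot;
    u64 max_gfn = 0;
    int idx;

    idx = srcu_read_lock(&kvm->srcu);
    kvm_for_each_memslot(slot, kvm_memslots(kvm))
        max_gfn = max(max_gfn, (u64)slot->base_gfn + slot->npages);
    srcu_read_unlock(&kvm->srcu, idx);

    return max_gfn;    /* first GFN past the last memslot */
}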

Signed-off-by: Ștefan Șicleru 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 19 +++
 include/uapi/linux/kvmi.h |  5 +
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 12 
 virt/kvm/introspection/kvmi_msg.c | 13 +
 4 files changed, 49 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index e688ac387faf..ecf4207b42d0 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -777,6 +777,25 @@ exception.
 * -KVM_EBUSY - another *KVMI_VCPU_INJECT_EXCEPTION*-*KVMI_VCPU_EVENT_TRAP*
pair is in progress
 
+17. KVMI_VM_GET_MAX_GFN
+---
+
+:Architectures: all
+:Versions: >= 1
+:Parameters: none
+:Returns:
+
+::
+
+struct kvmi_error_code;
+struct kvmi_vm_get_max_gfn_reply {
+__u64 gfn;
+};
+
+Provides the maximum GFN allocated to the VM by walking through all
+memory slots. Strictly speaking, the returned value refers to the first
+inaccessible GFN, immediately after the maximum accessible GFN.
+
 Events
 ==
 
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 263d98a5903e..d0e06363c407 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -29,6 +29,7 @@ enum {
KVMI_VM_WRITE_PHYSICAL  = KVMI_VM_MESSAGE_ID(7),
KVMI_VM_PAUSE_VCPU  = KVMI_VM_MESSAGE_ID(8),
KVMI_VM_CONTROL_CLEANUP = KVMI_VM_MESSAGE_ID(9),
+   KVMI_VM_GET_MAX_GFN = KVMI_VM_MESSAGE_ID(10),
 
KVMI_NEXT_VM_MESSAGE
 };
@@ -177,4 +178,8 @@ struct kvmi_vm_control_cleanup {
__u8 padding[7];
 };
 
+struct kvmi_vm_get_max_gfn_reply {
+   __u64 gfn;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index dc9f2f0d99e8..b4565802db22 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -1322,6 +1322,17 @@ static void test_cmd_vcpu_inject_exception(struct kvm_vm 
*vm)
disable_vcpu_event(vm, KVMI_VCPU_EVENT_BREAKPOINT);
 }
 
+static void test_cmd_vm_get_max_gfn(void)
+{
+   struct kvmi_vm_get_max_gfn_reply rpl;
+   struct kvmi_msg_hdr req;
+
+   test_vm_command(KVMI_VM_GET_MAX_GFN, &req, sizeof(req),
+   &rpl, sizeof(rpl), 0);
+
+   pr_debug("max_gfn: 0x%llx\n", rpl.gfn);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
srandom(time(0));
@@ -1347,6 +1358,7 @@ static void test_introspection(struct kvm_vm *vm)
test_cmd_vm_control_cleanup(vm);
test_cmd_vcpu_control_cr(vm);
test_cmd_vcpu_inject_exception(vm);
+   test_cmd_vm_get_max_gfn();
 
unhook_introspection(vm);
 }
diff --git a/virt/kvm/introspection/kvmi_msg.c 
b/virt/kvm/introspection/kvmi_msg.c
index 762fb5227dd9..42d066e92ba2 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -290,6 +290,18 @@ static int handle_vm_control_cleanup(struct 
kvm_introspection *kvmi,
return kvmi_msg_vm_reply(kvmi, msg, ec, NULL, 0);
 }
 
+static int handle_vm_get_max_gfn(struct kvm_introspection *kvmi,
+const struct kvmi_msg_hdr *msg,
+const void *req)
+{
+   struct kvmi_vm_get_max_gfn_reply rpl;
+
+   memset(&rpl, 0, sizeof(rpl));
+   rpl.gfn = kvm_get_max_gfn(kvmi->kvm);
+
+   return kvmi_msg_vm_reply(kvmi, msg, 0, &rpl, sizeof(rpl));
+}
+
 /*
  * These commands are executed by the receiving thread.
  */
@@ -300,6 +312,7 @@ static kvmi_vm_msg_fct const msg_vm[] = {
[KVMI_VM_CONTROL_CLEANUP] = handle_vm_control_cleanup,
[KVMI_VM_CONTROL_EVENTS]  = handle_vm_control_events,
[KVMI_VM_GET_INFO]= handle_vm_get_info,
+   [KVMI_VM_GET_MAX_GFN] = handle_vm_get_max_gfn,
[KVMI_VM_PAUSE_VCPU]  = handle_vm_pause_vcpu,
[KVMI_VM_READ_PHYSICAL]   = handle_vm_read_physical,
[KVMI_VM_WRITE_PHYSICAL]  = handle_vm_write_physical,

[PATCH v10 75/81] KVM: introspection: add KVMI_VCPU_EVENT_PF

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

This event is sent when a #PF occurs due to a failed permission check
in the shadow page tables, for a page in which the introspection tool
has shown interest.
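
The event is driven by the page-track notifiers added earlier in this
series; a sketch of how the hooks might be wired up (the KVMI() accessor,
the field names and kvmi_track_prewrite() are assumptions):

/* Sketch: register the page-track callbacks that raise the PF event. */
static void kvmi_arch_hook_page_track(struct kvm *kvm)
{
    struct kvm_introspection *kvmi = KVMI(kvm);    /* assumed accessor */

    kvmi->arch.kptn_node.track_preread  = kvmi_track_preread;
    kvmi->arch.kptn_node.track_prewrite = kvmi_track_prewrite;
    kvm_page_track_register_notifier(kvm, &kvmi->arch.kptn_node);
}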

Signed-off-by: Mihai Donțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |  66 ++
 arch/x86/include/asm/kvmi_host.h  |   1 +
 arch/x86/kvm/kvmi.c   | 122 ++
 include/uapi/linux/kvmi.h |  10 ++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  |  76 +++
 virt/kvm/introspection/kvmi.c | 116 +
 virt/kvm/introspection/kvmi_int.h |   7 +
 virt/kvm/introspection/kvmi_msg.c |  19 +++
 8 files changed, 417 insertions(+)

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 1540f75c4462..bdcc9066ae28 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -543,6 +543,7 @@ the following events::
KVMI_VCPU_EVENT_DESCRIPTOR
KVMI_VCPU_EVENT_HYPERCALL
KVMI_VCPU_EVENT_MSR
+   KVMI_VCPU_EVENT_PF
KVMI_VCPU_EVENT_XSETBV
 
 When an event is enabled, the introspection tool is notified and
@@ -1398,3 +1399,68 @@ register (see **KVMI_VCPU_CONTROL_EVENTS**).
 ``kvmi_vcpu_event`` (with the vCPU state), the MSR number (``msr``),
 the old value (``old_value``) and the new value (``new_value``) are sent
 to the introspection tool. The *CONTINUE* action will set the ``new_val``.
+
+10. KVMI_VCPU_EVENT_PF
+--
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH, RETRY
+:Parameters:
+
+::
+
+   struct kvmi_vcpu_event;
+   struct kvmi_vcpu_event_pf {
+   __u64 gva;
+   __u64 gpa;
+   __u8 access;
+   __u8 padding1;
+   __u16 padding2;
+   __u32 padding3;
+   };
+
+:Returns:
+
+::
+
+   struct kvmi_vcpu_hdr;
+   struct kvmi_vcpu_event_reply;
+
+This event is sent when a hypervisor page fault occurs due to a failed
+permission check, the introspection has been enabled for this event
+(see *KVMI_VCPU_CONTROL_EVENTS*) and the event was generated for a
+page in which the introspection tool has shown interest (i.e. has
+previously touched it by adjusting the spte permissions; see
+*KVMI_VM_SET_PAGE_ACCESS*).
+
+These permissions can be used by the introspection tool to guarantee
+the purpose of memory areas inside the guest (code, rodata, stack, heap
+etc.). Each attempt at an operation unsuited to a certain memory
+range (e.g. executing code in the heap) triggers a page fault and gives
+the introspection tool the chance to audit the code attempting the operation.
+
+``kvmi_vcpu_event`` (with the vCPU state), guest virtual address (``gva``)
+if available or ~0 (UNMAPPED_GVA), guest physical address (``gpa``)
+and the ``access`` flags (e.g. KVMI_PAGE_ACCESS_R) are sent to the
+introspection tool.
+
+In case of a restricted read access, the guest address is the location
+of the memory being read. On write access, the guest address is the
+location of the memory being written. On execute access, the guest
+address is the location of the instruction being executed
+(``gva == kvmi_vcpu_event.arch.regs.rip``).
+
+In the current implementation, most of these events are sent during
+emulation. If the page fault has set more than one access bit
+(e.g. r-x/-rw), the introspection tool may receive more than one
+KVMI_VCPU_EVENT_PF and the order depends on the KVM emulator. Another
+cause of multiple events is when the page fault is triggered by an
+access crossing the page boundary.
+
+The *CONTINUE* action will continue the page fault handling (e.g. via
+emulation).
+
+The *RETRY* action is used by the introspection tool to retry the
+execution of the current instruction, usually because it changed the
+instruction pointer or the page restrictions.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index 420358c4a9ae..31500d3ff69d 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -53,6 +53,7 @@ struct kvm_vcpu_arch_introspection {
 };
 
 struct kvm_arch_introspection {
+   struct kvm_page_track_notifier_node kptn_node;
 };
 
 #define SLOTS_SIZE BITS_TO_LONGS(KVM_MEM_SLOTS_NUM)
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index acd4756e0d78..cd64762643d6 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -17,10 +17,26 @@ void kvmi_arch_init_vcpu_events_mask(unsigned long 
*supported)
set_bit(KVMI_VCPU_EVENT_HYPERCALL, supported);
set_bit(KVMI_VCPU_EVENT_DESCRIPTOR, supported);
set_bit(KVMI_VCPU_EVENT_MSR, supported);
+   set_bit(KVMI_VCPU_EVENT_PF, supported);
set_bit(KVMI_VCPU_EVENT_TRAP, supported);
set_bit(KVMI_VCPU_EVENT_XSETBV, supported);
 }
 
+static bool kvmi_track_preread(struct kvm_vcpu *vcpu, gpa_t gpa, gva_t gva,

[PATCH v10 40/81] KVM: introspection: add KVMI_VM_EVENT_UNHOOK

2020-11-25 Thread Adalbert Lazăr
This event is sent when the guest is about to be
paused/suspended/migrated. The introspection tool has the chance to
remove its hooks (e.g. breakpoints) while the guest is still running.
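
On the introspection tool side, handling it can be as simple as the
following sketch (remove_all_hooks() is a placeholder for tool-specific
cleanup, not a KVMI API; the surrounding message framing is omitted):

#include <unistd.h>

static void handle_vm_event(int kvmi_fd, const struct kvmi_event_hdr *ev)
{
    if (ev->event != KVMI_VM_EVENT_UNHOOK)
        return;

    /* tool-specific cleanup: breakpoints, page restrictions, etc. */
    remove_all_hooks();
    close(kvmi_fd);    /* lets the pause/stop/migration continue */
}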

Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   | 31 +
 arch/x86/kvm/Makefile |  2 +-
 arch/x86/kvm/kvmi.c   | 10 +++
 include/linux/kvmi_host.h |  2 +
 include/uapi/linux/kvmi.h |  9 +++
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 68 ++-
 virt/kvm/introspection/kvmi.c | 13 +++-
 virt/kvm/introspection/kvmi_int.h |  3 +
 virt/kvm/introspection/kvmi_msg.c | 42 +++-
 9 files changed, 173 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/kvm/kvmi.c

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 33490bc9d1c1..e9c40c7ae154 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -331,3 +331,34 @@ This command is always allowed.
};
 
 Returns the number of online vCPUs.
+
+Events
+==
+
+The VM introspection events are sent using the KVMI_VM_EVENT message id.
+The message data begins with a common structure having the event id::
+
+   struct kvmi_event_hdr {
+   __u16 event;
+   __u16 padding[3];
+   };
+
+Specific event data can follow this common structure.
+
+1. KVMI_VM_EVENT_UNHOOK
+---
+
+:Architectures: all
+:Versions: >= 1
+:Actions: none
+:Parameters:
+
+::
+
+   struct kvmi_event_hdr;
+
+:Returns: none
+
+This event is sent when the device manager has to pause/stop/migrate the
+guest (see **Unhooking**).  The introspection tool has a chance to unhook
+and close the KVMI channel (signaling that the operation can proceed).
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index db4121b4112d..8fad40649bcf 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y += $(KVM)/kvm_main.o 
$(KVM)/coalesced_mmio.o \
$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o 
\
$(KVM)/dirty_ring.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
-kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVMI)/kvmi.o $(KVMI)/kvmi_msg.o
+kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVMI)/kvmi.o $(KVMI)/kvmi_msg.o kvmi.o
 
 kvm-y  += x86.o emulate.o i8259.o irq.o lapic.o \
   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
new file mode 100644
index ..35742d927be5
--- /dev/null
+++ b/arch/x86/kvm/kvmi.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM introspection - x86
+ *
+ * Copyright (C) 2019-2020 Bitdefender S.R.L.
+ */
+
+void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
+{
+}
diff --git a/include/linux/kvmi_host.h b/include/linux/kvmi_host.h
index 81eac9f53a3f..6476c7d6a4d3 100644
--- a/include/linux/kvmi_host.h
+++ b/include/linux/kvmi_host.h
@@ -17,6 +17,8 @@ struct kvm_introspection {
 
unsigned long *cmd_allow_mask;
unsigned long *event_allow_mask;
+
+   atomic_t ev_seq;
 };
 
 int kvmi_version(void);
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index e06a7b80d4d9..18fb51078d48 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -17,6 +17,8 @@ enum {
 #define KVMI_VCPU_MESSAGE_ID(id) (((id) << 1) | 1)
 
 enum {
+   KVMI_VM_EVENT = KVMI_VM_MESSAGE_ID(0),
+
KVMI_GET_VERSION  = KVMI_VM_MESSAGE_ID(1),
KVMI_VM_CHECK_COMMAND = KVMI_VM_MESSAGE_ID(2),
KVMI_VM_CHECK_EVENT   = KVMI_VM_MESSAGE_ID(3),
@@ -33,6 +35,8 @@ enum {
 #define KVMI_VCPU_EVENT_ID(id) (((id) << 1) | 1)
 
 enum {
+   KVMI_VM_EVENT_UNHOOK = KVMI_VM_EVENT_ID(0),
+
KVMI_NEXT_VM_EVENT
 };
 
@@ -73,4 +77,9 @@ struct kvmi_vm_get_info_reply {
__u32 padding[3];
 };
 
+struct kvmi_event_hdr {
+   __u16 event;
+   __u16 padding[3];
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c 
b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index d60ee23fa833..01b260379c2a 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -68,6 +68,11 @@ static void set_event_perm(struct kvm_vm *vm, __s32 id, 
__u32 allow,
 "KVM_INTROSPECTION_EVENT");
 }
 
+static void disallow_event(struct kvm_vm *vm, __s32 event_id)
+{
+   set_event_perm(vm, event_id, 0, 0);
+}
+
 static void allow_event(struct kvm_vm *vm, __s32 event_id)
 {
set_event_perm(vm, event_id, 1, 0);
@@ -291,11 +296,16 @@ static void cmd_vm_check_event(__u16 id, int expected_err)
expected_err);
 }
 
-static void test_cmd_vm_check_event(void)
+static void test_cmd_vm_check_event(struct kvm_vm *vm)

[PATCH v10 46/81] KVM: introspection: handle vCPU commands

2020-11-25 Thread Adalbert Lazăr
From: Mihai Donțu 

Based on the common structure (kvmi_vcpu_hdr) used for all vCPU commands,
the receiving thread validates and dispatches the message to the proper
vCPU (adding the handling function to its jobs list).
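
A condensed sketch of that dispatch (simplified; the real code also
validates sizes and copies the message for the job; kvmi_msg_vcpu_job()
and kvmi_free_msg() are illustrative names):

/* Sketch only: route a vCPU message to the right vCPU's jobs list. */
static int kvmi_dispatch_vcpu_msg(struct kvm_introspection *kvmi,
                                  const struct kvmi_vcpu_hdr *vcpu_hdr,
                                  void *msg_copy)
{
    struct kvm_vcpu *vcpu;

    if (vcpu_hdr->padding1 || vcpu_hdr->padding2)
        return -EINVAL;

    if (vcpu_hdr->vcpu >= atomic_read(&kvmi->kvm->online_vcpus))
        return -EINVAL;

    vcpu = kvm_get_vcpu(kvmi->kvm, vcpu_hdr->vcpu);
    if (!vcpu)
        return -EINVAL;

    /* the job callback runs later, on the vCPU thread */
    return kvmi_add_job(vcpu, kvmi_msg_vcpu_job, msg_copy, kvmi_free_msg);
}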

Signed-off-by: Mihai Donțu 
Co-developed-by: Nicușor Cîțu 
Signed-off-by: Nicușor Cîțu 
Co-developed-by: Adalbert Lazăr 
Signed-off-by: Adalbert Lazăr 
---
 Documentation/virt/kvm/kvmi.rst   |   8 ++
 arch/x86/kvm/Makefile |   2 +-
 arch/x86/kvm/kvmi_msg.c   |  17 
 include/uapi/linux/kvmi.h |   6 ++
 virt/kvm/introspection/kvmi_int.h |  16 
 virt/kvm/introspection/kvmi_msg.c | 150 +-
 6 files changed, 196 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/kvm/kvmi_msg.c

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index 7812d62240c0..4d340528d2f4 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -221,6 +221,14 @@ The following C structures are meant to be used directly 
when communicating
 over the wire. The peer that detects any size mismatch should simply close
 the connection and report the error.
 
+The vCPU commands start with::
+
+   struct kvmi_vcpu_hdr {
+   __u16 vcpu;
+   __u16 padding1;
+   __u32 padding2;
+   }
+
 1. KVMI_GET_VERSION
 ---
 
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 8fad40649bcf..6d04731e235e 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y += $(KVM)/kvm_main.o 
$(KVM)/coalesced_mmio.o \
$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o 
\
$(KVM)/dirty_ring.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
-kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVMI)/kvmi.o $(KVMI)/kvmi_msg.o kvmi.o
+kvm-$(CONFIG_KVM_INTROSPECTION) += $(KVMI)/kvmi.o $(KVMI)/kvmi_msg.o kvmi.o 
kvmi_msg.o
 
 kvm-y  += x86.o emulate.o i8259.o irq.o lapic.o \
   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
new file mode 100644
index ..0f4717ca5fa8
--- /dev/null
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM introspection (message handling) - x86
+ *
+ * Copyright (C) 2020 Bitdefender S.R.L.
+ *
+ */
+
+#include "../../../virt/kvm/introspection/kvmi_int.h"
+
+static kvmi_vcpu_msg_job_fct const msg_vcpu[] = {
+};
+
+kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id)
+{
+   return id < ARRAY_SIZE(msg_vcpu) ? msg_vcpu[id] : NULL;
+}
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index 048afad01be6..7ba1c8758aba 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -107,4 +107,10 @@ struct kvmi_vm_write_physical {
__u8  data[0];
 };
 
+struct kvmi_vcpu_hdr {
+   __u16 vcpu;
+   __u16 padding1;
+   __u32 padding2;
+};
+
 #endif /* _UAPI__LINUX_KVMI_H */
diff --git a/virt/kvm/introspection/kvmi_int.h 
b/virt/kvm/introspection/kvmi_int.h
index c3aa12554c2b..c3e4da7e7f20 100644
--- a/virt/kvm/introspection/kvmi_int.h
+++ b/virt/kvm/introspection/kvmi_int.h
@@ -14,6 +14,18 @@
  */
 #define KVMI_MAX_MSG_SIZE (4096 * 2 - sizeof(struct kvmi_msg_hdr))
 
+struct kvmi_vcpu_msg_job {
+   struct {
+   struct kvmi_msg_hdr hdr;
+   struct kvmi_vcpu_hdr vcpu_hdr;
+   } *msg;
+   struct kvm_vcpu *vcpu;
+};
+
+typedef int (*kvmi_vcpu_msg_job_fct)(const struct kvmi_vcpu_msg_job *job,
+const struct kvmi_msg_hdr *msg,
+const void *req);
+
 /* kvmi_msg.c */
 bool kvmi_sock_get(struct kvm_introspection *kvmi, int fd);
 void kvmi_sock_shutdown(struct kvm_introspection *kvmi);
@@ -28,6 +40,9 @@ bool kvmi_is_command_allowed(struct kvm_introspection *kvmi, 
u16 id);
 bool kvmi_is_event_allowed(struct kvm_introspection *kvmi, u16 id);
 bool kvmi_is_known_event(u16 id);
 bool kvmi_is_known_vm_event(u16 id);
+int kvmi_add_job(struct kvm_vcpu *vcpu,
+void (*fct)(struct kvm_vcpu *vcpu, void *ctx),
+void *ctx, void (*free_fct)(void *ctx));
 int kvmi_cmd_vm_control_events(struct kvm_introspection *kvmi,
   u16 event_id, bool enable);
 int kvmi_cmd_read_physical(struct kvm *kvm, u64 gpa, size_t size,
@@ -40,5 +55,6 @@ int kvmi_cmd_write_physical(struct kvm *kvm, u64 gpa, size_t 
size,
 
 /* arch */
 void kvmi_arch_init_vcpu_events_mask(unsigned long *supported);
+kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id);
 
 #endif
diff --git a/virt/kvm/introspection/kvmi_msg.c 
b/virt/kvm/introspection/kvmi_msg.c
index 4fe385265758..6f2fe245a8b1 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -13,6 +13,7 @@ typedef int (*kvmi_vm_msg_fct)(struct kvm_introspec

Re: [PATCH v3 16/17] x86/ioapic: export a few functions and data structures via io_apic.h

2020-11-25 Thread Andy Shevchenko
On Wed, Nov 25, 2020 at 1:46 AM Wei Liu  wrote:
>
> We are about to implement an irqchip for IO-APIC when Linux runs as root
> on Microsoft Hypervisor. At the same time we would like to reuse
> existing code as much as possible.
>
> Move mp_chip_data to io_apic.h and make a few helper functions
> non-static.

> +struct mp_chip_data {
> +   struct list_head irq_2_pin;
> +   struct IO_APIC_route_entry entry;
> +   int trigger;
> +   int polarity;
> +   u32 count;
> +   bool isa_irq;
> +};

Since I see only this patch I am puzzled why you need to have this in
the header?
Maybe a couple of words in the commit message to elaborate?

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-25 Thread Andy Shevchenko
On Mon, Nov 23, 2020 at 10:39 PM James Bottomley
 wrote:
> On Mon, 2020-11-23 at 19:56 +0100, Miguel Ojeda wrote:
> > On Mon, Nov 23, 2020 at 4:58 PM James Bottomley
> >  wrote:

...

> > But if we do the math, for an author, at even 1 minute per line
> > change and assuming nothing can be automated at all, it would take 1
> > month of work. For maintainers, a couple of trivial lines is noise
> > compared to many other patches.
>
> So you think a one line patch should take one minute to produce ... I
> really don't think that's grounded in reality.  I suppose a one line
> patch only takes a minute to merge with b4 if no-one reviews or tests
> it, but that's not really desirable.

In my practice most of the one-line patches were either fixing or
introducing quite interesting issues.
1 minute is 2-3 orders of magnitude less than what such patches usually
need. That's why I don't like churn produced by people who often didn't
even compile their useful contributions.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v10 33/81] KVM: introspection: add hook/unhook ioctls

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/fa233c4711c43446c876e84ca6f87b702b0990a8
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout fa233c4711c43446c876e84ca6f87b702b0990a8
# save the attached .config to linux build tree
make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c:90:6: warning: no 
previous prototype for 'kvmi_put' [-Wmissing-prototypes]
  90 | void kvmi_put(struct kvm *kvm)
 |  ^~~~
>> arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c:121:5: warning: no 
>> previous prototype for 'kvmi_hook' [-Wmissing-prototypes]
 121 | int kvmi_hook(struct kvm *kvm, const struct kvm_introspection_hook 
*hook)
 | ^

vim +/kvmi_hook +121 arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c

   120  
 > 121  int kvmi_hook(struct kvm *kvm, const struct kvm_introspection_hook 
 > *hook)
   122  {
   123  struct kvm_introspection *kvmi;
   124  int err = 0;
   125  
   126  mutex_lock(&kvm->kvmi_lock);
   127  
   128  if (kvm->kvmi) {
   129  err = -EEXIST;
   130  goto out;
   131  }
   132  
   133  kvmi = kvmi_alloc(kvm, hook);
   134  if (!kvmi) {
   135  err = -ENOMEM;
   136  goto out;
   137  }
   138  
   139  kvm->kvmi = kvmi;
   140  
   141  err = __kvmi_hook(kvm, hook);
   142  if (err)
   143  goto destroy;
   144  
   145  init_completion(&kvm->kvmi_complete);
   146  
   147  refcount_set(&kvm->kvmi_ref, 1);
   148  
   149  kvmi->recv = kthread_run(kvmi_recv_thread, kvmi, "kvmi-recv");
   150  if (IS_ERR(kvmi->recv)) {
   151  err = -ENOMEM;
   152  kvmi_put(kvm);
   153  goto unhook;
   154  }
   155  
   156  goto out;
   157  
   158  unhook:
   159  __kvmi_unhook(kvm);
   160  destroy:
   161  kvmi_destroy(kvmi);
   162  out:
   163  mutex_unlock(&kvm->kvmi_lock);
   164  return err;
   165  }
   166  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



Re: [PATCH v10 25/81] KVM: x86: export kvm_vcpu_ioctl_x86_get_xsave()

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: s390-allyesconfig (attached as .config)
compiler: s390-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# 
https://github.com/0day-ci/linux/commit/311f49fb9bd7c7968a435ccfbf075cd4d1bd8079
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout 311f49fb9bd7c7968a435ccfbf075cd4d1bd8079
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   In file included from arch/s390/kernel/asm-offsets.c:11:
>> include/linux/kvm_host.h:925:14: warning: 'struct kvm_xsave' declared inside 
>> parameter list will not be visible outside of this definition or declaration
 925 |   struct kvm_xsave *guest_xsave);
 |  ^
--
   In file included from arch/s390/kernel/asm-offsets.c:11:
>> include/linux/kvm_host.h:925:14: warning: 'struct kvm_xsave' declared inside 
>> parameter list will not be visible outside of this definition or declaration
 925 |   struct kvm_xsave *guest_xsave);
 |  ^

vim +925 include/linux/kvm_host.h

   900  
   901  int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
   902  struct kvm_translation *tr);
   903  
   904  int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs 
*regs);
   905  void kvm_arch_vcpu_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs 
*regs);
   906  int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs 
*regs);
   907  void kvm_arch_vcpu_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs 
*regs,
   908  bool clear_exception);
   909  int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
   910struct kvm_sregs *sregs);
   911  void kvm_arch_vcpu_get_sregs(struct kvm_vcpu *vcpu,
   912struct kvm_sregs *sregs);
   913  int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
   914struct kvm_sregs *sregs);
   915  int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
   916  struct kvm_mp_state *mp_state);
   917  int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
   918  struct kvm_mp_state *mp_state);
   919  int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
   920  struct kvm_guest_debug *dbg);
   921  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu);
   922  int kvm_arch_vcpu_set_guest_debug(struct kvm_vcpu *vcpu,
   923struct kvm_guest_debug *dbg);
   924  void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 > 925struct kvm_xsave *guest_xsave);
   926  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-25 Thread Nick Desaulniers via Virtualization
On Tue, Nov 24, 2020 at 11:05 PM James Bottomley
 wrote:
>
> On Tue, 2020-11-24 at 13:32 -0800, Kees Cook wrote:
> > We already enable -Wimplicit-fallthrough globally, so that's not the
> > discussion. The issue is that Clang is (correctly) even more strict
> > than GCC for this, so these are the remaining ones to fix for full
> > Clang coverage too.
> >
> > People have spent more time debating this already than it would have
> > taken to apply the patches. :)
>
> You mean we've already spent 90% of the effort to come this far so we
> might as well go the remaining 10% because then at least we get some
> return? It's certainly a clinching argument in defence procurement ...

So developers and distributions using Clang can't have
-Wimplicit-fallthrough enabled because GCC is less strict (which has
been shown in this thread to lead to bugs)?  We'd like to have nice
things too, you know.

I even agree that most of the churn comes from

case 0:
  ++x;
default:
  break;

which I have a patch for: https://reviews.llvm.org/D91895.  I agree
that can never lead to bugs.  But that's not the sole case of this
series, just most of them.

Though, note how the reviewer (C++ spec editor and clang front end
owner) in https://reviews.llvm.org/D91895 even asks in that review how
maybe a new flag would be more appropriate for a watered
down/stylistic variant of the existing behavior.  And if the current
wording of Documentation/process/deprecated.rst around "fallthrough"
is a straightforward rule of thumb, I kind of agree with him.

>
> > This is about robustness and language wrangling. It's a big code-
> > base, and this is the price of our managing technical debt for
> > permanent robustness improvements. (The numbers I ran from Gustavo's
> > earlier patches were that about 10% of the places adjusted were
> > identified as legitimate bugs being fixed. This final series may be
> > lower, but there are still bugs being found from it -- we need to
> > finish this and shut the door on it for good.)
>
> I got my six patches by analyzing the lwn.net report of the fixes that
> was cited which had 21 of which 50% didn't actually change the emitted
> code, and 25% didn't have a user visible effect.
>
> But the broader point I'm making is just because the compiler people
> come up with a shiny new warning doesn't necessarily mean the problem

That's not what this is though; you're attacking a strawman.  I'd
encourage you to bring that up when that actually occurs, unlike this
case since it's actively hindering getting -Wimplicit-fallthrough
enabled for Clang.  This is not a shiny new warning; it's already on
for GCC and has existed in both compilers for multiple releases.

And I'll also note that warnings are warnings and not errors because
they cannot be proven to be bugs in 100% of cases, but they have led
to bugs in the past.  They require a human to review their intent and
remove ambiguities.  If 97% of cases would end in a break ("Expert C
Programming: Deep C Secrets" - Peter van der Linden), then it starts
to look to me like a language defect; certainly an incorrectly chosen
default.  But the compiler can't know those 3% were intentional,
unless you're explicit for those exceptional cases.

> it's detecting is one that causes us actual problems in the code base.
> I'd really be happier if we had a theory about what classes of CVE or
> bug we could eliminate before we embrace the next new warning.

We don't generally file CVEs and waiting for them to occur might be
too reactive, but I agree that pointing to some additional
documentation in commit messages about how a warning could lead to a
bug would make it clearer to reviewers why being able to enable it
treewide, even if there's no bug in their particular subsystem, is in
the general interest of the commons.

On Mon, Nov 23, 2020 at 7:58 AM James Bottomley
 wrote:
>
> We're also complaining about the inability to recruit maintainers:
>
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
>
> And burn out:
>
> http://antirez.com/news/129
>
> The whole crux of your argument seems to be maintainers' time isn't
> important so we should accept all trivial patches ... I'm pushing back
> on that assumption in two places, firstly the valulessness of the time
> and secondly that all trivial patches are valuable.

It's critical to the longevity of any open source project that there
are not single points of failure.  If someone is not expendable or
replaceable (or claims to be) then that's a risk to the project and a
bottleneck.  Not having a replacement in training or some form of
redundancy is short-sighted.

If trivial patches are adding too much to your workload, consider
training a co-maintainer or asking for help from one of the reviewers
you trust.  I don't doubt it's hard to find maintainers, but existing
maintainers should go out of their way to bring up co-maintainers,
especially when they find their workload becoming too high.  And review

Re: [PATCH v10 25/81] KVM: x86: export kvm_vcpu_ioctl_x86_get_xsave()

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: mips-randconfig-r013-20201125 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 
77e98eaee2e8d4b9b297b66fda5b1e51e2a6)
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# install mips cross compiling tool for clang build
# apt-get install binutils-mips-linux-gnu
# 
https://github.com/0day-ci/linux/commit/311f49fb9bd7c7968a435ccfbf075cd4d1bd8079
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout 311f49fb9bd7c7968a435ccfbf075cd4d1bd8079
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=mips 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   In file included from arch/mips/kernel/asm-offsets.c:24:
>> include/linux/kvm_host.h:925:14: warning: declaration of 'struct kvm_xsave' 
>> will not be visible outside of this function [-Wvisibility]
 struct kvm_xsave *guest_xsave);
^
   arch/mips/kernel/asm-offsets.c:26:6: warning: no previous prototype for 
function 'output_ptreg_defines' [-Wmissing-prototypes]
   void output_ptreg_defines(void)
^
   arch/mips/kernel/asm-offsets.c:26:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_ptreg_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:78:6: warning: no previous prototype for 
function 'output_task_defines' [-Wmissing-prototypes]
   void output_task_defines(void)
^
   arch/mips/kernel/asm-offsets.c:78:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_task_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:93:6: warning: no previous prototype for 
function 'output_thread_info_defines' [-Wmissing-prototypes]
   void output_thread_info_defines(void)
^
   arch/mips/kernel/asm-offsets.c:93:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_thread_info_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:110:6: warning: no previous prototype for 
function 'output_thread_defines' [-Wmissing-prototypes]
   void output_thread_defines(void)
^
   arch/mips/kernel/asm-offsets.c:110:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_thread_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:138:6: warning: no previous prototype for 
function 'output_thread_fpu_defines' [-Wmissing-prototypes]
   void output_thread_fpu_defines(void)
^
   arch/mips/kernel/asm-offsets.c:138:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_thread_fpu_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:181:6: warning: no previous prototype for 
function 'output_mm_defines' [-Wmissing-prototypes]
   void output_mm_defines(void)
^
   arch/mips/kernel/asm-offsets.c:181:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_mm_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:220:6: warning: no previous prototype for 
function 'output_sc_defines' [-Wmissing-prototypes]
   void output_sc_defines(void)
^
   arch/mips/kernel/asm-offsets.c:220:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_sc_defines(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:255:6: warning: no previous prototype for 
function 'output_signal_defined' [-Wmissing-prototypes]
   void output_signal_defined(void)
^
   arch/mips/kernel/asm-offsets.c:255:1: note: declare 'static' if the function 
is not intended to be used outside of this translation unit
   void output_signal_defined(void)
   ^
   static 
   arch/mips/kernel/asm-offsets.c:348:6: warning: no previous prototype for 
function 'output_kvm_defines' [-Wmissing-prototypes]
   void output_kvm_defines(void)
^
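
The -Wvisibility warning near the top of this log means the prototype in
include/linux/kvm_host.h names struct kvm_xsave before any declaration of
that struct is in scope for this (non-x86) translation unit, so the
compiler treats it as a new type local to the parameter list.  A minimal
sketch of the usual fix, assuming the prototype looks roughly like the
excerpt above, is a forward declaration ahead of it:

    /* include/linux/kvm_host.h - sketch only */
    struct kvm_xsave;

    void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
                                      struct kvm_xsave *guest_xsave);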
 

Re: [PATCH v10 32/81] KVM: introduce VM introspection

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: mips-malta_kvm_defconfig (attached as .config)
compiler: mipsel-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# 
https://github.com/0day-ci/linux/commit/6ffa5da71155bd0bed0d68c52af248bda256d0f2
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout 6ffa5da71155bd0bed0d68c52af248bda256d0f2
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
ARCH=mips 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

   In file included from arch/mips/kvm/../../../virt/kvm/kvm_main.c:18:
   include/linux/kvm_host.h:925:14: warning: 'struct kvm_xsave' declared inside 
parameter list will not be visible outside of this definition or declaration
 925 |   struct kvm_xsave *guest_xsave);
 |  ^
   include/linux/kvm_host.h:927:13: warning: 'struct kvm_xsave' declared inside 
parameter list will not be visible outside of this definition or declaration
 927 |  struct kvm_xsave *guest_xsave);
 | ^
   arch/mips/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_create_vm':
>> arch/mips/kvm/../../../virt/kvm/kvm_main.c:806:6: error: 
>> 'enable_introspection' undeclared (first use in this function)
 806 |  if (enable_introspection)
 |  ^~~~
   arch/mips/kvm/../../../virt/kvm/kvm_main.c:806:6: note: each undeclared 
identifier is reported only once for each function it appears in
   arch/mips/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_destroy_vm':
   arch/mips/kvm/../../../virt/kvm/kvm_main.c:861:6: error: 
'enable_introspection' undeclared (first use in this function)
 861 |  if (enable_introspection)
 |  ^~~~
   arch/mips/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_init':
   arch/mips/kvm/../../../virt/kvm/kvm_main.c:5012:6: error: 
'enable_introspection' undeclared (first use in this function)
5012 |  if (enable_introspection) {
 |  ^~~~

vim +/enable_introspection +806 arch/mips/kvm/../../../virt/kvm/kvm_main.c

   797  
   798  r = kvm_init_mmu_notifier(kvm);
   799  if (r)
   800  goto out_err_no_mmu_notifier;
   801  
   802  r = kvm_arch_post_init_vm(kvm);
   803  if (r)
   804  goto out_err;
   805  
 > 806  if (enable_introspection)
   807  kvmi_create_vm(kvm);
   808  
   809  mutex_lock(&kvm_lock);
   810  list_add(&kvm->vm_list, &vm_list);
   811  mutex_unlock(&kvm_lock);
   812  
   813  preempt_notifier_inc();
   814  
   815  return kvm;
   816  
   817  out_err:
   818  #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
   819  if (kvm->mmu_notifier.ops)
   820  mmu_notifier_unregister(&kvm->mmu_notifier, 
current->mm);
   821  #endif
   822  out_err_no_mmu_notifier:
   823  hardware_disable_all();
   824  out_err_no_disable:
   825  kvm_arch_destroy_vm(kvm);
   826  out_err_no_arch_destroy_vm:
   827  WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
   828  for (i = 0; i < KVM_NR_BUSES; i++)
   829  kfree(kvm_get_bus(kvm, i));
   830  for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
   831  kvm_free_memslots(kvm, __kvm_memslots(kvm, i));
   832  cleanup_srcu_struct(&kvm->irq_srcu);
   833  out_err_no_irq_srcu:
   834  cleanup_srcu_struct(&kvm->srcu);
   835  out_err_no_srcu:
   836  kvm_arch_free_vm(kvm);
   837  mmdrop(current->mm);
   838  return ERR_PTR(r);
   839  }
   840  
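
The undeclared 'enable_introspection' errors suggest the variable is
defined only by the introspection code while kvm_main.c is shared by all
architectures.  A minimal sketch of one conventional fix is shown below;
the header location and config symbol are assumptions, not taken from the
patch:

    /* e.g. include/linux/kvmi_host.h - sketch only */
    #ifdef CONFIG_KVM_INTROSPECTION
    extern bool enable_introspection;
    #else
    #define enable_introspection false
    #endif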

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH v10 63/81] KVM: introspection: add KVMI_VCPU_INJECT_EXCEPTION + KVMI_VCPU_EVENT_TRAP

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/248beb976ebfd430c3535cc442d0ba198f1ab1d6
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout 248beb976ebfd430c3535cc442d0ba198f1ab1d6
# save the attached .config to linux build tree
make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   arch/x86/kvm/kvmi_msg.c: In function 'handle_vcpu_inject_exception':
>> arch/x86/kvm/kvmi_msg.c:158:38: warning: variable 'arch' set but not used 
>> [-Wunused-but-set-variable]
 158 |  struct kvm_vcpu_arch_introspection *arch;
 |  ^~~~

vim +/arch +158 arch/x86/kvm/kvmi_msg.c

   152  
   153  static int handle_vcpu_inject_exception(const struct kvmi_vcpu_msg_job 
*job,
   154  const struct kvmi_msg_hdr *msg,
   155  const void *_req)
   156  {
   157  const struct kvmi_vcpu_inject_exception *req = _req;
 > 158  struct kvm_vcpu_arch_introspection *arch;
   159  struct kvm_vcpu *vcpu = job->vcpu;
   160  int ec;
   161  
   162  arch = &VCPUI(vcpu)->arch;
   163  
   164  if (!kvmi_is_event_allowed(KVMI(vcpu->kvm), 
KVMI_VCPU_EVENT_TRAP))
   165  ec = -KVM_EPERM;
   166  else if (req->padding1 || req->padding2)
   167  ec = -KVM_EINVAL;
   168  else if (VCPUI(vcpu)->arch.exception.pending ||
   169  VCPUI(vcpu)->arch.exception.send_event)
   170  ec = -KVM_EBUSY;
   171  else
   172  ec = kvmi_arch_cmd_vcpu_inject_exception(vcpu, req);
   173  
   174  return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
   175  }
   176  
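
Judging only from the excerpt above, 'arch' is assigned but the checks
still go through VCPUI(vcpu)->arch directly, which is what triggers
-Wunused-but-set-variable.  A minimal sketch of a fix is to use the local
(or simply drop it):

    arch = &VCPUI(vcpu)->arch;

    if (!kvmi_is_event_allowed(KVMI(vcpu->kvm), KVMI_VCPU_EVENT_TRAP))
        ec = -KVM_EPERM;
    else if (req->padding1 || req->padding2)
        ec = -KVM_EINVAL;
    else if (arch->exception.pending || arch->exception.send_event)
        ec = -KVM_EBUSY;
    else
        ec = kvmi_arch_cmd_vcpu_inject_exception(vcpu, req);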

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

[PATCH] virtio-input: add multi-touch support

2020-11-25 Thread Vasyl Vavrychuk
From: Mathias Crombez 

Without multi-touch slots allocated, ABS_MT_SLOT events will be lost by
input_handle_abs_event.

Signed-off-by: Mathias Crombez 
Signed-off-by: Vasyl Vavrychuk 
Tested-by: Vasyl Vavrychuk 
---
 drivers/virtio/Kconfig| 11 +++
 drivers/virtio/virtio_input.c |  8 
 2 files changed, 19 insertions(+)

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 7b41130d3f35..2cfd5b01d96d 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -111,6 +111,17 @@ config VIRTIO_INPUT

 If unsure, say M.

+config VIRTIO_INPUT_MULTITOUCH_SLOTS
+   depends on VIRTIO_INPUT
+   int "Number of multitouch slots"
+   range 0 64
+   default 10
+   help
+Define the number of multitouch slots used. Default to 10.
+This parameter is unused if there is no multitouch capability.
+
+0 will disable the feature.
+
 config VIRTIO_MMIO
tristate "Platform bus driver for memory mapped virtio devices"
depends on HAS_IOMEM && HAS_DMA
diff --git a/drivers/virtio/virtio_input.c b/drivers/virtio/virtio_input.c
index f1f6208edcf5..13f3d90e6c30 100644
--- a/drivers/virtio/virtio_input.c
+++ b/drivers/virtio/virtio_input.c
@@ -7,6 +7,7 @@

 #include 
 #include 
+#include 

 struct virtio_input {
struct virtio_device   *vdev;
@@ -205,6 +206,7 @@ static int virtinput_probe(struct virtio_device *vdev)
unsigned long flags;
size_t size;
int abs, err;
+   bool is_mt = false;

if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
return -ENODEV;
@@ -287,9 +289,15 @@ static int virtinput_probe(struct virtio_device *vdev)
for (abs = 0; abs < ABS_CNT; abs++) {
if (!test_bit(abs, vi->idev->absbit))
continue;
+   if (input_is_mt_value(abs))
+   is_mt = true;
virtinput_cfg_abs(vi, abs);
}
}
+   if (is_mt)
+   input_mt_init_slots(vi->idev,
+   CONFIG_VIRTIO_INPUT_MULTITOUCH_SLOTS,
+   INPUT_MT_DIRECT);

virtio_device_ready(vdev);
vi->ready = true;
--
2.23.0


___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v10 32/81] KVM: introduce VM introspection

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: powerpc64-randconfig-r006-20201125 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 
77e98eaee2e8d4b9b297b66fda5b1e51e2a6)
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc64 cross compiling tool for clang build
# apt-get install binutils-powerpc64-linux-gnu
# 
https://github.com/0day-ci/linux/commit/6ffa5da71155bd0bed0d68c52af248bda256d0f2
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout 6ffa5da71155bd0bed0d68c52af248bda256d0f2
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross 
ARCH=powerpc64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:604:
   arch/powerpc/include/asm/io-defs.h:45:1: warning: performing pointer 
arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
   ^~~
   arch/powerpc/include/asm/io.h:601:3: note: expanded from macro 
'DEF_PCI_AC_NORET'
   __do_##name al; \
   ^~
   :45:1: note: expanded from here
   __do_insw
   ^
   arch/powerpc/include/asm/io.h:542:56: note: expanded from macro '__do_insw'
   #define __do_insw(p, b, n)  readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
  ~^
   In file included from arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:18:
   In file included from include/linux/kvm_host.h:7:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:604:
   arch/powerpc/include/asm/io-defs.h:47:1: warning: performing pointer 
arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
   ^~~
   arch/powerpc/include/asm/io.h:601:3: note: expanded from macro 
'DEF_PCI_AC_NORET'
   __do_##name al; \
   ^~
   :47:1: note: expanded from here
   __do_insl
   ^
   arch/powerpc/include/asm/io.h:543:56: note: expanded from macro '__do_insl'
   #define __do_insl(p, b, n)  readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
  ~^
   In file included from arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:18:
   In file included from include/linux/kvm_host.h:7:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:604:
   arch/powerpc/include/asm/io-defs.h:49:1: warning: performing pointer 
arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
   ^~
   arch/powerpc/include/asm/io.h:601:3: note: expanded from macro 
'DEF_PCI_AC_NORET'
   __do_##name al; \
   ^~
   :49:1: note: expanded from here
   __do_outsb
   ^
   arch/powerpc/include/asm/io.h:544:58: note: expanded from macro '__do_outsb'
   #define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
   ~^
   In file included from arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:18:
   In file included from include/linux/kvm_host.h:7:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/

Re: [PATCH v10 67/81] KVM: introspection: add KVMI_VCPU_GET_XSAVE

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/c8777a54e026a93f283f7dfac30ed2cb2563fd03
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout c8777a54e026a93f283f7dfac30ed2cb2563fd03
# save the attached .config to linux build tree
make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   arch/x86/kvm/kvmi_msg.c: In function 'handle_vcpu_inject_exception':
   arch/x86/kvm/kvmi_msg.c:158:38: warning: variable 'arch' set but not used 
[-Wunused-but-set-variable]
 158 |  struct kvm_vcpu_arch_introspection *arch;
 |  ^~~~
   arch/x86/kvm/kvmi_msg.c: In function 'handle_vcpu_get_xsave':
>> arch/x86/kvm/kvmi_msg.c:202:11: warning: variable 'ec' set but not used 
>> [-Wunused-but-set-variable]
 202 |  int err, ec = 0;
 |   ^~

vim +/ec +202 arch/x86/kvm/kvmi_msg.c

   196  
   197  static int handle_vcpu_get_xsave(const struct kvmi_vcpu_msg_job *job,
   198   const struct kvmi_msg_hdr *msg,
   199   const void *req)
   200  {
   201  struct kvmi_vcpu_get_xsave_reply *rpl;
 > 202  int err, ec = 0;
   203  
   204  rpl = kvmi_msg_alloc();
   205  if (!rpl)
   206  ec = -KVM_ENOMEM;
   207  else
   208  kvm_vcpu_ioctl_x86_get_xsave(job->vcpu, &rpl->xsave);
   209  
   210  err = kvmi_msg_vcpu_reply(job, msg, 0, rpl, sizeof(*rpl));
   211  
   212  kvmi_msg_free(rpl);
   213  return err;
   214  }
   215  
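
Here the warning looks like more than style: 'ec' records -KVM_ENOMEM on
allocation failure but the reply is sent with a hard-coded 0.  A minimal
sketch of the likely intent, inferred only from the excerpt above, forwards
ec and sends no payload when the allocation failed:

    rpl = kvmi_msg_alloc();
    if (!rpl)
        ec = -KVM_ENOMEM;
    else
        kvm_vcpu_ioctl_x86_get_xsave(job->vcpu, &rpl->xsave);

    err = kvmi_msg_vcpu_reply(job, msg, ec, rpl, rpl ? sizeof(*rpl) : 0);

    kvmi_msg_free(rpl);
    return err;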

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH 01/15] drm/amdgpu: Remove references to struct drm_device.pdev

2020-11-25 Thread Alex Deucher
On Tue, Nov 24, 2020 at 6:38 AM Thomas Zimmermann  wrote:
>
> Using struct drm_device.pdev is deprecated. Convert amdgpu to struct
> drm_device.dev. No functional changes.
>
> Signed-off-by: Thomas Zimmermann 
> Cc: Alex Deucher 
> Cc: Christian König 

There are a few unrelated whitespace changes.  Other than that, patch is:
Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 23 ++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |  1 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 -
>  drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 10 -
>  7 files changed, 25 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 7560b05e4ac1..d61715133825 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -1404,9 +1404,9 @@ static void amdgpu_switcheroo_set_state(struct pci_dev 
> *pdev,
> /* don't suspend or resume card normally */
> dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
>
> -   pci_set_power_state(dev->pdev, PCI_D0);
> -   amdgpu_device_load_pci_state(dev->pdev);
> -   r = pci_enable_device(dev->pdev);
> +   pci_set_power_state(pdev, PCI_D0);
> +   amdgpu_device_load_pci_state(pdev);
> +   r = pci_enable_device(pdev);
> if (r)
> DRM_WARN("pci_enable_device failed (%d)\n", r);
> amdgpu_device_resume(dev, true);
> @@ -1418,10 +1418,10 @@ static void amdgpu_switcheroo_set_state(struct 
> pci_dev *pdev,
> drm_kms_helper_poll_disable(dev);
> dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
> amdgpu_device_suspend(dev, true);
> -   amdgpu_device_cache_pci_state(dev->pdev);
> +   amdgpu_device_cache_pci_state(pdev);
> /* Shut down the device */
> -   pci_disable_device(dev->pdev);
> -   pci_set_power_state(dev->pdev, PCI_D3cold);
> +   pci_disable_device(pdev);
> +   pci_set_power_state(pdev, PCI_D3cold);
> dev->switch_power_state = DRM_SWITCH_POWER_OFF;
> }
>  }
> @@ -1684,8 +1684,7 @@ static void amdgpu_device_enable_virtual_display(struct 
> amdgpu_device *adev)
> adev->enable_virtual_display = false;
>
> if (amdgpu_virtual_display) {
> -   struct drm_device *ddev = adev_to_drm(adev);
> -   const char *pci_address_name = pci_name(ddev->pdev);
> +   const char *pci_address_name = pci_name(adev->pdev);
> char *pciaddstr, *pciaddstr_tmp, *pciaddname_tmp, *pciaddname;
>
> pciaddstr = kstrdup(amdgpu_virtual_display, GFP_KERNEL);
> @@ -3375,7 +3374,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
> }
> }
>
> -   pci_enable_pcie_error_reporting(adev->ddev.pdev);
> +   pci_enable_pcie_error_reporting(adev->pdev);
>
> /* Post card if necessary */
> if (amdgpu_device_need_post(adev)) {
> @@ -4922,8 +4921,8 @@ pci_ers_result_t amdgpu_pci_error_detected(struct 
> pci_dev *pdev, pci_channel_sta
> case pci_channel_io_normal:
> return PCI_ERS_RESULT_CAN_RECOVER;
> /* Fatal error, prepare for slot reset */
> -   case pci_channel_io_frozen:
> -   /*
> +   case pci_channel_io_frozen:
> +   /*
>  * Cancel and wait for all TDRs in progress if failing to
>  * set  adev->in_gpu_reset in amdgpu_device_lock_adev
>  *
> @@ -5014,7 +5013,7 @@ pci_ers_result_t amdgpu_pci_slot_reset(struct pci_dev 
> *pdev)
> goto out;
> }
>
> -   adev->in_pci_err_recovery = true;
> +   adev->in_pci_err_recovery = true;
> r = amdgpu_device_pre_asic_reset(adev, NULL, &need_full_reset);
> adev->in_pci_err_recovery = false;
> if (r)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index 2e8a8b57639f..77974c3981fa 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -721,13 +721,14 @@ amdgpu_display_user_framebuffer_create(struct 
> drm_device *dev,
>struct drm_file *file_priv,
>const struct drm_mode_fb_cmd2 
> *mode_cmd)
>  {
> +   struct amdgpu_device *adev = drm_to_adev(dev);
> struct drm_gem_object *obj;
> struct amdgpu_framebuffer *amdgpu_fb;
> int ret;
>
> obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[0])

Re: [PATCH 11/15] drm/radeon: Remove references to struct drm_device.pdev

2020-11-25 Thread Alex Deucher
On Tue, Nov 24, 2020 at 6:39 AM Thomas Zimmermann  wrote:
>
> Using struct drm_device.pdev is deprecated. Convert radeon to struct
> drm_device.dev. No functional changes.
>
> Signed-off-by: Thomas Zimmermann 
> Cc: Alex Deucher 
> Cc: Christian König 

There are a few unrelated whitespace changes.  Other than that, patch is:
Acked-by: Alex Deucher 

> ---
>  drivers/gpu/drm/radeon/atombios_encoders.c|  6 +-
>  drivers/gpu/drm/radeon/r100.c | 27 +++---
>  drivers/gpu/drm/radeon/radeon.h   | 32 +++
>  drivers/gpu/drm/radeon/radeon_atombios.c  | 89 ++-
>  drivers/gpu/drm/radeon/radeon_bios.c  |  6 +-
>  drivers/gpu/drm/radeon/radeon_combios.c   | 55 ++--
>  drivers/gpu/drm/radeon/radeon_cs.c|  3 +-
>  drivers/gpu/drm/radeon/radeon_device.c| 17 ++--
>  drivers/gpu/drm/radeon/radeon_display.c   |  2 +-
>  drivers/gpu/drm/radeon/radeon_drv.c   |  3 +-
>  drivers/gpu/drm/radeon/radeon_fb.c|  2 +-
>  drivers/gpu/drm/radeon/radeon_gem.c   |  6 +-
>  drivers/gpu/drm/radeon/radeon_i2c.c   |  2 +-
>  drivers/gpu/drm/radeon/radeon_irq_kms.c   |  2 +-
>  drivers/gpu/drm/radeon/radeon_kms.c   | 20 ++---
>  .../gpu/drm/radeon/radeon_legacy_encoders.c   |  6 +-
>  drivers/gpu/drm/radeon/rs780_dpm.c|  7 +-
>  17 files changed, 144 insertions(+), 141 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
> b/drivers/gpu/drm/radeon/atombios_encoders.c
> index cc5ee1b3af84..a9ae8b6c5991 100644
> --- a/drivers/gpu/drm/radeon/atombios_encoders.c
> +++ b/drivers/gpu/drm/radeon/atombios_encoders.c
> @@ -2065,9 +2065,9 @@ atombios_apply_encoder_quirks(struct drm_encoder 
> *encoder,
> struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
>
> /* Funky macbooks */
> -   if ((dev->pdev->device == 0x71C5) &&
> -   (dev->pdev->subsystem_vendor == 0x106b) &&
> -   (dev->pdev->subsystem_device == 0x0080)) {
> +   if ((rdev->pdev->device == 0x71C5) &&
> +   (rdev->pdev->subsystem_vendor == 0x106b) &&
> +   (rdev->pdev->subsystem_device == 0x0080)) {
> if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) {
> uint32_t lvtma_bit_depth_control = 
> RREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL);
>
> diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
> index 24c8db673931..984eeb893d76 100644
> --- a/drivers/gpu/drm/radeon/r100.c
> +++ b/drivers/gpu/drm/radeon/r100.c
> @@ -2611,7 +2611,6 @@ int r100_asic_reset(struct radeon_device *rdev, bool 
> hard)
>
>  void r100_set_common_regs(struct radeon_device *rdev)
>  {
> -   struct drm_device *dev = rdev->ddev;
> bool force_dac2 = false;
> u32 tmp;
>
> @@ -2629,7 +2628,7 @@ void r100_set_common_regs(struct radeon_device *rdev)
>  * don't report it in the bios connector
>  * table.
>  */
> -   switch (dev->pdev->device) {
> +   switch (rdev->pdev->device) {
> /* RN50 */
> case 0x515e:
> case 0x5969:
> @@ -2639,17 +2638,17 @@ void r100_set_common_regs(struct radeon_device *rdev)
> case 0x5159:
> case 0x515a:
> /* DELL triple head servers */
> -   if ((dev->pdev->subsystem_vendor == 0x1028 /* DELL */) &&
> -   ((dev->pdev->subsystem_device == 0x016c) ||
> -(dev->pdev->subsystem_device == 0x016d) ||
> -(dev->pdev->subsystem_device == 0x016e) ||
> -(dev->pdev->subsystem_device == 0x016f) ||
> -(dev->pdev->subsystem_device == 0x0170) ||
> -(dev->pdev->subsystem_device == 0x017d) ||
> -(dev->pdev->subsystem_device == 0x017e) ||
> -(dev->pdev->subsystem_device == 0x0183) ||
> -(dev->pdev->subsystem_device == 0x018a) ||
> -(dev->pdev->subsystem_device == 0x019a)))
> +   if ((rdev->pdev->subsystem_vendor == 0x1028 /* DELL */) &&
> +   ((rdev->pdev->subsystem_device == 0x016c) ||
> +(rdev->pdev->subsystem_device == 0x016d) ||
> +(rdev->pdev->subsystem_device == 0x016e) ||
> +(rdev->pdev->subsystem_device == 0x016f) ||
> +(rdev->pdev->subsystem_device == 0x0170) ||
> +(rdev->pdev->subsystem_device == 0x017d) ||
> +(rdev->pdev->subsystem_device == 0x017e) ||
> +(rdev->pdev->subsystem_device == 0x0183) ||
> +(rdev->pdev->subsystem_device == 0x018a) ||
> +(rdev->pdev->subsystem_device == 0x019a)))
> force_dac2 = true;
> break;
> }
> @@ -2797,7 +2796,7 @@ void r100_vram_init_sizes(struct radeon_device *rdev)
> rdev->mc.real_

Re: [PATCH v10 75/81] KVM: introspection: add KVMI_VCPU_EVENT_PF

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/e2e6aad169a874e4a2c4f4639c252f6b1d816d7b
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout e2e6aad169a874e4a2c4f4639c252f6b1d816d7b
# save the attached .config to linux build tree
make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c:481:5: warning: no 
previous prototype for 'kvmi_hook' [-Wmissing-prototypes]
 481 | int kvmi_hook(struct kvm *kvm, const struct kvm_introspection_hook 
*hook)
 | ^
>> arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c:1076:6: warning: no 
>> previous prototype for 'kvmi_restricted_page_access' [-Wmissing-prototypes]
1076 | bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, 
gpa_t gpa,
 |  ^~~

vim +/kvmi_restricted_page_access +1076 
arch/x86/kvm/../../../virt/kvm/introspection/kvmi.c

  1075  
> 1076  bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t 
> gpa,
  1077   u8 access)
  1078  {
  1079  u8 allowed_access;
  1080  int err;
  1081  
  1082  err = kvmi_get_gfn_access(kvmi, gpa_to_gfn(gpa), 
&allowed_access);
  1083  if (err)
  1084  return false;
  1085  
  1086  /*
  1087   * We want to be notified only for violations involving access
  1088   * bits that we've specifically cleared
  1089   */
  1090  if (access & (~allowed_access))
  1091  return true;
  1092  
  1093  return false;
  1094  }
  1095  
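
For -Wmissing-prototypes the usual options are either making the function
static or declaring it in the subsystem's internal header so callers and
definition share one prototype.  A sketch, assuming an internal header
such as kvmi_int.h (the header name is a guess):

    /* virt/kvm/introspection/kvmi_int.h - sketch only */
    int kvmi_hook(struct kvm *kvm, const struct kvm_introspection_hook *hook);
    bool kvmi_restricted_page_access(struct kvm_introspection *kvmi, gpa_t gpa,
                                     u8 access);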

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH v10 66/81] KVM: introspection: add KVMI_VCPU_GET_XCR

2020-11-25 Thread kernel test robot
Hi "Adalbert,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on dc924b062488a0376aae41d3e0a27dc99f852a5e]

url:
https://github.com/0day-ci/linux/commits/Adalbert-Laz-r/VM-introspection/20201125-174530
base:dc924b062488a0376aae41d3e0a27dc99f852a5e
config: x86_64-randconfig-a015-20201125 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/f4ab174ddeb4c60f52bd15cab4263701c7f543c9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Adalbert-Laz-r/VM-introspection/20201125-174530
git checkout f4ab174ddeb4c60f52bd15cab4263701c7f543c9
# save the attached .config to linux build tree
make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

   In file included from ./usr/include/linux/kvmi.h:11,
from :32:
>> ./usr/include/asm/kvmi.h:111:2: error: unknown type name 'u64'
 111 |  u64 value;
 |  ^~~
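
The 'unknown type name u64' error is typical of an exported UAPI header:
headers installed under usr/include only see the __u64-style types from
<linux/types.h>, not the kernel-internal u64.  A minimal sketch of the
usual fix (the struct shown is illustrative, not taken from the patch):

    /* arch/x86/include/uapi/asm/kvmi.h - sketch only */
    #include <linux/types.h>

    struct kvmi_vcpu_get_xcr_reply {
        __u64 value;
    };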

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
