Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-14 Thread Andy Lutomirski



> On May 14, 2019, at 8:36 AM, Alexandre Chartre wrote:
> 
> 
>> On 5/14/19 9:21 AM, Peter Zijlstra wrote:
>>> On Mon, May 13, 2019 at 07:02:30PM -0700, Andy Lutomirski wrote:
>>> This sounds like a great use case for static_call().  PeterZ, do you
>>> suppose we could wire up static_call() with the module infrastructure
>>> to make it easy to do "static_call to such-and-such GPL module symbol
>>> if that symbol is in a loaded module, else nop"?
>> You're basically asking it to do dynamic linking. And I suppose that is
>> technically possible.
>> However, I'm really starting to think kvm (or at least these parts of it
>> that want to play these games) had better not be a module anymore.
> 
> Maybe we can use an atomic notifier (e.g. page_fault_notifier)?
> 
> 

IMO that’s worse. I want to be able to read do_page_fault() and understand what 
happens and in what order.

Having do_page_fault run with the wrong CR3 is so fundamental to its operation 
that it needs to be very obvious what’s happening.

Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-14 Thread Alexandre Chartre



On 5/14/19 9:21 AM, Peter Zijlstra wrote:

On Mon, May 13, 2019 at 07:02:30PM -0700, Andy Lutomirski wrote:


This sounds like a great use case for static_call().  PeterZ, do you
suppose we could wire up static_call() with the module infrastructure
to make it easy to do "static_call to such-and-such GPL module symbol
if that symbol is in a loaded module, else nop"?


You're basically asking it to do dynamic linking. And I suppose that is
technically possible.

However, I'm really starting to think kvm (or at least these parts of it
that want to play these games) had better not be a module anymore.



Maybe we can use an atomic notifier (e.g. page_fault_notifier)?

alex.
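
A minimal sketch of what such an atomic notifier could look like, assuming a
page_fault_notifier chain living in arch/x86/mm/fault.c; the argument
structure, names and hook placement below are illustrative only, not part of
this series:

#include <linux/notifier.h>

struct page_fault_notifier_args {
	struct pt_regs *regs;
	unsigned long error_code;
	unsigned long address;
};

static ATOMIC_NOTIFIER_HEAD(page_fault_notifier);

int register_page_fault_notifier(struct notifier_block *nb)
{
	return atomic_notifier_chain_register(&page_fault_notifier, nb);
}
EXPORT_SYMBOL_GPL(register_page_fault_notifier);

/* Would be called early from do_page_fault(); a NOTIFY_STOP return
 * from a notifier (e.g. KVM) means the fault has been handled.
 */
static bool page_fault_notify(struct pt_regs *regs, unsigned long error_code,
			      unsigned long address)
{
	struct page_fault_notifier_args args = {
		.regs		= regs,
		.error_code	= error_code,
		.address	= address,
	};

	return atomic_notifier_call_chain(&page_fault_notifier,
					  0, &args) == NOTIFY_STOP;
}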


Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-14 Thread Peter Zijlstra
On Mon, May 13, 2019 at 07:02:30PM -0700, Andy Lutomirski wrote:

> This sounds like a great use case for static_call().  PeterZ, do you
> suppose we could wire up static_call() with the module infrastructure
> to make it easy to do "static_call to such-and-such GPL module symbol
> if that symbol is in a loaded module, else nop"?

You're basically asking it to do dynamic linking. And I suppose that is
technically possible.

However, I'm really starting to think kvm (or at least these parts of it
that want to play these games) had better not be a module anymore.




Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Andy Lutomirski
On Mon, May 13, 2019 at 2:26 PM Liran Alon  wrote:
>
>
>
> > On 13 May 2019, at 18:15, Peter Zijlstra  wrote:
> >
> > On Mon, May 13, 2019 at 04:38:32PM +0200, Alexandre Chartre wrote:
> >> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> >> index 46df4c6..317e105 100644
> >> --- a/arch/x86/mm/fault.c
> >> +++ b/arch/x86/mm/fault.c
> >> @@ -33,6 +33,10 @@
> >> #define CREATE_TRACE_POINTS
> >> #include 
> >>
> >> +bool (*kvm_page_fault_handler)(struct pt_regs *regs, unsigned long error_code,
> >> +   unsigned long address);
> >> +EXPORT_SYMBOL(kvm_page_fault_handler);
> >
> > NAK NAK NAK NAK
> >
> > This is one of the biggest anti-patterns around.
>
> I agree.
> I think that mm should expose a mm_set_kvm_page_fault_handler() or something
> (give it a better name), similar to how arch/x86/kernel/irq.c has
> kvm_set_posted_intr_wakeup_handler().
>
> -Liran
>

This sounds like a great use case for static_call().  PeterZ, do you
suppose we could wire up static_call() with the module infrastructure
to make it easy to do "static_call to such-and-such GPL module symbol
if that symbol is in a loaded module, else nop"?
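
For reference, a rough sketch of the pattern being suggested, written against
the static_call() API as it later landed upstream (DEFINE_STATIC_CALL(),
static_call(), static_call_update()); the handler names are hypothetical, and
exporting the call so a module can update it is exactly the missing wiring
being asked about:

#include <linux/static_call.h>

/* Default target when no KVM isolation handler is registered. */
static bool kvm_page_fault_nop(struct pt_regs *regs, unsigned long error_code,
			       unsigned long address)
{
	return false;	/* fault not handled */
}

DEFINE_STATIC_CALL(kvm_page_fault, kvm_page_fault_nop);

/* Call site, early in do_page_fault(): patched to a direct call into
 * KVM when the module is loaded, back to the nop default otherwise.
 */
static bool kvm_handle_page_fault(struct pt_regs *regs,
				  unsigned long error_code,
				  unsigned long address)
{
	return static_call(kvm_page_fault)(regs, error_code, address);
}

/* On KVM module load (kvm_isolation_page_fault is a made-up name):
 *	static_call_update(kvm_page_fault, kvm_isolation_page_fault);
 * and on unload:
 *	static_call_update(kvm_page_fault, kvm_page_fault_nop);
 */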


Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Liran Alon



> On 13 May 2019, at 18:15, Peter Zijlstra  wrote:
> 
> On Mon, May 13, 2019 at 04:38:32PM +0200, Alexandre Chartre wrote:
>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>> index 46df4c6..317e105 100644
>> --- a/arch/x86/mm/fault.c
>> +++ b/arch/x86/mm/fault.c
>> @@ -33,6 +33,10 @@
>> #define CREATE_TRACE_POINTS
>> #include 
>> 
>> +bool (*kvm_page_fault_handler)(struct pt_regs *regs, unsigned long error_code,
>> +   unsigned long address);
>> +EXPORT_SYMBOL(kvm_page_fault_handler);
> 
> NAK NAK NAK NAK
> 
> This is one of the biggest anti-patterns around.

I agree.
I think that mm should expose a mm_set_kvm_page_fault_handler() or something
(give it a better name), similar to how arch/x86/kernel/irq.c has
kvm_set_posted_intr_wakeup_handler().

-Liran
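
A minimal sketch of that suggestion, modeled on the pattern used by
kvm_set_posted_intr_wakeup_handler(); the function and typedef names are
placeholders:

/* In arch/x86/mm/fault.c (hypothetical). */
typedef bool (*kvm_page_fault_handler_t)(struct pt_regs *regs,
					 unsigned long error_code,
					 unsigned long address);

/* Default handler: not handled, fall through to the normal fault path. */
static bool kvm_page_fault_nop(struct pt_regs *regs, unsigned long error_code,
			       unsigned long address)
{
	return false;
}

static kvm_page_fault_handler_t kvm_page_fault_handler = kvm_page_fault_nop;

void mm_set_kvm_page_fault_handler(kvm_page_fault_handler_t handler)
{
	kvm_page_fault_handler = handler ? handler : kvm_page_fault_nop;
}
EXPORT_SYMBOL_GPL(mm_set_kvm_page_fault_handler);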




Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Alexandre Chartre




On 5/13/19 6:02 PM, Andy Lutomirski wrote:

On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre wrote:


The KVM page fault handler handles page faults occurring while using
the KVM address space by switching to the kernel address space and
retrying the access (unless the fault occurs while switching
to the kernel address space). Processing of page faults occurring
while using the kernel address space is unchanged.

The page fault log is cleared when creating a VM so that page fault
information doesn't persist when QEMU is stopped and restarted.


Are you saying that a page fault will just exit isolation?  This
completely defeats most of the security, right?  Sure, it still helps
with side channels, but not with actual software bugs.



Yes, a page fault exits isolation so that the faulting instruction can be
retried with the full kernel address space. When exiting isolation, we also
want to kick the sibling hyperthread and pin it so that it can't steal
secrets while we use the kernel address space, but that's not implemented
in this series (see the TODO comment in kvm_isolation_exit() in patch 25
"kvm/isolation: implement actual KVM isolation enter/exit").

alex.


Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Andy Lutomirski
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre wrote:
>
> The KVM page fault handler handles page faults occurring while using
> the KVM address space by switching to the kernel address space and
> retrying the access (unless the fault occurs while switching
> to the kernel address space). Processing of page faults occurring
> while using the kernel address space is unchanged.
>
> The page fault log is cleared when creating a VM so that page fault
> information doesn't persist when QEMU is stopped and restarted.

Are you saying that a page fault will just exit isolation?  This
completely defeats most of the security, right?  Sure, it still helps
with side channels, but not with actual software bugs.


Re: [RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Peter Zijlstra
On Mon, May 13, 2019 at 04:38:32PM +0200, Alexandre Chartre wrote:
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 46df4c6..317e105 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -33,6 +33,10 @@
>  #define CREATE_TRACE_POINTS
>  #include 
>  
> +bool (*kvm_page_fault_handler)(struct pt_regs *regs, unsigned long error_code,
> +unsigned long address);
> +EXPORT_SYMBOL(kvm_page_fault_handler);

NAK NAK NAK NAK

This is one of the biggest anti-patterns around.


[RFC KVM 24/27] kvm/isolation: KVM page fault handler

2019-05-13 Thread Alexandre Chartre
The KVM page fault handler handles page faults occurring while using
the KVM address space by switching to the kernel address space and
retrying the access (unless the fault occurs while switching
to the kernel address space). Processing of page faults occurring
while using the kernel address space is unchanged.

The page fault log is cleared when creating a VM so that page fault
information doesn't persist when QEMU is stopped and restarted.

The KVM module parameter page_fault_stack can be used to disable
dumping the stack trace when a page fault occurs while using the KVM
address space. The fault will still be reported, but without the
stack trace.

Signed-off-by: Alexandre Chartre 
---
 arch/x86/kernel/dumpstack.c |1 +
 arch/x86/kvm/isolation.c|  202 +++
 arch/x86/mm/fault.c |   12 +++
 3 files changed, 215 insertions(+), 0 deletions(-)
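
(The fault.c side of the patch boils down to checking the exported
kvm_page_fault_handler pointer early in the page fault path; roughly the
following, as a sketch of the intent rather than the actual hunk:)

	/* In do_page_fault(), before the normal kernel fault handling:
	 * if KVM registered a handler and it claims the fault (i.e. we
	 * faulted while running with the KVM address space), it switches
	 * back to the kernel address space and the access is retried on
	 * return from the fault.
	 */
	if (kvm_page_fault_handler &&
	    kvm_page_fault_handler(regs, error_code, address))
		return;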

diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index 2b58864..aa28763 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -292,6 +292,7 @@ void show_stack(struct task_struct *task, unsigned long *sp)
 
show_trace_log_lvl(task, NULL, sp, KERN_DEFAULT);
 }
+EXPORT_SYMBOL(show_stack);
 
 void show_stack_regs(struct pt_regs *regs)
 {
diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index e7979b3..db0a7ce 100644
--- a/arch/x86/kvm/isolation.c
+++ b/arch/x86/kvm/isolation.c
@@ -8,6 +8,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -17,6 +18,9 @@
 
 #include "isolation.h"
 
+extern bool (*kvm_page_fault_handler)(struct pt_regs *regs,
+ unsigned long error_code,
+ unsigned long address);
 
 enum page_table_level {
PGT_LEVEL_PTE,
@@ -91,6 +95,25 @@ struct kvm_range_mapping {
 static LIST_HEAD(kvm_range_mapping_list);
 static DEFINE_MUTEX(kvm_range_mapping_lock);
 
+/*
+ * When a page fault occurs while running with the KVM address space,
+ * the KVM page fault handler prints information about the fault (in
+ * particular the stack trace), and then switches back to the kernel
+ * address space.
+ *
+ * The information printed by the KVM page fault handler can be used to
+ * find out which data is not mapped in the KVM address space, so that
+ * the KVM address space can be augmented to include the missing mapping
+ * and we don't fault at that same place anymore.
+ *
+ * The following variables track page faults that occurred while running
+ * with the KVM address space, to avoid displaying the same information.
+ */
+
+#define KVM_LAST_FAULT_COUNT   128
+
+static unsigned long kvm_last_fault[KVM_LAST_FAULT_COUNT];
+
 
 struct mm_struct kvm_mm = {
.mm_rb  = RB_ROOT,
@@ -126,6 +149,14 @@ static void kvm_clear_mapping(void *ptr, size_t size,
 static bool __read_mostly address_space_isolation;
 module_param(address_space_isolation, bool, 0444);
 
+/*
+ * When set to true, KVM dumps the stack when a page fault occurs while
+ * running with the KVM address space. Otherwise the page fault is still
+ * reported but without the stack trace.
+ */
+static bool __read_mostly page_fault_stack = true;
+module_param(page_fault_stack, bool, 0444);
+
 static struct kvm_range_mapping *kvm_get_range_mapping_locked(void *ptr,
  bool *subset)
 {
@@ -1195,6 +1226,173 @@ static void kvm_reset_all_task_mapping(void)
	mutex_unlock(&kvm_task_mapping_lock);
 }
 
+static int bad_address(void *p)
+{
+   unsigned long dummy;
+
+   return probe_kernel_address((unsigned long *)p, dummy);
+}
+
+static void kvm_dump_pagetable(pgd_t *base, unsigned long address)
+{
+   pgd_t *pgd = base + pgd_index(address);
+   p4d_t *p4d;
+   pud_t *pud;
+   pmd_t *pmd;
+   pte_t *pte;
+
+   pr_info("BASE %px ", base);
+
+   if (bad_address(pgd))
+   goto bad;
+
+   pr_cont("PGD %lx ", pgd_val(*pgd));
+
+   if (!pgd_present(*pgd))
+   goto out;
+
+   p4d = p4d_offset(pgd, address);
+   if (bad_address(p4d))
+   goto bad;
+
+   pr_cont("P4D %lx ", p4d_val(*p4d));
+   if (!p4d_present(*p4d) || p4d_large(*p4d))
+   goto out;
+
+   pud = pud_offset(p4d, address);
+   if (bad_address(pud))
+   goto bad;
+
+   pr_cont("PUD %lx ", pud_val(*pud));
+   if (!pud_present(*pud) || pud_large(*pud))
+   goto out;
+
+   pmd = pmd_offset(pud, address);
+   if (bad_address(pmd))
+   goto bad;
+
+   pr_cont("PMD %lx ", pmd_val(*pmd));
+   if (!pmd_present(*pmd) || pmd_large(*pmd))
+   goto out;
+
+   pte = pte_offset_kernel(pmd, address);
+   if (bad_address(pte))
+   goto bad;
+
+   pr_cont("PTE %lx", pte_val(*pte));
+out:
+   pr_cont("\n");
+   return;
+bad:
+   pr_info("BAD\n");
+}
+
+static void