Re: svn commit: r329162 - in head/sys/amd64/vmm: amd intel

2018-02-12 Thread Tycho Nightingale

Hi,

> On Feb 12, 2018, at 10:37 AM, Shawn Webb  wrote:
> 
> On Mon, Feb 12, 2018 at 02:45:27PM +, Tycho Nightingale wrote:
>> Author: tychon
>> Date: Mon Feb 12 14:45:27 2018
>> New Revision: 329162
>> URL: https://svnweb.freebsd.org/changeset/base/329162
>> 
>> Log:
>>  Provide further mitigation against CVE-2017-5715 by flushing the
>>  return stack buffer (RSB) upon returning from the guest.
>> 
>>  This was inspired by this linux commit:
>>  
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kvm?id=117cc7a908c83697b0b737d15ae1eb5943afe35b
>> 
>>  Reviewed by:    grehan
>>  Sponsored by:   Dell EMC Isilon
>>  Differential Revision:  https://reviews.freebsd.org/D14272
>> 
>> Modified:
>>  head/sys/amd64/vmm/amd/svm_support.S
>>  head/sys/amd64/vmm/intel/vmcs.c
>>  head/sys/amd64/vmm/intel/vmx.h
>>  head/sys/amd64/vmm/intel/vmx_support.S
>> 
>> Modified: head/sys/amd64/vmm/amd/svm_support.S
>> ==
>> --- head/sys/amd64/vmm/amd/svm_support.S Mon Feb 12 14:44:21 2018
>> (r329161)
>> +++ head/sys/amd64/vmm/amd/svm_support.S Mon Feb 12 14:45:27 2018
>> (r329162)
>> @@ -113,6 +113,23 @@ ENTRY(svm_launch)
>>  movq %rdi, SCTX_RDI(%rax)
>>  movq %rsi, SCTX_RSI(%rax)
>> 
>> +/*
>> + * To prevent malicious branch target predictions from
>> + * affecting the host, overwrite all entries in the RSB upon
>> + * exiting a guest.
>> + */
>> +mov $16, %ecx   /* 16 iterations, two calls per loop */
>> +mov %rsp, %rax
>> +0:  call 2f /* create an RSB entry. */
>> +1:  pause
>> +call 1b /* capture rogue speculation. */
>> +2:  call 2f /* create an RSB entry. */
>> +1:  pause
>> +call 1b /* capture rogue speculation. */
>> +2:  sub $1, %ecx
>> +jnz 0b
>> +mov %rax, %rsp
>> +
>>  /* Restore host state */
>>  pop %r15
>>  pop %r14
>> 
> 
> For amd systems, isn't use of lfence required for performance
> reasons[1]? Or am I conflating two things?
> 
> 1: https://reviews.llvm.org/D41723

For what AMD calls V2 (the window of speculative execution between an indirect 
branch prediction and the resolution of the correct target) there are a few 
mitigations cited in their white paper:


https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf

depending on the specific code you are trying to “fix”.  In my interpretation, 
lfence is a component of several of the possible mitigations when one wants to 
“fix” a specific indirect branch, but it does not ensure that subsequent 
branches will not be speculated around.  In this case we are trying to guard 
against the more generic case of "entering more privileged code”, i.e. 
returning from the guest to the hypervisor (aka the host), and to protect all 
subsequent indirect branches without needing to apply an lfence to each of them 
individually.  To do that, I’ve implemented mitigation V2-3, where the return 
address predictor is filled with benign entries.
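
For contrast, the per-branch approach puts an lfence ahead of each individual 
indirect branch.  A minimal sketch, against a purely hypothetical dispatch 
through %rax (not anything in the tree):

	/*
	 * Hypothetical hardened indirect branch: the dispatch-serializing
	 * lfence keeps execution from running ahead through a predicted
	 * target, but it only covers this one branch site.
	 */
	lfence
	jmp	*%rax

That protects a single call/jmp site, which is why the guest-exit path stuffs 
the return stack predictor instead.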

Does that help?

Tycho


Re: svn commit: r329162 - in head/sys/amd64/vmm: amd intel

2018-02-12 Thread Shawn Webb
On Mon, Feb 12, 2018 at 02:45:27PM +, Tycho Nightingale wrote:
> Author: tychon
> Date: Mon Feb 12 14:45:27 2018
> New Revision: 329162
> URL: https://svnweb.freebsd.org/changeset/base/329162
> 
> Log:
>   Provide further mitigation against CVE-2017-5715 by flushing the
>   return stack buffer (RSB) upon returning from the guest.
>   
>   This was inspired by this linux commit:
>   
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kvm?id=117cc7a908c83697b0b737d15ae1eb5943afe35b
>   
>   Reviewed by:    grehan
>   Sponsored by:   Dell EMC Isilon
>   Differential Revision:  https://reviews.freebsd.org/D14272
> 
> Modified:
>   head/sys/amd64/vmm/amd/svm_support.S
>   head/sys/amd64/vmm/intel/vmcs.c
>   head/sys/amd64/vmm/intel/vmx.h
>   head/sys/amd64/vmm/intel/vmx_support.S
> 
> Modified: head/sys/amd64/vmm/amd/svm_support.S
> ==
> --- head/sys/amd64/vmm/amd/svm_support.S  Mon Feb 12 14:44:21 2018
> (r329161)
> +++ head/sys/amd64/vmm/amd/svm_support.S  Mon Feb 12 14:45:27 2018
> (r329162)
> @@ -113,6 +113,23 @@ ENTRY(svm_launch)
>   movq %rdi, SCTX_RDI(%rax)
>   movq %rsi, SCTX_RSI(%rax)
>  
> + /*
> +  * To prevent malicious branch target predictions from
> +  * affecting the host, overwrite all entries in the RSB upon
> +  * exiting a guest.
> +  */
> + mov $16, %ecx   /* 16 iterations, two calls per loop */
> + mov %rsp, %rax
> +0:   call 2f /* create an RSB entry. */
> +1:   pause
> + call 1b /* capture rogue speculation. */
> +2:   call 2f /* create an RSB entry. */
> +1:   pause
> + call 1b /* capture rogue speculation. */
> +2:   sub $1, %ecx
> + jnz 0b
> + mov %rax, %rsp
> +
>   /* Restore host state */
>   pop %r15
>   pop %r14
> 

For amd systems, isn't use of lfence required for performance
reasons[1]? Or am I conflating two things?

1: https://reviews.llvm.org/D41723

Thanks,

-- 
Shawn Webb
Cofounder and Security Engineer
HardenedBSD

Tor-ified Signal:    +1 443-546-8752
GPG Key ID:  0x6A84658F52456EEE
GPG Key Fingerprint: 2ABA B6BD EF6A F486 BE89  3D9E 6A84 658F 5245 6EEE




Re: svn commit: r329162 - in head/sys/amd64/vmm: amd intel

2018-02-12 Thread Rodney W. Grimes
> Author: tychon
> Date: Mon Feb 12 14:45:27 2018
> New Revision: 329162
> URL: https://svnweb.freebsd.org/changeset/base/329162
> 
> Log:
>   Provide further mitigation against CVE-2017-5715 by flushing the
>   return stack buffer (RSB) upon returning from the guest.
>   
>   This was inspired by this linux commit:
>   
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kvm?id=117cc7a908c83697b0b737d15ae1eb5943afe35b
>   
>   Reviewed by:    grehan
>   Sponsored by:   Dell EMC Isilon
>   Differential Revision:  https://reviews.freebsd.org/D14272

Plans to MFC this?
It would be good to have as many Meltdown/Spectre patches as possible
in the upcoming 11.2 release.


> Modified:
>   head/sys/amd64/vmm/amd/svm_support.S
>   head/sys/amd64/vmm/intel/vmcs.c
>   head/sys/amd64/vmm/intel/vmx.h
>   head/sys/amd64/vmm/intel/vmx_support.S
> 
> Modified: head/sys/amd64/vmm/amd/svm_support.S
> ==
> --- head/sys/amd64/vmm/amd/svm_support.S  Mon Feb 12 14:44:21 2018
> (r329161)
> +++ head/sys/amd64/vmm/amd/svm_support.S  Mon Feb 12 14:45:27 2018
> (r329162)
> @@ -113,6 +113,23 @@ ENTRY(svm_launch)
>   movq %rdi, SCTX_RDI(%rax)
>   movq %rsi, SCTX_RSI(%rax)
>  
> + /*
> +  * To prevent malicious branch target predictions from
> +  * affecting the host, overwrite all entries in the RSB upon
> +  * exiting a guest.
> +  */
> + mov $16, %ecx   /* 16 iterations, two calls per loop */
> + mov %rsp, %rax
> +0:   call 2f /* create an RSB entry. */
> +1:   pause
> + call 1b /* capture rogue speculation. */
> +2:   call 2f /* create an RSB entry. */
> +1:   pause
> + call 1b /* capture rogue speculation. */
> +2:   sub $1, %ecx
> + jnz 0b
> + mov %rax, %rsp
> +
>   /* Restore host state */
>   pop %r15
>   pop %r14
> 
> Modified: head/sys/amd64/vmm/intel/vmcs.c
> ==
> --- head/sys/amd64/vmm/intel/vmcs.c   Mon Feb 12 14:44:21 2018
> (r329161)
> +++ head/sys/amd64/vmm/intel/vmcs.c   Mon Feb 12 14:45:27 2018
> (r329162)
> @@ -34,6 +34,7 @@
>  __FBSDID("$FreeBSD$");
>  
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -52,6 +53,12 @@ __FBSDID("$FreeBSD$");
>  #include 
>  #endif
>  
> +SYSCTL_DECL(_hw_vmm_vmx);
> +
> +static int no_flush_rsb;
> +SYSCTL_INT(_hw_vmm_vmx, OID_AUTO, no_flush_rsb, CTLFLAG_RW,
> +    &no_flush_rsb, 0, "Do not flush RSB upon vmexit");
> +
>  static uint64_t
>  vmcs_fix_regval(uint32_t encoding, uint64_t val)
>  {
> @@ -403,8 +410,15 @@ vmcs_init(struct vmcs *vmcs)
>   goto done;
>  
>   /* instruction pointer */
> - if ((error = vmwrite(VMCS_HOST_RIP, (u_long)vmx_exit_guest)) != 0)
> - goto done;
> + if (no_flush_rsb) {
> + if ((error = vmwrite(VMCS_HOST_RIP,
> + (u_long)vmx_exit_guest)) != 0)
> + goto done;
> + } else {
> + if ((error = vmwrite(VMCS_HOST_RIP,
> + (u_long)vmx_exit_guest_flush_rsb)) != 0)
> + goto done;
> + }
>  
>   /* link pointer */
>   if ((error = vmwrite(VMCS_LINK_POINTER, ~0)) != 0)
> 
> Modified: head/sys/amd64/vmm/intel/vmx.h
> ==
> --- head/sys/amd64/vmm/intel/vmx.hMon Feb 12 14:44:21 2018
> (r329161)
> +++ head/sys/amd64/vmm/intel/vmx.hMon Feb 12 14:45:27 2018
> (r329162)
> @@ -150,5 +150,6 @@ u_long vmx_fix_cr4(u_long cr4);
>  int  vmx_set_tsc_offset(struct vmx *vmx, int vcpu, uint64_t offset);
>  
>  extern char  vmx_exit_guest[];
> +extern char  vmx_exit_guest_flush_rsb[];
>  
>  #endif
> 
> Modified: head/sys/amd64/vmm/intel/vmx_support.S
> ==
> --- head/sys/amd64/vmm/intel/vmx_support.SMon Feb 12 14:44:21 2018
> (r329161)
> +++ head/sys/amd64/vmm/intel/vmx_support.SMon Feb 12 14:45:27 2018
> (r329162)
> @@ -42,6 +42,29 @@
>  #define VLEAVE  pop %rbp
>  
>  /*
> + * Save the guest context.
> + */
> +#define  VMX_GUEST_SAVE  \
> + movq %rdi,VMXCTX_GUEST_RDI(%rsp);\
> + movq %rsi,VMXCTX_GUEST_RSI(%rsp);\
> + movq %rdx,VMXCTX_GUEST_RDX(%rsp);\
> + movq %rcx,VMXCTX_GUEST_RCX(%rsp);\
> + movq %r8,VMXCTX_GUEST_R8(%rsp);  \
> + movq %r9,VMXCTX_GUEST_R9(%rsp);  \
> + movq %rax,VMXCTX_GUEST_RAX(%rsp);\
> + movq %rbx,VMXCTX_GUEST_RBX(%rsp);\
> + movq