Re: [RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-04-07 Thread Brijesh Singh



On 04/07/2017 06:33 AM, Borislav Petkov wrote:

On Thu, Apr 06, 2017 at 01:37:41PM -0500, Brijesh Singh wrote:

I did think about the prot idea, but I ran into another corner case that may require
changing the signatures of phys_pud_init() and phys_pmd_init(). The paddr_start
and paddr_end args into kernel_physical_mapping_init() should be aligned down to the
PMD level (see comment [1]). So we may encounter a case where our address range
is part of a large page but we need to clear only one entry (i.e., we are asked to
clear just one 4K page inside a 2M region). In that case we would need to pass
additional arguments into kernel_physical_mapping_init(), phys_pud_init() and
phys_pmd_init() to hint to the splitting code that it should use our prot for the
specific entries, while all the other entries keep the old_prot.


Ok, but your !4K case:

+   /*
+* virtual address is part of large page, create the page
+* table mapping to use smaller pages (4K). The virtual and
+* physical address must be aligned to PMD level.
+*/
+   kernel_physical_mapping_init(__pa(vaddr & PMD_MASK),
+__pa((vaddr_end & PMD_MASK) + PMD_SIZE),
+0);


would map a 2M page as encrypted by default. What if we want to map a 2M page
frame as ~_PAGE_ENC?



Thanks for the feedback, I will make sure that we cover all the other cases in the
final patch. This is untested, but something like the below can be used to check
whether we can change the large page in one go or must request a split.

+   psize = page_level_size(level);
+   pmask = page_level_mask(level);
+
+   /*
+* Check, whether we can change the large page in one go.
+* We request a split, when the address is not aligned and
+* the number of pages to set or clear encryption bit is smaller
+* than the number of pages in the large page.
+*/
+   if (vaddr == (vaddr & pmask) && ((vaddr_end - vaddr) >= psize)) {
+   /* UPDATE PMD HERE */
+   vaddr_next = (vaddr & pmask) + psize;
+   continue;
+   }
+
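For reference, a rough sketch of how this check could slot into the loop from the
patch quoted later in this thread, with the partially-covered case falling back to
the 4K split path (untested, using the same variable names):

	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
		kpte = lookup_address(vaddr, &level);
		if (!kpte || pte_none(*kpte))
			return 1;

		psize = page_level_size(level);
		pmask = page_level_mask(level);

		/*
		 * Aligned and covering the whole large page:
		 * change the encryption bit in one go.
		 */
		if (vaddr == (vaddr & pmask) && ((vaddr_end - vaddr) >= psize)) {
			/* UPDATE PMD HERE */
			vaddr_next = (vaddr & pmask) + psize;
			continue;
		}

		/*
		 * Partial coverage: split the large page into 4K pages,
		 * then re-examine the same address on the next iteration.
		 */
		kernel_physical_mapping_init(__pa(vaddr & pmask),
					     __pa((vaddr & pmask) + psize), 0);
		vaddr_next = vaddr;
	}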



Re: [RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-04-06 Thread Brijesh Singh



On 04/06/2017 12:25 PM, Borislav Petkov wrote:

Hi Brijesh,

On Thu, Apr 06, 2017 at 09:05:03AM -0500, Brijesh Singh wrote:

I looked into arch/x86/mm/init_{32,64}.c and, as you pointed out, the files contain
routines to do basic page splitting. I think that is sufficient for our usage.


Good :)


I should be able to drop the memblock patch from the series and update
Patch 15 [1] to use kernel_physical_mapping_init().

kernel_physical_mapping_init() creates the page table mapping using the
default KERNEL_PAGE attributes. I tried to extend the function by passing a
'bool enc' flag to hint whether to clear or set _PAGE_ENC when splitting the
pages, but the code did not look clean, hence I dropped that idea.


Or, you could have a

__kernel_physical_mapping_init_prot(..., prot)

helper which gets a protection argument and hands it down. The lower
levels already hand down prot which is good.



I did think about the prot idea, but I ran into another corner case that may require
changing the signatures of phys_pud_init() and phys_pmd_init(). The paddr_start
and paddr_end args into kernel_physical_mapping_init() should be aligned down to the
PMD level (see comment [1]). So we may encounter a case where our address range
is part of a large page but we need to clear only one entry (i.e., we are asked to
clear just one 4K page inside a 2M region). In that case we would need to pass
additional arguments into kernel_physical_mapping_init(), phys_pud_init() and
phys_pmd_init() to hint to the splitting code that it should use our prot for the
specific entries, while all the other entries keep the old_prot.

[1] http://lxr.free-electrons.com/source/arch/x86/mm/init_64.c#L546
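For illustration, the kind of signature change I mean might look roughly like this
(hypothetical, untested; the extra range tells the split code which entries get the
new prot):

	/*
	 * Hypothetical: apply 'prot' only to entries inside
	 * [prot_start, prot_end); all other entries created by the
	 * split keep old_prot.
	 */
	static unsigned long
	phys_pmd_init(pmd_t *pmd, unsigned long paddr, unsigned long paddr_end,
		      unsigned long page_size_mask, pgprot_t prot,
		      unsigned long prot_start, unsigned long prot_end);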



The interface kernel_physical_mapping_init() will then itself call:

__kernel_physical_mapping_init_prot(..., PAGE_KERNEL);

for the normal cases.

That in a pre-patch of course.

How does that sound?
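Untested sketch of what such a pre-patch could look like (illustration only, with
the existing body elided):

	unsigned long __init
	__kernel_physical_mapping_init_prot(unsigned long paddr_start,
					    unsigned long paddr_end,
					    unsigned long page_size_mask,
					    pgprot_t prot)
	{
		/*
		 * Existing body of kernel_physical_mapping_init(), with
		 * 'prot' handed down to phys_pud_init()/phys_pmd_init()
		 * instead of the hardcoded PAGE_KERNEL* attributes.
		 */
		...
	}

	unsigned long __init
	kernel_physical_mapping_init(unsigned long paddr_start,
				     unsigned long paddr_end,
				     unsigned long page_size_mask)
	{
		return __kernel_physical_mapping_init_prot(paddr_start,
							   paddr_end,
							   page_size_mask,
							   PAGE_KERNEL);
	}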




Re: [RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-04-06 Thread Brijesh Singh

Hi Boris,

On 03/17/2017 05:17 AM, Borislav Petkov wrote:

On Thu, Mar 16, 2017 at 11:25:36PM +0100, Paolo Bonzini wrote:

The kvmclock memory is initially zero so there is no need for the
hypervisor to allocate anything; the point of these patches is just to
access the data in a natural way from Linux source code.


I realize that.


I also don't really like the patch as is (plus it fails modpost), but
IMO reusing __change_page_attr and __split_large_page is the right thing
to do.


Right, so teaching pageattr.c about memblock could theoretically come
around and bite us later when a page allocated with memblock gets freed
with free_page().

And looking at this more, we have all this kernel pagetable preparation
code down the init_mem_mapping() call and the pagetable setup in
arch/x86/mm/init_{32,64}.c

And that code even does some basic page splitting. Oh and it uses
alloc_low_pages() which knows whether to do memblock reservation or the
common __get_free_pages() when slabs are up.



I looked into arch/x86/mm/init_{32,64}.c and, as you pointed out, the files contain
routines to do basic page splitting. I think that is sufficient for our usage.

I should be able to drop the memblock patch from the series and update
Patch 15 [1] to use kernel_physical_mapping_init().

kernel_physical_mapping_init() creates the page table mapping using the
default KERNEL_PAGE attributes. I tried to extend the function by passing a
'bool enc' flag to hint whether to clear or set _PAGE_ENC when splitting the
pages, but the code did not look clean, hence I dropped that idea. Instead,
I took the approach below. I did some runtime testing and it seems to work okay.

[1] http://marc.info/?l=linux-mm=148846773731212=2

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7df5f4c..de16ef4 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 

 #include 
@@ -22,6 +23,8 @@
 #include 
 #include 
 
+#include "mm_internal.h"

+
 extern pmdval_t early_pmd_flags;
 int __init __early_make_pgtable(unsigned long, pmdval_t);
 void __init __early_pgtable_flush(void);
@@ -258,6 +261,72 @@ static void sme_free(struct device *dev, size_t size, void *vaddr,
swiotlb_free_coherent(dev, size, vaddr, dma_handle);
 }
 
+static int __init early_set_memory_enc_dec(resource_size_t paddr,
+  unsigned long size, bool enc)
+{
+   pte_t *kpte;
+   int level;
+   unsigned long vaddr, vaddr_end, vaddr_next;
+
+   vaddr = (unsigned long)__va(paddr);
+   vaddr_next = vaddr;
+   vaddr_end = vaddr + size;
+
+   /*
+* We are going to change the physical page attribute from C=1 to C=0.
+* Flush the caches to ensure that all the data with C=1 is flushed to
+* memory. Any caching of the vaddr after function returns will
+* use C=0.
+*/
+   clflush_cache_range(__va(paddr), size);
+
+   for (; vaddr < vaddr_end; vaddr = vaddr_next) {
+   kpte = lookup_address(vaddr, &level);
+   if (!kpte || pte_none(*kpte))
+   return 1;
+
+   if (level == PG_LEVEL_4K) {
+   pte_t new_pte;
+   unsigned long pfn = pte_pfn(*kpte);
+   pgprot_t new_prot = pte_pgprot(*kpte);
+
+   if (enc)
+   pgprot_val(new_prot) |= _PAGE_ENC;
+   else
+   pgprot_val(new_prot) &= ~_PAGE_ENC;
+
+   new_pte = pfn_pte(pfn, canon_pgprot(new_prot));
+   pr_info("  pte %016lx -> 0x%016lx\n", pte_val(*kpte),
+   pte_val(new_pte));
+   set_pte_atomic(kpte, new_pte);
+   vaddr_next = (vaddr & PAGE_MASK) + PAGE_SIZE;
+   continue;
+   }
+
+   /*
+* virtual address is part of large page, create the page
+* table mapping to use smaller pages (4K). The virtual and
+* physical address must be aligned to PMD level.
+*/
+   kernel_physical_mapping_init(__pa(vaddr & PMD_MASK),
+__pa((vaddr_end & PMD_MASK) + PMD_SIZE),
+0);
+   }
+
+   __flush_tlb_all();
+   return 0;
+}
+
+int __init early_set_memory_decrypted(resource_size_t paddr, unsigned long size)
+{
+   return early_set_memory_enc_dec(paddr, size, false);
+}
+
+int __init early_set_memory_encrypted(resource_size_t paddr, unsigned long size)
+{
+   return early_set_memory_enc_dec(paddr, size, true);
+}
+


So what would be much cleaner, IMHO, is if one would reuse that code to
change init_mm.pgd mappings early without copying pageattr.c.

init_mem_mapping() gets called before kvm_guest_init() in 

Re: [RFC PATCH v2 18/32] kvm: svm: Use the hardware provided GPA instead of page walk

2017-03-29 Thread Brijesh Singh

Hi Boris,

On 03/29/2017 10:14 AM, Borislav Petkov wrote:

On Thu, Mar 02, 2017 at 10:16:05AM -0500, Brijesh Singh wrote:

From: Tom Lendacky <thomas.lenda...@amd.com>

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
Reviewed-by: Borislav Petkov <b...@suse.de>


I think I already asked you to remove Reviewed-by tags when you have to
change an already reviewed patch in a non-trivial manner. Why does this
one still have my Reviewed-by tag?



Actually, this patch is included in the RFCv2 series for completeness.

The patch has already been reviewed and accepted in the upstream KVM tree, but it
was not present in the tip branch, hence I cherry-picked it into the RFC so that we
do not break the build. The SEV runtime behavior needs this patch; I tried to
highlight that in the cover letter. It was my mistake that I missed dropping the
Reviewed-by tag during the cherry-pick. Sorry about that, I will be extra careful
next time around. Thanks


~ Brijesh


Re: [RFC PATCH v2 15/32] x86: Add support for changing memory encryption attribute in early boot

2017-03-27 Thread Brijesh Singh

Hi Boris,

On 03/24/2017 12:12 PM, Borislav Petkov wrote:

 }

+static inline int __init early_set_memory_decrypted(void *addr,
+   unsigned long size)
+{
+   return 1;



return 1 when !CONFIG_AMD_MEM_ENCRYPT ?

The non-early variants return 0.



I will fix it and use the same return value.
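I.e., something like this (sketch, matching the convention of the non-early
variants):

	static inline int __init early_set_memory_decrypted(void *addr,
							    unsigned long size)
	{
		return 0;	/* nothing to do when !CONFIG_AMD_MEM_ENCRYPT */
	}

	static inline int __init early_set_memory_encrypted(void *addr,
							    unsigned long size)
	{
		return 0;
	}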


+}
+
+static inline int __init early_set_memory_encrypted(void *addr,
+   unsigned long size)
+{
+   return 1;
+}
+
 #define __sme_pa   __pa



+   unsigned long pfn, npages;
+   unsigned long addr = (unsigned long)vaddr & PAGE_MASK;
+
+   /* We are going to change the physical page attribute from C=1 to C=0.
+* Flush the caches to ensure that all the data with C=1 is flushed to
+* memory. Any caching of the vaddr after function returns will
+* use C=0.
+*/


Kernel comments style is:

/*
 * A sentence ending with a full-stop.
 * Another sentence. ...
 * More sentences. ...
 */



I will update to use kernel comment style.



+   clflush_cache_range(vaddr, size);
+
+   npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+   pfn = slow_virt_to_phys((void *)addr) >> PAGE_SHIFT;
+
+   return kernel_map_pages_in_pgd(init_mm.pgd, pfn, addr, npages,
+   flags & ~sme_me_mask);
+
+}
+
+int __init early_set_memory_decrypted(void *vaddr, unsigned long size)
+{
+   unsigned long flags = get_pte_flags((unsigned long)vaddr);


So this does lookup_address()...


+   return early_set_memory_enc_dec(vaddr, size, flags & ~sme_me_mask);


... and this does it too in slow_virt_to_phys(). So you do it twice per
vaddr.

So why don't you define a __slow_virt_to_phys() helper - notice
the "__" - which returns flags in its second parameter and which
slow_virt_to_phys() calls with a NULL second parameter in the other
cases?



I will look into creating a helper function. Thanks.
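Perhaps something along these lines (untested sketch; the large-page flag handling
from get_pte_flags() and the physical address computation are elided):

	static phys_addr_t __slow_virt_to_phys(void *__virt_addr,
					       unsigned long *flags)
	{
		unsigned long virt_addr = (unsigned long)__virt_addr;
		int level;
		pte_t *pte;

		pte = lookup_address(virt_addr, &level);
		BUG_ON(!pte);

		/* hand the flags back so callers avoid a second walk
		 * (pmd/pud levels handled as in get_pte_flags())
		 */
		if (flags)
			*flags = pte_flags(*pte);

		/* ... physical address computation as today ... */
	}

	phys_addr_t slow_virt_to_phys(void *__virt_addr)
	{
		return __slow_virt_to_phys(__virt_addr, NULL);
	}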

-Brijesh


Re: [RFC PATCH v2 29/32] kvm: svm: Add support for SEV DEBUG_DECRYPT command

2017-03-16 Thread Brijesh Singh



On 03/16/2017 05:54 AM, Paolo Bonzini wrote:



On 02/03/2017 16:18, Brijesh Singh wrote:

+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+   void *dst, int *error)
+{
+   inpages = sev_pin_memory(src, PAGE_SIZE, &npages);
+   if (!inpages) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   data->handle = sev_get_handle(kvm);
+   data->dst_addr = __psp_pa(dst);
+   data->src_addr = __sev_page_pa(inpages[0]);
+   data->length = PAGE_SIZE;
+
+   ret = sev_issue_cmd(kvm, SEV_CMD_DBG_DECRYPT, data, error);
+   if (ret)
+   printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+   ret, *error);
+   sev_unpin_memory(inpages, npages);
+err_1:
+   kfree(data);
+   return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   void *data;
+   int ret, offset, len;
+   struct kvm_sev_dbg debug;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, (void *)argp->data,
+   sizeof(struct kvm_sev_dbg)))
+   return -EFAULT;
+   /*
+* TODO: add support for decrypting length which crosses the
+* page boundary.
+*/
+   offset = debug.src_addr & (PAGE_SIZE - 1);
+   if (offset + debug.length > PAGE_SIZE)
+   return -EINVAL;
+


Please do add it, it doesn't seem very different from what you're doing
in LAUNCH_UPDATE_DATA.  There's no need for a separate
__sev_dbg_decrypt_page function, you can just pin/unpin here and do a
per-page loop as in LAUNCH_UPDATE_DATA.



I can certainly add support to handle the cases crossing the page boundary.
Should we limit the size to prevent userspace from passing an arbitrarily large
length that keeps us looping inside the kernel? I was thinking of limiting it to a
PAGE_SIZE.

~ Brijesh


Re: [RFC PATCH v2 30/32] kvm: svm: Add support for SEV DEBUG_ENCRYPT command

2017-03-16 Thread Brijesh Singh



On 03/16/2017 06:03 AM, Paolo Bonzini wrote:



On 02/03/2017 16:18, Brijesh Singh wrote:

+   data = (void *) get_zeroed_page(GFP_KERNEL);


The page does not need to be zeroed, does it?



No, we don't have to zero it. I will fix it.


+
+   if ((len & 15) || (dst_addr & 15)) {
+   /* if destination address and length are not 16-byte
+* aligned then:
+* a) decrypt destination page into temporary buffer
+* b) copy source data into temporary buffer at correct offset
+* c) encrypt temporary buffer
+*/
+   ret = __sev_dbg_decrypt_page(kvm, dst_addr, data, &argp->error);


Ah, I see now you're using this function here for read-modify-write.
data is already pinned here, so even if you keep the function it makes
sense to push pinning out of __sev_dbg_decrypt_page and into
sev_dbg_decrypt.


I can push the pinning part out of __sev_dbg_decrypt_page.




+   if (ret)
+   goto err_3;
+   d_off = dst_addr & (PAGE_SIZE - 1);
+
+   if (copy_from_user(data + d_off,
+   (uint8_t *)debug.src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }
+
+   encrypt->length = PAGE_SIZE;


Why decrypt/re-encrypt all the page instead of just the 16 byte area
around the [dst_addr, dst_addr+len) range?



Good catch, I should be fine just decrypting a 16-byte area. Will fix in the next rev.
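I.e., something like (sketch):

	/* decrypt only the 16-byte aligned window around the target range */
	d_off = dst_addr & (PAGE_SIZE - 1);
	start = d_off & ~15UL;
	encrypt->length = ALIGN(d_off + len, 16) - start;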


+   encrypt->src_addr = __psp_pa(data);
+   encrypt->dst_addr =  __sev_page_pa(inpages[0]);
+   } else {
+   if (copy_from_user(data, (uint8_t *)debug.src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }


Do you need copy_from_user, or can you just pin/unpin memory as for
DEBUG_DECRYPT?



We can work with either pin/unpin or copy_from_user. I think I chose
copy_from_user because most of the time the ENCRYPT path was used when I set
breakpoints through gdb, which basically requires copying pretty small amounts of
data into guest memory. It is very much possible that someone will try to copy a
lot more data, in which case pin/unpin could speed things up.

-Brijesh


Re: [RFC PATCH v2 26/32] kvm: svm: Add support for SEV LAUNCH_UPDATE_DATA command

2017-03-16 Thread Brijesh Singh


On 03/16/2017 05:48 AM, Paolo Bonzini wrote:



On 02/03/2017 16:17, Brijesh Singh wrote:

+static struct page **sev_pin_memory(unsigned long uaddr, unsigned long ulen,
+   unsigned long *n)
+{
+   struct page **pages;
+   int first, last;
+   unsigned long npages, pinned;
+
+   /* Get number of pages */
+   first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+   last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+   npages = (last - first + 1);
+
+   pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return NULL;
+
+   /* pin the user virtual address */
+   down_read(&current->mm->mmap_sem);
+   pinned = get_user_pages_fast(uaddr, npages, 1, pages);
+   up_read(&current->mm->mmap_sem);


get_user_pages_fast, like get_user_pages_unlocked, must be called
without mmap_sem held.


Sure.
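I.e. (sketch of the fixed call site):

	/* get_user_pages_fast() takes mmap_sem internally as needed */
	pinned = get_user_pages_fast(uaddr, npages, 1, pages);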




+   if (pinned != npages) {
+   printk(KERN_ERR "SEV: failed to pin  %ld pages (got %ld)\n",
+   npages, pinned);
+   goto err;
+   }
+
+   *n = npages;
+   return pages;
+err:
+   if (pinned > 0)
+   release_pages(pages, pinned, 0);
+   kfree(pages);
+
+   return NULL;
+}

+   /* the array of pages returned by get_user_pages() is a page-aligned
+* memory. Since the user buffer is probably not page-aligned, we need
+* to calculate the offset within a page for first update entry.
+*/
+   offset = uaddr & (PAGE_SIZE - 1);
+   len = min_t(size_t, (PAGE_SIZE - offset), ulen);
+   ulen -= len;
+
+   /* update first page -
+* special care need to be taken for the first page because we might
+* be dealing with offset within the page
+*/


No need to special case the first page; just set "offset = 0" inside the
loop after the first iteration.



Will do.
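Something like this, then (untested sketch):

	offset = uaddr & (PAGE_SIZE - 1);
	for (i = 0; i < npages; i++) {
		len = min_t(size_t, PAGE_SIZE - offset, ulen);

		/* ... issue LAUNCH_UPDATE_DATA for pages[i] at 'offset' ... */

		ulen -= len;
		offset = 0;	/* only the first page may be unaligned */
	}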

-Brijesh


Re: [RFC PATCH v2 32/32] x86: kvm: Pin the guest memory when SEV is active

2017-03-16 Thread Brijesh Singh



On 03/16/2017 05:38 AM, Paolo Bonzini wrote:



On 02/03/2017 16:18, Brijesh Singh wrote:

The SEV memory encryption engine uses a tweak such that two identical
plaintexts at different locations will have different ciphertexts. So swapping
or moving the ciphertexts of two pages will not result in the plaintexts being
swapped. Relocating (or migrating) the physical backing pages of an SEV guest
will therefore require some additional steps. The current SEV key management
spec [1] does not provide commands to swap or migrate (move) ciphertexts. For
now we pin the memory allocated for the SEV guest. In the future, when the SEV
key management spec provides the commands to support page migration, we can
update the KVM code to remove the pinning logic without making any changes to
userspace (qemu).

The patch pins userspace memory when a new slot is created and unpins the
memory when the slot is removed.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf


This is not enough, because memory can be hidden temporarily from the
guest and remapped later.  Think of a PCI BAR that is backed by RAM, or
also SMRAM.  The pinning must be kept even in that case.

You need to add a pair of KVM_MEMORY_ENCRYPT_OPs (one that doesn't map
to a PSP operation), such as KVM_REGISTER/UNREGISTER_ENCRYPTED_RAM.  In
QEMU you can use a RAMBlockNotifier to invoke the ioctls.



I was hoping to avoid adding a new ioctl, but I see your point. Will add a pair
of ioctls and use a RAMBlockNotifier to invoke them.
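Perhaps something along these lines (hypothetical naming and ioctl numbers,
following the suggestion above):

	struct kvm_enc_region {
		__u64 addr;
		__u64 size;
	};

	#define KVM_MEMORY_ENCRYPT_REG_REGION	\
		_IOR(KVMIO, 0xbb, struct kvm_enc_region)
	#define KVM_MEMORY_ENCRYPT_UNREG_REGION	\
		_IOR(KVMIO, 0xbc, struct kvm_enc_region)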

-Brijesh


Re: [RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-03-10 Thread Brijesh Singh

Hi Boris,

On 03/10/2017 05:06 AM, Borislav Petkov wrote:

On Thu, Mar 02, 2017 at 10:15:15AM -0500, Brijesh Singh wrote:

If kernel_maps_pages_in_pgd is called early in boot process to change the


kernel_map_pages_in_pgd()


memory attributes then it fails to allocate memory when splitting large
pages. The patch extends the cpa_data to provide the support to use
memblock_alloc when the slab allocator is not available.

The feature will be used in Secure Encrypted Virtualization (SEV) mode,
where we may need to change the memory region attributes in early boot
process.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/mm/pageattr.c |   51 
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 46cc89d..9e4ab3b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 

 #include 
 #include 
@@ -37,6 +38,7 @@ struct cpa_data {
int flags;
unsigned long   pfn;
	unsigned	force_split : 1;
+	unsigned	force_memblock : 1;
int curpage;
struct page **pages;
 };
@@ -627,9 +629,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,

 static int
 __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
-  struct page *base)
+ pte_t *pbase, unsigned long new_pfn)
 {
-   pte_t *pbase = (pte_t *)page_address(base);
unsigned long ref_pfn, pfn, pfninc = 1;
unsigned int i, level;
pte_t *tmp;
@@ -646,7 +647,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
return 1;
}

-   paravirt_alloc_pte(_mm, page_to_pfn(base));
+   paravirt_alloc_pte(_mm, new_pfn);

switch (level) {
case PG_LEVEL_2M:
@@ -707,7 +708,8 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 * pagetable protections, the actual ptes set above control the
 * primary protection behavior:
 */
-   __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));
+   __set_pmd_pte(kpte, address,
+   native_make_pte((new_pfn << PAGE_SHIFT) + _KERNPG_TABLE));

/*
 * Intel Atom errata AAH41 workaround.
@@ -723,21 +725,50 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
return 0;
 }

+static pte_t *try_alloc_pte(struct cpa_data *cpa, unsigned long *pfn)
+{
+   unsigned long phys;
+   struct page *base;
+
+   if (cpa->force_memblock) {
+   phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);


Maybe there's a reason this fires:

WARNING: modpost: Found 2 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'

WARNING: vmlinux.o(.text+0x48edc): Section mismatch in reference from the function __change_page_attr() to the function .init.text:memblock_alloc()
The function __change_page_attr() references
the function __init memblock_alloc().
This is often because __change_page_attr lacks a __init
annotation or the annotation of memblock_alloc is wrong.

WARNING: vmlinux.o(.text+0x491d1): Section mismatch in reference from the function __change_page_attr() to the function .meminit.text:memblock_free()
The function __change_page_attr() references
the function __meminit memblock_free().
This is often because __change_page_attr lacks a __meminit
annotation or the annotation of memblock_free is wrong.



I can take a look at fixing those warnings. My initial attempt was to create
a new function to clear the encryption bit, but it ended up looking very similar to
__change_page_attr_set_clr(), hence I decided to extend the existing function to
use memblock_alloc().



Why do we need this whole early mapping? For the guest? I don't like
that memblock thing at all.


Early in the boot process, the guest kernel allocates some structures (either
statically, or dynamically via memblock_alloc) and shares the physical addresses
of these structures with the hypervisor. Since the entire guest memory area is
mapped as encrypted, those structures are mapped as encrypted memory ranges. We
need a method to clear the encryption bit. Sometimes these structures are part
of 2M pages, which then need to be split into smaller pages.



So I think the approach with the .data..percpu..hv_shared section is
fine and we should consider SEV-ES

http://support.amd.com/TechDocs/Protecting%20VM%20Register%20State%20with%20SEV-ES.pdf

and do this right from the get-go so that when SEV-ES comes along, we
should simply be ready and extend that mechanism to put the whole Guest
Hypervisor Communication Block in there.




But then the fact that you're mapping those decrypted in init_mm.pgd
makes me think you don't need that early mapping thing at all. Those are
the dec

Re: [RFC PATCH v2 12/32] x86: Add early boot support when running with SEV active

2017-03-10 Thread Brijesh Singh

Hi Boris and Paolo,

On 03/09/2017 10:29 AM, Borislav Petkov wrote:

On Thu, Mar 09, 2017 at 05:13:33PM +0100, Paolo Bonzini wrote:

This is not how you check if running under a hypervisor; you should
check the HYPERVISOR bit, i.e. bit 31 of cpuid(1).ecx.  This in turn
tells you if leaf 0x40000000 is valid.


Ah, good point, I already do that in the microcode loader :)

/*
 * CPUID(1).ECX[31]: reserved for hypervisor use. This is still not
 * completely accurate as xen pv guests don't see that CPUID bit set but
 * that's good enough as they don't land on the BSP path anyway.
 */
if (native_cpuid_ecx(1) & BIT(31))
return *res;


That said, the main issue with this function is that it hardcodes the
behavior for KVM.  It is possible that another hypervisor defines its
0x40000001 leaf in such a way that KVM_FEATURE_SEV has a different meaning.

Instead, AMD should define a "well-known" bit in its own space (i.e.
0x8000xxxx) that is only used by hypervisors that support SEV.  This is
similar to how Intel defined one bit in leaf 1 to say "is leaf
0x4000 valid".


+   if (eax > 0x40000000) {
+   eax = 0x40000001;
+   ecx = 0;
+   native_cpuid(&eax, &ebx, &ecx, &edx);
+   if (!(eax & BIT(KVM_FEATURE_SEV)))
+   goto out;
+
+   eax = 0x8000001f;
+   ecx = 0;
+   native_cpuid(&eax, &ebx, &ecx, &edx);
+   if (!(eax & 1))


Right, so this is testing CPUID_0x8000001f_ECX(0)[0], SME. Why not
simply set that bit for the guest too, in kvm?



CPUID 0x8000001F[EAX] indicates whether the feature is supported:
 * Bit 0 - SME supported
 * Bit 1 - SEV supported
 * Bit 3 - SEV-ES supported

We can use MSR_K8_SYSCFG[MemEncryptionModeEnc] to check if memory encryption is
enabled. Currently, KVM returns zero when the guest OS reads MSR_K8_SYSCFG. I can
update my patch set to set this bit for SEV-enabled guests.

We could update this patch to use the below logic (see the sketch after the list):

 * CPUID(0) - Check for AuthenticAMD
 * CPUID(1) - Check if running under a hypervisor
 * CPUID(0x80000000) - Check for the highest supported leaf
 * CPUID(0x8000001F).EAX - Check for SME and SEV support
 * rdmsr(MSR_K8_SYSCFG)[MemEncryptionModeEnc] - Check if SMEE is set

Paolo,

One question: do we need the "AuthenticAMD" check when we are running under a
hypervisor? I was looking at the qemu code and found that qemu exposes parameters
to change the CPU vendor id. The above check will fail if the user changes the
vendor id while launching an SEV guest.

-Brijesh



Re: [RFC PATCH v2 01/32] x86: Add the Secure Encrypted Virtualization CPU feature

2017-03-06 Thread Brijesh Singh
On 03/04/2017 04:11 AM, Borislav Petkov wrote:
> On Fri, Mar 03, 2017 at 03:01:23PM -0600, Brijesh Singh wrote:
> 
> This looks like a wraparound...
> 
> $ test-apply.sh /tmp/brijesh.singh.delta
> checking file Documentation/admin-guide/kernel-parameters.txt
> Hunk #1 succeeded at 2144 (offset -9 lines).
> checking file Documentation/x86/amd-memory-encryption.txt
> patch:  malformed patch at line 23: DRAM from physical
> 
> Yap.
> 
> Looks like exchange or your mail client decided to do some patch editing
> on its own.
> 
> Please send it to yourself first and try applying.
> 

Sending it through stg mail to avoid line wrapping. Please let me know if
something is still messed up. I have tried applying it and it seems to apply okay.

---
 Documentation/admin-guide/kernel-parameters.txt |4 +--
 Documentation/x86/amd-memory-encryption.txt |   33 +--
 arch/x86/include/asm/cpufeature.h   |7 +
 arch/x86/include/asm/cpufeatures.h  |6 +---
 arch/x86/include/asm/disabled-features.h|3 +-
 arch/x86/include/asm/required-features.h|3 +-
 arch/x86/kernel/cpu/amd.c   |   23 
 arch/x86/kernel/cpu/common.c|   23 
 arch/x86/kernel/cpu/scattered.c |1 +
 9 files changed, 50 insertions(+), 53 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 91c40fa..b91e2495 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2153,8 +2153,8 @@
mem_encrypt=on: Activate SME
mem_encrypt=off:Do not activate SME
 
-   Refer to the SME documentation for details on when
-   memory encryption can be activated.
+   Refer to Documentation/x86/amd-memory-encryption.txt
+   for details on when memory encryption can be activated.
 
mem_sleep_default=  [SUSPEND] Default system suspend mode:
s2idle  - Suspend-To-Idle
diff --git a/Documentation/x86/amd-memory-encryption.txt b/Documentation/x86/amd-memory-encryption.txt
index 0938e89..0b72ff2 100644
--- a/Documentation/x86/amd-memory-encryption.txt
+++ b/Documentation/x86/amd-memory-encryption.txt
@@ -7,9 +7,9 @@ DRAM.  SME can therefore be used to protect the contents of DRAM from physical
 attacks on the system.
 
 A page is encrypted when a page table entry has the encryption bit set (see
-below how to determine the position of the bit).  The encryption bit can be
-specified in the cr3 register, allowing the PGD table to be encrypted. Each
-successive level of page tables can also be encrypted.
+below on how to determine its position).  The encryption bit can be specified
+in the cr3 register, allowing the PGD table to be encrypted. Each successive
+level of page tables can also be encrypted.
 
 Support for SME can be determined through the CPUID instruction. The CPUID
 function 0x8000001f reports information related to SME:
@@ -17,13 +17,14 @@ function 0x8000001f reports information related to SME:
	0x8000001f[eax]:
	Bit[0] indicates support for SME
	0x8000001f[ebx]:
-   Bit[5:0]  pagetable bit number used to activate memory
- encryption
-   Bit[11:6] reduction in physical address space, in bits, when
- memory encryption is enabled (this only affects system
- physical addresses, not guest physical addresses)
-
-If support for SME is present, MSR 0xc00100010 (SYS_CFG) can be used to
+   Bits[5:0]  pagetable bit number used to activate memory
+  encryption
+   Bits[11:6] reduction in physical address space, in bits, when
+  memory encryption is enabled (this only affects
+  system physical addresses, not guest physical
+  addresses)
+
+If support for SME is present, MSR 0xc0010010 (MSR_K8_SYSCFG) can be used to
 determine if SME is enabled and/or to enable memory encryption:
 
0xc0010010:
@@ -41,7 +42,7 @@ The state of SME in the Linux kernel can be documented as follows:
  The CPU supports SME (determined through CPUID instruction).
 
- Enabled:
- Supported and bit 23 of the SYS_CFG MSR is set.
+ Supported and bit 23 of MSR_K8_SYSCFG is set.
 
- Active:
  Supported, Enabled and the Linux kernel is actively applying
@@ -51,7 +52,9 @@ The state of SME in the Linux kernel can be documented as follows:
 SME can also be enabled and activated in the BIOS. If SME is enabled and
 activated in the BIOS, then all memory accesses will be encrypted and it 

Re: [RFC PATCH v2 00/32] x86: Secure Encrypted Virtualization (AMD)

2017-03-03 Thread Brijesh Singh

Hi Bjorn,

On 03/03/2017 02:33 PM, Bjorn Helgaas wrote:

On Thu, Mar 02, 2017 at 10:12:01AM -0500, Brijesh Singh wrote:

This RFC series provides support for AMD's new Secure Encrypted Virtualization
(SEV) feature. This RFC is built upon the Secure Memory Encryption (SME) RFCv4 [1].


What kernel version is this series based on?



This patch series is based off of the master branch of tip:
  Commit a27cb9e1b2b4 ("Merge branch 'WIP.sched/core'")
  Tom's RFC v4 patches (http://marc.info/?l=linux-mm=148725973013686=2)

Accidentally, I ended up rebasing the SEV RFCv2 patches on the updated SME v4
instead of the original SME v4, so you may need to apply patch [1].


[1] http://marc.info/?l=linux-mm=148857523132253=2

Optionally, I have posted the full git tree here [2]

[2] https://github.com/codomania/tip/branches



Re: [RFC PATCH v2 01/32] x86: Add the Secure Encrypted Virtualization CPU feature

2017-03-03 Thread Brijesh Singh

Hi Boris,

On 03/03/2017 10:59 AM, Borislav Petkov wrote:

On Thu, Mar 02, 2017 at 10:12:09AM -0500, Brijesh Singh wrote:

From: Tom Lendacky <thomas.lenda...@amd.com>

Update the CPU features to include identifying and reporting on the
Secure Encrypted Virtualization (SEV) feature.  SME is identified by
CPUID 0x8000001f, but requires BIOS support to enable it (set bit 23 of
MSR_K8_SYSCFG and set bit 0 of MSR_K7_HWCR).  Only show the SEV feature
as available if reported by CPUID and enabled by BIOS.

Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
---
 arch/x86/include/asm/cpufeatures.h |1 +
 arch/x86/include/asm/msr-index.h   |2 ++
 arch/x86/kernel/cpu/amd.c  |   22 ++
 arch/x86/kernel/cpu/scattered.c|1 +
 4 files changed, 22 insertions(+), 4 deletions(-)


So this patchset is not really ontop of Tom's patchset because this
patch doesn't apply. The reason is, Tom did the SME bit this way:

https://lkml.kernel.org/r/20170216154236.19244.7580.st...@tlendack-t1.amdoffice.net

but it should've been in scattered.c.


diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index cabda87..c3f58d9 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -31,6 +31,7 @@ static const struct cpuid_bit cpuid_bits[] = {
	{ X86_FEATURE_CPB,		CPUID_EDX,  9, 0x80000007, 0 },
	{ X86_FEATURE_PROC_FEEDBACK,	CPUID_EDX, 11, 0x80000007, 0 },
	{ X86_FEATURE_SME,		CPUID_EAX,  0, 0x8000001f, 0 },
+	{ X86_FEATURE_SEV,		CPUID_EAX,  1, 0x8000001f, 0 },
{ 0, 0, 0, 0, 0 }


... and here it is in scattered.c, as it should be. So you've used an
older version of the patch, it seems.

Please sync with Tom to see whether he's reworked the v4 version of that
patch already. If yes, then you could send only the SME and SEV adding
patches as a reply to this message so that I can continue reviewing in
the meantime.



Just realized my error, I actually ended up using Tom's recent updates to
v4 instead of the original v4. Here is the diff. If you have Tom's v4
applied, then apply this diff before applying the SEV v2 version. Sorry about
that.


Optionally, you can also pull the complete tree from github [1].

[1] https://github.com/codomania/tip/tree/sev-rfc-v2


diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 91c40fa..b91e2495 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2153,8 +2153,8 @@
mem_encrypt=on: Activate SME
mem_encrypt=off:Do not activate SME

-   Refer to the SME documentation for details on when
-   memory encryption can be activated.
+   Refer to Documentation/x86/amd-memory-encryption.txt
+   for details on when memory encryption can be activated.

mem_sleep_default=  [SUSPEND] Default system suspend mode:
s2idle  - Suspend-To-Idle
diff --git a/Documentation/x86/amd-memory-encryption.txt b/Documentation/x86/amd-memory-encryption.txt
index 0938e89..0b72ff2 100644
--- a/Documentation/x86/amd-memory-encryption.txt
+++ b/Documentation/x86/amd-memory-encryption.txt
@@ -7,9 +7,9 @@ DRAM.  SME can therefore be used to protect the contents of DRAM from physical
 attacks on the system.
 
 A page is encrypted when a page table entry has the encryption bit set (see
-below how to determine the position of the bit).  The encryption bit can be
-specified in the cr3 register, allowing the PGD table to be encrypted. Each
-successive level of page tables can also be encrypted.
+below on how to determine its position).  The encryption bit can be specified
+in the cr3 register, allowing the PGD table to be encrypted. Each successive
+level of page tables can also be encrypted.

 Support for SME can be determined through the CPUID instruction. The CPUID
 function 0x8000001f reports information related to SME:
@@ -17,13 +17,14 @@ function 0x8000001f reports information related to SME:
	0x8000001f[eax]:
	Bit[0] indicates support for SME
	0x8000001f[ebx]:
-   Bit[5:0]  pagetable bit number used to activate memory
- encryption
-   Bit[11:6] reduction in physical address space, in bits, when
- memory encryption is enabled (this only affects system
- physical addresses, not guest physical addresses)
-
-If support for SME is present, MSR 0xc00100010 (SYS_CFG) can be used to
+   Bits[5:0]  pagetable bit number used to activate memory
+  encryption
+   Bits[11:6] reduction in physical address space, in bits, when
+  memory encryption is enabled (this 

Re: [RFC PATCH v2 19/32] crypto: ccp: Introduce the AMD Secure Processor device

2017-03-02 Thread Brijesh Singh

Hi Mark,

On 03/02/2017 11:39 AM, Mark Rutland wrote:

On Thu, Mar 02, 2017 at 10:16:15AM -0500, Brijesh Singh wrote:

The CCP device is part of the AMD Secure Processor. In order to expand the
usage of the AMD Secure Processor, create a framework that allows functional
components of the AMD Secure Processor to be initialized and handled
appropriately.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
---
 drivers/crypto/Kconfig   |   10 +
 drivers/crypto/ccp/Kconfig   |   43 +++--
 drivers/crypto/ccp/Makefile  |8 -
 drivers/crypto/ccp/ccp-dev-v3.c  |   86 +-
 drivers/crypto/ccp/ccp-dev-v5.c  |   73 -
 drivers/crypto/ccp/ccp-dev.c |  137 +---
 drivers/crypto/ccp/ccp-dev.h |   35 
 drivers/crypto/ccp/sp-dev.c  |  308 
 drivers/crypto/ccp/sp-dev.h  |  140 
 drivers/crypto/ccp/sp-pci.c  |  324 ++
 drivers/crypto/ccp/sp-platform.c |  268 +++
 include/linux/ccp.h  |3
 12 files changed, 1240 insertions(+), 195 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h
 create mode 100644 drivers/crypto/ccp/sp-pci.c
 create mode 100644 drivers/crypto/ccp/sp-platform.c



diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..8127e18 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,11 +1,11 @@
-obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o \
+obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
+ccp-objs := sp-dev.o sp-platform.o
+ccp-$(CONFIG_PCI) += sp-pci.o
+ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-ops.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
-   ccp-platform.o \
ccp-dmaengine.o


It looks like ccp-platform.c has morphed into sp-platform.c (judging by
the compatible string and general shape of the code), and the original
ccp-platform.c is no longer built.

Shouldn't ccp-platform.c be deleted by this patch?



Good catch. Both ccp-platform.c and ccp-pci.c should have been deleted
by this patch. I missed deleting them; will fix in the next rev.


~ Brijesh


[RFC PATCH v2 15/32] x86: Add support for changing memory encryption attribute in early boot

2017-03-02 Thread Brijesh Singh
Some KVM-specific custom MSRs share the guest physical address with the
hypervisor. When SEV is active, the shared physical address must be mapped
with the encryption attribute cleared so that both the hypervisor and the guest
can access the data.

Add APIs to change memory encryption attribute in early boot code.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h |   15 +
 arch/x86/mm/mem_encrypt.c  |   63 
 2 files changed, 78 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 9799835..95bbe4c 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -47,6 +47,9 @@ void __init sme_unmap_bootdata(char *real_mode_data);
 
 void __init sme_early_init(void);
 
+int __init early_set_memory_decrypted(void *addr, unsigned long size);
+int __init early_set_memory_encrypted(void *addr, unsigned long size);
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
 
@@ -110,6 +113,18 @@ static inline void __init sme_early_init(void)
 {
 }
 
+static inline int __init early_set_memory_decrypted(void *addr,
+   unsigned long size)
+{
+   return 1;
+}
+
+static inline int __init early_set_memory_encrypted(void *addr,
+   unsigned long size)
+{
+   return 1;
+}
+
 #define __sme_pa   __pa
 #define __sme_pa_nodebug   __pa_nodebug
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7df5f4c..567e0d8 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -258,6 +259,68 @@ static void sme_free(struct device *dev, size_t size, void *vaddr,
swiotlb_free_coherent(dev, size, vaddr, dma_handle);
 }
 
+static unsigned long __init get_pte_flags(unsigned long address)
+{
+   int level;
+   pte_t *pte;
+   unsigned long flags = _KERNPG_TABLE_NOENC | _PAGE_ENC;
+
+   pte = lookup_address(address, );
+   if (!pte)
+   return flags;
+
+   switch (level) {
+   case PG_LEVEL_4K:
+   flags = pte_flags(*pte);
+   break;
+   case PG_LEVEL_2M:
+   flags = pmd_flags(*(pmd_t *)pte);
+   break;
+   case PG_LEVEL_1G:
+   flags = pud_flags(*(pud_t *)pte);
+   break;
+   default:
+   break;
+   }
+
+   return flags;
+}
+
+int __init early_set_memory_enc_dec(void *vaddr, unsigned long size,
+   unsigned long flags)
+{
+   unsigned long pfn, npages;
+   unsigned long addr = (unsigned long)vaddr & PAGE_MASK;
+
+   /* We are going to change the physical page attribute from C=1 to C=0.
+* Flush the caches to ensure that all the data with C=1 is flushed to
+* memory. Any caching of the vaddr after function returns will
+* use C=0.
+*/
+   clflush_cache_range(vaddr, size);
+
+   npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+   pfn = slow_virt_to_phys((void *)addr) >> PAGE_SHIFT;
+
+   return kernel_map_pages_in_pgd(init_mm.pgd, pfn, addr, npages,
+   flags & ~sme_me_mask);
+
+}
+
+int __init early_set_memory_decrypted(void *vaddr, unsigned long size)
+{
+   unsigned long flags = get_pte_flags((unsigned long)vaddr);
+
+   return early_set_memory_enc_dec(vaddr, size, flags & ~sme_me_mask);
+}
+
+int __init early_set_memory_encrypted(void *vaddr, unsigned long size)
+{
+   unsigned long flags = get_pte_flags((unsigned long)vaddr);
+
+   return early_set_memory_enc_dec(vaddr, size, flags | _PAGE_ENC);
+}
+
 static struct dma_map_ops sme_dma_ops = {
.alloc  = sme_alloc,
.free   = sme_free,



[RFC PATCH v2 23/32] kvm: introduce KVM_MEMORY_ENCRYPT_OP ioctl

2017-03-02 Thread Brijesh Singh
If the hardware supports memory encryption, then the KVM_MEMORY_ENCRYPT_OP ioctl
can be used by qemu to issue platform-specific memory encryption commands.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |2 ++
 arch/x86/kvm/x86.c  |   12 
 include/uapi/linux/kvm.h|2 ++
 3 files changed, 16 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bff1f15..62651ad 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1033,6 +1033,8 @@ struct kvm_x86_ops {
void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
 
void (*setup_mce)(struct kvm_vcpu *vcpu);
+
+   int (*memory_encryption_op)(struct kvm *kvm, void __user *argp);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2099df8..6a737e9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3926,6 +3926,14 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
return r;
 }
 
+static int kvm_vm_ioctl_memory_encryption_op(struct kvm *kvm, void __user *argp)
+{
+   if (kvm_x86_ops->memory_encryption_op)
+   return kvm_x86_ops->memory_encryption_op(kvm, argp);
+
+   return -ENOTTY;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
   unsigned int ioctl, unsigned long arg)
 {
@@ -4189,6 +4197,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
r = kvm_vm_ioctl_enable_cap(kvm, &cap);
break;
}
+   case KVM_MEMORY_ENCRYPT_OP: {
+   r = kvm_vm_ioctl_memory_encryption_op(kvm, argp);
+   break;
+   }
default:
r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index cac48ed..fef7d83 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1281,6 +1281,8 @@ struct kvm_s390_ucas_mapping {
 #define KVM_S390_GET_IRQ_STATE   _IOW(KVMIO, 0xb6, struct kvm_s390_irq_state)
 /* Available with KVM_CAP_X86_SMM */
 #define KVM_SMI   _IO(KVMIO,   0xb7)
+/* Memory Encryption Commands */
+#define KVM_MEMORY_ENCRYPT_OP  _IOWR(KVMIO, 0xb8, unsigned long)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)



[RFC PATCH v2 19/32] crypto: ccp: Introduce the AMD Secure Processor device

2017-03-02 Thread Brijesh Singh
The CCP device is part of the AMD Secure Processor. In order to expand the
usage of the AMD Secure Processor, create a framework that allows functional
components of the AMD Secure Processor to be initialized and handled
appropriately.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
---
 drivers/crypto/Kconfig   |   10 +
 drivers/crypto/ccp/Kconfig   |   43 +++--
 drivers/crypto/ccp/Makefile  |8 -
 drivers/crypto/ccp/ccp-dev-v3.c  |   86 +-
 drivers/crypto/ccp/ccp-dev-v5.c  |   73 -
 drivers/crypto/ccp/ccp-dev.c |  137 +---
 drivers/crypto/ccp/ccp-dev.h |   35 
 drivers/crypto/ccp/sp-dev.c  |  308 
 drivers/crypto/ccp/sp-dev.h  |  140 
 drivers/crypto/ccp/sp-pci.c  |  324 ++
 drivers/crypto/ccp/sp-platform.c |  268 +++
 include/linux/ccp.h  |3 
 12 files changed, 1240 insertions(+), 195 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h
 create mode 100644 drivers/crypto/ccp/sp-pci.c
 create mode 100644 drivers/crypto/ccp/sp-platform.c

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 7956478..d31b469 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -456,14 +456,14 @@ config CRYPTO_DEV_ATMEL_SHA
  To compile this driver as a module, choose M here: the module
  will be called atmel-sha.
 
-config CRYPTO_DEV_CCP
-   bool "Support for AMD Cryptographic Coprocessor"
+config CRYPTO_DEV_SP
+   bool "Support for AMD Secure Processor"
	depends on ((X86 && PCI) || (ARM64 && (OF_ADDRESS || ACPI))) && HAS_IOMEM
help
- The AMD Cryptographic Coprocessor provides hardware offload support
- for encryption, hashing and related operations.
+ The AMD Secure Processor provides hardware offload support for memory
+ encryption in virtualization and cryptographic hashing and related operations.
 
-if CRYPTO_DEV_CCP
+if CRYPTO_DEV_SP
source "drivers/crypto/ccp/Kconfig"
 endif
 
diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 2238f77..bc08f03 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -1,26 +1,37 @@
-config CRYPTO_DEV_CCP_DD
-   tristate "Cryptographic Coprocessor device driver"
-   depends on CRYPTO_DEV_CCP
-   default m
-   select HW_RANDOM
-   select DMA_ENGINE
-   select DMADEVICES
-   select CRYPTO_SHA1
-   select CRYPTO_SHA256
-   help
- Provides the interface to use the AMD Cryptographic Coprocessor
- which can be used to offload encryption operations such as SHA,
- AES and more. If you choose 'M' here, this module will be called
- ccp.
-
 config CRYPTO_DEV_CCP_CRYPTO
tristate "Encryption and hashing offload support"
-   depends on CRYPTO_DEV_CCP_DD
+   depends on CRYPTO_DEV_SP_DD
default m
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_AUTHENC
+   select CRYPTO_DEV_CCP
help
  Support for using the cryptographic API with the AMD Cryptographic
  Coprocessor. This module supports offload of SHA and AES algorithms.
  If you choose 'M' here, this module will be called ccp_crypto.
+
+config CRYPTO_DEV_SP_DD
+   tristate "Secure Processor device driver"
+   depends on CRYPTO_DEV_SP
+   default m
+   help
+ Provides the interface to use the AMD Secure Processor. The
+ AMD Secure Processor support the Platform Security Processor (PSP)
+ and Cryptographic Coprocessor (CCP). If you choose 'M' here, this
+ module will be called ccp.
+
+if CRYPTO_DEV_SP_DD
+config CRYPTO_DEV_CCP
+   bool "Cryptographic Coprocessor interface"
+   default y
+   select HW_RANDOM
+   select DMA_ENGINE
+   select DMADEVICES
+   select CRYPTO_SHA1
+   select CRYPTO_SHA256
+   help
+ Provides the interface to use the AMD Cryptographic Coprocessor
+ which can be used to offload encryption operations such as SHA,
+ AES and more.
+endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..8127e18 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,11 +1,11 @@
-obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o \
+obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
+ccp-objs := sp-dev.o sp-platform.o
+ccp-$(CONFIG_PCI) += sp-pci.o
+ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-ops.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
-   ccp-platform.o \
ccp-dmaengine.o
-ccp-$(CONFIG_PCI) += ccp-pci.o

[RFC PATCH v2 16/32] x86: kvm: Provide support to create Guest and HV shared per-CPU variables

2017-03-02 Thread Brijesh Singh
Some KVM-specific MSRs (steal-time, asyncpf, avic_eio) allocate per-CPU
variables at compile time and share their physical addresses with the hypervisor.
This presents a challenge when SEV is active in the guest OS: with SEV active,
guest memory is encrypted with the guest key, and the hypervisor is no longer
able to modify the guest memory. We therefore need to clear the encryption
attribute of the shared physical addresses so that both the guest and the
hypervisor can access the data.

To solve this problem, I tried these three options:

1) Convert the static per-CPU variables to dynamic per-CPU allocations and,
when SEV is detected, clear the encryption attribute. But while doing so I found
that the per-CPU dynamic allocator was not ready when kvm_guest_cpu_init() was
called.

2) Since the encryption attribute works at PAGE_SIZE granularity, add some extra
padding to 'struct kvm_steal_time' to make it PAGE_SIZE and then at runtime
clear the encryption attribute of the full page. The downside is that we would
need to modify the structure, which may break compatibility.

3) Define a new per-CPU section (.data..percpu.hv_shared) to hold the compile
time shared per-CPU variables. When SEV is detected, we map this section with
the encryption attribute cleared.

This patch implements #3. It introduces a new DEFINE_PER_CPU_HV_SHARED
macro to create a compile time per-CPU variable (a sketch of the macro follows
the diffstat below). When SEV is detected, we map the per-CPU variable as
decrypted (i.e., with the encryption attribute cleared).

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kernel/kvm.c |   43 +++--
 include/asm-generic/vmlinux.lds.h |3 +++
 include/linux/percpu-defs.h   |9 
 3 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..706a08e 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -75,8 +75,8 @@ static int parse_no_kvmclock_vsyscall(char *arg)
 
 early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
 
-static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
-static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
+static DEFINE_PER_CPU_HV_SHARED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
+static DEFINE_PER_CPU_HV_SHARED(struct kvm_steal_time, steal_time) __aligned(64);
 static int has_steal_clock = 0;
 
 /*
@@ -290,6 +290,22 @@ static void __init paravirt_ops_setup(void)
 #endif
 }
 
+static int kvm_map_percpu_hv_shared(void *addr, unsigned long size)
+{
+   /* When SEV is active, the percpu static variables initialized
+* in data section will contain the encrypted data so we first
+* need to decrypt it and then map it as decrypted.
+*/
+   if (sev_active()) {
+   unsigned long pa = slow_virt_to_phys(addr);
+
+   sme_early_decrypt(pa, size);
+   return early_set_memory_decrypted(addr, size);
+   }
+
+   return 0;
+}
+
 static void kvm_register_steal_time(void)
 {
int cpu = smp_processor_id();
@@ -298,12 +314,17 @@ static void kvm_register_steal_time(void)
if (!has_steal_clock)
return;
 
+   if (kvm_map_percpu_hv_shared(st, sizeof(*st))) {
+   pr_err("kvm-stealtime: failed to map hv_shared percpu\n");
+   return;
+   }
+
wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
pr_info("kvm-stealtime: cpu %d, msr %llx\n",
cpu, (unsigned long long) slow_virt_to_phys(st));
 }
 
-static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
+static DEFINE_PER_CPU_HV_SHARED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
 
 static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
 {
@@ -327,25 +348,33 @@ static void kvm_guest_cpu_init(void)
if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));
 
+   if (kvm_map_percpu_hv_shared(this_cpu_ptr(&apf_reason),
+   sizeof(struct kvm_vcpu_pv_apf_data)))
+   goto skip_asyncpf;
 #ifdef CONFIG_PREEMPT
pa |= KVM_ASYNC_PF_SEND_ALWAYS;
 #endif
wrmsrl(MSR_KVM_ASYNC_PF_EN, pa | KVM_ASYNC_PF_ENABLED);
__this_cpu_write(apf_reason.enabled, 1);
-   printk(KERN_INFO"KVM setup async PF for cpu %d\n",
-  smp_processor_id());
+   printk(KERN_INFO"KVM setup async PF for cpu %d msr %llx\n",
+  smp_processor_id(), pa);
}
-
+skip_asyncpf:
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) {
unsigned long pa;
/* Size alignment is implied but just to make it explicit. */
BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);

[RFC PATCH v2 03/32] KVM: SVM: prepare for new bit definition in nested_ctl

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h |2 ++
 arch/x86/kvm/svm.c |7 ---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 08a4d3a..75b0645 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1246,7 +1246,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
if (npt_enabled) {
/* Setup VMCB for Nested Paging */
-   control->nested_ctl = 1;
+   control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
clr_intercept(svm, INTERCEPT_INVLPG);
clr_exception_intercept(svm, PF_VECTOR);
clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2840,7 +2840,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
if (vmcb->control.asid == 0)
return false;
 
-   if (vmcb->control.nested_ctl && !npt_enabled)
+   if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+   !npt_enabled)
return false;
 
return true;
@@ -2915,7 +2916,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
else
svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-   if (nested_vmcb->control.nested_ctl) {
+   if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
kvm_mmu_unload(&svm->vcpu);
svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
nested_svm_init_mmu_context(>vcpu);



[RFC PATCH v2 21/32] crypto: ccp: Add Secure Encrypted Virtualization (SEV) interface support

2017-03-02 Thread Brijesh Singh
The Secure Encrypted Virtualization (SEV) interface allows the memory
contents of a virtual machine (VM) to be transparently encrypted with
a key unique to the guest.

The interface provides:
  - /dev/sev device and ioctl (SEV_ISSUE_CMD) to execute the platform
provisioning commands from the userspace.
  - in-kernel APIs to encrypt the guest memory region. The in-kernel APIs
    will be used by KVM to bootstrap and debug the SEV guest.

SEV key management spec is available here [1]
[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 drivers/crypto/ccp/Kconfig   |7 
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/psp-dev.h |6 
 drivers/crypto/ccp/sev-dev.c |  348 ++
 drivers/crypto/ccp/sev-dev.h |   67 
 drivers/crypto/ccp/sev-ops.c |  324 
 include/linux/psp-sev.h  |  672 ++
 include/uapi/linux/Kbuild|1 
 include/uapi/linux/psp-sev.h |  123 
 9 files changed, 1546 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ccp/sev-dev.c
 create mode 100644 drivers/crypto/ccp/sev-dev.h
 create mode 100644 drivers/crypto/ccp/sev-ops.c
 create mode 100644 include/linux/psp-sev.h
 create mode 100644 include/uapi/linux/psp-sev.h

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 59c207e..67d1917 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -41,4 +41,11 @@ config CRYPTO_DEV_PSP
help
 Provide the interface for AMD Platform Security Processor (PSP) device.
 
+config CRYPTO_DEV_SEV
+   bool "Secure Encrypted Virtualization (SEV) interface"
+   default y
+   help
+     Provide the kernel and userspace (/dev/sev) interface to issue the
+     Secure Encrypted Virtualization (SEV) commands.
+
 endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 12e569d..4c4e77e 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -7,6 +7,7 @@ ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-dev-v5.o \
ccp-dmaengine.o
 ccp-$(CONFIG_CRYPTO_DEV_PSP) += psp-dev.o
+ccp-$(CONFIG_CRYPTO_DEV_SEV) += sev-dev.o sev-ops.o
 
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
diff --git a/drivers/crypto/ccp/psp-dev.h b/drivers/crypto/ccp/psp-dev.h
index bbd3d96..fd67b14 100644
--- a/drivers/crypto/ccp/psp-dev.h
+++ b/drivers/crypto/ccp/psp-dev.h
@@ -70,14 +70,14 @@ int psp_free_sev_irq(struct psp_device *psp, void *data);
 
 struct psp_device *psp_get_master_device(void);
 
-#ifdef CONFIG_AMD_SEV
+#ifdef CONFIG_CRYPTO_DEV_SEV
 
 int sev_dev_init(struct psp_device *psp);
 void sev_dev_destroy(struct psp_device *psp);
 int sev_dev_resume(struct psp_device *psp);
 int sev_dev_suspend(struct psp_device *psp, pm_message_t state);
 
-#else
+#else /* !CONFIG_CRYPTO_DEV_SEV */
 
 static inline int sev_dev_init(struct psp_device *psp)
 {
@@ -96,7 +96,7 @@ static inline int sev_dev_suspend(struct psp_device *psp, 
pm_message_t state)
return -ENODEV;
 }
 
-#endif /* __AMD_SEV_H */
+#endif /* CONFIG_CRYPTO_DEV_SEV */
 
 #endif /* __PSP_DEV_H */
 
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
new file mode 100644
index 0000000..a67e2d7
--- /dev/null
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -0,0 +1,348 @@
+/*
+ * AMD Secure Encrypted Virtualization (SEV) interface
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.si...@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "psp-dev.h"
+#include "sev-dev.h"
+
+extern struct file_operations sev_fops;
+
+static LIST_HEAD(sev_devs);
+static DEFINE_SPINLOCK(sev_devs_lock);
+static atomic_t sev_id;
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
MODULE_PARM_DESC(psp_poll, "Poll for sev command completion - any non-zero value");
+
+DEFINE_MUTEX(sev_cmd_mutex);
+
+void sev_add_device(struct sev_device *sev)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&sev_devs_lock, flags);
+
+   list_add_tail(&sev->entry, &sev_devs);
+
+   spin_unlock_irqrestore(&sev_devs_lock, flags);
+}
+
+void sev_del_device(struct sev_device *sev)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&sev_devs_lock, flags);
+
+   list_del(&sev->entry);
+   spin_unlock_irqrestore(&sev_devs_lock, flags);
+}
+
+static struct sev_device *get_sev_master_device(void)
+{
+   struct psp_device *psp = psp_get_master_device();
+
+   return psp ? psp->sev_data : NULL;
+}
+
+

[RFC PATCH v2 28/32] kvm: svm: Add support for SEV GUEST_STATUS command

2017-03-02 Thread Brijesh Singh
The command is used for querying the SEV guest status.
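
For reference, a hypothetical userspace sketch of driving this command through
the KVM_MEMORY_ENCRYPT_OP ioctl introduced earlier in the series. The vm_fd /
sev_fd plumbing and the exact uapi field names are assumptions based on this
series, not part of the patch itself:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void query_sev_status(int vm_fd, int sev_fd)
{
        struct kvm_sev_guest_status status = {};
        struct kvm_sev_cmd cmd = {
                .id     = KVM_SEV_GUEST_STATUS,
                .data   = (unsigned long)&status,
                .sev_fd = sev_fd,               /* fd of /dev/sev */
        };

        if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
                fprintf(stderr, "GUEST_STATUS failed, fw error %u\n", cmd.error);
        else
                printf("policy=%#x state=%u\n", status.policy, status.state);
}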

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   37 +
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c108064..977aa22 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5953,6 +5953,39 @@ static int sev_launch_finish(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int ret;
+   struct kvm_sev_guest_status params;
+   struct sev_data_guest_status *data;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&params, (void *) argp->data,
+   sizeof(struct kvm_sev_guest_status)))
+   return -EFAULT;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_GUEST_STATUS, data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   params.policy = data->policy;
+   params.state = data->state;
+
+   if (copy_to_user((void *) argp->data, &params,
+   sizeof(struct kvm_sev_guest_status)))
+   ret = -EFAULT;
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -5976,6 +6009,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, 
void __user *argp)
r = sev_launch_finish(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_GUEST_STATUS: {
+   r = sev_guest_status(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 22/32] kvm: svm: prepare to reserve asid for SEV guest

2017-03-02 Thread Brijesh Singh
In the current implementation, ASID allocation starts from 1. This patch
adds a min_asid variable to the svm_cpu_data structure to allow ASID
allocation to start from a value other than 1.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
Reviewed-by: Paolo Bonzini <pbonz...@redhat.com>
---
 arch/x86/kvm/svm.c |4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b581499..8d8fe62 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -507,6 +507,7 @@ struct svm_cpu_data {
u64 asid_generation;
u32 max_asid;
u32 next_asid;
+   u32 min_asid;
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
@@ -763,6 +764,7 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
+   sd->min_asid = 1;
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -2026,7 +2028,7 @@ static void new_asid(struct vcpu_svm *svm, struct 
svm_cpu_data *sd)
 {
if (sd->next_asid > sd->max_asid) {
++sd->asid_generation;
-   sd->next_asid = 1;
+   sd->next_asid = sd->min_asid;
svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
 



[RFC PATCH v2 24/32] kvm: x86: prepare for SEV guest management API support

2017-03-02 Thread Brijesh Singh
The patch adds initial support required to integrate Secure Encrypted
Virtualization (SEV) feature.

ASID management:
 - Reserve an ASID range for SEV guests; the SEV ASID range is obtained
   through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any ASID outside
   the SEV ASID range.
 - An SEV guest must have an ASID value within the range obtained through
   CPUID.
 - An SEV guest must use the same ASID for all of its VCPUs. A TLB flush is
   required if a different VCPU with the same ASID is to be run on the same
   host CPU. (A sketch of the resulting ASID allocator follows below.)
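
As a rough illustration of the rules above, here is a minimal sketch of a
bitmap-backed allocator over the reserved range. sev_asid_new()/sev_asid_free()
are declared later in this series; this particular body is an assumption for
illustration, not the posted implementation:

static int sev_asid_new(void)
{
        int asid;

        if (!max_sev_asid)
                return -EINVAL;

        /* SEV ASIDs run from 1..max_sev_asid; bit 0 stays unused */
        asid = find_next_zero_bit(sev_asid_bitmap, max_sev_asid + 1, 1);
        if (asid > max_sev_asid)
                return -EBUSY;

        set_bit(asid, sev_asid_bitmap);
        return asid;
}

static void sev_asid_free(int asid)
{
        clear_bit(asid, sev_asid_bitmap);
}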

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |8 ++
 arch/x86/kvm/svm.c  |  189 +++
 include/uapi/linux/kvm.h|   98 
 3 files changed, 294 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 62651ad..fcc4710 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -719,6 +719,12 @@ struct kvm_hv {
HV_REFERENCE_TSC_PAGE tsc_ref;
 };
 
+struct kvm_sev_info {
+   unsigned int handle;    /* firmware handle */
+   unsigned int asid;  /* asid for this guest */
+   int sev_fd; /* SEV device fd */
+};
+
 struct kvm_arch {
unsigned int n_used_mmu_pages;
unsigned int n_requested_mmu_pages;
@@ -805,6 +811,8 @@ struct kvm_arch {
 
bool x2apic_format;
bool x2apic_broadcast_quirk_disabled;
+
+   struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8d8fe62..fb63398 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -211,6 +212,9 @@ struct vcpu_svm {
 */
struct list_head ir_list;
spinlock_t ir_list_lock;
+
+   /* which host cpu was used for running this vcpu */
+   bool last_cpuid;
 };
 
 /*
@@ -490,6 +494,64 @@ static inline bool gif_set(struct vcpu_svm *svm)
return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
 }
 
+/* Secure Encrypted Virtualization */
+static unsigned int max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+static bool kvm_sev_enabled(void)
+{
+   return max_sev_asid ? 1 : 0;
+}
+
+static inline struct kvm_sev_info *sev_get_info(struct kvm *kvm)
+{
+   struct kvm_arch *vm_data = &kvm->arch;
+
+   return &vm_data->sev_info;
+}
+
+static unsigned int sev_get_handle(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   return sev_info->handle;
+}
+
+static inline int sev_guest(struct kvm *kvm)
+{
+   return sev_get_handle(kvm);
+}
+
+static inline int sev_get_asid(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return -EINVAL;
+
+   return sev_info->asid;
+}
+
+static inline int sev_get_fd(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return -EINVAL;
+
+   return sev_info->sev_fd;
+}
+
+static inline void sev_set_asid(struct kvm *kvm, int asid)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return;
+
+   sev_info->asid = asid;
+}
+
 static unsigned long iopm_base;
 
 struct kvm_ldttss_desc {
@@ -511,6 +573,8 @@ struct svm_cpu_data {
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
+
+   struct vmcb **sev_vmcbs;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -764,7 +828,7 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
-   sd->min_asid = 1;
+   sd->min_asid = max_sev_asid + 1;
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -825,6 +889,7 @@ static void svm_cpu_uninit(int cpu)
 
per_cpu(svm_data, raw_smp_processor_id()) = NULL;
__free_page(sd->save_area);
+   kfree(sd->sev_vmcbs);
kfree(sd);
 }
 
@@ -842,6 +907,14 @@ static int svm_cpu_init(int cpu)
if (!sd->save_area)
goto err_1;
 
+   if (kvm_sev_enabled()) {
+   sd->sev_vmcbs = kmalloc((max_sev_asid + 1) * sizeof(void *),
+   GFP_KERNEL);
+   r = -ENOMEM;
+   if (!sd->sev_vmcbs)
+   goto err_1;
+   }
+
per_cpu(svm_data, cpu) = sd;
 
return 0;
@@ -1017,6 +1090,61 @@ static int avic_ga_log_notifier(u32 ga_tag)
return 0;
 }
 
+static __init void sev_hardware_setup(void)
+{
+   int ret, error, nguests;
+   struct sev_data_init *init;
+   struct sev_data_status *status;
+
+   /*
+* Get maximum number of

[RFC PATCH v2 20/32] crypto: ccp: Add Platform Security Processor (PSP) interface support

2017-03-02 Thread Brijesh Singh
AMD Platform Security Processor (PSP) is a dedicated processor that
provides support for encrypting the guest memory in Secure Encrypted
Virtualization (SEV) mode, along with a software-based Trusted Execution
Environment (TEE) to enable third-party trusted applications.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 drivers/crypto/ccp/Kconfig   |7 +
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/psp-dev.c |  211 ++
 drivers/crypto/ccp/psp-dev.h |  102 
 drivers/crypto/ccp/sp-dev.c  |   16 +++
 drivers/crypto/ccp/sp-dev.h  |   34 +++
 drivers/crypto/ccp/sp-pci.c  |4 +
 7 files changed, 374 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ccp/psp-dev.c
 create mode 100644 drivers/crypto/ccp/psp-dev.h

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index bc08f03..59c207e 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -34,4 +34,11 @@ config CRYPTO_DEV_CCP
  Provides the interface to use the AMD Cryptographic Coprocessor
  which can be used to offload encryption operations such as SHA,
  AES and more.
+
+config CRYPTO_DEV_PSP
+   bool "Platform Security Processor interface"
+   default y
+   help
+     Provide the interface for AMD Platform Security Processor (PSP) device.
+
 endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 8127e18..12e569d 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -6,6 +6,7 @@ ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
ccp-dmaengine.o
+ccp-$(CONFIG_CRYPTO_DEV_PSP) += psp-dev.o
 
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
new file mode 100644
index 0000000..6f64aa7
--- /dev/null
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -0,0 +1,211 @@
+/*
+ * AMD Platform Security Processor (PSP) interface
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh <brijesh.si...@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "sp-dev.h"
+#include "psp-dev.h"
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+const struct psp_vdata psp_entry = {
+   .offset = 0x10500,
+};
+
+void psp_add_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_add_tail(&psp->entry, &psp_devs);
+
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+void psp_del_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_del(&psp->entry);
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static struct psp_device *psp_alloc_struct(struct sp_device *sp)
+{
+   struct device *dev = sp->dev;
+   struct psp_device *psp;
+
+   psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+   if (!psp)
+   return NULL;
+
+   psp->dev = dev;
+   psp->sp = sp;
+
+   snprintf(psp->name, sizeof(psp->name), "psp-%u", sp->ord);
+
+   return psp;
+}
+
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+   unsigned int status;
+   irqreturn_t ret = IRQ_HANDLED;
+   struct psp_device *psp = data;
+
+   /* read the interrupt status */
+   status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+
+   /* invoke subdevice interrupt handlers */
+   if (status) {
+   if (psp->sev_irq_handler)
+   ret = psp->sev_irq_handler(irq, psp->sev_irq_data);
+   }
+
+   /* clear the interrupt status */
+   iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+   return ret;
+}
+
+static int psp_init(struct psp_device *psp)
+{
+   psp_add_device(psp);
+
+   sev_dev_init(psp);
+
+   return 0;
+}
+
+int psp_dev_init(struct sp_device *sp)
+{
+   struct device *dev = sp->dev;
+   struct psp_device *psp;
+   int ret;
+
+   ret = -ENOMEM;
+   psp = psp_alloc_struct(sp);
+   if (!psp)
+   goto e_err;
+   sp->psp_data = psp;
+
+   psp->vdata = (struct psp_vdata *)sp->dev_data->psp_vdata;
+   if (!psp->vdata) {
+   ret = -ENODEV;
+   dev_err(dev, "missing driver data\n");
+   goto e_err;
+   }
+
+   psp->io_regs = sp->io_map + psp->vdata->offset;
+
+   /* Disable and clear interrupts u

[RFC PATCH v2 26/32] kvm: svm: Add support for SEV LAUNCH_UPDATE_DATA command

2017-03-02 Thread Brijesh Singh
The command is used for encrypting the guest memory region using the VM
encryption key (VEK) created from LAUNCH_START.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |  150 
 1 file changed, 150 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b5fa8c0..62c2b22 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -38,6 +38,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -502,6 +504,7 @@ static void sev_deactivate_handle(struct kvm *kvm);
 static void sev_decommission_handle(struct kvm *kvm);
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+#define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
 
 static bool kvm_sev_enabled(void)
 {
@@ -5775,6 +5778,149 @@ static int sev_launch_start(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static struct page **sev_pin_memory(unsigned long uaddr, unsigned long ulen,
+   unsigned long *n)
+{
+   struct page **pages;
+   int first, last;
+   unsigned long npages, pinned;
+
+   /* Get number of pages */
+   first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+   last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+   npages = (last - first + 1);
+
+   pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return NULL;
+
+   /* pin the user virtual address */
+   down_read(&current->mm->mmap_sem);
+   pinned = get_user_pages_fast(uaddr, npages, 1, pages);
+   up_read(&current->mm->mmap_sem);
+   if (pinned != npages) {
+   printk(KERN_ERR "SEV: failed to pin %ld pages (got %ld)\n",
+   npages, pinned);
+   goto err;
+   }
+
+   *n = npages;
+   return pages;
+err:
+   if (pinned > 0)
+   release_pages(pages, pinned, 0);
+   kfree(pages);
+
+   return NULL;
+}
+
+static void sev_unpin_memory(struct page **pages, unsigned long npages)
+{
+   release_pages(pages, npages, 0);
+   kfree(pages);
+}
+
+static void sev_clflush_pages(struct page *pages[], int num_pages)
+{
+   unsigned long i;
+   uint8_t *page_virtual;
+
+   if (num_pages == 0 || pages == NULL)
+   return;
+
+   for (i = 0; i < num_pages; i++) {
+   page_virtual = kmap_atomic(pages[i]);
+   clflush_cache_range(page_virtual, PAGE_SIZE);
+   kunmap_atomic(page_virtual);
+   }
+}
+
+static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   struct page **inpages;
+   unsigned long uaddr, ulen;
+   int i, len, ret, offset;
+   unsigned long nr_pages;
+   struct kvm_sev_launch_update_data params;
+   struct sev_data_launch_update_data *data;
+
+   if (!sev_guest(kvm))
+   return -EINVAL;
+
+   /* Get the parameters from the user */
+   ret = -EFAULT;
+   if (copy_from_user(&params, (void *)argp->data,
+   sizeof(struct kvm_sev_launch_update_data)))
+   goto err_1;
+
+   uaddr = params.address;
+   ulen = params.length;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   /* pin user pages */
+   inpages = sev_pin_memory(params.address, params.length, &nr_pages);
+   if (!inpages) {
+   ret = -ENOMEM;
+   goto err_2;
+   }
+
+   /* invalidate the cache line for these pages to ensure that DRAM
+* has recent content before calling the SEV commands to perform
+* the encryption.
+*/
+   sev_clflush_pages(inpages, nr_pages);
+
+   /* the array of pages returned by get_user_pages() is page-aligned
+* memory. Since the user buffer is probably not page-aligned, we need
+* to calculate the offset within a page for the first update entry.
+*/
+   offset = uaddr & (PAGE_SIZE - 1);
+   len = min_t(size_t, (PAGE_SIZE - offset), ulen);
+   ulen -= len;
+
+   /* update first page -
+* special care needs to be taken for the first page because we might
+* be dealing with an offset within the page
+*/
+   data->handle = sev_get_handle(kvm);
+   data->length = len;
+   data->address = __sev_page_pa(inpages[0]) + offset;
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA,
+   data, &argp->error);
+   if (ret)
+   goto err_3;
+
+   /* update remaining pages */
+   for (i = 1; i < nr_pages; i++) {
+
+   len = min_t(size_t, PAGE_SIZE, ulen);
+   ulen -= len;
+   data->length = len;
+   data->address = __sev_

[RFC PATCH v2 18/32] kvm: svm: Use the hardware provided GPA instead of page walk

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky <thomas.lenda...@amd.com>

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
Reviewed-by: Borislav Petkov <b...@suse.de>
Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_emulate.h |1 +
 arch/x86/include/asm/kvm_host.h|3 ++
 arch/x86/kvm/emulate.c |   20 +---
 arch/x86/kvm/svm.c |2 ++
 arch/x86/kvm/x86.c |   45 
 5 files changed, 57 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h 
b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..3e8c287 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -441,5 +441,6 @@ int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
 int emulate_int_real(struct x86_emulate_ctxt *ctxt, int irq);
 void emulator_invalidate_register_cache(struct x86_emulate_ctxt *ctxt);
 void emulator_writeback_register_cache(struct x86_emulate_ctxt *ctxt);
+bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt);
 
 #endif /* _ASM_X86_KVM_X86_EMULATE_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 37326b5..bff1f15 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -668,6 +668,9 @@ struct kvm_vcpu_arch {
 
int pending_ioapic_eoi;
int pending_external_vector;
+
+   /* GPA available (AMD only) */
+   bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index cedbba0..45c7306 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -173,6 +173,7 @@
 #define NearBranch  ((u64)1 << 52)  /* Near branches */
 #define No16   ((u64)1 << 53)  /* No 16 bit operand */
 #define IncSP   ((u64)1 << 54)  /* SP is incremented before ModRM calc */
+#define TwoMemOp    ((u64)1 << 55)  /* Instruction has two memory operands */
 
 #define DstXacc (DstAccLo | SrcAccHi | SrcWrite)
 
@@ -4298,7 +4299,7 @@ static const struct opcode group1[] = {
 };
 
 static const struct opcode group1A[] = {
-   I(DstMem | SrcNone | Mov | Stack | IncSP, em_pop), N, N, N, N, N, N, N,
+   I(DstMem | SrcNone | Mov | Stack | IncSP | TwoMemOp, em_pop), N, N, N, N, N, N, N,
 };
 
 static const struct opcode group2[] = {
@@ -4336,7 +4337,7 @@ static const struct opcode group5[] = {
I(SrcMemFAddr | ImplicitOps,em_call_far),
I(SrcMem | NearBranch,  em_jmp_abs),
I(SrcMemFAddr | ImplicitOps,em_jmp_far),
-   I(SrcMem | Stack,   em_push), D(Undefined),
+   I(SrcMem | Stack | TwoMemOp,em_push), D(Undefined),
 };
 
 static const struct opcode group6[] = {
@@ -4556,8 +4557,8 @@ static const struct opcode opcode_table[256] = {
/* 0xA0 - 0xA7 */
I2bv(DstAcc | SrcMem | Mov | MemAbs, em_mov),
I2bv(DstMem | SrcAcc | Mov | MemAbs | PageTable, em_mov),
-   I2bv(SrcSI | DstDI | Mov | String, em_mov),
-   F2bv(SrcSI | DstDI | String | NoWrite, em_cmp_r),
+   I2bv(SrcSI | DstDI | Mov | String | TwoMemOp, em_mov),
+   F2bv(SrcSI | DstDI | String | NoWrite | TwoMemOp, em_cmp_r),
/* 0xA8 - 0xAF */
F2bv(DstAcc | SrcImm | NoWrite, em_test),
I2bv(SrcAcc | DstDI | Mov | String, em_mov),
@@ -5671,3 +5672,14 @@ void emulator_writeback_register_cache(struct 
x86_emulate_ctxt *ctxt)
 {
writeback_registers(ctxt);
 }
+
+bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt)
+{
+   if (ctxt->rep_prefix && (ctxt->d & String))
+   return false;
+
+   if (ctxt->d & TwoMemOp)
+   return false;
+
+   return true;
+}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 36d61ff..b581499 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4184,6 +4184,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+   vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
vcpu->arch.cr0 = svm->vmcb->save.cr0;
if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e6a593..2099df8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4465,6 +4465,21 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt 
*ctx

[RFC PATCH v2 17/32] x86: kvmclock: Clear encryption attribute when SEV is active

2017-03-02 Thread Brijesh Singh
The guest physical memory areas holding struct pvclock_wall_clock and
struct pvclock_vcpu_time_info are shared with the hypervisor, which
periodically updates their contents. When SEV is active we must clear the
encryption attributes of the shared memory pages so that both the
hypervisor and the guest can access the data.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kernel/kvmclock.c |   65 ++--
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 278de4f..3b38b3d 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 
+#include 
 #include 
 #include 
 
@@ -44,7 +45,7 @@ early_param("no-kvmclock", parse_no_kvmclock);
 
 /* The hypervisor will put information about time periodically here */
 static struct pvclock_vsyscall_time_info *hv_clock;
-static struct pvclock_wall_clock wall_clock;
+static struct pvclock_wall_clock *wall_clock;
 
 struct pvclock_vsyscall_time_info *pvclock_pvti_cpu0_va(void)
 {
@@ -62,15 +63,18 @@ static void kvm_get_wallclock(struct timespec *now)
int low, high;
int cpu;
 
-   low = (int)__pa_symbol(&wall_clock);
-   high = ((u64)__pa_symbol(&wall_clock) >> 32);
+   if (!wall_clock)
+   return;
+
+   low = (int)slow_virt_to_phys(wall_clock);
+   high = ((u64)slow_virt_to_phys(wall_clock) >> 32);
 
native_write_msr(msr_kvm_wall_clock, low, high);
 
cpu = get_cpu();
 
vcpu_time = &hv_clock[cpu].pvti;
-   pvclock_read_wallclock(&wall_clock, vcpu_time, now);
+   pvclock_read_wallclock(wall_clock, vcpu_time, now);
 
put_cpu();
 }
@@ -246,11 +250,40 @@ static void kvm_shutdown(void)
native_machine_shutdown();
 }
 
+static phys_addr_t kvm_memblock_alloc(phys_addr_t size, phys_addr_t align)
+{
+   phys_addr_t mem;
+
+   mem = memblock_alloc(size, align);
+   if (!mem)
+   return 0;
+
+   /* When SEV is active clear the encryption attributes of the pages */
+   if (sev_active()) {
+   if (early_set_memory_decrypted(__va(mem), size))
+   goto e_free;
+   }
+
+   return mem;
+e_free:
+   memblock_free(mem, size);
+   return 0;
+}
+
+static void kvm_memblock_free(phys_addr_t addr, phys_addr_t size)
+{
+   /* When SEV is active restore the encryption attributes of the pages */
+   if (sev_active())
+   early_set_memory_encrypted(__va(addr), size);
+
+   memblock_free(addr, size);
+}
+
 void __init kvmclock_init(void)
 {
struct pvclock_vcpu_time_info *vcpu_time;
-   unsigned long mem;
-   int size, cpu;
+   unsigned long mem, mem_wall_clock;
+   int size, cpu, wall_clock_size;
u8 flags;
 
size = PAGE_ALIGN(sizeof(struct pvclock_vsyscall_time_info)*NR_CPUS);
@@ -267,15 +300,29 @@ void __init kvmclock_init(void)
printk(KERN_INFO "kvm-clock: Using msrs %x and %x",
msr_kvm_system_time, msr_kvm_wall_clock);
 
-   mem = memblock_alloc(size, PAGE_SIZE);
-   if (!mem)
+   wall_clock_size = PAGE_ALIGN(sizeof(struct pvclock_wall_clock));
+   mem_wall_clock = kvm_memblock_alloc(wall_clock_size, PAGE_SIZE);
+   if (!mem_wall_clock)
return;
+
+   wall_clock = __va(mem_wall_clock);
+   memset(wall_clock, 0, wall_clock_size);
+
+   mem = kvm_memblock_alloc(size, PAGE_SIZE);
+   if (!mem) {
+   kvm_memblock_free(mem_wall_clock, wall_clock_size);
+   wall_clock = NULL;
+   return;
+   }
+
hv_clock = __va(mem);
memset(hv_clock, 0, size);
 
if (kvm_register_clock("primary cpu clock")) {
hv_clock = NULL;
-   memblock_free(mem, size);
+   kvm_memblock_free(mem, size);
+   kvm_memblock_free(mem_wall_clock, wall_clock_size);
+   wall_clock = NULL;
return;
}
 



[RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-03-02 Thread Brijesh Singh
If kernel_map_pages_in_pgd() is called early in the boot process to change
memory attributes, it fails to allocate memory when splitting large pages.
The patch extends cpa_data to support using memblock_alloc() when the slab
allocator is not available.

The feature will be used in Secure Encrypted Virtualization (SEV) mode,
where we may need to change memory region attributes early in the boot
process.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/mm/pageattr.c |   51 
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 46cc89d..9e4ab3b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -37,6 +38,7 @@ struct cpa_data {
int flags;
unsigned long   pfn;
unsigned        force_split : 1;
+   unsigned        force_memblock : 1;
int curpage;
struct page **pages;
 };
@@ -627,9 +629,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 
 static int
 __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
-  struct page *base)
+ pte_t *pbase, unsigned long new_pfn)
 {
-   pte_t *pbase = (pte_t *)page_address(base);
unsigned long ref_pfn, pfn, pfninc = 1;
unsigned int i, level;
pte_t *tmp;
@@ -646,7 +647,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
return 1;
}
 
-   paravirt_alloc_pte(&init_mm, page_to_pfn(base));
+   paravirt_alloc_pte(&init_mm, new_pfn);
 
switch (level) {
case PG_LEVEL_2M:
@@ -707,7 +708,8 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
 * pagetable protections, the actual ptes set above control the
 * primary protection behavior:
 */
-   __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));
+   __set_pmd_pte(kpte, address,
+   native_make_pte((new_pfn << PAGE_SHIFT) + _KERNPG_TABLE));
 
/*
 * Intel Atom errata AAH41 workaround.
@@ -723,21 +725,50 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
return 0;
 }
 
+static pte_t *try_alloc_pte(struct cpa_data *cpa, unsigned long *pfn)
+{
+   unsigned long phys;
+   struct page *base;
+
+   if (cpa->force_memblock) {
+   phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+   if (!phys)
+   return NULL;
+   *pfn = phys >> PAGE_SHIFT;
+   return (pte_t *)__va(phys);
+   }
+
+   base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
+   if (!base)
+   return NULL;
+   *pfn = page_to_pfn(base);
+   return (pte_t *)page_address(base);
+}
+
+static void try_free_pte(struct cpa_data *cpa, pte_t *pte)
+{
+   if (cpa->force_memblock)
+   memblock_free(__pa(pte), PAGE_SIZE);
+   else
+   __free_page((struct page *)pte);
+}
+
 static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
unsigned long address)
 {
-   struct page *base;
+   pte_t *new_pte;
+   unsigned long new_pfn;
 
if (!debug_pagealloc_enabled())
spin_unlock(_lock);
-   base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
+   new_pte = try_alloc_pte(cpa, &new_pfn);
if (!debug_pagealloc_enabled())
spin_lock(_lock);
-   if (!base)
+   if (!new_pte)
return -ENOMEM;
 
-   if (__split_large_page(cpa, kpte, address, base))
-   __free_page(base);
+   if (__split_large_page(cpa, kpte, address, new_pte, new_pfn))
+   try_free_pte(cpa, new_pte);
 
return 0;
 }
@@ -2035,6 +2066,7 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned 
long address,
unsigned numpages, unsigned long page_flags)
 {
int retval = -EINVAL;
+   int use_memblock = !slab_is_available();
 
struct cpa_data cpa = {
.vaddr = &address,
@@ -2044,6 +2076,7 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned 
long address,
.mask_set = __pgprot(0),
.mask_clr = __pgprot(0),
.flags = 0,
+   .force_memblock = use_memblock,
};
 
if (!(__supported_pte_mask & _PAGE_NX))



[RFC PATCH v2 12/32] x86: Add early boot support when running with SEV active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Early in the boot process, add checks to determine if the kernel is
running with Secure Encrypted Virtualization (SEV) active by issuing
a CPUID instruction.

During early compressed kernel booting, if SEV is active the pagetables are
updated so that data is accessed and decompressed with encryption.

During uncompressed kernel booting, if SEV is active the memory encryption
mask is set and a flag is set to indicate that SEV is enabled.
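
For readability, a hedged C-level equivalent of the assembly check added
below; the leaf numbers and the KVM_FEATURE_SEV bit are taken from this
patch, but the helper itself is illustrative only:

static bool sev_check(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Is a hypervisor present, and does it expose the KVM SEV feature? */
        cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
        if (eax < 0x40000001)
                return false;

        cpuid(0x40000001, &eax, &ebx, &ecx, &edx);
        if (!(eax & (1 << KVM_FEATURE_SEV)))
                return false;

        /* CPUID Fn8000_001F[EAX] bit 0: memory encryption supported */
        cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
        return eax & 1;
}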

Signed-off-by: Tom Lendacky 
---
 arch/x86/boot/compressed/Makefile  |2 +
 arch/x86/boot/compressed/head_64.S |   16 +++
 arch/x86/boot/compressed/mem_encrypt.S |   75 
 arch/x86/include/uapi/asm/hyperv.h |4 ++
 arch/x86/include/uapi/asm/kvm_para.h   |3 +
 arch/x86/kernel/mem_encrypt_init.c |   24 ++
 6 files changed, 124 insertions(+)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 44163e8..51f9cd0 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -72,6 +72,8 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o 
$(obj)/misc.o \
$(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
$(obj)/piggy.o $(obj)/cpuflags.o
 
+vmlinux-objs-$(CONFIG_X86_64) += $(obj)/mem_encrypt.o
+
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index d2ae1f8..625b5380 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -130,6 +130,19 @@ ENTRY(startup_32)
  /*
   * Build early 4G boot pagetable
   */
+   /*
+* If SEV is active, set the encryption mask in the page tables. This
+* will ensure that when the kernel is copied and decompressed it
+* will be done so encrypted.
+*/
+   call    sev_enabled
+   xorl    %edx, %edx
+   testl   %eax, %eax
+   jz  1f
+   subl    $32, %eax   /* Encryption bit is always above bit 31 */
+   bts %eax, %edx  /* Set encryption mask for page tables */
+1:
+
/* Initialize Page tables to 0 */
leal    pgtable(%ebx), %edi
xorl    %eax, %eax
@@ -140,12 +153,14 @@ ENTRY(startup_32)
leal    pgtable + 0(%ebx), %edi
leal    0x1007 (%edi), %eax
movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
 
/* Build Level 3 */
leal    pgtable + 0x1000(%ebx), %edi
leal    0x1007(%edi), %eax
movl    $4, %ecx
1:  movl    %eax, 0x00(%edi)
+   addl    %edx, 0x04(%edi)
addl    $0x00001000, %eax
addl    $8, %edi
decl    %ecx
@@ -156,6 +171,7 @@ ENTRY(startup_32)
movl    $0x00000183, %eax
movl    $2048, %ecx
1:  movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
addl    $0x00200000, %eax
addl    $8, %edi
decl    %ecx
diff --git a/arch/x86/boot/compressed/mem_encrypt.S 
b/arch/x86/boot/compressed/mem_encrypt.S
new file mode 100644
index 0000000..8313c31
--- /dev/null
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -0,0 +1,75 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+   .text
+   .code32
+ENTRY(sev_enabled)
+   xor %eax, %eax
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   push    %ebx
+   push    %ecx
+   push    %edx
+
+   /* Check if running under a hypervisor */
+   movl    $0x40000000, %eax
+   cpuid
+   cmpl    $0x40000001, %eax
+   jb  .Lno_sev
+
+   movl    $0x40000001, %eax
+   cpuid
+   bt  $KVM_FEATURE_SEV, %eax
+   jnc .Lno_sev
+
+   /*
+* Check for memory encryption feature:
+*   CPUID Fn8000_001F[EAX] - Bit 0
+*/
+   movl    $0x8000001f, %eax
+   cpuid
+   bt  $0, %eax
+   jnc .Lno_sev
+
+   /*
+* Get memory encryption information:
+*   CPUID Fn8000_001F[EBX] - Bits 5:0
+* Pagetable bit position used to indicate encryption
+*/
+   movl    %ebx, %eax
+   andl    $0x3f, %eax
+   movl    %eax, sev_enc_bit(%ebp)
+   jmp .Lsev_exit
+
+.Lno_sev:
+   xor %eax, %eax
+
+.Lsev_exit:
+   pop %edx
+   pop %ecx
+   pop %ebx
+
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+   ret
+ENDPROC(sev_enabled)
+
+   .bss
+sev_enc_bit:
+   .word   0
diff --git 

[RFC PATCH v2 09/32] x86: Change early_ioremap to early_memremap for BOOT data

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

In order to map BOOT data with the proper encryption bit, the
early_ioremap() function calls are changed to early_memremap() calls.
This allows the proper access for both SME and SEV.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/acpi/boot.c |4 ++--
 arch/x86/kernel/mpparse.c   |   10 +-
 drivers/sfi/sfi_core.c  |6 +++---
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 35174c6..468c25a 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -124,7 +124,7 @@ char *__init __acpi_map_table(unsigned long phys, unsigned 
long size)
if (!phys || !size)
return NULL;
 
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size);
 }
 
 void __init __acpi_unmap_table(char *map, unsigned long size)
@@ -132,7 +132,7 @@ void __init __acpi_unmap_table(char *map, unsigned long 
size)
if (!map || !size)
return;
 
-   early_iounmap(map, size);
+   early_memunmap(map, size);
 }
 
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
index 0d904d7..fd37f39 100644
--- a/arch/x86/kernel/mpparse.c
+++ b/arch/x86/kernel/mpparse.c
@@ -436,9 +436,9 @@ static unsigned long __init get_mpc_size(unsigned long 
physptr)
struct mpc_table *mpc;
unsigned long size;
 
-   mpc = early_ioremap(physptr, PAGE_SIZE);
+   mpc = early_memremap(physptr, PAGE_SIZE);
size = mpc->length;
-   early_iounmap(mpc, PAGE_SIZE);
+   early_memunmap(mpc, PAGE_SIZE);
apic_printk(APIC_VERBOSE, "  mpc: %lx-%lx\n", physptr, physptr + size);
 
return size;
@@ -450,7 +450,7 @@ static int __init check_physptr(struct mpf_intel *mpf, 
unsigned int early)
unsigned long size;
 
size = get_mpc_size(mpf->physptr);
-   mpc = early_ioremap(mpf->physptr, size);
+   mpc = early_memremap(mpf->physptr, size);
/*
 * Read the physical hardware table.  Anything here will
 * override the defaults.
@@ -461,10 +461,10 @@ static int __init check_physptr(struct mpf_intel *mpf, 
unsigned int early)
 #endif
pr_err("BIOS bug, MP table errors detected!...\n");
pr_cont("... disabling SMP support. (tell your hw vendor)\n");
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
return -1;
}
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
 
if (early)
return -1;
diff --git a/drivers/sfi/sfi_core.c b/drivers/sfi/sfi_core.c
index 296db7a..d00ae3f 100644
--- a/drivers/sfi/sfi_core.c
+++ b/drivers/sfi/sfi_core.c
@@ -92,7 +92,7 @@ static struct sfi_table_simple *syst_va __read_mostly;
 static u32 sfi_use_ioremap __read_mostly;
 
 /*
- * sfi_un/map_memory calls early_ioremap/iounmap which is a __init function
+ * sfi_un/map_memory calls early_memremap/memunmap which is a __init function
  * and introduces section mismatch. So use __ref to make it calm.
  */
 static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
@@ -103,7 +103,7 @@ static void __iomem * __ref sfi_map_memory(u64 phys, u32 
size)
if (sfi_use_ioremap)
return ioremap_cache(phys, size);
else
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size);
 }
 
 static void __ref sfi_unmap_memory(void __iomem *virt, u32 size)
@@ -114,7 +114,7 @@ static void __ref sfi_unmap_memory(void __iomem *virt, u32 
size)
if (sfi_use_ioremap)
iounmap(virt);
else
-   early_iounmap(virt, size);
+   early_memunmap(virt, size);
 }
 
 static void sfi_print_table_header(unsigned long long pa,



[RFC PATCH v2 10/32] x86: DMA support for SEV memory encryption

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

DMA access to memory mapped as encrypted while SEV is active cannot be
encrypted during device write or decrypted during device read. In order
for DMA to properly work when SEV is active, the swiotlb bounce buffers
must be used.
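
From the guest driver's point of view the change is transparent: a plain
DMA API call is enough, and the SEV dma_ops below either return the buffer
decrypted (C-bit clear) or bounce it through swiotlb. A hedged sketch of
such ordinary driver code (nothing here is SEV-specific):

#include <linux/dma-mapping.h>

static int example_dma_alloc(struct device *dev, size_t size)
{
        dma_addr_t dma_handle;
        void *buf;

        /* With sme_dma_ops installed, buf is shared with the hypervisor */
        buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        /* ... program the device with dma_handle ... */

        dma_free_coherent(dev, size, buf, dma_handle);
        return 0;
}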

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/mem_encrypt.c |   77 +
 1 file changed, 77 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 090419b..7df5f4c 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -197,8 +197,81 @@ void __init sme_early_init(void)
/* Update the protection map with memory encryption mask */
for (i = 0; i < ARRAY_SIZE(protection_map); i++)
protection_map[i] = pgprot_encrypted(protection_map[i]);
+
+   if (sev_active())
+   swiotlb_force = SWIOTLB_FORCE;
+}
+
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+  gfp_t gfp, unsigned long attrs)
+{
+   unsigned long dma_mask;
+   unsigned int order;
+   struct page *page;
+   void *vaddr = NULL;
+
+   dma_mask = dma_alloc_coherent_mask(dev, gfp);
+   order = get_order(size);
+
+   gfp &= ~__GFP_ZERO;
+
+   page = alloc_pages_node(dev_to_node(dev), gfp, order);
+   if (page) {
+   dma_addr_t addr;
+
+   /*
+* Since we will be clearing the encryption bit, check the
+* mask with it already cleared.
+*/
+   addr = phys_to_dma(dev, page_to_phys(page)) & ~sme_me_mask;
+   if ((addr + size) > dma_mask) {
+   __free_pages(page, get_order(size));
+   } else {
+   vaddr = page_address(page);
+   *dma_handle = addr;
+   }
+   }
+
+   if (!vaddr)
+   vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
+
+   if (!vaddr)
+   return NULL;
+
+   /* Clear the SME encryption bit for DMA use if not swiotlb area */
+   if (!is_swiotlb_buffer(dma_to_phys(dev, *dma_handle))) {
+   set_memory_decrypted((unsigned long)vaddr, 1 << order);
+   *dma_handle &= ~sme_me_mask;
+   }
+
+   return vaddr;
 }
 
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+dma_addr_t dma_handle, unsigned long attrs)
+{
+   /* Set the SME encryption bit for re-use if not swiotlb area */
+   if (!is_swiotlb_buffer(dma_to_phys(dev, dma_handle)))
+   set_memory_encrypted((unsigned long)vaddr,
+1 << get_order(size));
+
+   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+   .alloc  = sme_alloc,
+   .free   = sme_free,
+   .map_page   = swiotlb_map_page,
+   .unmap_page = swiotlb_unmap_page,
+   .map_sg = swiotlb_map_sg_attrs,
+   .unmap_sg   = swiotlb_unmap_sg_attrs,
+   .sync_single_for_cpu= swiotlb_sync_single_for_cpu,
+   .sync_single_for_device = swiotlb_sync_single_for_device,
+   .sync_sg_for_cpu= swiotlb_sync_sg_for_cpu,
+   .sync_sg_for_device = swiotlb_sync_sg_for_device,
+   .mapping_error  = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -208,6 +281,10 @@ void __init mem_encrypt_init(void)
/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
swiotlb_update_mem_attributes();
 
+   /* Use SEV DMA operations if SEV is active */
+   if (sev_active())
+   dma_ops = &sme_dma_ops;
+
pr_info("AMD Secure Memory Encryption (SME) active\n");
 }
 



[RFC PATCH v2 11/32] x86: Unroll string I/O when SEV is active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.
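
As a hedged illustration, the byte variant of the macro below effectively
expands to the following when sev_active() is true (the helper name is
invented for clarity and is not in the patch):

static inline void outsb_unrolled(int port, const void *addr,
                                  unsigned long count)
{
        const unsigned char *value = addr;

        /* one element at a time instead of "rep; outsb" */
        while (count--)
                outb(*value++, port);
}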

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/io.h |   26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 833f7cc..b596114 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -327,14 +327,32 @@ static inline unsigned type in##bwl##_p(int port) 
\
\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; outs" #bwl   \
-: "+S"(addr), "+c"(count) : "d"(port));\
+   if (sev_active()) { \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   out##bwl(*value, port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; outs" #bwl   \
+: "+S"(addr), "+c"(count) : "d"(port));\
+   }   \
 }  \
\
 static inline void ins##bwl(int port, void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; ins" #bwl\
-: "+D"(addr), "+c"(count) : "d"(port));\
+   if (sev_active()) { \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   *value = in##bwl(port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; ins" #bwl\
+: "+D"(addr), "+c"(count) : "d"(port));\
+   }   \
 }
 
 BUILDIO(b, b, char)



[RFC PATCH v2 07/32] x86/efi: Access EFI data as encrypted when SEV is active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky <thomas.lenda...@amd.com>

EFI data is encrypted when the kernel is run under SEV. Update the
page table references to be sure the EFI memory areas are accessed
encrypted.

Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/platform/efi/efi_64.c |   15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 2d8674d..9a76ed8 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * We allocate runtime services regions bottom-up, starting from -4G, i.e.
@@ -286,7 +287,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, 
unsigned num_pages)
 * as trim_bios_range() will reserve the first page and isolate it away
 * from memory allocators anyway.
 */
-   if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, _PAGE_RW)) {
+   pf = _PAGE_RW;
+   if (sev_active())
+   pf |= _PAGE_ENC;
+   if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, pf)) {
pr_err("Failed to create 1:1 mapping for the first page!\n");
return 1;
}
@@ -329,6 +333,9 @@ static void __init __map_region(efi_memory_desc_t *md, u64 
va)
if (!(md->attribute & EFI_MEMORY_WB))
flags |= _PAGE_PCD;
 
+   if (sev_active())
+   flags |= _PAGE_ENC;
+
pfn = md->phys_addr >> PAGE_SHIFT;
if (kernel_map_pages_in_pgd(pgd, pfn, va, md->num_pages, flags))
pr_warn("Error mapping PA 0x%llx -> VA 0x%llx!\n",
@@ -455,6 +462,9 @@ static int __init efi_update_mem_attr(struct mm_struct *mm, 
efi_memory_desc_t *m
if (!(md->attribute & EFI_MEMORY_RO))
pf |= _PAGE_RW;
 
+   if (sev_active())
+   pf |= _PAGE_ENC;
+
return efi_update_mappings(md, pf);
 }
 
@@ -506,6 +516,9 @@ void __init efi_runtime_update_mappings(void)
(md->type != EFI_RUNTIME_SERVICES_CODE))
pf |= _PAGE_RW;
 
+   if (sev_active())
+   pf |= _PAGE_ENC;
+
efi_update_mappings(md, pf);
}
 }



[RFC PATCH v2 13/32] KVM: SVM: Enable SEV by setting the SEV_ENABLE CPU feature

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Modify the SVM cpuid update function to indicate if Secure Encrypted
Virtualization (SEV) is active in the guest by setting the SEV KVM CPU
features bit. SEV is active if Secure Memory Encryption is enabled in
the host and the SEV_ENABLE bit of the VMCB is set.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kvm/cpuid.c |4 +++-
 arch/x86/kvm/svm.c   |   18 ++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 1639de8..e0c40a8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -601,7 +601,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
entry->edx = 0;
break;
case 0x80000000:
-   entry->eax = min(entry->eax, 0x8000001a);
+   entry->eax = min(entry->eax, 0x8000001f);
break;
case 0x80000001:
entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
@@ -634,6 +634,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
break;
case 0x8000001d:
break;
+   case 0x8000001f:
+   break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
/*Just support up to 0xC0000004 now*/
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 75b0645..36d61ff 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -46,6 +46,7 @@
 #include 
 
 #include 
+#include 
 #include "trace.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
@@ -5005,10 +5006,27 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_cpuid_entry2 *entry;
+   struct vmcb_control_area *ca = &svm->vmcb->control;
+   struct kvm_cpuid_entry2 *features, *sev_info;
 
/* Update nrips enabled cache */
svm->nrips_enabled = !!guest_cpuid_has_nrips(>vcpu);
 
+   /* Check for Secure Encrypted Virtualization support */
+   features = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
+   if (!features)
+   return;
+
+   sev_info = kvm_find_cpuid_entry(vcpu, 0x8000001f, 0);
+   if (!sev_info)
+   return;
+
+   if (ca->nested_ctl & SVM_NESTED_CTL_SEV_ENABLE) {
+   features->eax |= (1 << KVM_FEATURE_SEV);
+   cpuid(0x8000001f, &sev_info->eax, &sev_info->ebx,
+ &sev_info->ecx, &sev_info->edx);
+   }
+
if (!kvm_vcpu_apicv_active(vcpu))
return;
 



[RFC PATCH v2 08/32] x86: Use PAGE_KERNEL protection for ioremap of memory page

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

In order for memory pages to be properly mapped when SEV is active, we
need to use the PAGE_KERNEL protection attribute as the base protection.
This will ensure that memory mappings of, e.g., ACPI tables receive the
proper mapping attributes.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/ioremap.c |8 
 include/linux/mm.h|1 +
 kernel/resource.c |   40 
 3 files changed, 49 insertions(+)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c400ab5..481c999 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -151,7 +151,15 @@ static void __iomem *__ioremap_caller(resource_size_t 
phys_addr,
pcm = new_pcm;
}
 
+   /*
+* If the page being mapped is in memory and SEV is active then
+* make sure the memory encryption attribute is enabled in the
+* resulting mapping.
+*/
prot = PAGE_KERNEL_IO;
+   if (sev_active() && page_is_mem(pfn))
+   prot = __pgprot(pgprot_val(prot) | _PAGE_ENC);
+
switch (pcm) {
case _PAGE_CACHE_MODE_UC:
default:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b84615b..825df27 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -445,6 +445,7 @@ static inline int get_page_unless_zero(struct page *page)
 }
 
 extern int page_is_ram(unsigned long pfn);
+extern int page_is_mem(unsigned long pfn);
 
 enum {
REGION_INTERSECTS,
diff --git a/kernel/resource.c b/kernel/resource.c
index 9b5f044..db56ba3 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -518,6 +518,46 @@ int __weak page_is_ram(unsigned long pfn)
 }
 EXPORT_SYMBOL_GPL(page_is_ram);
 
+/*
+ * This function returns true if the target memory is marked as
+ * IORESOURCE_MEM and IORESOURCE_BUSY and described as other than
+ * IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES).
+ */
+static int walk_mem_range(unsigned long start_pfn, unsigned long nr_pages)
+{
+   struct resource res;
+   unsigned long pfn, end_pfn;
+   u64 orig_end;
+   int ret = -1;
+
+   res.start = (u64) start_pfn << PAGE_SHIFT;
+   res.end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+   res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+   orig_end = res.end;
+   while ((res.start < res.end) &&
(find_next_iomem_res(&res, IORES_DESC_NONE, true) >= 0)) {
+   pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
+   end_pfn = (res.end + 1) >> PAGE_SHIFT;
+   if (end_pfn > pfn)
+   ret = (res.desc != IORES_DESC_NONE) ? 1 : 0;
+   if (ret)
+   break;
+   res.start = res.end + 1;
+   res.end = orig_end;
+   }
+   return ret;
+}
+
+/*
+ * This generic page_is_mem() returns true if specified address is
+ * registered as memory in iomem_resource list.
+ */
+int __weak page_is_mem(unsigned long pfn)
+{
+   return walk_mem_range(pfn, 1) == 1;
+}
+EXPORT_SYMBOL_GPL(page_is_mem);
+
 /**
  * region_intersects() - determine intersection of region with known resources
  * @start: region start address



[RFC PATCH v2 00/32] x86: Secure Encrypted Virtualization (AMD)

2017-03-02 Thread Brijesh Singh
This RFC series provides support for AMD's new Secure Encrypted Virtualization
(SEV) feature. This RFC is built upon Secure Memory Encryption (SME) RFCv4 [1].

SEV is an extension to the AMD-V architecture which supports running multiple
VMs under the control of a hypervisor. When enabled, SEV hardware tags all
code and data with its VM ASID which indicates which VM the data originated
from or is intended for. This tag is kept with the data at all times when
inside the SOC, and prevents that data from being used by anyone other than the
owner. While the tag protects VM data inside the SOC, AES with 128 bit
encryption protects data outside the SOC. When data leaves or enters the SOC,
it is encrypted/decrypted  respectively by hardware with a key based on the
associated tag.

SEV guest VMs have the concept of private and shared memory. Private memory
is encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to be
private. The choice is made using the standard CPU page tables via the C-bit,
and is fully controlled by the guest. For security reasons, all DMA operations
inside the guest must be performed on shared pages (C-bit clear). Note that
since the C-bit is only controllable by the guest OS when it is operating in
64-bit or 32-bit PAE mode, in all other modes the SEV hardware forces the
C-bit to a 1; a minimal illustration of this guest-side control follows below.
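
As a minimal illustration of that guest-side control, assuming the _PAGE_ENC
mask from the companion SME series (these helpers are illustrative only, not
code from these patches):

static pte_t make_shared(pte_t pte)
{
        return pte_clear_flags(pte, _PAGE_ENC); /* C-bit clear: DMA-able */
}

static pte_t make_private(pte_t pte)
{
        return pte_set_flags(pte, _PAGE_ENC);   /* C-bit set: encrypted */
}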

SEV is designed to protect guest VMs from a benign but vulnerable (i.e. not
fully malicious) hypervisor. In particular, it reduces the attack surface of
guest VMs and can prevent certain types of VM-escape bugs (e.g. hypervisor
read-anywhere) from being used to steal guest data.

The RFC series also expands the crypto driver (ccp.ko) to include support for
the Platform Security Processor (PSP), which is used for communicating with the
SEV firmware that runs within the AMD secure processor and provides secure key
management interfaces. The hypervisor uses this interface to encrypt the
bootstrap code and perform common activities such as launching, running,
snapshotting, migrating and debugging an encrypted guest.

A new ioctl (KVM_MEMORY_ENCRYPT_OP) is introduced which can be used by Qemu to
issue SEV guest life cycle commands.
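
Usage from Qemu would look roughly like the sketch below; the field names of
struct kvm_sev_cmd are assumptions inferred from the handlers in this series:

	struct kvm_sev_launch_start start = {};
	struct kvm_sev_cmd cmd = {};

	cmd.id = KVM_SEV_LAUNCH_START;	/* SEV sub-command */
	cmd.data = (__u64)&start;	/* sub-command parameters */
	cmd.sev_fd = sev_fd;		/* fd from open("/dev/sev") */

	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
		/* on failure, cmd.error holds the SEV firmware error code */
		fprintf(stderr, "LAUNCH_START failed: %#x\n", cmd.error);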

The RFC series also includes the patches required in the guest OS to enable the
SEV feature. A guest OS can check for SEV support through the KVM feature CPUID
bits.

The patch breakdown:
* [1 - 17]: guest OS specific changes when SEV is active
* [18]: already queued in the kvm upstream tree but not yet in the tip tree;
  it is included so that the build does not fail
* [19 - 21]: since CCP and PSP share the same PCIe ID, these patches expand
  the CCP driver by creating a high-level AMD Secure Processor (SP) framework
  to allow integration of the PSP device into ccp.ko.
* [22 - 32]: hypervisor changes to support memory encryption

The following links provide additional details:

AMD Memory Encryption whitepaper:
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
http://support.amd.com/TechDocs/24593.pdf
SME is section 7.10
SEV is section 15.34

Secure Encrypted Virtualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Specification.pdf

KVM Forum Presentation:
http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf

[1] http://marc.info/?l=linux-kernel&m=148725974113693&w=2

---

Based on the feedback, we have started adding SEV guest support to the OVMF
BIOS. This series has been tested using an EDK2/OVMF BIOS; the initial EDK2
patches have been submitted to the edk2 mailing list for discussion.

TODO:
 - add support for migration commands
 - update QEMU RFC's to SEV spec 0.14
 - investigate virtio and vfio support for SEV guest
 - investigate SMM support for SEV guest
 - add support for nested virtualization

Changes since v1:
 - update to newer SEV key management API spec (0.12 -> 0.14)
 - expand the CCP driver and integrate the PSP interface support
 - remove the usage of SEV ref_count and release the SEV FW resources in
   kvm_x86_ops->vm_destroy
 - acquire the kvm->lock before executing the SEV commands and release on exit.
 - rename ioctl from KVM_SEV_ISSUE_CMD to KVM_MEMORY_ENCRYPT_OP
 - extend KVM_MEMORY_ENCRYPT_OP ioctl to require file descriptor for the SEV
   device. A program without access to /dev/sev will not be able to issue SEV
   commands
 - update vmcb on successful LAUNCH_FINISH to indicate that SEV is active
 - several fixes based on Paolo's review feedback
 - add APIs to support sharing the guest physical address with hypervisor
 - update kvm pvclock driver to use the shared buffer when SEV is active
 - pin the SEV guest memory

Brijesh Singh (18):
  x86: mm

[RFC PATCH v2 05/32] x86: Use encrypted access of BOOT related data with SEV

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

When Secure Encrypted Virtualization (SEV) is active, BOOT data (such as
EFI related data, setup data) is encrypted and needs to be accessed as
such when mapped. Update the architecture override in early_memremap to
keep the encryption attribute when mapping this data.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/ioremap.c |   36 +++-
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c6cb921..c400ab5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -462,12 +462,31 @@ static bool memremap_is_setup_data(resource_size_t phys_addr,
 }
 
 /*
- * This function determines if an address should be mapped encrypted.
- * Boot setup data, EFI data and E820 areas are checked in making this
- * determination.
+ * This function determines if an address should be mapped encrypted when
+ * SEV is active.  E820 areas are checked in making this determination.
  */
-static bool memremap_should_map_encrypted(resource_size_t phys_addr,
- unsigned long size)
+static bool memremap_sev_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
+{
+   /* Check if the address is in persistent memory */
+   switch (e820__get_entry_type(phys_addr, phys_addr + size - 1)) {
+   case E820_TYPE_PMEM:
+   case E820_TYPE_PRAM:
+   return false;
+   default:
+   break;
+   }
+
+   return true;
+}
+
+/*
+ * This function determines if an address should be mapped encrypted when
+ * SME is active.  Boot setup data, EFI data and E820 areas are checked in
+ * making this determination.
+ */
+static bool memremap_sme_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
 {
/*
 * SME is not active, return true:
@@ -508,6 +527,13 @@ static bool memremap_should_map_encrypted(resource_size_t phys_addr,
return true;
 }
 
+static bool memremap_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
+{
+   return sev_active() ? memremap_sev_should_map_encrypted(phys_addr, size)
+   : memremap_sme_should_map_encrypted(phys_addr, size);
+}
+
 /*
 * Architecture function to determine if RAM remap is allowed.
  */



[RFC PATCH v2 06/32] x86/pci: Use memremap when walking setup data

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

The use of ioremap will force the setup data to be mapped decrypted even
though setup data is encrypted.  Switch to using memremap which will be
able to perform the proper mapping.

Signed-off-by: Tom Lendacky 
---
 arch/x86/pci/common.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index a4fdfa7..0b06670 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -691,7 +691,7 @@ int pcibios_add_device(struct pci_dev *dev)
 
pa_data = boot_params.hdr.setup_data;
while (pa_data) {
-   data = ioremap(pa_data, sizeof(*rom));
+   data = memremap(pa_data, sizeof(*rom), MEMREMAP_WB);
if (!data)
return -ENOMEM;
 
@@ -710,7 +710,7 @@ int pcibios_add_device(struct pci_dev *dev)
}
}
pa_data = data->next;
-   iounmap(data);
+   memunmap(data);
}
set_dma_domain_ops(dev);
set_dev_domain_options(dev);



[RFC PATCH v2 02/32] x86: Secure Encrypted Virtualization (SEV) support

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Provide support for Secure Encrypted Virtualization (SEV). This initial
support defines a flag that is used by the kernel to determine if it is
running with SEV active.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |   14 +-
 arch/x86/mm/mem_encrypt.c  |3 +++
 include/linux/mem_encrypt.h|6 ++
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 1fd5426..9799835 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,10 +20,16 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
+extern unsigned int sev_enabled;
 
 static inline bool sme_active(void)
 {
-   return (sme_me_mask) ? true : false;
+   return (sme_me_mask && !sev_enabled) ? true : false;
+}
+
+static inline bool sev_active(void)
+{
+   return (sme_me_mask && sev_enabled) ? true : false;
 }
 
 static inline u64 sme_dma_mask(void)
@@ -53,6 +59,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #ifndef sme_me_mask
#define sme_me_mask	0UL
+#define sev_enabled	0
 
 static inline bool sme_active(void)
 {
@@ -64,6 +71,11 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+   return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
return 0;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c5062e1..090419b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -34,6 +34,9 @@ void __init __early_pgtable_flush(void);
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
+unsigned int sev_enabled __section(.data) = 0;
+EXPORT_SYMBOL_GPL(sev_enabled);
+
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 913cf80..4b47c73 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -23,6 +23,7 @@
 
 #ifndef sme_me_mask
#define sme_me_mask	0UL
+#define sev_enabled	0
 
 static inline bool sme_active(void)
 {
@@ -34,6 +35,11 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+   return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
return 0;



[RFC PATCH v2 04/32] KVM: SVM: Add SEV feature definitions to KVM

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Define a new KVM CPU feature for Secure Encrypted Virtualization (SEV).
The kernel will check for the presence of this feature to determine if
it is running with SEV active.

Define the SEV enable bit for the VMCB control structure. The hypervisor
will use this bit to enable SEV in the guest.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h   |1 +
 arch/x86/include/uapi/asm/kvm_para.h |1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 2aca535..fba2a7b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -137,6 +137,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_VM_CR_SVM_DIS_MASK  0x0000000000000010ULL
 
 #define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+#define SVM_NESTED_CTL_SEV_ENABLE  BIT(1)
 
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 1421a65..bc2802f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -24,6 +24,7 @@
 #define KVM_FEATURE_STEAL_TIME 5
 #define KVM_FEATURE_PV_EOI 6
 #define KVM_FEATURE_PV_UNHALT  7
+#define KVM_FEATURE_SEV	8
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.



[RFC PATCH v2 01/32] x86: Add the Secure Encrypted Virtualization CPU feature

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Update the CPU features to include identifying and reporting on the
Secure Encrypted Virtualization (SEV) feature.  SEV is identified by
CPUID 0x8000001f, but requires BIOS support to enable it (set bit 23 of
MSR_K8_SYSCFG and set bit 0 of MSR_K7_HWCR).  Only show the SEV feature
as available if reported by CPUID and enabled by BIOS.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/cpufeatures.h |1 +
 arch/x86/include/asm/msr-index.h   |2 ++
 arch/x86/kernel/cpu/amd.c  |   22 ++
 arch/x86/kernel/cpu/scattered.c|1 +
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b1a4468..9907579 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -188,6 +188,7 @@
  */
 
#define X86_FEATURE_SME		( 7*32+ 0) /* AMD Secure Memory Encryption */
+#define X86_FEATURE_SEV		( 7*32+ 1) /* AMD Secure Encrypted Virtualization */
#define X86_FEATURE_CPB		( 7*32+ 2) /* AMD Core Performance Boost */
#define X86_FEATURE_EPB		( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
 #define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* Cache Allocation Technology L3 */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e2d0503..e8b3b28 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -361,6 +361,8 @@
#define MSR_K7_PERFCTR3			0xc0010007
#define MSR_K7_CLK_CTL			0xc001001b
#define MSR_K7_HWCR			0xc0010015
+#define MSR_K7_HWCR_SMMLOCK_BIT		0
+#define MSR_K7_HWCR_SMMLOCK		BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT)
 #define MSR_K7_FID_VID_CTL 0xc0010041
 #define MSR_K7_FID_VID_STATUS  0xc0010042
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 6bddda3..675958e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -617,10 +617,13 @@ static void early_init_amd(struct cpuinfo_x86 *c)
set_cpu_bug(c, X86_BUG_AMD_E400);
 
/*
-* BIOS support is required for SME. If BIOS has enabled SME then
-* adjust x86_phys_bits by the SME physical address space reduction
-* value. If BIOS has not enabled SME then don't advertise the
-* feature (set in scattered.c).
+* BIOS support is required for SME and SEV.
+*   For SME: If BIOS has enabled SME then adjust x86_phys_bits by
+*the SME physical address space reduction value.
+*If BIOS has not enabled SME then don't advertise the
+*SME feature (set in scattered.c).
+*   For SEV: If BIOS has not enabled SEV then don't advertise the
+*SEV feature (set in scattered.c).
 */
if (c->extended_cpuid_level >= 0x8000001f) {
if (cpu_has(c, X86_FEATURE_SME)) {
@@ -637,6 +640,17 @@ static void early_init_amd(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_SME);
}
}
+
+   if (cpu_has(c, X86_FEATURE_SEV)) {
+   u64 syscfg, hwcr;
+
+   /* Check if SEV is enabled */
+   rdmsrl(MSR_K8_SYSCFG, syscfg);
+   rdmsrl(MSR_K7_HWCR, hwcr);
+   if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT) ||
+   !(hwcr & MSR_K7_HWCR_SMMLOCK))
+   clear_cpu_cap(c, X86_FEATURE_SEV);
+   }
}
 }
 
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index cabda87..c3f58d9 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -31,6 +31,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_CPB,  CPUID_EDX,  9, 0x80000007, 0 },
{ X86_FEATURE_PROC_FEEDBACK,CPUID_EDX, 11, 0x80000007, 0 },
{ X86_FEATURE_SME,  CPUID_EAX,  0, 0x8000001f, 0 },
+   { X86_FEATURE_SEV,  CPUID_EAX,  1, 0x8000001f, 0 },
{ 0, 0, 0, 0, 0 }
 };
 



[RFC PATCH v2 29/32] kvm: svm: Add support for SEV DEBUG_DECRYPT command

2017-03-02 Thread Brijesh Singh
The command is used to decrypt guest memory region for debug purposes.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   76 
 1 file changed, 76 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 977aa22..ce8819a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5986,6 +5986,78 @@ static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+   void *dst, int *error)
+{
+   int ret;
+   struct page **inpages;
+   struct sev_data_dbg *data;
+   unsigned long npages;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   inpages = sev_pin_memory(src, PAGE_SIZE, &npages);
+   if (!inpages) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   data->handle = sev_get_handle(kvm);
+   data->dst_addr = __psp_pa(dst);
+   data->src_addr = __sev_page_pa(inpages[0]);
+   data->length = PAGE_SIZE;
+
+   ret = sev_issue_cmd(kvm, SEV_CMD_DBG_DECRYPT, data, error);
+   if (ret)
+   printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+   ret, *error);
+   sev_unpin_memory(inpages, npages);
+err_1:
+   kfree(data);
+   return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   void *data;
+   int ret, offset, len;
+   struct kvm_sev_dbg debug;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, (void *)argp->data,
+   sizeof(struct kvm_sev_dbg)))
+   return -EFAULT;
+   /*
+* TODO: add support for decrypting length which crosses the
+* page boundary.
+*/
+   offset = debug.src_addr & (PAGE_SIZE - 1);
+   if (offset + debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   /* decrypt full page */
+   ret = __sev_dbg_decrypt_page(kvm, debug.src_addr & PAGE_MASK,
+   data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   /* we have decrypted full page but copy request length */
+   len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+   if (copy_to_user((uint8_t *)debug.dst_addr, data + offset, len))
+   ret = -EFAULT;
+err_1:
+   free_page((unsigned long)data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6013,6 +6085,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
r = sev_guest_status(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_DBG_DECRYPT: {
+   r = sev_dbg_decrypt(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}

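Note: sev_pin_memory()/sev_unpin_memory() used above are defined in another
patch of this series. Judging from the v1 sev_pre_update() code quoted later
in this thread, a minimal sketch of the pinning side would look like:

	/* Hedged sketch only -- the real helper lives in a separate patch. */
	static struct page **sev_pin_memory(unsigned long uaddr, unsigned long ulen,
					    unsigned long *npages)
	{
		unsigned long first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
		unsigned long last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
		struct page **pages;
		int pinned;

		*npages = last - first + 1;
		pages = kcalloc(*npages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;

		/* pin the user pages so the PSP operates on stable memory */
		pinned = get_user_pages_fast(uaddr, *npages, 1, pages);
		if (pinned != *npages) {
			if (pinned > 0)
				release_pages(pages, pinned, 0);
			kfree(pages);
			return NULL;
		}

		return pages;
	}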


[RFC PATCH v2 32/32] x86: kvm: Pin the guest memory when SEV is active

2017-03-02 Thread Brijesh Singh
The SEV memory encryption engine uses a tweak such that two identical
plaintexts at different locations will have different ciphertexts.
Swapping or moving the ciphertexts of two pages will therefore not result
in the plaintexts being swapped. Relocating (or migrating) a physical
backing page of an SEV guest will require some additional steps. The
current SEV key management spec [1] does not provide commands to swap or
migrate (move) ciphertexts. For now we pin the memory allocated for the
SEV guest. In the future, when the SEV key management spec provides the
commands to support page migration, we can update the KVM code to remove
the pinning logic without making any changes to userspace (qemu).

The patch pins userspace memory when a new slot is created and unpins the
memory when the slot is removed.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |6 +++
 arch/x86/kvm/svm.c  |   93 +++
 arch/x86/kvm/x86.c  |3 +
 3 files changed, 102 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fcc4710..9dc59f0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -723,6 +723,7 @@ struct kvm_sev_info {
unsigned int handle;/* firmware handle */
unsigned int asid;  /* asid for this guest */
int sev_fd; /* SEV device fd */
+   struct list_head pinned_memory_slot;
 };
 
 struct kvm_arch {
@@ -1043,6 +1044,11 @@ struct kvm_x86_ops {
void (*setup_mce)(struct kvm_vcpu *vcpu);
 
int (*memory_encryption_op)(struct kvm *kvm, void __user *argp);
+
+   void (*prepare_memory_region)(struct kvm *kvm,
+   struct kvm_memory_slot *memslot,
+   const struct kvm_userspace_memory_region *mem,
+   enum kvm_mr_change change);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 13996d6..ab973f9 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -498,12 +498,21 @@ static inline bool gif_set(struct vcpu_svm *svm)
 }
 
 /* Secure Encrypted Virtualization */
+struct kvm_sev_pinned_memory_slot {
+   struct list_head list;
+   unsigned long npages;
+   struct page **pages;
+   unsigned long userspace_addr;
+   short id;
+};
+
 static unsigned int max_sev_asid;
 static unsigned long *sev_asid_bitmap;
 static void sev_deactivate_handle(struct kvm *kvm);
 static void sev_decommission_handle(struct kvm *kvm);
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+static void sev_unpin_memory(struct page **pages, unsigned long npages);
 #define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
 
 static bool kvm_sev_enabled(void)
@@ -1544,9 +1553,25 @@ static inline int avic_free_vm_id(int id)
 
 static void sev_vm_destroy(struct kvm *kvm)
 {
+   struct list_head *pos, *q;
+   struct kvm_sev_pinned_memory_slot *pinned_slot;
+   struct list_head *head = &kvm->arch.sev_info.pinned_memory_slot;
+
if (!sev_guest(kvm))
return;
 
+   /* if guest memory is pinned then unpin it now */
+   if (!list_empty(head)) {
+   list_for_each_safe(pos, q, head) {
+   pinned_slot = list_entry(pos,
+   struct kvm_sev_pinned_memory_slot, list);
+   sev_unpin_memory(pinned_slot->pages,
+   pinned_slot->npages);
+   list_del(pos);
+   kfree(pinned_slot);
+   }
+   }
+
/* release the firmware resources */
sev_deactivate_handle(kvm);
sev_decommission_handle(kvm);
@@ -5663,6 +5688,8 @@ static int sev_pre_start(struct kvm *kvm, int *asid)
}
*asid = ret;
ret = 0;
+
+   INIT_LIST_HEAD(&kvm->arch.sev_info.pinned_memory_slot);
}
 
return ret;
@@ -6189,6 +6216,71 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static struct kvm_sev_pinned_memory_slot *sev_find_pinned_memory_slot(
+   struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+   struct kvm_sev_pinned_memory_slot *i;
+   struct list_head *head = &kvm->arch.sev_info.pinned_memory_slot;
+
+   list_for_each_entry(i, head, list) {
+   if (i->userspace_addr == slot->userspace_addr &&
+   i->id == slot->id)
+   return i;
+   }
+
+   return NULL;
+}
+
+static void amd_prepare_memory_region(struct kvm *kvm,
+   struct kvm_memory_slot *memslot,
+   const struct kvm_userspace_memory_region *mem,
+

[RFC PATCH v2 30/32] kvm: svm: Add support for SEV DEBUG_ENCRYPT command

2017-03-02 Thread Brijesh Singh
The command copies plaintext into guest memory and encrypts it using
the VM encryption key. The command will be used for debug purposes
(e.g., setting a breakpoint through gdbserver).
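
For example, planting a gdb-style software breakpoint amounts to encrypting a
single int3 opcode byte into guest memory. A hedged userspace sketch (the
kvm_sev_dbg field names are taken from the handler below; guest_hva is a
hypothetical destination address):

	struct kvm_sev_dbg dbg = {};
	struct kvm_sev_cmd cmd = {};
	uint8_t int3 = 0xcc;		/* x86 breakpoint opcode */

	dbg.src_addr = (__u64)&int3;	/* plaintext in the debugger */
	dbg.dst_addr = (__u64)guest_hva;/* where the breakpoint goes */
	dbg.length = 1;

	cmd.id = KVM_SEV_DBG_ENCRYPT;
	cmd.data = (__u64)&dbg;
	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);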

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   87 
 1 file changed, 87 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ce8819a..64899ed 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6058,6 +6058,89 @@ static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   void *data;
+   int len, ret, d_off;
+   struct page **inpages;
+   struct kvm_sev_dbg debug;
+   struct sev_data_dbg *encrypt;
+   unsigned long src_addr, dst_addr, npages;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, argp, sizeof(*argp)))
+   return -EFAULT;
+
+   if (debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   len = debug.length;
+   src_addr = debug.src_addr;
+   dst_addr = debug.dst_addr;
+
+   inpages = sev_pin_memory(dst_addr, PAGE_SIZE, );
+   if (!inpages)
+   return -EFAULT;
+
+   encrypt = kzalloc(sizeof(*encrypt), GFP_KERNEL);
+   if (!encrypt) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data) {
+   ret = -ENOMEM;
+   goto err_2;
+   }
+
+   if ((len & 15) || (dst_addr & 15)) {
+   /* if destination address and length are not 16-byte
+* aligned then:
+* a) decrypt destination page into temporary buffer
+* b) copy source data into temporary buffer at correct offset
+* c) encrypt temporary buffer
+*/
+   ret = __sev_dbg_decrypt_page(kvm, dst_addr, data, &argp->error);
+   if (ret)
+   goto err_3;
+   d_off = dst_addr & (PAGE_SIZE - 1);
+
+   if (copy_from_user(data + d_off,
+   (uint8_t *)debug.src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }
+
+   encrypt->length = PAGE_SIZE;
+   encrypt->src_addr = __psp_pa(data);
+   encrypt->dst_addr =  __sev_page_pa(inpages[0]);
+   } else {
+   if (copy_from_user(data, (uint8_t *)debug.src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }
+
+   d_off = dst_addr & (PAGE_SIZE - 1);
+   encrypt->length = len;
+   encrypt->src_addr = __psp_pa(data);
+   encrypt->dst_addr = __sev_page_pa(inpages[0]);
+   encrypt->dst_addr += d_off;
+   }
+
+   encrypt->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_DBG_ENCRYPT, encrypt, &argp->error);
+err_3:
+   free_page((unsigned long)data);
+err_2:
+   kfree(encrypt);
+err_1:
+   sev_unpin_memory(inpages, npages);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6089,6 +6172,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
r = sev_dbg_decrypt(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_DBG_ENCRYPT: {
+   r = sev_dbg_encrypt(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 31/32] kvm: svm: Add support for SEV LAUNCH_MEASURE command

2017-03-02 Thread Brijesh Singh
The command is used to retrieve the measurement of memory encrypted through
the LAUNCH_UPDATE_DATA command. This measurement can be used for attestation
purposes.
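
Since the handler copies the measurement length back into params.length,
userspace can presumably probe with an empty buffer first and then fetch the
data. A hedged sketch (field names assumed from the code below):

	struct kvm_sev_launch_measure m = {};
	struct kvm_sev_cmd cmd = {};

	cmd.id = KVM_SEV_LAUNCH_MEASURE;
	cmd.data = (__u64)&m;

	/* first call with address == 0: firmware reports the needed length */
	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

	m.address = (__u64)malloc(m.length);

	/* second call fills the buffer with the launch measurement */
	ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);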

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   52 
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 64899ed..13996d6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6141,6 +6141,54 @@ static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int ret;
+   void *addr = NULL;
+   struct kvm_sev_launch_measure params;
+   struct sev_data_launch_measure *data;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&params, (void *)argp->data,
+   sizeof(struct kvm_sev_launch_measure)))
+   return -EFAULT;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   if (params.address && params.length) {
+   ret = -EFAULT;
+   addr = kzalloc(params.length, GFP_KERNEL);
+   if (!addr)
+   goto err_1;
+   data->address = __psp_pa(addr);
+   data->length = params.length;
+   }
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_MEASURE, data, &argp->error);
+
+   /* copy the measurement to userspace */
+   if (addr &&
+   copy_to_user((void *)params.address, addr, params.length)) {
+   ret = -EFAULT;
+   goto err_1;
+   }
+
+   params.length = data->length;
+   if (copy_to_user((void *)argp->data, &params,
+   sizeof(struct kvm_sev_launch_measure)))
+   ret = -EFAULT;
+
+   kfree(addr);
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6176,6 +6224,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
r = sev_dbg_encrypt(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_LAUNCH_MEASURE: {
+   r = sev_launch_measure(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 27/32] kvm: svm: Add support for SEV LAUNCH_FINISH command

2017-03-02 Thread Brijesh Singh
The command is used for finalizing the SEV guest launch process.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   36 
 1 file changed, 36 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 62c2b22..c108064 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5921,6 +5921,38 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int i, ret;
+   struct sev_data_launch_finish *data;
+   struct kvm_vcpu *vcpu;
+
+   if (!sev_guest(kvm))
+   return -EINVAL;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   /* launch finish */
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_FINISH, data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   /* Iterate through each vcpu and set the SEV KVM_SEV_FEATURE bit in
+* KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu
+*/
+   kvm_for_each_vcpu(i, vcpu, kvm) {
+   sev_init_vmcb(to_svm(vcpu));
+   svm_cpuid_update(vcpu);
+   }
+
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -5940,6 +5972,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
r = sev_launch_update_data(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_LAUNCH_FINISH: {
+   r = sev_launch_finish(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 25/32] kvm: svm: Add support for SEV LAUNCH_START command

2017-03-02 Thread Brijesh Singh
The command is used to bootstrap an SEV guest from unencrypted boot images.
The command creates a new VM encryption key (VEK) using the guest owner's
public DH certificates and session data. The VEK will be used to encrypt
the guest memory.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |  302 
 1 file changed, 301 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fb63398..b5fa8c0 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -37,6 +37,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -497,6 +498,10 @@ static inline bool gif_set(struct vcpu_svm *svm)
 /* Secure Encrypted Virtualization */
 static unsigned int max_sev_asid;
 static unsigned long *sev_asid_bitmap;
+static void sev_deactivate_handle(struct kvm *kvm);
+static void sev_decommission_handle(struct kvm *kvm);
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
 
 static bool kvm_sev_enabled(void)
 {
@@ -1534,6 +1539,17 @@ static inline int avic_free_vm_id(int id)
return 0;
 }
 
+static void sev_vm_destroy(struct kvm *kvm)
+{
+   if (!sev_guest(kvm))
+   return;
+
+   /* release the firmware resources */
+   sev_deactivate_handle(kvm);
+   sev_decommission_handle(kvm);
+   sev_asid_free(sev_get_asid(kvm));
+}
+
 static void avic_vm_destroy(struct kvm *kvm)
 {
unsigned long flags;
@@ -1551,6 +1567,12 @@ static void avic_vm_destroy(struct kvm *kvm)
spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 }
 
+static void svm_vm_destroy(struct kvm *kvm)
+{
+   avic_vm_destroy(kvm);
+   sev_vm_destroy(kvm);
+}
+
 static int avic_vm_init(struct kvm *kvm)
 {
unsigned long flags;
@@ -5502,6 +5524,282 @@ static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
avic_handle_ldr_update(vcpu);
 }
 
+static int sev_asid_new(void)
+{
+   int pos;
+
+   if (!max_sev_asid)
+   return -EINVAL;
+
+   pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
+   if (pos >= max_sev_asid)
+   return -EBUSY;
+
+   set_bit(pos, sev_asid_bitmap);
+   return pos + 1;
+}
+
+static void sev_asid_free(int asid)
+{
+   int cpu, pos;
+   struct svm_cpu_data *sd;
+
+   pos = asid - 1;
+   clear_bit(pos, sev_asid_bitmap);
+
+   for_each_possible_cpu(cpu) {
+   sd = per_cpu(svm_data, cpu);
+   sd->sev_vmcbs[pos] = NULL;
+   }
+}
+
+static int sev_issue_cmd(struct kvm *kvm, int id, void *data, int *error)
+{
+   int ret;
+   struct fd f;
+   int fd = sev_get_fd(kvm);
+
+   f = fdget(fd);
+   if (!f.file)
+   return -EBADF;
+
+   ret = sev_issue_cmd_external_user(f.file, id, data, 0, error);
+   fdput(f);
+
+   return ret;
+}
+
+static void sev_decommission_handle(struct kvm *kvm)
+{
+   int ret, error;
+   struct sev_data_decommission *data;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_guest_decommission(data, &error);
+   if (ret)
+   pr_err("SEV: DECOMMISSION %d (%#x)\n", ret, error);
+
+   kfree(data);
+}
+
+static void sev_deactivate_handle(struct kvm *kvm)
+{
+   int ret, error;
+   struct sev_data_deactivate *data;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_guest_deactivate(data, &error);
+   if (ret) {
+   pr_err("SEV: DEACTIVATE %d (%#x)\n", ret, error);
+   goto buffer_free;
+   }
+
+   wbinvd_on_all_cpus();
+
+   ret = sev_guest_df_flush(&error);
+   if (ret)
+   pr_err("SEV: DF_FLUSH %d (%#x)\n", ret, error);
+
+buffer_free:
+   kfree(data);
+}
+
+static int sev_activate_asid(unsigned int handle, int asid, int *error)
+{
+   int ret;
+   struct sev_data_activate *data;
+
+   wbinvd_on_all_cpus();
+
+   ret = sev_guest_df_flush(error);
+   if (ret) {
+   pr_err("SEV: DF_FLUSH %d (%#x)\n", ret, *error);
+   return ret;
+   }
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   data->handle = handle;
+   data->asid   = asid;
+   ret = sev_guest_activate(data, error);
+   if (ret)
+   pr_err("SEV: ACTIVATE %d (%#x)\n", ret, *error);
+
+   kfree(data);
+   return ret;
+}
+
+static int sev_pre_start(struct kvm *kvm, int *asid)
+{
+   int ret;
+
+   /* If guest has active SEV handle then deactivate before creating the
+* encryption context.
+*/
+   if (sev_guest(kvm)) {
+   sev_deactivate_handle(kvm);
+   sev_deco

Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl

2016-10-18 Thread Brijesh Singh

Hi Paolo,

On 10/17/2016 03:14 PM, Paolo Bonzini wrote:

I am not sure if I fully understand this feedback. Let me summarize what
we have right now.

At highest level SEV key management commands are divided into two sections:

- platform management: commands used during platform provisioning. The PSP
drv provides ioctl's for these commands. Qemu will not use these
ioctl's; I believe these ioctls will be used by other tools.

- guest management: command used during guest life cycle. PSP drv
exports various function and KVM drv calls these function when it
receives the SEV_ISSUE_CMD ioctl from qemu.

If I understand correctly, you are recommending that instead of
exporting various functions from the PSP drv we should expose one function
for all the guest command handling (something like this).


My understanding is that a user could exhaust the ASIDs for encrypted
VMs if it was allowed to start an arbitrary number of KVM guests.  So
we would need some kind of control.  Is this correct?



Yes, there is a limited number of ASIDs for encrypted VMs. Do we need to
pass the psp_fd into the SEV_ISSUE_CMD ioctl, or can we handle it from Qemu
itself? E.g., when the user asks to transition a guest into SEV-enabled mode,
then before calling LAUNCH_START Qemu tries to open the /dev/psp device. If
open() returns success then we know the user has permission to communicate
with the PSP firmware. Please let me know if I am missing something.
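
For reference, the Qemu-side probe I have in mind is just something like this
(a hedged sketch; the error_report() usage is illustrative only):

	int psp_fd = open("/dev/psp", O_RDWR);

	if (psp_fd < 0) {
		error_report("SEV guest requires access to /dev/psp");
		exit(1);
	}
	/* psp_fd could later be handed to SEV_ISSUE_CMD if required */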



If so, does /dev/psp provide any functionality that you believe is
dangerous for the KVM userspace (which runs in a very confined
environment)?  Is this functionality blocked through capability
checks?



I do not see /dev/psp providing anything which would be dangerous to KVM
userspace. It should be safe to access /dev/psp from KVM userspace.



Thanks,

Paolo



int psp_issue_cmd_external_user(struct file *filep,
int cmd, unsigned long addr,
int *psp_ret)
{
/* here we check to ensure that file->f_ops is a valid
 * psp instance.
  */
if (filep->f_ops != &psp_fops)
return -EINVAL;

/* handle the command */
return psp_issue_cmd (cmd, addr, timeout, psp_ret);
}

In KVM driver use something like this to invoke the PSP command handler.

int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
 unsigned long data)
{
int ret;
struct fd f;

f = fdget(input->psp_fd);
if (!f.file)
return -EBADF;


psp_issue_cmd_external_user(f.file, input->cmd,
data, &input->psp_ret);

}

Please let me know if I understood this correctly.


Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |3 +
 arch/x86/kvm/x86.c  |   13 
 include/uapi/linux/kvm.h|  125 +++
 3 files changed, 141 insertions(+)






Re: [RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl

2016-10-17 Thread Brijesh Singh

Hi Paolo,


On 10/13/2016 05:45 AM, Paolo Bonzini wrote:



On 23/08/2016 01:28, Brijesh Singh wrote:

The ioctl will be used by qemu to issue the Secure Encrypted
Virtualization (SEV) guest commands to transition a guest into
SEV-enabled mode.

a typical usage:

struct kvm_sev_launch_start start;
struct kvm_sev_issue_cmd data;

data.cmd = KVM_SEV_LAUNCH_START;
data.opaque = &start;

ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);

On SEV command failure, data.ret_code will contain the firmware error code.


Please modify the ioctl to require the file descriptor for the PSP.  A
program without access to /dev/psp should not be able to use SEV.



I am not sure if I fully understand this feedback. Let me summarize what
we have right now.


At highest level SEV key management commands are divided into two sections:

- platform management: commands used during platform provisioning. The PSP
drv provides ioctl's for these commands. Qemu will not use these
ioctl's; I believe these ioctls will be used by other tools.


- guest management: command used during guest life cycle. PSP drv 
exports various function and KVM drv calls these function when it 
receives the SEV_ISSUE_CMD ioctl from qemu.


If I understand correctly, you are recommending that instead of
exporting various functions from the PSP drv we should expose one function
for all the guest command handling (something like this).


int psp_issue_cmd_external_user(struct file *filep,
int cmd, unsigned long addr,
int *psp_ret)
{
/* here we check to ensure that file->f_ops is a valid
 * psp instance.
 */
if (filep->f_ops != &psp_fops)
return -EINVAL;

/* handle the command */
return psp_issue_cmd (cmd, addr, timeout, psp_ret);
}

In KVM driver use something like this to invoke the PSP command handler.

int kvm_sev_psp_cmd (struct kvm_sev_issue_cmd *input,
 unsigned long data)
{
int ret;
struct fd f;

f = fdget(input->psp_fd);
if (!f.file)
return -EBADF;


psp_issue_cmd_external_user(f.file, input->cmd,
data, &input->psp_ret);

}

Please let me know if I understood this correctly.


Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |3 +
 arch/x86/kvm/x86.c  |   13 
 include/uapi/linux/kvm.h|  125 +++
 3 files changed, 141 insertions(+)




Re: [RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)

2016-10-17 Thread Brijesh Singh

Hi Paolo,

Thanks for the reviews. I will incorporate your feedback in v2.

On 10/13/2016 06:19 AM, Paolo Bonzini wrote:



On 23/08/2016 01:23, Brijesh Singh wrote:

TODO:
- send qemu/seabios RFC's on respective mailing list
- integrate the psp driver with CCP driver (they share the PCI id's)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with mem encryption enabled
  (e.g acpi tables)


They would be encrypted, right?  Similar to the EFI data in patch 9.


Yes.




- determine how to share the guest memory with hypervisor for to support
  pvclock driver


Is it enough if the guest makes that page unencrypted?



Yes, that should be enough. If the guest can mark a page as unencrypted then
the hypervisor should be able to read and write to that particular page.


Tom's patches have introduced an API (set_memory_dec) to mark memory as
unencrypted, but the pvclock drv runs very early during boot (when irqs are
disabled). Because of this we are not able to use set_memory_dec() to
mark the page as unencrypted. We will need to come up with a method for
handling these cases.



I reviewed the KVM host-side patches and they are pretty
straightforward, so the comments on each patch suffice.

Thanks,

Paolo


Brijesh Singh (11):
  crypto: add AMD Platform Security Processor driver
  KVM: SVM: prepare to reserve asid for SEV guest
  KVM: SVM: prepare for SEV guest management API support
  KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  KVM: SVM: add SEV launch start command
  KVM: SVM: add SEV launch update command
  KVM: SVM: add SEV_LAUNCH_FINISH command
  KVM: SVM: add KVM_SEV_GUEST_STATUS command
  KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
  KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
  KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
  kvm: svm: Add support for additional SVM NPF error codes
  kvm: svm: Add kvm_fast_pio_in support
  kvm: svm: Use the hardware provided GPA instead of page walk
  x86: Secure Encrypted Virtualization (SEV) support
  KVM: SVM: prepare for new bit definition in nested_ctl
  KVM: SVM: Add SEV feature definitions to KVM
  x86: Do not encrypt memory areas if SEV is enabled
  Access BOOT related data encrypted with SEV active
  x86/efi: Access EFI data as encrypted when SEV is active
  x86: Change early_ioremap to early_memremap for BOOT data
  x86: Don't decrypt trampoline area if SEV is active
  x86: DMA support for SEV memory encryption
  iommu/amd: AMD IOMMU support for SEV
  x86: Don't set the SME MSR bit when SEV is active
  x86: Unroll string I/O when SEV is active
  x86: Add support to determine if running with SEV enabled
  KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile  |2
 arch/x86/boot/compressed/head_64.S |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 
 arch/x86/include/asm/io.h  |   26 +
 arch/x86/include/asm/kvm_emulate.h |3
 arch/x86/include/asm/kvm_host.h|   27 +
 arch/x86/include/asm/mem_encrypt.h |3
 arch/x86/include/asm/svm.h |3
 arch/x86/include/uapi/asm/hyperv.h |4
 arch/x86/include/uapi/asm/kvm_para.h   |4
 arch/x86/kernel/acpi/boot.c|4
 arch/x86/kernel/head64.c   |4
 arch/x86/kernel/mem_encrypt.S  |   44 ++
 arch/x86/kernel/mpparse.c  |   10
 arch/x86/kernel/setup.c|7
 arch/x86/kernel/x8664_ksyms_64.c   |1
 arch/x86/kvm/cpuid.c   |4
 arch/x86/kvm/mmu.c |   20 +
 arch/x86/kvm/svm.c |  906 
 arch/x86/kvm/x86.c |   73 +++
 arch/x86/mm/ioremap.c  |7
 arch/x86/mm/mem_encrypt.c  |   50 ++
 arch/x86/platform/efi/efi_64.c |   14
 arch/x86/realmode/init.c   |   11
 drivers/crypto/Kconfig |   11
 drivers/crypto/Makefile|1
 drivers/crypto/psp/Kconfig |8
 drivers/crypto/psp/Makefile|3
 drivers/crypto/psp/psp-dev.c   |  220 
 drivers/crypto/psp/psp-dev.h   |   95 +++
 drivers/crypto/psp/psp-ops.c   |  454 
 drivers/crypto/psp/psp-pci.c   |  376 +
 drivers/sfi/sfi_core.c |6
 include/linux/ccp-psp.h|  833 +
 include/uapi/linux/Kbuild  |1
 include/uapi/linux/ccp-psp.h   |  182 ++
 include/uapi/linux/kvm.h   |  125 
 37 files changed, 3643 insertions(+), 41 deletions(-)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode

[RFC PATCH v1 23/28] KVM: SVM: add SEV launch update command

2016-08-22 Thread Brijesh Singh
The command is used for encrypting a guest memory region.

For more information see [1], section 6.2

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |  126 
 1 file changed, 126 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0b6da4a..c78bdc6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -35,6 +35,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -263,6 +265,8 @@ static unsigned long *sev_asid_bitmap;
#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
 
+#define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
+
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
 static void sev_deactivate_handle(unsigned int handle);
@@ -5376,6 +5380,123 @@ err_1:
return ret;
 }
 
+static int sev_pre_update(struct page **pages, unsigned long uaddr, int npages)
+{
+   int pinned;
+
+   /* pin the user virtual address */
+   down_read(&current->mm->mmap_sem);
+   pinned = get_user_pages(uaddr, npages, 1, 0, pages, NULL);
+   up_read(&current->mm->mmap_sem);
+   if (pinned != npages) {
+   printk(KERN_ERR "SEV: failed to pin %d pages (got %d)\n",
+   npages, pinned);
+   goto err;
+   }
+
+   return 0;
+err:
+   if (pinned > 0)
+   release_pages(pages, pinned, 0);
+   return 1;
+}
+
+static int sev_launch_update(struct kvm *kvm,
+struct kvm_sev_launch_update __user *arg,
+int *psp_ret)
+{
+   int first, last;
+   struct page **inpages;
+   int ret, nr_pages;
+   unsigned long uaddr, ulen;
+   int i, buffer_len, len, offset;
+   struct kvm_sev_launch_update params;
+   struct psp_data_launch_update *update;
+
+   /* Get the parameters from the user */
+   if (copy_from_user(&params, arg, sizeof(*arg)))
+   return -EFAULT;
+
+   uaddr = params.address;
+   ulen = params.length;
+
+   /* Get number of pages */
+   first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+   last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+   nr_pages = (last - first + 1);
+
+   /* allocate the buffers */
+   buffer_len = sizeof(*update);
+   update = kzalloc(buffer_len, GFP_KERNEL);
+   if (!update)
+   return -ENOMEM;
+
+   ret = -ENOMEM;
+   inpages = kzalloc(nr_pages * sizeof(struct page *), GFP_KERNEL);
+   if (!inpages)
+   goto err_1;
+
+   ret = sev_pre_update(inpages, uaddr, nr_pages);
+   if (ret)
+   goto err_2;
+
+   /* the array of pages returned by get_user_pages() is a page-aligned
+* memory. Since the user buffer is probably not page-aligned, we need
+* to calculate the offset within a page for first update entry.
+*/
+   offset = uaddr & (PAGE_SIZE - 1);
+   len = min_t(size_t, (PAGE_SIZE - offset), ulen);
+   ulen -= len;
+
+   /* update first page -
+* special care need to be taken for the first page because we might
+* be dealing with offset within the page
+*/
+   update->hdr.buffer_len = buffer_len;
+   update->handle = kvm_sev_handle();
+   update->length = len;
+   update->address = __sev_page_pa(inpages[0]) + offset;
+   clflush_cache_range(page_address(inpages[0]), PAGE_SIZE);
+   ret = psp_guest_launch_update(update, 5, psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: LAUNCH_UPDATE addr %#llx len %d "
+   "ret=%d (%#010x)\n", update->address,
+   update->length, ret, *psp_ret);
+   goto err_3;
+   }
+
+   /* update remaining pages */
+   for (i = 1; i < nr_pages; i++) {
+
+   len = min_t(size_t, PAGE_SIZE, ulen);
+   ulen -= len;
+   update->length = len;
+   update->address = __sev_page_pa(inpages[i]);
+   clflush_cache_range(page_address(inpages[i]), PAGE_SIZE);
+
+   ret = psp_guest_launch_update(update, 5, psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: LAUNCH_UPDATE addr %#llx len %d "
+   "ret=%d (%#010x)\n", update->address,
+   update->length, ret, *psp_ret);
+   goto err_3;
+   }
+   }
+
+err_3:
+   /* mark pages dirty */
+   for (i = 0; i < nr_pages; i++) {
+   set_page_dirty_lock(inpages[i]);
+   mark_page_accessed(

[RFC PATCH v1 14/28] x86: Don't set the SME MSR bit when SEV is active

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

When SEV is active the virtual machine cannot set the MSR for SME, so
don't set the trampoline flag for SME.

Signed-off-by: Tom Lendacky 
---
 arch/x86/realmode/init.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f3207e5..391d8ba 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -102,7 +102,7 @@ static void __init setup_real_mode(void)
*trampoline_cr4_features = mmu_cr4_features;
 
trampoline_header->flags = 0;
-   if (sme_me_mask)
+   if (sme_me_mask && !sev_active)
trampoline_header->flags |= TH_FLAGS_SME_ENABLE;
 
trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);



[RFC PATCH v1 20/28] KVM: SVM: prepare for SEV guest management API support

2016-08-22 Thread Brijesh Singh
The patch adds initial support required for Secure Encrypted
Virtualization (SEV) guest management API's.

ASID management:
 - Reserve asid range for SEV guest, SEV asid range is obtained
   through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any
   asid outside the SEV asid range.
 - SEV guest must have asid value within asid range obtained
   through CPUID.
 - SEV guest must have the same asid for all vcpus. A TLB flush
   is required if a different vcpu with the same ASID is to be run
   on the same host CPU.

- save SEV private structure in kvm_arch.

- If SEV is available then initialize PSP firmware during hardware probe

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |9 ++
 arch/x86/kvm/svm.c  |  213 +++
 2 files changed, 221 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b1dd673..9b885fc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -715,6 +715,12 @@ struct kvm_hv {
u64 hv_crash_ctl;
 };
 
+struct kvm_sev_info {
+   unsigned int asid;  /* asid for this guest */
+   unsigned int handle;/* firmware handle */
+   unsigned int ref_count; /* number of active vcpus */
+};
+
 struct kvm_arch {
unsigned int n_used_mmu_pages;
unsigned int n_requested_mmu_pages;
@@ -799,6 +805,9 @@ struct kvm_arch {
 
bool x2apic_format;
bool x2apic_broadcast_quirk_disabled;
+
+   /* struct for SEV guest */
+   struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f010b23..dcee635 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -186,6 +187,9 @@ struct vcpu_svm {
struct page *avic_backing_page;
u64 *avic_physical_id_cache;
bool avic_is_running;
+
+   /* which host cpu was used for running this vcpu */
+   bool last_cpuid;
 };
 
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK   (0xFF)
@@ -243,6 +247,25 @@ static int avic;
 module_param(avic, int, S_IRUGO);
 #endif
 
+/* Secure Encrypted Virtualization */
+static bool sev_enabled;
+static unsigned long max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+#define kvm_sev_guest()		(kvm->arch.sev_info.handle)
+#define kvm_sev_handle()	(kvm->arch.sev_info.handle)
+#define kvm_sev_ref()		(kvm->arch.sev_info.ref_count++)
+#define kvm_sev_unref()		(kvm->arch.sev_info.ref_count--)
+#define svm_sev_handle()	(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_asid()		(svm->vcpu.kvm->arch.sev_info.asid)
+#define svm_sev_ref()		(svm->vcpu.kvm->arch.sev_info.ref_count++)
+#define svm_sev_unref()		(svm->vcpu.kvm->arch.sev_info.ref_count--)
+#define svm_sev_guest()		(svm->vcpu.kvm->arch.sev_info.handle)
+#define svm_sev_ref_count()	(svm->vcpu.kvm->arch.sev_info.ref_count)
+
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
+
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -474,6 +497,8 @@ struct svm_cpu_data {
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
+
+   void **sev_vmcb;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -727,7 +752,10 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
-   sd->min_asid = 1;
+   sd->min_asid = max_sev_asid + 1;
+
+   if (sev_enabled)
+   memset(sd->sev_vmcb, 0, (max_sev_asid + 1) * sizeof(void *));
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -788,6 +816,7 @@ static void svm_cpu_uninit(int cpu)
 
per_cpu(svm_data, raw_smp_processor_id()) = NULL;
__free_page(sd->save_area);
+   kfree(sd->sev_vmcb);
kfree(sd);
 }
 
@@ -805,6 +834,14 @@ static int svm_cpu_init(int cpu)
if (!sd->save_area)
goto err_1;
 
+   if (sev_enabled) {
+   sd->sev_vmcb = kmalloc((max_sev_asid + 1) * sizeof(void *),
+   GFP_KERNEL);
+   r = -ENOMEM;
+   if (!sd->sev_vmcb)
+   goto err_1;
+   }
+
per_cpu(svm_data, cpu) = sd;
 
return 0;
@@ -931,6 +968,74 @@ static void svm_disable_lbrv(struct vcpu_svm *svm)
set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
 }
 
+static __init void sev_hardware_setup(void)
+{
+   in

[RFC PATCH v1 19/28] KVM: SVM: prepare to reserve asid for SEV guest

2016-08-22 Thread Brijesh Singh
In current implementation, asid allocation starts from 1, this patch
adds a min_asid variable in svm_vcpu structure to allow starting asid
from something other than 1.

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 211be94..f010b23 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -470,6 +470,7 @@ struct svm_cpu_data {
u64 asid_generation;
u32 max_asid;
u32 next_asid;
+   u32 min_asid;
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
@@ -726,6 +727,7 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
+   sd->min_asid = 1;
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -1887,7 +1889,7 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
if (sd->next_asid > sd->max_asid) {
++sd->asid_generation;
-   sd->next_asid = 1;
+   sd->next_asid = sd->min_asid;
svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
 



[RFC PATCH v1 11/28] x86: Don't decrypt trampoline area if SEV is active

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

When Secure Encrypted Virtualization is active instruction fetches are
always interpreted as being from encrypted memory so the trampoline area
must remain encrypted when SEV is active.

Signed-off-by: Tom Lendacky 
---
 arch/x86/realmode/init.c |9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index c3edb49..f3207e5 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -138,10 +138,13 @@ static void __init set_real_mode_permissions(void)
/*
 * If memory encryption is active, the trampoline area will need to
 * be in non-encrypted memory in order to bring up other processors
-* successfully.
+* successfully. This only applies to SME, SEV requires the trampoline
+* to be encrypted.
 */
-   sme_early_mem_dec(__pa(base), size);
-   sme_set_mem_dec(base, size);
+   if (!sev_active) {
+   sme_early_mem_dec(__pa(base), size);
+   sme_set_mem_dec(base, size);
+   }
 
set_memory_nx((unsigned long) base, size >> PAGE_SHIFT);
set_memory_ro((unsigned long) base, ro_size >> PAGE_SHIFT);



[RFC PATCH v1 10/28] x86: Change early_ioremap to early_memremap for BOOT data

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/acpi/boot.c |4 ++--
 arch/x86/kernel/mpparse.c   |   10 +-
 drivers/sfi/sfi_core.c  |6 +++---
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 1ad5fe2..4622ea2 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -120,7 +120,7 @@ char *__init __acpi_map_table(unsigned long phys, unsigned long size)
if (!phys || !size)
return NULL;
 
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size, BOOT_DATA);
 }
 
 void __init __acpi_unmap_table(char *map, unsigned long size)
@@ -128,7 +128,7 @@ void __init __acpi_unmap_table(char *map, unsigned long size)
if (!map || !size)
return;
 
-   early_iounmap(map, size);
+   early_memunmap(map, size);
 }
 
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
index 0f8d204..04def9f 100644
--- a/arch/x86/kernel/mpparse.c
+++ b/arch/x86/kernel/mpparse.c
@@ -436,9 +436,9 @@ static unsigned long __init get_mpc_size(unsigned long physptr)
struct mpc_table *mpc;
unsigned long size;
 
-   mpc = early_ioremap(physptr, PAGE_SIZE);
+   mpc = early_memremap(physptr, PAGE_SIZE, BOOT_DATA);
size = mpc->length;
-   early_iounmap(mpc, PAGE_SIZE);
+   early_memunmap(mpc, PAGE_SIZE);
apic_printk(APIC_VERBOSE, "  mpc: %lx-%lx\n", physptr, physptr + size);
 
return size;
@@ -450,7 +450,7 @@ static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
unsigned long size;
 
size = get_mpc_size(mpf->physptr);
-   mpc = early_ioremap(mpf->physptr, size);
+   mpc = early_memremap(mpf->physptr, size, BOOT_DATA);
/*
 * Read the physical hardware table.  Anything here will
 * override the defaults.
@@ -461,10 +461,10 @@ static int __init check_physptr(struct mpf_intel *mpf, 
unsigned int early)
 #endif
pr_err("BIOS bug, MP table errors detected!...\n");
pr_cont("... disabling SMP support. (tell your hw vendor)\n");
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
return -1;
}
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
 
if (early)
return -1;
diff --git a/drivers/sfi/sfi_core.c b/drivers/sfi/sfi_core.c
index 296db7a..3078d35 100644
--- a/drivers/sfi/sfi_core.c
+++ b/drivers/sfi/sfi_core.c
@@ -92,7 +92,7 @@ static struct sfi_table_simple *syst_va __read_mostly;
 static u32 sfi_use_ioremap __read_mostly;
 
 /*
- * sfi_un/map_memory calls early_ioremap/iounmap which is a __init function
+ * sfi_un/map_memory calls early_memremap/memunmap which is a __init function
  * and introduces section mismatch. So use __ref to make it calm.
  */
 static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
@@ -103,7 +103,7 @@ static void __iomem * __ref sfi_map_memory(u64 phys, u32 
size)
if (sfi_use_ioremap)
return ioremap_cache(phys, size);
else
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size, BOOT_DATA);
 }
 
 static void __ref sfi_unmap_memory(void __iomem *virt, u32 size)
@@ -114,7 +114,7 @@ static void __ref sfi_unmap_memory(void __iomem *virt, u32 
size)
if (sfi_use_ioremap)
iounmap(virt);
else
-   early_iounmap(virt, size);
+   early_memunmap(virt, size);
 }
 
 static void sfi_print_table_header(unsigned long long pa,



[RFC PATCH v1 13/28] iommu/amd: AMD IOMMU support for SEV

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

DMA must be performed to memory that is not mapped encrypted when running
with SEV active. So if SEV is active, do not return the encryption mask
to the IOMMU.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/mem_encrypt.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ce6e3ea..d6e9f96 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -237,7 +237,7 @@ void __init mem_encrypt_init(void)
 
 unsigned long amd_iommu_get_me_mask(void)
 {
-   return sme_me_mask;
+   return sev_active ? 0 : sme_me_mask;
 }
 
 unsigned long swiotlb_get_me_mask(void)



[RFC PATCH v1 06/28] KVM: SVM: Add SEV feature definitions to KVM

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Define a new KVM cpu feature for Secure Encrypted Virtualization (SEV).
The kernel will check for the presence of this feature to determine if
it is running with SEV active.

Define the SEV enable bit for the VMCB control structure. The hypervisor
will use this bit to enable SEV in the guest.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h   |1 +
 arch/x86/include/uapi/asm/kvm_para.h |1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 2aca535..fba2a7b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -137,6 +137,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_VM_CR_SVM_DIS_MASK  0x0000000000000010ULL
 
 #define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+#define SVM_NESTED_CTL_SEV_ENABLE  BIT(1)
 
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
diff --git a/arch/x86/include/uapi/asm/kvm_para.h 
b/arch/x86/include/uapi/asm/kvm_para.h
index 94dc8ca..67dd610f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -24,6 +24,7 @@
 #define KVM_FEATURE_STEAL_TIME 5
 #define KVM_FEATURE_PV_EOI 6
 #define KVM_FEATURE_PV_UNHALT  7
+#define KVM_FEATURE_SEV8
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.



[RFC PATCH v1 01/28] kvm: svm: Add support for additional SVM NPF error codes

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

AMD hardware adds two additional bits to aid in nested page fault handling.

Bit 32 - NPF occurred while translating the guest's final physical address
Bit 33 - NPF occurred while translating the guest page tables

The guest page tables fault indicator can be used as an aid for nested
virtualization. Using V0 for the host, V1 for the first level guest and
V2 for the second level guest, when both V1 and V2 are using nested paging
there are currently a number of unnecessary instruction emulations. When
V2 is launched shadow paging is used in V1 for the nested tables of V2. As
a result, KVM marks these pages as RO in the host nested page tables. When
V2 exits and we resume V1, these pages are still marked RO.

Every nested walk for a guest page table is treated as a user-level write
access and this causes a lot of NPFs because the V1 page tables are marked
RO in the V0 nested tables. While executing V1, when these NPFs occur KVM
sees a write to a read-only page, emulates the V1 instruction and unprotects
the page (marking it RW). This patch looks for cases where we get a NPF due
to a guest page table walk where the page was marked RO. It immediately
unprotects the page and resumes the guest, leading to far fewer instruction
emulations when nested virtualization is used.
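
As a concrete example: a fault taken during the hardware walk of the V1
page tables on a present, read-only page arrives with PFERR_PRESENT_MASK |
PFERR_WRITE_MASK | PFERR_USER_MASK | PFERR_GUEST_PAGE_MASK, i.e.
0x200000007, which is exactly the PFERR_NESTED_GUEST_PAGE value the new
check tests for.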

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kvm_host.h |   11 ++-
 arch/x86/kvm/mmu.c  |   20 ++--
 arch/x86/kvm/svm.c  |2 +-
 3 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c51c1cb..3f05d36 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -191,6 +191,8 @@ enum {
 #define PFERR_RSVD_BIT 3
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
+#define PFERR_GUEST_FINAL_BIT 32
+#define PFERR_GUEST_PAGE_BIT 33
 
 #define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
 #define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
@@ -198,6 +200,13 @@ enum {
 #define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
 #define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
 #define PFERR_PK_MASK (1U << PFERR_PK_BIT)
+#define PFERR_GUEST_FINAL_MASK (1ULL << PFERR_GUEST_FINAL_BIT)
+#define PFERR_GUEST_PAGE_MASK (1ULL << PFERR_GUEST_PAGE_BIT)
+
+#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |   \
+PFERR_USER_MASK |  \
+PFERR_WRITE_MASK | \
+PFERR_PRESENT_MASK)
 
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC   0
@@ -1203,7 +1212,7 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu);
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
-int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u32 error_code,
+int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u64 error_code,
   void *insn, int insn_len);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a7040f4..3b47a5d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4512,7 +4512,7 @@ static void make_mmu_pages_available(struct kvm_vcpu 
*vcpu)
kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
-int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
+int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
   void *insn, int insn_len)
 {
int r, emulation_type = EMULTYPE_RETRY;
@@ -4531,12 +4531,28 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t 
cr2, u32 error_code,
return r;
}
 
-   r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
+   r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
+ false);
if (r < 0)
return r;
if (!r)
return 1;
 
+   /*
+* Before emulating the instruction, check if the error code
+* was due to a RO violation while translating the guest page.
+* This can occur when using nested virtualization with nested
+* paging in both guests. If true, we simply unprotect the page
+* and resume the guest.
+*
+* Note: AMD only (since it supports the PFERR_GUEST_PAGE_MASK used
+*   in PFERR_NESTED_GUEST_PAGE)
+*/
+   if (error_code == PFERR_NESTED_GUEST_PAGE) {
+   kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
+   return 1;
+   }
+
if (mmio_info_in_cache(vcpu, cr2, direct))
emulation_type = 0;
 emulate:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1e6b84b..d8b9c8c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1935,7 +1935,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned 
long 

[RFC PATCH v1 08/28] Access BOOT related data encrypted with SEV active

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

When Secure Encrypted Virtualization (SEV) is active, BOOT data (such as
EFI related data) is encrypted and needs to be accessed as such. Update the
architecture override in early_memremap to keep the encryption attribute
when mapping this data.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/ioremap.c |7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index e3bdc5a..2ea6deb 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -429,10 +429,11 @@ pgprot_t __init 
early_memremap_pgprot_adjust(resource_size_t phys_addr,
 pgprot_t prot)
 {
/*
-* If memory encryption is enabled and BOOT_DATA is being mapped
-* then remove the encryption bit.
+* If memory encryption is enabled, we are not running with
+* SEV active and BOOT_DATA is being mapped then remove the
+* encryption bit.
 */
-   if (_PAGE_ENC && (owner == BOOT_DATA))
+   if (_PAGE_ENC && !sev_active && (owner == BOOT_DATA))
prot = __pgprot(pgprot_val(prot) & ~_PAGE_ENC);
 
return prot;



[RFC PATCH v1 12/28] x86: DMA support for SEV memory encryption

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

DMA access to memory mapped as encrypted while SEV is active can not be
encrypted during device write or decrypted during device read. In order
for DMA to properly work when SEV is active, the swiotlb bounce buffers
must be used.
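
For illustration, a driver allocation would then flow through the
sme_alloc() helper added below and come back with the encryption bit
already masked off the bus address. Untested sketch (the device pointer
is a placeholder):

	dma_addr_t bus;
	void *cpu;

	/* Routed through sme_alloc() when sev_active is set */
	cpu = dma_alloc_coherent(dev, PAGE_SIZE, &bus, GFP_KERNEL);
	if (!cpu)
		return -ENOMEM;

	/*
	 * bus has the C-bit cleared and the CPU mapping was marked
	 * unencrypted via sme_set_mem_dec(), so device and CPU see
	 * the same (shared) view of the buffer.
	 */
	dma_free_coherent(dev, PAGE_SIZE, cpu, bus);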

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/mem_encrypt.c |   48 +
 1 file changed, 48 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1154353..ce6e3ea 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -173,8 +173,52 @@ void __init sme_early_init(void)
/* Update the protection map with memory encryption mask */
for (i = 0; i < ARRAY_SIZE(protection_map); i++)
protection_map[i] = __pgprot(pgprot_val(protection_map[i]) | 
sme_me_mask);
+
+   if (sev_active)
+   swiotlb_force = 1;
 }
 
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+  gfp_t gfp, unsigned long attrs)
+{
+   void *vaddr;
+
+   vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, gfp, attrs);
+   if (!vaddr)
+   return NULL;
+
+   /* Clear the SME encryption bit for DMA use */
+   sme_set_mem_dec(vaddr, size);
+
+   /* Remove the encryption bit from the DMA address */
+   *dma_handle &= ~sme_me_mask;
+
+   return vaddr;
+}
+
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+dma_addr_t dma_handle, unsigned long attrs)
+{
+   /* Set the SME encryption bit for re-use as encrypted */
+   sme_set_mem_enc(vaddr, size);
+
+   x86_swiotlb_free_coherent(dev, size, vaddr, dma_handle, attrs);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+   .alloc  = sme_alloc,
+   .free   = sme_free,
+   .map_page   = swiotlb_map_page,
+   .unmap_page = swiotlb_unmap_page,
+   .map_sg = swiotlb_map_sg_attrs,
+   .unmap_sg   = swiotlb_unmap_sg_attrs,
+   .sync_single_for_cpu= swiotlb_sync_single_for_cpu,
+   .sync_single_for_device = swiotlb_sync_single_for_device,
+   .sync_sg_for_cpu= swiotlb_sync_sg_for_cpu,
+   .sync_sg_for_device = swiotlb_sync_sg_for_device,
+   .mapping_error  = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -184,6 +228,10 @@ void __init mem_encrypt_init(void)
/* Make SWIOTLB use an unencrypted DMA area */
swiotlb_clear_encryption();
 
+   /* Use SEV DMA operations if SEV is active */
+   if (sev_active)
+   dma_ops = &sme_dma_ops;
+
pr_info("memory encryption active\n");
 }
 



[RFC PATCH v1 04/28] x86: Secure Encrypted Virtualization (SEV) support

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Provide support for Secure Encrypted Virtualization (SEV). This initial
support defines the SEV active flag in order for the kernel to determine
if it is running with SEV active or not.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |3 +++
 arch/x86/kernel/mem_encrypt.S  |8 
 arch/x86/kernel/x8664_ksyms_64.c   |1 +
 3 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index e395729..9c592d1 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,6 +20,7 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
+extern unsigned int sev_active;
 
 u8 sme_get_me_loss(void);
 
@@ -50,6 +51,8 @@ void swiotlb_set_mem_dec(void *vaddr, unsigned long size);
 
#define sme_me_mask    0UL
 
+#define sev_active 0
+
 static inline u8 sme_get_me_loss(void)
 {
return 0;
diff --git a/arch/x86/kernel/mem_encrypt.S b/arch/x86/kernel/mem_encrypt.S
index bf9f6a9..6a8cd18 100644
--- a/arch/x86/kernel/mem_encrypt.S
+++ b/arch/x86/kernel/mem_encrypt.S
@@ -96,6 +96,10 @@ ENDPROC(sme_enable)
 
 ENTRY(sme_encrypt_kernel)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+   /* If SEV is active then the kernel is already encrypted */
+   cmpl    $0, sev_active(%rip)
+   jnz .Lencrypt_exit
+
/* If SME is not active then no need to encrypt the kernel */
cmpq    $0, sme_me_mask(%rip)
jz  .Lencrypt_exit
@@ -334,6 +338,10 @@ sme_me_loss:
.byte   0x00
.align  8
 
+ENTRY(sev_active)
+   .word   0x00000000
+   .align  8
+
 mem_encrypt_enable_option:
.asciz "mem_encrypt=on"
.align  8
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index 651c4c8..14bfc0b 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -88,4 +88,5 @@ EXPORT_SYMBOL(___preempt_schedule_notrace);
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 EXPORT_SYMBOL_GPL(sme_me_mask);
 EXPORT_SYMBOL_GPL(sme_get_me_loss);
+EXPORT_SYMBOL_GPL(sev_active);
 #endif



[RFC PATCH v1 05/28] KVM: SVM: prepare for new bit definition in nested_ctl

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the this field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h |2 ++
 arch/x86/kvm/svm.c |7 ---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b2de7c..9b59260 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1177,7 +1177,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
if (npt_enabled) {
/* Setup VMCB for Nested Paging */
-   control->nested_ctl = 1;
+   control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
clr_intercept(svm, INTERCEPT_INVLPG);
clr_exception_intercept(svm, PF_VECTOR);
clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2701,7 +2701,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
if (vmcb->control.asid == 0)
return false;
 
-   if (vmcb->control.nested_ctl && !npt_enabled)
+   if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+   !npt_enabled)
return false;
 
return true;
@@ -2776,7 +2777,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
else
svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-   if (nested_vmcb->control.nested_ctl) {
+   if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
kvm_mmu_unload(&svm->vcpu);
svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
nested_svm_init_mmu_context(&svm->vcpu);



[RFC PATCH v1 26/28] KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command

2016-08-22 Thread Brijesh Singh
The command decrypts a page of guest memory for debugging purposes.

For more information see [1], section 7.1

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf
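
A debugger in userspace would drive this through the KVM_SEV_ISSUE_CMD
ioctl introduced earlier in the series. Untested, but the call would look
roughly like this (vm_fd, guest_va and handle_error() are placeholders):

	struct kvm_sev_dbg_decrypt dbg;
	struct kvm_sev_issue_cmd cmd;
	unsigned char buf[256];

	dbg.src_addr = guest_va;		/* guest address to decrypt */
	dbg.dst_addr = (unsigned long)buf;	/* userspace destination */
	dbg.length = sizeof(buf);		/* must be <= PAGE_SIZE */

	cmd.cmd = KVM_SEV_DBG_DECRYPT;
	cmd.opaque = (unsigned long)&dbg;

	if (ioctl(vm_fd, KVM_SEV_ISSUE_CMD, &cmd))
		handle_error(cmd.ret_code);	/* firmware error code */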

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   83 
 1 file changed, 83 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 63e7d15..b383bc7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5606,6 +5606,84 @@ err_1:
return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+ void *dst, int *psp_ret)
+{
+   int ret, pinned;
+   struct page **inpages;
+   struct psp_data_dbg *decrypt;
+
+   decrypt = kzalloc(sizeof(*decrypt), GFP_KERNEL);
+   if (!decrypt)
+   return -ENOMEM;
+
+   ret = -ENOMEM;
+   inpages = kzalloc(1 * sizeof(struct page *), GFP_KERNEL);
+   if (!inpages)
+   goto err_1;
+
+   /* pin the user virtual address */
+   ret = -EFAULT;
+   down_read(&current->mm->mmap_sem);
+   pinned = get_user_pages(src, 1, 1, 0, inpages, NULL);
+   up_read(&current->mm->mmap_sem);
+   if (pinned < 0)
+   goto err_2;
+
+   decrypt->hdr.buffer_len = sizeof(*decrypt);
+   decrypt->handle = kvm_sev_handle();
+   decrypt->dst_addr = __pa(dst) | sme_me_mask;
+   decrypt->src_addr = __sev_page_pa(inpages[0]);
+   decrypt->length = PAGE_SIZE;
+
+   ret = psp_dbg_decrypt(decrypt, psp_ret);
+   if (ret)
+   printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+   ret, *psp_ret);
+   release_pages(inpages, 1, 0);
+err_2:
+   kfree(inpages);
+err_1:
+   kfree(decrypt);
+   return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm,
+  struct kvm_sev_dbg_decrypt __user *argp,
+  int *psp_ret)
+{
+   void *data;
+   int ret, offset, len;
+   struct kvm_sev_dbg_decrypt debug;
+
+   if (!kvm_sev_guest())
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, argp, sizeof(*argp)))
+   return -EFAULT;
+
+   if (debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   /* decrypt one page */
+   ret = __sev_dbg_decrypt_page(kvm, debug.src_addr, data, psp_ret);
+   if (ret)
+   goto err_1;
+
+   /* we have decrypted the full page but copy only the requested length */
+   offset = debug.src_addr & (PAGE_SIZE - 1);
+   len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+   if (copy_to_user((uint8_t *)debug.dst_addr, data + offset, len))
+   ret = -EFAULT;
+err_1:
+   free_page((unsigned long)data);
+   return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5636,6 +5714,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
&arg.ret_code);
break;
}
+   case KVM_SEV_DBG_DECRYPT: {
+   r = sev_dbg_decrypt(kvm, (void *)arg.opaque,
+   &arg.ret_code);
+   break;
+   }
default:
break;
}



[RFC PATCH v1 27/28] KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command

2016-08-22 Thread Brijesh Singh
The command encrypts a region of guest memory for debugging purposes.

For more information see [1], section 7.2

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf
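
For example, writing 5 bytes at a destination offset of 0x3 is not
16-byte aligned, so the code first decrypts the whole destination page
into a scratch page, merges the 5 source bytes in at the right offset and
re-encrypts the full page; a 32-byte write at a 16-byte aligned
destination is instead encrypted directly into place.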

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |  100 
 1 file changed, 100 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b383bc7..4af195d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5684,6 +5684,101 @@ err_1:
return ret;
 }
 
+static int sev_dbg_encrypt(struct kvm *kvm,
+  struct kvm_sev_dbg_encrypt __user *argp,
+  int *psp_ret)
+{
+   void *data;
+   int len, ret, d_off;
+   struct page **inpages;
+   struct psp_data_dbg *encrypt;
+   struct kvm_sev_dbg_encrypt debug;
+   unsigned long src_addr, dst_addr;
+
+   if (!kvm_sev_guest())
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, argp, sizeof(*argp)))
+   return -EFAULT;
+
+   if (debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   len = debug.length;
+   src_addr = debug.src_addr;
+   dst_addr = debug.dst_addr;
+
+   inpages = kzalloc(1 * sizeof(struct page *), GFP_KERNEL);
+   if (!inpages)
+   return -ENOMEM;
+
+   /* pin the guest destination virtual address */
+   down_read(&current->mm->mmap_sem);
+   ret = get_user_pages(dst_addr, 1, 1, 0, inpages, NULL);
+   up_read(&current->mm->mmap_sem);
+   if (ret < 0)
+   goto err_1;
+
+   encrypt = kzalloc(sizeof(*encrypt), GFP_KERNEL);
+   if (!encrypt)
+   goto err_2;
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data)
+   goto err_3;
+
+   encrypt->hdr.buffer_len = sizeof(*encrypt);
+   encrypt->handle = kvm_sev_handle();
+
+   if ((len & 15) || (dst_addr & 15)) {
+   /* if destination address and length are not 16-byte
+* aligned then:
+* a) decrypt destination page into temporary buffer
+* b) copy source data into temporary buffer at correct offset
+* c) encrypt temporary buffer
+*/
+   ret = __sev_dbg_decrypt_page(kvm, dst_addr, data, psp_ret);
+   if (ret)
+   goto err_4;
+
+   d_off = dst_addr & (PAGE_SIZE - 1);
+   ret = -EFAULT;
+   if (copy_from_user(data + d_off,
+   (uint8_t *)debug.src_addr, len))
+   goto err_4;
+
+   encrypt->length = PAGE_SIZE;
+   encrypt->src_addr = __pa(data) | sme_me_mask;
+   encrypt->dst_addr =  __sev_page_pa(inpages[0]);
+   } else {
+   if (copy_from_user(data, (uint8_t *)debug.src_addr, len))
+   goto err_4;
+
+   d_off = dst_addr & (PAGE_SIZE - 1);
+   encrypt->length = len;
+   encrypt->src_addr = __pa(data) | sme_me_mask;
+   encrypt->dst_addr = __sev_page_pa(inpages[0]);
+   encrypt->dst_addr += d_off;
+   }
+
+   ret = psp_dbg_encrypt(encrypt, psp_ret);
+   if (ret)
+   printk(KERN_ERR "SEV: DEBUG_ENCRYPT: [%#lx=>%#lx+%#x] "
+   "%d (%#010x)\n",src_addr, dst_addr, len,
+   ret, *psp_ret);
+
+err_4:
+   free_page((unsigned long)data);
+err_3:
+   kfree(encrypt);
+err_2:
+   release_pages(inpages, 1, 0);
+err_1:
+   kfree(inpages);
+
+   return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5719,6 +5814,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
&arg.ret_code);
break;
}
+   case KVM_SEV_DBG_ENCRYPT: {
+   r = sev_dbg_encrypt(kvm, (void *)arg.opaque,
+   &arg.ret_code);
+   break;
+   }
default:
break;
}



[RFC PATCH v1 16/28] x86: Add support to determine if running with SEV enabled

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Early in the boot process, add a check to determine if the kernel is
running with Secure Encrypted Virtualization (SEV) enabled. If active,
the kernel will perform the steps necessary to ensure the proper kernel
initialization process is performed.
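
For reference, the assembly check added below is roughly equivalent to
the following C (a readability sketch only, not part of the patch):

	static unsigned int detect_sev_c_bit(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Is a hypervisor present with leaf 0x40000001? */
		cpuid(0x40000000, &eax, &ebx, &ecx, &edx);
		if (eax < 0x40000001)
			return 0;

		/* Does KVM advertise SEV to this guest? */
		cpuid(0x40000001, &eax, &ebx, &ecx, &edx);
		if (!(eax & (1 << KVM_FEATURE_SEV)))
			return 0;

		/* CPUID Fn8000_001F[EAX] bit 0: memory encryption */
		cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
		if (!(eax & 1))
			return 0;

		/* Fn8000_001F[EBX] bits 5:0: C-bit position */
		return ebx & 0x3f;
	}

A non-zero return value is the page table bit position used to build the
encryption mask for the early page tables.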

Signed-off-by: Tom Lendacky 
---
 arch/x86/boot/compressed/Makefile  |2 +
 arch/x86/boot/compressed/head_64.S |   19 +
 arch/x86/boot/compressed/mem_encrypt.S |  123 
 arch/x86/include/uapi/asm/hyperv.h |4 +
 arch/x86/include/uapi/asm/kvm_para.h   |3 +
 arch/x86/kernel/mem_encrypt.S  |   36 +
 6 files changed, 187 insertions(+)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 536ccfc..4888df9 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -73,6 +73,8 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o 
$(obj)/misc.o \
$(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
$(obj)/piggy.o $(obj)/cpuflags.o
 
+vmlinux-objs-$(CONFIG_X86_64) += $(obj)/mem_encrypt.o
+
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 0d80a7a..acb907a 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -131,6 +131,19 @@ ENTRY(startup_32)
  /*
   * Build early 4G boot pagetable
   */
+   /*
+* If SEV is active set the encryption mask in the page tables. This
+* will ensure that when the kernel is copied and decompressed it
+* will be done so encrypted.
+*/
+   call    sev_active
+   xorl    %edx, %edx
+   testl   %eax, %eax
+   jz  1f
+   subl    $32, %eax   /* Encryption bit is always above bit 31 */
+   bts %eax, %edx  /* Set encryption mask for page tables */
+1:
+
/* Initialize Page tables to 0 */
leal    pgtable(%ebx), %edi
xorl    %eax, %eax
@@ -141,12 +154,14 @@ ENTRY(startup_32)
leal    pgtable + 0(%ebx), %edi
leal    0x1007 (%edi), %eax
movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
 
/* Build Level 3 */
leal    pgtable + 0x1000(%ebx), %edi
leal    0x1007(%edi), %eax
movl    $4, %ecx
1:  movl    %eax, 0x00(%edi)
+   addl    %edx, 0x04(%edi)
addl    $0x00001000, %eax
addl    $8, %edi
decl    %ecx
@@ -157,6 +172,7 @@ ENTRY(startup_32)
movl    $0x00000183, %eax
movl    $2048, %ecx
1:  movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
addl    $0x00200000, %eax
addl    $8, %edi
decl    %ecx
@@ -344,6 +360,9 @@ preferred_addr:
subl    $_end, %ebx
addq    %rbp, %rbx
 
+   /* Check for SEV and adjust page tables as necessary */
+   call    sev_adjust
 
/* Set up the stack */
leaq    boot_stack_end(%rbx), %rsp
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S 
b/arch/x86/boot/compressed/mem_encrypt.S
new file mode 100644
index 000..56e19f6
--- /dev/null
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -0,0 +1,123 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+   .text
+   .code32
+ENTRY(sev_active)
+   xor %eax, %eax
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   push    %ebx
+   push    %ecx
+   push    %edx
+
+   /* Check if running under a hypervisor */
+   movl    $0x40000000, %eax
+   cpuid
+   cmpl    $0x40000001, %eax
+   jb  .Lno_sev
+
+   movl    $0x40000001, %eax
+   cpuid
+   bt  $KVM_FEATURE_SEV, %eax
+   jnc .Lno_sev
+
+   /*
+* Check for memory encryption feature:
+*   CPUID Fn8000_001F[EAX] - Bit 0
+*/
+   movl    $0x8000001f, %eax
+   cpuid
+   bt  $0, %eax
+   jnc .Lno_sev
+
+   /*
+* Get memory encryption information:
+*   CPUID Fn8000_001F[EBX] - Bits 5:0
+* Pagetable bit position used to indicate encryption
+*/
+   movl    %ebx, %eax
+   andl    $0x3f, %eax
+   jmp .Lsev_exit
+
+.Lno_sev:
+   xor %eax, %eax
+
+.Lsev_exit:
+   pop %edx
+   pop %ecx
+   pop %ebx
+
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+   ret
+ENDPROC(sev_active)
+
+   .code64
+ENTRY(sev_adjust)
+#ifdef 

[RFC PATCH v1 22/28] KVM: SVM: add SEV launch start command

2016-08-22 Thread Brijesh Singh
The command initiates the process of launching this guest into
SEV-enabled mode.

For more information on command structure see [1], section 6.1

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf
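
From qemu the launch would be kicked off with a mostly-zeroed
kvm_sev_launch_start passed through KVM_SEV_ISSUE_CMD; a handle of 0 asks
the firmware to create a new guest context, while an existing handle (see
sev_pre_start() below) is first deactivated and decommissioned so that
its asid can be reused for the new launch.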

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |  212 +++-
 1 file changed, 209 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index dcee635..0b6da4a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -265,6 +265,9 @@ static unsigned long *sev_asid_bitmap;
 
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+static void sev_deactivate_handle(unsigned int handle);
+static void sev_decommission_handle(unsigned int handle);
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret);
 
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);
@@ -1645,9 +1648,18 @@ static void sev_uninit_vcpu(struct vcpu_svm *svm)
 
svm_sev_unref();
 
-   for_each_possible_cpu(cpu) {
-   sd = per_cpu(svm_data, cpu);
-   sd->sev_vmcb[asid] = NULL;
+   /* when the reference count reaches zero, free the SEV asid and
+* deactivate the psp handle
+*/
+   if (!svm_sev_ref_count()) {
+   sev_deactivate_handle(svm_sev_handle());
+   sev_decommission_handle(svm_sev_handle());
+   sev_asid_free(svm_sev_asid());
+
+   for_each_possible_cpu(cpu) {
+   sd = per_cpu(svm_data, cpu);
+   sd->sev_vmcb[asid] = NULL;
+   }
}
 }
 
@@ -5196,6 +5208,198 @@ static void sev_asid_free(int asid)
clear_bit(asid, sev_asid_bitmap);
 }
 
+static void sev_decommission_handle(unsigned int handle)
+{
+   int ret, psp_ret;
+   struct psp_data_decommission *decommission;
+
+   decommission = kzalloc(sizeof(*decommission), GFP_KERNEL);
+   if (!decommission)
+   return;
+
+   decommission->hdr.buffer_len = sizeof(*decommission);
+   decommission->handle = handle;
+   ret = psp_guest_decommission(decommission, &psp_ret);
+   if (ret)
+   printk(KERN_ERR "SEV: DECOMISSION ret=%d (%#010x)\n",
+   ret, psp_ret);
+
+   kfree(decommission);
+}
+
+static void sev_deactivate_handle(unsigned int handle)
+{
+   int ret, psp_ret;
+   struct psp_data_deactivate *deactivate;
+
+   deactivate = kzalloc(sizeof(*deactivate), GFP_KERNEL);
+   if (!deactivate)
+   return;
+
+   deactivate->hdr.buffer_len = sizeof(*deactivate);
+   deactivate->handle = handle;
+   ret = psp_guest_deactivate(deactivate, &psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: DEACTIVATE ret=%d (%#010x)\n",
+   ret, psp_ret);
+   goto buffer_free;
+   }
+
+   wbinvd_on_all_cpus();
+
+   ret = psp_guest_df_flush(&psp_ret);
+   if (ret)
+   printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+   ret, psp_ret);
+
+buffer_free:
+   kfree(deactivate);
+}
+
+static int sev_activate_asid(unsigned int handle, int asid, int *psp_ret)
+{
+   int ret;
+   struct psp_data_activate *activate;
+
+   wbinvd_on_all_cpus();
+
+   ret = psp_guest_df_flush(psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: DF_FLUSH ret=%d (%#010x)\n",
+   ret, *psp_ret);
+   return ret;
+   }
+
+   activate = kzalloc(sizeof(*activate), GFP_KERNEL);
+   if (!activate)
+   return -ENOMEM;
+
+   activate->hdr.buffer_len = sizeof(*activate);
+   activate->handle = handle;
+   activate->asid   = asid;
+   ret = psp_guest_activate(activate, psp_ret);
+   if (ret)
+   printk(KERN_ERR "SEV: ACTIVATE ret=%d (%#010x)\n",
+   ret, *psp_ret);
+   kfree(activate);
+   return ret;
+}
+
+static int sev_pre_start(struct kvm *kvm, int *asid)
+{
+   int ret;
+
+   /* If guest has active psp handle then deactivate before calling
+* launch start.
+*/
+   if (kvm_sev_guest()) {
+   sev_deactivate_handle(kvm_sev_handle());
+   sev_decommission_handle(kvm_sev_handle());
+   *asid = kvm->arch.sev_info.asid;  /* reuse the asid */
+   ret = 0;
+   } else {
+   /* Allocate new asid for this launch */
+   ret = sev_asid_new();
+   if (ret < 0) {
+   printk(KERN_ERR "SEV: failed to allocate asid\n");
+   return ret;
+   }
+   *asid = ret;
+   ret = 0;
+   }
+
+   return ret;
+}
+

[RFC PATCH v1 18/28] crypto: add AMD Platform Security Processor driver

2016-08-22 Thread Brijesh Singh
This driver communicates with the Secure Encrypted Virtualization (SEV)
firmware running within the AMD secure processor, providing a secure key
management interface for SEV guests.

Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 drivers/crypto/Kconfig   |   11 +
 drivers/crypto/Makefile  |1 
 drivers/crypto/psp/Kconfig   |8 
 drivers/crypto/psp/Makefile  |3 
 drivers/crypto/psp/psp-dev.c |  220 +++
 drivers/crypto/psp/psp-dev.h |   95 +
 drivers/crypto/psp/psp-ops.c |  454 +++
 drivers/crypto/psp/psp-pci.c |  376 +++
 include/linux/ccp-psp.h  |  833 ++
 include/uapi/linux/Kbuild|1 
 include/uapi/linux/ccp-psp.h |  182 +
 11 files changed, 2184 insertions(+)
 create mode 100644 drivers/crypto/psp/Kconfig
 create mode 100644 drivers/crypto/psp/Makefile
 create mode 100644 drivers/crypto/psp/psp-dev.c
 create mode 100644 drivers/crypto/psp/psp-dev.h
 create mode 100644 drivers/crypto/psp/psp-ops.c
 create mode 100644 drivers/crypto/psp/psp-pci.c
 create mode 100644 include/linux/ccp-psp.h
 create mode 100644 include/uapi/linux/ccp-psp.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 1af94e2..3bdbc51 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -464,6 +464,17 @@ if CRYPTO_DEV_CCP
source "drivers/crypto/ccp/Kconfig"
 endif
 
+config CRYPTO_DEV_PSP
+   bool "Support for AMD Platform Security Processor"
+   depends on X86 && PCI
+   help
+ The AMD Platform Security Processor provides hardware key-
+ management services for VMGuard encrypted memory.
+
+if CRYPTO_DEV_PSP
+   source "drivers/crypto/psp/Kconfig"
+endif
+
 config CRYPTO_DEV_MXS_DCP
tristate "Support for Freescale MXS DCP"
depends on (ARCH_MXS || ARCH_MXC)
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 3c6432d..1ea1e08 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
 obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
 obj-$(CONFIG_CRYPTO_DEV_BFIN_CRC) += bfin_crc.o
 obj-$(CONFIG_CRYPTO_DEV_CCP) += ccp/
+obj-$(CONFIG_CRYPTO_DEV_PSP) += psp/
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam/
 obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
 obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
diff --git a/drivers/crypto/psp/Kconfig b/drivers/crypto/psp/Kconfig
new file mode 100644
index 000..acd9b87
--- /dev/null
+++ b/drivers/crypto/psp/Kconfig
@@ -0,0 +1,8 @@
+config CRYPTO_DEV_PSP_DD
+   tristate "PSP Key Management device driver"
+   depends on CRYPTO_DEV_PSP
+   default m
+   help
+ Provides the interface to use the AMD PSP key management APIs
+ for use with the AMD Secure Enhanced Virtualization. If you
+ choose 'M' here, this module will be called psp.
diff --git a/drivers/crypto/psp/Makefile b/drivers/crypto/psp/Makefile
new file mode 100644
index 000..1b7d00c
--- /dev/null
+++ b/drivers/crypto/psp/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_CRYPTO_DEV_PSP_DD) += psp.o
+psp-objs := psp-dev.o psp-ops.o
+psp-$(CONFIG_PCI) += psp-pci.o
diff --git a/drivers/crypto/psp/psp-dev.c b/drivers/crypto/psp/psp-dev.c
new file mode 100644
index 000..65d5c7e
--- /dev/null
+++ b/drivers/crypto/psp/psp-dev.c
@@ -0,0 +1,220 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lenda...@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "psp-dev.h"
+
+MODULE_AUTHOR("Advanced Micro Devices, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1.0");
+MODULE_DESCRIPTION("AMD VMGuard key-management driver prototype");
+
+static struct psp_device *psp_master;
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+static atomic_t psp_id;
+
+static void psp_add_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_add_tail(&psp->entry, &psp_devs);
+   psp_master = psp->get_master(&psp_devs);
+
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_del_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_del(&psp->entry);
+   if (psp == psp_master)
+   psp_master = NULL;
+
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static void psp_check_support(struct psp_device *psp)
+{
+   

[RFC PATCH v1 21/28] KVM: introduce KVM_SEV_ISSUE_CMD ioctl

2016-08-22 Thread Brijesh Singh
The ioctl will be used by qemu to issue the Secure Encrypted
Virtualization (SEV) guest commands to transition a guest into
SEV-enabled mode.

a typical usage:

struct kvm_sev_launch_start start;
struct kvm_sev_issue_cmd data;

data.cmd = KVM_SEV_LAUNCH_START;
data.opaque = &start;

ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);

On SEV command failure, data.ret_code will contain the firmware error code.
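
Untested, but a fuller sequence with error checking would look something
like this (fd is the VM file descriptor):

	ret = ioctl(fd, KVM_SEV_ISSUE_CMD, &data);
	if (ret < 0 && errno == ENOTTY) {
		/* the kernel has no SEV support */
	} else if (data.ret_code) {
		/* the command reached the firmware and failed;
		 * ret_code holds the firmware status */
	}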

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/include/asm/kvm_host.h |3 +
 arch/x86/kvm/x86.c  |   13 
 include/uapi/linux/kvm.h|  125 +++
 3 files changed, 141 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9b885fc..a94e37d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1040,6 +1040,9 @@ struct kvm_x86_ops {
void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
 
void (*setup_mce)(struct kvm_vcpu *vcpu);
+
+   int (*sev_issue_cmd)(struct kvm *kvm,
+struct kvm_sev_issue_cmd __user *argp);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d6f2f4b..0c0adad 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3820,6 +3820,15 @@ split_irqchip_unlock:
return r;
 }
 
+static int kvm_vm_ioctl_sev_issue_cmd(struct kvm *kvm,
+ struct kvm_sev_issue_cmd __user *argp)
+{
+   if (kvm_x86_ops->sev_issue_cmd)
+   return kvm_x86_ops->sev_issue_cmd(kvm, argp);
+
+   return -ENOTTY;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
   unsigned int ioctl, unsigned long arg)
 {
@@ -4085,6 +4094,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
r = kvm_vm_ioctl_enable_cap(kvm, &cap);
break;
}
+   case KVM_SEV_ISSUE_CMD: {
+   r = kvm_vm_ioctl_sev_issue_cmd(kvm, argp);
+   break;
+   }
default:
r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 300ef25..72c18c3 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1274,6 +1274,131 @@ struct kvm_s390_ucas_mapping {
 /* Available with KVM_CAP_X86_SMM */
 #define KVM_SMI   _IO(KVMIO,   0xb7)
 
+/* Secure Encrypted Virtualization mode */
+enum sev_cmd {
+   KVM_SEV_LAUNCH_START = 0,
+   KVM_SEV_LAUNCH_UPDATE,
+   KVM_SEV_LAUNCH_FINISH,
+   KVM_SEV_GUEST_STATUS,
+   KVM_SEV_DBG_DECRYPT,
+   KVM_SEV_DBG_ENCRYPT,
+   KVM_SEV_RECEIVE_START,
+   KVM_SEV_RECEIVE_UPDATE,
+   KVM_SEV_RECEIVE_FINISH,
+   KVM_SEV_SEND_START,
+   KVM_SEV_SEND_UPDATE,
+   KVM_SEV_SEND_FINISH,
+   KVM_SEV_API_VERSION,
+   KVM_SEV_NR_MAX,
+};
+
+struct kvm_sev_issue_cmd {
+   __u32 cmd;
+   __u64 opaque;
+   __u32 ret_code;
+};
+
+struct kvm_sev_launch_start {
+   __u32 handle;
+   __u32 flags;
+   __u32 policy;
+   __u8 nonce[16];
+   __u8 dh_pub_qx[32];
+   __u8 dh_pub_qy[32];
+};
+
+struct kvm_sev_launch_update {
+   __u64   address;
+   __u32   length;
+};
+
+struct kvm_sev_launch_finish {
+   __u32 vcpu_count;
+   __u32 vcpu_length;
+   __u64 vcpu_mask_addr;
+   __u32 vcpu_mask_length;
+   __u8  measurement[32];
+};
+
+struct kvm_sev_guest_status {
+   __u32 policy;
+   __u32 state;
+};
+
+struct kvm_sev_dbg_decrypt {
+   __u64 src_addr;
+   __u64 dst_addr;
+   __u32 length;
+};
+
+struct kvm_sev_dbg_encrypt {
+   __u64 src_addr;
+   __u64 dst_addr;
+   __u32 length;
+};
+
+struct kvm_sev_receive_start {
+   __u32 handle;
+   __u32 flags;
+   __u32 policy;
+   __u8 policy_meas[32];
+   __u8 wrapped_tek[24];
+   __u8 wrapped_tik[24];
+   __u8 ten[16];
+   __u8 dh_pub_qx[32];
+   __u8 dh_pub_qy[32];
+   __u8 nonce[16];
+};
+
+struct kvm_sev_receive_update {
+   __u8 iv[16];
+   __u64 address;
+   __u32 length;
+};
+
+struct kvm_sev_receive_finish {
+   __u8 measurement[32];
+};
+
+struct kvm_sev_send_start {
+   __u8 nonce[16];
+   __u32 policy;
+   __u8 policy_meas[32];
+   __u8 wrapped_tek[24];
+   __u8 wrapped_tik[24];
+   __u8 ten[16];
+   __u8 iv[16];
+   __u32 flags;
+   __u8 api_major;
+   __u8 api_minor;
+   __u32 serial;
+   __u8 dh_pub_qx[32];
+   __u8 dh_pub_qy[32];
+   __u8 pek_sig_r[32];
+   __u8 pek_sig_s[32];
+   __u8 cek_sig_r[32];
+   __u8 cek_sig_s[32];
+   __u8 cek_pub_qx[32];
+   __u8 cek_pub_qy[32];
+   __u8 ask_sig_r[32];
+   __u8 ask_sig_s[32];
+   __u32 ncerts;
+   __u32 cert_length;
+   __u64 certs_addr;
+};
+
+struct kvm_sev_send_update {
+   __u32 length;
+   __u64 src_addr;
+   __u64 dst_addr;
+};
+
+struct kvm_sev

[RFC PATCH v1 17/28] KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Modify the SVM cpuid update function to set the SEV KVM cpu feature bit
when Secure Encrypted Virtualization (SEV) is active. SEV is active if
Secure Memory Encryption is active in the host and the SEV_ENABLE bit of
the VMCB is set.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kvm/cpuid.c |4 +++-
 arch/x86/kvm/svm.c   |   18 ++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3235e0f..d34faea 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -583,7 +583,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
entry->edx = 0;
break;
case 0x80000000:
-   entry->eax = min(entry->eax, 0x8000001a);
+   entry->eax = min(entry->eax, 0x8000001f);
break;
case 0x80000001:
entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
@@ -616,6 +616,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
break;
case 0x8000001d:
break;
+   case 0x8000001f:
+   break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
/*Just support up to 0xC0000004 now*/
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9b59260..211be94 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -43,6 +43,7 @@
 #include 
 
 #include 
+#include 
 #include "trace.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
@@ -4677,10 +4678,27 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_cpuid_entry2 *entry;
+   struct vmcb_control_area *ca = &svm->vmcb->control;
+   struct kvm_cpuid_entry2 *features, *sev_info;
 
/* Update nrips enabled cache */
svm->nrips_enabled = !!guest_cpuid_has_nrips(&svm->vcpu);
 
+   /* Check for Secure Encrypted Virtualization support */
+   features = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
+   if (!features)
+   return;
+
+   sev_info = kvm_find_cpuid_entry(vcpu, 0x8000001f, 0);
+   if (!sev_info)
+   return;
+
+   if (ca->nested_ctl & SVM_NESTED_CTL_SEV_ENABLE) {
+   features->eax |= (1 << KVM_FEATURE_SEV);
+   cpuid(0x8000001f, &sev_info->eax, &sev_info->ebx,
+ &sev_info->ecx, &sev_info->edx);
+   }
+
if (!kvm_vcpu_apicv_active(vcpu))
return;
 



[RFC PATCH v1 15/28] x86: Unroll string I/O when SEV is active

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.
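
So with sev_active set, e.g. insw(port, buf, 8) executes eight discrete
inw(port) reads into buf instead of a single rep insw, avoiding the
string I/O emulation that SEV cannot support.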

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/io.h |   26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index de25aad..130b3e2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -303,14 +303,32 @@ static inline unsigned type in##bwl##_p(int port) 
\
\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; outs" #bwl   \
-: "+S"(addr), "+c"(count) : "d"(port));\
+   if (sev_active) {   \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   out##bwl(*value, port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; outs" #bwl   \
+: "+S"(addr), "+c"(count) : "d"(port));\
+   }   \
 }  \
\
 static inline void ins##bwl(int port, void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; ins" #bwl\
-: "+D"(addr), "+c"(count) : "d"(port));\
+   if (sev_active) {   \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   *value = in##bwl(port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; ins" #bwl\
+: "+D"(addr), "+c"(count) : "d"(port));\
+   }   \
 }
 
 BUILDIO(b, b, char)



[RFC PATCH v1 07/28] x86: Do not encrypt memory areas if SEV is enabled

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

When running under SEV, some memory areas that were originally not
encrypted under SME are already encrypted. In these situations do not
attempt to encrypt them.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/head64.c |4 ++--
 arch/x86/kernel/setup.c  |7 ---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 358d7bc..4a15def 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -114,7 +114,7 @@ static void __init create_unencrypted_mapping(void 
*address, unsigned long size)
unsigned long physaddr = (unsigned long)address - __PAGE_OFFSET;
pmdval_t pmd_flags, pmd;
 
-   if (!sme_me_mask)
+   if (!sme_me_mask || sev_active)
return;
 
/* Clear the encryption mask from the early_pmd_flags */
@@ -165,7 +165,7 @@ static void __init __clear_mapping(unsigned long address)
 
 static void __init clear_mapping(void *address, unsigned long size)
 {
-   if (!sme_me_mask)
+   if (!sme_me_mask || sev_active)
return;
 
do {
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index cec8a63..9c10383 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -380,10 +380,11 @@ static void __init reserve_initrd(void)
 
/*
 * This memory is marked encrypted by the kernel but the ramdisk
-* was loaded in the clear by the bootloader, so make sure that
-* the ramdisk image is encrypted.
+* was loaded in the clear by the bootloader (unless SEV is active),
+* so make sure that the ramdisk image is encrypted.
 */
-   sme_early_mem_enc(ramdisk_image, ramdisk_end - ramdisk_image);
+   if (!sev_active)
+   sme_early_mem_enc(ramdisk_image, ramdisk_end - ramdisk_image);
 
initrd_start = 0;
 



[RFC PATCH v1 03/28] kvm: svm: Use the hardware provided GPA instead of page walk

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations using rep, or
instructions that use two memory locations. With rep, the GPA will only
be the value from the initial NPF, and with dual memory locations we
won't know which memory address was translated into EXITINFO2.
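
For example, a rep movsb faulting on a later iteration would still report
only the GPA captured at the initial NPF, and an instruction such as a
push with a memory source touches both the operand and the stack, so
EXITINFO2 alone cannot say which of the two translations it holds.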

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kvm_emulate.h |3 +++
 arch/x86/include/asm/kvm_host.h|3 +++
 arch/x86/kvm/svm.c |2 ++
 arch/x86/kvm/x86.c |   17 -
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h 
b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..2d1ac09 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -344,6 +344,9 @@ struct x86_emulate_ctxt {
struct read_cache mem_read;
 };
 
+/* String operation identifier (matches the definition in emulate.c) */
+#define CTXT_STRING_OP (1 << 13)
+
 /* Repeat String Operation Prefix */
 #define REPE_PREFIX0xf3
 #define REPNE_PREFIX   0xf2
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c38f878..b1dd673 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -667,6 +667,9 @@ struct kvm_vcpu_arch {
 
int pending_ioapic_eoi;
int pending_external_vector;
+
+   /* GPA available (AMD only) */
+   bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fd5a9a8..9b2de7c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4055,6 +4055,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+   vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
vcpu->arch.cr0 = svm->vmcb->save.cr0;
if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 78295b0..d6f2f4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4382,7 +4382,19 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, 
unsigned long gva,
return 1;
}
 
-   *gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+   /*
+* If the exit was due to a NPF we may already have a GPA.
+* If the GPA is present, use it to avoid the GVA to GPA table
+* walk. Note, this cannot be used on string operations since
+* string operation using rep will only have the initial GPA
+* from when the NPF occurred.
+*/
+   if (vcpu->arch.gpa_available &&
+   !(vcpu->arch.emulate_ctxt.d & CTXT_STRING_OP))
+   *gpa = exception->address;
+   else
+   *gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access,
+  exception);
 
if (*gpa == UNMAPPED_GVA)
return -1;
@@ -5504,6 +5516,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
}
 
 restart:
+   /* Save the faulting GPA (cr2) in the address field */
+   ctxt->exception.address = cr2;
+
r = x86_emulate_insn(ctxt);
 
if (r == EMULATION_INTERCEPTED)



[RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)

2016-08-22 Thread Brijesh Singh
This RFC series provides support for AMD's new Secure Encrypted
Virtualization (SEV) feature. This RFC is built upon the Secure Memory
Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running 
multiple VMs under the control of a hypervisor. When enabled, SEV 
hardware tags all code and data with its VM ASID which indicates which 
VM the data originated from or is intended for. This tag is kept with 
the data at all times when inside the SOC, and prevents that data from 
being used by anyone other than the owner. While the tag protects VM 
data inside the SOC, AES with 128 bit encryption protects data outside 
the SOC. When data leaves or enters the SOC, it is encrypted/decrypted 
respectively by hardware with a key based on the associated tag.

SEV guest VMs have the concept of private and shared memory. Private
memory is encrypted with the guest-specific key, while shared memory may
be encrypted with the hypervisor key. Certain types of memory (namely
instruction pages and guest page tables) are always treated as private
memory by the hardware. For data memory, SEV guest VMs can choose which
pages they would like to be private. The choice is done using the
standard CPU page tables via the C-bit, and is fully controlled by the
guest. For security reasons, all DMA operations inside the guest must be
performed on shared pages (C-bit clear). Note that since the C-bit is
only controllable by the guest OS when it is operating in 64-bit or
32-bit PAE mode, in all other modes the SEV hardware forces the C-bit
to a 1.
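
As an illustration only (the exact helpers come later in the series), a
64-bit guest would share a DMA page with the hypervisor by clearing the
C-bit in its own page table entry, roughly:

	/* pte maps the DMA buffer; sme_me_mask carries the C-bit */
	set_pte(pte, __pte(pte_val(*pte) & ~sme_me_mask));
	__flush_tlb_one(vaddr);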

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with the SEV firmware running within the AMD secure processor and
provides a secure key management interface. The hypervisor uses this
interface to enable SEV for a secure guest and to perform common
hypervisor activities such as launching, running, snapshotting, migrating
and debugging a guest. A new ioctl (KVM_SEV_ISSUE_CMD) is introduced
which will enable Qemu to send commands to the SEV firmware during the
guest life cycle.

The RFC series also includes patches required in the guest OS to enable
the SEV feature. A guest OS can check for SEV support through the KVM
feature cpuid leaf.

The following links provide additional details:

AMD Memory Encryption whitepaper:
 
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
http://support.amd.com/TechDocs/24593.pdf
SME is section 7.10
SEV is section 15.34

Secure Encrypted Virtualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

---

TODO:
- send qemu/seabios RFC's on respective mailing list
- integrate the psp driver with CCP driver (they share the PCI id's)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with mem encryption enabled
  (e.g. acpi tables)
- determine how to share the guest memory with the hypervisor to support
  the pvclock driver

Brijesh Singh (11):
  crypto: add AMD Platform Security Processor driver
  KVM: SVM: prepare to reserve asid for SEV guest
  KVM: SVM: prepare for SEV guest management API support
  KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  KVM: SVM: add SEV launch start command
  KVM: SVM: add SEV launch update command
  KVM: SVM: add SEV_LAUNCH_FINISH command
  KVM: SVM: add KVM_SEV_GUEST_STATUS command
  KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
  KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
  KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
  kvm: svm: Add support for additional SVM NPF error codes
  kvm: svm: Add kvm_fast_pio_in support
  kvm: svm: Use the hardware provided GPA instead of page walk
  x86: Secure Encrypted Virtualization (SEV) support
  KVM: SVM: prepare for new bit definition in nested_ctl
  KVM: SVM: Add SEV feature definitions to KVM
  x86: Do not encrypt memory areas if SEV is enabled
  Access BOOT related data encrypted with SEV active
  x86/efi: Access EFI data as encrypted when SEV is active
  x86: Change early_ioremap to early_memremap for BOOT data
  x86: Don't decrypt trampoline area if SEV is active
  x86: DMA support for SEV memory encryption
  iommu/amd: AMD IOMMU support for SEV
  x86: Don't set the SME MSR bit when SEV is active
  x86: Unroll string I/O when SEV is active
  x86: Add support to determine if running with SEV enabled
  KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile  |2 
 arch/x86/boot/compressed/head_64

[RFC PATCH v1 02/28] kvm: svm: Add kvm_fast_pio_in support

2016-08-22 Thread Brijesh Singh
From: Tom Lendacky 

Update the I/O interception support to add the kvm_fast_pio_in function
to speed up the in instruction in the same way as the out instruction.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kvm_host.h |1 +
 arch/x86/kvm/svm.c  |5 +++--
 arch/x86/kvm/x86.c  |   43 +++
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3f05d36..c38f878 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1133,6 +1133,7 @@ int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data 
*msr);
 struct x86_emulate_ctxt;
 
 int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, unsigned short port);
+int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, unsigned short port);
 void kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
 int kvm_emulate_halt(struct kvm_vcpu *vcpu);
 int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d8b9c8c..fd5a9a8 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2131,7 +2131,7 @@ static int io_interception(struct vcpu_svm *svm)
++svm->vcpu.stat.io_exits;
string = (io_info & SVM_IOIO_STR_MASK) != 0;
in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
-   if (string || in)
+   if (string)
return emulate_instruction(vcpu, 0) == EMULATE_DONE;
 
port = io_info >> 16;
@@ -2139,7 +2139,8 @@ static int io_interception(struct vcpu_svm *svm)
svm->next_rip = svm->vmcb->control.exit_info_2;
	skip_emulated_instruction(&svm->vcpu);
 
-   return kvm_fast_pio_out(vcpu, size, port);
+   return in ? kvm_fast_pio_in(vcpu, size, port)
+ : kvm_fast_pio_out(vcpu, size, port);
 }
 
 static int nmi_interception(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d432894..78295b0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5579,6 +5579,49 @@ int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, unsigned short port)
 }
 EXPORT_SYMBOL_GPL(kvm_fast_pio_out);
 
+static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+{
+   unsigned long val;
+
+   /* We should only ever be called with arch.pio.count equal to 1 */
+   BUG_ON(vcpu->arch.pio.count != 1);
+
+   /* For size less than 4 we merge, else we zero extend */
+   val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+   : 0;
+
+   /*
+* Since vcpu->arch.pio.count == 1 let emulator_pio_in_emulated perform
+* the copy and tracing
+*/
+   emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, vcpu->arch.pio.size,
+                            vcpu->arch.pio.port, &val, 1);
+   kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+
+   return 1;
+}
+
+int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size, unsigned short port)
+{
+   unsigned long val;
+   int ret;
+
+   /* For size less than 4 we merge, else we zero extend */
+   val = (size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX) : 0;
+
+   ret = emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, size, port,
+                                  &val, 1);
+   if (ret) {
+   kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+   return ret;
+   }
+
+   vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+
+   return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_fast_pio_in);
+
 static int kvmclock_cpu_down_prep(unsigned int cpu)
 {
__this_cpu_write(cpu_tsc_khz, 0);

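To see why complete_fast_pio_in() is needed, it helps to look at the
userspace side of a port IN: KVM exits to the VMM with KVM_EXIT_IO, the VMM
writes the value into the shared kvm_run area, and the merge into RAX only
happens in the kernel on the next KVM_RUN. A minimal VMM-side sketch follows;
it is not part of this patch, and emulate_port_read/emulate_port_write are
placeholder device-model hooks, not real APIs.

#include <stdint.h>
#include <string.h>
#include <linux/kvm.h>

/* Placeholder device-model hooks (assumptions, not real APIs). */
uint32_t emulate_port_read(uint16_t port, uint8_t size);
void emulate_port_write(uint16_t port, const void *data, uint8_t size);

static void handle_io_exit(struct kvm_run *run)
{
	/* The data window lives inside the shared kvm_run mapping. */
	uint8_t *data = (uint8_t *)run + run->io.data_offset;

	if (run->io.direction == KVM_EXIT_IO_IN) {
		/* count == 1 on the fast (non-string) path. */
		uint32_t val = emulate_port_read(run->io.port, run->io.size);

		/* Merged into guest RAX by complete_fast_pio_in() on re-entry. */
		memcpy(data, &val, run->io.size);
	} else {
		/* OUT: the guest value is already in the data window. */
		emulate_port_write(run->io.port, data, run->io.size);
	}
}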


[RFC PATCH v1 00/28] x86: Secure Encrypted Virtualization (AMD)

2016-08-22 Thread Brijesh Singh
This RFC series provides support for AMD's new Secure Encrypted
Virtualization (SEV) feature. This RFC is built on top of the Secure
Memory Encryption (SME) RFC.

SEV is an extension to the AMD-V architecture which supports running 
multiple VMs under the control of a hypervisor. When enabled, SEV 
hardware tags all code and data with its VM ASID which indicates which 
VM the data originated from or is intended for. This tag is kept with 
the data at all times when inside the SOC, and prevents that data from 
being used by anyone other than the owner. While the tag protects VM 
data inside the SOC, AES with 128 bit encryption protects data outside 
the SOC. When data leaves or enters the SOC, it is encrypted/decrypted 
respectively by hardware with a key based on the associated tag.

SEV guest VMs have the concept of private and shared memory. Private memory
is encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to
be private. The choice is made through the standard CPU page tables using
the C-bit, and is fully controlled by the guest. For security reasons, all
DMA operations inside the guest must be performed on shared pages (C-bit
clear). Note that since the C-bit is only controllable by the guest OS when
it is operating in 64-bit or 32-bit PAE mode, in all other modes the SEV
hardware forces the C-bit to 1.
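
As an illustration of that last point, here is a minimal sketch (not part
of this series) of how a 64-bit guest could flip a data page between private
and shared, assuming the C-bit is exposed through a mask like the sme_me_mask
used elsewhere in these patches. A real implementation would also flush the
TLB and, when going shared -> private, re-encrypt the page contents in place.

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Sketch only: mark one guest data page as shared (C-bit clear). */
static void example_set_page_shared(pte_t *ptep)
{
	pte_t pte = *ptep;

	/* With the C-bit clear, hardware uses the shared (hypervisor) key. */
	set_pte(ptep, __pte(pte_val(pte) & ~sme_me_mask));
}

/* Sketch only: mark one guest data page as private (C-bit set). */
static void example_set_page_private(pte_t *ptep)
{
	pte_t pte = *ptep;

	set_pte(ptep, __pte(pte_val(pte) | sme_me_mask));
}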

SEV is designed to protect guest VMs from a benign but vulnerable
(i.e. not fully malicious) hypervisor. In particular, it reduces the attack
surface of guest VMs and can prevent certain types of VM-escape bugs
(e.g. hypervisor read-anywhere) from being used to steal guest data.

The RFC series also includes a crypto driver (psp.ko) which communicates
with the SEV firmware running within the AMD secure processor and provides
a secure key management interface. The hypervisor uses this interface to
enable SEV for a secure guest and to perform common hypervisor activities
such as launching, running, snapshotting, migrating and debugging a guest.
A new ioctl (KVM_SEV_ISSUE_CMD) is introduced which enables Qemu to send
commands to the SEV firmware during the guest life cycle.
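
A hedged sketch of how a VMM would drive the new ioctl. The field names
(cmd, opaque, ret_code) are reconstructed from the command handlers later
in this series, and the sketch assumes KVM copies the struct back on return
so ret_code is visible to userspace; the authoritative definition lives in
the series' patched uapi headers.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* with this series applied: KVM_SEV_ISSUE_CMD and
			 * struct kvm_sev_issue_cmd { cmd, opaque, ret_code } */

static int sev_issue(int vm_fd, uint32_t cmd, void *data)
{
	struct kvm_sev_issue_cmd input = {
		.cmd    = cmd,
		.opaque = (uint64_t)(uintptr_t)data,	/* command buffer */
	};

	/* On failure, input.ret_code holds the PSP firmware status. */
	return ioctl(vm_fd, KVM_SEV_ISSUE_CMD, &input);
}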

The RFC series also includes the patches required in the guest OS to enable
the SEV feature. A guest OS can check for SEV support by querying the
KVM_FEATURE cpuid leaf.
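
A guest-side sketch of that check: KVM paravirtual features are reported in
cpuid leaf 0x40000001 (KVM_CPUID_FEATURES). The SEV bit position below is
only a placeholder; the actual bit number is defined by this series.

#include <stdbool.h>
#include <cpuid.h>

#define KVM_CPUID_FEATURES	0x40000001
#define KVM_FEATURE_SEV_BIT	8	/* placeholder bit number */

static bool guest_has_sev(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Hypervisor leaves are safe to query when running under KVM. */
	__cpuid(KVM_CPUID_FEATURES, eax, ebx, ecx, edx);

	return eax & (1u << KVM_FEATURE_SEV_BIT);
}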

The following links provide additional details:

AMD Memory Encryption whitepaper:
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
http://support.amd.com/TechDocs/24593.pdf
SME is section 7.10
SEV is section 15.34

Secure Encrypted Virtualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Spec.pdf

---

TODO:
- send qemu/seabios RFCs to the respective mailing lists
- integrate the psp driver with the CCP driver (they share the PCI IDs)
- add SEV guest migration command support
- add SEV snapshotting command support
- determine how to do ioremap of physical memory with memory encryption enabled
  (e.g. acpi tables)
- determine how to share guest memory with the hypervisor to support the
  pvclock driver

Brijesh Singh (11):
  crypto: add AMD Platform Security Processor driver
  KVM: SVM: prepare to reserve asid for SEV guest
  KVM: SVM: prepare for SEV guest management API support
  KVM: introduce KVM_SEV_ISSUE_CMD ioctl
  KVM: SVM: add SEV launch start command
  KVM: SVM: add SEV launch update command
  KVM: SVM: add SEV_LAUNCH_FINISH command
  KVM: SVM: add KVM_SEV_GUEST_STATUS command
  KVM: SVM: add KVM_SEV_DEBUG_DECRYPT command
  KVM: SVM: add KVM_SEV_DEBUG_ENCRYPT command
  KVM: SVM: add command to query SEV API version

Tom Lendacky (17):
  kvm: svm: Add support for additional SVM NPF error codes
  kvm: svm: Add kvm_fast_pio_in support
  kvm: svm: Use the hardware provided GPA instead of page walk
  x86: Secure Encrypted Virtualization (SEV) support
  KVM: SVM: prepare for new bit definition in nested_ctl
  KVM: SVM: Add SEV feature definitions to KVM
  x86: Do not encrypt memory areas if SEV is enabled
  Access BOOT related data encrypted with SEV active
  x86/efi: Access EFI data as encrypted when SEV is active
  x86: Change early_ioremap to early_memremap for BOOT data
  x86: Don't decrypt trampoline area if SEV is active
  x86: DMA support for SEV memory encryption
  iommu/amd: AMD IOMMU support for SEV
  x86: Don't set the SME MSR bit when SEV is active
  x86: Unroll string I/O when SEV is active
  x86: Add support to determine if running with SEV enabled
  KVM: SVM: Enable SEV by setting the SEV_ENABLE cpu feature


 arch/x86/boot/compressed/Makefile  |2 
 arch/x86/boot/compressed/head_64

[RFC PATCH v1 28/28] KVM: SVM: add command to query SEV API version

2016-08-22 Thread Brijesh Singh
Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   23 +++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 4af195d..88b8f89 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5779,6 +5779,25 @@ err_1:
return ret;
 }
 
+static int sev_api_version(int *psp_ret)
+{
+   int ret;
+   struct psp_data_status *status;
+
+   status = kzalloc(sizeof(*status), GFP_KERNEL);
+   if (!status)
+   return -ENOMEM;
+
+   ret = psp_platform_status(status, psp_ret);
+   if (ret)
+   goto err;
+
+   ret = (status->api_major << 8) | status->api_minor;
+err:
+   kfree(status);
+   return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5819,6 +5838,10 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
&arg.ret_code);
break;
}
+   case KVM_SEV_API_VERSION: {
+   r = sev_api_version(&arg.ret_code);
+   break;
+   }
default:
break;
}

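On success, the packed value propagates back through amd_sev_issue_cmd() as
the ioctl return code, so a VMM can decode it as sketched below. This assumes
the return value is passed through unchanged, as the switch above suggests,
and reuses the hedged sev_issue() wrapper sketched after the cover letter.

#include <stdio.h>

static void print_sev_api_version(int vm_fd)
{
	/* Returns (api_major << 8) | api_minor, per sev_api_version() above. */
	int ver = sev_issue(vm_fd, KVM_SEV_API_VERSION, NULL);

	if (ver >= 0)
		printf("SEV firmware API version %d.%d\n",
		       (ver >> 8) & 0xff, ver & 0xff);
}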


[RFC PATCH v1 24/28] KVM: SVM: add SEV_LAUNCH_FINISH command

2016-08-22 Thread Brijesh Singh
The command is used for finalizing the guest launch into SEV mode.

For more information see [1], section 6.3

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   78 
 1 file changed, 78 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c78bdc6..60cc0f7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5497,6 +5497,79 @@ err_1:
return ret;
 }
  
+static int sev_launch_finish(struct kvm *kvm,
+struct kvm_sev_launch_finish __user *argp,
+int *psp_ret)
+{
+   int i, ret;
+   void *mask = NULL;
+   int buffer_len, len;
+   struct kvm_vcpu *vcpu;
+   struct psp_data_launch_finish *finish;
+   struct kvm_sev_launch_finish params;
+
+   if (!kvm_sev_guest())
+   return -EINVAL;
+
+   /* Get the parameters from the user */
+   if (copy_from_user(&params, argp, sizeof(*argp)))
+   return -EFAULT;
+
+   buffer_len = sizeof(*finish) + (sizeof(u64) * params.vcpu_count);
+   finish = kzalloc(buffer_len, GFP_KERNEL);
+   if (!finish)
+   return -ENOMEM;
+
+   /* copy the vcpu mask from user */
+   if (params.vcpu_mask_length && params.vcpu_mask_addr) {
+   ret = -ENOMEM;
+   mask = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!mask)
+   goto err_1;
+
+   len = min_t(size_t, PAGE_SIZE, params.vcpu_mask_length);
+   ret = -EFAULT;
+   if (copy_from_user(mask, (uint8_t*)params.vcpu_mask_addr, len))
+   goto err_2;
+   finish->vcpus.state_mask_addr = __psp_pa(mask);
+   }
+
+   finish->handle = kvm_sev_handle();
+   finish->hdr.buffer_len = buffer_len;
+   finish->vcpus.state_count = params.vcpu_count;
+   finish->vcpus.state_length = params.vcpu_length;
+   kvm_for_each_vcpu(i, vcpu, kvm) {
+   finish->vcpus.state_addr[i] =
+   to_svm(vcpu)->vmcb_pa | sme_me_mask;
+   if (i == params.vcpu_count)
+   break;
+   }
+
+   /* launch finish */
+   ret = psp_guest_launch_finish(finish, psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: LAUNCH_FINISH ret=%d (%#010x)\n",
+   ret, *psp_ret);
+   goto err_2;
+   }
+
+   /* Iterate through each vcpu and set the SEV KVM_SEV_FEATURE bit in
+    * KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu.
+    */
+   kvm_for_each_vcpu(i, vcpu, kvm)
+   svm_cpuid_update(vcpu);
+
+   /* copy the measurement for user */
+   if (copy_to_user(argp->measurement, finish->measurement, 32))
+   ret = -EFAULT;
+
+err_2:
+   free_page((unsigned long)mask);
+err_1:
+   kfree(finish);
+   return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5517,6 +5590,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
_code);
break;
}
+   case KVM_SEV_LAUNCH_FINISH: {
+   r = sev_launch_finish(kvm, (void *)arg.opaque,
+   &arg.ret_code);
+   break;
+   }
default:
break;
}

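For reference, a hedged sketch of the matching userspace call. The field
names mirror the params usage in the handler above (the exact uapi layout is
in the series' headers), sev_issue() is the wrapper sketched after the cover
letter, and dump_measurement() is a placeholder.

#include <stdint.h>

void dump_measurement(const uint8_t *m, unsigned int len);	/* placeholder */

static int sev_finish_launch(int vm_fd, uint64_t vcpu_mask_addr,
			     uint32_t vcpu_mask_len, uint32_t nr_vcpus,
			     uint32_t vcpu_state_len)
{
	struct kvm_sev_launch_finish finish = {
		.vcpu_count       = nr_vcpus,
		.vcpu_length      = vcpu_state_len,	/* per-vcpu state size */
		.vcpu_mask_addr   = vcpu_mask_addr,
		.vcpu_mask_length = vcpu_mask_len,
	};
	int ret = sev_issue(vm_fd, KVM_SEV_LAUNCH_FINISH, &finish);

	if (!ret)
		/* The handler copies back the 32-byte launch measurement. */
		dump_measurement(finish.measurement, 32);

	return ret;
}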


[RFC PATCH v1 25/28] KVM: SVM: add KVM_SEV_GUEST_STATUS command

2016-08-22 Thread Brijesh Singh
The command is used to query the SEV guest status.

For more information see [1], section 6.10

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh <brijesh.si...@amd.com>
---
 arch/x86/kvm/svm.c |   41 +
 1 file changed, 41 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 60cc0f7..63e7d15 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5570,6 +5570,42 @@ err_1:
return ret;
 }
 
+static int sev_guest_status(struct kvm *kvm,
+   struct kvm_sev_guest_status __user *argp,
+   int *psp_ret)
+{
+   int ret;
+   struct kvm_sev_guest_status params;
+   struct psp_data_guest_status *status;
+
+   if (!kvm_sev_guest())
+   return -ENOTTY;
+
+   if (copy_from_user(&params, argp, sizeof(*argp)))
+   return -EFAULT;
+
+   status = kzalloc(sizeof(*status), GFP_KERNEL);
+   if (!status)
+   return -ENOMEM;
+
+   status->hdr.buffer_len = sizeof(*status);
+   status->handle = kvm_sev_handle();
+   ret = psp_guest_status(status, psp_ret);
+   if (ret) {
+   printk(KERN_ERR "SEV: GUEST_STATUS ret=%d (%#010x)\n",
+   ret, *psp_ret);
+   goto err_1;
+   }
+   params.policy = status->policy;
+   params.state = status->state;
+
+   if (copy_to_user(argp, &params, sizeof(*argp)))
+   ret = -EFAULT;
+err_1:
+   kfree(status);
+   return ret;
+}
+
 static int amd_sev_issue_cmd(struct kvm *kvm,
 struct kvm_sev_issue_cmd __user *user_data)
 {
@@ -5595,6 +5631,11 @@ static int amd_sev_issue_cmd(struct kvm *kvm,
&arg.ret_code);
break;
}
+   case KVM_SEV_GUEST_STATUS: {
+   r = sev_guest_status(kvm, (void *)arg.opaque,
+   &arg.ret_code);
+   break;
+   }
default:
break;
}

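And the corresponding VMM-side sketch. policy and state are the two fields
copied back by sev_guest_status() above; the struct layout is otherwise
assumed, and sev_issue() is the wrapper sketched after the cover letter.

#include <stdio.h>

static void show_sev_guest_status(int vm_fd)
{
	struct kvm_sev_guest_status status = { 0 };

	if (!sev_issue(vm_fd, KVM_SEV_GUEST_STATUS, &status))
		printf("SEV guest state %u, policy %#x\n",
		       status.state, status.policy);
}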