Re: [PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-11 Thread VMware

On 9/11/19 7:59 AM, Ingo Molnar wrote:

* Thomas Hellström (VMware)  wrote:


With SEV, and sometimes with SME, DMA API coherent memory is typically
unencrypted, meaning the linear kernel map has the encryption bit cleared.
However, the default page protection returned from vm_get_page_prot() has the
encryption bit set. So to compute the correct page protection we need to clear
the encryption bit.

Also, in order for the encryption bit setting to survive across do_mmap() and
mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.
Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
pgprot_modify() preserves not only set bits but also cleared bits that are
part of _PAGE_CHG_MASK. The use of pgprot_modify() is currently quite limited
and easy to audit.

(Note that the encryption status is logically encoded not in the pfn but in
the page protection, even though an address line in the physical address is
used to carry it.)

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.

Changes since:
RFC:
- Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
   pgprot_modify().

Could you please add a "why is this patch-set needed", not just describe
the "what does this patch set do"? I've seen zero discussion in the three
changelogs of exactly why we'd want this, which drivers and features are
affected and in what way, etc.

It's called a "fix" but doesn't explain what bad behavior it fixes.

Thanks,

Ingo


I'll update the changelog to be more clear about that.

Thanks,

Thomas




Re: [PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-10 Thread Ingo Molnar


* Thomas Hellström (VMware)  wrote:

> With SEV, and sometimes with SME, DMA API coherent memory is typically
> unencrypted, meaning the linear kernel map has the encryption bit cleared.
> However, the default page protection returned from vm_get_page_prot() has the
> encryption bit set. So to compute the correct page protection we need to
> clear the encryption bit.
> 
> Also, in order for the encryption bit setting to survive across do_mmap() and
> mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch
> it.
> Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
> pgprot_modify() preserves not only set bits but also cleared bits that are
> part of _PAGE_CHG_MASK. The use of pgprot_modify() is currently quite limited
> and easy to audit.
> 
> (Note that the encryption status is logically encoded not in the pfn but in
> the page protection, even though an address line in the physical address is
> used to carry it.)
> 
> The patchset has seen some sanity testing by exporting dma_pgprot() and
> using it in the vmwgfx mmap handler with SEV enabled.
> 
> Changes since:
> RFC:
> - Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
>   pgprot_modify().

Could you please add a "why is this patch-set needed", not just describe 
the "what does this patch set do"? I've seen zero discussion in the three 
changelogs of exactly why we'd want this, which drivers and features are 
affected and in what way, etc.

It's called a "fix" but doesn't explain what bad behavior it fixes.

Thanks,

Ingo


[PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-10 Thread VMware
With SEV, and sometimes with SME, DMA API coherent memory is typically
unencrypted, meaning the linear kernel map has the encryption bit cleared.
However, the default page protection returned from vm_get_page_prot() has the
encryption bit set. So to compute the correct page protection we need to clear
the encryption bit.

Also, in order for the encryption bit setting to survive across do_mmap() and
mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.
Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
pgprot_modify() preserves not only set bits but also cleared bits that are
part of _PAGE_CHG_MASK. The use of pgprot_modify() is currently quite limited
and easy to audit.

(Note that the encryption status is logically encoded not in the pfn but in
the page protection, even though an address line in the physical address is
used to carry it.)

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.

Changes since:
RFC:
- Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
  pgprot_modify().

Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: "H. Peter Anvin" 
Cc: Christoph Hellwig 
Cc: Christian König 
Cc: Marek Szyprowski 
Cc: Tom Lendacky 


Re: [RFC PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-10 Thread VMware

On 9/10/19 8:11 AM, Christoph Hellwig wrote:

On Thu, Sep 05, 2019 at 04:23:11AM -0700, Christoph Hellwig wrote:

This looks fine from the DMA POV.  I'll let the x86 guys comment on the
rest.

Do we want to pick this series up for 5.4?  Should I queue it up in
the dma-mapping tree?


Hi, Christoph

I think the DMA change is pretty uncontroversial.

There are still some questions about the x86 change: after digging a bit
deeper into the mm code, I think Dave is correct that we should include
sme_me_mask in _PAGE_CHG_MASK.


I'll respin that patch and then I guess we need an ack from the x86 people.

Thanks,
Thomas




Re: [RFC PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-10 Thread Christoph Hellwig
On Thu, Sep 05, 2019 at 04:23:11AM -0700, Christoph Hellwig wrote:
> This looks fine from the DMA POV.  I'll let the x86 guys comment on the
> rest.

Do we want to pick this series up for 5.4?  Should I queue it up in
the dma-mapping tree?


Re: [RFC PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-05 Thread Christoph Hellwig
This looks fine from the DMA POV.  I'll let the x86 guys comment on the
rest.


[RFC PATCH 0/2] Fix SEV user-space mapping of unencrypted coherent memory

2019-09-05 Thread VMware
With SEV, and sometimes with SME, DMA API coherent memory is typically
unencrypted, meaning the linear kernel map has the encryption bit cleared.
However, the default page protection returned from vm_get_page_prot() has the
encryption bit set. So to compute the correct page protection we need to clear
the encryption bit.

Also, in order for the encryption bit setting to survive across do_mmap() and
mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.

(Note that the encryption status is logically encoded not in the pfn but in
the page protection, even though an address line in the physical address is
used to carry it.)

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.

Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: "H. Peter Anvin" 
Cc: Christoph Hellwig 
Cc: Christian König 
Cc: Marek Szyprowski 
Cc: Tom Lendacky