Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-29 Thread Thomas Hellstrom
On Wed, 2019-05-29 at 09:50 +0200, Christian König wrote:
> Am 28.05.19 um 19:23 schrieb Lendacky, Thomas:
> > On 5/28/19 12:05 PM, Thomas Hellstrom wrote:
> > > On 5/28/19 7:00 PM, Lendacky, Thomas wrote:
> > > > On 5/28/19 11:32 AM, Koenig, Christian wrote:
> > > > > Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:
> > > > > > On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:
> > > > > > > On 5/28/19 10:17 AM, Koenig, Christian wrote:
> > > > > > > > Hi Thomas,
> > > > > > > > 
> > > > > > > > Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
> > > > > > > > > Hi, Tom,
> > > > > > > > > 
> > > > > > > > > Thanks for the reply. The question is not graphics
> > > > > > > > > specific, but
> > > > > > > > > lies
> > > > > > > > > in your answer further below:
> > > > > > > > > 
> > > > > > > > > On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
> > > > > > > > > > On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
> > > > > > > > > > [SNIP]
> > > > > > > > > > As for kernel vmaps and user-maps, those pages will
> > > > > > > > > > be marked
> > > > > > > > > > encrypted
> > > > > > > > > > (unless explicitly made un-encrypted by calling
> > > > > > > > > > set_memory_decrypted()).
> > > > > > > > > > But, if you are copying to/from those areas into
> > > > > > > > > > the un-
> > > > > > > > > > encrypted DMA
> > > > > > > > > > area then everything will be ok.
> > > > > > > > > The question is regarding the above paragraph.
> > > > > > > > > 
> > > > > > > > > AFAICT,  set_memory_decrypted() only changes the
> > > > > > > > > fixed kernel map
> > > > > > > > > PTEs.
> > > > > > > > > But when setting up other aliased PTEs to the exact
> > > > > > > > > same
> > > > > > > > > decrypted
> > > > > > > > > pages, for example using dma_mmap_coherent(),
> > > > > > > > > kmap_atomic_prot(),
> > > > > > > > > vmap() etc. What code is responsible for clearing the
> > > > > > > > > encrypted
> > > > > > > > > flag
> > > > > > > > > on those PTEs? Is there something in the x86 platform
> > > > > > > > > code doing
> > > > > > > > > that?
> > > > > > > > Tom actually explained this:
> > > > > > > > > The encryption bit is bit-47 of a physical address.
> > > > > > > > In other words set_memory_decrypted() changes the
> > > > > > > > physical address
> > > > > > > > in
> > > > > > > > the PTEs of the kernel mapping and all other use cases
> > > > > > > > just copy
> > > > > > > > that
> > > > > > > > from there.
> > > > > > > Except I don't think the PTE attributes are copied from
> > > > > > > the kernel
> > > > > > > mapping
> > > > > > +1!
> > > > > > 
> > > > > > > in some cases. For example, dma_mmap_coherent() will
> > > > > > > create the same
> > > > > > > vm_page_prot value regardless of whether the
> > > > > > > underlying memory
> > > > > > > is encrypted. But kmap_atomic_prot() will return the
> > > > > > > kernel
> > > > > > > virtual
> > > > > > > address of the page, so that would be fine.
> > > > > > Yes, on 64-bit systems. On 32-bit systems (do they exist
> > > > > > with SEV?),
> > > > > > they don't.
> > > > > I don't think so, but feel free to prove me wrong Tom.
> > > > SEV is 64-bit only.
> > > And I just noticed that kmap_atomic_prot() indeed returns the
> > > kernel map
> > > also for 32-bit lowmem.
> > > 
> > > > > > And similarly TTM user-space mappings and vmap() don't
> > > > > > copy from the
> > > > > > kernel map either, so I think we actually do need to
> > > > > > modify the page-
> > > > > > prot as done in the patch.
> > > > > Well the problem is that this won't have any effect.
> > > > > 
> > > > > As Tom explained encryption is not implemented as a page
> > > > > protection bit,
> > > > > but rather as part of the physical address of the page.
> > > > This is where things get interesting.  Even though the
> > > > encryption bit is
> > > > part of the physical address (e.g. under SME the device
> > > > could/would use an
> > > > address with the encryption bit set), it is implemented as part
> > > > of the PTE
> > > > attributes. So, for example, using _PAGE_ENC when building a
> > > > pgprot value
> > > > would produce an entry with the encryption bit set.
> > > > 
> > > > And the thing to watch out for is using two virtual addresses
> > > > that point
> > > > to the same physical address (page) in DRAM but one has the
> > > > encryption bit
> > > > set and one doesn't. The hardware does not enforce coherency
> > > > between an
> > > > encrypted and un-encrypted mapping of the same physical address
> > > > (page).
> > > > See section 7.10.6 of the AMD64 Architecture Programmer's
> > > > Manual Volume 2.
> > > Indeed. And I'm pretty sure the kernel map PTE and a TTM / vmap
> > > PTE
> > > pointing to the same decrypted page differ in the encryption bit
> > > (47)
> > > setting.
> 
> That can't be the case, because otherwise amdgpu wouldn't already work
> with SME enabled.

With SME the situation is different. The coherent memory is still
encrypted, and TTM is 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-29 Thread Christian König

Am 28.05.19 um 19:23 schrieb Lendacky, Thomas:

On 5/28/19 12:05 PM, Thomas Hellstrom wrote:

On 5/28/19 7:00 PM, Lendacky, Thomas wrote:

On 5/28/19 11:32 AM, Koenig, Christian wrote:

Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:

On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:

On 5/28/19 10:17 AM, Koenig, Christian wrote:

Hi Thomas,

Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:

Hi, Tom,

Thanks for the reply. The question is not graphics specific, but
lies
in your answer further below:

On 5/28/19 4:48 PM, Lendacky, Thomas wrote:

On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
[SNIP]
As for kernel vmaps and user-maps, those pages will be marked
encrypted
(unless explicitly made un-encrypted by calling
set_memory_decrypted()).
But, if you are copying to/from those areas into the un-
encrypted DMA
area then everything will be ok.

The question is regarding the above paragraph.

AFAICT,  set_memory_decrypted() only changes the fixed kernel map
PTEs.
But when setting up other aliased PTEs to the exact same
decrypted
pages, for example using dma_mmap_coherent(),
kmap_atomic_prot(),
vmap() etc. What code is responsible for clearing the encrypted
flag
on those PTEs? Is there something in the x86 platform code doing
that?

Tom actually explained this:

The encryption bit is bit-47 of a physical address.

In other words set_memory_decrypted() changes the physical address
in
the PTEs of the kernel mapping and all other use cases just copy
that
from there.

Except I don't think the PTE attributes are copied from the kernel
mapping

+1!


in some cases. For example, dma_mmap_coherent() will create the same
vm_page_prot value regardless of whether the underlying memory
is encrypted. But kmap_atomic_prot() will return the kernel
virtual
address of the page, so that would be fine.

Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
they don't.

I don't think so, but feel free to prove me wrong Tom.

SEV is 64-bit only.

And I just noticed that kmap_atomic_prot() indeed returns the kernel map
also for 32-bit lowmem.


And similarly TTM user-space mappings and vmap() don't copy from the
kernel map either, so I think we actually do need to modify the page-
prot as done in the patch.

Well the problem is that this won't have any effect.

As Tom explained encryption is not implemented as a page protection bit,
but rather as part of the physical address of the page.

This is where things get interesting.  Even though the encryption bit is
part of the physical address (e.g. under SME the device could/would use an
address with the encryption bit set), it is implemented as part of the PTE
attributes. So, for example, using _PAGE_ENC when building a pgprot value
would produce an entry with the encryption bit set.

And the thing to watch out for is using two virtual addresses that point
to the same physical address (page) in DRAM but one has the encryption bit
set and one doesn't. The hardware does not enforce coherency between an
encrypted and un-encrypted mapping of the same physical address (page).
See section 7.10.6 of the AMD64 Architecture Programmer's Manual Volume 2.

Indeed. And I'm pretty sure the kernel map PTE and a TTM / vmap PTE
pointing to the same decrypted page differ in the encryption bit (47)
setting.


That can't be the case, because otherwise amdgpu wouldn't already work
with SME enabled.


I think your patch might go in the right direction, but before we
commit anything like that we need to understand first how it actually 
works currently.


Maybe I really need to setup a test system here.

Christian.



But on the hypervisor that would sort of work, because from what I
understand with SEV we select between the guest key and the hypervisor
key with that bit. On the hypervisor both keys are the same? On a guest
it would probably break.

For SEV, if the encryption bit is set then the guest key is used. If the
encryption bit is not set, then the hypervisor key is used IFF the
encryption bit is set in the hypervisor page tables.  You can have SEV
guests regardless of whether SME is active (note, there is a difference
between SME being enabled vs. SME being active).

For SME, there is only one key. The encryption bit determines whether the
data is run through the encryption process or not.

Thanks,
Tom



Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Lendacky, Thomas
On 5/28/19 12:05 PM, Thomas Hellstrom wrote:
> On 5/28/19 7:00 PM, Lendacky, Thomas wrote:
>> On 5/28/19 11:32 AM, Koenig, Christian wrote:
>>> Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:
 On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:
> On 5/28/19 10:17 AM, Koenig, Christian wrote:
>> Hi Thomas,
>>
>> Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
>>> Hi, Tom,
>>>
>>> Thanks for the reply. The question is not graphics specific, but
>>> lies
>>> in your answer further below:
>>>
>>> On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
 On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
 [SNIP]
 As for kernel vmaps and user-maps, those pages will be marked
 encrypted
 (unless explicitly made un-encrypted by calling
 set_memory_decrypted()).
 But, if you are copying to/from those areas into the un-
 encrypted DMA
 area then everything will be ok.
>>> The question is regarding the above paragraph.
>>>
>>> AFAICT,  set_memory_decrypted() only changes the fixed kernel map
>>> PTEs.
>>> But when setting up other aliased PTEs to the exact same
>>> decrypted
>>> pages, for example using dma_mmap_coherent(),
>>> kmap_atomic_prot(),
>>> vmap() etc. What code is responsible for clearing the encrypted
>>> flag
>>> on those PTEs? Is there something in the x86 platform code doing
>>> that?
>> Tom actually explained this:
>>> The encryption bit is bit-47 of a physical address.
>> In other words set_memory_decrypted() changes the physical address
>> in
>> the PTEs of the kernel mapping and all other use cases just copy
>> that
>> from there.
> Except I don't think the PTE attributes are copied from the kernel
> mapping
 +1!

> in some cases. For example, dma_mmap_coherent() will create the same
> vm_page_prot value regardless of whether the underlying memory
> is encrypted. But kmap_atomic_prot() will return the kernel
> virtual
> address of the page, so that would be fine.
 Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
 they don't.
>>> I don't think so, but feel free to prove me wrong Tom.
>> SEV is 64-bit only.
> 
> And I just noticed that kmap_atomic_prot() indeed returns the kernel map
> also for 32-bit lowmem.
> 
>>
 And similarly TTM user-space mappings and vmap() don't copy from the
 kernel map either, so I think we actually do need to modify the page-
 prot as done in the patch.
>>> Well the problem is that this won't have any effect.
>>>
>>> As Tom explained encryption is not implemented as a page protection bit,
>>> but rather as part of the physical address of the page.
>> This is where things get interesting.  Even though the encryption bit is
>> part of the physical address (e.g. under SME the device could/would use an
>> address with the encryption bit set), it is implemented as part of the PTE
>> attributes. So, for example, using _PAGE_ENC when building a pgprot value
>> would produce an entry with the encryption bit set.
>>
>> And the thing to watch out for is using two virtual addresses that point
>> to the same physical address (page) in DRAM but one has the encryption bit
>> set and one doesn't. The hardware does not enforce coherency between an
>> encrypted and un-encrypted mapping of the same physical address (page).
>> See section 7.10.6 of the AMD64 Architecture Programmer's Manual Volume 2.
> 
> Indeed. And I'm pretty sure the kernel map PTE and a TTM / vmap PTE
> pointing to the same decrypted page differ in the encryption bit (47)
> setting.
> 
> But on the hypervisor that would sort of work, because from what I
> understand with SEV we select between the guest key and the hypervisor
> key with that bit. On the hypervisor both keys are the same? On a guest
> it would probably break.

For SEV, if the encryption bit is set then the guest key is used. If the
encryption bit is not set, then the hypervisor key is used IFF the
encryption bit is set in the hypervisor page tables.  You can have SEV
guests regardless of whether SME is active (note, there is a difference
between SME being enabled vs. SME being active).

For SME, there is only one key. The encryption bit determines whether the
data is run through the encryption process or not.
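
A minimal sketch of how a driver of this era could query those states,
assuming the pre-5.15 x86 helpers from <linux/mem_encrypt.h> (the
function below is hypothetical):

#include <linux/kernel.h>
#include <linux/mem_encrypt.h>

/* Hypothetical helper: report which memory-encryption mode is active. */
static void report_mem_encrypt_state(void)
{
	if (!mem_encrypt_active())
		return;			/* neither SME nor SEV is active */

	if (sev_active())
		pr_info("SEV guest: DMA must target un-encrypted pages\n");
	else if (sme_active())
		pr_info("SME host: the C-bit selects the single system key\n");
}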

Thanks,
Tom

> 
> /Thomas
> 
>>
>> Thanks,
>> Tom
>>
>>> I have no idea how that is actually handled though,
>>> Christian.
>>>
 /Thomas

> This is an area that needs looking into to be sure it is working
> properly
> with SME and SEV.
>
> Thanks,
> Tom
>
>> That's rather nifty, because this way everybody will either use or
>> not
>> use encryption on the page.
>>
>> Christian.
>>
>>> Thanks,
>>> Thomas
>>>
>>>
 Things get fuzzy for me when it comes to the GPU access of the
 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Thomas Hellstrom

On 5/28/19 7:00 PM, Lendacky, Thomas wrote:

On 5/28/19 11:32 AM, Koenig, Christian wrote:

Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:

On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:

On 5/28/19 10:17 AM, Koenig, Christian wrote:

Hi Thomas,

Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:

Hi, Tom,

Thanks for the reply. The question is not graphics specific, but
lies
in your answer further below:

On 5/28/19 4:48 PM, Lendacky, Thomas wrote:

On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
[SNIP]
As for kernel vmaps and user-maps, those pages will be marked
encrypted
(unless explicitly made un-encrypted by calling
set_memory_decrypted()).
But, if you are copying to/from those areas into the un-
encrypted DMA
area then everything will be ok.

The question is regarding the above paragraph.

AFAICT,  set_memory_decrypted() only changes the fixed kernel map
PTEs.
But when setting up other aliased PTEs to the exact same
decrypted
pages, for example using dma_mmap_coherent(),
kmap_atomic_prot(),
vmap() etc. What code is responsible for clearing the encrypted
flag
on those PTEs? Is there something in the x86 platform code doing
that?

Tom actually explained this:

The encryption bit is bit-47 of a physical address.

In other words set_memory_decrypted() changes the physical address
in
the PTEs of the kernel mapping and all other use cases just copy
that
from there.

Except I don't think the PTE attributes are copied from the kernel
mapping

+1!


in some cases. For example, dma_mmap_coherent() will create the same
vm_page_prot value regardless of whether the underlying memory
is encrypted. But kmap_atomic_prot() will return the kernel
virtual
address of the page, so that would be fine.

Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
they don't.

I don't think so, but feel free to prove me wrong Tom.

SEV is 64-bit only.


And I just noticed that kmap_atomic_prot() indeed returns the kernel map 
also for 32-bit lowmem.





And similarly TTM user-space mappings and vmap() don't copy from the
kernel map either, so I think we actually do need to modify the page-
prot as done in the patch.

Well the problem is that this won't have any effect.

As Tom explained encryption is not implemented as a page protection bit,
but rather as part of the physical address of the page.

This is where things get interesting.  Even though the encryption bit is
part of the physical address (e.g. under SME the device could/would use an
address with the encryption bit set), it is implemented as part of the PTE
attributes. So, for example, using _PAGE_ENC when building a pgprot value
would produce an entry with the encryption bit set.

And the thing to watch out for is using two virtual addresses that point
to the same physical address (page) in DRAM but one has the encryption bit
set and one doesn't. The hardware does not enforce coherency between an
encrypted and un-encrypted mapping of the same physical address (page).
See section 7.10.6 of the AMD64 Architecture Programmer's Manual Volume 2.


Indeed. And I'm pretty sure the kernel map PTE and a TTM / vmap PTE 
pointing to the same decrypted page differ in the encryption bit (47) 
setting.


But on the hypervisor that would sort of work, because from what I 
understand with SEV we select between the guest key and the hypervisor 
key with that bit. On the hypervisor both keys are the same? On a guest 
it would probably break.


/Thomas



Thanks,
Tom


I have no idea how that is actually handled though,
Christian.


/Thomas


This is an area that needs looking into to be sure it is working
properly
with SME and SEV.

Thanks,
Tom


That's rather nifty, because this way everybody will either use or
not
use encryption on the page.

Christian.


Thanks,
Thomas



Things get fuzzy for me when it comes to the GPU access of the
memory
and what and how it is accessed.

Thanks,
Tom




Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Lendacky, Thomas
On 5/28/19 11:32 AM, Koenig, Christian wrote:
> Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:
>> On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:
>>> On 5/28/19 10:17 AM, Koenig, Christian wrote:
 Hi Thomas,

 Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
> Hi, Tom,
>
> Thanks for the reply. The question is not graphics specific, but
> lies
> in your answer further below:
>
> On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
>> On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
>> [SNIP]
>> As for kernel vmaps and user-maps, those pages will be marked
>> encrypted
>> (unless explicitly made un-encrypted by calling
>> set_memory_decrypted()).
>> But, if you are copying to/from those areas into the un-
>> encrypted DMA
>> area then everything will be ok.
> The question is regarding the above paragraph.
>
> AFAICT,  set_memory_decrypted() only changes the fixed kernel map
> PTEs.
> But when setting up other aliased PTEs to the exact same
> decrypted
> pages, for example using dma_mmap_coherent(),
> kmap_atomic_prot(),
> vmap() etc. What code is responsible for clearing the encrypted
> flag
> on those PTEs? Is there something in the x86 platform code doing
> that?
 Tom actually explained this:
> The encryption bit is bit-47 of a physical address.
 In other words set_memory_decrypted() changes the physical address
 in
 the PTEs of the kernel mapping and all other use cases just copy
 that
 from there.
>>> Except I don't think the PTE attributes are copied from the kernel
>>> mapping
>> +1!
>>
>>> in some cases. For example, dma_mmap_coherent() will create the same
>>> vm_page_prot value regardless of whether the underlying memory
>>> is encrypted. But kmap_atomic_prot() will return the kernel
>>> virtual
>>> address of the page, so that would be fine.
>> Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
>> they don't.
> 
> I don't think so, but feel free to prove me wrong Tom.

SEV is 64-bit only.

> 
>> And similarly TTM user-space mappings and vmap() don't copy from the
>> kernel map either, so I think we actually do need to modify the page-
>> prot as done in the patch.
> 
> Well the problem is that this won't have any effect.
> 
> As Tom explained encryption is not implemented as a page protection bit, 
> but rather as part of the physical address of the page.

This is where things get interesting.  Even though the encryption bit is
part of the physical address (e.g. under SME the device could/would use an
address with the encryption bit set), it is implemented as part of the PTE
attributes. So, for example, using _PAGE_ENC when building a pgprot value
would produce an entry with the encryption bit set.

And the thing to watch out for is using two virtual addresses that point
to the same physical address (page) in DRAM but one has the encryption bit
set and one doesn't. The hardware does not enforce coherency between an
encrypted and un-encrypted mapping of the same physical address (page).
See section 7.10.6 of the AMD64 Architecture Programmer's Manual Volume 2.
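
A minimal sketch of that aliasing hazard, assuming an SME/SEV-capable
x86 kernel where _PAGE_ENC carries the C-bit in the PTE attributes (the
function is illustrative only):

#include <linux/mm.h>

static void c_bit_alias_example(void)
{
	/* PAGE_KERNEL has _PAGE_ENC folded in when SME/SEV is active. */
	pgprot_t enc = PAGE_KERNEL;
	pgprot_t dec = __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_ENC);

	/*
	 * Mapping the same pfn once with 'enc' and once with 'dec' creates
	 * exactly the incoherent encrypted/un-encrypted pair that section
	 * 7.10.6 warns about.
	 */
	(void)enc;
	(void)dec;
}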

Thanks,
Tom

> 
> I have no idea how that is actually handled though,
> Christian.
> 
>>
>> /Thomas
>>
>>> This is an area that needs looking into to be sure it is working
>>> properly
>>> with SME and SEV.
>>>
>>> Thanks,
>>> Tom
>>>
 That's rather nifty, because this way everybody will either use or
 not
 use encryption on the page.

 Christian.

> Thanks,
> Thomas
>
>
>> Things get fuzzy for me when it comes to the GPU access of the
>> memory
>> and what and how it is accessed.
>>
>> Thanks,
>> Tom
> 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Koenig, Christian
Am 28.05.19 um 18:27 schrieb Thomas Hellstrom:
> On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:
>> On 5/28/19 10:17 AM, Koenig, Christian wrote:
>>> Hi Thomas,
>>>
>>> Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
 Hi, Tom,

 Thanks for the reply. The question is not graphics specific, but
 lies
 in your answer further below:

 On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
> On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
> [SNIP]
> As for kernel vmaps and user-maps, those pages will be marked
> encrypted
> (unless explicitly made un-encrypted by calling
> set_memory_decrypted()).
> But, if you are copying to/from those areas into the un-
> encrypted DMA
> area then everything will be ok.
 The question is regarding the above paragraph.

 AFAICT,  set_memory_decrypted() only changes the fixed kernel map
 PTEs.
 But when setting up other aliased PTEs to the exact same
 decrypted
 pages, for example using dma_mmap_coherent(),
 kmap_atomic_prot(),
 vmap() etc. What code is responsible for clearing the encrypted
 flag
 on those PTEs? Is there something in the x86 platform code doing
 that?
>>> Tom actually explained this:
 The encryption bit is bit-47 of a physical address.
>>> In other words set_memory_decrypted() changes the physical address
>>> in
>>> the PTEs of the kernel mapping and all other use cases just copy
>>> that
>>> from there.
>> Except I don't think the PTE attributes are copied from the kernel
>> mapping
> +1!
>
>> in some cases. For example, dma_mmap_coherent() will create the same
>> vm_page_prot value regardless of whether the underlying memory
>> is encrypted. But kmap_atomic_prot() will return the kernel
>> virtual
>> address of the page, so that would be fine.
> Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
> they don't.

I don't think so, but feel free to prove me wrong Tom.

> And similarly TTM user-space mappings and vmap() don't copy from the
> kernel map either, so I think we actually do need to modify the page-
> prot as done in the patch.

Well the problem is that this won't have any effect.

As Tom explained encryption is not implemented as a page protection bit, 
but rather as part of the physical address of the page.

I have no idea how that is actually handled though,
Christian.

>
> /Thomas
>
>> This is an area that needs looking into to be sure it is working
>> properly
>> with SME and SEV.
>>
>> Thanks,
>> Tom
>>
>>> That's rather nifty, because this way everybody will either use or
>>> not
>>> use encryption on the page.
>>>
>>> Christian.
>>>
 Thanks,
 Thomas


> Things get fuzzy for me when it comes to the GPU access of the
> memory
> and what and how it is accessed.
>
> Thanks,
> Tom


Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Thomas Hellstrom
On Tue, 2019-05-28 at 15:50 +, Lendacky, Thomas wrote:
> On 5/28/19 10:17 AM, Koenig, Christian wrote:
> > Hi Thomas,
> > 
> > Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
> > > Hi, Tom,
> > > 
> > > Thanks for the reply. The question is not graphics specific, but
> > > lies 
> > > in your answer further below:
> > > 
> > > On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
> > > > On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
> > > > [SNIP]
> > > > As for kernel vmaps and user-maps, those pages will be marked
> > > > encrypted
> > > > (unless explicitly made un-encrypted by calling
> > > > set_memory_decrypted()).
> > > > But, if you are copying to/from those areas into the un-
> > > > encrypted DMA
> > > > area then everything will be ok.
> > > 
> > > The question is regarding the above paragraph.
> > > 
> > > AFAICT,  set_memory_decrypted() only changes the fixed kernel map
> > > PTEs.
> > > But when setting up other aliased PTEs to the exact same
> > > decrypted 
> > > pages, for example using dma_mmap_coherent(),
> > > kmap_atomic_prot(), 
> > > vmap() etc. What code is responsible for clearing the encrypted
> > > flag 
> > > on those PTEs? Is there something in the x86 platform code doing
> > > that?
> > 
> > Tom actually explained this:
> > > The encryption bit is bit-47 of a physical address.
> > 
> > In other words set_memory_decrypted() changes the physical address
> > in 
> > the PTEs of the kernel mapping and all other use cases just copy
> > that 
> > from there.
> 
> Except I don't think the PTE attributes are copied from the kernel
> mapping

+1!

> in some cases. For example, dma_mmap_coherent() will create the same
> vm_page_prot value regardless of whether the underlying memory
> is encrypted. But kmap_atomic_prot() will return the kernel
> virtual
> address of the page, so that would be fine.

Yes, on 64-bit systems. On 32-bit systems (do they exist with SEV?),
they don't. 

And similarly TTM user-space mappings and vmap() don't copy from the
kernel map either, so I think we actually do need to modify the page-
prot as done in the patch.

/Thomas

> 
> This is an area that needs looking into to be sure it is working
> properly
> with SME and SEV.
> 
> Thanks,
> Tom
> 
> > That's rather nifty, because this way everybody will either use or
> > not 
> > use encryption on the page.
> > 
> > Christian.
> > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > > Things get fuzzy for me when it comes to the GPU access of the
> > > > memory
> > > > and what and how it is accessed.
> > > > 
> > > > Thanks,
> > > > Tom

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Lendacky, Thomas
On 5/28/19 10:17 AM, Koenig, Christian wrote:
> Hi Thomas,
> 
> Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
>> Hi, Tom,
>>
>> Thanks for the reply. The question is not graphics specific, but lies 
>> in your answer further below:
>>
>> On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
>>> On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
>>> [SNIP]
>>> As for kernel vmaps and user-maps, those pages will be marked encrypted
>>> (unless explicitly made un-encrypted by calling set_memory_decrypted()).
>>> But, if you are copying to/from those areas into the un-encrypted DMA
>>> area then everything will be ok.
>>
>> The question is regarding the above paragraph.
>>
>> AFAICT,  set_memory_decrypted() only changes the fixed kernel map PTEs.
>> But when setting up other aliased PTEs to the exact same decrypted 
>> pages, for example using dma_mmap_coherent(), kmap_atomic_prot(), 
>> vmap() etc. What code is responsible for clearing the encrypted flag 
>> on those PTEs? Is there something in the x86 platform code doing that?
> 
> Tom actually explained this:
>> The encryption bit is bit-47 of a physical address.
> 
> In other words set_memory_decrypted() changes the physical address in 
> the PTEs of the kernel mapping and all other use cases just copy that 
> from there.

Except I don't think the PTE attributes are copied from the kernel mapping
in some cases. For example, dma_mmap_coherent() will create the same
vm_page_prot value regardless of whether the underlying memory is
encrypted. But kmap_atomic_prot() will return the kernel virtual
address of the page, so that would be fine.
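
A hedged sketch of the fix this observation suggests, assuming the
buffer came from dma_alloc_coherent() and using the x86 helper
pgprot_decrypted() to clear the C-bit (the function name is made up;
whether dma_mmap_coherent() should do this itself is exactly the open
question here):

#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int example_mmap_coherent(struct device *dev,
				 struct vm_area_struct *vma, void *cpu_addr,
				 dma_addr_t dma_addr, size_t size)
{
	/* Give the user-space alias an un-encrypted protection, too. */
	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);

	return dma_mmap_coherent(dev, vma, cpu_addr, dma_addr, size);
}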

This is an area that needs looking into to be sure it is working properly
with SME and SEV.

Thanks,
Tom

> 
> That's rather nifty, because this way everybody will either use or not 
> use encryption on the page.
> 
> Christian.
> 
>>
>> Thanks,
>> Thomas
>>
>>
>>>
>>> Things get fuzzy for me when it comes to the GPU access of the memory
>>> and what and how it is accessed.
>>>
>>> Thanks,
>>> Tom
> 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Koenig, Christian
Hi Thomas,

Am 28.05.19 um 17:11 schrieb Thomas Hellstrom:
> Hi, Tom,
>
> Thanks for the reply. The question is not graphics specific, but lies 
> in your answer further below:
>
> On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
>> On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
>> [SNIP]
>> As for kernel vmaps and user-maps, those pages will be marked encrypted
>> (unless explicitly made un-encrypted by calling set_memory_decrypted()).
>> But, if you are copying to/from those areas into the un-encrypted DMA
>> area then everything will be ok.
>
> The question is regarding the above paragraph.
>
> AFAICT,  set_memory_decrypted() only changes the fixed kernel map PTEs.
> But when setting up other aliased PTEs to the exact same decrypted 
> pages, for example using dma_mmap_coherent(), kmap_atomic_prot(), 
> vmap() etc. What code is responsible for clearing the encrypted flag 
> on those PTEs? Is there something in the x86 platform code doing that?

Tom actually explained this:
> The encryption bit is bit-47 of a physical address.

In other words set_memory_decrypted() changes the physical address in 
the PTEs of the kernel mapping and all other use cases just copy that 
from there.

That's rather nifty, because this way everybody will either use or not 
use encryption on the page.

Christian.

>
> Thanks,
> Thomas
>
>
>>
>> Things get fuzzy for me when it comes to the GPU access of the memory
>> and what and how it is accessed.
>>
>> Thanks,
>> Tom


Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Thomas Hellstrom

Hi, Tom,

Thanks for the reply. The question is not graphics specific, but lies in 
your answer further below:


On 5/28/19 4:48 PM, Lendacky, Thomas wrote:

On 5/28/19 2:31 AM, Thomas Hellstrom wrote:

Hi, Tom,

Could you shed some light on this?

I don't have a lot of GPU knowledge, so let me start with an overview of
how everything should work and see if that answers the questions being
asked.

First, SME:
The encryption bit is bit-47 of a physical address. So, if a device does
not support at least 48-bit DMA, it will have to use the SWIOTLB and
bounce buffer the data. This is handled automatically if the driver is
using the Linux DMA-api as all of SWIOTLB has been marked un-encrypted.
Data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.

For SEV:
The encryption bit position is the same as SME. However, with SEV all
DMA must use an un-encrypted area so all DMA goes through SWIOTLB. Just
like SME, this is handled automatically if the driver is using the Linux
DMA-api as all of SWIOTLB has been marked un-encrypted. And just like SME,
data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.

There is an optimization for dma_alloc_coherent() where the pages are
allocated and marked un-encrypted, thus avoiding the bouncing (see file
kernel/dma/direct.c, dma_direct_alloc_pages()).

As for kernel vmaps and user-maps, those pages will be marked encrypted
(unless explicitly made un-encrypted by calling set_memory_decrypted()).
But, if you are copying to/from those areas into the un-encrypted DMA
area then everything will be ok.


The question is regarding the above paragraph.

AFAICT,  set_memory_decrypted() only changes the fixed kernel map PTEs.
But when setting up other aliased PTEs to the exact same decrypted 
pages, for example using dma_mmap_coherent(), kmap_atomic_prot(), vmap() 
etc. What code is responsible for clearing the encrypted flag on those 
PTEs? Is there something in the x86 platform code doing that?
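
A sketch of the question in code, assuming physically contiguous pages
for brevity (the function is illustrative, not an existing kernel API):

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

static void *decrypted_vmap_alias(struct page **pages, unsigned int count)
{
	/* set_memory_decrypted() rewrites only the direct-map PTEs... */
	if (set_memory_decrypted((unsigned long)page_address(pages[0]), count))
		return NULL;

	/* ...so a vmap() alias must drop _PAGE_ENC (the C-bit) itself. */
	return vmap(pages, count, VM_MAP,
		    __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_ENC));
}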


Thanks,
Thomas




Things get fuzzy for me when it comes to the GPU access of the memory
and what and how it is accessed.

Thanks,
Tom


Thanks,
Thomas


On 5/24/19 5:08 PM, Alex Deucher wrote:

+ Tom

He's been looking into SEV as well.

On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom 
wrote:

On 5/24/19 2:03 PM, Koenig, Christian wrote:

Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:


On 5/24/19 12:18 PM, Koenig, Christian wrote:

Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:


On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):


From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we
might
want to be able to switch normal (encrypted) memory to decrypted in
exactly
the same way as we handle caching states, and that would require
additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled.
Set up
the page protection accordingly. Drivers must detect SEV enabled
and
switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might
want to
cache decrypted pages in the dma page pool regardless of their
caching
state.

This patch is unnecessary, SEV support already works fine with at
least
amdgpu and I would expect that it also works with other drivers as
well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

  drm: fallback to dma_alloc_coherent when memory
encryption is
active

  We can't just map any random page we get when memory
encryption is
  active.

  Signed-off-by: Christian König 
  Acked-by: Alex Deucher 
  Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.

Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent() but for
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.

This sounds a bit odd. If that were the case, the natural place would be
the PAT tracking code, but it only handles caching flags AFAICT. Not
encryption flags.

But when you tested AMD with SEV, was that running as hypervisor rather
than a guest, or did you run an SEV guest with PCI passthrough to the
AMD device?

Yeah, well the problem is we never tested this ourself :)


/Thomas


And, as a follow up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing
to / from an encrypted page with SME, 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Christian König

Am 28.05.19 um 16:48 schrieb Lendacky, Thomas:

On 5/28/19 2:31 AM, Thomas Hellstrom wrote:

Hi, Tom,

Could you shed some light on this?

I don't have a lot of GPU knowledge, so let me start with an overview of
how everything should work and see if that answers the questions being
asked.

First, SME:
The encryption bit is bit-47 of a physical address. So, if a device does
not support at least 48-bit DMA, it will have to use the SWIOTLB and
bounce buffer the data. This is handled automatically if the driver is
using the Linux DMA-api as all of SWIOTLB has been marked un-encrypted.
Data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.


Ok, that explains why we don't need to manually handle the encryption 
bit in TTM.



For SEV:
The encryption bit position is the same as SME. However, with SEV all
DMA must use an un-encrypted area so all DMA goes through SWIOTLB. Just
like SME, this is handled automatically if the driver is using the Linux
DMA-api as all of SWIOTLB has been marked un-encrypted. And just like SME,
data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.

There is an optimization for dma_alloc_coherent() where the pages are
allocated and marked un-encrypted, thus avoiding the bouncing (see file
kernel/dma/direct.c, dma_direct_alloc_pages()).


And that explains why we have to use dma_alloc_coherent()


As for kernel vmaps and user-maps, those pages will be marked encrypted
(unless explicitly made un-encrypted by calling set_memory_decrypted()).
But, if you are copying to/from those areas into the un-encrypted DMA
area then everything will be ok.

Things get fuzzy for me when it comes to the GPU access of the memory
and what and how it is accessed.


I can fill in those loose ends, but it's probably going to be a long mail
and a bit off topic.


Thanks for filling in how that stuff actually works. And yeah, as far as 
I can see we actually don't need to do anything.


The only way to get un-encrypted pages which don't bounce through 
SWIOTLB is using dma_alloc_coherent().


It's probably a bit unfortunate that TTM can't control it, but I think 
Thomas has to live with that. The use case for sharing encrypted pages 
is probably not so common anyway.


Thanks,
Christian.



Thanks,
Tom


Thanks,
Thomas


On 5/24/19 5:08 PM, Alex Deucher wrote:

+ Tom

He's been looking into SEV as well.

On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom 
wrote:

On 5/24/19 2:03 PM, Koenig, Christian wrote:

Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:

On 5/24/19 12:18 PM, Koenig, Christian wrote:

Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:

On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):

From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we
might
want to be able to switch normal (encrypted) memory to decrypted in
exactly
the same way as we handle caching states, and that would require
additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled.
Set up
the page protection accordingly. Drivers must detect SEV enabled
and
switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might
want to
cache decrypted pages in the dma page pool regardless of their
caching
state.

This patch is unnecessary, SEV support already works fine with at
least
amdgpu and I would expect that it also works with other drivers as
well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

  drm: fallback to dma_alloc_coherent when memory
encryption is
active

  We can't just map any random page we get when memory
encryption is
  active.

  Signed-off-by: Christian König 
  Acked-by: Alex Deucher 
  Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.

Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent() but for
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.

This sounds a bit odd. If that were the case, the natural place would be
the PAT tracking code, but it only handles caching flags AFAICT. Not
encryption flags.

But when you tested AMD with SEV, was that running as hypervisor rather
than a guest, or did you run an SEV guest with PCI passthrough to the
AMD device?

Yeah, well the problem is we never tested this ourself :)


/Thomas


And, as a follow up 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Lendacky, Thomas
On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
> Hi, Tom,
> 
> Could you shed some light on this?

I don't have a lot of GPU knowledge, so let me start with an overview of
how everything should work and see if that answers the questions being
asked.

First, SME:
The encryption bit is bit-47 of a physical address. So, if a device does
not support at least 48-bit DMA, it will have to use the SWIOTLB and
bounce buffer the data. This is handled automatically if the driver is
using the Linux DMA-api as all of SWIOTLB has been marked un-encrypted.
Data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.
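
A minimal sketch of that constraint, assuming a standard DMA-API driver
(the function name is made up):

#include <linux/dma-mapping.h>

/* Advertise 48-bit DMA so C-bit (bit-47) addresses reach the device. */
static int example_set_dma_mask(struct device *dev)
{
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
}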

For SEV:
The encryption bit position is the same as SME. However, with SEV all
DMA must use an un-encrypted area so all DMA goes through SWIOTLB. Just
like SME, this is handled automatically if the driver is using the Linux
DMA-api as all of SWIOTLB has been marked un-encrypted. And just like SME,
data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.
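
A sketch of the transparent bouncing, assuming ordinary streaming DMA
(the function and its error handling are simplified):

#include <linux/dma-mapping.h>

static dma_addr_t example_map_single(struct device *dev, void *buf,
				     size_t len)
{
	/* Under SEV the DMA API bounces 'buf' through SWIOTLB for us. */
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	return dma_mapping_error(dev, dma) ? 0 : dma;
}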

There is an optimization for dma_alloc_coherent() where the pages are
allocated and marked un-encrypted, thus avoiding the bouncing (see file
kernel/dma/direct.c, dma_direct_alloc_pages()).
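
A sketch of that optimized path (illustrative wrapper only):

#include <linux/dma-mapping.h>

static void *example_alloc_coherent(struct device *dev, size_t size,
				    dma_addr_t *handle)
{
	/* Under SEV this returns memory already marked un-encrypted. */
	return dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
}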

As for kernel vmaps and user-maps, those pages will be marked encrypted
(unless explicitly made un-encrypted by calling set_memory_decrypted()).
But, if you are copying to/from those areas into the un-encrypted DMA
area then everything will be ok.

Things get fuzzy for me when it comes to the GPU access of the memory
and what and how it is accessed.

Thanks,
Tom

> 
> Thanks,
> Thomas
> 
> 
> On 5/24/19 5:08 PM, Alex Deucher wrote:
>> + Tom
>>
>> He's been looking into SEV as well.
>>
>> On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom 
>> wrote:
>>> On 5/24/19 2:03 PM, Koenig, Christian wrote:
 Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:
> On 5/24/19 12:18 PM, Koenig, Christian wrote:
>> Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:
>>> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
 Hi, Christian,

 On 5/24/19 10:37 AM, Koenig, Christian wrote:
> Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):
>> From: Thomas Hellstrom 
>>
>> With SEV encryption, all DMA memory must be marked decrypted
>> (AKA "shared") for devices to be able to read it. In the future we
>> might
>> want to be able to switch normal (encrypted) memory to decrypted in
>> exactly
>> the same way as we handle caching states, and that would require
>> additional
>> memory pools. But for now, rely on memory allocated with
>> dma_alloc_coherent() which is already decrypted with SEV enabled.
>> Set up
>> the page protection accordingly. Drivers must detect SEV enabled
>> and
>> switch
>> to the dma page pool.
>>
>> This patch has not yet been tested. As a follow-up, we might
>> want to
>> cache decrypted pages in the dma page pool regardless of their
>> caching
>> state.
> This patch is unnecessary, SEV support already works fine with at
> least
> amdgpu and I would expect that it also works with other drivers as
> well.
>
> Also see this patch:
>
> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
> Author: Christian König 
> Date:   Wed Mar 13 10:11:19 2019 +0100
>
>  drm: fallback to dma_alloc_coherent when memory
> encryption is
> active
>
> We can't just map any random page we get when memory
> encryption is
>  active.
>
>  Signed-off-by: Christian König 
>  Acked-by: Alex Deucher 
>  Link: https://patchwork.kernel.org/patch/10850833/
>
> Regards,
> Christian.
 Yes, I noticed that. Although I fail to see where we automagically
 clear the PTE encrypted bit when mapping coherent memory? For the
 linear kernel map, that's done within dma_alloc_coherent() but for
 kernel vmaps and user-space maps? Is that done automatically by
 the x86 platform layer?
>> Yes, I think so. Haven't looked too closely at this either.
> This sounds a bit odd. If that were the case, the natural place would be
> the PAT tracking code, but it only handles caching flags AFAICT. Not
> encryption flags.
>
> But when you tested AMD with SEV, was that running as hypervisor rather
> than a guest, or did you run an SEV guest with PCI passthrough to the
> AMD device?
 Yeah, well the problem is we never tested this ourself :)

 /Thomas

>>> And, as a follow up 

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-28 Thread Thomas Hellstrom

Hi, Tom,

Could you shed some light on this?

Thanks,
Thomas


On 5/24/19 5:08 PM, Alex Deucher wrote:

+ Tom

He's been looking into SEV as well.

On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom  wrote:

On 5/24/19 2:03 PM, Koenig, Christian wrote:

Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:

On 5/24/19 12:18 PM, Koenig, Christian wrote:

Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:

On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):

From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we
might
want to be able to switch normal (encrypted) memory to decrypted in
exactly
the same way as we handle caching states, and that would require
additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled.
Set up
the page protection accordingly. Drivers must detect SEV enabled and
switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their
caching
state.

This patch is unnecessary, SEV support already works fine with at
least
amdgpu and I would expect that it also works with other drivers as
well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

 drm: fallback to dma_alloc_coherent when memory encryption is
active

 We can't just map any random page we get when memory
encryption is
 active.

 Signed-off-by: Christian König 
 Acked-by: Alex Deucher 
 Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.

Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent() but for
kernel vmaps and and user-space maps? Is that done automatically by
the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.

This sounds a bit odd. If that were the case, the natural place would be
the PAT tracking code, but it only handles caching flags AFAICT. Not
encryption flags.

But when you tested AMD with SEV, was that running as hypervisor rather
than a guest, or did you run an SEV guest with PCI passthrough to the
AMD device?

Yeah, well the problem is we never tested this ourself :)


/Thomas


And, as a follow up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing
to / from an encrypted page with SME, but not with SEV?

I think the issue was that the DMA API would try to use a bounce buffer
in this case.

SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
possible to avoid dma_alloc_coherent() in the SME case.

In this case I don't have an explanation for this.

For the background what happened is that we got reports that SEV/SME
doesn't work with amdgpu. So we told the people to try using the
dma_alloc_coherent() path and that worked fine. Because of this we came
up with the patch I noted earlier.

I can confirm that it indeed works now for a couple of users, but we
still don't have a test system for this in our team.

Christian.

OK, understood,

But unless there is some strange magic going on (which there might be
of course), I do think the patch I sent is correct, and the reason that
SEV works is that the AMD card is used by the hypervisor and not the
guest, and TTM is actually incorrectly creating conflicting maps and
treating the coherent memory as encrypted. But since the memory is only
accessed through encrypted PTEs, the hardware does the right thing,
using the hypervisor key for decryption.

But that's only a guess, and this is not super-urgent. I will be able to
follow up if / when we bring vmwgfx up for SEV.

/Thomas


/Thomas



Christian.


Thanks, Thomas





Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Alex Deucher
+ Tom

He's been looking into SEV as well.

On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom  wrote:
>
> On 5/24/19 2:03 PM, Koenig, Christian wrote:
> > Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:
> >> On 5/24/19 12:18 PM, Koenig, Christian wrote:
> >>> Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:
>  On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
> > Hi, Christian,
> >
> > On 5/24/19 10:37 AM, Koenig, Christian wrote:
> >> Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):
> >>> From: Thomas Hellstrom 
> >>>
> >>> With SEV encryption, all DMA memory must be marked decrypted
> >>> (AKA "shared") for devices to be able to read it. In the future we
> >>> might
> >>> want to be able to switch normal (encrypted) memory to decrypted in
> >>> exactly
> >>> the same way as we handle caching states, and that would require
> >>> additional
> >>> memory pools. But for now, rely on memory allocated with
> >>> dma_alloc_coherent() which is already decrypted with SEV enabled.
> >>> Set up
> >>> the page protection accordingly. Drivers must detect SEV enabled and
> >>> switch
> >>> to the dma page pool.
> >>>
> >>> This patch has not yet been tested. As a follow-up, we might want to
> >>> cache decrypted pages in the dma page pool regardless of their
> >>> caching
> >>> state.
> >> This patch is unnecessary, SEV support already works fine with at
> >> least
> >> amdgpu and I would expect that it also works with other drivers as
> >> well.
> >>
> >> Also see this patch:
> >>
> >> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
> >> Author: Christian König 
> >> Date:   Wed Mar 13 10:11:19 2019 +0100
> >>
> >> drm: fallback to dma_alloc_coherent when memory encryption is
> >> active
> >>
> >> We can't just map any random page we get when memory
> >> encryption is
> >> active.
> >>
> >> Signed-off-by: Christian König 
> >> Acked-by: Alex Deucher 
> >> Link: https://patchwork.kernel.org/patch/10850833/
> >>
> >> Regards,
> >> Christian.
> > Yes, I noticed that. Although I fail to see where we automagically
> > clear the PTE encrypted bit when mapping coherent memory? For the
> > linear kernel map, that's done within dma_alloc_coherent() but for
> > kernel vmaps and user-space maps? Is that done automatically by
> > the x86 platform layer?
> >>> Yes, I think so. Haven't looked too closely at this either.
> >> This sounds a bit odd. If that were the case, the natural place would be
> >> the PAT tracking code, but it only handles caching flags AFAICT. Not
> >> encryption flags.
> >>
> >> But when you tested AMD with SEV, was that running as hypervisor rather
> >> than a guest, or did you run an SEV guest with PCI passthrough to the
> >> AMD device?
> > Yeah, well the problem is we never tested this ourself :)
> >
> > /Thomas
> >
>  And, as a follow up question, why do we need dma_alloc_coherent() when
>  using SME? I thought the hardware performs the decryption when DMA-ing
>  to / from an encrypted page with SME, but not with SEV?
> >>> I think the issue was that the DMA API would try to use a bounce buffer
> >>> in this case.
> >> SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
> >> possible to avoid dma_alloc_coherent() in the SME case.
> > In this case I don't have an explanation for this.
> >
> > For the background what happened is that we got reports that SEV/SME
> > doesn't work with amdgpu. So we told the people to try using the
> > dma_alloc_coherent() path and that worked fine. Because of this we came
> > up with the patch I noted earlier.
> >
> > I can confirm that it indeed works now for a couple of users, but we
> > still don't have a test system for this in our team.
> >
> > Christian.
>
> OK, understood,
>
> But unless there is some strange magic going on (which there might be
> of course), I do think the patch I sent is correct, and the reason that
> SEV works is that the AMD card is used by the hypervisor and not the
> guest, and TTM is actually incorrectly creating conflicting maps and
> treating the coherent memory as encrypted. But since the memory is only
> accessed through encrypted PTEs, the hardware does the right thing,
> using the hypervisor key for decryption.
>
> But that's only a guess, and this is not super-urgent. I will be able to
> follow up if / when we bring vmwgfx up for SEV.
>
> /Thomas
>
> >> /Thomas
> >>
> >>
> >>> Christian.
> >>>
>  Thanks, Thomas
> 
> 
> 
>

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Thomas Hellstrom

On 5/24/19 2:03 PM, Koenig, Christian wrote:

Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:

On 5/24/19 12:18 PM, Koenig, Christian wrote:

Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:

On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):

From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we might
want to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
the page protection accordingly. Drivers must detect SEV enabled and switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their caching
state.
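
As an illustration of the last point, a hypothetical driver-side check
might look like the following. This is a sketch, not part of the posted
patch; it assumes the mem_encrypt_active() helper from
<linux/mem_encrypt.h> that kernels of this era provide:

#include <linux/mem_encrypt.h>

/* Hypothetical helper, not in the posted patch. */
static bool driver_needs_dma_page_pool(void)
{
	/*
	 * With memory encryption active, pages handed to the device must
	 * come from the decrypted DMA page pool rather than the normal
	 * pool. (Whether SME strictly needs this is debated below.)
	 */
	return mem_encrypt_active();
}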

This patch is unnecessary, SEV support already works fine with at least
amdgpu and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

    drm: fallback to dma_alloc_coherent when memory encryption is active

    We can't just map any random page we get when memory encryption is
    active.

    Signed-off-by: Christian König 
    Acked-by: Alex Deucher 
    Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.

Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent() but for
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.

This sounds a bit odd. If that were the case, the natural place would be
the PAT tracking code, but it only handles caching flags AFAICT. Not
encryption flags.
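
To illustrate what explicit handling looks like, a minimal sketch of a
kernel alias mapping (hypothetical helper, not from any tree): the new
PTEs must have the C-bit dropped by the caller, since nothing copies it
from the linear map.

#include <linux/vmalloc.h>
#include <linux/mem_encrypt.h>

/* Hypothetical sketch: alias pages whose linear-map PTEs are decrypted. */
static void *vmap_decrypted(struct page **pages, unsigned int count)
{
	/* Plain PAGE_KERNEL here would leave the C-bit set in the alias. */
	return vmap(pages, count, VM_MAP, pgprot_decrypted(PAGE_KERNEL));
}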

But when you tested AMD with SEV, was that running as hypervisor rather
than a guest, or did you run an SEV guest with PCI passthrough to the
AMD device?

Yeah, well the problem is we never tested this ourselves :)


/Thomas


And, as a follow-up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing
to / from an encrypted page with SME, but not with SEV?

I think the issue was that the DMA API would try to use a bounce buffer
in this case.

SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
possible to avoid dma_alloc_coherent() in the SME case.
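
For reference, this is roughly what the x86 init code does, paraphrased
from memory of arch/x86/mm/mem_encrypt.c around v5.1 (check the tree for
the exact version):

void __init mem_encrypt_init(void)
{
	if (!sme_me_mask)
		return;

	/* Make the SWIOTLB bounce buffers themselves decrypted. */
	swiotlb_update_mem_attributes();

	/*
	 * With SEV, devices cannot DMA to arbitrary (encrypted) pages,
	 * so all streaming DMA is forced through the bounce buffer.
	 * SME alone does not force bouncing.
	 */
	if (sev_active())
		swiotlb_force = SWIOTLB_FORCE;
}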

In this case I don't have an explanation for this.

For background, what happened is that we got reports that SEV/SME
doesn't work with amdgpu. So we told the people to try using the
dma_alloc_coherent() path and that worked fine. Because of this we came
up with the patch I noted earlier.

I can confirm that it indeed works now for a couple of users, but we
still don't have a test system for this in our team.

Christian.


OK, understood.

But unless there is some strange magic going on (which there might be,
of course), I do think the patch I sent is correct, and the reason that
SEV works is that the AMD card is used by the hypervisor and not the
guest, and TTM is actually incorrectly creating conflicting maps and
treating the coherent memory as encrypted. But since the memory is only
accessed through encrypted PTEs, the hardware does the right thing,
using the hypervisor key for decryption.
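
In code terms, the suspected failure mode would be roughly the following
(hypothetical helper; the pgprot_decrypted() line is what the patch
effectively adds for such mappings):

#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/mem_encrypt.h>

static int sketch_mmap_coherent(struct device *dev,
				struct vm_area_struct *vma,
				void *cpu_addr, dma_addr_t dma_addr,
				size_t size)
{
	/*
	 * The linear-map alias of cpu_addr is already decrypted. Without
	 * the line below, the user alias keeps the C-bit set; on a host
	 * that owns the GPU (no guest involved) such an encrypted alias
	 * still happens to decrypt correctly with the hypervisor key,
	 * which would mask the bug.
	 */
	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);

	return dma_mmap_coherent(dev, vma, cpu_addr, dma_addr, size);
}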


But that's only a guess, and this is not super-urgent. I will be able to 
follow up if / when we bring vmwgfx up for SEV.


/Thomas


/Thomas



Christian.


Thanks, Thomas






Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Koenig, Christian
Am 24.05.19 um 12:37 schrieb Thomas Hellstrom:
>
> On 5/24/19 12:18 PM, Koenig, Christian wrote:
>> Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:
>>>
>>> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
 Hi, Christian,

 On 5/24/19 10:37 AM, Koenig, Christian wrote:
> Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):
>>
>> From: Thomas Hellstrom 
>>
>> With SEV encryption, all DMA memory must be marked decrypted
>> (AKA "shared") for devices to be able to read it. In the future we might
>> want to be able to switch normal (encrypted) memory to decrypted in exactly
>> the same way as we handle caching states, and that would require additional
>> memory pools. But for now, rely on memory allocated with
>> dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
>> the page protection accordingly. Drivers must detect SEV enabled and switch
>> to the dma page pool.
>>
>> This patch has not yet been tested. As a follow-up, we might want to
>> cache decrypted pages in the dma page pool regardless of their caching
>> state.
> This patch is unnecessary, SEV support already works fine with at least
> amdgpu and I would expect that it also works with other drivers as well.
>
> Also see this patch:
>
> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
> Author: Christian König 
> Date:   Wed Mar 13 10:11:19 2019 +0100
>
>    drm: fallback to dma_alloc_coherent when memory encryption is active
>
>    We can't just map any random page we get when memory encryption is
>    active.
>
>    Signed-off-by: Christian König 
>    Acked-by: Alex Deucher 
>    Link: https://patchwork.kernel.org/patch/10850833/
>
> Regards,
> Christian.
 Yes, I noticed that. Although I fail to see where we automagically
 clear the PTE encrypted bit when mapping coherent memory? For the
 linear kernel map, that's done within dma_alloc_coherent() but for
 kernel vmaps and user-space maps? Is that done automatically by
 the x86 platform layer?
>> Yes, I think so. Haven't looked too closely at this either.
>
> This sounds a bit odd. If that were the case, the natural place would be
> the PAT tracking code, but it only handles caching flags AFAICT. Not
> encryption flags.
>
> But when you tested AMD with SEV, was that running as hypervisor rather
> than a guest, or did you run an SEV guest with PCI passthrough to the
> AMD device?

Yeah, well the problem is we never tested this ourselves :)

>
>>
 /Thomas

>>> And, as a follow-up question, why do we need dma_alloc_coherent() when
>>> using SME? I thought the hardware performs the decryption when DMA-ing
>>> to / from an encrypted page with SME, but not with SEV?
>> I think the issue was that the DMA API would try to use a bounce buffer
>> in this case.
>
> SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
> possible to avoid dma_alloc_coherent() in the SME case.

In this case I don't have an explanation for this.

For background, what happened is that we got reports that SEV/SME
doesn't work with amdgpu. So we told the people to try using the 
dma_alloc_coherent() path and that worked fine. Because of this we came 
up with the patch I noted earlier.

I can confirm that it indeed works now for a couple of users, but we 
still don't have a test system for this in our team.

Christian.

>
> /Thomas
>
>
>>
>> Christian.
>>
>>> Thanks, Thomas
>>>
>>>
>>>
>


Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Thomas Hellstrom

On 5/24/19 12:18 PM, Koenig, Christian wrote:

Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:


On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):


From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we might
want to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
the page protection accordingly. Drivers must detect SEV enabled and switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their caching
state.

This patch is unnecessary, SEV support already works fine with at least
amdgpu and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

   drm: fallback to dma_alloc_coherent when memory encryption is active

   We can't just map any random page we get when memory encryption is
   active.

   Signed-off-by: Christian König 
   Acked-by: Alex Deucher 
   Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.

Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent() but for
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.


This sounds a bit odd. If that were the case, the natural place would be 
the PAT tracking code, but it only handles caching flags AFAICT. Not 
encryption flags.


But when you tested AMD with SEV, was that running as hypervisor rather 
than a guest, or did you run an SEV guest with PCI passthrough to the 
AMD device?


/Thomas


And, as a follow-up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing
to / from an encrypted page with SME, but not with SEV?

I think the issue was that the DMA API would try to use a bounce buffer
in this case.


SEV forces SWIOTLB bouncing on, but not SME. So it should probably be 
possible to avoid dma_alloc_coherent() in the SME case.


/Thomas




Christian.


Thanks, Thomas






Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Koenig, Christian
Am 24.05.19 um 11:55 schrieb Thomas Hellstrom:
>
> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
>> Hi, Christian,
>>
>> On 5/24/19 10:37 AM, Koenig, Christian wrote:
>>> Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):

 From: Thomas Hellstrom 

 With SEV encryption, all DMA memory must be marked decrypted
 (AKA "shared") for devices to be able to read it. In the future we might
 want to be able to switch normal (encrypted) memory to decrypted in exactly
 the same way as we handle caching states, and that would require additional
 memory pools. But for now, rely on memory allocated with
 dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
 the page protection accordingly. Drivers must detect SEV enabled and switch
 to the dma page pool.

 This patch has not yet been tested. As a follow-up, we might want to
 cache decrypted pages in the dma page pool regardless of their caching
 state.
>>> This patch is unnecessary, SEV support already works fine with at least
>>> amdgpu and I would expect that it also works with other drivers as well.
>>>
>>> Also see this patch:
>>>
>>> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
>>> Author: Christian König 
>>> Date:   Wed Mar 13 10:11:19 2019 +0100
>>>
>>>   drm: fallback to dma_alloc_coherent when memory encryption is active
>>>
>>>   We can't just map any random page we get when memory encryption is
>>>   active.
>>>
>>>   Signed-off-by: Christian König 
>>>   Acked-by: Alex Deucher 
>>>   Link: https://patchwork.kernel.org/patch/10850833/
>>>
>>> Regards,
>>> Christian.
>>
>> Yes, I noticed that. Although I fail to see where we automagically
>> clear the PTE encrypted bit when mapping coherent memory? For the
>> linear kernel map, that's done within dma_alloc_coherent() but for
>> kernel vmaps and user-space maps? Is that done automatically by
>> the x86 platform layer?

Yes, I think so. Haven't looked too closely at this either.

>>
>> /Thomas
>>
> And, as a follow-up question, why do we need dma_alloc_coherent() when
> using SME? I thought the hardware performs the decryption when DMA-ing
> to / from an encrypted page with SME, but not with SEV?

I think the issue was that the DMA API would try to use a bounce buffer 
in this case.

Christian.

>
> Thanks, Thomas
>
>
>


Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Thomas Hellstrom

On 5/24/19 11:11 AM, Thomas Hellstrom wrote:

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):


From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we might
want to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
the page protection accordingly. Drivers must detect SEV enabled and switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their caching
state.

This patch is unnecessary, SEV support already works fine with at least
amdgpu and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

      drm: fallback to dma_alloc_coherent when memory encryption is active

      We can't just map any random page we get when memory encryption is
      active.

      Signed-off-by: Christian König 
      Acked-by: Alex Deucher 
      Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.


Yes, I noticed that. Although I fail to see where we automagically 
clear the PTE encrypted bit when mapping coherent memory? For the 
linear kernel map, that's done within dma_alloc_coherent() but for 
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?


/Thomas

And, as a follow-up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing 
to / from an encrypted page with SME, but not with SEV?


Thanks, Thomas




Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Thomas Hellstrom

Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:

Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):


From: Thomas Hellstrom 

With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we might
want to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require additional
memory pools. But for now, rely on memory allocated with
dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
the page protection accordingly. Drivers must detect SEV enabled and switch
to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to
cache decrypted pages in the dma page pool regardless of their caching
state.

This patch is unnecessary, SEV support already works fine with at least
amdgpu and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

      drm: fallback to dma_alloc_coherent when memory encryption is active

      We can't just map any random page we get when memory encryption is
      active.

      Signed-off-by: Christian König 
      Acked-by: Alex Deucher 
      Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.


Yes, I noticed that. Although I fail to see where we automagically clear 
the PTE encrypted bit when mapping coherent memory? For the linear 
kernel map, that's done within dma_alloc_coherent() but for kernel vmaps 
and user-space maps? Is that done automatically by the x86 platform
layer?


/Thomas


Cc: Christian König 
Signed-off-by: Thomas Hellstrom 
---
   drivers/gpu/drm/ttm/ttm_bo_util.c| 17 +
   drivers/gpu/drm/ttm/ttm_bo_vm.c  |  6 --
   drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |  3 +++
   drivers/gpu/drm/vmwgfx/vmwgfx_blit.c |  6 --
   include/drm/ttm/ttm_bo_driver.h  |  8 +---
   include/drm/ttm/ttm_tt.h |  1 +
   6 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 895d77d799e4..1d6643bd0b01 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -419,11 +419,13 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
  page = i * dir + add;
  if (old_iomap == NULL) {
  pgprot_t prot = ttm_io_prot(old_mem->placement,
+   ttm->page_flags,
  PAGE_KERNEL);
  ret = ttm_copy_ttm_io_page(ttm, new_iomap, page,
 prot);
  } else if (new_iomap == NULL) {
  pgprot_t prot = ttm_io_prot(new_mem->placement,
+   ttm->page_flags,
  PAGE_KERNEL);
  ret = ttm_copy_io_ttm_page(ttm, old_iomap, page,
 prot);
@@ -526,11 +528,11 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
  return 0;
   }

-pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
+pgprot_t ttm_io_prot(u32 caching_flags, u32 tt_page_flags, pgprot_t tmp)
   {
  /* Cached mappings need no adjustment */
  if (caching_flags & TTM_PL_FLAG_CACHED)
-   return tmp;
+   goto check_encryption;

   #if defined(__i386__) || defined(__x86_64__)
  if (caching_flags & TTM_PL_FLAG_WC)
@@ -548,6 +550,11 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
   #if defined(__sparc__) || defined(__mips__)
  tmp = pgprot_noncached(tmp);
   #endif
+
+check_encryption:
+   if (tt_page_flags & TTM_PAGE_FLAG_DECRYPTED)
+   tmp = pgprot_decrypted(tmp);
+
  return tmp;
   }
   EXPORT_SYMBOL(ttm_io_prot);
@@ -594,7 +601,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
  if (ret)
  return ret;

-   if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
+   if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED) &&
+   !(ttm->page_flags & TTM_PAGE_FLAG_DECRYPTED)) {
  /*
   * We're mapping a single page, and the desired
   * page protection is consistent with the bo.
@@ -608,7 +616,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
   * We need to use vmap to get the desired page protection
   * or to make the buffer object look contiguous.
   */
-   prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
+   prot = ttm_io_prot(mem->placement, ttm->page_flags,
+  PAGE_KERNEL);
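
The diff is truncated here by the archive. For completeness, a
hypothetical reconstruction of what the ttm_page_alloc_dma.c hunk
presumably does, judging from the diffstat and the new flag name (a
guess, not the posted hunk):

	/* DMA-pool pages are decrypted when SEV is active; mark them. */
	if (sev_active())
		ttm->page_flags |= TTM_PAGE_FLAG_DECRYPTED;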

Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

2019-05-24 Thread Koenig, Christian
Am 24.05.19 um 10:11 schrieb Thomas Hellström (VMware):
>
> From: Thomas Hellstrom 
>
> With SEV encryption, all DMA memory must be marked decrypted
> (AKA "shared") for devices to be able to read it. In the future we might
> want to be able to switch normal (encrypted) memory to decrypted in exactly
> the same way as we handle caching states, and that would require additional
> memory pools. But for now, rely on memory allocated with
> dma_alloc_coherent() which is already decrypted with SEV enabled. Set up
> the page protection accordingly. Drivers must detect SEV enabled and switch
> to the dma page pool.
>
> This patch has not yet been tested. As a follow-up, we might want to
> cache decrypted pages in the dma page pool regardless of their caching
> state.

This patch is unnecessary, SEV support already works fine with at least 
amdgpu and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König 
Date:   Wed Mar 13 10:11:19 2019 +0100

     drm: fallback to dma_alloc_coherent when memory encryption is active

     We can't just map any random page we get when memory encryption is
     active.

     Signed-off-by: Christian König 
     Acked-by: Alex Deucher 
     Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.


>
> Cc: Christian König 
> Signed-off-by: Thomas Hellstrom 
> ---
>   drivers/gpu/drm/ttm/ttm_bo_util.c| 17 +
>   drivers/gpu/drm/ttm/ttm_bo_vm.c  |  6 --
>   drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |  3 +++
>   drivers/gpu/drm/vmwgfx/vmwgfx_blit.c |  6 --
>   include/drm/ttm/ttm_bo_driver.h  |  8 +---
>   include/drm/ttm/ttm_tt.h |  1 +
>   6 files changed, 30 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index 895d77d799e4..1d6643bd0b01 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -419,11 +419,13 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
>  page = i * dir + add;
>  if (old_iomap == NULL) {
>  pgprot_t prot = ttm_io_prot(old_mem->placement,
> +   ttm->page_flags,
>  PAGE_KERNEL);
>  ret = ttm_copy_ttm_io_page(ttm, new_iomap, page,
> prot);
>  } else if (new_iomap == NULL) {
>  pgprot_t prot = ttm_io_prot(new_mem->placement,
> +   ttm->page_flags,
>  PAGE_KERNEL);
>  ret = ttm_copy_io_ttm_page(ttm, old_iomap, page,
> prot);
> @@ -526,11 +528,11 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
>  return 0;
>   }
>
> -pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
> +pgprot_t ttm_io_prot(u32 caching_flags, u32 tt_page_flags, pgprot_t tmp)
>   {
>  /* Cached mappings need no adjustment */
>  if (caching_flags & TTM_PL_FLAG_CACHED)
> -   return tmp;
> +   goto check_encryption;
>
>   #if defined(__i386__) || defined(__x86_64__)
>  if (caching_flags & TTM_PL_FLAG_WC)
> @@ -548,6 +550,11 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
>   #if defined(__sparc__) || defined(__mips__)
>  tmp = pgprot_noncached(tmp);
>   #endif
> +
> +check_encryption:
> +   if (tt_page_flags & TTM_PAGE_FLAG_DECRYPTED)
> +   tmp = pgprot_decrypted(tmp);
> +
>  return tmp;
>   }
>   EXPORT_SYMBOL(ttm_io_prot);
> @@ -594,7 +601,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
>  if (ret)
>  return ret;
>
> -   if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
> +   if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED) &&
> +   !(ttm->page_flags & TTM_PAGE_FLAG_DECRYPTED)) {
>  /*
>   * We're mapping a single page, and the desired
>   * page protection is consistent with the bo.
> @@ -608,7 +616,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
>   * We need to use vmap to get the desired page protection
>   * or to make the buffer object look contiguous.
>   */
> -   prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> +   prot = ttm_io_prot(mem->placement, ttm->page_flags,
> +  PAGE_KERNEL);
>  map->bo_kmap_type = ttm_bo_map_vmap;
>  map->virtual = vmap(ttm->pages + start_page, num_pages,
>
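
For reference, the pgprot helpers the patch builds on, roughly as they
appear in kernels of this era (quoted from memory of
include/linux/mem_encrypt.h and the x86 pgtable headers; the C-bit is
sme_me_mask):

#define __sme_set(x)		((x) | sme_me_mask)
#define __sme_clr(x)		((x) & ~sme_me_mask)

#define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
#define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))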