Re: [PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-07 Thread Jarkko Sakkinen
On Mon, Oct 05, 2020 at 01:39:21AM +0300, Jarkko Sakkinen wrote:
> On Sat, Oct 03, 2020 at 01:23:49PM -0500, Haitao Huang wrote:
> > On Sat, 03 Oct 2020 08:32:45 -0500, Jarkko Sakkinen
> >  wrote:
> > 
> > > On Sat, Oct 03, 2020 at 12:22:47AM -0500, Haitao Huang wrote:
> > > > When I turn on CONFIG_PROVE_LOCKING, the kernel reports the
> > > > following suspicious RCU usages. Not sure if it is an issue. Just
> > > > reporting here:
> > > 
> > > I'm glad to hear that my tip helped you to get us the data.
> > > 
> > > This does not look like an issue in the page reclaimer, which was not
> > > obvious to me before. That's a good thing. I was really worried about
> > > it because the reclaimer has been very stable for a long time now. The
> > > last bug fix for the reclaimer was done in June, in the v31 version of
> > > the patch set, and it has been unchanged since then (except possibly
> > > some renames requested by Boris).
> > > 
> > > My wild guess is that I have a bad usage pattern for xarray. I migrated
> > > to it in v36, and it is entirely possible that I've misused it. It was
> > > the first time I ever used it. Before xarray we had radix_tree, but
> > > based on Matthew Wilcox's feedback I migrated to xarray.
> > > 
> > > What I'd ask you to do next, if at all possible, is to run the same
> > > test with v35 so we can verify this. That one still has the radix
> > > tree.
> > > 
> > 
> > 
> > v35 does not cause any such warning messages from kernel
> 
> Thank you. Looks like Matthew already located the issue, a fix will
> land soon.

Just acknowledging that this should be fixed in my master branch now.

/Jarkko


Re: [PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-04 Thread Jarkko Sakkinen
On Sat, Oct 03, 2020 at 01:23:49PM -0500, Haitao Huang wrote:
> On Sat, 03 Oct 2020 08:32:45 -0500, Jarkko Sakkinen
>  wrote:
> 
> > On Sat, Oct 03, 2020 at 12:22:47AM -0500, Haitao Huang wrote:
> > > When I turn on CONFIG_PROVE_LOCKING, the kernel reports the following
> > > suspicious RCU usages. Not sure if it is an issue. Just reporting here:
> > 
> > I'm glad to hear that my tip helped you to get us the data.
> > 
> > This does not look like an issue in the page reclaimer, which was not
> > obvious to me before. That's a good thing. I was really worried about
> > it because the reclaimer has been very stable for a long time now. The
> > last bug fix for the reclaimer was done in June, in the v31 version of
> > the patch set, and it has been unchanged since then (except possibly
> > some renames requested by Boris).
> > 
> > My wild guess is that I have a bad usage pattern for xarray. I migrated
> > to it in v36, and it is entirely possible that I've misused it. It was
> > the first time I ever used it. Before xarray we had radix_tree, but
> > based on Matthew Wilcox's feedback I migrated to xarray.
> > 
> > What I'd ask you to do next, if at all possible, is to run the same
> > test with v35 so we can verify this. That one still has the radix
> > tree.
> > 
> 
> 
> v35 does not cause any such warning messages from kernel

Thank you. Looks like Matthew already located the issue, a fix will
land soon.

/Jarkko


Re: [PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-03 Thread Haitao Huang
On Sat, 03 Oct 2020 08:32:45 -0500, Jarkko Sakkinen  
 wrote:



> On Sat, Oct 03, 2020 at 12:22:47AM -0500, Haitao Huang wrote:
> > When I turn on CONFIG_PROVE_LOCKING, the kernel reports the following
> > suspicious RCU usages. Not sure if it is an issue. Just reporting here:
> 
> I'm glad to hear that my tip helped you to get us the data.
> 
> This does not look like an issue in the page reclaimer, which was not
> obvious to me before. That's a good thing. I was really worried about
> it because the reclaimer has been very stable for a long time now. The
> last bug fix for the reclaimer was done in June, in the v31 version of
> the patch set, and it has been unchanged since then (except possibly
> some renames requested by Boris).
> 
> My wild guess is that I have a bad usage pattern for xarray. I migrated
> to it in v36, and it is entirely possible that I've misused it. It was
> the first time I ever used it. Before xarray we had radix_tree, but
> based on Matthew Wilcox's feedback I migrated to xarray.
> 
> What I'd ask you to do next, if at all possible, is to run the same
> test with v35 so we can verify this. That one still has the radix tree.

v35 does not cause any such warning messages from kernel.

> Thank you.
> 
> /Jarkko



[ +34.337095] =============================
[  +0.01] WARNING: suspicious RCU usage
[  +0.02] 5.9.0-rc6-lock-sgx39 #1 Not tainted
[  +0.01] -----------------------------
[  +0.01] ./include/linux/xarray.h:1165 suspicious
rcu_dereference_check() usage!
[  +0.01]
  other info that might help us debug this:

[  +0.01]
  rcu_scheduler_active = 2, debug_locks = 1
[  +0.01] 1 lock held by enclaveos-runne/4238:
[  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vm_mmap_pgoff+0xa1/0x120
[  +0.05]
  stack backtrace:
[  +0.02] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted
5.9.0-rc6-lock-sgx39 #1
[  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual
Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020
[  +0.02] Call Trace:
[  +0.03]  dump_stack+0x7d/0x9f
[  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
[  +0.04]  xas_start+0x14c/0x1c0
[  +0.03]  xas_load+0xf/0x50
[  +0.02]  xas_find+0x25c/0x2c0
[  +0.04]  sgx_encl_may_map+0x87/0x1c0
[  +0.06]  sgx_mmap+0x29/0x70
[  +0.03]  mmap_region+0x3ee/0x710
[  +0.06]  do_mmap+0x3f1/0x5e0
[  +0.04]  vm_mmap_pgoff+0xcd/0x120
[  +0.07]  ksys_mmap_pgoff+0x1de/0x240
[  +0.05]  __x64_sys_mmap+0x33/0x40
[  +0.02]  do_syscall_64+0x37/0x80
[  +0.03]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.02] RIP: 0033:0x7fe34efe06ba
[  +0.02] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d 89
f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05 <48> 3d
00 f0 ff ff 77 56 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 1f 00
[  +0.01] RSP: 002b:7ffee83eac08 EFLAGS: 0206 ORIG_RAX:
0009
[  +0.01] RAX: ffda RBX: 0001 RCX:
7fe34efe06ba
[  +0.01] RDX: 0001 RSI: 1000 RDI:
07fff000
[  +0.01] RBP: 0004 R08: 0004 R09:

[  +0.01] R10: 0011 R11: 0206 R12:
07fff000
[  +0.01] R13: 1000 R14: 0011 R15:


[  +0.10] =============================
[  +0.01] WARNING: suspicious RCU usage
[  +0.01] 5.9.0-rc6-lock-sgx39 #1 Not tainted
[  +0.01] -----------------------------
[  +0.01] ./include/linux/xarray.h:1181 suspicious
rcu_dereference_check() usage!
[  +0.01]
  other info that might help us debug this:

[  +0.01]
  rcu_scheduler_active = 2, debug_locks = 1
[  +0.01] 1 lock held by enclaveos-runne/4238:
[  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vm_mmap_pgoff+0xa1/0x120
[  +0.03]
  stack backtrace:
[  +0.01] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted
5.9.0-rc6-lock-sgx39 #1
[  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual
Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020
[  +0.01] Call Trace:
[  +0.01]  dump_stack+0x7d/0x9f
[  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
[  +0.03]  xas_descend+0x116/0x120
[  +0.04]  xas_load+0x42/0x50
[  +0.02]  xas_find+0x25c/0x2c0
[  +0.04]  sgx_encl_may_map+0x87/0x1c0
[  +0.06]  sgx_mmap+0x29/0x70
[  +0.02]  mmap_region+0x3ee/0x710
[  +0.06]  do_mmap+0x3f1/0x5e0
[  +0.04]  vm_mmap_pgoff+0xcd/0x120
[  +0.07]  ksys_mmap_pgoff+0x1de/0x240
[  +0.05]  __x64_sys_mmap+0x33/0x40
[  +0.02]  do_syscall_64+0x37/0x80
[  +0.02]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.01] RIP: 0033:0x7fe34efe06ba
[  +0.01] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d 89
f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05 <48> 3d
00 f0 ff ff 77 56 5b 5d 41 5c 41 5d 41 

Re: [PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-03 Thread Jarkko Sakkinen
On Sat, Oct 03, 2020 at 12:22:47AM -0500, Haitao Huang wrote:
> When I turn on CONFIG_PROVE_LOCKING, the kernel reports the following
> suspicious RCU usages. Not sure if it is an issue. Just reporting here:

I'm glad to hear that my tip helped you to get us the data.

This does not look like an issue in the page reclaimer, which was not
obvious to me before. That's a good thing. I was really worried about
it because the reclaimer has been very stable for a long time now. The
last bug fix for the reclaimer was done in June, in the v31 version of
the patch set, and it has been unchanged since then (except possibly
some renames requested by Boris).

My wild guess is that I have a bad usage pattern for xarray. I migrated
to it in v36, and it is entirely possible that I've misused it. It was
the first time I ever used it. Before xarray we had radix_tree, but
based on Matthew Wilcox's feedback I migrated to xarray.

What I'd ask you to do next, if at all possible, is to run the same
test with v35 so we can verify this. That one still has the radix tree.

Thank you.

/Jarkko

> 
> [ +34.337095] =============================
> [  +0.01] WARNING: suspicious RCU usage
> [  +0.02] 5.9.0-rc6-lock-sgx39 #1 Not tainted
> [  +0.01] -----------------------------
> [  +0.01] ./include/linux/xarray.h:1165 suspicious
> rcu_dereference_check() usage!
> [  +0.01]
>   other info that might help us debug this:
> 
> [  +0.01]
>   rcu_scheduler_active = 2, debug_locks = 1
> [  +0.01] 1 lock held by enclaveos-runne/4238:
> [  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
> vm_mmap_pgoff+0xa1/0x120
> [  +0.05]
>   stack backtrace:
> [  +0.02] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted
> 5.9.0-rc6-lock-sgx39 #1
> [  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual
> Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020
> [  +0.02] Call Trace:
> [  +0.03]  dump_stack+0x7d/0x9f
> [  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
> [  +0.04]  xas_start+0x14c/0x1c0
> [  +0.03]  xas_load+0xf/0x50
> [  +0.02]  xas_find+0x25c/0x2c0
> [  +0.04]  sgx_encl_may_map+0x87/0x1c0
> [  +0.06]  sgx_mmap+0x29/0x70
> [  +0.03]  mmap_region+0x3ee/0x710
> [  +0.06]  do_mmap+0x3f1/0x5e0
> [  +0.04]  vm_mmap_pgoff+0xcd/0x120
> [  +0.07]  ksys_mmap_pgoff+0x1de/0x240
> [  +0.05]  __x64_sys_mmap+0x33/0x40
> [  +0.02]  do_syscall_64+0x37/0x80
> [  +0.03]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [  +0.02] RIP: 0033:0x7fe34efe06ba
> [  +0.02] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d 89
> f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05 <48> 3d
> 00 f0 ff ff 77 56 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 1f 00
> [  +0.01] RSP: 002b:7ffee83eac08 EFLAGS: 0206 ORIG_RAX:
> 0009
> [  +0.01] RAX: ffda RBX: 0001 RCX:
> 7fe34efe06ba
> [  +0.01] RDX: 0001 RSI: 1000 RDI:
> 07fff000
> [  +0.01] RBP: 0004 R08: 0004 R09:
> 
> [  +0.01] R10: 0011 R11: 0206 R12:
> 07fff000
> [  +0.01] R13: 1000 R14: 0011 R15:
> 
> 
> [  +0.10] =============================
> [  +0.01] WARNING: suspicious RCU usage
> [  +0.01] 5.9.0-rc6-lock-sgx39 #1 Not tainted
> [  +0.01] -----------------------------
> [  +0.01] ./include/linux/xarray.h:1181 suspicious
> rcu_dereference_check() usage!
> [  +0.01]
>   other info that might help us debug this:
> 
> [  +0.01]
>   rcu_scheduler_active = 2, debug_locks = 1
> [  +0.01] 1 lock held by enclaveos-runne/4238:
> [  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
> vm_mmap_pgoff+0xa1/0x120
> [  +0.03]
>   stack backtrace:
> [  +0.01] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted
> 5.9.0-rc6-lock-sgx39 #1
> [  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual
> Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020
> [  +0.01] Call Trace:
> [  +0.01]  dump_stack+0x7d/0x9f
> [  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
> [  +0.03]  xas_descend+0x116/0x120
> [  +0.04]  xas_load+0x42/0x50
> [  +0.02]  xas_find+0x25c/0x2c0
> [  +0.04]  sgx_encl_may_map+0x87/0x1c0
> [  +0.06]  sgx_mmap+0x29/0x70
> [  +0.02]  mmap_region+0x3ee/0x710
> [  +0.06]  do_mmap+0x3f1/0x5e0
> [  +0.04]  vm_mmap_pgoff+0xcd/0x120
> [  +0.07]  ksys_mmap_pgoff+0x1de/0x240
> [  +0.05]  __x64_sys_mmap+0x33/0x40
> [  +0.02]  do_syscall_64+0x37/0x80
> [  +0.02]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [  +0.01] RIP: 0033:0x7fe34efe06ba
> [  +0.01] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d 89
> f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05 <48> 3d
> 

Re: [PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-02 Thread Haitao Huang
When I turn on CONFIG_PROVE_LOCKING, the kernel reports the following
suspicious RCU usages. Not sure if it is an issue. Just reporting here:


[ +34.337095] =============================
[  +0.01] WARNING: suspicious RCU usage
[  +0.02] 5.9.0-rc6-lock-sgx39 #1 Not tainted
[  +0.01] -----------------------------
[  +0.01] ./include/linux/xarray.h:1165 suspicious
rcu_dereference_check() usage!

[  +0.01]
  other info that might help us debug this:

[  +0.01]
  rcu_scheduler_active = 2, debug_locks = 1
[  +0.01] 1 lock held by enclaveos-runne/4238:
[  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vm_mmap_pgoff+0xa1/0x120

[  +0.05]
  stack backtrace:
[  +0.02] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted  
5.9.0-rc6-lock-sgx39 #1
[  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual  
Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020

[  +0.02] Call Trace:
[  +0.03]  dump_stack+0x7d/0x9f
[  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
[  +0.04]  xas_start+0x14c/0x1c0
[  +0.03]  xas_load+0xf/0x50
[  +0.02]  xas_find+0x25c/0x2c0
[  +0.04]  sgx_encl_may_map+0x87/0x1c0
[  +0.06]  sgx_mmap+0x29/0x70
[  +0.03]  mmap_region+0x3ee/0x710
[  +0.06]  do_mmap+0x3f1/0x5e0
[  +0.04]  vm_mmap_pgoff+0xcd/0x120
[  +0.07]  ksys_mmap_pgoff+0x1de/0x240
[  +0.05]  __x64_sys_mmap+0x33/0x40
[  +0.02]  do_syscall_64+0x37/0x80
[  +0.03]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.02] RIP: 0033:0x7fe34efe06ba
[  +0.02] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d  
89 f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05  
<48> 3d 00 f0 ff ff 77 56 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 1f 00
[  +0.01] RSP: 002b:7ffee83eac08 EFLAGS: 0206 ORIG_RAX:  
0009
[  +0.01] RAX: ffda RBX: 0001 RCX:  
7fe34efe06ba
[  +0.01] RDX: 0001 RSI: 1000 RDI:  
07fff000
[  +0.01] RBP: 0004 R08: 0004 R09:  

[  +0.01] R10: 0011 R11: 0206 R12:  
07fff000
[  +0.01] R13: 1000 R14: 0011 R15:  



[  +0.10] =============================
[  +0.01] WARNING: suspicious RCU usage
[  +0.01] 5.9.0-rc6-lock-sgx39 #1 Not tainted
[  +0.01] -----------------------------
[  +0.01] ./include/linux/xarray.h:1181 suspicious
rcu_dereference_check() usage!

[  +0.01]
  other info that might help us debug this:

[  +0.01]
  rcu_scheduler_active = 2, debug_locks = 1
[  +0.01] 1 lock held by enclaveos-runne/4238:
[  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vm_mmap_pgoff+0xa1/0x120

[  +0.03]
  stack backtrace:
[  +0.01] CPU: 1 PID: 4238 Comm: enclaveos-runne Not tainted  
5.9.0-rc6-lock-sgx39 #1
[  +0.01] Hardware name: Microsoft Corporation Virtual Machine/Virtual  
Machine, BIOS Hyper-V UEFI Release v4.1 04/02/2020

[  +0.01] Call Trace:
[  +0.01]  dump_stack+0x7d/0x9f
[  +0.03]  lockdep_rcu_suspicious+0xce/0xf0
[  +0.03]  xas_descend+0x116/0x120
[  +0.04]  xas_load+0x42/0x50
[  +0.02]  xas_find+0x25c/0x2c0
[  +0.04]  sgx_encl_may_map+0x87/0x1c0
[  +0.06]  sgx_mmap+0x29/0x70
[  +0.02]  mmap_region+0x3ee/0x710
[  +0.06]  do_mmap+0x3f1/0x5e0
[  +0.04]  vm_mmap_pgoff+0xcd/0x120
[  +0.07]  ksys_mmap_pgoff+0x1de/0x240
[  +0.05]  __x64_sys_mmap+0x33/0x40
[  +0.02]  do_syscall_64+0x37/0x80
[  +0.02]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.01] RIP: 0033:0x7fe34efe06ba
[  +0.01] Code: 89 f5 41 54 49 89 fc 55 53 74 35 49 63 e8 48 63 da 4d  
89 f9 49 89 e8 4d 63 d6 48 89 da 4c 89 ee 4c 89 e7 b8 09 00 00 00 0f 05  
<48> 3d 00 f0 ff ff 77 56 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 1f 00
[  +0.01] RSP: 002b:7ffee83eac08 EFLAGS: 0206 ORIG_RAX:  
0009
[  +0.01] RAX: ffda RBX: 0001 RCX:  
7fe34efe06ba
[  +0.01] RDX: 0001 RSI: 1000 RDI:  
07fff000
[  +0.01] RBP: 0004 R08: 0004 R09:  

[  +0.01] R10: 0011 R11: 0206 R12:  
07fff000
[  +0.01] R13: 1000 R14: 0011 R15:  



[  +0.001117] =============================
[  +0.01] WARNING: suspicious RCU usage
[  +0.01] 5.9.0-rc6-lock-sgx39 #1 Not tainted
[  +0.01] -----------------------------
[  +0.01] ./include/linux/xarray.h:1181 suspicious
rcu_dereference_check() usage!

[  +0.01]
  other info that might help us debug this:

[  +0.01]
  rcu_scheduler_active = 2, debug_locks = 1
[  +0.01] 1 lock held by enclaveos-runne/4238:
[  +0.01]  #0: 9cc6657e45e8 (&mm->mmap_lock#2){++++}-{3:3}, 

[PATCH v39 16/24] x86/sgx: Add a page reclaimer

2020-10-02 Thread Jarkko Sakkinen
There is a limited amount of EPC available. Therefore, some of it must be
copied to regular memory, and only a subset kept in the SGX reserved
memory. While the kernel cannot directly access enclave memory, SGX
provides a set of ENCLS leaf functions to perform reclaiming.

Implement a page reclaimer by using these leaf functions. It picks victim
pages in LRU fashion from all the enclaves running in the system. The
thread ksgxswapd reclaims pages when the number of free EPC pages goes
below SGX_NR_LOW_PAGES, and continues until it reaches SGX_NR_HIGH_PAGES.

sgx_alloc_epc_page() can optionally reclaim pages directly when @reclaim
is set to true. A caller must also supply an owner for each page so that
the reclaimer can access the associated enclaves. This is needed for
locking, as most of the ENCLS leafs cannot be executed concurrently for
an enclave. The owner is also needed for accessing the SECS, which is
required to be resident when its child pages are being reclaimed.

Cc: linux...@kvack.org
Acked-by: Jethro Beekman 
Tested-by: Jethro Beekman 
Tested-by: Jordan Hand 
Tested-by: Nathaniel McCallum 
Tested-by: Chunyang Hui 
Tested-by: Seth Moore 
Co-developed-by: Sean Christopherson 
Signed-off-by: Sean Christopherson 
Signed-off-by: Jarkko Sakkinen 
---
 arch/x86/kernel/cpu/sgx/driver.c |   1 +
 arch/x86/kernel/cpu/sgx/encl.c   | 344 +-
 arch/x86/kernel/cpu/sgx/encl.h   |  41 +++
 arch/x86/kernel/cpu/sgx/ioctl.c  |  78 -
 arch/x86/kernel/cpu/sgx/main.c   | 481 +++
 arch/x86/kernel/cpu/sgx/sgx.h|   9 +
 6 files changed, 947 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index d01b28f7ce4a..0446781cc7a2 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -29,6 +29,7 @@ static int sgx_open(struct inode *inode, struct file *file)
atomic_set(&encl->flags, 0);
kref_init(&encl->refcount);
xa_init(&encl->page_array);
+   INIT_LIST_HEAD(&encl->va_pages);
mutex_init(&encl->lock);
INIT_LIST_HEAD(&encl->mm_list);
spin_lock_init(&encl->mm_lock);
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index c2c4a77af36b..54326efa6c2f 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -12,9 +12,88 @@
 #include "encls.h"
 #include "sgx.h"
 
+/*
+ * ELDU: Load an EPC page as unblocked. For more info, see "OS Management of 
EPC
+ * Pages" in the SDM.
+ */
+static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+  struct sgx_epc_page *epc_page,
+  struct sgx_epc_page *secs_page)
+{
+   unsigned long va_offset = SGX_ENCL_PAGE_VA_OFFSET(encl_page);
+   struct sgx_encl *encl = encl_page->encl;
+   struct sgx_pageinfo pginfo;
+   struct sgx_backing b;
+   pgoff_t page_index;
+   int ret;
+
+   if (secs_page)
+   page_index = SGX_ENCL_PAGE_INDEX(encl_page);
+   else
+   page_index = PFN_DOWN(encl->size);
+
+   ret = sgx_encl_get_backing(encl, page_index, &b);
+   if (ret)
+   return ret;
+
+   pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
+   pginfo.contents = (unsigned long)kmap_atomic(b.contents);
+   pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
+ b.pcmd_offset;
+
+   if (secs_page)
+   pginfo.secs = (u64)sgx_get_epc_addr(secs_page);
+   else
+   pginfo.secs = 0;
+
+   ret = __eldu(&pginfo, sgx_get_epc_addr(epc_page),
+sgx_get_epc_addr(encl_page->va_page->epc_page) +
+ va_offset);
+   if (ret) {
+   if (encls_failed(ret))
+   ENCLS_WARN(ret, "ELDU");
+
+   ret = -EFAULT;
+   }
+
+   kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
+   kunmap_atomic((void *)(unsigned long)pginfo.contents);
+
+   sgx_encl_put_backing(&b, false);
+
+   return ret;
+}
+
+static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ struct sgx_epc_page *secs_page)
+{
+   unsigned long va_offset = SGX_ENCL_PAGE_VA_OFFSET(encl_page);
+   struct sgx_encl *encl = encl_page->encl;
+   struct sgx_epc_page *epc_page;
+   int ret;
+
+   epc_page = sgx_alloc_epc_page(encl_page, false);
+   if (IS_ERR(epc_page))
+   return epc_page;
+
+   ret = __sgx_encl_eldu(encl_page, epc_page, secs_page);
+   if (ret) {
+   sgx_free_epc_page(epc_page);
+   return ERR_PTR(ret);
+   }
+
+   sgx_free_va_slot(encl_page->va_page, va_offset);
+   list_move(&encl_page->va_page->list, &encl->va_pages);
+   encl_page->desc &= ~SGX_ENCL_PAGE_VA_OFFSET_MASK;
+   encl_page->epc_page = epc_page;
+
+   return epc_page;
+}
+
 static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,