Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-12-01 Thread Stefano Stabellini
On Thu, 1 Dec 2016, Volodymyr Babchuk wrote:
> > - The TEE may run in parallel with the guest OS; this means that we have
> > to make sure the page will never be removed by the guest OS (see the
> > XENMEM_decrease_reservation hypercall).
> Hmm... I don't know the details of how Xen handles guest memory. Can we
> somehow pin pages so they can't be removed until the client unregisters the
> shared memory buffer?
> In the new OP-TEE shmem design there will be a call to register shared
> memory. The client will pass a list of pages in this call and OP-TEE will
> map them in its address space. At this moment we need to pin them in the
> hypervisor, so the guest can't get rid of them until the "unregister shared
> memory" call is made. Is this possible?

Yes, pages can be pinned, but we need to cover it in the design doc and
the code.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-12-01 Thread Volodymyr Babchuk
Hi Julien

On 1 December 2016 at 17:19, Julien Grall  wrote:
> On 29/11/16 19:19, Volodymyr Babchuk wrote:
>>
>> Hi Julien,
>
>
> Hi Volodymyr,
>
>
>>
>>
>> On 29 November 2016 at 20:55, Julien Grall  wrote:
>>>
>>> Hi Volodymyr,
>>>
>>> On 29/11/16 17:40, Volodymyr Babchuk wrote:


 On 29 November 2016 at 18:02, Julien Grall  wrote:
>
>
> On 29/11/16 15:27, Volodymyr Babchuk wrote:
>>
>>
>> On 28 November 2016 at 22:10, Julien Grall 
>> wrote:
>>>
>>>
>>> On 28/11/16 18:09, Volodymyr Babchuk wrote:


 On 28 November 2016 at 18:14, Julien Grall 
 wrote:
>
>
> On 24/11/16 21:10, Volodymyr Babchuk wrote:
>
>
> I don't follow your point here. Why would the SMC handler need to map the
> guest memory?


 Because this is how parameters are passed. We can pass some parameters
 in registers but, for example, in OP-TEE the registers hold only the address
 of the command buffer. The actual parameters are in this command buffer.
 Some of those parameters can be references to other memory objects.
 So, to translate IPAs to PAs, we need to map this command buffer,
 analyze it, and so on.
>>>
>>>
>>>
>>> So the SMC issued will contain a PA of a page belonging to the guest or
>>> Xen?
>>
>> It will be a guest page. But all references to other pages will have
>> real PAs, so the TEE can work with them.
>>
>> Let's dive into an example: the hypervisor traps an SMC; the mediation
>> layer (see below) can see that there was an INVOKE_COMMAND request. The
>> address of the command buffer is in a register pair (r1, r2). The
>> mediation layer changes the address in this register pair to the real PA
>> of the command buffer. Then it maps the specified page and checks the
>> parameters. One of the parameters has type MEMREF, so the mediation
>> layer has to change the IPA of the specified buffer to a PA. Then it
>> issues the real SMC call. After the SMC returns, it inspects the
>> registers and the buffer again and replaces the memory references back.
>
>
> I was about to ask whether SMC calls have some kind of metadata to describe
> the parameters, but you answered it in another mail. So I will follow up there.
Yes, it looked like a question to me, so I answered there.

> Regarding the rest, you said that the buffer passed to the real TEE will be
> baked into guest memory. There are a few problems with that you don't seem to
> address in this design document:
> - The buffer may be contiguous in the IPA space but discontiguous in
> PA space. This is because Xen may not be able to allocate all the memory for
> the guest contiguously in PA space. So how do you plan to handle buffers
> greater than the Xen page granularity (i.e. 4K)?
Yep. I'm currently finishing a rework of memory handling in OP-TEE. All
memory-related requests will work with page lists. That should
eliminate this problem.

> - Can all memory types be passed to the TEE (e.g. foreign pages,
> grants, mmio...)? I suspect not.
Currently I plan to map only user pages or pages allocated by
__get_free_pages(). Right now I can't imagine why we would need to support
other memory types.

> - The TEE may run in parallel with the guest OS; this means that we have
> to make sure the page will never be removed by the guest OS (see the
> XENMEM_decrease_reservation hypercall).
Hmm... I don't know the details of how Xen handles guest memory. Can we
somehow pin pages so they can't be removed until the client unregisters the
shared memory buffer?
In the new OP-TEE shmem design there will be a call to register shared
memory. The client will pass a list of pages in this call and OP-TEE will
map them in its address space. At this moment we need to pin them in the
hypervisor, so the guest can't get rid of them until the "unregister shared
memory" call is made. Is this possible?

> - The IPA -> PA translation can be slow, as this would need to be
> done in software (see p2m_lookup). Is there any upper limit on the number of
> buffers and indirections available?
They are limited by the virtual address space in OP-TEE. Currently it is
about 16M for shared memory.
There will be at least one IPA->PA translation per std SMC call (to
find the command buffer), and more translations if it is a "register shared
memory" call.
In other call types OP-TEE uses a shared memory cookie (reference)
instead of addresses and resolves the cookie to an actual address on the
OP-TEE core side. This should minimize the number of IPA->PA translations.

-- 
WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babc...@gmail.com



Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-30 Thread Stefano Stabellini
On Mon, 28 Nov 2016, Julien Grall wrote:
> > > If not, then it might be worth considering a 3rd solution where the TEE SMC
> > > calls are forwarded to a specific domain handling the SMC on behalf of the
> > > guests. This would allow upgrading the TEE layer without having to upgrade
> > > the hypervisor.
> > Yes, this is a good idea. How could this look? I imagine the following flow:
> > the hypervisor traps the SMC and uses an event channel to pass the request to
> > Dom0. Some userspace daemon receives it, maps the pages with the request data,
> > alters them (e.g. by replacing IPAs with PAs), asks the hypervisor to
> > issue the real SMC, then alters the response and only then returns the data
> > back to the guest.
> 
> The event channel is only a way to notify (similar to an interrupt); you would
> need a shared memory page between the hypervisor and the client to communicate
> the SMC information.
> 
> I was thinking to take advantage of the VM event API for trapping the SMC. But
> I am not sure if it is the best solution here. Stefano, do you have any
> opinions here?
> 
> > I can see only one benefit there - this code will not be in the
> > hypervisor. And there are a number of drawbacks:
> > 
> > Stability: if this userspace daemon crashes or gets killed by, say, the
> > OOM killer, we will lose information about all opened sessions, mapped
> > shared buffers, etc. That would be a complete disaster.
> 
> I disagree with your statement: you would gain in isolation. If your userspace
> crashes (because of an emulation bug), you will only lose access to the TEE for
> a bit. If the hypervisor crashes (because of an emulation bug), then you take
> down the platform. I agree that you lose information when the userspace app
> crashes, but your platform is still up. Isn't that the most important thing?
> 
> Note that I think it would be "fairly easy" to implement code to reset
> everything or to keep a backup on the side.
> 
> > Performance: how big will the latency introduced by switching between
> > hypervisor, Dom0 SVC and USR modes be? I have seen a use case where the TEE
> > is part of a video playback pipeline (it decodes DRM media).
> > There can also be questions about security, but Dom0 can in any case
> > access any memory from any guest.
> 
> But those concerns would be the same in the hypervisor, right? If your
> emulation is buggy, then a guest would get access to all the memory.
> 
> > But I really like the idea, because I don't want to mess with the
> > hypervisor when I don't need to. So, what do you think - how will it
> > affect performance?
> 
> I can't tell here. I would recommend you to try a quick prototype (e.g.
> receiving and sending an SMC) and see what the overhead would be.
> 
> When I wrote my previous e-mail, I mentioned a "specific domain" because I
> don't think it is strictly necessary to forward the SMC to Dom0. If you are
> concerned about overloading Dom0, you could have a separate service domain
> that would handle TEE for you. You could have your "custom OS" handling TEE
> requests directly in kernel space (i.e. SVC).
> 
> This would be up to the developer of this TEE-layer to decide what to do.

Thanks Julien for bringing me into the discussion. These are my
thoughts on the matter.


Running emulators in Dom0 (AKA QEMU on x86) has always meant giving them
full Dom0 privileges so far. I don't think that is acceptable. There is
work underway on the x86 side of things to fix the situation; see:

http://marc.info/?i=1479489244-2201-1-git-send-email-paul.durrant%40citrix.com

But if the past is any indication of future development speed, we are
still a couple of Xen releases away at least from having unprivileged
emulators in Dom0 on x86. By unprivileged, I mean that they are not able
to map any random page in memory, but just the ones belonging to the
virtual machine that they are serving. Until then, having an emulator in
userspace Dom0 is just as bad as having it in the hypervisor from a
security standpoint.

I would only consider this option if we mandate from the start, in the
design doc and implementations, that the emulators need to be
unprivileged on ARM. This would likely require a new set of hypercalls
and possibly Linux privcmds. And even then, this solution would still
present a series of problems:

- latency
- scalability
- validation against the root of trust
- certifications (because they are part of Dom0 and nobody can certify
  that)


The other option that traditionally is proposed is using stubdoms.
Specialized little VMs to run emulators, each VM runs one emulator
instance. They are far better from a security standpoint, and could be
certifiable. They might still pose problems from a root of trust point
of view. However, the real issue with stubdoms is just that, being
treated as VMs, they show up in "xl list", they introduce latency, they
consume a lot of memory, etc. Also, dealing with Mini-OS can be unfunny.
I think that this option is only a little better than the previous
option, but it is still not great.


This brings us to the third and last option. Introducing the emulators
in the hypervisor. This is acceptable only if they are run in a lower

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Volodymyr Babchuk
Hi Julien,



On 29 November 2016 at 20:55, Julien Grall  wrote:
> Hi Volodymyr,
>
> On 29/11/16 17:40, Volodymyr Babchuk wrote:
>>
>> On 29 November 2016 at 18:02, Julien Grall  wrote:
>>>
>>> On 29/11/16 15:27, Volodymyr Babchuk wrote:

 On 28 November 2016 at 22:10, Julien Grall  wrote:
>
> On 28/11/16 18:09, Volodymyr Babchuk wrote:
>>
>> On 28 November 2016 at 18:14, Julien Grall 
>> wrote:
>>>
>>> On 24/11/16 21:10, Volodymyr Babchuk wrote:
>>>
>>> I don't follow your point here. Why would the SMC handler need to map the
>>> guest memory?
>>
>> Because this is how parameters are passed. We can pass some parameters
>> in registers but, for example, in OP-TEE the registers hold only the address
>> of the command buffer. The actual parameters are in this command buffer.
>> Some of those parameters can be references to other memory objects.
>> So, to translate IPAs to PAs, we need to map this command buffer,
>> analyze it, and so on.
>
>
> So the SMC issued will contain a PA of a page belonging to the guest or Xen?
It will be a guest page. But all references to other pages will have
real PAs, so the TEE can work with them.

Let's dive into an example: the hypervisor traps an SMC; the mediation
layer (see below) can see that there was an INVOKE_COMMAND request. The
address of the command buffer is in a register pair (r1, r2). The mediation
layer changes the address in this register pair to the real PA of the
command buffer. Then it maps the specified page and checks the parameters.
One of the parameters has type MEMREF, so the mediation layer has to change
the IPA of the specified buffer to a PA. Then it issues the real SMC call.
After the SMC returns, it inspects the registers and the buffer again and
replaces the memory references back.

>>>
>>
>> I can see only one benefit there - this code will be not in
>> hypervisor. And there are number of drawbacks:
>>
>> Stability: if this userspace demon will crash or get killed by, say,
>> OOM, we will lose information about all opened sessions, mapped shared
>> buffers, etc.That would be complete disaster.
>
>
>
>
> I disagree on your statement, you would gain in isolation. If your
> userspace
> crashes (because of an emulation bug), you will only loose access to
> TEE
> for
> a bit. If the hypervisor crashes (because of an emulation bug), then
> you
> take down the platform. I agree that you lose information when the
> userspace
> app is crashing but your platform is still up. Isn't it the most
> important?


 This is arguable and depends on what you consider more valuable:
 system security or system stability.
 I'm standing on the security side.
>>>
>>>
>>>
>>> How would handling the SMC in the hypervisor be more secure? The OP-TEE
>>> support will introduce code that will need to:
>>> - Whitelist SMC calls
>>> - Alter SMC calls to translate an IPA to a PA
>>> - Keep track of sessions
>>> - 
>>> In general, I am quite concerned every time someone asks to add emulation
>>> to the hypervisor. This increases the possibility of bugs, and this is even
>>> more true with emulation.
>>
>> It is not emulation. Actually, it is virtualization. It is like the
>> hypervisor providing a virtual CPU or a virtual GIC. There can be a virtual
>> TEE as well.
>
>
> We seem to disagree on terminology here. Virtualization is a generic
> term for creating a virtual resource. This can be done by the hardware or
> by software (aka emulation).
>
> In the case of the GIC, we use both:
> - emulation for the distributor
> - HW-assisted virtualization for the CPU interface
>
> In your case, you need to mangle the SMC parameters, so this is software
> virtualization (aka emulation).
Yep, probably a terminology issue there. For me, "emulation" is an
imitation of the behavior of something, while the proposed solution is
about mediation, not imitation.
Perhaps we can call it "TEE access mediation" or a "mediation layer"?

[...]

>>
>> Also, I hate to ask again, but can we ask some TrustZone guys how
>> they see the interaction between the Normal and Secure worlds in the
>> presence of a hypervisor?
>
>
> This has been asked and I am waiting for an answer.
Oh, okay. Thank you.

-- 
WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babc...@gmail.com



Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Julien Grall

Hi Volodymyr,

On 29/11/16 17:40, Volodymyr Babchuk wrote:

On 29 November 2016 at 18:02, Julien Grall  wrote:

On 29/11/16 15:27, Volodymyr Babchuk wrote:

On 28 November 2016 at 22:10, Julien Grall  wrote:

On 28/11/16 18:09, Volodymyr Babchuk wrote:

On 28 November 2016 at 18:14, Julien Grall  wrote:

On 24/11/16 21:10, Volodymyr Babchuk wrote:

I don't follow your point here. Why would the SMC handler need to map the
guest memory?

Because this is how parameters are passed. We can pass some parameters
in registers, but in OP-TEE, for example, the registers hold only the
address of a command buffer. The actual parameters live in this command
buffer, and some of them can be references to other memory objects.
So, to translate IPAs to PAs, we need to map this command buffer,
analyze it, and so on.


So the SMC issued will contain a PA of a page belonging to the guest or Xen?





I can see only one benefit there - this code will not be in the
hypervisor. And there are a number of drawbacks:

Stability: if this userspace daemon crashes or gets killed by, say,
the OOM killer, we will lose information about all opened sessions,
mapped shared buffers, etc. That would be a complete disaster.




I disagree with your statement; you would gain in isolation. If your
userspace crashes (because of an emulation bug), you will only lose
access to the TEE for a bit. If the hypervisor crashes (because of an
emulation bug), then you take down the platform. I agree that you lose
information when the userspace app crashes, but your platform is still
up. Isn't that the most important thing?


This is arguable and depends on what you consider more valuable:
system security or system stability.
I stand on the security side.



How would handling SMC in the hypervisor be more secure? The OP-TEE
support will introduce code that will need to:
- Whitelist SMC calls
- Alter SMC calls to translate IPAs to PAs
- Keep track of sessions
- 
In general, I am quite concerned every time someone asks to add
emulation to the hypervisor. This increases the possibility of bugs,
even more so with emulation.

It is not emulation; actually, it is virtualization. It is like the
hypervisor providing a virtual CPU or a virtual GIC. There can be a
virtual TEE as well.


We seem to disagree on terminology here. Virtualization is a generic
term for creating a virtual resource. This can be done by the hardware
or by software (aka emulation).


In the case of the GIC, we use both:
- emulation for the distributor
- HW-assisted for the CPU interface

In your case, you need to mangle the SMC parameters so this is software 
virtualization (aka emulation).


[...]


I am not saying this is the best way, but I think we should explore more
before saying "let's put more emulation in the hypervisor", because here
we are not talking about one TEE but potentially multiple ones.

Yep. I'm not yet convinced we should use a separate VM, but let's try to
imagine how it would look.

Someone (can we trust dom0?) should identify which TEE is running on the
system and create a service domain with the appropriate TEE handler.
There will be a problem if we are using Secure Boot. A bootloader (like
ARM Trusted Firmware) can verify the Xen and Dom0 kernel images, but it
can't verify which TEE handler will be loaded into the service domain.
This verification can be done only by dom0, so dom0 userspace would have
to be part of the chain of trust. This imposes restrictions on the
structure of dom0.

Then, when it comes to an SMC call from a guest, there should be a
special subsystem in the hypervisor. It will trap the SMC, put all the
necessary data into a ring buffer and send an event to the service
domain. We will probably need some hypercall to register the service
domain as the SMC handler. But again, how can we trust that domain?
Probably dom0 will say "use domain N as the trusted SMC handler".

Anyway, the service domain handles the SMC (probably by doing a real SMC
to the TEE) and uses the same ring buffer/event channel mechanism to
return data to the calling guest. During SMC handling it will map guest
memory pages by IPA, so we will need a hypercall to "map arbitrary guest
memory by guest IPA".
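A minimal sketch of the ring the hypervisor and the service domain might share before kicking the event channel; no such ABI exists in Xen today, so every name and field below is invented for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical record the hypervisor would place in a shared ring
 * before signalling the service domain over an event channel. */
struct smc_request {
    uint16_t domid;      /* calling guest */
    uint16_t vcpu;       /* vCPU that trapped the SMC */
    uint64_t regs[8];    /* x0..x7 as the guest issued them */
};

#define RING_SIZE 8      /* must be a power of two */

struct smc_ring {
    struct smc_request req[RING_SIZE];
    uint32_t prod, cons; /* free-running indices, masked on access */
};

/* Hypervisor side: enqueue a trapped SMC, then kick the event channel. */
static int ring_put(struct smc_ring *r, const struct smc_request *q)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;       /* ring full: caller must back off */
    r->req[r->prod & (RING_SIZE - 1)] = *q;
    r->prod++;
    return 0;
}

/* Service-domain side: dequeue the next request to mediate. */
static int ring_get(struct smc_ring *r, struct smc_request *q)
{
    if (r->prod == r->cons)
        return -1;       /* ring empty */
    *q = r->req[r->cons & (RING_SIZE - 1)];
    r->cons++;
    return 0;
}
```

A real implementation would also need memory barriers between the payload write and the index update, which are elided here.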

If the service domain needs to wake up a guest that is sleeping in TEE
client code, it will ask the hypervisor to fire an interrupt to that
guest.

Then I took a look at Mini-OS. It looks like it does not support
aarch64, so it would need to be ported.

On the other hand, TEE virtualization right in the hypervisor would ease
things significantly: no problems with secure boot, trusted service
domains, memory mapping, etc.


Let's see what the others think.



Also, I hate to ask again, but can we ask some TrustZone guys on how
they see interaction between Normal and Secure worlds in presence of
hypervisor?


This has been asked and I am waiting an answer.

Regards,

--
Julien Grall



Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Volodymyr Babchuk
On 29 November 2016 at 18:02, Julien Grall  wrote:
> Hello Volodymyr,
>
> On 29/11/16 15:27, Volodymyr Babchuk wrote:
>>
>> On 28 November 2016 at 22:10, Julien Grall  wrote:
>>>
>>> On 28/11/16 18:09, Volodymyr Babchuk wrote:

 On 28 November 2016 at 18:14, Julien Grall  wrote:
>
> On 24/11/16 21:10, Volodymyr Babchuk wrote:
>>>
>>> I mean, is there any command that will affect the trusted OS as a
>>> whole (e.g. reset it, or else) and not only the session for a given
>>> guest?
>>
>> Yes, there are such commands. For example, there is a command that
>> enables/disables caching for shared memory.
>> We should disable this caching, by the way.
>> The SMC handler should manage commands like this.
>
>
> So you have to implement a white-list, right?
Yes. Actually, I imagine this as a huge switch(operation_id) where the
default action is to return an error to the caller.
Only in this way can I be sure that I'm properly handling calls to the
TEE. Yes, with this design the maintainer will need to keep the
virtualization code in sync with the TEE's internal APIs, but only this
approach will ensure security and stability.
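That switch could be sketched as below. Only the 0xBF00FF01 UID-query ID comes from the SMC Calling Convention; the session-call IDs are made-up placeholders, not OP-TEE's real optee_smc.h values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative function IDs: only FID_GET_OS_UUID (0xBF00FF01, the
 * SMCCC Trusted OS Call UID query) is a real value; the session IDs
 * are placeholders. */
#define FID_GET_OS_UUID    0xBF00FF01u
#define FID_OPEN_SESSION   0x32000004u  /* made up */
#define FID_CLOSE_SESSION  0x32000005u  /* made up */

#define SMC_ERR_UNKNOWN    (-1)

/* The "huge switch(operation_id)" described above: anything the
 * mediation layer does not explicitly know is rejected, never
 * forwarded to the TEE. */
static int dispatch_tee_call(uint32_t fid)
{
    switch (fid) {
    case FID_GET_OS_UUID:
    case FID_OPEN_SESSION:
    case FID_CLOSE_SESSION:
        return 0;               /* known call: mediate and forward */
    default:
        return SMC_ERR_UNKNOWN; /* default: return an error to caller */
    }
}
```

The default-deny shape is the point: adding a new TEE API means the maintainer must consciously add a case, which is exactly the "keep in sync" cost described above.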


 No, they are not standardized and they can change in the future.
 OP-TEE tries to be backward-compatible, though. So the hypervisor can
 drop unknown capability flags in the GET_CAPABILITIES SMC call. In this
 way it can ensure that the guest will use only APIs that are known to
 the hypervisor.

> How about other TEE?


 I can't say for sure. But I think, situation is the same as with OP-TEE
>>>
>>>
>>>
>>> By any chance, is there a TEE specification out somewhere?
>>
>> Yes. There are GlobalPlatform API specs. You can find them at [3].
>> You will probably be interested in "TEE System Architecture v1.0".
>
>
> Thank you I will give a look.
This is a rather high-level design, because GP leaves many details
implementation-specific. They focus more on the client side.

>
>>
>>>

> If not, then it might be worth to consider a 3rd solution where the TEE
> SMC
> calls are forwarded to a specific domain handling the SMC on behalf of
> the
> guests. This would allow to upgrade the TEE layer without having to
> upgrade
> the hypervisor.


 Yes, this is a good idea. How could this look? I imagine the following
 flow: the hypervisor traps the SMC and uses an event channel to pass
 the request to Dom0. Some userspace daemon receives it, maps the pages
 with the request data, alters them (e.g. by replacing IPAs with PAs),
 asks the hypervisor to issue the real SMC, then alters the response
 and only then returns the data back to the guest.
>>>
>>>
>>>
>>> The event channel is only a way to notify (similar to an interrupt), you
>>> would need a shared memory page between the hypervisor and the client to
>>> communicate the SMC information.
>>>
>>> I was thinking of taking advantage of the VM event API for trapping
>>> the SMC. But I am not sure if it is the best solution here. Stefano,
>>> do you have any opinions here?
>>>

 Is this even possible with current APIs available to dom0?
>>>
>>>
>>>
>>> It is always possible to extend the API if something is missing :).
>>
>> Yes. On the other hand, I don't like the idea that some domain can map
>> any memory page of another domain to play with SMC calls. We can't use
>> grefs there, so the service domain would have to be able to map any
>> memory page it wants. This is insecure.
>
>
> I don't follow your point here. Why would the SMC handler need to map the
> guest memory?
Because this is how parameters are passed. We can pass some parameters
in registers, but in OP-TEE, for example, the registers hold only the
address of a command buffer. The actual parameters live in this command
buffer, and some of them can be references to other memory objects.
So, to translate IPAs to PAs, we need to map this command buffer,
analyze it, and so on.
>

 I can see only one benefit there - this code will not be in the
 hypervisor. And there are a number of drawbacks:

 Stability: if this userspace daemon crashes or gets killed by, say,
 the OOM killer, we will lose information about all opened sessions,
 mapped shared buffers, etc. That would be a complete disaster.
>>>
>>>
>>>
>>> I disagree with your statement; you would gain in isolation. If your
>>> userspace crashes (because of an emulation bug), you will only lose
>>> access to the TEE for a bit. If the hypervisor crashes (because of an
>>> emulation bug), then you take down the platform. I agree that you
>>> lose information when the userspace app crashes, but your platform is
>>> still up. Isn't that the most important thing?
>>
>> This is arguable and depends on what you consider more valuable:
>> system security or system stability.
>> I stand on the security side.
>
>
> How would handling SMC in the hypervisor be more secure? The OP-TEE
> support will introduce code that will need to:
> - Whitelist SMC calls
> - 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Julien Grall

Hello Volodymyr,

On 29/11/16 15:27, Volodymyr Babchuk wrote:

On 28 November 2016 at 22:10, Julien Grall  wrote:

On 28/11/16 18:09, Volodymyr Babchuk wrote:

On 28 November 2016 at 18:14, Julien Grall  wrote:

On 24/11/16 21:10, Volodymyr Babchuk wrote:

I mean, is there any command that will affect the trusted OS as a whole
(e.g. reset it, or else) and not only the session for a given guest?

Yes, there are such commands. For example, there is a command that
enables/disables caching for shared memory.
We should disable this caching, by the way.
The SMC handler should manage commands like this.


So you have to implement a white-list, right?

[...]


No, they are not standardized and they can change in the future.
OP-TEE tries to be backward-compatible, though. So the hypervisor can
drop unknown capability flags in the GET_CAPABILITIES SMC call. In this
way it can ensure that the guest will use only APIs that are known to
the hypervisor.


How about other TEE?


I can't say for sure. But I think, situation is the same as with OP-TEE



By any chance, is there a TEE specification out somewhere?

Yes. There are GlobalPlatform API specs. You can find them at [3].
You will probably be interested in "TEE System Architecture v1.0".


Thank you I will give a look.








If not, then it might be worth to consider a 3rd solution where the TEE
SMC
calls are forwarded to a specific domain handling the SMC on behalf of
the
guests. This would allow to upgrade the TEE layer without having to
upgrade
the hypervisor.


Yes, this is a good idea. How could this look? I imagine the following
flow: the hypervisor traps the SMC and uses an event channel to pass the
request to Dom0. Some userspace daemon receives it, maps the pages with
the request data, alters them (e.g. by replacing IPAs with PAs), asks
the hypervisor to issue the real SMC, then alters the response and only
then returns the data back to the guest.



The event channel is only a way to notify (similar to an interrupt), you
would need a shared memory page between the hypervisor and the client to
communicate the SMC information.

I was thinking of taking advantage of the VM event API for trapping the
SMC. But I am not sure if it is the best solution here. Stefano, do you
have any opinions here?



Is this even possible with current APIs available to dom0?



It is always possible to extend the API if something is missing :).

Yes. On the other hand, I don't like the idea that some domain can map
any memory page of another domain to play with SMC calls. We can't use
grefs there, so the service domain would have to be able to map any
memory page it wants. This is insecure.


I don't follow your point here. Why would the SMC handler need to map 
the guest memory?




I can see only one benefit there - this code will not be in the
hypervisor. And there are a number of drawbacks:

Stability: if this userspace daemon crashes or gets killed by, say,
the OOM killer, we will lose information about all opened sessions,
mapped shared buffers, etc. That would be a complete disaster.



I disagree with your statement; you would gain in isolation. If your
userspace crashes (because of an emulation bug), you will only lose
access to the TEE for a bit. If the hypervisor crashes (because of an
emulation bug), then you take down the platform. I agree that you lose
information when the userspace app crashes, but your platform is still
up. Isn't that the most important thing?

This is arguable and depends on what you consider more valuable:
system security or system stability.
I stand on the security side.


How would handling SMC in the hypervisor be more secure? The OP-TEE
support will introduce code that will need to:

- Whitelist SMC calls
- Alter SMC calls to translate IPAs to PAs
- Keep track of sessions
- 

In general, I am quite concerned every time someone asks to add
emulation to the hypervisor. This increases the possibility of bugs,
even more so with emulation.


[...]


Performance: how big will the latency introduced by switching between
the hypervisor and dom0's SVC and USR modes be? I have seen use cases
where the TEE is part of a video playback pipeline (it decodes DRM
media).
There can also be questions about security, but Dom0 in any case can
access any memory of any guest.



But those concerns would be the same in the hypervisor, right? If your
emulation is buggy then a guest would get access to all the memory.

Yes, but I expect that it is harder to compromise the hypervisor than to
compromise a guest domain.


I am afraid this would need more justification. If you use
disaggregation and are careful enough to isolate your service, then it
would be hard to compromise a separate VM that only handles SMCs on
behalf of the guests.





But I really like the idea, because I don't want to mess with the
hypervisor when I don't need to. So, what do you think: how will it
affect performance?



I can't tell here. I would recommend trying a quick prototype (e.g.
receiving and sending SMCs) and seeing what the overhead would be.

When 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Volodymyr Babchuk
On 28 November 2016 at 22:10, Julien Grall  wrote:
>
>
> On 28/11/16 18:09, Volodymyr Babchuk wrote:
>>
>> Hello,
>
>
> Hello Volodymyr,
>
>> On 28 November 2016 at 18:14, Julien Grall  wrote:
>>>
>>> On 24/11/16 21:10, Volodymyr Babchuk wrote:

 My name is Volodymyr Babchuk; I'm working at EPAM Systems with a bunch
 of other guys like Artem Mygaiev and Andrii Anisov. My responsibility
 there is security in embedded systems.

 I would like to discuss approaches to OP-TEE support in XEN.
>>>
>>>
>>>
>>> Thank you for sharing this, I am CC-ing some people who showed interest
>>> on
>>> accessing trusted firmware from the guest.
>>>
>>> In the future, please try to CC relevant people (in this case ARM
>>> maintainers) to avoid any delay on the answer.
>>
>> Thanks. I never worked with the XEN community before, so I don't know
>> who is who :)
>
>
> You can give a look to the MAINTAINERS file at the root xen.git.
>
> [...]
>
 You can find patches at [1] if you are interested.
 During working on this PoC I have identified main questions that
 should be answered:

 On XEN side:
 1. SMC handling in XEN. There are many different SMCs, and only a
 portion of them belong to the TEE. We need some SMC dispatcher that
 will route calls to the different subsystems: PSCI calls to the PSCI
 subsystem, TEE calls to the TEE subsystem.
>>>
>>>
>>>
>>> So from my understanding of this paragraph, all SMC TEE calls should have
>>> a
>>> guest ID in the command. We don't expect command affecting all TEE.
>>> Correct?
>>
>> Yes. The idea is to trap the SMC, alter it, add the guest ID (into r7,
>> as the SMCCC says) and then do the real SMC to pass it to the TEE.
>>
>> But I don't get this: "We don't expect command affecting all TEE".
>> What did you mean?
>
>
> I mean, is there any command that will affect the trusted OS as a
> whole (e.g. reset it, or else) and not only the session for a given
> guest?
Yes, there are such commands. For example, there is a command that
enables/disables caching for shared memory.
We should disable this caching, by the way.
The SMC handler should manage commands like this.
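The trap-alter-forward flow mentioned above (tagging the call with the calling guest's ID in r7, as the SMCCC prescribes, before issuing the real SMC) could be sketched as follows; the structures are illustrative, not Xen's real vCPU state:

```c
#include <assert.h>
#include <stdint.h>

/* Per the SMC Calling Convention, w7/r7 carries the Client ID when a
 * hypervisor forwards an SMC on behalf of a VM. This struct is a
 * sketch, not Xen's actual register layout. */
struct vcpu_regs { uint64_t x[8]; };

static void tag_and_forward(struct vcpu_regs *regs, uint16_t domid)
{
    /* Overwrite whatever the guest left in x7: the guest must never be
     * able to impersonate another domain. */
    regs->x[7] = domid;
    /* ... translate any IPAs referenced by x0..x6 here, then issue the
     * real SMC instruction ... */
}
```

The unconditional overwrite is the security-relevant part: the TEE's per-guest isolation is only as strong as its trust that r7 was set by the hypervisor, not by the guest.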

>
>>
>>>

 2. Support for different TEEs. There are OP-TEE, Google Trusty, TI
 M-Shield... They all work through SMC, but they have different
 protocols. Currently we are aiming only at OP-TEE, but we need some
 generic API in XEN so that support for a new TEE can be added easily.
>>>
>>>
>>> For instance you
>>
>> Hm?
>>>
>>> Is there any generic way to detect which TEE is being used, and its
>>> version?
>>
>> Yes, according to the SMCCC, there is call number 0xBF00FF01 that
>> should return the Trusted OS UID.
>> OP-TEE supports this call. I hope other TEEs also support it. In this
>> way we can tell which Trusted OS is running on the host.
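A sketch of such a probe. The function ID 0xBF00FF01 is the SMCCC Trusted OS Call UID query; the SMC itself is stubbed out here, and the UID value is made up (it is not OP-TEE's real UID), since the real call can only be made against secure firmware:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SMCCC_TOS_UID 0xBF00FF01u  /* Trusted OS Call UID query, per SMCCC */

struct smc_result { uint32_t a0, a1, a2, a3; };

/* On real hardware this would execute an SMC instruction; here it is
 * a stub returning a made-up UID so the matching logic can be shown. */
static struct smc_result issue_smc(uint32_t fid)
{
    struct smc_result res = {0};
    if (fid == SMCCC_TOS_UID) {
        res.a0 = 0x11111111; res.a1 = 0x22222222;
        res.a2 = 0x33333333; res.a3 = 0x44444444;
    }
    return res;
}

/* Probe the Trusted OS UID and pick the matching mediation backend. */
static const char *detect_tee(void)
{
    /* Illustrative value only, not OP-TEE's real UID. */
    static const uint32_t known_uid[4] =
        { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };
    struct smc_result r = issue_smc(SMCCC_TOS_UID);
    uint32_t got[4] = { r.a0, r.a1, r.a2, r.a3 };
    return memcmp(got, known_uid, sizeof(got)) == 0 ? "optee" : "unknown";
}
```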
>
>
> Looking at the SMCCC, this SMC call seems to be mandatory.
>>

 3. TEE services. The hypervisor should inform the TEE when a new guest
 is created or destroyed, and it should tag SMCs to the TEE with a
 GuestID, so the TEE can isolate guest data on its side.

 4. SMC mangling. The Rich OS communicates with the TEE using shared
 buffers, providing physical memory addresses. The hypervisor should
 convert IPAs to PAs.
>>>
>>>
>>>
>>> I am actually concerned about this bit. From my understanding, the
>>> hypervisor would need some knowledge of the SMC.
>>
>> Yes, it was my first idea - a separate subsystem in the hypervisor
>> that handles SMC calls for different TEEs. This subsystem has a number
>> of backends, one for each TEE.
>>
>>> So are the OP-TEE SMC calls fully standardized? By that I mean, they
>>> will not change across versions?
>>
>> No, they are not standardized and they can change in the future.
>> OP-TEE tries to be backward-compatible, though. So the hypervisor can
>> drop unknown capability flags in the GET_CAPABILITIES SMC call. In
>> this way it can ensure that the guest will use only APIs that are
>> known to the hypervisor.
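The capability-dropping idea could look like the sketch below. The flag names and bit values are invented for illustration; OP-TEE's real flags are the OPTEE_SMC_SEC_CAP_* constants and need not match these:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative capability bits, not OP-TEE's real values. */
#define CAP_RESERVED_SHM   (1u << 0)
#define CAP_DYNAMIC_SHM    (1u << 1)
#define CAP_FUTURE_FEATURE (1u << 7)   /* unknown to this hypervisor */

/* Capabilities the mediation layer knows how to handle. */
#define CAPS_KNOWN (CAP_RESERVED_SHM | CAP_DYNAMIC_SHM)

/* Filter the TEE's GET_CAPABILITIES answer before handing it to the
 * guest: any bit we cannot mediate is silently dropped, so the guest
 * never takes an API path the hypervisor does not understand. */
static uint32_t filter_capabilities(uint32_t tee_caps)
{
    return tee_caps & CAPS_KNOWN;
}
```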
>>
>>> How about other TEE?
>>
>> I can't say for sure. But I think, situation is the same as with OP-TEE
>
>
> By any chance, is there a TEE specification out somewhere?
Yes. There are GlobalPlatform API specs. You can find them at [3].
You will probably be interested in "TEE System Architecture v1.0".

>
>>
>>> If not, then it might be worth to consider a 3rd solution where the TEE
>>> SMC
>>> calls are forwarded to a specific domain handling the SMC on behalf of
>>> the
>>> guests. This would allow to upgrade the TEE layer without having to
>>> upgrade
>>> the hypervisor.
>>
>> Yes, this is a good idea. How could this look? I imagine the following
>> flow: the hypervisor traps the SMC and uses an event channel to pass
>> the request to Dom0. Some userspace daemon receives it, maps the pages
>> with the request data, alters them (e.g. by replacing IPAs with PAs),
>> asks the hypervisor to issue the real SMC, then alters the response
>> and only then returns the data back to the guest.
>
>
> The event channel is only a way to notify (similar to an interrupt), you
> 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-29 Thread Volodymyr Babchuk
Hi Dongli!

On 29 November 2016 at 06:47, Dongli Zhang  wrote:
> 2016-11-25 5:10 GMT+08:00 Volodymyr Babchuk :
>> Hello all,
>
> Hi Volodymyr!
>
>>
>> My name is Volodymyr Babchuk; I'm working at EPAM Systems with a
>> bunch of other guys like Artem Mygaiev and Andrii Anisov. My
>> responsibility there is security in embedded systems.
>>
>> I would like to discuss approaches to OP-TEE support in XEN.
>> But first, a small introduction for those who are not familiar with
>> the topic:
>>
>> There is such a thing as the Security Extensions for ARM cores. They
>> are part of a bigger thing - ARM TrustZone. The Security Extensions
>> allow the CPU to work in two states: Normal (Non-Secure) and Secure.
>> Other parts of TrustZone provide hardware protection for peripherals,
>> memory regions, etc. So, when the CPU is running in the Secure state,
>> it has access to all peripherals and memories, and it can also forbid
>> access to some of those resources from the Normal state. The Secure
>> state has its own Supervisor and User modes, so we can run an OS
>> there.
>> Just to be clear, the CPU can run in the following modes: user,
>> supervisor, hypervisor, secure monitor, secure user and secure
>> supervisor. The last two are basically the same as normal USR and
>> SVC, but in the Secure state. And Secure Monitor is a special mode
>> that can switch the Secure state to the Non-Secure state and back.
>> The SMC instruction takes the processor into this mode.
>> So, we can run one OS in the Secure state and one in the Non-Secure
>> state. They will be mostly independent from each other. In normal
>> (non-secure) mode we run, say, Linux. And in Secure mode we run some
>> Trusted OS (or Secure OS). The Secure OS provides services like
>> cryptography, secure storage, DRM, virtual SIM, etc. to the Normal
>> world OS (also called the "Rich OS"; it can be Linux, for example).
>> There is a standard created by GlobalPlatform called the "Trusted
>> Execution Environment" (TEE). It defines requirements and APIs for
>> the Secure OS. There are a number of Secure OSes that try to conform
>> to the TEE specification, and OP-TEE is one of them.
>>
>> The CPU normally runs in the Non-Secure state. When the normal world
>> OS needs some service from the Secure OS, it issues an SMC
>> instruction with command data. This instruction puts the processor
>> into Secure Monitor mode, where the secure monitor code examines the
>> request. If it is a request to the Secure OS, it switches the
>> processor to Secure Supervisor mode and passes the request to the
>> Secure OS. When the Secure OS finishes its work, it also issues an
>> SMC instruction, and the Secure Monitor switches the processor back
>> to the Non-Secure state, returning control to the Rich OS.
>> The SMC instruction is used not only to communicate with the Secure
>> OS, but also for other services like power control, control of
>> processor cores, etc. ARM wrote the document "SMC Calling
>> Convention", which describes in detail how this instruction can be
>> used. By the way, the SMCCC requires passing a Virtual Machine ID in
>> one of the registers during an SMC call.
>
> Would you please explain the objective of the project? We use TrustZone
> on smartphones because we do not trust the OS kernel, and we prefer to
> protect the integrity and privacy of execution/data with hardware.
> Actually, this can be achieved with ARM virtualization as well, but
> TrustZone is preferred because virtualization-based protection is not
> hardware based.
>
> Would you like to run unmodified software in the Rich OS so that the
> GlobalPlatform API would be supported? Who would be in the TCB? Only
> the OP-TEE OS? The OP-TEE OS and the Xen hypervisor? Or all of the
> OP-TEE OS, the Xen hypervisor and dom0? To clarify my question: who
> will be in the secure world and who will be in the normal world in
> your design?
Yes. The idea is to run unmodified software that uses the GP APIs in the
guests.
I would like to minimize the TCB (Trusted Computing Base), so I would
prefer not to introduce any extra domains into it. In my design the
hypervisor mangles SMCs, so the TCB includes the TEE and the Xen
hypervisor.
On the other hand, Julien wants to move all the SMC handling logic into
some domain. In that case the TCB will include the TEE, the hypervisor
and DomS (the service domain). This is not good from a security
standpoint, but I'm sure we'll reach agreement on this. Actually, that
is the goal of this thread: to develop a design that will satisfy all
parties.

> Per my understanding, TCB consists of both OP-TEE OS and xen.
As I said earlier - this is my original idea.

> In which scenario will this project run? Are you going to provide the
> GlobalPlatform API in a cloud environment or in an embedded environment
> running virtualization software?
This should be a generic design that can be used on any platform:
automotive, mobile, desktop, server, etc.

> Are you going to run OP-TEE OS as a guest domain or to run OP-TEE OS in
> parallel to xen hypervisor? (e.g., OP-TEE OS in secure world and xen/dom0/domU
> as a whole in normal world)
OP-TEE (or any other Trusted OS) will be running in the Secure World.
The hypervisor and guests will be running in the Normal World.

> Sorry 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-28 Thread Dongli Zhang
2016-11-25 5:10 GMT+08:00 Volodymyr Babchuk :
> Hello all,

Hi Volodymyr!

>
> My name is Volodymyr Babchuk; I'm working at EPAM Systems with a bunch
> of other guys like Artem Mygaiev and Andrii Anisov. My responsibility
> there is security in embedded systems.
>
> I would like to discuss approaches to OP-TEE support in XEN.
> But first, a small introduction for those who are not familiar with
> the topic:
>
> There is such a thing as the Security Extensions for ARM cores. They
> are part of a bigger thing - ARM TrustZone. The Security Extensions
> allow the CPU to work in two states: Normal (Non-Secure) and Secure.
> Other parts of TrustZone provide hardware protection for peripherals,
> memory regions, etc. So, when the CPU is running in the Secure state,
> it has access to all peripherals and memories, and it can also forbid
> access to some of those resources from the Normal state. The Secure
> state has its own Supervisor and User modes, so we can run an OS there.
> Just to be clear, the CPU can run in the following modes: user,
> supervisor, hypervisor, secure monitor, secure user and secure
> supervisor. The last two are basically the same as normal USR and SVC,
> but in the Secure state. And Secure Monitor is a special mode that can
> switch the Secure state to the Non-Secure state and back. The SMC
> instruction takes the processor into this mode.
> So, we can run one OS in the Secure state and one in the Non-Secure
> state. They will be mostly independent from each other. In normal
> (non-secure) mode we run, say, Linux. And in Secure mode we run some
> Trusted OS (or Secure OS). The Secure OS provides services like
> cryptography, secure storage, DRM, virtual SIM, etc. to the Normal
> world OS (also called the "Rich OS"; it can be Linux, for example).
> There is a standard created by GlobalPlatform called the "Trusted
> Execution Environment" (TEE). It defines requirements and APIs for the
> Secure OS. There are a number of Secure OSes that try to conform to
> the TEE specification, and OP-TEE is one of them.
>
> The CPU normally runs in the Non-Secure state. When the normal world
> OS needs some service from the Secure OS, it issues an SMC instruction
> with command data. This instruction puts the processor into Secure
> Monitor mode, where the secure monitor code examines the request. If
> it is a request to the Secure OS, it switches the processor to Secure
> Supervisor mode and passes the request to the Secure OS. When the
> Secure OS finishes its work, it also issues an SMC instruction, and
> the Secure Monitor switches the processor back to the Non-Secure
> state, returning control to the Rich OS.
> The SMC instruction is used not only to communicate with the Secure
> OS, but also for other services like power control, control of
> processor cores, etc. ARM wrote the document "SMC Calling Convention",
> which describes in detail how this instruction can be used. By the
> way, the SMCCC requires passing a Virtual Machine ID in one of the
> registers during an SMC call.

Would you please explain the objective of the project? We use TrustZone
on smartphones because we do not trust the OS kernel, and we prefer to
protect the integrity and privacy of execution/data with hardware.
Actually, this can be achieved with ARM virtualization as well, but
TrustZone is preferred because virtualization-based protection is not
hardware based.

Would you like to run unmodified software in the Rich OS so that the
GlobalPlatform API would be supported? Who would be in the TCB? Only the
OP-TEE OS? The OP-TEE OS and the Xen hypervisor? Or all of the OP-TEE
OS, the Xen hypervisor and dom0? To clarify my question: who will be in
the secure world and who will be in the normal world in your design?

Per my understanding, TCB consists of both OP-TEE OS and xen.

In which scenario will this project run? Are you going to provide the
GlobalPlatform API in a cloud environment or in an embedded environment
running virtualization software?

Are you going to run the OP-TEE OS as a guest domain, or run the OP-TEE
OS in parallel to the Xen hypervisor? (e.g., OP-TEE OS in the secure
world and xen/dom0/domU as a whole in the normal world)

Sorry for my those many questions :)

>
> Now let's get back to XEN. We want XEN to provide TEE services to Dom0
> and the guests. I can see different approaches to this:
>
>  - One of them I call "Emulated TEE". Guests do not have access to the
> real TEE OS. Instead, somewhere (in Dom0) we run an instance of the
> TEE for each of the guests. This provides perfect isolation, as the
> TEE instances do not know anything about each other. But we will miss
> all the hardware benefits like cryptographic acceleration.
>  - Another way is to allow guests to work with the real TEE running in
> the Secure state. In this case the TEE should be aware of the guests
> so it can track shared memory, opened sessions, etc. It requires some
> careful programming to ensure that guests' belongings are isolated
> from each other. But as a reward we are much closer to the original
> TrustZone design.
>
> Personally, I prefer the second approach. I even did a small PoC that
> allows different guests to work with OP-TEE (but not simultaneously!).
> You can find patches at [1] if you are interested.
> During 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-28 Thread Julien Grall



On 28/11/16 18:09, Volodymyr Babchuk wrote:

Hello,


Hello Volodymyr,


On 28 November 2016 at 18:14, Julien Grall  wrote:

On 24/11/16 21:10, Volodymyr Babchuk wrote:

My name is Volodymyr Babchuk; I'm working at EPAM Systems with a bunch
of other guys like Artem Mygaiev and Andrii Anisov. My responsibility
there is security in embedded systems.

I would like to discuss approaches to OP-TEE support in XEN.



Thank you for sharing this, I am CC-ing some people who showed interest on
accessing trusted firmware from the guest.

In the future, please try to CC relevant people (in this case ARM
maintainers) to avoid any delay on the answer.

Thanks. I never worked with the XEN community before, so I don't know who is who :)


You can give a look to the MAINTAINERS file at the root xen.git.

[...]


You can find patches at [1] if you are interested.
During working on this PoC I have identified main questions that
should be answered:

On XEN side:
1. SMC handling in XEN. There are many different SMCs, and only a
portion of them belong to the TEE. We need some SMC dispatcher that will
route calls to the different subsystems: PSCI calls to the PSCI
subsystem, TEE calls to the TEE subsystem.
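The dispatcher idea maps naturally onto the SMC Calling Convention's function-ID layout, where bits 29:24 name the owning entity (4 for standard secure services such as PSCI, 50-63 for Trusted OS calls). The routing below is a sketch, not Xen's actual handler:

```c
#include <assert.h>
#include <stdint.h>

/* Per SMCCC, bits 29:24 of a function ID identify the owning entity. */
#define SMCCC_OWNER(fid)   (((fid) >> 24) & 0x3f)
#define OWNER_STD          4   /* standard secure services (PSCI) */

enum smc_route { ROUTE_PSCI, ROUTE_TEE, ROUTE_DENY };

/* Route a trapped SMC to the right subsystem by its owner field. */
static enum smc_route route_smc(uint32_t fid)
{
    uint32_t owner = SMCCC_OWNER(fid);
    if (owner == OWNER_STD)
        return ROUTE_PSCI;
    if (owner >= 50 && owner <= 63)   /* Trusted OS owner range */
        return ROUTE_TEE;
    return ROUTE_DENY;                /* unknown owner: reject */
}
```

For example, PSCI_VERSION (0x84000000) carries owner 4 and would be routed to the PSCI subsystem, while OP-TEE's calls fall in the Trusted OS range.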



So from my understanding of this paragraph, all SMC TEE calls should have a
guest ID in the command. We don't expect command affecting all TEE. Correct?

Yes. The idea is to trap the SMC, alter it, add the guest ID (into r7, as
the SMCCC says) and then do the real SMC to pass it to the TEE.
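For illustration, that tagging could look like the sketch below (not real Xen code; `set_client_id` is a made-up helper). Per the SMCCC, the optional Client ID occupies the lower 16 bits of W7, with the Secure OS ID in the upper half, so the hypervisor only rewrites the low half:

```c
#include <stdint.h>

/* Sketch: tag a forwarded SMC with the caller's VM ID, following the
 * SMCCC optional Client ID convention: bits [15:0] of W7 carry the
 * client (VM) ID, bits [31:16] the Secure OS ID. */
static uint32_t set_client_id(uint32_t r7, uint16_t vmid)
{
    /* Keep the Secure OS ID half, replace the client ID half. */
    return (r7 & 0xffff0000u) | vmid;
}
```

The guest's own value in the low half of r7 is deliberately discarded, so a guest cannot impersonate another VM.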

But I didn't get this: "We don't expect command affecting all TEE".
What did you mean?


I mean, is there any command that will affect the trusted OS as a whole
(e.g. reset it) and not only the session for a given guest?








2. Support for different TEEs. There are OP-TEE, Google Trusty, TI
M-Shield... They all work through SMC, but have different protocols.
Currently we are aiming only at OP-TEE, but we need some generic API
in XEN, so support for a new TEE can be easily added.


For instance you

Hm?

Is there any generic way to detect which TEE is being used, and the
version?

Yes, according to the SMCCC, there is call number 0xBF00FF01 that should
return the Trusted OS UID.
OP-TEE supports this call. I hope other TEEs also support it. In this
way we can tell which Trusted OS is running on the host.
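As a sketch of that detection idea: issue the Trusted OS Call UID SMC (0xBF00FF01) and compare the four UID words returned in r0-r3. The constants below match OP-TEE's optee_msg.h at the time of writing, `uid_is_optee` is a made-up helper, and the actual SMC issuing is omitted:

```c
#include <stdint.h>

/* Sketch: check whether the UID returned by the Trusted OS Call UID SMC
 * (function 0xBF00FF01, result in r0-r3) is OP-TEE's. The UID words are
 * taken from OP-TEE's optee_msg.h (OPTEE_MSG_UID_0..3). */
static int uid_is_optee(uint32_t r0, uint32_t r1, uint32_t r2, uint32_t r3)
{
    return r0 == 0x384fb3e0u && r1 == 0xe7f811e3u &&
           r2 == 0xaf630002u && r3 == 0xa5d5c51bu;
}
```

A real implementation would execute the SMC instruction first and feed the returned registers into this check.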


Looking at the SMCCC, this SMC call seems to be mandatory.





3. TEE services. The hypervisor should inform the TEE when a new guest is
created or destroyed, and it should tag SMCs to the TEE with a GuestID, so
the TEE can isolate guest data on its side.

4. SMC mangling. The Rich OS communicates with the TEE using shared
buffers, by providing physical memory addresses. The hypervisor should
convert IPAs to PAs.



I am actually concerned about this bit. From my understanding, the
hypervisor would need some knowledge of the SMC.

Yes, that was my first idea: a separate subsystem in the hypervisor that
handles SMC calls for different TEEs. This subsystem has a number of
backends, one for each TEE.


So are the OP-TEE SMC calls fully standardized? By that I mean they will not
change across versions?

No, they are not standardized and they can change in the future.
OP-TEE tries to be backward-compatible, though. So the hypervisor can drop
unknown capability flags in the GET_CAPABILITIES SMC call. In this way it
can ensure that a guest will use only APIs that are known by the
hypervisor.
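A sketch of that filtering idea: when forwarding OP-TEE's GET_CAPABILITIES reply to a guest, clear any flag the mediator does not understand, so the guest never negotiates an API the hypervisor cannot handle. `KNOWN_CAPS` here is an illustrative mask, not a real OP-TEE value:

```c
#include <stdint.h>

/* Illustrative mask of capability bits the hypervisor-side mediator
 * knows how to handle (made-up value, not from optee_smc.h). */
#define KNOWN_CAPS 0x0000000fu

/* Drop any capability flag the mediator does not understand before
 * returning the GET_CAPABILITIES result to the guest. */
static uint32_t filter_caps(uint32_t tee_caps)
{
    return tee_caps & KNOWN_CAPS;
}
```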


How about other TEE?

I can't say for sure, but I think the situation is the same as with OP-TEE.


By any chance, is there a TEE specification out somewhere?




If not, then it might be worth considering a 3rd solution where the TEE SMC
calls are forwarded to a specific domain handling the SMC on behalf of the
guests. This would allow upgrading the TEE layer without having to upgrade
the hypervisor.

Yes, this is a good idea. How could this look? I imagine the following flow:
the hypervisor traps the SMC and uses an event channel to pass the request
to Dom0. Some userspace daemon receives it, maps the pages with the request
data, alters it (e.g. by replacing IPAs with PAs), asks the hypervisor to
issue the real SMC, then alters the response and only then returns data
back to the guest.


The event channel is only a way to notify (similar to an interrupt), you 
would need a shared memory page between the hypervisor and the client to 
communicate the SMC information.


I was thinking of taking advantage of the VM event API for trapping the 
SMC. But I am not sure if it is the best solution here. Stefano, do you 
have any opinions here?




Is this even possible with current APIs available to dom0?


It is always possible to extend the API if something is missing :).



I can see only one benefit there: this code will not be in the
hypervisor. And there are a number of drawbacks:

Stability: if this userspace daemon crashes or gets killed by, say, the
OOM killer, we will lose information about all opened sessions, mapped
shared buffers, etc. That would be a complete disaster.


I disagree with your statement: you would gain in isolation. If your 
userspace crashes (because of an emulation bug), you 

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-28 Thread Volodymyr Babchuk
Hello,

On 28 November 2016 at 18:14, Julien Grall  wrote:
> On 24/11/16 21:10, Volodymyr Babchuk wrote:
>>
>> Hello all,
>
>
> Hello,
>
>> My name is Volodymyr Babchuk, I'm working at EPAM Systems with a bunch
>> of other guys like Artem Mygaiev or Andrii Anisov. My responsibility
>> there is security in embedded systems.
>>
>> I would like to discuss approaches to OP-TEE support in XEN.
>
>
> Thank you for sharing this, I am CC-ing some people who showed interest on
> accessing trusted firmware from the guest.
>
> In the future, please try to CC relevant people (in this case ARM
> maintainers) to avoid any delay on the answer.
Thanks. I have never worked with the Xen community before, so I don't know who is who :)

>
>
>> [...]
>>
>> Now let's get back to XEN. We want XEN to provide TEE services to Dom0
>> and guests. I can see different approaches to this:
>>
>>  - One of them I call "Emulated TEE". Guests do not have access to the
>> real TEE OS. Instead, somewhere (in Dom0) we run an instance of the TEE
>> for each of the guests. This provides perfect isolation, as the TEE
>> instances do not know anything about each other. But we will miss all
>> the hardware benefits like cryptographic acceleration.
>>  - Another way is to allow guests to work with the real TEE running in
>> Secure state. In this case the TEE should be aware of the guests to
>> track shared memories, opened sessions, etc. It requires some careful
>> programming to ensure that guest belongings are isolated from each
>> other. But as a reward we are much closer to the original TrustZone
>> design.
>>
>> Personally I prefer the second approach. I even did a small PoC that
>> allows different guests to work with OP-TEE (but not simultaneously!).
>
>
> I agree that the first approach is not good because you can't take advantage
> of the hardware. However, I have some concerns about the second (see below).
>
> Bear in mind that I don't know much about TEE :).
>
>> You can find patches at [1] if you are interested.
>> While working on this PoC I have identified the main questions that
>> should be answered:
>>
>> On XEN side:
>> 1. SMC handling in XEN. There are many different SMCs and only a portion
>> of them belongs to the TEE. We need some SMC dispatcher that will route
>> calls to different subsystems: PSCI calls to the PSCI subsystem, TEE
>> calls to the TEE subsystem.
>
>
> So from my understanding of this paragraph, all SMC TEE calls should have a
> guest ID in the command. We don't expect command affecting all TEE. Correct?

Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-28 Thread Julien Grall

On 24/11/16 21:10, Volodymyr Babchuk wrote:

Hello all,


Hello,


My name is Volodymyr Babchuk, I'm working at EPAM Systems with a bunch
of other guys like Artem Mygaiev or Andrii Anisov. My responsibility
there is security in embedded systems.

I would like to discuss approaches to OP-TEE support in XEN.


Thank you for sharing this, I am CC-ing some people who showed interest 
in accessing trusted firmware from the guest.


In the future, please try to CC relevant people (in this case ARM 
maintainers) to avoid any delay in the answer.



[...]

Now let's get back to XEN. We want XEN to provide TEE services to Dom0
and guests. I can see different approaches to this:

 - One of them I call "Emulated TEE". Guests do not have access to the
real TEE OS. Instead, somewhere (in Dom0) we run an instance of the TEE
for each of the guests. This provides perfect isolation, as the TEE
instances do not know anything about each other. But we will miss all
the hardware benefits like cryptographic acceleration.
 - Another way is to allow guests to work with the real TEE running in
Secure state. In this case the TEE should be aware of the guests to
track shared memories, opened sessions, etc. It requires some careful
programming to ensure that guest belongings are isolated from each
other. But as a reward we are much closer to the original TrustZone
design.

Personally I prefer the second approach. I even did a small PoC that
allows different guests to work with OP-TEE (but not simultaneously!).


I agree that the first approach is not good because you can't take 
advantage of the hardware. However, I have some concerns about the 
second (see below).


Bear in mind that I don't know much about TEE :).


You can find patches at [1] if you are interested.
While working on this PoC I have identified the main questions that
should be answered:

On XEN side:
1. SMC handling in XEN. There are many different SMCs and only a portion
of them belongs to the TEE. We need some SMC dispatcher that will route
calls to different subsystems: PSCI calls to the PSCI subsystem, TEE
calls to the TEE subsystem.


So from my understanding of this paragraph, all SMC TEE calls should 
have a guest ID in the command. We don't expect command affecting all 
TEE. Correct?




2. Support for different TEEs. There are OP-TEE, Google Trusty, TI
M-Shield... They all work through SMC, but have different protocols.
Currently we are aiming only at OP-TEE, but we need some generic API
in XEN, so support for a new TEE can be easily added.

For instance you

Is there any generic way to detect which TEE is being used and the version?

[Xen-devel] [RFD] OP-TEE (and probably other TEEs) support

2016-11-24 Thread Volodymyr Babchuk
Hello all,

My name is Volodymyr Babchuk, I'm working at EPAM Systems with a bunch
of other guys like Artem Mygaiev or Andrii Anisov. My responsibility
there is security in embedded systems.

I would like to discuss approaches to OP-TEE support in XEN.
But first, a small introduction for those who are not familiar with the topic:

There is such a thing as Security Extensions for ARM cores. It is part
of a bigger thing: ARM TrustZone. The Security Extensions allow the CPU to
work in two states: Normal (Non-Secure) and Secure.
Other parts of TrustZone provide hardware protection for peripherals,
memory regions, etc. So, when the CPU is running in Secure state, it has
access to all peripherals and memories, and it can also forbid access to
some of those resources from Normal state. Secure state has its own
Supervisor and User modes, so we can run an OS there.
Just to be clear, the CPU can run in the following modes: user, supervisor,
hypervisor, secure monitor, secure user and secure supervisor. The last
two are basically the same as normal USR and SVC, but in Secure state.
And Secure Monitor is a special mode that can switch Secure state to
Non-Secure state and back. The SMC instruction takes the processor into
this mode.
So, we can run one OS in Secure state and one in Non-Secure state.
They will be mostly independent of each other. In Normal
(Non-Secure) mode we run, say, Linux. And in Secure mode we run some
Trusted OS (or Secure OS). The Secure OS provides services like
cryptography, secure storage, DRM, virtual SIM, etc. to the Normal world
OS (also called the "Rich OS"; it can be Linux, for example).
There is a standard created by GlobalPlatform called "Trusted
Execution Environment" (TEE). It defines requirements and APIs for the
Secure OS. There are a number of Secure OSes that try to conform to the
TEE specification, and OP-TEE is one of them.

The CPU normally runs in Non-Secure state. When the Normal world OS
needs some services from the Secure OS, it issues the SMC instruction with
command data. This instruction puts the processor into Secure Monitor
mode, where the secure monitor code examines the request. If it is a
request to the Secure OS, it switches the processor to Secure Supervisor
mode and passes the request to the Secure OS. When the Secure OS finishes
its work, it also issues the SMC instruction, and the Secure Monitor
switches the processor back to Non-Secure state, returning control to the
Rich OS.
The SMC instruction can be used not only to communicate with the Secure
OS, but also for other services like power control, control of processor
cores, etc. ARM wrote a document, the "SMC Calling Convention", which
describes in detail how this instruction can be used. By the way, the
SMCCC requires passing a Virtual Machine ID in one of the registers
during an SMC call.

Now let's get back to XEN. We want XEN to provide TEE services to Dom0
and guests. I can see different approaches to this:

 - One of them I call "Emulated TEE". Guests do not have access to the
real TEE OS. Instead, somewhere (in Dom0) we run an instance of the TEE
for each of the guests. This provides perfect isolation, as the TEE
instances do not know anything about each other. But we will miss all
the hardware benefits like cryptographic acceleration.
 - Another way is to allow guests to work with the real TEE running in
Secure state. In this case the TEE should be aware of the guests to
track shared memories, opened sessions, etc. It requires some careful
programming to ensure that guest belongings are isolated from each
other. But as a reward we are much closer to the original TrustZone
design.

Personally I prefer the second approach. I even did a small PoC that
allows different guests to work with OP-TEE (but not simultaneously!).
You can find patches at [1] if you are interested.
While working on this PoC I have identified the main questions that
should be answered:

On XEN side:
1. SMC handling in XEN. There are many different SMCs and only a portion
of them belongs to the TEE. We need some SMC dispatcher that will route
calls to different subsystems: PSCI calls to the PSCI subsystem, TEE
calls to the TEE subsystem.
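Such a dispatcher could key off the SMCCC "owning entity" field of the function ID (bits 29:24): owner 4 is the Standard Secure Service range (PSCI 0.2+), and owners 50-63 are reserved for Trusted OS calls. A minimal sketch, with illustrative names rather than real Xen symbols:

```c
#include <stdint.h>

/* Sketch of an SMC dispatcher keyed on the SMCCC function ID.
 * Bits 29:24 of the function ID name the "owning entity":
 * 4 = Standard Secure Service (PSCI 0.2+), 50-63 = Trusted OS. */
enum smc_dest { DEST_PSCI, DEST_TEE, DEST_UNKNOWN };

static uint32_t smccc_owner(uint32_t fid)
{
    return (fid >> 24) & 0x3f;          /* owning entity number */
}

enum smc_dest route_smc(uint32_t fid)
{
    uint32_t owner = smccc_owner(fid);

    if (owner == 4)                     /* PSCI and other standard calls */
        return DEST_PSCI;
    if (owner >= 50 && owner <= 63)     /* Trusted OS range (e.g. OP-TEE) */
        return DEST_TEE;
    return DEST_UNKNOWN;                /* leave for other handlers */
}
```

For example, PSCI_VERSION (0x84000000) lands in owner 4, while OP-TEE's calls use the Trusted OS owner 0x32.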

2. Support for different TEEs. There are OP-TEE, Google Trusty, TI
M-Shield... They all work through SMC, but have different protocols.
Currently we are aiming only at OP-TEE, but we need some generic API
in XEN, so support for a new TEE can be easily added.
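One possible shape for such an API, purely as a sketch (none of these names exist in the Xen tree): each TEE backend fills a small ops structure and registers it, with probe() matching the Trusted OS UID at boot:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative generic TEE mediator interface. Each backend
 * (OP-TEE, Trusty, ...) would fill one of these and register it. */
struct tee_ops {
    const char *name;
    int  (*probe)(void);                 /* e.g. match Trusted OS UID   */
    int  (*handle_smc)(uint32_t fid, uint64_t *regs);
    void (*domain_create)(uint16_t vmid);
    void (*domain_destroy)(uint16_t vmid);
};

static const struct tee_ops *active_tee;

/* Pick the first backend whose probe() succeeds; only one TEE can be
 * active at a time on a given host. */
static int tee_register(const struct tee_ops *ops)
{
    if (active_tee || !ops || !ops->probe || !ops->probe())
        return -1;                       /* already set, or not present */
    active_tee = ops;
    return 0;
}

/* Toy backend stub, for illustration only. */
static int toy_probe(void) { return 1; }
static const struct tee_ops toy_tee = { .name = "toy", .probe = toy_probe };
```

The SMC dispatcher would then forward all Trusted OS range calls to `active_tee->handle_smc`, and guest create/destroy paths would call the `domain_create`/`domain_destroy` hooks (covering point 3 as well).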

3. TEE services. The hypervisor should inform the TEE when a new guest is
created or destroyed, and it should tag SMCs to the TEE with a GuestID, so
the TEE can isolate guest data on its side.

4. SMC mangling. The Rich OS communicates with the TEE using shared
buffers, by providing physical memory addresses. The hypervisor should
convert IPAs to PAs.
Currently I'm rewriting parts of OP-TEE to make it support arbitrary
buffers originating from the Rich OS.
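The IPA-to-PA fix-up could be sketched as follows, assuming a page-granular second-stage translation; the p2m lookup is stubbed out with a demo function, and offsets within a page are preserved:

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1ull << PAGE_SHIFT) - 1)

/* Stand-in for the real second-stage (p2m) lookup: given the IPA of a
 * page, return the PA of the backing page. */
typedef uint64_t (*p2m_lookup_fn)(uint64_t ipa_page);

/* Translate one guest buffer address before handing it to the TEE. */
static uint64_t ipa_to_pa(p2m_lookup_fn lookup, uint64_t ipa)
{
    uint64_t pa_page = lookup(ipa & ~PAGE_MASK);
    return pa_page | (ipa & PAGE_MASK);  /* keep the in-page offset */
}

/* Demo lookup for illustration only: IPA page X maps to PA X + 1 GiB. */
static uint64_t demo_lookup(uint64_t ipa_page)
{
    return ipa_page + 0x40000000ull;
}
```

A real mediator would also have to walk multi-page buffer lists and pin the pages for as long as the TEE holds references to them.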

5. Events from TEE. This is a hard topic. Sometimes OP-TEE needs some
services from the Rich OS. For example, it wants Linux to service a
pending IRQ request, or to allocate a portion of shared memory, or to
lock the calling thread, etc. This is called an "RPC request". To do an
RPC request, OP-TEE initiates a return to the Normal World, but it sets a
special return code to