On 08/23/2018 03:27 PM, Volodymyr Babchuk wrote:
> Hi,
Hi,
> On 22.08.18 20:28, Julien Grall wrote:
>> Hi,
>> Please only CC relevant people to the patches. This could be done
>> easily using the new script add_maintainers.pl.
> Oh, I'm sorry. I used get_maintainers.pl.
>> On 22/08/18 15:11, Volodymyr Babchuk wrote:
>>> Add OP-TEE mediator, so guests can access OP-TEE services.
>>> The OP-TEE mediator supports address translation for DomUs.
>>> It tracks the execution of STD calls, correctly handles
>>> memory-related RPC requests, and tracks the buffers allocated for
>>> RPCs.
>>> With this patch OP-TEE successfully passes its own tests while the
>>> client is running in a DomU.
>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babc...@epam.com>
>>> ---
>>> Changes from "RFC":
>>>  - Removed the special case for Dom0/HwDOM
>>>  - No more support for plain OP-TEE (only OP-TEE with the
>>>    virtualization config enabled is supported)
>>>  - Multiple domains are now supported
>>>  - Pages that are shared between OP-TEE and a domain are now pinned
>>>  - Renamed CONFIG_ARM_OPTEE to CONFIG_OPTEE
>>>  - Command buffers from the domain are now shadowed by Xen
>>>  - The mediator now filters out unknown capabilities and requests
>>>  - Call contexts and shared memory objects are now stored per domain
>>>
>>>  xen/arch/arm/tee/Kconfig  |   4 +
>>>  xen/arch/arm/tee/Makefile |   1 +
>>>  xen/arch/arm/tee/optee.c  | 972 ++++++++++++++++++++++++++++++++++++
>> This patch is far too big to get a proper review with understanding
>> of the code. Can you split it into smaller ones with appropriate
>> commit messages?
> Yes, it is quite big. But this is a complete feature. I can't remove
> anything from it, because it would not work otherwise.
> I can split it into a series of patches that add the various pieces
> of code... But that will lead to patches with non-working code until
> the final one. Is this okay?
This is a new feature so it does not matter if it does not work until
the end. Although, ideally, this should not break the rest of the features.
What I want to avoid is a complex 900-line patch with very little
explanation of what is being done.
>> From a quick look at it, I would like to understand how the memory
>> allocated in Xen is bounded for a given guest? Same question for
>> the time.
> I store references to the allocated pages in the per-domain context,
> but they are not accounted as domain memory. These pages are needed
> by Xen to conceal the real PAs from the guest. I'm not sure if they
> should be accounted as memory allocated by the domain.
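For illustration, such per-domain bookkeeping might look roughly like
the sketch below. This is a minimal sketch: optee_domain_ctx, shm_buf
and the field names are assumptions made for the example, not the
identifiers used in the patch.

    #include <xen/list.h>
    #include <xen/mm.h>
    #include <xen/spinlock.h>

    /* Hypothetical per-domain OP-TEE state; names are illustrative. */
    struct shm_buf {
        struct list_head list;
        uint64_t cookie;            /* guest-visible handle */
        unsigned int page_cnt;
        struct page_info *pages[];  /* pinned pages backing the buffer;
                                       kept in Xen-owned memory so the
                                       real PAs handed to OP-TEE never
                                       reach the guest */
    };

    struct optee_domain_ctx {
        struct list_head shm_bufs;  /* buffers shared with OP-TEE */
        spinlock_t lock;
        unsigned int call_cnt;      /* outstanding standard calls */
        unsigned int shm_pages;     /* pages backing shared buffers */
    };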
The Xen heap can be quite limited. As the memory can stay around for a
long time, would it be possible for a guest to exhaust that pool?
> And what about time? Did you mean time accounting?
Xen only supports voluntary preemption. This means that a long-lasting
operation in Xen may block other vCPUs from running.
Calls such as p2m_lookup() are not cheap to use, as they require
walking the page tables in software. From a look at the code, the
number of calls will be bounded by a guest-controlled parameter.
I can't see anything in the hypervisor sanitizing those values, so the
guest can control how long the call will take and also how much memory
is "reserved" in the hypervisor, even if OP-TEE fails afterwards.
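For illustration, the kind of up-front bound that would address both
concerns might look like the sketch below. translate_buf_pages,
MAX_SHM_PAGES and the cap value are invented for the example, not taken
from the patch.

    #include <xen/errno.h>
    #include <xen/mm.h>
    #include <xen/sched.h>

    #define MAX_SHM_PAGES 512   /* arbitrary illustrative cap */

    /*
     * Translate a guest-supplied list of GFNs, taking a reference on
     * each page. Rejecting over-large requests up front bounds both
     * the time spent walking the p2m in software and the memory
     * "reserved" in the hypervisor.
     */
    static int translate_buf_pages(struct domain *d, const gfn_t *gfns,
                                   unsigned int nr,
                                   struct page_info **pages)
    {
        unsigned int i;

        if ( nr > MAX_SHM_PAGES )   /* sanitize guest-controlled count */
            return -EINVAL;

        for ( i = 0; i < nr; i++ )
        {
            pages[i] = get_page_from_gfn(d, gfn_x(gfns[i]), NULL,
                                         P2M_ALLOC);
            if ( !pages[i] )
            {
                while ( i-- )       /* drop references taken so far */
                    put_page(pages[i]);
                return -EINVAL;
            }
        }

        return 0;
    }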
>> I am interested in the normal case but also in the case where
>> someone malicious is using that API. How much damage can it do to
>> the hypervisor?
> Every standard (long-lasting) call requires a small amount of memory
> to store its context. Every shared buffer requires enough memory to
> store references to the shared pages.
> OP-TEE has limited resources, so it will not allow you to create,
> say, 100 calls and a couple of GBs of shared memory. I expect that it
> will limit the caller's memory overuse.
Do you mean per client instance? Or for OP-TEE in general?
In any case, Xen's memory allocation is always done before OP-TEE is
called. So there is still a window where the domain book-keeps a big
chunk of memory that will only be released at the end of the call.
> Apart from that, I can't imagine how a malicious user could damage
> the hypervisor.
See above. I think there is a lot of room for a guest to attack Xen.
Most likely you want to limit the number of calls done in parallel and
also the amount of shared memory mapped.
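For instance, a per-domain cap along these lines, reusing the
hypothetical optee_domain_ctx sketched earlier (MAX_STD_CALLS is an
arbitrary made-up limit, not something from the patch):

    #define MAX_STD_CALLS 16    /* arbitrary illustrative cap */

    /*
     * Refuse a new standard call once the per-domain cap is reached,
     * so a guest cannot pile up unbounded call contexts inside Xen.
     */
    static bool reserve_call_slot(struct optee_domain_ctx *ctx)
    {
        bool ok = false;

        spin_lock(&ctx->lock);
        if ( ctx->call_cnt < MAX_STD_CALLS )
        {
            ctx->call_cnt++;
            ok = true;
        }
        spin_unlock(&ctx->lock);

        return ok;
    }

The matching release would decrement call_cnt when the call completes
or is aborted, and ctx->shm_pages could cap the total shared memory in
the same way.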
Cheers,
--
Julien Grall