Re: Checkpoint/restore of capabilities

2016-10-18 Thread Denis Huber
Hello Norman,

you are right, it is quite complicated, but I think I now understand 
Genode's capability concept on Fiasco.OC. Let me recap it:

I created a simple figure [1] to illustrate my thoughts. A component has 
a capability map and a kernel-internal capability space. Each managed RPC 
object has a capability that points to a capability map slot, which 
stores a system-global identifier called a badge. The index of the 
corresponding capability space slot can be computed from the capability 
map slot, and that capability space slot refers to the object identity, 
which is an IPC gate.

[1] 
https://github.com/702nADOS/genode-CheckpointRestore-SharedMemory/blob/b78f529818d01b42f0b35845e36e4e1d08b22eba/drawio_genode_capability_foc.png
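
To make these relationships concrete to myself, here is a tiny 
standalone model (plain C++, not Genode's actual types; the badge value 
and the slot computation are deliberate simplifications):

  /* toy model: a capability refers to a cap map slot, which stores the
     badge; the kernel's cap space slot, derived from the cap map slot,
     refers to the object identity (an IPC gate) */
  #include <cstdio>

  struct Cap_map_slot
  {
      unsigned long badge;   /* system-global identifier    */
      unsigned      index;   /* position within the cap map */
  };

  /* simplification: the cap space slot index equals the cap map index */
  unsigned cap_space_slot(Cap_map_slot const &s) { return s.index; }

  int main()
  {
      Cap_map_slot const slot { 0x42, 7 };   /* made-up badge and index */
      std::printf("badge %#lx -> cap space slot %u -> IPC gate\n",
                  slot.badge, cap_space_slot(slot));
      return 0;
  }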

In order to restore a component on another ECU, the checkpointed 
variables representing capabilities (entries in memory, e.g., on the 
stack) have to be made valid again. Therefore, I have to restore the 
IPC gate and the capability space slot pointing to it, and allocate a 
new badge, because a badge is valid only within one system and the 
component is migrated to another. I also have to restore the capability 
map slot so that it points to the new badge, and recreate the RPC object.

In the following, I assume that the RPC objects of the target component 
are created by the Checkpoint/Restore component (i.e., it intercepts the 
session requests and provides its own sessions at child-creation time). 
The other case, local RPC objects created by the target component 
itself, will be discussed later, provided I find the time:

By virtualizing the session RPC objects and the normal RPC objects, I 
can checkpoint their state and thus recreate an RPC object later. When I 
do that, the recreated RPC object has a new capability (local to the 
Checkpoint/Restore component) and a valid badge; implicitly, a valid IPC 
gate is also recreated. The target component then has to know this 
capability inside its own protection domain. Therefore, the capability 
space slot and the capability map slot have to point to the IPC gate and 
to the new badge, respectively.
* The capability space slot is recreated by issuing l4_task_map to map a 
capability from core to the target child. This is done by extending the 
Foc_native_pd interface (see Norman's earlier mail).
* The capability map slot is recreated by 
Capability_map::insert(new_badge, old_kcap). Thus, I have to checkpoint 
the kcap beforehand via Capability_map::find(badge)->kcap(), using the 
badge that is valid at checkpoint time (see the sketch below).
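
Put together, checkpointing and restoring one slot would look roughly 
like this (a sketch only: error handling is omitted, and whether 
cap_map() is reachable from the outside is exactly the open question 
below):

  /* --- at checkpoint time (badge is the one valid back then) --- */
  Genode::addr_t const old_kcap = cap_map()->find(badge)->kcap();

  /* --- at restore time --- */
  /* 1. recreate the virtualized RPC object -> new badge, new IPC gate */
  /* 2. map the IPC gate from core into the child via the extended     */
  /*    Foc_native_pd interface (l4_task_map under the hood)           */
  /* 3. re-point the child's capability map slot:                      */
  cap_map()->insert(new_badge, old_kcap);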

What I am still missing is a pointer to the target component's internal 
capability map. I already have all dataspace capabilities that are 
attached to the target's address space. With such a pointer, I could 
cast it to a Capability_map* and use its methods to manipulate the 
underlying AVL tree. Please correct me if I am wrong.
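
Assuming I had that pointer, the manipulation from the outside could 
look like this (purely hypothetical sketch; cap_map_offset stands for 
exactly the piece of information I am missing):

  using namespace Genode;

  /* attach the checkpointed dataspace of the target locally */
  void *local_base = env()->rm_session()->attach(target_ds_cap);

  /* placeholder - the actual location of the child's cap map is the
     unknown part */
  addr_t const cap_map_offset = 0;

  Capability_map *map = reinterpret_cast<Capability_map *>(
                            (addr_t)local_base + cap_map_offset);
  map->insert(new_badge, old_kcap);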

Norman, in one of your previous mails you proposed a rough idea of how 
to obtain a dataspace capability for the capability map through the PD 
session:

On 07.10.2016 09:48, Norman Feske wrote:
> 2. We may let the child pro-actively propagate information about its
>    capability space to the outside so that the monitoring component can
>    conveniently intercept this information. E.g. as a rough idea, we
>    could add a 'Pd_session::cap_space_dataspace' RPC function where a
>    component can request a dataspace capability for a memory buffer
>    where it reports the layout information of its capability space.
>    This could happen internally in the base library. So it would be
>    transparent for the application code.
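
For concreteness, I picture the proposed extension roughly like this 
(just my reading of the idea; this interface does not exist yet):

  /* hypothetical addition to the PD-session RPC interface */
  struct Pd_session_cap_space_addendum
  {
      /* dataspace in which the component reports the layout of its
         capability space, filled in transparently by the base library */
      virtual Genode::Dataspace_capability cap_space_dataspace() = 0;
  };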

Can you, or of course anyone else, elaborate on how this "could happen 
internally in the base library"? Does core know the locations of the 
capability maps of other components?


Kind regards,
Denis


PS: If my thoughts contain a mistake, please feel free to correct me. It 
would help me a lot :)



Re: Linking genode processes at different address space

2016-10-18 Thread Alexander Boettcher
On 18.10.2016 11:16, Parfait Tokponnon wrote:
> My real problem is this:
> When an EC (Execution Context) traps into the kernel, how can the
> kernel know which component it belongs to, i.e., which component
> originated the trap? And when the kernel, on returning to userspace,
> elects an EC, which component does that EC belong to? I would like to
> obtain from the kernel, at runtime, the component an EC belongs to.
> Is this possible?

You will need to add your own support code to the kernel and to
Genode/NOVA to correlate the two.

If your setup is not yet highly dynamic, I would advise enabling the
debug output in the kernel at the beginning of

sys_create_ec
sys_create_pd

and, correspondingly, adding debug output of the process and thread
names in Genode before all occurrences of

create_ec
create_pd

in repos/base-nova.

With this information you can correlate the EC and PD pointers in the
kernel with the Genode names of the processes and threads.
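
For example, on the Genode side something as simple as the following
before the syscall would do (the exact call sites and name accessors
are from memory, so treat it as a sketch):

  /* hypothetical debug output in repos/base-nova, right before the
     create_ec syscall; '_name' and 'pd_name' are assumed accessors */
  Genode::log("create_ec: thread='", _name, "' pd='", pd_name, "'");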

Hope it helps,

Alex.
-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Re: Number of components per init

2016-10-18 Thread Norman Feske
Hi Roman,

> I agree. Making the preservation configurable makes the memory 
> reservations more transparent, i.e. everything that requires memory is 
> visible in init's configuration - well, at least if init is used in a 
> static way. Or are there still other memory 'pools' one might not be 
> aware of?

on NOVA, the most important one is the kernel's memory pool, which has a
fixed size that is defined in the kernel's linker script. The linker
script is located in nova/src/hypervisor.ld (look for '_mempool_f').

Another limited resource is core's capability space, in particular the
metadata required to manage the lifetime of capabilities. The details
differ from kernel to kernel. On most base platforms, this information
is kept in statically allocated arrays, which are dimensioned to
accommodate the current scenarios. Core is in a special position because
it has to keep track of all capabilities in the system (capabilities are
allocated via core's PD service). Since core's capability space is
limited, we should apply Genode's resource-trading concept to
capabilities, too. In fact, I plan to implement this idea sometime next
year. Until then, we have to live with the situation that capability
allocations are not properly accounted for (which is a potential
denial-of-service issue).

> One last question: how do I calculate the required memory preservation 
> for init on nova_x86_64, based on the number of children?

I cannot give you a precise formula. My previously reported experiment,
where I started over 70 children with the default preservation of 128
KiB, suggests that 2 KiB per child should suffice. Make it 16 KiB per
child and you should be really fine. ;-)
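
As a back-of-the-envelope example for the scenario above (estimates,
not guarantees):

  children            =  70
  per-child estimate  =  16 KiB   (generous; 2 KiB seemed to suffice)
  preservation        >= 70 * 16 KiB = 1120 KiB, i.e., roughly 1.1 MiB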

Cheers
Norman

-- 
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
