Re: Error unable to initialize cap quota of PD

2017-08-08 Thread Jörg-Christian Böhme

Hi Norman,

On Friday, 4 August 2017 at 11:56:38 CEST, Norman Feske wrote:


> While giving the scenario a quick spin, I observed another cap-quota
> issue, which is caused by the too-low default value of 50 - the value
> used for all <start> nodes that lack an explicit 'caps' attribute. By
> setting this value to 100, the scenario starts up with NOVA on Qemu
> (tested on the current staging branch). However, apparently the
> launcher component also needs a few quota adjustments. I just fixed
> the corresponding parts in the commit [2] on the staging branch. Could
> you give it a try?
>
> [2]
> https://github.com/genodelabs/genode/commit/94ef138f223ce6d9ab4a2608bc7cc73a76463335

I have tested it quickly: I checked out staging
(commit 005820bb7bf6cb8b833b4227b8377984df5f9ceb) and built the launcher
scenario. Now the scenario starts up. Thanks, Norman, for your quick fix.


> The 'drivers' subsystem [5] is an init instance with 7
> children (as of now). Hence, assigning 1000 caps to the subsystem seems
> to be reasonable.
>
> [5] repos/os/recipes/raw/drivers_interactive-pc/drivers.config


Ok, the drivers 'subsystem' is what I missed. I hadn't realized that
'drivers.config' is a subsystem configuration.


> The limit of a component decreases each time it creates a session to a
> service, because it thereby transfers some of its own quota limit to
> the server.
>
> Initially, the limit of 'drivers' is 300. But as soon as 'drivers'
> creates sessions to the outside, its limit decreases step by step. You
> can think of the limit as the balance of a bank account. If you
> transfer money to another account, the balance decreases. The "used"
> value is the fraction of the quota that the component turned into
> actual resources (like RPC objects, dataspaces, signal handlers). To
> stay with the analogy, these are the purchased goods.

Ok, so the limit also means the 'free cap resources' of the component,
and parts of it get transferred to other 'server components'. 'used'
only shows the RPC objects, dataspaces, and signal handlers used by the
current component itself, not those of the other 'server components'.
Now I got it.
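
Just to check my understanding, here is a tiny toy model of that
accounting (plain C++, not the Genode API - only the bank-account
analogy put into code):

    #include <cstdio>

    /* toy model of cap-quota accounting */
    struct Cap_account
    {
        unsigned limit; /* free cap resources, like a bank balance */
        unsigned used;  /* caps turned into actual resources       */

        /* creating a session donates caps to the server, so the
           balance shrinks */
        void transfer_to_server(unsigned caps) { limit -= caps; }

        /* turning quota into RPC objects, dataspaces, or signal
           handlers shows up in 'used' */
        void allocate(unsigned caps) { used += caps; }
    };

    int main()
    {
        Cap_account drivers { 300, 0 };  /* initial limit of 'drivers' */
        drivers.transfer_to_server(50);  /* session to another server  */
        drivers.allocate(30);            /* local RPC objects etc.     */
        printf("limit=%u used=%u\n", drivers.limit, drivers.used);
    }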


> I hope the above explanation sounds logical.


Yes, many thanks for your detailed explanation. I think I understand it now.


> Thanks for bringing up this
> topic on the mailing list. You may not be the only one confused by the
> messages. Finally, let me compliment you for using the "diag" feature
> and for including all important details of your scenario at the start
> of your posting.


No problem. I'm also a C++ developer at my company, and I sometimes get
'problems/crash dumps' from my colleagues/customers to fix without any
info (build/version number, etc.) :-).

Cheers
Jörg




Re: Blocking wait for interrupt

2017-08-08 Thread Johannes Kliemann
Hi Sebastian,

thanks for your explanation, it helped me to understand the problem.
I'm just not yet sure how exactly to call the task.

I have registered the interrupt via the Lx::Irq class.
But I'm not sure how to handle the transfer task. Currently I locally
create an Lx::Task that calls the xfer_master function (through a
wrapper, though) and call Lx::scheduler::schedule() after its creation.
But that seems to return before the task has completed.
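
Roughly, my current attempt looks like this (a simplified sketch; the
exact Lx::Task constructor arguments are only from my memory of
lx_kit/scheduler.h, so they may not match):

    #include <lx_kit/scheduler.h>  /* Lx::Task, Lx::scheduler() */

    /* wrapper so the task entry point has a plain C signature */
    static void xfer_wrapper(void *args)
    {
        /* ... unpack 'args' and call the driver's xfer_master() ... */
    }

    void do_transfer(void *args)
    {
        /* create the task locally */
        Lx::Task xfer_task(xfer_wrapper, args, "xfer",
                           Lx::Task::PRIORITY_2, Lx::scheduler());

        /* kick the scheduler - this returns as soon as all tasks have
           blocked, which is before the transfer has actually finished */
        Lx::scheduler().schedule();
    }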

Regards,
Johannes

On 27.07.2017 at 16:20, Sebastian Sumpf wrote:
> Hi Johannes,
> 
> On 07/27/2017 01:42 PM, Johannes Kliemann wrote:
>> Hi Sebastian,
>>
>> yes, that is the function I call from Genode.
>>
>> And thanks in advance for helping me out.
>>
> 
> Good. I will try to explain how the Lx::Task approach is supposed to
> work in this case:
> 
> Your task calls 'i2c_dw_xfer', which sends a request, creates a Linux
> completion, and calls 'wait_for_completion_timeout'. This blocks the
> task and calls the task scheduler. Since there should be nothing to
> schedule at this point, the entrypoint will eventually return to its
> "wait for RPC" code. When the waited-for IRQ occurs, the entrypoint
> receives a signal RPC and wakes up yet another task, the IRQ task
> (dde_linux/src/lx_kit/irq.cc:Lx_kit::Irq::Context::unblock). This task
> calls all registered interrupt handlers ('handle_irq', same file). One
> of those handlers should, behind your back, call 'complete' (I haven't
> found the relevant source code for your case, but it is there) for the
> completion your task is waiting for, which in turn should unblock your
> task. This way, synchronous execution is accomplished. That is the
> theory.
> 
> Details:
> 
> 'wait_for_completion_timeout'
> (dde_linux/src/include/lx_emul/impl/completion.h) will call
> '__wait_completion', which must be implemented by your driver. Please
> refer to '__wait_completion' (dde_linux/src/lib/usb/lx_emul.cc) for
> guidance. The important part is to call 'block_and_schedule' for the
> current task. Also notice the 'complete' function in completion.h,
> which calls 'unblock' for the task within a completion. This should
> somehow be called by your interrupt handler and lead to the
> re-scheduling of your task when the IRQ task blocks again after
> interrupt handling.
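> 
> To make that concrete, here is a stripped-down sketch of such a
> '__wait_completion' (written from memory and with the timeout handling
> omitted, so please double-check against the actual usb lx_emul.cc):
> 
>     #include <lx_emul.h>           /* struct completion         */
>     #include <lx_kit/scheduler.h>  /* Lx::scheduler(), Lx::Task */
> 
>     long __wait_completion(struct completion *work, unsigned long timeout)
>     {
>         (void)timeout; /* timeout handling omitted in this sketch */
> 
>         /* block the current task until someone calls 'complete()',
>            which increments 'done' and unblocks the waiting task */
>         while (!work->done)
>             Lx::scheduler().current()->block_and_schedule();
> 
>         work->done = 0;
>         return 1;
>     }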
> 
> By the way, these Lx::Tasks are also called routines or coroutines in
> the literature and are nothing more than a stack and a 'setjmp/longjmp'
> construct.
> 
> I hope this helps,
> 
> Sebastian
> 
> 
