Alexis Berlemont wrote:
> Hi,
> 
>>> - Or do you expect a more complete solution like fuse or fusd?
>> There are a few tricky parts when you want a seamless driver API
>> that allows testing or even using RTDM drivers in userspace:
>>
>>  - IRQ handling (means: forwarding, acknowledging, sharing).
>>    You won't be able to achieve 100% equivalent functionality here, of
>>    course.
> 
> 
> On that point I thought about some hidden dedicated task(s) / thread(s): 
> either one task per interrupt, or one task per CPU for all interrupts (the 
> second alternative seems preferable).

From the locking perspective, this sounds reasonable. But fault
containment will be harder to achieve then.
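
Just to illustrate what such an interrupt server could look like: below is a
minimal sketch of a per-CPU server thread that runs at the highest SCHED_FIFO
priority and forwards kernel IRQs to a user-space handler. The /dev/rt_irq0
endpoint and its read-blocks-until-next-IRQ semantics are made up for the
sketch (UIO-style); nothing here is existing Xenomai API.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <unistd.h>

static void user_irq_handler(void) { /* driver-supplied handler */ }

static void *irq_server(void *arg)
{
    int fd = open("/dev/rt_irq0", O_RDONLY); /* hypothetical IRQ endpoint */
    uint32_t count;

    if (fd < 0)
        return NULL;

    for (;;) {
        /* Blocks until the kernel side signals the next interrupt. */
        if (read(fd, &count, sizeof(count)) != sizeof(count))
            break;
        user_irq_handler();
        /* Acknowledging/re-enabling the IRQ would happen here, e.g.
         * via a write() back to the endpoint. */
    }
    close(fd);
    return NULL;
}

int start_irq_server(int cpu)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = {
        .sched_priority = sched_get_priority_max(SCHED_FIFO),
    };
    cpu_set_t cpus;

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    /* Pin the server to its CPU, one instance per core. */
    CPU_ZERO(&cpus);
    CPU_SET(cpu, &cpus);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    return pthread_create(&tid, &attr, irq_server, NULL);
}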

> 
> The issue is to stay as close as possible to the way things work in kernel 
> mode:
> -> This task must run at the highest priority, as an IRQ handler cannot be 
> preempted (yet...).
> -> The spinlock mechanism could be emulated by means of an "interrupt" lock 
> (to emulate IRQ masking) and a common mutex. Thus, by taking the "interrupt" 
> lock, any RT task can indirectly prevent the execution of the IRQ 
> user-handler.

The spinlock can remain as is, but the interrupt locking part (as well
as its stand-alone version) will require some thought. It has an
implicit property: no reschedule on the current CPU. That should be
preserved for the tasks the driver interacts with.
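
One way to approximate that property in user space might be a
priority-ceiling mutex: while the emulated interrupt lock is held, the owner
runs at the maximum RT priority, so no other SCHED_FIFO thread of the driver
gets scheduled on that CPU. A rough sketch - all names are hypothetical, and
for cross-process use the mutex would have to live in a shared mapping:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t irq_lock_emu;

int rtdm_irq_lock_init(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Priority-ceiling protocol: the holder is boosted to the ceiling,
     * approximating "no reschedule on the current CPU" the way IRQ
     * masking does in the kernel. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr,
                                     sched_get_priority_max(SCHED_FIFO));
    /* Only effective across processes if the mutex itself is placed
     * in a shared segment. */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    return pthread_mutex_init(&irq_lock_emu, &attr);
}

void rtdm_irq_lock_emu(void)   { pthread_mutex_lock(&irq_lock_emu); }
void rtdm_irq_unlock_emu(void) { pthread_mutex_unlock(&irq_lock_emu); }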

> 
>>  - Managing the driver context.
>>    When in kernel space, driver code can run in RTDM tasks (that's
>>    trivial to port), in IRQ context (=> probably some carrier task then)
>>    or caller context - and here the problem starts. Single-user drivers
>>    could simply run as a library within the context of the user, but for
>>    shared drivers we need some framework that deals with accessing user
>>    data, which we can do easily in kernel space, but not when the driver
>>    has its own process or should run within multiple user space
>>    contexts.
> 
> The driver code would be executed by a process, but this driver context would 
> be hidden behind the context of the application which uses the driver. 
> 
> Here are the cases I have in mind:
> -> one application uses one device managed by one kernel module (like a misc 
> device); => the simplest case: with one thread inside the driver 
> user-process, we emulate the application context;
> -> many applications use the same device managed by one kernel module => if 
> we are on an SMP config, we might need one thread per CPU core inside the 
> driver user-process;
> -> many applications use many devices managed by the same kernel module 
> (common char devices) => in this configuration, a driver thread must wait for 
> commands coming from many contexts; however, this issue could easily be 
> handled by the kernel module devoted to redirecting syscalls.

The threading model is not the key here, and I don't think we need new
threads at all. What is the key is the address space organization. In
order to map the model we have with in-kernel drivers to user space, my
idea is to have a shared memory between all processes using some driver
so that this driver can access its own data from all process contexts it
may be called from. That way we could let the driver code run in te
context of the application threads under the related processes. And
there would be no need to switch the memory mapping on driver entry,
which should keep the overhead of multi-user drivers low. But this kind
of environment may be tricky to create and may contain some traps and
pitfalls I didn't find yet.
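
To illustrate, such a mapping could be set up roughly like this, using POSIX
shared memory and one per-driver segment mapped at the same fixed address in
every client process, so that pointers inside the driver's private data stay
valid everywhere. Segment name, size and base address are arbitrary
assumptions for the sketch:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define DRV_SHM_NAME "/rtdm-drv-data"       /* hypothetical */
#define DRV_SHM_SIZE (1 << 20)
#define DRV_SHM_BASE ((void *)0x70000000UL) /* agreed base address */

void *drv_map_shared_state(void)
{
    int fd = shm_open(DRV_SHM_NAME, O_RDWR | O_CREAT, 0600);
    void *base;

    if (fd < 0)
        return MAP_FAILED;
    if (ftruncate(fd, DRV_SHM_SIZE) < 0) {
        close(fd);
        return MAP_FAILED;
    }
    /* MAP_FIXED forces the same mapping address in all processes; a
     * real implementation would have to negotiate a range that is
     * guaranteed to be free in every client. */
    base = mmap(DRV_SHM_BASE, DRV_SHM_SIZE, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_FIXED, fd, 0);
    close(fd);
    return base;
}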

> 
> Therefore I think such a development could be divided into two steps:
> -> the first one would be the extension of a part of the RTDM API in 
> user-mode (most of the functions located in drvlib.c);
> -> the second one would be the syscall redirection. 

Having the basic driver lib (also the possibility to [un]register
drivers) in user space is surely a good starting point. Then one can
start writing first simple demos/test cases.
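
For instance, a first demo could be as small as this, assuming the user-space
library keeps the kernel-side RTDM signatures of today (rtdm_dev_register(),
struct rtdm_device, struct rtdm_operations); the "demo" device and its
trivial handlers are made up:

/* Assumed to be provided by the user-space RTDM library. */
#include <rtdm/rtdm_driver.h>

static int demo_open(struct rtdm_dev_context *context,
                     rtdm_user_info_t *user_info, int oflags)
{
    return 0;
}

static int demo_close(struct rtdm_dev_context *context,
                      rtdm_user_info_t *user_info)
{
    return 0;
}

static struct rtdm_device demo_device = {
    .struct_version = RTDM_DEVICE_STRUCT_VER,
    .device_flags   = RTDM_NAMED_DEVICE,
    .device_name    = "demo0",
    .open_nrt       = demo_open,
    .ops = {
        .close_nrt  = demo_close,
    },
    .device_class   = RTDM_CLASS_EXPERIMENTAL,
    .driver_name    = "demo",
    .provider_name  = "nobody",
    .proc_name      = "demo0",
};

int demo_init(void)
{
    /* Same call as in kernel space, now backed by the user-mode lib. */
    return rtdm_dev_register(&demo_device);
}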

> 
> The second part can be implemented in two ways:
> -> the RTDM kernel skin is extended;
> -> an RTDM kernel driver is developed (this driver would handle ioctls coming 
> from the user-driver (copy_from_user, register_device, wait_command, etc.) 
> and from the user-application (open, read, write, close)).
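
To make the second way more concrete, the user-driver side of such a proxy
could look roughly like this. Everything here (the /dev/rtdm_proxy node, the
ioctl codes, struct proxy_cmd) is invented for illustration; only the
register / wait-command / dispatch pattern comes from the description above:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct proxy_cmd {            /* hypothetical request descriptor */
    int  op;                  /* PROXY_OPEN, PROXY_READ, ... */
    long arg;
    long result;
};

/* Hypothetical ioctl codes of the proxy kernel driver. */
#define RTDM_PROXY_REGISTER_DEVICE _IOW('P', 0, char[32])
#define RTDM_PROXY_WAIT_COMMAND    _IOR('P', 1, struct proxy_cmd)
#define RTDM_PROXY_COMPLETE        _IOW('P', 2, struct proxy_cmd)

static long dispatch_command(struct proxy_cmd *cmd)
{
    return 0;  /* driver-specific handling would go here */
}

int proxy_driver_loop(const char *devname)
{
    int fd = open("/dev/rtdm_proxy", O_RDWR);
    struct proxy_cmd cmd;

    if (fd < 0)
        return -1;

    /* Announce the device this user-space driver implements. */
    if (ioctl(fd, RTDM_PROXY_REGISTER_DEVICE, devname) == 0) {
        for (;;) {
            /* Block until an application issues open/read/write/close
             * on the registered device and the module forwards it. */
            if (ioctl(fd, RTDM_PROXY_WAIT_COMMAND, &cmd) < 0)
                break;
            cmd.result = dispatch_command(&cmd);
            ioctl(fd, RTDM_PROXY_COMPLETE, &cmd);
        }
    }
    close(fd);
    return -1;
}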

As a second step I would focus on IRQ handling and the execution model
of non-shared (single-user) drivers that can run in the process context
of their only user (=> RTDM drivers as application library).

And the third step should then deal with multi-user drivers, maybe
exploring what I sketched above.

> 
> Which one do you think is the best?
> 
> On my side, I started with the first alternative (kernel skin extension) so 
> as to check that my ideas could be translated into code (it is just a POC 
> which does not work yet).
> 
> Alexis.
> 
> P.S.: I sent the first mail Sunday night; I don't know why it took more than 
> one day to reach its destination.

No error message on this?

Jan
