[EMAIL PROTECTED] wrote:
Thank you for caring about my problem!
Perhaps I should have mentioned in my earlier postings that I am using a
PowerPC platform. I hope this does not invalidate your prior analyses.

These are the outputs (with some of my own debug messages) when I start satch.

# ./satch
Xenomai: UVM skin or CONFIG_XENO_OPT_PERVASIVE disabled.
(modprobe xeno_uvm?)
# insmod xeno_uvm.o
Using xeno_uvm.o
Xenomai: starting UVM services.
Dec 12 06:21:02 trgt user.info kernel: Xenomai: starting UVM services.

# ./satch
Xenomai/uvm: real-time nucleus v2.1-rc2 (Champagne) loaded.
starting VxWorks services.
spawning consumer 805462824
taskSpawn before TaskInit
taskInit before xnpod_init_thread
taskSpawn before TaskActivate
taskActivate before xnpod_start_thread
xnpod_start_thread before xnarch_init_thread ConsumerTask
xnpod_start_thread after xnarch_init_thread
xnpod_start_thread after xnpod_resume_thread
xnpod_start_thread before xnpod_schedule

satch stalled!!
=> output from another terminal

~ # cat /proc/xenomai/sched
CPU  PID  PRI  TIMEOUT  STAT  NAME
  0    0    0        0  R     ROOT
  0   42    1        0  S     uvm-root
  0   44    3        0  W     uvm-timer
~ # cat /proc/xenomai/timer
status=oneshot:setup=40:tickval=1:jiffies=940509634545


It looks like for some reason, the newly created thread vanishes. I'll check this on a PPC board later since I cannot reproduce this on x86.



So far the debug outputs. I have never worked with gdb before, but I will try to
establish a remote debug session with it to get some more information.
But in the meantime, could you perhaps be so kind as to answer a question that
came up with your answer (thank you)?
You have written:


More precisely, the VxWorks API is compiled as a user-space library (instead of a kernel module) when using the UVM mode, and the VxWorks services are obtained from this library, within the Linux process that embodies it. This is why there is no point in loading the in-kernel VxWorks module in this case.


O.k., I understand that the vxWorks API is provided by some kind of wrapper
functionality in the user-space vxworks library. What I don't understand is:
why do I need the uvm kernel module for vxWorks, but not for the native
xenomai API? And what is the vxWorks kernel module (xeno_vxworks.o) for; when
do I need it?


Ok, long story:

When I first implemented the pervasive real-time support in user-space for Xenomai at core level, a question arose: how do I make the existing real-time skins that stack over this core (vxworks, psos+, vrtx and uitron at that time) runnable in user-space over this new support? Those skins were previously only runnable in kernel space, providing their services to applications compiled as kernel modules, through plain function calls.
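For the sake of illustration, such a kernel-space application was nothing more than a Linux kernel module calling the skin's entry points directly. The sketch below is only indicative (the include path, stack size and priority are assumptions made up for this example, not taken from the satch demo):

    /* Minimal sketch of an application built over the in-kernel VxWorks
     * skin (xeno_vxworks.ko). Purely illustrative; the include path and
     * the taskSpawn() parameters are assumptions for this example. */

    #include <linux/module.h>
    #include <linux/init.h>
    #include <vxworks/vxworks.h>    /* exact path depends on the skin install */

    static void root_task(long arg)
    {
            /* real-time work goes here; skin services are reached through
               plain function calls since we live in kernel space */
            (void)arg;
    }

    static int __init app_init(void)
    {
            /* taskSpawn() resolves directly against xeno_vxworks.ko */
            int tid = taskSpawn("RootTask", 50, 0, 8192,
                                (FUNCPTR)root_task, 0, 0, 0, 0, 0,
                                0, 0, 0, 0, 0);
            return tid == ERROR ? -EBUSY : 0;
    }

    static void __exit app_exit(void)
    {
            /* task deletion omitted for brevity */
    }

    module_init(app_init);
    module_exit(app_exit);
    MODULE_LICENSE("GPL");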

Normally, I should have created a library containing all the needed system call wrappers for each skin, which user-space applications could link against in order to issue requests to the kernel module implementing the real-time services (e.g. xeno_vxworks.ko), the same way glibc exports system call wrappers to applications for invoking Linux kernel services. But doing so would have required coding ~300 wrappers (i.e. the sum of all services exported by the four existing skins) and their associated handlers in kernel space, which the system calls eventually invoke to handle the parameters and the return value. For instance, this is what has been done for the native and POSIX skins, which do not need the UVM support to provide their services to user-space applications.
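To give an idea of what one of those per-service wrappers amounts to on the user-space side, here is a hand-waved sketch; every name prefixed with my_ or MY_ is made up for the example (the real wrappers go through Xenomai's system call interface, which I am not reproducing here):

    /* Hand-waved sketch of a per-service user-space wrapper; all my_ and
     * MY_ prefixed names are hypothetical. The point is only that each
     * skin service needs its own wrapper plus a matching kernel handler. */

    #include <errno.h>

    struct my_sem_take_req {
            long sem_id;
            long timeout;
    };

    #define MY_OP_SEM_TAKE  7       /* arbitrary opcode for this sketch */

    /* stand-in for the real mux system call trapping into the kernel skin */
    static int my_skin_syscall(int op, void *req)
    {
            (void)op; (void)req;
            return 0;
    }

    int my_semTake(long sem_id, long timeout)
    {
            struct my_sem_take_req req = { sem_id, timeout };

            /* one real system call per service invocation; the kernel-side
               handler unpacks the request, calls the in-kernel skin and
               hands the return value back */
            int ret = my_skin_syscall(MY_OP_SEM_TAKE, &req);
            if (ret < 0) {
                    errno = -ret;
                    return -1;
            }
            return ret;
    }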

To solve this, and since I'm a lazy bastard with all the required imagination to make an art of procrastination, I devised the UVM support, which made it possible to run the original real-time skins in user-space without having to provide those wrappers. To this end, the UVM requires a copy of the nucleus, the real-time skin and the application to be compiled as user-space code, which ends up being embodied into a single Linux process image. A thin layer is then added to connect the "local" nucleus to the "global" one running in kernel space. This way, the embodied skin calls the services of the local nucleus, and each time a scheduling decision is taken by the local nucleus as a consequence of such a call, it is transparently delegated to the global one, which actually performs the context switches. Since threads created within the context of a UVM are regular Xenomai shadow threads (and _not_ some kind of lightweight/green threads), there is no limitation on what you can do over such a context compared to threads created from the native or POSIX skins [1].
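If it helps, here is a purely conceptual sketch of that delegation; none of these names exist in the UVM sources, they only illustrate that the embodied skin talks to the nucleus living inside the process, and that a real system call is only issued when the kernel-space nucleus must actually perform a context switch:

    /* Conceptual sketch only -- the names are invented, this is not UVM
     * code. It illustrates the split between the "local" nucleus inside
     * the process and the "global" one in kernel space. */

    struct my_thread {
            long shadow_handle;     /* handle of the underlying shadow thread */
    };

    static struct my_thread *my_current, *my_elected;

    static struct my_thread *my_local_pick_next(void)
    {
            return my_elected;      /* local scheduling decision, no syscall */
    }

    static void my_delegate_switch(long shadow_handle)
    {
            /* placeholder: this is where the thin UVM layer would issue a
               system call asking the global nucleus to suspend the caller
               and resume the elected shadow thread */
            (void)shadow_handle;
    }

    /* invoked by the embodied skin (e.g. the user-space VxWorks library)
       after any service that may have changed the scheduling state */
    static void my_local_schedule(void)
    {
            struct my_thread *next = my_local_pick_next();

            if (next == my_current)
                    return;         /* nothing to do, no system call at all */

            my_delegate_switch(next->shadow_handle);
            my_current = next;
    }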

The upside of the UVM is that, for the most part, the real-time engine is self-contained within a single Linux process, so the number of "real" system calls issued by an application is slightly reduced (e.g. if your application grabs an uncontended VxWorks semaphore in the context of a UVM, it only costs a function call and no actual system call, since the operation has no bearing on the current scheduling state). The other nice part - out of laziness - is that we don't have to provide the system call wrappers for each and every service exported by the skin, but only the few implemented by the UVM support in order to connect both cores (local and global), so that xeno_uvm.ko can receive requests from all running UVMs, change the scheduling state appropriately, and also control the timer and a few other specific resources. Therefore, the reason you don't need to load xeno_vxworks.ko to run a VxWorks personality over the UVM is that the VxWorks services are already provided by the same code, but compiled as a user-space library (libvxworks.so). On the other hand, libnative.so (native skin) or libpthread_rt.so (POSIX skin) only contain system call wrappers invoking the real-time API in kernel space (i.e. xeno_native.ko and xeno_posix.ko).
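At the application level, this is what the "no actual system call" remark means for a trivial fragment; the calls below are plain VxWorks API (the skin's exact header layout may differ from what I am showing):

    /* Plain VxWorks API usage; over the UVM these calls resolve into
     * libvxworks.so and the in-process nucleus, whereas over the
     * kernel-based skin they would be issued from kernel space. */

    #include <vxWorks.h>    /* header names follow the VxWorks convention;
                               the skin's include layout may differ */
    #include <semLib.h>

    void fragment(void)
    {
            SEM_ID sem = semBCreate(SEM_Q_PRIORITY, SEM_FULL);

            /* uncontended take: within a UVM this has no bearing on the
               scheduling state, so it costs a function call only and never
               crosses the kernel boundary */
            semTake(sem, WAIT_FOREVER);
            /* ... short critical section ... */
            semGive(sem);

            semDelete(sem);
    }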

The downside of the UVM is that your application can trash the runtime environment, since both live within a single address space; at worst (maybe at best, actually) this would "only" cause a process termination, but this is still an issue to keep an eye on. Perhaps more importantly, giving applications access to machine-level resources is made much harder by the UVM; for instance, connecting IRQ handlers is not that fun in this environment.

Incidentally, a significant part of the work toward v2.2 will be to progressively provide fully native user-space support to the skins that currently lack it, as is already available for the native and POSIX APIs. This will underlie one of v2.2's major goals: keep improving Xenomai as a system of choice for migrating applications from proprietary environments to GNU/Linux.

[1] 
http://download.gna.org/xenomai/documentation/trunk/pdf/Introduction-to-UVMs.pdf

--

Philippe.
