> Dear Jan,
> thank you for taking the time to answer my questions, and
> sorry for the delayed response, but I have been busy
> with some other work.
> Please find my follow-up questions inserted in the text.
>>> 1.)
>>> Essentially the question deals with the problem of how long a
>> Xenomai task in secondary mode can be delayed by normal Linux tasks.
>>> In detail: we plan to have a lot of "near real-time"
>> ethernet communication from within Xenomai using the normal 
>> Linux network stack (calling the normal socket API). The 
>> question now is how our network communication is influenced
>> by other Linux tasks also performing network communication,
>> let's say an FTP transfer?
>> Depending on the "normal" networking load, you will suffer 
>> from more or less frequent (indeterministic) packet delays. 
> Do you have an idea of the magnitude we are talking about:
> less than a millisecond, a few milliseconds, seconds, or is the
> delay completely nondeterministic?

Normally you will only see millisecond delays or less. But in case of
overload (heavy data transfers, misconfigured or virus-infected
nodes, etc.) you may face several hundred milliseconds or more - up to
packet drops. That depends on the infrastructure (network layout,
switches and their QoS features) and cannot be answered in general. Good
QoS-aware switches can help here.

>> Xenomai will not improve this in any way. If your task in 
>> secondary mode tries to send some data and requires to take a 
>> networking lock currently held by another Linux task, it can 
>> take a real long time until this request is completed. 
> But at least, after a (Linux) system call (from whatever task) finishes,
> Xenomai gets control back before any other Linux task, doesn't it?
> This means: between system calls, a rescheduling back to Xenomai is
> performed, isn't it?

The Xenomai domain regains control as soon as a) the real-time task in
secondary mode can continue or b) some other real-time task becomes
runnable.

> Sorry for the next stupid question, but what is a network lock? With what
> kind of action can a task lock the complete stack? And how long could it
> block the stack?
> Could you give me an example for better understanding?

"Network lock" was an oversimplification. Actually, this is a complex
topic, and I had a wrong model in mind. To clarify:

The networking stack contains a lot of locks (NIC transmission access,
routing and ARP tables, input and output queues, ...), but they are
held in non-preemptible sections along the transmission and reception
paths on standard Linux (as long as I do not overlook some corner
case). So, contention comes down to the point where your RT-task issues
a Linux syscall. The kernel then has to preempt the interrupted Linux
task (= interrupted by Xenomai when the RT-task became runnable) to let
your task run in secondary mode. This normally happens within a few
hundred microseconds (with CONFIG_PREEMPT enabled in the kernel), but
there can be scenarios where preemption is disabled for several hundred
milliseconds or more - very rare with CONFIG_PREEMPT, but possible.
That can be acceptable for soft RT if you are able to handle the
exceptions.

Besides this, another critical issue is delays due to buffer
acquisition. If there is no fitting buffer at hand on packet arrival or
transmission request, it can take a while to free the necessary
resources, especially if your whole system happens to be running short
on memory (swapping is taking place, some application leaked memory, ...).

>> This 
>> gets better with PREEMPT_RT but still remains non-RT because 
>> the Linux networking stack is not designed for hard real-time.
> Next stupid question: what is PREEMPT_RT? Is this kernel 2.6, or is it the
> MontaVista approach to real time (making the kernel more preemptible)?

Actually, neither. It is the ongoing community effort (which includes
MontaVista) to make the kernel natively preemptible; it is not
MontaVista's own project. Watch out for "-rt" or "realtime-preempt" on
the Linux kernel mailing list, or look at
http://people.redhat.com/mingo/realtime-preempt (don't ask me for the
latest user-space pieces, i.e. glibc patches; they are either still
unpublished, well hidden, or non-existent yet).

>> If your communication can be soft-RT, you could indeed avoid 
>> the separation - but you will then have to live with the side 
>> effects. All you can do then is try to lower the number of 
>> deadline misses by keeping the standard network traffic low 
>> and managing the bandwidth of the participants (the Linux 
>> network stack has some support for QoS, at least in
>> 2.6, I think).
>> BTW, as long as your network is not separated or you have no 
>> control over the traffic arriving at your system, picking the 
>> Linux stack (which is compatible with non-RT networks) instead 
>> of RTnet is indeed generally recommended. This way 
>> you keep nondeterministic load away from the real-time subsystem.
> Unfortunately we don't want to limit non-real-time traffic; we just
> want to make sure that deterministic traffic has a higher priority
> than non-RT traffic (as in other RTOSes such as VxWorks).

RT != RT. Soft RT is feasible with improved stacks; hard RT requires
different approaches, e.g. buffer pre-allocation or traffic control for
non-RT data. Specifically, when you read about TCP and real time, this
can only refer to soft RT applications, as TCP does not have hard RT
characteristics (undefined timeout and packet retransmission behaviour).

> Nondeterministic traffic should get just the leftover bandwidth.
> What do you mean by "RTnet is compatible with non-RT networks"?

It "speaks" the same UDP/IP protocol, just removes some dynamics from
problematic parts (ARP, IP fragmentation).

> I thought RTnet uses a time-slice mechanism and therefore could not be
> mixed with systems transmitting whenever they want. Do you refer to VNICs?

That's true: mixing RTnet with non-RT stations in the same physical
network doesn't make sense, as it destroys the determinism of other RT
threads on the RTnet node.

To sum your options up:

1) Go the "hard" way, and let RTnet control both RT and non-RT traffic
as well as manage the buffer resources.

2) Mix RT and non-RT traffic for the sake of good non-RT performance
and fair soft-RT quality, with optimisations like QoS switches.

The question is if you can handle rare deadline misses or if you need
hard guarantees.

>>> I have created a scheduling scenario and I would ask you to 
>> have a look at it and tell me whether it is correct or 
>> not. Thank you!
>>> A corresponding question about this scheduling is: are there 
>>> differences between a 2.4 and a 2.6 Linux kernel? (For our PowerPC 
>>> platform we intend to use the 2.4 kernel for performance reasons.)
>>> Scheduling scenario:
>>> (I hope formatting is not destroyed by email transfer)
>>> Time moves downwards
>>> v-Xenomai 
>>>      v-Linux kernel
>>>           v-Linux processes
>>>           l1   Linux task1 running
>>>      s1 < l1   Linux task1 makes systemcall
>>>      s1        Linux task1 systemcall processed
>>> -------------  Linux scheduling   
>>>           l2   Linux task2 starts to run
>>>      s2 < l2   Linux task2 makes systemcall
>>>      s2        Linux task2 systemcall processed
>>> +++++++++++++  Xenomai scheduling
>>> x3             Xenomai task3 starts to run => primary mode
>>> x3 > s3        Xenomai task3 makes systemcall => secondary mode
>>>      s3        Xenomai task3 systemcall processed 
>>> -------------  Linux scheduling => Xenomai task preempted
>> This preemption will only happen if the target Linux task has 
>> a higher priority or the Xenomai task in secondary mode has 
>> to block until some resource becomes free. As I sketched 
>> above, this can actually happen in the network stack.
> What do you mean with "higher priority" ? I thought Xenomai has
> a higher priority than anything else in the linux system.

Either a Xenomai shadow thread in secondary mode or a standard task with
SCHED_FIFO/_RR set could become runnable. Xenomai will allow this switch
as long as no other Xenomai thread in primary mode is runnable, which
would then trigger switching back to the Xenomai domain.

> Could you give me an example of the resource (related to network
> communication) s3 could wait for?

Well, rescheduling due to lock contention is not likely in the
networking scenario (see above); only voluntary blocking is, e.g. when
waiting for data that is not yet available.

As Linux IRQs can arrive and trigger interruptions while in secondary
mode, you may furthermore consider using the IRQ shield of Xenomai (see
the kernel configuration) to improve predictability.



Xenomai-core mailing list
