Johan Borkhuis wrote:
> Jan,
>
> Jan Kiszka wrote:
>> Johan Borkhuis wrote:
>> -snip-
>>
>>> :|  + begin   0x80000001  -1908    0.414  __ipipe_dispatch_event+0x1e4 (__ipipe_syscall_root+0x64)
>>> :|  + end     0x80000001  -1908!  52.121  __ipipe_dispatch_event+0x204 (__ipipe_syscall_root+0x64)
>>> :   + func                -1857+   1.170  do_page_fault+0x14 (handle_page_fault+0xc)
>>>
>> This one is interesting: page fault over the RT thread that belongs to
>> PID 1160 (prio 79).
>>
>>> :|  # func                -1845+   1.585  rpi_push+0x14 [xeno_nucleus] (xnshadow_relax+0x84 [xeno_nucleus])
>>> :|  # func                -1844    0.463  xnpod_schedule_runnable+0x14 [xeno_nucleus] (rpi_push+0x84 [xeno_nucleus])
>>> :|  # [   0] swapper 79   -1843+   1.658  xnpod_schedule_runnable+0x54 [xeno_nucleus] (rpi_push+0x84 [xeno_nucleus])
>>>
>> And now we are running the Linux kernel at xeno-prio 79 (due to
>> prio-coupling).
>>
> Thank you for pointing these out. This made me change two things in the
> configuration:
> - Disabling swap. This should not make any difference, as I don't have
>   any swap space, but the page fault prompted me to have a look at that
>   setting.
> - Enabling "Disable priority coupling". I missed the "Disable" part here,
>   so it was still enabled during this run.
Don't get me wrong: I was not saying that prio-coupling is the root of
the problem here; it is just an amplifier. The (or at least one) root
cause is that your RT task still causes page faults. That needs to be
fixed!

> When running with priority coupling disabled, the latency is down to
> less than 100 usec. But this is always the case when a higher priority
> rt_task is stopped, even with only one higher priority thread. Would
> this be a normal latency if a higher prio rt_task is deleted? Below are
> the results of a test run, where the max latency is 67 usec (the 40 usec
> of the first line is caused by the trace setup):
>
> bash-3.00# ./latency -f -P 60
> == Sampling period: 100 us
> == Test mode: periodic user-mode task
> == All results in microseconds
> warming up...
> RTT|  00:00:01  (periodic user-mode task, 100 us period, priority 60)
> RTH|--lat min|--lat avg|--lat max|-overrun|--lat best|--lat worst
> RTD|   -2.642|    0.432|   40.768|       0|    -2.642|    40.768
> RTD|   -2.594|    0.408|    8.744|       0|    -2.642|    40.768
> RTD|   -2.642|    0.432|    6.198|       0|    -2.642|    40.768
> RTD|   -2.306|    0.552|   33.129|       0|    -2.642|    40.768
> RTD|   -2.330|    0.480|   67.483|       0|    -2.642|    67.483
> RTD|   -2.474|    0.408|    6.150|       0|    -2.642|    67.483
> RTD|   -2.666|    0.408|   10.330|       0|    -2.666|    67.483
> RTD|   -2.546|    0.432|    6.222|       0|    -2.666|    67.483
> ---|---------|---------|---------|--------|----------------------
> RTS|   -2.666|    0.432|   67.483|       0|    00:00:08/00:00:08
> bash-3.00#
>
> bash-3.00# ./cyclictest-Xenomai -q -h -t 1
> #T: 0 P:80 I:10000 O: 0 C: 100 Min: -5.560 Avg: 0.052 Max: 5.732
>
> I also attached another log file; this one belongs to the session above.
>
> If deleting an rt_task has such a large impact on lower priority tasks,
> would it be possible to lower the priority of the to-be-deleted task to
> 0, just to avoid this impact?

IIRC, cyclictest does precisely this before thread termination, i.e. it
drops its RT properties.
This is indeed recommended if you have prio-coupling on and perform
cleanup while other critical tasks continue to work.

> (I know, in an RT system you should set up your system before it
> becomes operational, and not create or remove tasks dynamically, but
> sometimes this is needed.)
>
> Kind regards,
> Johan Borkhuis

Jan
_______________________________________________
Xenomai-help mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-help
