Re: [Xenomai-core] Xenomai latency tests on various PowerPC boards

2005-10-18 Thread Philippe Gerum

Wolfgang Grandegger wrote:

Hello,

attached you will find the results of Xenomai latency measurements on
various embedded PowerPC boards using MPC 8xx and AMCC 4xx processors,
from low to high end, covering a worst-case latency range from 25 to 225
us. It also includes a comparison with RTAI 3.0r5 on the slowest CPU.
Here are some remarks and comments:

- On low-end processors, code size matters a lot and it's difficult to
  beat RTAI/RTHAL.



Beat no, get closer, yes, probably. The good news is that looking at the 
figures, we do have a margin of improvement! :o>


Btw, the nucleus can be configured so that the user-space threading engine is 
compiled out (i.e. CONFIG_XENO_OPT_PERVASIVE from the nucleus menu), which would 
be the corresponding profile to compare with klatency (i.e. sched_up). Disabling 
this option reduces the code size for the nucleus from:


   text  data   bss    dec    hex  filename
  66740   792  6540  74072  12158  nucleus/xeno_nucleus.ko

to:

   text  data   bss    dec    hex  filename
  52596   576  3956  57128   df28  nucleus/xeno_nucleus.ko

Still a bit fat though.


- Apart from the CPU power, big caches and a fast memory interface
  improve latencies.

- L2 cache improves latencies a lot (compare Ocotea with Yosemite).

- I'm a bit puzzled about the results of the "cruncher" test. Could
  someone explain the output, please?



This test is reminiscent of the HYADES project (ia64 port of RTAI/fusion), where 
we wanted to illustrate the level of execution determinism one could achieve 
using the interrupt shield technique on large ia64 SMP systems. To this end, we 
measured the jitter in execution time of a calibrated float-crunching loop, with 
and without interrupt load. This test is likely going to disappear at some point 
in time, because it's not that informative in Xeno's context.
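
For illustration only, here is a minimal user-space sketch of what such a
cruncher amounts to; the loop body, iteration count and timing source are
assumptions, not the actual test code:

#include <stdio.h>
#include <time.h>

#define CRUNCH_LOOPS 1000000L   /* hypothetical calibration constant */

static double crunch(void)
{
    volatile double acc = 1.0;
    long i;

    for (i = 0; i < CRUNCH_LOOPS; i++)
        acc = acc * 1.000001 + 0.5;   /* calibrated float workload */

    return acc;
}

int main(void)
{
    long best = -1, worst = -1;
    int pass;

    for (pass = 0; pass < 100; pass++) {
        struct timespec t0, t1;
        long us;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        crunch();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        us = (t1.tv_sec - t0.tv_sec) * 1000000L +
             (t1.tv_nsec - t0.tv_nsec) / 1000L;
        if (best < 0 || us < best)
            best = us;
        if (us > worst)
            worst = us;
    }

    /* The jitter is the spread of execution times, measured with and
       without interrupt load running in parallel. */
    printf("best %ld us, worst %ld us, jitter %ld us\n",
           best, worst, worst - best);
    return 0;
}

The "ideal computation time" reported in the tables below would then be the
duration of one undisturbed pass.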



- Stability seems already quite good. At least I did not observe any
  crash yet :-).



That's cool. I see no other way to properly improve performance than first
having something which can run on various platforms without them randomly
jumping out of the window, or us relying on plain voodoo to explain why
those setups would work or not.



The PowerPC port of Xenomai is already in good shape. That's great!



Thanks. This is likely because I do feel better since I have been aware that 
there's life beyond x86. :o)



Wolfgang.







Latency tests with Xenomai on various PowerPC boards


Board   : Processor  CPU-Clk Bus-Clk I-Cache D-Cache Memory Remarks

TQM860L : MPC 860     50 MHz  50 MHz    4 KB    4 KB   16 MB
TQM866M : MPC 866    133 MHz  66 MHz   16 KB    8 KB  128 MB

Walnut  : AMCC 405GP 200 MHz 100 MHz   16 KB    8 KB   32 MB
Yosemite: AMCC 440EP 533 MHz 133 MHz   32 KB   32 KB  256 MB DDR-RAM, FPU
Ocotea  : AMCC 440GX 533 MHz 152 MHz   32 KB   32 KB  256 MB DDR-RAM, L2 256 KB


Linux  : DENX linux-2.6.14-rc3-g4c234921
iPipe  : 1.0-00
Xenomai: SVN 2005-10-15


CRUNCHER without load:

         | Ideal computation time
TQM860L  |   368 us ???
TQM866L  | 10008 us
Walnut   | 10150 us
Yosemite |  9911 us
Ocotea   |  9479 us



SWITCH without load:

         | lat min| lat avg| lat max|lost
TQM860L  |  103360|  107840|  209280|   0
TQM866L  |   25745|   31880|   51369|   5
Walnut   |   24620|   25965|   32280|   1
Yosemite |    5626|    5655|   17403|   0
Ocotea   |    5158|    5169|   10038|   0


KLATENCY with load:

         |-lat min|-lat avg|-lat max|-overrun|---test-time
TQM860L  |   50560|   98976|  199040|       0|    00:09:45
TQM866L  |   13835|   28571|   74348|       0|    00:11:44
Walnut   |   16195|   25062|   45755|       0|    00:10:09
Yosemite |    3106|    9697|   36832|       0|    00:09:55
Ocotea   |    3575|    7438|   24474|       0|    00:10:50


LATENCY with load:

         |-lat min|-lat avg|-lat max|-overrun|---test-time
TQM860L  |   60480|  120960|  224320|       0|    00:09:46
TQM866L  |   15759|   34286|   78799|       0|    00:11:14
Walnut   |   21070|   31650|   64500|       0|    00:09:58
Yosemite |    3808|   12163|   47898|       0|    00:10:00
Ocotea   |    3575|    7438|   24474|       0|    00:10:50


KLATENCY comparison Xenomai 2.0 vs. RTAI/RTHAL 3.0r5 on TQM860L:
---

KLATENCY with load:

            |-lat min|-lat avg|-lat max|-overrun|---test-time
Xenomai 2.0 |   50560|   98976|  199040|       0|    00:09:45
RTAI 3.0r5  |   23120|   31838|   70520|       ?|    00:12:26



Note: load has been put onto the system by running in a telnet session

Re: [Xenomai-core] Xenomai latency tests on various PowerPC boards

2005-10-18 Thread Philippe Gerum

Philippe Gerum wrote:

Wolfgang Grandegger wrote:


Hello,

attached you will find the results of Xenomai latency measurements on
various embedded PowerPC boards using MPC 8xx and AMCC 4xx processors,
from low to high end, covering a worst-case latency range from 25 to 225
us. It also includes a comparison with RTAI 3.0r5 on the slowest CPU.
Here are some remarks and comments:

- On low-end processors, code size matters a lot and it's difficult to
  beat RTAI/RTHAL.



Beat no, get closer, yes, probably. The good news is that looking at the 
figures, we do have a margin of improvement! :o>


Btw, the nucleus can be configured so that the user-space threading 
engine is compiled out (i.e. CONFIG_XENO_OPT_PERVASIVE from the nucleus 
menu), which would be the corresponding profile to compare with klatency 
(i.e. sched_up). Disabling this option reduces the code size for the 
nucleus from:


   text  data   bss    dec    hex  filename
  66740   792  6540  74072  12158  nucleus/xeno_nucleus.ko


to:

   text  data   bss    dec    hex  filename
  52596   576  3956  57128   df28  nucleus/xeno_nucleus.ko




Disabling the periodic timer support which is unused for the klatency test 
brings this down to:


   text  data   bss    dec    hex  filename
  51040   544  3956  55540   d8f4  nucleus/xeno_nucleus.ko


Still a bit fat though.


- Apart from the CPU power, big caches and a fast memory interface
  improve latencies.

- L2 cache improves latencies a lot (compare Ocotea with Yosemite).

- I'm a bit puzzled about the results of the "cruncher" test. Could
  someone explain the output, please?



This test is reminiscent of the HYADES project (ia64 port of 
RTAI/fusion), where we wanted to illustrate the level of execution 
determinism one could achieve using the interrupt shield technique on 
large ia64 SMP systems. To this end, we measured the jitter in execution 
time of a calibrated float-crunching loop, with and without interrupt 
load. This test is likely going to disappear at some point in time, 
because it's not that informative in Xeno's context.



- Stability seems already quite good. At least I did not observe any
  crash yet :-).



That's cool. I see no other way to properly improve performance than
first having something which can run on various platforms without
them randomly jumping out of the window, or us relying on plain voodoo
to explain why those setups would work or not.



The PowerPC port of Xenomai is already in good shape. That's great!



Thanks. This is likely because I do feel better since I have been aware 
that there's life beyond x86. :o)



Wolfgang.







Latency tests with Xenomai on various PowerPC boards


Board   : Processor  CPU-Clk Bus-Clk I-Cache D-Cache Memory Remarks

TQM860L : MPC 860     50 MHz  50 MHz    4 KB    4 KB   16 MB
TQM866M : MPC 866    133 MHz  66 MHz   16 KB    8 KB  128 MB

Walnut  : AMCC 405GP 200 MHz 100 MHz   16 KB    8 KB   32 MB
Yosemite: AMCC 440EP 533 MHz 133 MHz   32 KB   32 KB  256 MB DDR-RAM, FPU
Ocotea  : AMCC 440GX 533 MHz 152 MHz   32 KB   32 KB  256 MB DDR-RAM, L2 256 KB



Linux  : DENX linux-2.6.14-rc3-g4c234921
iPipe  : 1.0-00
Xenomai: SVN 2005-10-15


CRUNCHER without load:

         | Ideal computation time
TQM860L  |   368 us ???
TQM866L  | 10008 us
Walnut   | 10150 us
Yosemite |  9911 us
Ocotea   |  9479 us

SWITCH without load:

         | lat min| lat avg| lat max|lost
TQM860L  |  103360|  107840|  209280|   0
TQM866L  |   25745|   31880|   51369|   5
Walnut   |   24620|   25965|   32280|   1
Yosemite |    5626|    5655|   17403|   0
Ocotea   |    5158|    5169|   10038|   0


KLATENCY with load:

         |-lat min|-lat avg|-lat max|-overrun|---test-time
TQM860L  |   50560|   98976|  199040|       0|    00:09:45
TQM866L  |   13835|   28571|   74348|       0|    00:11:44
Walnut   |   16195|   25062|   45755|       0|    00:10:09
Yosemite |    3106|    9697|   36832|       0|    00:09:55
Ocotea   |    3575|    7438|   24474|       0|    00:10:50


LATENCY with load:

         |-lat min|-lat avg|-lat max|-overrun|---test-time
TQM860L  |   60480|  120960|  224320|       0|    00:09:46
TQM866L  |   15759|   34286|   78799|       0|    00:11:14
Walnut   |   21070|   31650|   64500|       0|    00:09:58
Yosemite |    3808|   12163|   47898|       0|    00:10:00
Ocotea   |    3575|    7438|   24474|       0|    00:10:50


KLATENCY comparison Xenomai 2.0 vs. RTAI/RTHAL 3.0r5 on TQM860L:
-

Re: [Xenomai-core] Xenomai latency tests on various PowerPC boards

2005-10-18 Thread Philippe Gerum

Wolfgang Grandegger wrote:

On 10/18/2005 01:44 PM Philippe Gerum wrote:


Philippe Gerum wrote:


Wolfgang Grandegger wrote:



Hello,

attached you will find the results of Xenomai latency measurements on
various embedded PowerPC boards using MPC 8xx and AMCC 4xx processors,
from low to high end, covering a worst-case latency range from 25 to 225
us. It also includes a comparison with RTAI 3.0r5 on the slowest CPU.
Here are some remarks and comments:

- On low-end processors, code size matters a lot and it's difficult to
 beat RTAI/RTHAL.



Beat no, get closer, yes, probably. The good news is that looking at the 
figures, we do have a margin of improvement! :o>


Btw, the nucleus can be configured so that the user-space threading 
engine is compiled out (i.e. CONFIG_XENO_OPT_PERVASIVE from the nucleus 
menu), which would be the corresponding profile to compare with klatency 
(i.e. sched_up). Disabling this option reduces the code size for the 
nucleus from:


   text  data   bss    dec    hex  filename
  66740   792  6540  74072  12158  nucleus/xeno_nucleus.ko


to:

   text  data   bss    dec    hex  filename
  52596   576  3956  57128   df28  nucleus/xeno_nucleus.ko




Disabling the periodic timer support which is unused for the klatency test 
brings this down to:


   text  data   bss    dec    hex  filename
  51040   544  3956  55540   d8f4  nucleus/xeno_nucleus.ko



OK, here are the new figures with (*)

 CONFIG_XENO_OPT_PERVASIVE is not set
 CONFIG_XENO_HW_PERIODIC_TIMER is not set:

           |-lat min|-lat avg|-lat max|-overrun|---test-time
RTAI 3.0r5 |   23120|   31838|   70520|       ?|    00:12:26
Xenomai    |   50560|   98976|  199040|       0|    00:09:45
Xenomai (*)|   44160|   96215|  200640|       0|    00:09:53

The min latency decreases as expected.



Looks significant. I wonder now what's the impact of having 2.6 thrashing
the caches during the sleep periods compared to 2.4. But to get an answer here,
we will need Xeno running over 2.4. Ok, it's planned.





Still a bit fat though.



- Apart from the CPU power, big caches and a fast memory interface
 improve latencies.

- L2 cache improves latencies a lot (compare Ocotea with Yosemite).

- I'm a bit puzzled about the results of the "cruncher" test. Could
 someone explain the output, please?



This test is reminiscent of the HYADES project (ia64 port of 
RTAI/fusion), where we wanted to illustrate the level of execution 
determinism one could achieve using the interrupt shield technique on 
large ia64 SMP systems. To this end, we measured the jitter in execution 
time of a calibrated float-crunching loop, with and without interrupt 
load. This test is likely going to disappear at some point in time, 
because it's not that informative in Xeno's context.




- Stability seems already quite good. At least I did not observe any
 crash yet :-).



That's cool. I see no other way to properly improve performance than
first having something which can run on various platforms without
them randomly jumping out of the window, or us relying on plain voodoo
to explain why those setups would work or not.




The PowerPC port of Xenomai is already in good shape. That's great!



Thanks. This is likely because I do feel better since I have been aware 
that there's life beyond x86. :o)




Wolfgang.







Latency tests with Xenomai on various PowerPC boards


Board   : Processor  CPU-Clk Bus-Clk I-Cache D-Cache Memory Remarks

TQM860L : MPC 860     50 MHz  50 MHz    4 KB    4 KB   16 MB
TQM866M : MPC 866    133 MHz  66 MHz   16 KB    8 KB  128 MB

Walnut  : AMCC 405GP 200 MHz 100 MHz   16 KB    8 KB   32 MB
Yosemite: AMCC 440EP 533 MHz 133 MHz   32 KB   32 KB  256 MB DDR-RAM, FPU
Ocotea  : AMCC 440GX 533 MHz 152 MHz   32 KB   32 KB  256 MB DDR-RAM, L2 256 KB



Linux  : DENX linux-2.6.14-rc3-g4c234921
iPipe  : 1.0-00
Xenomai: SVN 2005-10-15


CRUNCHER without load:

         | Ideal computation time
TQM860L  |   368 us ???
TQM866L  | 10008 us
Walnut   | 10150 us
Yosemite |  9911 us
Ocotea   |  9479 us

SWITCH without load:

         | lat min| lat avg| lat max|lost
TQM860L  |  103360|  107840|  209280|   0
TQM866L  |   25745|   31880|   51369|   5
Walnut   |   24620|   25965|   32280|   1
Yosemite |    5626|    5655|   17403|   0
Ocotea   |    5158|    5169|   10038|   0


KLATENCY with load:

         |-lat min|-lat avg|-lat max|-overrun|---test-time
TQM860L  |   50560|   98976|  199040|       0|    00:09:45
TQM866L  |   13835|   28571|   74348|       0|    00:11:44

Re: [Xenomai-core] Xenomai and gdbserver

2005-10-18 Thread Philippe Gerum

Steven Seeger wrote:

I use remote debug with fusion/2.6.13, but I am using fusion from svn.

Also, there are bugs. One I have found recently is that if a task is created
with T_SUSP and then you start stepping through the code, the task seems to
start up on its own somehow. Weird.


Last time you reported this, you actually mentioned the rt_task_spawn() service,
which is designed to start the task immediately. Your mention of the T_SUSP bit
now suggests that you are not using rt_task_spawn() anymore, but that the
problem still appears. Could you be more specific about the code in question?
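
For reference, the pattern under discussion looks like this with the native
skin; a schematic sketch only, the task body and parameters are placeholders:

#include <native/task.h>

static RT_TASK task;

static void task_body(void *cookie)
{
    /* real-time work goes here */
}

int start_suspended(void)
{
    int err;

    /* T_SUSP: the task is expected to remain suspended after
       rt_task_start(), until rt_task_resume() is called. */
    err = rt_task_create(&task, "demo", 8192, 50, T_SUSP);
    if (!err)
        err = rt_task_start(&task, &task_body, NULL);
    if (!err)
        err = rt_task_resume(&task);   /* explicit kick-off */

    /* rt_task_spawn() would combine create + start instead, making
       the task runnable immediately. */
    return err;
}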




Older versions of fusion had issues with not every task in the application
halting on a breakpoint.



Strange indeed, since there is no reason for the exception triggered by any 
breakpoint to behave in a context-dependent manner, I mean on a task-by-task 
basis. Do you remember the first version that seemed to solve the issue?



On 10/18/05 8:10 AM, "Marco Cavallini" <[EMAIL PROTECTED]> wrote:



Hi
I wonder if someone has been able to perform remote debugging with
xenomai/fusion.
I have problems debugging fusion-0.9.1 programs with kernel-2.6.12.2 and
Fedora Core 2
gdb-6.0

When I run GDB on chain and step into rt_task_create(...), GDB prompts "[1]+
Stopped gdb chain" and
terminates. I can
call "gdbserver ipnum:port chain", then connect to it from a remote machine
using gdb and "target remote ..."


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai and gdbserver

2005-10-18 Thread Philippe Gerum

Marco Cavallini wrote:

Hi
I wonder if someone has been able to perform remote debugging with
xenomai/fusion.
I have problems debugging fusion-0.9.1 programs with kernel-2.6.12.2 and 
Fedora Core 2

gdb-6.0

When I run GDB on chain and step into rt_task_create(...), GDB prompts
"[1]+ Stopped gdb chain" and
terminates. I can


I'm tracking an issue like that, causing the debuggee to run into a SIGBUS, due
to the program counter being messed up somehow. AFAICT, this sometimes happens
when stepping over code that migrates between primary and secondary modes (and
rt_task_create does). I need to log this one on the bug tracker. It's a real
PITA to chase, but it's 100% reproducible here, so there's hope.



call "gdbserver ipnum:port chain". then connect to it from a remote machine
using gdb and "target remote..."

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai latency tests on various PowerPC boards

2005-10-18 Thread Philippe Gerum

Wolfgang Grandegger wrote:

On 10/18/2005 01:44 PM Philippe Gerum wrote:


Philippe Gerum wrote:


Wolfgang Grandegger wrote:



Hello,

attached you will find the results of Xenomai latency measurements on
various embedded PowerPC boards using MPC 8xx and AMCC 4xx processors,
from low to high end, covering a worst-case latency range from 25 to 225
us. It also includes a comparison with RTAI 3.0r5 on the slowest CPU.
Here are some remarks and comments:

- On low-end processors, code size matters a lot and it's difficult to
 beat RTAI/RTHAL.



Beat no, get closer, yes, probably. The good news is that looking at the 
figures, we do have a margin of improvement! :o>


Btw, the nucleus can be configured so that the user-space threading 
engine is compiled out (i.e. CONFIG_XENO_OPT_PERVASIVE from the nucleus 
menu), which would be the corresponding profile to compare with klatency 
(i.e. sched_up). Disabling this option reduces the code size for the 
nucleus from:


   text  data   bss    dec    hex  filename
  66740   792  6540  74072  12158  nucleus/xeno_nucleus.ko


to:

   text  data   bss    dec    hex  filename
  52596   576  3956  57128   df28  nucleus/xeno_nucleus.ko




Disabling the periodic timer support which is unused for the klatency test 
brings this down to:


   text  data   bss    dec    hex  filename
  51040   544  3956  55540   d8f4  nucleus/xeno_nucleus.ko



OK, here are the new figures with (*)

 CONFIG_XENO_OPT_PERVASIVE is not set
 CONFIG_XENO_HW_PERIODIC_TIMER is not set:

           |-lat min|-lat avg|-lat max|-overrun|---test-time
RTAI 3.0r5 |   23120|   31838|   70520|       ?|    00:12:26
Xenomai    |   50560|   98976|  199040|       0|    00:09:45
Xenomai (*)|   44160|   96215|  200640|       0|    00:09:53

The min latency decreases as expected.



I just discovered that -00 did not include some recent changes I had in my tree,
aimed at preventing high latencies under fork pressure. I've committed -01, which
does include them. When time allows, I'd be interested to know if this has some
impact on the Ocotea figures. TIA,


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai latency tests on various PowerPC boards

2005-10-19 Thread Philippe Gerum

Wolfgang Grandegger wrote:

On 10/18/2005 08:14 PM Philippe Gerum wrote:


Wolfgang Grandegger wrote:


On 10/18/2005 01:44 PM Philippe Gerum wrote:



Philippe Gerum wrote:



Wolfgang Grandegger wrote:




Hello,

attached you will find the results of Xenomai latency measurements on
various embedded PowerPC boards using MPC 8xx and AMCC 4xx processors,



from low to high end, covering a worst-case latency range from 25 to 225



us. It also includes a comparison with RTAI 3.0r5 on the slowest CPU.
Here are some remarks and comments:

- On low-end processors, code size matters a lot and it's difficult to
beat RTAI/RTHAL.



Beat no, get closer, yes, probably. The good news is that looking at the 
figures, we do have a margin of improvement! :o>


Btw, the nucleus can be configured so that the user-space threading 
engine is compiled out (i.e. CONFIG_XENO_OPT_PERVASIVE from the nucleus 
menu), which would be the corresponding profile to compare with klatency 
(i.e. sched_up). Disabling this option reduces the code size for the 
nucleus from:


   text  data   bss    dec    hex  filename
  66740   792  6540  74072  12158  nucleus/xeno_nucleus.ko


to:

   text  data   bss    dec    hex  filename
  52596   576  3956  57128   df28  nucleus/xeno_nucleus.ko




Disabling the periodic timer support which is unused for the klatency test 
brings this down to:


   text  data   bss    dec    hex  filename
  51040   544  3956  55540   d8f4  nucleus/xeno_nucleus.ko



OK, here are the new figures with (*)

CONFIG_XENO_OPT_PERVASIVE is not set
CONFIG_XENO_HW_PERIODIC_TIMER is not set:

           |-lat min|-lat avg|-lat max|-overrun|---test-time
RTAI 3.0r5 |   23120|   31838|   70520|       ?|    00:12:26
Xenomai    |   50560|   98976|  199040|       0|    00:09:45
Xenomai (*)|   44160|   96215|  200640|       0|    00:09:53

The min latency decreases as expected.



I just discovered that -00 did not include some recent changes I had in my tree,
aimed at preventing high latencies under fork pressure. I've committed -01, which
does include them. When time allows, I'd be interested to know if this has some
impact on the Ocotea figures. TIA,



bash-2.05b# cat /proc/ipipe/version
1.0-01

SWITCH without load:

== Sampling period: 100 us
RTH| lat min| lat avg| lat max|lost
RTD|    5158|    5169|   10038|   0   iPipe 1.0-00
RTD|    5145|    5154|   10166|   0   iPipe 1.0-01

KLATENCY with load:

RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTS|    2953|    5974|   19147|       0|00:12:05  1.0-00
RTS|    3035|    8705|   20705|       0|00:09:54  1.0-01

LATENCY with load:

== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTS|    3575|    7438|   24474|       0|00:10:50  1.0-00
RTS|    3553|   10125|   23970|       0|00:09:41  1.0-01


Mmm, average even looks worse for both latency tests.


It has no significant impact, I think.



Ok, thanks. The same fix is worth 10 us on high-end x86 boxen, so I wondered if 
the same could apply to ppc as well.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-19 Thread Philippe Gerum

Fillod Stephane wrote:

Wolfgang Grandegger wrote:
[...]


Load for klatency/latency was ping flooding on FCC (piece of cake),
and cache calibrator. IMHO, we can do nastier.


You mean the cache calibrator from http://monetdb.cwi.nl/Calibrator/? I
tried it on my Ocotea board and it increased the max latency from 25 to
30 us.



Yes, that very one. In this case, it has been used as a cache-thrashing
load generator. But IMHO, this Calibrator would be better used in the
Benchmarking Plan to get L1/L2/RAM access latency figures (w/o RT
running), and offer one more correlation against RT latency results.

We could afford a better cache-thrashing load generator. Earlier this year,
I proposed flushy(tm) [1], but as Philippe suggested, we can do better.
Flushy should be rewritten as an ADEOS layer, inserted just in front of
Xenomai in the pipeline. This way, we would be sure the caches are dead
cold when Xenomai enters its domain. Using tools like OProfile, it should
then be possible to track cache misses, and fix them by prefetching,
where available.


[1] http://rtai.dk/cgi-bin/gratiswiki.pl?Latency_Killer (bottom of page)
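
For the record, a user-space cache-thrashing load in this spirit can be as
simple as the sketch below; buffer size and line size are assumptions to
adjust per board, and the ADEOS-layer variant described above would live in
kernel space instead:

#include <stdlib.h>

#define BUF_SIZE  (8 * 1024 * 1024)   /* assumed much larger than L1+L2 */
#define LINE_SIZE 32                  /* typical ppc32 cache line size */

int main(void)
{
    volatile char *buf = malloc(BUF_SIZE);
    long i;

    if (!buf)
        return 1;

    /* Endless walk with a cache-line stride: nearly every access
       misses and evicts previously cached lines. Run it in the
       background while the latency tests execute. */
    for (;;)
        for (i = 0; i < BUF_SIZE; i += LINE_SIZE)
            buf[i]++;
}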


Here is the result of my 1.0-01 tests on e500:

$ cat /proc/ipipe/version
1.0-01

SWITCH without load:
RTH| lat min| lat avg| lat max|lost
RTD|    3660|    3690|    8070|   0  1.0-00
RTD|    4620|    4740|    8730|   0  1.0-01

KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|    6420|       0|00:03:17  1.0-00
RTS|   -6150|   -4384|   12180|       0|00:03:13  1.0-01

LATENCY with load:
== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|    8700|       0|00:08:06  1.0-00
RTS|   -5670|   -4620|   12930|       0|00:12:39  1.0-01

That's weird. Figures are worse, but since the load (ping -f + calibrator)
was executed manually, it may not be the same.



Ok, I now suspect that another change regarding the size of the interrupt 
counters made this worse. I'm going to revert it and upload -02, just to make sure.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-19 Thread Philippe Gerum

Philippe Gerum wrote:

Fillod Stephane wrote:


Wolfgang Grandegger wrote:
[...]


Load for klatency/latency was ping flooding on FCC (piece of cake),
and cache calibrator. IMHO, we can do nastier.



You mean the cache calibrator from http://monetdb.cwi.nl/Calibrator/? I
tried it on my Ocotea board and it increased the max latency from 25 to
30 us.




Yes, that very one. In this case, it has been used as a cache-thrashing
load generator. But IMHO, this Calibrator would be better used in the
Benchmarking Plan to get L1/L2/RAM access latency figures (w/o RT
running), and offer one more correlation against RT latency results.

We could afford a better cache-thrashing load generator. Earlier this year,
I proposed flushy(tm) [1], but as Philippe suggested, we can do better.
Flushy should be rewritten as an ADEOS layer, inserted just in front of
Xenomai in the pipeline. This way, we would be sure the caches are dead
cold when Xenomai enters its domain. Using tools like OProfile, it should
then be possible to track cache misses, and fix them by prefetching,
where available.


[1] http://rtai.dk/cgi-bin/gratiswiki.pl?Latency_Killer (bottom of page)


Here is the result of my 1.0-01 tests on e500:

$ cat /proc/ipipe/version
1.0-01

SWITCH without load:
RTH| lat min| lat avg| lat max|lost
RTD|    3660|    3690|    8070|   0  1.0-00
RTD|    4620|    4740|    8730|   0  1.0-01

KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|    6420|       0|00:03:17  1.0-00
RTS|   -6150|   -4384|   12180|       0|00:03:13  1.0-01

LATENCY with load:
== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|    8700|       0|00:08:06  1.0-00
RTS|   -5670|   -4620|   12930|       0|00:12:39  1.0-01

That's weird. Figures are worse, but since the load (ping -f + calibrator)
was executed manually, it may not be the same.



Ok, I now suspect that another change regarding the size of the 
interrupt counters made this worse. I'm going to revert it and upload 
-02, just to make sure.




http://download.gna.org/adeos/patches/v2.6/adeos/ppc/adeos-ipipe-2.6.13-ppc-1.0-02.patch

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-19 Thread Philippe Gerum

Fillod Stephane wrote:

Philippe Gerum wrote:
[..]

http://download.gna.org/adeos/patches/v2.6/adeos/ppc/adeos-ipipe-2.6.13-
ppc-1.0-02.patch

Here is the result of tests with version 1.0-02 on e500:

load: ~1 minute ping -f, one run of calibrator chewing 64MiB.

$ cat /proc/ipipe/version
1.0-02

SWITCH without load:
RTH| lat min| lat avg| lat max|lost
RTD|    3660|    3690|    8070|   0  1.0-00
RTD|    4620|    4740|    8730|   0  1.0-01
RTD|    4620|    4740|    8190|   0  1.0-02

KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|    6420|       0|00:03:17  1.0-00
RTS|   -6150|   -4384|   12180|       0|00:03:13  1.0-01
RTS|   -6150|   -4183|   12480|       0|00:03:38  1.0-02

LATENCY with load:
== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|    8700|       0|00:08:06  1.0-00
RTS|   -5670|   -4620|   12930|       0|00:12:39  1.0-01
RTS|   -5700|   -3750|   11280|       0|00:06:05  1.0-02

It looks like the char vs. long change in the 1.0-0[12] patch was not the
culprit,


The last significant change between -00 and -01 is actually the one related to 
the fork pressure (others are cosmetic ones aimed at better sharing stuff with 
the blackfin port). The patch below against -02 removes it.


--- 2.6.13/arch/ppc/kernel/entry.S~	2005-10-18 18:42:09.0 +0200
+++ 2.6.13/arch/ppc/kernel/entry.S	2005-10-19 15:07:54.0 +0200
@@ -316,10 +316,8 @@
 
 	.globl	ret_from_fork
 ret_from_fork:
-	STALL_ROOT_COND
 	REST_NVGPRS(r1)
 	bl	schedule_tail
-	UNSTALL_ROOT_COND
 	li	r3,0
 	b	ret_from_syscall


at least not on e500. I'll do the bench again on 1.0-00. Man, if only we
had that automated benchmark suite...



Indeed... The positive thing being that we now have the ultimate proof of its
usefulness :o>


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-19 Thread Philippe Gerum

Fillod Stephane wrote:

Philippe Gerum wrote:
[..]

The last significant change between -00 and -01 is actually the one related to 
the fork pressure (others are cosmetic ones aimed at better sharing stuff with 
the blackfin port). The patch below against -02 removes it.



Here is the result of tests with version 1.0-02+ (entry.S patch) on e500:
 
load: ~1 minute ping -f, one run of calibrator chewing 64MiB.


SWITCH without load:
RTH| lat min| lat avg| lat max|lost
RTD|    3660|    3690|    8070|   0  1.0-00
RTD|    4620|    4740|    8730|   0  1.0-01
RTD|    4620|    4740|    8190|   0  1.0-02
RTD|    4650|    4740|    7980|   0  1.0-02+

KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|    6420|       0|00:03:17  1.0-00
RTS|   -6150|   -4384|   12180|       0|00:03:13  1.0-01
RTS|   -6150|   -4183|   12480|       0|00:03:38  1.0-02
RTS|   -6120|   -4447|    8370|       0|00:03:47  1.0-02+

LATENCY with load:
== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|    8700|       0|00:08:06  1.0-00
RTS|   -5670|   -4620|   12930|       0|00:12:39  1.0-01
RTS|   -5700|   -3750|   11280|       0|00:06:05  1.0-02
RTS|   -5640|   -2250|   11310|       0|00:10:31  1.0-02+

Load is still generated manually, so the pattern may vary: lat avg
is not meaningful. The amplitude would be a much more interesting value.
Anyway, are the generated loads comparable?

The klatency is in better shape; however, the latency test did not improve.
The 1.0-00 patch should be rerun as a double check.
I'm afraid we're already entering performance fine-tuning, while we have


Actually no, such a high variation in latency is the sign of a serious
regression, not just a fine-tuning issue. Let's stop there; I'll remove the fork
patch for -03 since it had no positive impact anyway. Thanks.




IMO other higher-priority tasks for now. At least, I do :-(



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Autotools versions

2005-10-19 Thread Philippe Gerum

Heikki Lindholm wrote:

Hello,

To whoever knows: What versions of the various autotools are used to 
generate the (svn) makefiles at the moment?


o autoconf 2.59
o automake 1.9.5
o aclocal 1.9.5
o libtool 1.5.6

We always document this in the README.INSTALL (see 1.6).



-- Heikki Lindholm

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] doc fix in rtdm

2005-10-20 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

someone ;) complained about this statement regarding
rtdm_task_busy_sleep. I think he is right.



Applied, thanks.


--- drvlib.c(revision 44)
+++ drvlib.c(working copy)
@@ -335,7 +335,7 @@
  * - Kernel-based task
  * - User-space task (RT, non-RT)
  *
- * Rescheduling: always.
+ * Rescheduling: never.
  */
 void rtdm_task_busy_sleep(__u64 delay)
 {

Jan





___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Re: [Patch] Anonymous (NULL-named) objects from user-space

2005-10-20 Thread Philippe Gerum

Dmitry Adamushko wrote:

Hi,

anonymous objects (name == NULL) from user-space get registered under unique
names (a string representation of the object's kernel-space address), but
remain unexported via the /proc interface.
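
In practice, this means a NULL name now works from user-space too; a
hypothetical native-skin fragment (error handling elided):

#include <native/alarm.h>

RT_ALARM alarm;

/* NULL name: the object is auto-named after its kernel address
   (e.g. "c1234567"), gets a valid handle, but no /proc entry. */
int err = rt_alarm_create(&alarm, NULL);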




Applied (fixing the comment), thanks.

Not tested with all the objects yet (well, I have upgraded the system, a
compiler in particular, err..., so I cannot load the recently rebuilt xeno_*
modules at the moment), but the changes are not intrusive, so everything
should work as expected.




Should be ok.



---
Best regards,
Dmitry




--- types.h	2005-10-19 20:59:21.0 +0200
+++ types2.h	2005-10-19 21:07:44.0 +0200
@@ -91,6 +91,12 @@
     *dst = '\0';
 }
 
+static inline int xnobject_create_name(char *dst, size_t n, void *obj)
+{
+    return snprintf(dst, n, "%p", obj);
+}
+
 #define minval(a,b) ((a) < (b) ? (a) : (b))
 #define maxval(a,b) ((a) > (b) ? (a) : (b))
 





diff -ur native/alarm.c native2/alarm.c
--- native/alarm.c  2005-10-10 19:15:08.0 +0200
+++ native2/alarm.c 2005-10-19 21:14:59.0 +0200
@@ -208,9 +208,19 @@
    complete objects, so that the registry cannot return handles to
    half-baked objects... */
 
-    if (name && *name)
+    if (name)
     {
-        err = rt_registry_enter(alarm->name,alarm,&alarm->handle,&__alarm_pnode);
+        RT_OBJECT_PROCNODE *pnode = &__alarm_pnode;
+
+        if (!*name)
+        {
+            /* Since this is an anonymous object (NULL-named) from user-space,
+               it gets registered under a unique name but remains
+               unexported via /proc. */
+            xnobject_create_name(alarm->name,sizeof(alarm->name),(void*)alarm);
+            pnode = NULL;
+        }
+
+        err = rt_registry_enter(alarm->name,alarm,&alarm->handle,pnode);
 
         if (err)
             rt_alarm_delete(alarm);
diff -ur native/cond.c native2/cond.c
--- native/cond.c   2005-10-10 19:15:08.0 +0200
+++ native2/cond.c  2005-10-19 21:15:07.0 +0200
@@ -168,9 +168,19 @@
    complete objects, so that the registry cannot return handles to
    half-baked objects... */
 
-    if (name && *name)
+    if (name)
     {
-        err = rt_registry_enter(cond->name,cond,&cond->handle,&__cond_pnode);
+        RT_OBJECT_PROCNODE *pnode = &__cond_pnode;
+
+        if (!*name)
+        {
+            /* Since this is an anonymous object (NULL-named) from user-space,
+               it gets registered under a unique name but remains
+               unexported via /proc. */
+            xnobject_create_name(cond->name,sizeof(cond->name),(void*)cond);
+            pnode = NULL;
+        }
+
+        err = rt_registry_enter(cond->name,cond,&cond->handle,pnode);
 
         if (err)
             rt_cond_delete(cond);
diff -ur native/event.c native2/event.c
--- native/event.c  2005-10-10 19:15:08.0 +0200
+++ native2/event.c 2005-10-19 21:15:11.0 +0200
@@ -190,9 +190,19 @@
    complete objects, so that the registry cannot return handles to
    half-baked objects... */
 
-    if (name && *name)
+    if (name)
     {
-        err = rt_registry_enter(event->name,event,&event->handle,&__event_pnode);
+        RT_OBJECT_PROCNODE *pnode = &__event_pnode;
+
+        if (!*name)
+        {
+            /* Since this is an anonymous object (NULL-named) from user-space,
+               it gets registered under a unique name but remains
+               unexported via /proc. */
+            xnobject_create_name(event->name,sizeof(event->name),(void*)event);
+            pnode = NULL;
+        }
+
+        err = rt_registry_enter(event->name,event,&event->handle,pnode);
 
         if (err)
             rt_event_delete(event);
diff -ur native/heap.c native2/heap.c
--- native/heap.c   2005-10-10 19:15:08.0 +0200
+++ native2/heap.c  2005-10-19 21:17:42.0 +0200
@@ -287,9 +287,19 @@
    complete objects, so that the registry cannot return handles to
    half-baked objects... */
 
-    if (name && *name)
+    if (name)
     {
-        err = rt_registry_enter(heap->name,heap,&heap->handle,&__heap_pnode);
+        RT_OBJECT_PROCNODE *pnode = &__heap_pnode;
+
+        if (!*name)
+        {
+            /* Since this is an anonymous object (NULL-named) from user-space,
+               it gets registered under a unique name but remains
+               unexported via /proc. */
+            xnobject_create_name(heap->name,sizeof(heap->name),(void*)heap);
+            pnode = NULL;
+        }
+
+        err = rt_registry_enter(heap->name,heap,&heap->handle,pnode);
 
         if (err)
             rt_heap_delete(heap);
diff -ur native/mutex.c native2/mutex.c
--- native/mutex.c  2005-10-10 19:15:08.0 +0200
+++ native2/

[Xenomai-core] [Fwd: Re: [Xenomai-help] timeout in native API calls (cond, sem, mutex, etc).]

2005-10-21 Thread Philippe Gerum


Resending here since this is a general project issue.

 Original Message 
Subject: Re: [Xenomai-help] timeout in native API calls  (cond, sem, 
mutex, etc).

Date: Fri, 21 Oct 2005 18:45:59 +0200
From: Philippe Gerum <[EMAIL PROTECTED]>
Organization: Xenomai
To: Jan Kiszka <[EMAIL PROTECTED]>
CC: xenomai-help@gna.org
References: <[EMAIL PROTECTED]> 
<[EMAIL PROTECTED]>	<[EMAIL PROTECTED]> 
<[EMAIL PROTECTED]>	<[EMAIL PROTECTED]> 
<[EMAIL PROTECTED]>	<[EMAIL PROTECTED]>


Jan Kiszka wrote:

Ignacio García Pérez wrote:


...
P.S.: As a side note, wouldn't it be good practice to expand tabs to
spaces in all the code?



My opinion is yes - but it's a nasty work. Would you provide such
patches...?



Actually, there is a more general problem with the current coding style used
throughout the code base: it's mine, it's not that standard, and now that more
people are contributing to it, I'm pondering whether we should just adopt the
conventional kernel coding style, without the ludicrous 8-space tabs, that is.

--

Philippe.

___
Xenomai-help mailing list
Xenomai-help@gna.org
https://mail.gna.org/listinfo/xenomai-help


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] rtdm without CONFIG_XENO_OPT_PERVASIVE

2005-10-21 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

this fixes an unresolved symbol in xeno_rtdm when
CONFIG_XENO_OPT_PERVASIVE is switched off.



Applied, thanks.



Jan




Index: skins/rtdm/device.c
===
--- skins/rtdm/device.c (Revision 53)
+++ skins/rtdm/device.c (Arbeitskopie)
@@ -30,7 +30,6 @@
 
 #include 

 #include 
-#include 
 
 
 #define SET_DEFAULT_OP(device, operation)   \

@@ -68,6 +67,12 @@
 #endif /* CONFIG_SMP */
 
 
+int rtdm_no_support(void)
+{
+    return -ENOSYS;
+}
+
+
 static inline int get_name_hash(const char *str, int limit, int hashkey_mask)
 {
 int hash = 0;
Index: skins/rtdm/GNUmakefile.am
===
--- skins/rtdm/GNUmakefile.am   (Revision 53)
+++ skins/rtdm/GNUmakefile.am   (Arbeitskopie)
@@ -10,9 +10,12 @@
device.c \
drvlib.c \
module.c \
-   proc.c \
-   syscall.c
+   proc.c
 
+if CONFIG_XENO_OPT_PERVASIVE
+libmodule_SRC += syscall.c
+endif
+
 include_HEADERS = \
rtdm.h \
rtdm_driver.h \
Index: skins/rtdm/device.h
===
--- skins/rtdm/device.h (Revision 53)
+++ skins/rtdm/device.h (Arbeitskopie)
@@ -40,6 +40,8 @@
 extern struct list_head *rtdm_protocol_devices;
 
 
+int rtdm_no_support(void);
+
 struct rtdm_device *get_named_device(const char *name);
 struct rtdm_device *get_protocol_device(int protocol_family, int socket_type);
 
Index: skins/rtdm/syscall.c

===
--- skins/rtdm/syscall.c(Revision 53)
+++ skins/rtdm/syscall.c(Arbeitskopie)
@@ -26,12 +26,6 @@
 int __rtdm_muxid;
 
 
-int rtdm_no_support(void)
-{
-    return -ENOSYS;
-}
-
-
 static int sys_rtdm_fdcount(struct task_struct *curr, struct pt_regs *regs)
 {
 return fd_count;
Index: skins/rtdm/syscall.h
===
--- skins/rtdm/syscall.h(Revision 53)
+++ skins/rtdm/syscall.h(Arbeitskopie)
@@ -46,8 +46,6 @@
 extern int __rtdm_muxid;
 
 
-int rtdm_no_support(void);
-
 int __init rtdm_syscall_init(void);
 
 static inline void rtdm_syscall_cleanup(void)





___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] problems with nucleus/types.h

2005-10-21 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

after the latest changes in include/nucleus/types.h, I get some warnings
during userspace lib compilation on my box:

In file included from /usr/src/xenomai/include/nucleus/queue.h:24,
 from /usr/src/xenomai/include/nucleus/timer.h:24,
 from /usr/src/xenomai/include/nucleus/thread.h:24,
 from
/usr/src/xenomai/skins/native/lib/../../native/task.h:27,
 from /usr/src/xenomai/skins/native/lib/timer.c:21:
/usr/src/xenomai/include/nucleus/types.h: In function
'xnobject_create_name':
/usr/src/xenomai/include/nucleus/types.h:99: warning: implicit
declaration of function 'snprintf'
/usr/src/xenomai/include/nucleus/types.h:99: warning: incompatible
implicit declaration of built-in function 'snprintf'

This is due to the static inline definition of xnobject_create_name in
the mentioned header and missing a snprintf prototype in userspace.

What about the attached patch? It also avoids duplicating code by using
strncpy (or am I overlooking a specific reason for doing it the other way?).



Applied, since types.h must be usable with as little deps as possible in order 
to grab types and definitions. Thanks.



Jan




Index: include/nucleus/types.h
===
--- include/nucleus/types.h (revision 54)
+++ include/nucleus/types.h (working copy)
@@ -76,26 +76,16 @@
 
 #define XNOBJECT_NAME_LEN 32
 
-static inline void xnobject_copy_name (char *dst,
-                                       const char *src)
-{
-    if (src)
-        {
-        const char *rp = src;
-        char *wp = dst;
-        do
-            *wp++ = *rp;
-        while (*rp && rp++ - src < XNOBJECT_NAME_LEN);
-        }
-    else
-        *dst = '\0';
-}
+#define xnobject_copy_name(dst, src) \
+do { \
+    if (src) \
+        strncpy(dst, src, XNOBJECT_NAME_LEN); \
+    else \
+        *dst = '\0'; \
+} while (0)
 
-static inline int xnobject_create_name(
-    char *dst, size_t n, void *obj)
-{
-    return snprintf(dst, n, "%p", obj);
-}
+#define xnobject_create_name(dst, n, obj) \
+    snprintf(dst, n, "%p", obj)
 
 #define minval(a,b) ((a) < (b) ? (a) : (b))
 #define maxval(a,b) ((a) > (b) ? (a) : (b))




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] signedness issues in testsuite

2005-10-21 Thread Philippe Gerum

Jan Kiszka wrote:

Fixes gcc-4 warnings.



Applied, thanks.


Jan





Index: testsuite/latency/latency.c
===
--- testsuite/latency/latency.c (revision 54)
+++ testsuite/latency/latency.c (working copy)
@@ -40,9 +40,9 @@
 #define WARMUP_TIME 1
 #define HISTOGRAM_CELLS 100
 int histogram_size = HISTOGRAM_CELLS;
-unsigned long *histogram_avg = NULL,
-*histogram_max = NULL,
-*histogram_min = NULL;
+long *histogram_avg = NULL,
+ *histogram_max = NULL,
+ *histogram_min = NULL;
 
 int do_histogram = 0, do_stats = 0, finished = 0;

 int bucketsize = 1000;  /* default = 1000ns, -B  to override */
Index: testsuite/klatency/latency.c
===
--- testsuite/klatency/latency.c(revision 54)
+++ testsuite/klatency/latency.c(working copy)
@@ -12,9 +12,9 @@
 
 #define HISTOGRAM_CELLS 200

 int histogram_size = HISTOGRAM_CELLS;
-unsigned long *histogram_avg = NULL,
-  *histogram_max = NULL,
-  *histogram_min = NULL;
+long *histogram_avg = NULL,
+ *histogram_max = NULL,
+ *histogram_min = NULL;
 
 int do_histogram = 0, do_stats = 0, finished = 0;

 int bucketsize = 1000; /* bucketsize */




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [packaging] Proposal of split source code organization

2005-10-21 Thread Philippe Gerum

Romain Lenglet wrote:

Hi,

Here is a proposal for reorganizing the files in Xenomai, to
make packaging easier. I have moved all the files, and the
resulting hierarchy of directories is in the attached dirs.txt,
and the contained files in allfiles.txt.

The GNUmakefiles, etc. are still missing here.

What do you think about that?



Xeno's build scheme is going to be refactored as explained here:
https://mail.gna.org/public/xenomai-core/2005-10/msg00089.html

Your proposal seems to complement the solution, since one way or another, we
will need to organize the source tree in order to reflect the kernel / user
split. Going for a strict Debian approach in doing so is another issue; I'm not
opposed to it, but that's not a prerequisite for splitting the codebase anyway.


I suggest that we restart this discussion after a prototype of a statically 
embeddable Xenomai core into the Linux kernel has been achieved; from that 
point, it should be easier to progress toward the user-space revamping, since we 
will have the complete picture about what is needed, specifically in terms of 
configuration and file dependencies. This task is #1 on my list after 2.0 is 
out, so the topic is going to be revived quickly.


The idea is to have a ksrc/ directory that is independent from 
all other directories (except include/), to contain all the 
kernel modules code (and only that). It would have its own 
independent configuration mechanism based on Kconfig and/or 
autotools (??).
It would help a lot to have such an independent ksrc/ directory
for packaging, since in Debian packages this would directly
become the content of the kernel source package
(xenomai-source).


In addition, an independent and complete include/ directory would 
directly become the main content of the libxenomai-dev package.



The two non-trivial tasks to reach that organization of files are:
- modify most GNUmakefile.am files (that seems quite easy),
- modify the kernel code configuration mechanism.

If everybody agrees, I propose to adapt most GNUmakefile.am 
files...






xenomai/
xenomai/ksrc
xenomai/ksrc/test
xenomai/ksrc/test/klatency
xenomai/ksrc/test/klatency/latency.h
xenomai/ksrc/test/klatency/latency-module.c
xenomai/ksrc/arch
xenomai/ksrc/arch/generic
xenomai/ksrc/arch/generic/hal
xenomai/ksrc/arch/generic/hal/generic.c
xenomai/ksrc/arch/i386
xenomai/ksrc/arch/i386/hal
xenomai/ksrc/arch/i386/hal/nmi.c
xenomai/ksrc/arch/i386/hal/smi.c
xenomai/ksrc/arch/i386/hal/smi.h
xenomai/ksrc/arch/i386/hal/usercopy.c
xenomai/ksrc/arch/i386/hal/x86.c
xenomai/ksrc/arch/ppc
xenomai/ksrc/arch/ppc/arch
xenomai/ksrc/arch/ppc/arch/ppc.c
xenomai/ksrc/arch/ppc/arch/switch.S
xenomai/ksrc/arch/ppc/arch/fpu.S
xenomai/ksrc/arch/ia64
xenomai/ksrc/arch/ia64/hal
xenomai/ksrc/arch/ia64/hal/ia64.c
xenomai/ksrc/arch/ia64/hal/switch.S
xenomai/ksrc/arch/ppc64
xenomai/ksrc/arch/ppc64/hal
xenomai/ksrc/arch/ppc64/hal/fpu.S
xenomai/ksrc/arch/ppc64/hal/ppc64.c
xenomai/ksrc/arch/ppc64/hal/switch.S
xenomai/ksrc/drivers
xenomai/ksrc/drivers/16550A
xenomai/ksrc/drivers/16550A/16550A.c
xenomai/ksrc/nucleus
xenomai/ksrc/nucleus/heap.c
xenomai/ksrc/nucleus/intr.c
xenomai/ksrc/nucleus/module.c
xenomai/ksrc/nucleus/pod.c
xenomai/ksrc/nucleus/synch.c
xenomai/ksrc/nucleus/thread.c
xenomai/ksrc/nucleus/timer.c
xenomai/ksrc/nucleus/shadow.c
xenomai/ksrc/nucleus/core.c
xenomai/ksrc/nucleus/pipe.c
xenomai/ksrc/nucleus/ltt.c
xenomai/ksrc/nucleus/config.c
xenomai/ksrc/native
xenomai/ksrc/native/syscall.h
xenomai/ksrc/native/module.c
xenomai/ksrc/native/task.c
xenomai/ksrc/native/timer.c
xenomai/ksrc/native/syscall.c
xenomai/ksrc/native/pipe.c
xenomai/ksrc/native/sem.c
xenomai/ksrc/native/event.c
xenomai/ksrc/native/mutex.c
xenomai/ksrc/native/registry.c
xenomai/ksrc/native/cond.c
xenomai/ksrc/native/queue.c
xenomai/ksrc/native/heap.c
xenomai/ksrc/native/alarm.c
xenomai/ksrc/native/intr.c
xenomai/ksrc/psos+
xenomai/ksrc/psos+/module.c
xenomai/ksrc/psos+/asr.c
xenomai/ksrc/psos+/event.c
xenomai/ksrc/psos+/pt.c
xenomai/ksrc/psos+/queue.c
xenomai/ksrc/psos+/rn.c
xenomai/ksrc/psos+/task.c
xenomai/ksrc/psos+/tm.c
xenomai/ksrc/psos+/sem.c
xenomai/ksrc/psos+/asr.h
xenomai/ksrc/psos+/defs.h
xenomai/ksrc/psos+/event.h
xenomai/ksrc/psos+/pt.h
xenomai/ksrc/psos+/queue.h
xenomai/ksrc/psos+/rn.h
xenomai/ksrc/psos+/sem.h
xenomai/ksrc/psos+/task.h
xenomai/ksrc/psos+/tm.h
xenomai/ksrc/psos+/doc
xenomai/ksrc/psos+/doc/psos.txt
xenomai/ksrc/posix
xenomai/ksrc/posix/cancel.c
xenomai/ksrc/posix/clock.c
xenomai/ksrc/posix/cond_attr.c
xenomai/ksrc/posix/cond.c
xenomai/ksrc/posix/intr.c
xenomai/ksrc/posix/module.c
xenomai/ksrc/posix/mq.c
xenomai/ksrc/posix/mutex_attr.c
xenomai/ksrc/posix/mutex.c
xenomai/ksrc/posix/once.c
xenomai/ksrc/posix/registry.c
xenomai/ksrc/posix/sched.c
xenomai/ksrc/posix/sem.c
xenomai/ksrc/posix/signal.c
xenomai/ksrc/posix/syscall.c
xenomai/ksrc/posix/thread_attr.c
xenomai/ksrc/posix/thread.c
xenomai/ksrc/posix/ti

[Xenomai-core] Adeos then Xenomai over Blackfin

2005-10-21 Thread Philippe Gerum


FYI, I have uploaded the initial release of Adeos for the Blackfin architecture:
http://download.gna.org/adeos/patches/v2.6/adeos/bfin/adeos-ipipe-uc-2.6.12-bfin-1.0-00.patch

It is based on the 2.6.12 kernel found in the uClinux-2005R3 distribution, and 
tested on a BF533 eval board. Functionally, it is complete for a uniprocessor 
configuration, which means that porting Xenomai over it is now possible.


Special thanks to the Analog Devices people (http://www.analog.com/) who kindly 
provided evaluation boards and support for this Adeos port to happen.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] EDF or RM scheduler for Xenomai

2005-10-21 Thread Philippe Gerum

Germain Olivier wrote:

At the beginning of October I've posted on the RTAI mailing list a
question about the development of additional scheduler for RTAI/Fusion.

About the pluggable scheduler infrastructure, can you give more details on
what you expect ?



An implementation in the nucleus that makes scheduling policies available as
software plugins, so that we don't end up with braindamaged exception cases in
nucleus/pod.c in order to support them. As of now, nucleus/pod.c implements the
fixed-priority FIFO policy in a hard-wired manner: to have other policies
integrated into the core, we need to think of a lightweight generic
infrastructure for hosting those plugins that would replace the hard-coded,
FIFO-based scheduling decisions.
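
To make that concrete, such a plugin infrastructure could boil down to a
descriptor of the kind sketched below; every name here is hypothetical,
nothing like it exists in the nucleus yet:

struct xnthread;       /* nucleus thread control block (opaque here) */
struct xnsched_queue;  /* per-policy run queue (opaque here) */

/* Hypothetical policy descriptor: the pod would call these hooks
   instead of hard-coding fixed-priority FIFO decisions. */
struct xnsched_policy {
    const char *name;

    /* Insert/remove a thread according to the policy's ordering
       key (static priority, deadline, laxity...). */
    void (*enqueue)(struct xnsched_queue *q, struct xnthread *t);
    void (*dequeue)(struct xnsched_queue *q, struct xnthread *t);

    /* Pick the next thread to run. */
    struct xnthread *(*pick_next)(struct xnsched_queue *q);

    /* Order two threads, e.g. for preemption tests. An EDF plugin
       would compare absolute deadlines here. */
    int (*compare)(struct xnthread *a, struct xnthread *b);
};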



Also, I'm trying to find out in which files the priority inheritance mechanism
is implemented.



nucleus/synch.c.
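
The core idea implemented there is classic priority inheritance;
schematically (a didactic sketch, not the actual nucleus code):

struct thread { int cprio; int bprio; };   /* current/base priority */
struct synch  { struct thread *owner; };   /* PI-enabled object */

/* When 'sleeper' blocks on 's', boost the owner if needed. */
static void inherit_priority(struct synch *s, struct thread *sleeper)
{
    if (s->owner && s->owner->cprio < sleeper->cprio)
        s->owner->cprio = sleeper->cprio;
}

/* When the owner releases 's', drop back to its base priority. */
static void clear_boost(struct thread *owner)
{
    owner->cprio = owner->bprio;
}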


Thanks

Germain



100% agreed. The key issue is having the pluggable scheduler
infrastructure
which fusion currently lacks. After that, other scheduling policies than
fixed-priority FIFO could be mapped cleanly and easily on top of it.





Jan Kiszka wrote:


Germain wrote:



According to refactoring.txt from the Fusion doc directory, Rate Monotonic
and EDF are not supported in Fusion.
I'm in my last year of CS engineering school with a major in Embedded
Systems. One of the subjects of my final-year project is to write a
scheduler (Rate Monotonic, Earliest Deadline First or Least Laxity
First).
So I want to know what kind of knowledge is required to get our hands into
RTAI and develop such a thing. We are two and we have to finish by early
February. We have skills in C, real-time theory and development (with
Java), and system programming.




This would best be answered by the nucleus maintainer, who seems to be
offline ATM.


Philippe already acknowledged the usefulness of a more
flexible scheduling subsystem which, e.g., allows selecting a different
policy at compile time or even later. What is certainly required for
this is a clean framework that keeps compatibility with the upper
(skins) and lower layers (hal) - as far as possible.

Again, this is something to discuss best with Philippe directly. I guess
he will jump into this thread when time permits. In my eyes, your work
would be very welcome.



100% agreed. The key issue is having the pluggable scheduler
infrastructure
which fusion currently lacks. After that, other scheduling policies than
fixed-priority FIFO could be mapped cleanly and easily on top of it.

Hint: scheduling issues are all sorted out inside nucleus/pod.c;
nucleus/synch.c
should also be studied in order to find out the relationships that exist
between base synchronization object management and scheduling decisions.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [Fwd: Re: [Xenomai-help] timeout in native API calls (cond, sem, mutex, etc).]

2005-10-22 Thread Philippe Gerum

Dmitry Adamushko wrote:

On Friday 21 October 2005 19:35, Philippe Gerum wrote:


Actually, there is a more general problem with the current coding style
used throughout the code base: it's mine, it's not that standard,



then what's standard in this case?



By standard, I mean the one which is most widely used in the context of
kernel development, which is relevant to the largest and most active part of the
Xenomai codebase.




and now that more people are contributing to it, I'm pondering whether we should
just adopt the conventional kernel coding style, without the ludicrous
8-space tabs, that is.



The "kernel coding style" is just yet another codding style. If more people 
are contributing to the project, then it should be a standard that satisfies 
the most part of them. Is it necessarily the linux way?
As I told you once, the important thing is how easily the code may be read, 
hence, understood. IMHO, how/where the braces, etc. are placed is not the 
matter of the first instance.
The use of any codding style doesn't result in neat and well-readable code 
automagically.


Nevertheless, if the ideas are as follows:

1) Linux coding style as a part of the seamless integration with the Linux
kernel. err.., what about a possibility to be merged with the mainline
kernel? ok, sounds almost impossible :) ;




Maybe on April 1st.

2) Linux people are reading/will read the code, there are a lot of
style adherents amongst them, so let's keep them satisfied and our mail boxes
free of messages like "first change your coding style before submitting
anything" (btw, I remember one of the answers after the publishing of the first
i-pipe patch was of that kind). btw, there are some parts of the kernel (e.g.
some filesystems, if I'm not wrong) that use another style;




Allowing people to agree on a common coding style, which is basically K&R for 
the presentation part, and a few recommendations which belong to common sense, 
is usually the best way to avoid further ludicrous flamewars and unsmart 
assaults from smart asses about the cosmological importance of brace placement 
in the dynamic balance of things in the universe...



3) ok, I may assume that if a person is familiar with some coding style,
he/she gets a few percent speed-up when reading code that is of the same
style. At least, for the first few minutes :) But here we must assume that
most of the potential readers/contributors like the Linux coding
way.




As you pointed out, this is also about allowing people used to K&R style to read
this code as they would do with any contributed kernel code, without any useless
stylistic barrier to entry, would they be interested in doing so, that is.
Reading some comments on the LKML about real-time extensions (by opposition to
native real-time support a la PREEMPT_RT), it seems that there is a fair amount
of confusion, misinformation and urban legends still going around there. Part of
the reason for this is very likely that very few have actually ever tried to
look at the code. There must be some reason for that, aside from "they don't
care", that is.


err.. to sum it up, I like the current coding style, but if 1-3) + smth else
are worthy things, then why not. I'll get used to another style, if the code
remains err... well-readable :)




The basic idea I had for RTAI/fusion has not changed with Xenomai "reloaded": we 
are working hard to have the highest possible level of integration with the 
Linux kernel while keeping the advantages the co-scheduling approach still has 
over any purely native implementation. In order to achieve this, it would make 
sense to share as many (reasonable) conventions as possible with the kernel people.





--




Philippe.




---
Best regards,
Dmitry




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Xenomai v2.0

2005-10-23 Thread Philippe Gerum


The first stable release of the former "fusion" effort is now available for
download. I have not much more to say, except thanks to everyone involved in
this tireless work since 2001. v2.0 is an important milestone in the life of
this project, and as such, it paves the way to the seamlessly integrated
real-time framework for Linux we strive to build.


The development effort is continuing according to the partial roadmap posted 
earlier on the Xenomai-core mailing list [1]. Minor revisions of this release 
starting at 2.0.1 will be issued in parallel to the work on the next 2.1 version.


Source code: http://download.gna.org/xenomai/stable/xenomai-2.0.tar.bz2
On-line documentation: http://download.gna.org/xenomai/documentation/html/api/
In-depth tech. articles: http://download.gna.org/xenomai/documentation/pdf/
Development workspace: http://gna.org/projects/xenomai

[1] https://mail.gna.org/public/xenomai-core/2005-10/msg00107.html

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH: 0/3] powerpc merge

2005-10-24 Thread Philippe Gerum

Heikki Lindholm wrote:
Merge 32- and 64-bit powerpc architectures into a common powerpc arch in
anticipation of the similar merge happening in the Linux kernel (possibly in
2.6.15). The amount of shared code between 32- and 64-bit ppc is substantial
and I don't see anything changing that. These patches will simplify
maintenance. In case the mailing list rejects the largish attachments
I'll see if I can use the patch tracker on gna instead.




Ok, thanks (the patches made their way). Will merge in the next-to-come 2.1 branch 
that should appear this weekend. I have made progress separating the kernel and 
user-space supports, and compiling the whole set of Xenomai modules inside the 
kernel, so basically we just need to wait for the user-space support to be 
compilable anew for the branch to appear.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] problem and solution with adeos-ipipe-2.6.13-ppc-1.0-03.patch

2005-10-24 Thread Philippe Gerum

Fillod Stephane wrote:

Hi,

The adeos-ipipe-2.6.13-ppc-1.0-03.patch file in the Xenomai-2.0 dist appears
to be broken. The unified diff format is wrong on include/asm-ppc/ipipe.h,
which breaks include/asm-ppc/mmu_context.h. I found the problem while
applying the patch on a Linux 2.6.13 kernel.
On line 1664 (no pun :), instead of @@ -0,1 +1,178 @@, the following solves
the problem here: @@ -0,0 +1,177 @@
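
For reference, a unified diff hunk header follows the pattern

@@ -<old-start>,<old-count> +<new-start>,<new-count> @@

so for a file created from scratch, as ipipe.h is here, the old side must
read -0,0, and the new-side count must match the number of added lines in
the hunk (177), which is exactly what the corrected header expresses.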



Gahh... Ok.


Did somebody else encounter the problem?


For the record, here is for 2.6.13-ppc-1.0-03 the result of klatency, 
with only calibrator as a load generator.


KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|6420|   0|00:03:17 1.0-00
RTS|   -6150|   -4384|   12180|   0|00:03:13 1.0-01
RTS|   -6150|   -4183|   12480|   0|00:03:38 1.0-02
RTS|   -6120|   -4447|8370|   0|00:03:47 1.0-02+
RTS|   -6210|   -5381|   12390|   0|00:03:38 1.0-03



Ok, I need to check some changes between -02 and -03 before sending -04 out.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] problem and solution with adeos-ipipe-2.6.13-ppc-1.0-03.patch

2005-10-24 Thread Philippe Gerum

Aristeu Sergio Rozanski Filho wrote:

Hi,

The adeos-ipipe-2.6.13-ppc-1.0-03.patch file in the Xenomai-2.0 dist appears
to be broken. The unified diff format is wrong on include/asm-ppc/ipipe.h,
which breaks include/asm-ppc/mmu_context.h. I found the problem while
applying the patch on a Linux 2.6.13 kernel.
On line 1664 (no pun :), instead of @@ -0,1 +1,178 @@, the following solves
the problem here: @@ -0,0 +1,177 @@

Did somebody else encounter the problem?


I did.
maybe it's a good reason for 2.0.1?



Maybe, as we need to put perfectly buildable stuff on-line before making some 
noise, as Jan put it. Time for discovering other rough edges before 2.0.1 is 
rolled out, I guess.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug-reminder] user/kernel space header deps

2005-10-24 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:



Hi,

just to avoid that this issue gets lost during the migration to Xenomai:

It's still not possible to compile a C++ POSIX program with CFLAGS
obtained via "xeno-config --posix-ldflags". This is due to the fact that
low-level, C++-incompatible headers get included in that case. Moreover,
the same scenario works for native skin programs only by chance at the
moment.

In the long term, a clear separation between the types, defines, function
prototypes, etc. needed for the user API on the one side and for core
compilation on the other is required.



Without duplicating definitions and ABI information, otherwise this
would be an absolute nightmare. Suggestions welcome.




To pick up this issue again (as it's biting me more and more...):

What precisely prevents us at the moment from removing the
-I from all userspace build steps, both Xenomai's own
libraries as well as external rt-applications?



Because a few things like asm/atomic.h and linux/bitops.h are wanted from the 
target header base for compiling some bits of user-space stuff. Not pretty, but 
currently needed. This is probably what needs to be fixed, in which case the -I 
directive would become useless in the same move.



Gilles explained to me that at least asm/atomic.h is used by certain
parts like UVM (or only UVM?), and that including this header directly
from /usr/include fails on Red Hat/Fedora boxes. Are there any further
problems? At least on my SuSE 10 everything still compiles fine
(including UVM) when I remove the kernel headers from XENO_USER_CFLAGS
in configure.in.



It's not an issue with such inclusion failing/passing, it's just that it would 
be incorrect to include your host distro's headers for that purpose, since what 
we need here is the _target_ stuff. The fact that it works on your box is just 
because 1) your host == your target arch, and 2) your host header base does not 
seem to implement guards preventing the use of kernel headers in user-space 
context. In cross compilation context, such inclusion would simply break the 
build, since it expects the target architecture headers to be used, not the 
host's.



Jan




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug-reminder] user/kernel space header deps

2005-10-24 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:



Philippe Gerum wrote:



Jan Kiszka wrote:




Hi,

just to avoid that this issue gets lost during the migration to Xenomai:

It's still not possible to compile a C++ POSIX program with CFLAGS
obtained via "xeno-config --posix-ldflags". This is due to the fact
that
low-level, C++-incompatible headers get included in that case.
Moreover,
the same scenario works for native skin programs only by chance at the
moment.

In the long term, a clear separation between the types, defines, function
prototypes, etc. needed for the user API on the one side and for core
compilation on the other is required.



Without duplicating definitions and ABI information, otherwise this
would be an absolute nightmare. Suggestions welcome.




To pick up this issue again (as it's biting me more and more...):

What precisely prevents us at the moment from removing the
-I from all userspace build steps, both Xenomai's own
libraries as well as external rt-applications?



Because a few things like asm/atomic.h and linux/bitops.h are wanted
from the target header base for compiling some bits of user-space stuff.
Not pretty, but currently needed. This is probably what needs to be
fixed, in which case the -I directive would become useless in the same
move.



Gilles explained to me that at least asm/atomic.h is used by certain
parts like UVM (or only UVM?), and that including this header directly
from /usr/include fails on Red Hat/Fedora boxes. Are there any further
problems? At least on my SuSE 10 everything still compiles fine
(including UVM) when I remove the kernel headers from XENO_USER_CFLAGS
in configure.in.



It's not an issue with such inclusion failing/passing, it's just that it
would be incorrect to include your host distro's headers for that
purpose, since what we need here is the _target_ stuff. The fact that it
works on your box is just because 1) your host == your target arch, and
2) your host header base does not seem to implement guards preventing
the use of kernel headers in user-space context. In cross compilation
context, such inclusion would simply break the build, since it expects
the target architecture headers to be used, not the host's.




I see, so the problem is the pre-set link of /usr/include/asm to
/usr/include/asm-i386 in my case.

Anyway, it seems that very little code is involved, so it should be
possible either to find the related headers in the standard libc or to copy
those few atomic ops into Xenomai's arch-dependent includes.


Yes, removing that need is the way to go.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] RTDM abstraction for priority direction

2005-10-25 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

it turned out that it is useful to abstract the priority
increment/decrement by one level at the RTDM layer - for systems that
use a different scheme compared to POSIX or Xenomai (so far only classic
RTAI). Please apply.


Applied, thanks.



Jan




Index: skins/rtdm/rtdm_driver.h
===
--- skins/rtdm/rtdm_driver.h(Revision 72)
+++ skins/rtdm/rtdm_driver.h(working copy)
@@ -831,8 +831,16 @@
 #define RTDM_TASK_HIGHEST_PRIORITY  XNCORE_HIGH_PRIO
 /** @} */

+/*!
+ * @anchor changetaskprio @name Task Priority Modification
+ * Raise or lower task priorities by one level
+ * @{ */
+#define RTDM_TASK_RAISE_PRIORITY(+1)
+#define RTDM_TASK_LOWER_PRIORITY(-1)
 /** @} */

+/** @} */
+
 static inline int rtdm_task_init(rtdm_task_t *task, const char *name,
  rtdm_task_proc_t task_proc, void *arg,
  int priority, __u64 period)




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] serial driver fixes/improvements

2005-10-27 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

this patch improves the behaviour of xeno_16550A on
RTSER_RTIOC_EVENT_WAIT. In case it is invoked from non-RT context, the driver
tries to trigger an automatic switch-back to RT by returning -ENOSYS.
The patch also fixes another remaining issue about the right context
when calling RTSER_RTIOC_SET_CONFIG (I forgot the corner case of releasing
the RX history buffer in RT).

Note that I intentionally did not apply the EPERM->ENOSYS scheme to
RTSER_RTIOC_SET_CONFIG. Here a switch to non-RT might be required if the
device was opened in non-RT and a reconfigure request regarding the RX
history buffer is now issued in RT. I think it's better to let the user
decide, via a preceding explicit switch to non-RT, whether leaving RT is
really correct.


Applied, thanks.



The patch furthermore contains a README describing the device setup. I
hope I addressed potential questions and problems sufficiently, feedback
is welcome.

Jan




Index: drivers/16550A/Kconfig
===
--- drivers/16550A/Kconfig  (revision 73)
+++ drivers/16550A/Kconfig  (working copy)
@@ -3,4 +3,5 @@
bool "16550A UART driver"
default n
help
-   Real-time UART driver for 16550A compatible controllers.
+   Real-time UART driver for 16550A compatible controllers. See
+   drivers/16550A/README for more details.
Index: drivers/16550A/16550A.c
===
--- drivers/16550A/16550A.c (revision 73)
+++ drivers/16550A/16550A.c (working copy)
@@ -556,18 +556,24 @@
 config = &config_buf;
 }
 
-if (testbits(config->config_mask, RTSER_SET_TIMESTAMP_HISTORY) &&
-testbits(config->timestamp_history,
- RTSER_RX_TIMESTAMP_HISTORY)) {
-if (test_bit(RTDM_CREATED_IN_NRT, &context->context_flags)) {
-if (rtdm_in_rt_context())
-return -EPERM;
+if (testbits(config->config_mask, RTSER_SET_TIMESTAMP_HISTORY)) {
+if (test_bit(RTDM_CREATED_IN_NRT, &context->context_flags) &&
+rtdm_in_rt_context()) {
+/* already fail if we MAY allocate or release a non-RT
+ * buffer in RT context */
+return -EPERM;
+}
 
-hist_buf = kmalloc(IN_BUFFER_SIZE * sizeof(__u64),
-   GFP_KERNEL);
-} else
-hist_buf =
-rtdm_malloc(IN_BUFFER_SIZE * sizeof(__u64));
+if (testbits(config->timestamp_history,
+ RTSER_RX_TIMESTAMP_HISTORY)) {
+if (test_bit(RTDM_CREATED_IN_NRT,
+ &context->context_flags))
+hist_buf = kmalloc(IN_BUFFER_SIZE * sizeof(__u64),
+   GFP_KERNEL);
+else
+hist_buf =
+rtdm_malloc(IN_BUFFER_SIZE * sizeof(__u64));
+}
 
 if (!hist_buf)
 return -ENOMEM;
@@ -643,7 +649,7 @@
 rtdm_toseq_ttimeout_seq;
 
 if (!rtdm_in_rt_context())
-return -EPERM;
+return -ENOSYS;
 
 /* only one waiter allowed, stop any further attempts here */

 if (test_and_set_bit(0, &ctx->ioc_event_lock))
@@ -1005,7 +1011,7 @@
 device_class:   RTDM_CLASS_SERIAL,
 device_sub_class:   RTDM_SUBCLASS_16550A,
 driver_name:"rt_16550A",
-driver_version: RTDM_DRIVER_VER(1, 1, 1),
+driver_version: RTDM_DRIVER_VER(1, 1, 2),
 peripheral_name:"UART 16550A",
 provider_name:  "Jan Kiszka",
 };
Index: drivers/16550A/README
===
--- drivers/16550A/README   (revision 0)
+++ drivers/16550A/README   (revision 0)
@@ -0,0 +1,34 @@
+Real-Time Serial Driver for 16550A-Compatible Devices
+=
+
+Preparation
+---
+ - decide which serial ports are to be managed by the real-time driver
+
+ - identify their I/O addresses and IRQ numbers:
+
+   setserial /dev/ttyS
+
+ - disable the Linux driver for all these devices:
+
+   setserial /dev/ttyS uart none
+
+
+Invocation
+--
+
+modprobe xeno_16550A ioaddr=[,...] irq=[,...]
+ [tx_fifo=[,...]] [start_index=]
+
+Arguments:
+ioaddr   - I/O address of device  (e.g. 0x3f8 for ttyS0)
+irq  - interrupt number of device  (e.g. 4 for ttyS0)
+tx_fifo  - Transmitter FIFO size in bytes of device , default is 16
+start_index - First device instance number to be used, default is 0
+
+
+Usage
+-
+
+The API 

Re: [Xenomai-core] [patch] clearify 16550A readme

2005-10-30 Thread Philippe Gerum

Jan Kiszka wrote:

Hope this improves the understandability.



Applied, thanks.


Jan




Index: drivers/16550A/README
===
--- drivers/16550A/README   (Revision 86)
+++ drivers/16550A/README   (working copy)
@@ -21,10 +21,14 @@
  [tx_fifo=[,...]] [start_index=]
 
 Arguments:

-ioaddr   - I/O address of device  (e.g. 0x3f8 for ttyS0)
-irq  - interrupt number of device  (e.g. 4 for ttyS0)
-tx_fifo  - Transmitter FIFO size in bytes of device , default is 16
+ioaddr  - I/O addresses of the devices, comma separated
+  (e.g. "0x3f8,0x2f8")
+irq - interrupt numbers of the devices, comma separated
+  (e.g. "4,3")
+tx_fifo - Transmitter FIFO sizes of the devices in bytes, comma
+  separated, default is 16
 start_index - First device instance number to be used, default is 0
+  (=> "rtser0")
 
 
 Usage





___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] UVM bugfix for RTAI/fusion and Xenomai 2.0

2005-10-30 Thread Philippe Gerum


People depending on the UVM support as available with RTAI/fusion should apply 
the following patch (against v0.9.1); it solves a serious bug which usually 
bites at thread creation time. The bug affects Xenomai 2.0 as well, so this fix 
will be part of Xenomai 2.0.1.


--- fusion-0.9.1/skins/uvm/syscall.c~   2005-09-09 18:16:02.0 +0200
+++ fusion-0.9.1/skins/uvm/syscall.c2005-10-30 11:22:35.0 +0100
@@ -315,8 +315,9 @@

 if (!testbits(next->status,XNSTARTED))
{
+   /* First, make sure it won't preempt us. */
+   xnpod_suspend_thread(next,XNSUSP,XN_INFINITE,NULL);
err = xnpod_start_thread(next,0,0,XNPOD_ALL_CPUS,NULL,NULL);
-   goto out;
}

 xnpod_resume_thread(next,XNSUSP);
--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] overcome kernel headers in userspace

2005-10-31 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

I started some first proof-of-concept to get rid of all kernel header
inclusions in user space. Here is a patch which so far only addresses
asm-inclusion and the i386 architecture. It may have side effects, I
didn't test it (beyond compilation).

Anyway, I hope to hear some comments on whether this direction makes sense. And
I'm looking for suggestions on how to distribute correct LDFLAGS. I hacked
XENO_ARCH_FLAGS into the makefiles, but I think this is only correct for
the internal libtool-based linking (which seems to invoke ld via gcc).
Where to get the input for "xeno-config --xeno-ldflags" from?



Ok, I just worked on the same issue too, but as part of the general build system 
revamping, so that we will have a fully split source model with a clear 
separation between kernel space support and user-space libraries very soon. 
Actually, the user-space stuff won't need to be told about any relevant kernel 
to be compiled against; this is going to be a pure ABI thing.


I need a bit more time to solve a few remaining issues that would impact the 
tree architecture, but once those are gone, I'm going to commit the initial 2.1 
dev. branch to SVN exhibiting the split model, and we will restart working on 
this issue from there, since the latter is far too intrusive source-wise to be 
bearable for the 2.0.x series.



Jan




Index: include/nucleus/asm-generic/syscall.h
===
--- include/nucleus/asm-generic/syscall.h   (Revision 87)
+++ include/nucleus/asm-generic/syscall.h   (working copy)
@@ -51,6 +51,7 @@ typedef struct xninquiry {
 #ifdef __KERNEL__
 
 #include 

+#include 
 
 struct task_struct;
 
Index: include/nucleus/asm-i386/syscall.h

===
--- include/nucleus/asm-i386/syscall.h  (Revision 87)
+++ include/nucleus/asm-i386/syscall.h  (working copy)
@@ -21,7 +21,6 @@
 #define _XENO_ASM_I386_SYSCALL_H
 
 #include 

-#include 
 #include 
 
 #ifndef __KERNEL__

Index: include/nucleus/asm-i386/atomic.h
===
--- include/nucleus/asm-i386/atomic.h   (Revision 87)
+++ include/nucleus/asm-i386/atomic.h   (working copy)
@@ -20,16 +20,27 @@
 #ifndef _XENO_ASM_I386_ATOMIC_H
 #define _XENO_ASM_I386_ATOMIC_H
 
-#include 

+typedef unsigned long atomic_flags_t;
 
 #ifdef __KERNEL__
 
 #include 

 #include 
+#include 
 
-#define atomic_xchg(ptr,v)   xchg(ptr,v)
-#define atomic_cmpxchg(ptr,o,n)  cmpxchg(ptr,o,n)
-#define xnarch_memory_barrier()  smp_mb()
+typedef atomic_t atomic_counter_t;
+
+#define xnarch_atomic_set(pcounter,i)  atomic_set(pcounter,i)
+#define xnarch_atomic_get(pcounter)atomic_read(pcounter)
+#define xnarch_atomic_inc(pcounter)atomic_inc(pcounter)
+#define xnarch_atomic_dec(pcounter)atomic_dec(pcounter)
+#define xnarch_atomic_inc_and_test(pcounter)   atomic_inc_and_test(pcounter)
+#define xnarch_atomic_dec_and_test(pcounter)   atomic_dec_and_test(pcounter)
+#define xnarch_atomic_set_mask(pflags,mask)atomic_set_mask(mask,pflags)
+#define xnarch_atomic_clear_mask(pflags,mask)  atomic_clear_mask(mask,pflags)
+#define xnarch_atomic_xchg(ptr,v)  xchg(ptr,v)
+#define xnarch_atomic_cmpxchg(ptr,o,n) cmpxchg(ptr,o,n)
+#define xnarch_memory_barrier()smp_mb()
 
 #else /* !__KERNEL__ */
 
@@ -41,28 +52,80 @@

 #define unlikely(x)__builtin_expect(!!(x), 0)
 #endif /* !likely */
 
-#include 

+typedef struct { volatile int counter; } atomic_counter_t;
 
 struct __xeno_xchg_dummy { unsigned long a[100]; };

 #define __xeno_xg(x) ((struct __xeno_xchg_dummy *)(x))
 
-static inline unsigned long atomic_xchg (volatile void *ptr,
-unsigned long x)
+#define LOCK "lock ; "
+
+#define xnarch_atomic_set(v,i) (((v)->counter) = (i))
+
+#define xnarch_atomic_get(v)   ((v)->counter)
+
+static __inline__ void xnarch_atomic_inc(atomic_counter_t *v)
+{
+   __asm__ __volatile__(
+   LOCK "incl %0"
+   :"=m" (v->counter)
+   :"m" (v->counter));
+}
+
+static __inline__ void xnarch_atomic_dec(atomic_counter_t *v)
+{
+   __asm__ __volatile__(
+   LOCK "decl %0"
+   :"=m" (v->counter)
+   :"m" (v->counter));
+}
+
+static __inline__ int xnarch_atomic_inc_and_test(atomic_counter_t *v)
 {
-__asm__ __volatile__(LOCK_PREFIX "xchgl %0,%1"
+   unsigned char c;
+
+   __asm__ __volatile__(
+   LOCK "incl %0; sete %1"
+   :"=m" (v->counter), "=qm" (c)
+   :"m" (v->counter) : "memory");
+   return c != 0;
+}
+
+static __inline__ int xnarch_atomic_dec_and_test(atomic_counter_t *v)
+{
+   unsigned char c;
+
+   __asm__ __volatile__(
+   LOCK "decl %0; sete 

Re: [Xenomai-core] Re: [Xenomai-help] General Xenomai / RTAI Skin Usage Questions

2005-10-31 Thread Philippe Gerum

Romain Lenglet wrote:

- A kernel option that causes Xenomai (or Adeos) to blatantly
malfunction or even crash is a freaking BUG, and should be
reported asap to the Xenomai-core list or the Adeos-main list.
IOW, there is no such thing as options allowed to crash your
box with Adeos/Xenomai because of some "don't care attitude";
would such a bug happen, it must and will be fixed. All the
people involved in contributing to both projects try to make
sure that any option could be enabled without risking terminal
damage to anyone's setup. The worst thing that should be
allowed to happen is high latency spots, because some options
might cause some hardware to interact badly with critical
resources Adeos/Xenomai also happen to manage.



Here is a kernel-option-related bug.
I am using a stock Debian-patched kernel with the standard Debian 
kernel config, on Pentium M and Pentium 4 machines, + the latest 
Adeos patch.
The Debian kernel configuration file, which has every option 
enabled and everything as modules, works fine with Xenomai 
except for one single option, which must be disabled:

CONFIG_PCI_MSI
This option messes with the oneshot timer (timer freezes).
(thanks to Gilles for having found this out)



Could you confirm that this issue still happens with adeos-ipipe-2.6.13-1.0-05
or higher?



Otherwise, my biggest source of problems is IRQ sharing between 
realtime and non-realtime drivers: this predictably provokes 
kernel panics. But Jan seems to be working on it.




Yes, this is another issue, more of a shortcoming of the current IRQ handling 
scheme than a bug.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-01 Thread Philippe Gerum

Dmitry Adamushko wrote:


Hi Jan,

 >
 > I have some code hanging around here which implements IRQ sharing at
 > skin level for an experimental in-house development over Xenomai. The
 > code is smart enough to register an IRQ sharing trampoline handler only
 > in case sharing is actually practiced for a specific line.

Could you be a bit more specific on what is meant by "...sharing is 
actually practiced for a specific line"?


To my knowledge, the matter is only about whether a certain device 
(driver) permits the earlier obtained irq line to be shared with other 
devices.
i.e. a driver [1] may succeed with an irq registration request in case 
another driver [2] already holds this line, provided both [1] and [2] have 
specified the SA_SHIRQ flag.



 > I think it would be possible to break this out and generate a mainline
 > patch. Anyway, the question for me is where to put this best, at skin
 > (RTDM?) or at nucleus level? Both is technically feasible, but which way
 > is desired? (I would vote for the nucleus...)

If we have a policy that all the drivers should be implemented on top of 
RTDM, then, it can be done there. If no (and I guess so), this feature 
should be common and I'd vote for the nucleus.


Actually, now that we have a decent driver model built in, I will enforce the 
rule that all Xenomai mainline drivers must be based on RTDM, because we do need 
such a common platform to prevent a hysteria of braindamaged calling interfaces. This 
will be a good opportunity to see how flexible the thing is when confronted with 
the needs of various hardware and semantics.


It seems to me now that some parts of the HAL will be involved 
(rthal_irq_request/release()) since the nucleus itself doesn't keep 
track of registered irqs.



 >
 > Jan
 >

---
Best regards,
Dmitry




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-01 Thread Philippe Gerum

Jan Kiszka wrote:

Dmitry Adamushko wrote:


Hi Jan,



I have some code hanging around here which implements IRQ sharing at
skin level for an experimental in-house development over Xenomai. The
code is smart enough to register an IRQ sharing trampoline handler only
in case sharing is actually practiced for a specific line.


Could you be a bit more specific on what is meant by "...sharing is
actually practiced for a specific line"?



Ok, one has to keep in mind that my existing code is built on top of the
non-sharing xnintr_xxx API. This means that I had to define a trampoline
ISR which does the loop over all registered end-user ISRs. And this
intermediate handler is only involved when A) the IRQ is sharable and B)
there is actually more than one ISR registered for it.

This "smartness" becomes obsolete when we embed a doubly-linked list
into xnintr_t and already iterate over it in xnintr_irq_handler().



To my knowledge, the matter is only about whether a certain device (driver)
permits the earlier obtained irq line to be shared with other devices.
i.e. a driver [1] may succeed with an irq registration request in case
another driver [2] already holds this line but both [1] and [2] have
specified a SA_SHIRQ flag.



Yep.





I think it would be possible to break this out and generate a mainline
patch. Anyway, the question for me is where to put this best, at skin
(RTDM?) or at nucleus level? Both is technically feasible, but which way
is desired? (I would vote for the nucleus...)


If we have a policy that all the drivers should be implemented on top of
RTDM, then it can be done there. If not (and I guess that's the case), this feature
should be common and I'd vote for the nucleus.



Drivers should be built over RTDM, that's true. But there may still be
driver-like applications, also in user space that attach directly to the
IRQs via the various skin APIs. I think it would be good to let them
live side-by-side with RTDM drivers or other IRQ-using applications.



It seems to me now that some parts of the HAL will be involved
(rthal_irq_request/release()) since the nucleus itself doesn't keep track
of registered irqs.



That's true. And it also raises another question to me: why do we have
those two different IRQ models?

The HAL has only one handler per IRQ, which gets called with the triggering
IRQ number. That handler will call the nucleus with an attached cookie.
And on the other side is the nucleus, which works with one xnintr_t per
IRQ. xnintr_irq_handler() deals with things like re-enabling IRQs,
rescheduling, etc.

I'm asking because this abstraction adds one trampoline call (at HAL level),
and thus may lead to more I-cache misses. Isn't it worth considering some
HAL mechanisms based on more #defines and static inlines in this regard?



While we are at it, we could just move the HAL's trampoline part to the 
arch/system.h support. Two things to keep in mind: Adeos does not provide 
cookies (yeah, what a pity), but passes an IRQ number the xnarch level doesn't 
grok. Some low-level prototypes would have to be fixed and the cookie array 
moved, but basically, getting rid of the initial trampoline seems like a good 
idea since it brings nothing to the picture.



Jan





___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-01 Thread Philippe Gerum

Dmitry Adamushko wrote:

On Monday 31 October 2005 16:04, you wrote:


Dmitry Adamushko wrote:


It seems to me now that some parts of the HAL will be involved
(rthal_irq_request/release()) since the nucleus itself doesn't keep track
of registered irqs.


That's true. And it also raises another question to me: why do we have
those two different IRQ models?

The HAL has only one handler per IRQ, which gets called with the triggering
IRQ number. That handler will call the nucleus with an attached cookie.
And on the other side is the nucleus, which works with one xnintr_t per
IRQ. xnintr_irq_handler() deals with things like re-enabling IRQs,
rescheduling, etc.

I'm asking because this abstraction adds one trampoline call (at HAL level),
and thus may lead to more I-cache misses. Isn't it worth considering some
HAL mechanisms based on more #defines and static inlines in this regard?



Let's take a look at what we have got currently:

[1] ipipe_domain::irqs[IPIPE_NR_IRQS]   [ADEOS-IPIPE]

the handler is defined as void (*handler)(unsigned irq);

in our case, this is rthal_irq_trampoline() [2] , but can be different for 
some other cases;


[2] rthal_irq_trampoline()  [HAL]

struct rthal_realtime_irq[IPIPE_NR_IRQS]

the handler is defined as void (*handler)(unsigned irq, void *cookie);

this one normally does a simple thing, just calls xnintr_irq_handler() [3] as 
you have mentioned before.


[3] xnintr_irq_handler()[nucleus]
 
this routine calls a certain user's ISR as well as handles some 
nucleus-specific chores (re-scheduling, etc.)


[4] user's ISR  [user driver]

does user-specific things

Well, [3] is necessary anyway since some nucleus-related chores must be done 
and this is a correct layer for that (e.g. [2] knows nothing about 
scheduling).


What can be theoretically merged is [1] + [2] (errr... I said theoretically, 
it's still not the case to kill me for just having said that :o). To this 
end, ipipe_domain should be extended in order to contain all the fields of 
[2]::struct rthal_realtime_irq (at least, handler(irq, cookie) + cookie).
btw, ipipe_domain::irqs may even contain a pointer to the slightly modified 



This would mean changing a core Adeos prototype, though.

xnintr_t structure (which is really e.g. a circular list) that may be passed

as "cockie" to the xnintr_irq_handler().

The analogy is irq_desc_t vs. irqaction structures in Linux.

This way, xnintr_irq_handler() can be called from adeos-ipipe layer directly 
without the [2] layer.
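
In other words, a hypothetical sketch of the proposal (struct and field names
are illustrative only; the two handler signatures come from the layer
description above):

/* The per-IRQ slot of ipipe_domain would absorb the fields of
 * struct rthal_realtime_irq, so that layer [2] can disappear. */
struct ipipe_irq_slot {
    /* [2]-style handler; today the ipipe only knows (unsigned irq) */
    void (*handler)(unsigned irq, void *cookie);
    /* opaque for the ipipe; e.g. an xnintr_t chain for Xenomai */
    void *cookie;
};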


But that change looks quite invasive to me so far since 
ipipe_domain::irqs::handler(irq - with a single parameter) is used all over 
the map.




I'd really prefer making one invasive change early in the process of addressing 
the issue rather than several kludges later to work around structural shortcomings, so 
no problem, go wild, I'm all ears.


In our case, the relation between xnintr_irq_handler() and 
rthal_irq_trampoline() is 1:1. The first one does many more things than the 
second one, which is really almost a pure redirection layer.
Hopefully, xnintr_irq_handler() is i-cache-hot as long as possible under high 
irq load. In this case, I guess, rthal_irq_trampoline() will be in the cache as 
well (since it's really small) and the overhead is only about having one 
array indexing op. and issuing a call via a pointer to the function 
(xnintr_irq_handler() in our case).
Do you think that really gives a significant overhead? Well, maybe so. I'm not 
a pro here anyway...




Jan



---
Best regards,
Dmitry

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-01 Thread Philippe Gerum

Jan Kiszka wrote:

Dmitry Adamushko wrote:


On Monday 31 October 2005 16:04, you wrote:



Dmitry Adamushko wrote:



It seems to me now that some parts of the HAL will be involved
(rthal_irq_request/release()) since the nucleus itself doesn't keep track
of registered irqs.


That's true. And it also raises another question to me: why do we have
those two different IRQ models?

The HAL has only one handler per IRQ, which gets called with the triggering
IRQ number. That handler will call the nucleus with an attached cookie.
And on the other side is the nucleus, which works with one xnintr_t per
IRQ. xnintr_irq_handler() deals with things like re-enabling IRQs,
rescheduling, etc.

I'm asking because this abstraction adds one trampoline call (at HAL level),
and thus may lead to more I-cache misses. Isn't it worth considering some
HAL mechanisms based on more #defines and static inlines in this regard?



Let's take a look at what we have got currently:

[1] ipipe_domain::irqs[IPIPE_NR_IRQS]   [ADEOS-IPIPE]

the handler is defined as void (*handler)(unsigned irq);

in our case, this is rthal_irq_trampoline() [2] , but can be different for 
some other cases;


[2] rthal_irq_trampoline()  [HAL]

struct rthal_realtime_irq[IPIPE_NR_IRQS]

the handler is defined as void (*handler)(unsigned irq, void *cookie);

this one normally does a simple thing, just calls xnintr_irq_handler() [3] as 
you have mentioned before.


[3] xnintr_irq_handler()[nucleus]

this routine calls a certain user's ISR as well as handles some 
nucleus-specific chores (re-scheduling, etc.)


[4] user's ISR  [user driver]

does user-specific things

Well, [3] is necessary anyway since some nucleus-related chores must be done 
and this is a correct layer for that (e.g. [2] knows nothing about 
scheduling).


What can be theoretically merged is [1] + [2] (errr... I said theoretically, 
it's still not the case to kill me for just having said that :o). To this 
end, ipipe_domain should be extended in order to contain all the fields of 
[2]::struct rthal_realtime_irq (at least, handler(irq, cookie) + cookie).
btw, ipipe_domain::irqs may even contain a pointer to the slightly modified 
xnintr_t structure (which is really e.g. a circular list) that may be passed 
as "cockie" to the xnintr_irq_handler().


The analogy is irq_desc_t vs. irqaction structures in Linux.

This way, xnintr_irq_handler() can be called from adeos-ipipe layer directly 
without the [2] layer.


But that change looks quite invasive to me so far since 
ipipe_domain::irqs::handler(irq - with a single parameter) is used all over 
the map.


In our case, the relation between xnintr_irq_handler() and 
rthal_irq_trampoline() is 1:1. The first one does many more things than the 
second one, which is really almost a pure redirection layer.
Hopefully, xnintr_irq_handler() is i-cache-hot as long as possible under high 
irq load. In this case, I guess, rthal_irq_trampoline() will be in the cache as 
well (since it's really small) and the overhead is only about having one 
array indexing op. and issuing a call via a pointer to the function 
(xnintr_irq_handler() in our case).
Do you think that really gives a significant overhead? Well, maybe so. I'm not 
a pro here anyway...





...compared to the usefulness I still have to understand - yes.

Other option: what about merging [2] into [3], i.e. let
xnintr_irq_handler deal with the translation IRQ number -> cookie?



That's an option, yes. The other one being to teach Adeos to pass an additional 
cookie, but in such a case, only the i-pipe series would be upgraded to allow 
that, which would be a bit rude to people still currently relying on the legacy 
oldgen patches for running Xenomai. This said, at some point in time, upgrading 
will be necessary though.



Philippe, I guess your wisdom is required here. Are we missing some
important point in your design right now?



Mmm, for the wisdom thing, please see there: 
http://fr.wikipedia.org/wiki/Mahatma_Gandhi


For the design issue, I agree that the initial trampoline is just a waste of 
cache lines and cycles. The only thing to keep in mind is to preserve a sensible 
layering so that the whole thing is still able to run over the event-driven 
simulation engine we have, but AFAICS, fixing the xnarch layer everywhere would 
grant that.



Jan




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-01 Thread Philippe Gerum

Jan Kiszka wrote:

Dmitry Adamushko wrote:


On Tuesday 01 November 2005 12:58, you wrote:



as "cockie" to the xnintr_irq_handler().

The analogy is irq_desc_t vs. irqaction structures in Linux.

This way, xnintr_irq_handler() can be called from adeos-ipipe layer
directly without the [2] layer.

But that change looks quite invasive to me so far since
ipipe_domain::irqs::handler(irq - with a single parameter) is used all
over the map.


I'd really prefer making one invasive change early in the process of
addressing the issue rather than several kludges later to work around structural
shortcomings, so no problem, go wild, I'm all ears.



If we only want to get rid of the trampoline thing, then [2] + [3] would work 
out (btw, I have sent a message this morning where I tried to provide even 
some pseudo-code :)

But if we want to (think that we may) gain the advantage of having a more 
flexible irq-related support from the ipipe layer, then yep, those changes 
might look worthy. I thought that this way, we would even get rid of another 
per-irq (rthal_realtime_irq) array in hal/generic.c, maybe even of 
rthal_linux_irq too. The sole one would be provided by the ipipe_domain structure 
and a set of generic interfaces, e.g. via system.h, so that the HAL or another 
layer may get access to it.


e.g.

the "cookie" remains opaque for the ipipe but when requested by 
HAL::rthal_irq_request() or NUCLEUS::xnintr_irq_handler() it's treated as a 
chain of ISR handlers.





Yep, that's also what I had in mind about potential ipipe changes and
their use in the nucleus.



Ok, let's go for those changes this way:

1. The I-pipe series needs to be updated so that an opaque cookie is passed to 
the handler; since we have a change in the interface, the 1.1 series has to be 
started for this purpose.


2. In order to give the people running the legacy RTAI/fusion and Xenomai 2.0.x 
series a reasonable amount of time to upgrade their patchset, the IRQ layer 
updates (sharing and trampoline suppression) will go to the Xenomai 2.1 dev 
branch. IOW, Xenomai 2.1 will be exclusively based on the I-pipe 1.1 series, 
which also means that Xenomai support for the oldgen Adeos and I-pipe 1.0 
patches will be discontinued after the Xenomai 2.0.x series is closed.


3. Changes in the IRQ layer will be made at nucleus level, which is the most 
efficient way to provide them.


It should be noted that as part of the build system refactoring, the real-time 
HAL has become a static portion of the Linux kernel, with its generic part being 
moved to the nucleus. IOW, the proposed changes will basically end up as 
redispatching some code inside the nucleus.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Dev branch 2.1

2005-11-02 Thread Philippe Gerum


A dev branch toward v2.1 has been created. It features a new build system so 
that Xenomai now follows a split source model, decoupling the kernel space 
support from the user-space libraries used in accessing the former.


It's work in progress, and there are still a lot of things to be done in order to 
finish the transition from a tightly coupled source-based system to the 
decoupled ABI-based model, but a lot has been done already. What remains is 
basically a slew of details to be fixed, since I had to somehow "disconnect" a 
number of things before attempting to connect them back gradually.


All in all, my perception of this beast is that it's way simpler than the older 
fully autoconf-based system which tended to become overly complex over time.


How does this work?

Simple and easy. The Xenomai distro now comes with a pure kernel source part 
(ksrc/) and a pure user-space source part (src/). A few symlinks are made from 
the target kernel to the ksrc/ tree, so that Linux's Kconfig/Kbuild just sees 
Xenomai as a regular set of mainline drivers/extensions. A shell script 
(scripts/prepare-kernel.sh) automates the kernel preparation so that it 
eventually includes the Xenomai support. The user-space side is autoconfiscated, 
and only builds the interface libraries used to issue Xenomai syscalls, and the 
UVM libraries; period.


The HAL has been made a static part of the kernel, since it has an 
architecture-dependent section, is totally silent when the nucleus does not use 
it, and weighs less than 10 Kb on any platform. Additionally, if something goes 
wrong in the HAL, well, there is little chance one is going to be able to 
unload it, fix it and try it again. Under such circumstances, the box almost 
certainly went out of the window anyway, so providing for the modular form is 
just, mhhh, braindamage.


The simulator will be shipped as a separate tarball starting from 2.1. It is 
still part of the dev trunk though.


Advantages:

- The user-space support does not have to know about the target kernel. A set of 
features required by the former is just checked for availability when an 
application binds to a skin built in the latter. This ABI separation is going to 
be enforced; the idea is plain simple: I see absolutely no reason why the user-space 
Xenomai interfaces should play a different role for RT applications than the one 
the glibc plays in interfacing regular applications with the kernel services.


- Normally, no kernel headers are included anymore from user-space programs. 
This should solve the C++ issues in the same move. If some mistakenly remain, we 
will have to fix them.


- The nucleus and all skins can be either compiled as modules, or statically 
built into the kernel (a glitch remains with POSIX due to a redefinition of 
sem_init(), but this is going to be fixed).


- Binary packaging should be made simpler.

- Xenomai actually looks like a more seamlessly integrated extension of the 
Linux kernel, since the kernel space is fully under the control of the vanilla 
Kbuild/Kconfig stuff because it is fully part of the kernel tree now. One just 
has to open the "Real-time sub-system" menu from the Kconfig toplevel to access 
the complete Xenomai setup, no more error-prone multiple configuration sources. 
I've also killed the Expert mode, all options are visible at once, so that Xeno 
conforms better to the Linux configuration rules. Help strings should be 
revisited in order to explain which defaults are sane though.


Pitfalls/Caveats:

- Compilation of modules is now out of the scope of Xenomai's build system; it's a pure 
Linux thing anew, so no support is provided for doing this anymore. For 
instance, xeno-config has half as many options as before, because all the kernel 
compilation related stuff has disappeared. Kernel modules should be compiled the 
Linux way (i.e. cd your_module && make -C  M=$PWD).


- Don't search for klatency; it's not available yet. The plan is to integrate 
the kernel portion as a built-in test module, and extend the user-space latency 
tool so that it can also be used to get the sampling results from the in-kernel 
module too.


- The cruncher has been removed, because the test case was not sensible anymore.

- Make install currently installs half of the available headers, and basically 
none that would be useful to build kernel modules. It is still possible to find 
those from the kernel tree, but this also requires to leave the full Xenomai 
sources accessible. This likely has to be fixed, but we need a round of 
discussion about what should be available from the installdir first. Btw, I've 
moved the default installation dir to /usr/xenomai, so that we couldn't 
spuriously mix pre-2.1 and 2.1 setups.


- Make dist will not work properly.

I've quickly updated the README.INSTALL file so that the new build procedure is 
explained. This document will need to be fleshed out though, since it's rather 
terse for now.


Again, all this is very new, so review and comments are welcome.

Re: [Xenomai-core] [RFC] support for sharing IRQs

2005-11-02 Thread Philippe Gerum

Bernard Dautrevaux wrote:
 




-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On behalf of Philippe Gerum

Sent: Tuesday, November 1, 2005 18:30
To: Jan Kiszka
Cc: xenomai-core
Subject: Re: [Xenomai-core] [RFC] support for sharing IRQs





Ok, let's go for those changes this way:

1. The I-pipe series needs to be updated so that an opaque 
cookie is passed to the handler; since we have a change in 
the interface, the 1.1 series has to be started for this purpose.


2. In order to give the people running the legacy RTAI/fusion 
and Xenomai 2.0.x series a reasonable amount of time to 
upgrade their patchset, the IRQ layer updates (sharing and 
trampoline suppression) will go to the Xenomai 2.1 dev 
branch. IOW, Xenomai 2.1 will be exclusively based on the 
I-pipe 1.1 series, which also means that Xenomai support for 
the oldgen Adeos and I-pipe 1.0 patches will be discontinued 
after the Xenomai 2.0.x series is closed.



I agree with all that is said in this post; however there is just a small
problem: a very useful tool for xenomai application debugging and tuning is
LTT; however the only available Adeos+LTT patch is not an ipipe one, but an
old linux-2.6.9 kernel patch.

At least the LTT support should be available with an ipipe-based Adeos-1.1
patch for 2.6.9 (waiting for LTT to support a more recent kernel), so that
LTT is not lost for xenomai (as it seems to be, in fact, for RTAI). 



LTT has been undergoing a significant refactoring recently, so there has been 
little incentive to go for a combo Adeos+LTT patch over a moving target; this is 
the reason why Alex - the LTT support maintainer for Xenomai - has focused on a 
2.6.9 kernel featuring the previous LTT architecture, and this was a good 
decision. Upgrading this combo will be done in the I-pipe 1.1 timeframe over the 
newest LTT support, for sure, basically to get rid of the oldgen Adeos patches 
for Xenomai completely.


RTAI had problems maintaining the LTT support because of the lack of a 
maintainer; we do have one. This said, the best way you could contribute to this 
is by crafting a prototype combo between I-pipe 1.0 and a recent LTT core (i.e. the 
one that relies on the refactored relayfs stuff), especially if you do consider 
this support a critical feature. I guess that Alex would be fine working on 
this base later.



Bernard


3. Changes in the IRQ layer will be made at nucleus level, 
which is the most efficient way to provide them.


It should be noted that as part of the build system 
refactoring, the real-time HAL has become a static portion of 
the Linux kernel, with its generic part being moved to the 
nucleus. IOW, the proposed changes will basically end up as 
redispatching some code inside the nucleus.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core







___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Dev branch 2.1

2005-11-03 Thread Philippe Gerum

Hannes Mayer wrote:

Ciao Philippe!

prepare-kernel.sh works well - I'd suggest asking the user for
the 3 needed parameters interactively, instead of requiring them on the command line.
e.g.
# scripts/prepare-kernel.sh
Linux directory: [default: /usr/src/linux] :
Adeos-patch: [default: none] :
Architecture: [default: i386]

...or something like that.



Yep, good idea.


./configure also went well, but then:

# make
Making all in src
make[1]: Entering directory `/usr/src/xenomai-2.1/src'
Making all in include
make[2]: Entering directory `/usr/src/xenomai-2.1/src/include'
make  all-recursive
make[3]: Entering directory `/usr/src/xenomai-2.1/src/include'
Making all in asm-uvm
make[4]: Entering directory `/usr/src/xenomai-2.1/src/include/asm-uvm'
make[4]: stat:system.h: Too many levels of symbolic links
make[4]: *** No rule to make target `system.h', needed by `all-am'.  Stop.
make[4]: Leaving directory `/usr/src/xenomai-2.1/src/include/asm-uvm'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/usr/src/xenomai-2.1/src/include'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/usr/src/xenomai-2.1/src/include'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/xenomai-2.1/src'
make: *** [all-recursive] Error 1


/usr/src/xenomai-2.1/src/include/asm-uvm# ll
total 36
-rw-r--r--  1 root root 12780 2005-11-03 01:03 Makefile
-rw-r--r--  1 cap  cap 67 2005-11-02 21:36 Makefile.am
-rw-r--r--  1 cap  cap  13046 2005-11-02 21:36 Makefile.in
lrwxrwxrwx  1 root root39 2005-11-03 01:03 system.h -> 
../../.././src/include/asm-uvm/system.h




Try building out of the source tree; it's something I'm likely going to enforce 
anyway.



Best regards,
Hannes.




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Dev branch 2.1

2005-11-03 Thread Philippe Gerum

Hannes Mayer wrote:

Philippe Gerum wrote:
[...]

The surprise is that xeno_native is statically built in by default. 
You can change that by selecting the proper tristate position in the 
kernel config for the native skin.



So everything (even the 16550 driver) is compiled in by default?
I didn't look that closely last night - I like that! No more insmodding :-)

I just tried two of my programs, but each says:

Xenomai: native skin or user-space support unavailable.
(did you load the xeno_native.ko module?)

kern.log says:
I-pipe: Domain Xenomai registered.
Xenomai: hal/x86 started.
Xenomai: real-time nucleus v2.1 (Surfing With The Alien) loaded.
Xenomai: starting native API services.
Xenomai: starting RTDM services.

What am I missing ?



The feature sets of the user-space and kernel supports are likely mismatching. Try 
passing --enable-x86-tsc to configure. More explicit messages will be printed 
out in later versions, since we do have appropriate return codes passed back 
internally to distinguish the cases.



Thanks and best regards,
Hannes.




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Xenomai v2.0.1

2005-11-06 Thread Philippe Gerum


This is a minor release fixing a few glitches found in 2.0. People using the UVM 
support will find a single but important bug fix there though. Aside from this, 
the most noticeable thing is the upgrade of the Adeos/ppc and x86 patches to 2.6.14.


See the ChangeLog for details.

http://download.gna.org/xenomai/stable/xenomai-2.0.1.tar.bz2

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [BUG] scheduling order of dying shadow threads

2005-11-08 Thread Philippe Gerum

Jan Kiszka wrote:

Hi Philippe,

I think this one is for you: ;)

Sebastian almost went mad with his CAN driver while tracing a strange
scheduling behaviour during shadow thread deletion for several days(!) -
and I was right on the way to following him yesterday evening. Attached is
a simplified demonstration of the effect, consisting of an RTDM driver
and both a kernel and a user space application to trigger it.

Assume two or more user space RT threads blocking on the same RTDM
semaphore inside a driver (I was not yet able to reproduce this with a
simple native user space application :/). All of them then get woken up on
rtdm_sem_destroy during device closure. They increment a global counter,
save the current value in a per-thread variable, and then terminate.
They had also passed another per-thread variable to the RTDM driver,
which was updated in the kernel using the same(!) counter.

/* application */
void demo(void *arg)
{
rt_dev_read(dev, &value_k[(int)arg], 0);
value_u[(int)arg] = ++counter;
}

/* driver */
int demo_read_rt(struct rtdm_dev_context*context,
 rtdm_user_info_t   *user_info,
 void   *buf,
 size_t nbyte)
{
struct demodrv_context  *my_context;
int ret;


my_context = (struct demodrv_context *)context->dev_private;

ret = rtdm_sem_down(&my_context->read_sem);
*(int *)buf = ++(*counter);

return ret;
}


That global counter is also incremented during closure to visualise the
call order:


int demo_close_rt(struct rtdm_dev_context   *context,
  rtdm_user_info_t  *user_info)
{
struct demodrv_context  *my_context;


my_context = (struct demodrv_context *)context->dev_private;

printk("close 1: %d\n", xnpod_current_thread()->cprio);
rtdm_sem_destroy(&my_context->read_sem);
printk("close 2: %d\n", xnpod_current_thread()->cprio);
(*counter)++;

return 0;
}


Now one would expect the following content of the involved variables
when running 3 threads e.g.:

   thread 1  (prio 99)
 /   thread 2(prio 98)
 |  /   thread 3 (prio 97)
 |  |  /
value_k: 1, 3, 5
value_u: 2, 4, 6
counter: 7

This is indeed what we get when the application locates in kernel space,
i.e. does not use shadow threads. But when it is a user space
application, the result looks like this:

   thread 1
 /   thread 2
 |  /   thread 3
 |  |  /
value_k: 1, 4, 6
value_u: 2, 5, 7
counter: 7

Which means that the first thread returns from kernel to user space and
terminates, then the close handler gets executed again, and only
afterwards the remaining threads!

The reason is also displayed by demodrv:
close 1: 0  - prio of root thread before rtdm_sem_destroy
close 2: 99 - ... and after rtdm_sem_destroy

Which means that the non-RT thread calling rt_dev_close gets lifted to
prio 99 on calling rtdm_sem_destroy, the prio of the thread woken up
first. It seems to lose this prio quite soon again, but not soon enough to
avoid the inversion - very strange.

Any ideas?


Not yet, but your analysis looks right: the main thread seems to spuriously 
recycle the priority level of task1 as the latter enters the secondary mode. The 
good news is that it's likely that such an issue only impacts the deletion 
process, which involves some specific transitions. I'll try to reproduce this bug 
asap and let you know. Thanks.




Jan




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] rt_task_receive DOKU

2005-11-08 Thread Philippe Gerum

Ulrich Schwab wrote:

Hello,

here is a patch, making the doku of rt_task_receive()
more complete.


diff -Nru xenomai-2.0-orig/skins/native/task.c 
xenomai-2.0-devel/skins/native/task.c

--- xenomai-2.0-orig/skins/native/task.c2005-10-08 16:26:07.0 
+0200
+++ xenomai-2.0-devel/skins/native/task.c   2005-11-08 11:00:27.0 
+0100
@@ -1788,6 +1788,8 @@
  * and no remote task is currently waiting for sending a message to
  * the caller.
  *
+ * - -ETIMEDOUT is returned if no message was received within the @a timeout.
+ *
  * - -EINTR is returned if rt_task_unblock() has been called for the
  * caller before any message was available.
  *



Applied, thanks.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [16550A-PATCH] integer baud rate, type renaming

2005-11-08 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


Hi all,

and another patch: This one changes the rtserial API so that the baud
rate can now be provided as an integer instead of the previous low-level
encoded value. The baud base of a device is provided as module parameter
to the driver on insmod. Turned out that I specified a not very useful
interface in this regard. Moreover, this patch also moves to
int64_t/uint64_t types.

No regressions known, except that existing applications need a tiny
patch for setting the baud rate according to the new format. Please apply.
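
For reference, the arithmetic behind the old low-level encoding (the
RTSER_CUSTOM_BAUD macro removed further down in this patch), assuming the
standard 115200 Hz baud base of a 16550-class UART:

/* Old scheme: applications passed a rounded clock divisor, e.g.
 *
 *   RTSER_CUSTOM_BAUD(115200, 9600) = (115200 + 9600/2) / 9600 = 12
 *
 * which matches the removed RTSER_9600_BAUD define. With this patch,
 * applications pass the integer rate (9600) directly, and the driver,
 * knowing its baud base from the module parameter, computes the divisor
 * internally. */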




There was a regression in user space due to the int64_t/uint64_t types.
The attached patch addresses this in rtdm/rtdm.h by including inttypes.h
- no need for the device profile header to care about this anymore.

Please apply THIS one instead.


Applied, thanks.



Jan




Index: skins/rtdm/rtdm.h
===
--- skins/rtdm/rtdm.h   (revision 112)
+++ skins/rtdm/rtdm.h   (working copy)
@@ -48,6 +48,7 @@
 #else  /* !__KERNEL__ */
 
 #include 

+#include 
 #include 
 #include 
 
Index: skins/rtdm/rtserial.h

===
--- skins/rtdm/rtserial.h   (revision 112)
+++ skins/rtdm/rtserial.h   (working copy)
@@ -77,38 +77,13 @@
 #ifndef _RTSERIAL_H
 #define _RTSERIAL_H
 
-#include 

 #include 
 
 /*!

- * @anchor RTSER_xxx_BAUD   @name RTSER_xxx_BAUD
- * Baud rates
+ * @anchor RTSER_DEF_BAUD   @name RTSER_DEF_BAUD
+ * Default baud rate
  * @{ */
-#define RTSER_50_BAUD   2304
-#define RTSER_75_BAUD   1536
-#define RTSER_110_BAUD  1047
-#define RTSER_134_5_BAUD857
-#define RTSER_150_BAUD  768
-#define RTSER_300_BAUD  384
-#define RTSER_600_BAUD  192
-#define RTSER_1200_BAUD 96
-#define RTSER_2400_BAUD 48
-#define RTSER_3600_BAUD 32
-#define RTSER_4800_BAUD 24
-#define RTSER_7200_BAUD 16
-#define RTSER_9600_BAUD 12
-#define RTSER_19200_BAUD6
-#define RTSER_38400_BAUD3
-#define RTSER_57600_BAUD2
-#define RTSER_115200_BAUD   1
-#define RTSER_DEF_BAUD  RTSER_9600_BAUD
-
-/** Generate customised baud rate code
- * @param base UART clock base
- * @param rate baud rate
- */
-#define RTSER_CUSTOM_BAUD(base, rate) \
-((base + (rate >> 1)) / rate)
+#define RTSER_DEF_BAUD  9600
 /** @} */
 
 /*!

@@ -259,7 +234,7 @@
 typedef struct rtser_config {
 int config_mask;/**< mask specifying valid fields,
  *   see @ref RTSER_SET_xxx */
-int baud_rate;  /**< baud rate, see @ref RTSER_xxx_BAUD */
+int baud_rate;  /**< baud rate, default @ref RTSER_DEF_BAUD */
 int parity; /**< number of parity bits, see
  *   @ref RTSER_xxx_PARITY */
 int data_bits;  /**< number of data bits, see
@@ -270,13 +245,13 @@
  *   @ref RTSER_xxx_HAND */
 int fifo_depth; /**< reception FIFO interrupt threshold, see
  *   @ref RTSER_FIFO_xxx */
-__s64   rx_timeout; /**< reception timeout in ns, see
+int64_t rx_timeout; /**< reception timeout in ns, see
  *   @ref RTSER_TIMEOUT_xxx for special
  *   values */
-__s64   tx_timeout; /**< transmission timeout in ns, see
+int64_t tx_timeout; /**< transmission timeout in ns, see
  *   @ref RTSER_TIMEOUT_xxx for special
  *   values */
-__s64   event_timeout;  /**< event timeout in ns, see
+int64_t event_timeout;  /**< event timeout in ns, see
  *   @ref RTSER_TIMEOUT_xxx for special
  *   values */
 int timestamp_history;  /**< enable timestamp history, see
@@ -303,9 +278,9 @@
 int events; /**< signalled events, see
  *   @ref RTSER_EVENT_xxx */
 int rx_pending; /**< number of pending input characters */
-__u64   last_timestamp; /**< last interrupt timestamp (absolute time
+uint64_t last_timestamp;/**< last interrupt timestamp (absolute time
  *   in ns) */
-__u64   rxpend_timestamp;   /**< reception timestamp (absolute time in ns)
+uint64_t rxpend_timestamp;  /**< reception timestamp (absolute time in ns)
  *   of oldest character in input queue */
 } rtser_event_t;
 
Index: drivers/16550A/16550A.c

===
--- drivers/16550A/16550A.c (revision 1

Re: [Xenomai-core] [BUG] scheduling order of dying shadow threads

2005-11-08 Thread Philippe Gerum

Jan Kiszka wrote:

Hi Philippe,

I think this one is for you: ;)

Sebastian got almost mad with his CAN driver while tracing a strange
scheduling behaviour during shadow thread deletion for several days(!) -
and I was right on the way to follow him yesterday evening. Attached is
a simplified demonstration of the effect, consisting of a RTDM driver
and both a kernel and user space application to trigger it.



I've spotted the issue in nucleus/shadow.c. Basically, the root thread priority 
boost was leaking to a non-shadow thread due to a missing priority reset in the 
lostage APC handler, whilst a shadow was in the process of relaxing. Really 
funky bug, thanks! :o> Fixed in the repo hopefully for good. The scheduling 
sequence is now correct with your demo app on my box.


--

Philippe.



Re: [Xenomai-core] [BUG] scheduling order of dying shadow threads

2005-11-08 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Hi Philippe,

I think this one is for you: ;)

Sebastian got almost mad with his CAN driver while tracing a strange
scheduling behaviour during shadow thread deletion for several days(!) -
and I was right on the way to follow him yesterday evening. Attached is
a simplified demonstration of the effect, consisting of a RTDM driver
and both a kernel and user space application to trigger it.



I've spotted the issue in nucleus/shadow.c. Basically, the root thread
priority boost was leaking to a non-shadow thread due to a missing
priority reset in the lostage APC handler, whilst a shadow was in the
process of relaxing. Really funky bug, thanks! :o> Fixed in the repo
hopefully for good. The scheduling sequence is now correct with your
demo app on my box.




Yep, looks good here as well. Great and quick work! Just don't expect
that someone can follow your explanations easily. :)



Well, sorry. Here is a more useful explanation:

The demo thread in your code calls sleep(1) before exiting, which causes the 
underlying shadow thread to relax. The same would happen without sleeping, since 
 a terminating thread is silently relaxed by the nucleus in any case as needed.


When relaxing the current thread, xnshadow_relax() first boosts the priority of 
the root thread (i.e. the placeholder for Linux in the Xenomai scheduler) right 
before suspending itself. Before that, a wake up request has been scheduled 
(using an APC), so that lostage_handler will be called, which will in turn 
invoke wake_up_process() for the relaxing thread. This is needed because shadows 
running in primary mode are seen as suspended in the Linux sense in 
TASK_INTERRUPTIBLE state. The reason for this is that the Xenomai and Linux
schedulers must have mutually exclusive control over a shadow: they should not
be allowed to fiddle concurrently with a single thread context. Conversely,
relaxed threads operating in secondary mode are seen as suspended on the
XNRELAX condition by the nucleus.


IOW, what we want to do here is some kind of transition from the Xenomai to the
Linux scheduler for the relaxing shadow thread. This way, we make sure that the
Linux scheduler gets back in control of the awakened shadow thread, which ends
up running in secondary mode once it has been resumed by wake_up_process().


The problem is that unless we actually reset the root thread priority to the
lowest one in lostage_handler, reverting the priority boost done in
xnshadow_relax, there is a short window of time during which a normal Linux
task preempted by the APC request that runs lostage_handler could run and wreck
the scheduling sequence (e.g. your main() context). The fix downgrades the root
thread priority and wakes the relaxed shadow up in the same move, so that the
priority scheme is kept intact.
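
Schematically, the sequence looks like this; a sketch only, with illustrative
helper names (schedule_lostage_wakeup(), get_relaxing_task()), not the actual
nucleus code:

void xnshadow_relax(void)
{
    /* Queue the Linux-side wakeup; it will run later from the
       lostage APC handler. */
    schedule_lostage_wakeup(current);

    /* Boost the root thread (the Linux placeholder) to our priority,
       so no lower-priority Xenomai thread can preempt the transition. */
    xnpod_renice_root(xnpod_current_thread()->cprio);

    /* Suspend in the Xenomai sense; Linux takes over from here. */
    xnpod_suspend_self();
}

static void lostage_handler(void *cookie)
{
    struct task_struct *p = get_relaxing_task(cookie);

    /* The fix: drop the root priority boost and wake the relaxed
       shadow in the same move, leaving no window for an unrelated
       Linux task to sneak in. */
    xnpod_renice_root(XNPOD_ROOT_PRIO_BASE);
    wake_up_process(p);
}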


Now the question is: why does the root thread priority need to be upgraded
while relaxing a shadow? The answer is simple: relaxing a shadow is not a task
switch in the Xenomai or Linux sense, it only changes the Xenomai exec mode of
that shadow. We must therefore ensure that giving control back to the Linux
kernel just for the purpose of changing the current exec mode won't cause the
current priority level of the relaxing thread to be lost and spuriously
downgraded to the lowest one in the system. So we simply boost the root thread
priority to match that of the relaxing thread; this way, the Linux kernel code
undergoes a Xenomai RT priority boost, so that Linux cannot be preempted by
lower-priority Xenomai threads. When a shadow thread running in secondary mode
is switched in, the root thread priority always inherits the Xenomai priority
level of that thread; conversely, when a regular, non-Xenomai Linux task is
scheduled in, the root thread priority is downgraded to the lowest Xenomai
priority.


Thinking a bit ahead now, with this scheme in place we should be able to
benefit from every improvement in vanilla Linux kernel granularity toward
real-time guarantees. Because we don't break the priority scheme when moving in
and out of the Linux domain, a Xenomai scheduling decision remains consistent
with the Linux priority scheme, which is a necessary condition for a high level
of integration between Xeno and the vanilla kernel.



I think this issue has some similarity with the one I once stumbled over
regarding non-RT signalling to Linux. I'm not going to repeat my general
concerns regarding the priority boosting of the root thread now... ;)



Each time you spot a bug like this, your stack of concerns should lose at least
one element, isn't it? :o>


Until Linux is really able to provide a fine-grained, non-disruptive, not
easily disrupted (e.g. by locking semantics in drivers), and low-overhead core
implementation for RT support, Xeno will need to provide it

Re: [Xenomai-core] RT pipes

2005-11-09 Thread Philippe Gerum

Sebastian Smolorz wrote:

Hi,

I've spotted a -- in my view -- strange behaviour of RT pipes in the
native skin. The scenario is as follows:

A RT task creates a RT pipe and writes some bytes into it. A NRT
counterpart reads from this pipe, but not all bytes. Afterwards, the RT
task deletes the pipe.

Now the program is re-run: RT task creates the pipe, writes some bytes,
and the NRT part reads some bytes, but begins with the old ones which
weren't read during the first run of the program.

Now my question: Is this behaviour intentional?


Nope.

 If it is then I see the

following problem: A task which calls rt_pipe_create() cannot be sure that
the pipe is empty. So it should be able to clear the pipe before use but I
found no function that allows a RT task to clear the pipe.



ioctl(fd,XNPIPEIOC_FLUSH,0) would do, but still, we should clear the output 
queue when closing the kernel side. In the same vein, closing the user-space 
side does not clear the input queue that feeds the kernel side. Thanks for 
spotting this.
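
From the user-space end of the pipe, that boils down to the following sketch;
the device name/minor is illustrative, and XNPIPEIOC_FLUSH is assumed to come
from the nucleus pipe header:

#include <fcntl.h>
#include <sys/ioctl.h>

int fd = open("/dev/rtp0", O_RDWR);  /* minor 0 assumed */
if (fd >= 0)
    ioctl(fd, XNPIPEIOC_FLUSH, 0);   /* drop any stale data */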




Sebastian





--

Philippe.



Re: [Xenomai-core] RT pipes

2005-11-09 Thread Philippe Gerum

Philippe Gerum wrote:

Sebastian Smolorz wrote:


Hi,

I've spotted a -- in my view -- strange behaviour of RT pipes in the
native skin. The scenario is as follows:

A RT task creates a RT pipe and writes some bytes into it. A NRT
counterpart reads from this pipe, but not all bytes. Afterwards, the RT
task deletes the pipe.

Now the program is re-run: RT task creates the pipe, writes some bytes,
and the NRT part reads some bytes, but begins with the old ones which
weren't read during the first run of the program.

Now my question: Is this behaviour intentional?



Nope.

 If it is then I see the

following problem: A task which calls rt_pipe_create() cannot be sure 
that
the pipe is empty. So it should be able to clear the pipe before use 
but I

found no function that allows a RT task to clear the pipe.



ioctl(fd,XNPIPEIOC_FLUSH,0) would do, but still, we should clear the 
output queue when closing the kernel side. In the same vein, closing the 
user-space side does not clear the input queue that feeds the kernel 
side. Thanks for spotting this.




Ok, fixed.



Sebastian








--

Philippe.



Re: [Xenomai-core] Patch for RTDM recvmsg bug

2005-11-12 Thread Philippe Gerum

Sebastian Smolorz wrote:

Hi,

here's a patch for a bug in skins/rtdm/syscall.c. The msghdr was not
copied to user space upon completion of a recvmsg() call if the return
value was not equal to zero. But recvmsg shall return the length of the
message in bytes (according to IEEE Std 1003.1).



Applied, thanks.


I have the permission from Jan to patch his code. ;-)


Sebastian




--- xenomai/skins/rtdm/syscall.c2005-11-10 16:45:24.0 +0100
+++ syscall.c   2005-11-10 16:27:08.0 +0100
@@ -99,7 +99,7 @@ static int sys_rtdm_recvmsg(struct task_
 
 ret = _rtdm_recvmsg(curr, __xn_reg_arg1(regs), &krnl_msg,

 __xn_reg_arg3(regs));
-if (!ret)
+if (ret >= 0)
 __xn_copy_to_user(curr, (void __user *)__xn_reg_arg2(regs), &krnl_msg,
   sizeof(krnl_msg));
 








--

Philippe.



Re: [Xenomai-core] Xenomai homepage

2005-11-14 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

just to bring this topic in mind again: what is currently preventing to
activate Bruno's nice page as the main Xenomai site? I see no
significant features lacking, fine-tuning can still be done later. This
project deserves a more representative portal than the Gna site! ;)



The DNS should point at the final URL for xenomai.org within a few hours/days. 
The site is still under construction; the idea is to keep things clear, simple 
and informative. So if you have any comments/suggestions regarding this, don't 
forget to let Bruno know about them.


--

Philippe.



Re: [Xenomai-core] [PATCH] Xenomai stable ppc64 I-pipe sync

2005-11-14 Thread Philippe Gerum

Heikki Lindholm wrote:

Sync the ppc64 arch of the stable tree to work with I-pipe kernel also.



Applied, thanks.


-- Heikki Lindholm




diff -Nru xenomai/arch/ppc64/hal/switch.S xenomai-devel/arch/ppc64/hal/switch.S
--- xenomai/arch/ppc64/hal/switch.S 2005-10-11 10:32:30.0 +0300
+++ xenomai-devel/arch/ppc64/hal/switch.S   2005-11-08 12:49:18.0 
+0200
@@ -26,7 +26,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include  
 
diff -Nru xenomai/include/nucleus/asm-ppc64/hal.h xenomai-devel/include/nucleus/asm-ppc64/hal.h

--- xenomai/include/nucleus/asm-ppc64/hal.h 2005-10-11 10:30:03.0 
+0300
+++ xenomai-devel/include/nucleus/asm-ppc64/hal.h   2005-11-08 
11:32:17.0 +0200
@@ -98,7 +98,11 @@
 #include 
 #include 
 
+#ifdef CONFIG_ADEOS_CORE

 #define RTHAL_TIMER_IRQ   ADEOS_TIMER_VIRQ
+#else /* !CONFIG_ADEOS_CORE */
+#define RTHAL_TIMER_IRQ   IPIPE_TIMER_VIRQ
+#endif /* CONFIG_ADEOS_CORE */
 
 #define rthal_irq_descp(irq)	(&irq_desc[(irq)])
 
diff -Nru xenomai/include/nucleus/asm-ppc64/system.h xenomai-devel/include/nucleus/asm-ppc64/system.h

--- xenomai/include/nucleus/asm-ppc64/system.h  2005-11-01 11:04:04.0 
+0200
+++ xenomai-devel/include/nucleus/asm-ppc64/system.h2005-11-08 
11:35:38.0 +0200
@@ -31,10 +31,12 @@
 #include 
 #include 
 
+#ifdef CONFIG_ADEOS_CORE

 #if ADEOS_RELEASE_NUMBER < 0x02060201
 #error "Adeos 2.6r2c1/ppc64 or above is required to run this software; please 
upgrade."
 #error "See http://download.gna.org/adeos/patches/v2.6/ppc64/";
 #endif
+#endif /* CONFIG_ADEOS_CORE */
 
 #define XNARCH_DEFAULT_TICK 100 /* ns, i.e. 1ms */

 #define XNARCH_HOST_TICK(10UL/HZ)
@@ -102,11 +104,19 @@
 
 /* The following predicates are only usable over a regular Linux stack

  *context. */
+#ifdef CONFIG_ADEOS_CORE
 #define xnarch_fault_pf_p(fi)   ((fi)->exception == ADEOS_ACCESS_TRAP)
 #define xnarch_fault_bp_p(fi)   ((current->ptrace & PT_PTRACED) && \
((fi)->exception == ADEOS_IABR_TRAP || \
(fi)->exception == ADEOS_SSTEP_TRAP || \
(fi)->exception == ADEOS_PERFMON_TRAP))
+#else /* !CONFIG_ADEOS_CORE */
+#define xnarch_fault_pf_p(fi)   ((fi)->exception == IPIPE_TRAP_ACCESS)
+#define xnarch_fault_bp_p(fi)   ((current->ptrace & PT_PTRACED) && \
+   ((fi)->exception == IPIPE_TRAP_IABR || \
+   (fi)->exception == IPIPE_TRAP_SSTEP || \
+   (fi)->exception == IPIPE_TRAP_PERFMON))
+#endif /* CONFIG_ADEOS_CORE */
 
 #define xnarch_fault_notify(fi) (!xnarch_fault_bp_p(fi))
 








--

Philippe.



Re: [Xenomai-core] [BUG] rt_pipe_flush declaration missing in skins/native/pipe.h

2005-11-14 Thread Philippe Gerum

Ignacio García Pérez wrote:

Hi,

The subject says it all.



Fixed, thanks.

PS: please send patches when possible, it's faster to handle for me and less 
likely to be forgotten in my job queue. TIA,



Nacho.





--

Philippe.



Re: [Xenomai-core] [BUG] rt_pipe_flush declaration missing in skins/native/pipe.h

2005-11-14 Thread Philippe Gerum

Ignacio García Pérez wrote:

Philippe Gerum wrote:



Ignacio García Pérez wrote:



Hi,

The subject says it all.



Fixed, thanks.

PS: please send patches when possible, it's faster to handle for me
and less likely to be forgotten in my job queue. TIA,



I updated my source from the repository, and the
EXPORT_SYMBOL(rt_pipe_flush) in pipe.c is missing, so rt_pipe_flush is
not usable yet. Patch attached.



Applied, thanks.


Nacho.




Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 143)
+++ skins/native/pipe.c (working copy)
@@ -1050,5 +1050,6 @@
 EXPORT_SYMBOL(rt_pipe_read);
 EXPORT_SYMBOL(rt_pipe_write);
 EXPORT_SYMBOL(rt_pipe_stream);
+EXPORT_SYMBOL(rt_pipe_flush);
 EXPORT_SYMBOL(rt_pipe_alloc);
 EXPORT_SYMBOL(rt_pipe_free);



--

Philippe.



[Xenomai-core] Re: [Xenomai-help] printk

2005-11-15 Thread Philippe Gerum

Dmitry Adamushko wrote:

 > > ...
 > >
 > > This cannot happen in async mode, since the output would be 
buffered and

 > > printk() never called on behalf of the preempted handler.
 > >
 > >>
 > >> let's say at the (*) point
 > >>
 > >> void __ipipe_flush_printk (unsigned virq)
 > >> {
 > >>   char *p = __ipipe_printk_buf;
 > >>   int out = 0, len;
 > >>
 > >>   clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);
 > >>
 > >>   while (out < __ipipe_printk_fill) {
 > >>   len = strlen(p) + 1;
 > >>   printk("%s",p);
 > >>   p += len;
 > >>   out += len;
 > >>   }
 > >> (*) < preempted
 > >>   __ipipe_printk_fill = 0;
 > >> }
 > >>
 > >> When linux gets controll back the virq continues its execution and
 > >> sets __ipipe_printk_fill up to 0.
 > >>
 > >> This cannot happen only if virqs are manipulated with the primary
 > >> domain being stalled as well. But you told "and under __Linux
 > >> domain___ stalling protection in __ipipe_flush_printk since it's a
 > >> virq handler".
 > >>
 > >>
 > >>  > and finally, printk() cannot preempt
 > >>  > __ipipe_flush_printk under normal operation mode (i.e. async 
mode).

 > >> AFAICS,
 > >>  > there's no race here.
 >
 > Mea culpa, Dmitry is right: in the situation he depicted above, we could
 > drop a portion of the output buffer. A way to fix this would be to hw-lock
 > a test-and-decrement section of __ipipe_printk_fill in the flush handler.
 >

Something like that (just a draft) would probably work, but I suppose 
something a bit more graceful could be implemented, actually with a small 
optimization of the IPIPE_PRINTK_FLAG handling.


void __ipipe_flush_printk (unsigned virq)
{
  char *p = __ipipe_printk_buf;
-   int out = 0, len;
+   int out = 0, len, used;

...
-   clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);
...

+ spin_lock_irqsave_hw(&__ipipe_printk_lock,flags);
+ used = __ipipe_printk_fill;
+ spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);

+ oncemore:

  while (out < used) {
  len = strlen(p) + 1;
  printk("%s",p);
  p += len;
  out += len;
  }

+ spin_lock_irqsave_hw(&__ipipe_printk_lock,flags);
+ if (__ipipe_printk_fill == used)
+ {
-   __ipipe_printk_fill = 0;

+ used = __ipipe_printk_fill = 0;

+ // when everything is done
+  clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);

+ }
+ else
+ used = __ipipe_printk_fill;
+ spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);

+ if (used)
+ goto oncemore;

...
}

and

asmlinkage int printk(const char *fmt, ...)
{
+ int virq_is_active;
...

  spin_lock_irqsave_hw(&__ipipe_printk_lock,flags);

  fbytes = __LOG_BUF_LEN - __ipipe_printk_fill;

  if (fbytes > 1) {
  r = vscnprintf(__ipipe_printk_buf + __ipipe_printk_fill,
 fbytes, fmt, args) + 1; /* account for the 
null byte */

  __ipipe_printk_fill += r;
  } else
  r = 0;

+ virq_is_active = 
test_and_set_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);


  spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);

- if (!test_and_set_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags))
+  if (!virq_is_active)
  ipipe_trigger_irq(__ipipe_printk_virq);
out:
  va_end(args);

  return r;
}



Another approach is to drop the non-atomic update sequence that hurts,
tolerating null runs of the virq in the seldom-seen preemption case, but
without requiring hw interrupt masking to protect the shared state. Livelocking
Linux inside the virq handler would still be possible whenever the RT side
spams the kernel log, but this would not be an issue for us, since there is no
such thing as a fair real-time system anyway.


--- kernel/printk.c 2 Nov 2005 16:29:34 -   1.2
+++ kernel/printk.c 15 Nov 2005 09:11:33 -
@@ -511,24 +511,23 @@

 static ipipe_spinlock_t __ipipe_printk_lock = IPIPE_SPIN_LOCK_UNLOCKED;

-static int __ipipe_printk_fill;
+static atomic_t __ipipe_printk_fill;

 static char __ipipe_printk_buf[__LOG_BUF_LEN];

 void __ipipe_flush_printk (unsigned virq)
 {
char *p = __ipipe_printk_buf;
-   int out = 0, len;
+   int len;

-   clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);
-
-   while (out < __ipipe_printk_fill) {
-   len = strlen(p) + 1;
-   printk("%s",p);
-   p += len;
-   out += len;
+   while 
(test_and_clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags)) {
+   while (atomic_read(&__ipipe_printk_fill) > 0) {
+   len = strlen(p) + 1;
+   printk("%s",p);
+   p += len;
+   atomic_sub(len,&__ipipe_printk_fill);
+   }
}
-   __ipipe_printk_fill = 0;
 }

 asmlinkage int printk(const char *fmt, ...)
@@ -548,12 +547,1

Re: [Xenomai-core] rt_pipe_* usage

2005-11-15 Thread Philippe Gerum

Ignacio García Pérez wrote:

RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct_t));
mystruct_t *p = (mystruct_t *)P_MSGPTR(m);
p->whatever1 = X;
p->whatever2 = X;
rt_pipe_send(&mypipe, m, sizeof(mystruct_t), P_NORMAL);

If this is correct, why do I have to specify the size of mystruct_t
*twice*. Can't it be initialized by rt_pipe_alloc ?.


It's initialized actually (*).



So, what's the sense of having to specify it again when calling
rt_pipe_send?



Because you may (pre-)allocate more than you really need to send afterwards.
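
For instance, a sketch (MAX_PAYLOAD and fill_payload() are made up for the
example):

RT_PIPE_MSG *m = rt_pipe_alloc(MAX_PAYLOAD);  /* reserve worst case */
size_t used = fill_payload(P_MSGPTR(m));      /* may fill less */
rt_pipe_send(&mypipe, m, used, P_NORMAL);     /* send what was used */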

--

Philippe.



Re: [Xenomai-core] More on rt pipes usage

2005-11-15 Thread Philippe Gerum

Ignacio García Pérez wrote:

Hi,

Suppose I have a kernel rt task that samples data at a certain rate and
writes it as messages into a rt pipe, from which it is read by a user
space non rt program.

I want to limit the number of messages that are put into the pipe,
because otherwise if the user space program dies, it will grow endlessly
till it exausts the rt heap.

What I want to do is to have a pipe that can hold a limited number of
messages such that rt_pipe_write will fail if it is full.

Is there a way to know how many messages are there in the pipe?

Even if there is a way, to prevent a (harmless) race condition, I would
need to lock the pipe between checking the number of messages and
calling rt_pipe_write. As far as I know, pipe locking belongs to the
nucleus and I'd like to stay in the native skin as much as possible.

Another method would be to count how many messages I write, but then I'd
need some hook that notifies me when the user space program reads a
message so I can decrement the count.

Any ideas?


The plan is to be able to tell the pipe manager to use a user-provided heap 
instead of the system one; this way, exhausting the local heap (*) in RT space 
would be a clear sign that non-RT must be allowed to consume the pending data 
first. A simple interface to do that is missing, but the pipe manager already 
works with a variable heap pointer.


(*) Using the overall available memory seems a better metric than the number of 
pending messages, since it may often happen that messages have different sizes.


--

Philippe.



[Xenomai-core] Re: [PATCH] Auto-allocation of minor values for pipe objects

2005-11-15 Thread Philippe Gerum

Dmitry Adamushko wrote:


Hello,

enclosed please find a patch that hopefully adds so desired 
functionality. I have made various tests with it just now and it seems 
to work fine.




Sounds good.

The size of the bitmap depends on the XNPIPE_NDEVS parameter, in the same 
vein as xnpipe_states depends on it; so hopefully that is what you 
meant, Philippe (?)




And NDEVS still does not depend on BITS_PER_LONG - yes, that's ok.

Two minor missing points:

- Doc update for rt_pipe_create() describing P_MINOR_AUTO
- ChangeLog frag


A few comments will follow later, since I have to go now.

---
Best regards,
Dmitry

/(See attached file: pipe.auto-minor-2.patch)/




--

Philippe.



Re: [Xenomai-core] rt_pipe_* usage

2005-11-15 Thread Philippe Gerum

Ignacio García Pérez wrote:

Philippe Gerum wrote:



Ignacio García Pérez wrote:



RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct_t));
mystruct_t *p = (mystruct_t *)P_MSGPTR(m);
p->whatever1 = X;
p->whatever2 = X;
rt_pipe_send(&mypipe, m, sizeof(mystruct_t), P_NORMAL);

If this is correct, why do I have to specify the size of mystruct_t
*twice*. Can't it be initialized by rt_pipe_alloc ?.



It's initialized actually (*).



So, what's the sense of having to specify it again when calling
rt_pipe_send?



Because you may (pre-)allocate more than you really need to send
afterwards.



I guess this should be explained in the docs. Please consider the small
patch I attach.



Ok, applied. Thanks.


Nacho.




Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 143)
+++ skins/native/pipe.c (working copy)
@@ -598,7 +598,11 @@
  *
  * @param size The size in bytes of the message (payload data
  * only). Zero is a valid value, in which case the service returns
- * immediately without sending any message.
+ * immediately without sending any message. This parameter allows
+ * you to actually send less data than you reserved using the
+ * rt_pipe_alloc() service, which may be the case if you did not
+ * know how much space you needed at the time of allocation. In all
+ * other cases it may be more convenient to just pass P_MSGSIZE(msg).
  *
  * Additionally, rt_pipe_send() causes any data buffered by
  * rt_pipe_stream() to be flushed prior to sending the message. For



--

Philippe.



Re: [Xenomai-core] [PATCH] Small build system fix for Xenomai 2.1

2005-11-16 Thread Philippe Gerum

Heikki Lindholm wrote:
The prepare-kernel.sh script doesn't link assembly files to the kernel 
at all. This should fix it.




Applied, thanks.


-- Heikki Lindholm




diff -Nru xenomai.orig/scripts/prepare-kernel.sh 
xenomai/scripts/prepare-kernel.sh
--- xenomai.orig/scripts/prepare-kernel.sh  2005-11-14 20:04:19.0 
+0200
+++ xenomai/scripts/prepare-kernel.sh   2005-11-16 13:03:27.0 +0200
@@ -4,7 +4,7 @@
 do_links() {
 rm -fr $2
 ( cd $1 &&
-  find . \( -name Makefile -o -name Kconfig -o -name '*.[ch]' \) |
+  find . \( -name Makefile -o -name Kconfig -o -name '*.[chS]' \) |
   while read f; do
 d=`dirname $f`
mkdir -p $2/$d && ln -s $1/$f $2/$f







--

Philippe.



[Xenomai-core] Re: [Xenomai-help] printk

2005-11-16 Thread Philippe Gerum

Dmitry Adamushko wrote:

please see comments below...

 >
 > Another approach is to drop the non-atomic update sequence that hurts,
 > tolerating null runs of the virq in the seldom-seen preemption case, but
 > without requiring hw interrupt masking to protect the shared state.
 > Livelocking Linux inside the virq handler would still be possible
 > whenever the RT side spams the kernel log, but this would not be an
 > issue for us, since there is no such thing as a fair real-time system
 > anyway.
 >
 > --- kernel/printk.c   2 Nov 2005 16:29:34 -   1.2
 > +++ kernel/printk.c   15 Nov 2005 09:11:33 -
 > @@ -511,24 +511,23 @@
 >
 >   static ipipe_spinlock_t __ipipe_printk_lock = IPIPE_SPIN_LOCK_UNLOCKED;
 >
 > -static int __ipipe_printk_fill;
 > +static atomic_t __ipipe_printk_fill;
 >
 >   static char __ipipe_printk_buf[__LOG_BUF_LEN];
 >
 >   void __ipipe_flush_printk (unsigned virq)
 >   {
 >  char *p = __ipipe_printk_buf;
 > -   int out = 0, len;
 > +   int len;
 >
 > -   clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);
 > -
 > -   while (out < __ipipe_printk_fill) {
 > -  len = strlen(p) + 1;
 > -  printk("%s",p);
 > -  p += len;
 > -  out += len;
 > +   while
(test_and_clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags)) {
 > +  while (atomic_read(&__ipipe_printk_fill) > 0) {
 > + len = strlen(p) + 1;
 > + printk("%s",p);
 > + p += len;

 > + atomic_sub(len,&__ipipe_printk_fill);

But that's wrong. __ipipe_printk_fill is used as an index in printk(),
and here you use it as the number of bytes that still need to be printed.



Ok, this kind of stuff should always be designed with the brain turned on. Now
that I have eventually found the power switch, I'm going to fix the printk issue
in the Adeos core along the following lines. The code below basically reworks
your proposal. The construct is still a bit hairy, but unfortunately, we cannot
just go for using a plain atomic compare-and-exchange op in the virq handler,
since we need to serialize its activation among all CPUs - sigh... We also get 
rid
of PPRINTK - which is redundant - in the same move.

--- ipipe/v2.6/2.6.14/kernel/printk.c   2 Nov 2005 16:29:34 -   1.2
+++ ipipe/v2.6/2.6.14/kernel/printk.c   16 Nov 2005 12:18:09 -
@@ -518,23 +518,34 @@
 void __ipipe_flush_printk (unsigned virq)
 {
char *p = __ipipe_printk_buf;
-   int out = 0, len;
+   int len, lmax, out = 0;
+   unsigned long flags;

-   clear_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags);
+   goto start;

-   while (out < __ipipe_printk_fill) {
-   len = strlen(p) + 1;
-   printk("%s",p);
-   p += len;
-   out += len;
+   do {
+   spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);
+ start:
+   lmax = __ipipe_printk_fill;
+   while (out < lmax) {
+   len = strlen(p) + 1;
+   printk("%s",p);
+   p += len;
+   out += len;
+   }
+   spin_lock_irqsave_hw(&__ipipe_printk_lock,flags);
}
+   while (__ipipe_printk_fill != lmax);
+
__ipipe_printk_fill = 0;
+
+   spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);
 }

 asmlinkage int printk(const char *fmt, ...)
 {
-   unsigned long flags;
-   int r, fbytes;
+   int r, fbytes, oldcount;
+   unsigned long flags;
va_list args;

va_start(args, fmt);
@@ -548,7 +559,8 @@

spin_lock_irqsave_hw(&__ipipe_printk_lock,flags);

-   fbytes = __LOG_BUF_LEN - __ipipe_printk_fill;
+   oldcount = __ipipe_printk_fill;
+   fbytes = __LOG_BUF_LEN - oldcount;

if (fbytes > 1)  {
r = vscnprintf(__ipipe_printk_buf + __ipipe_printk_fill,
@@ -559,7 +571,7 @@

spin_unlock_irqrestore_hw(&__ipipe_printk_lock,flags);

-   if (!test_and_set_bit(IPIPE_PPRINTK_FLAG,&ipipe_root_domain->flags))
+   if (oldcount == 0)
ipipe_trigger_irq(__ipipe_printk_virq);
 out:
va_end(args);
--

Philippe.





[Xenomai-core] Re: [PATCH] Auto-allocation of minor values for pipe objects

2005-11-16 Thread Philippe Gerum

Dmitry Adamushko wrote:

Philippe Gerum <[EMAIL PROTECTED]> wrote on 15.11.2005 23:17:50:

 > Dmitry Adamushko wrote:
 > >
 > > Hello,
 > >
 > > enclosed please find a patch that hopefully adds so desired
 > > functionality. I have made various tests with it just now and it seems
 > > to work fine.
 > >
 >
 > Sounds good.
 >
 > > A size of the bitmap is dependent on XNPIPE_NDEVS parameter in the 
same
 > > vein as xnpipe_states depends on it; so hopefully that is what you 
have

 > > meant Philippe (?)
 > >
 >
 > And NDEVS still does not depend on BITS_PER_LONG - yes, that's ok.
 >
 > Two minor missing points :
 >
 > - Doc update for rt_pipe_create() describing P_MINOR_AUTO
 > - ChangeLog frag

Enclosed a final patch.



Applied, thanks.

One thing I wanted to point out is that the extended interface is not 
usable for the "rtai" skin since rtf_create() must know a real minor 
before calling xnpipe_connect() (at least it's implemented this way at 
the moment).


int rtf_create (unsigned minor, int size)

{

...
   fifo = __fifo_table + minor; < That's the reason!

   err = xnpipe_connect(minor,
&__fifo_output_handler,
&__fifo_exec_handler,
NULL,
fifo); < it's already dependent on minor.

And I don't think, of course, it's a good idea to export a separate 
interface for allocation of minor values from the nucleus. So either:


- rtf_create() should be rewritten differently;

- keep it as is, since it looks like the real RTAI interface doesn't 
require such functionality.





We need to keep the things as they are for the RTAI skin. The art of emulating an 
interface includes mimicking its shortcomings.


--

Philippe.



[Xenomai-core] v2.1 status

2005-11-17 Thread Philippe Gerum


Here is an update regarding the way things progress on the v2.1 branch:

o The build system has been deeply revamped, so that we now fully leave the burden of 
building Xenomai's kernel support to Linux. To this end, the code tree has been 
reorganized in two major sections, the first one contains the kernel-related sources 
(ksrc/), the other is hosting the user-space support (src/). Since the user-space portion 
does not need to know about the kernel sources anymore, several issues have been solved in 
the same move. All in all, things are way simpler than before, which seems to indicate 
that we are heading in the right direction. The documentation for the installation process 
still needs to be updated, though.


o Also as a matter of build system (and a bit more), v2.1 has been backported to Linux 
2.4, starting with the ppc32 support. We rely on Denx's 2_4-devel tree as our 2.4 kernel 
of reference for this port [1]. A second backport to 2.4/x86 will happen when this combo 
is I-pipe ready. We are ahead of schedule on this backport, which was initially planned 
for Q1 2006; but since we are going to add more supported architectures in the coming 
months, it's better to have a stable build system for that now.


o PowerPC-wise (again), we should be close to ready for 2.6.15, since Heikki merged both 
the 32 and 64-bit trees in a single one.


All other planned tasks are underway. For my part, I'm now going to fiddle with ADI's 
Blackfin for which we already have the required Adeos support, and create the Xenomai port 
for it. Since this one is uClinux based, we should be able to check that the new build 
system is ok to host any kind of port sanely.


[1] http://www.denx.de/en/Software/CVS/

--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization -> "flush the output queue upon closure"

2005-11-18 Thread Philippe Gerum

Dmitry Adamushko wrote:
yep, it's a problem since data may be client-dependent. In such a case, 
old messages are simply irrelevant to a new client. And xnpipe_release() 
cleans up the queues but, well, does it too early.


so,

1) should xnpipe_open_handler() and xnpipe_close_handler() be called 
without holding a lock?




Yes, it's on purpose. I know this makes things a bit trickier since it breaks the overall 
atomicity of the caller, but open/close hooks are expected to initiate/finalize 
communication sessions, and that may take an unbounded amount of time, so we definitely 
don't want to do this with the superlock held.



they are not used currently so I can't see.

I intend to make xnpipe_open() completely atomic.

2) the cleaning of the queues (inq, outq) must take place atomically at 
the time when XNPIPE_USER_CONN is dropped.


it's about something like

lock();

__clrbits(state->status,XNPIPE_USER_CONN);

// clean up all the queues

unlock();

it looks like we can't make the whole xnpipe_release() atomic because of 
PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.


You must _never_ _ever_ reschedule with the nucleus lock held; this is a major cause of 
jitter I recently stumbled upon, induced by xnpipe_read_wait() at that time. So 
indeed, xnpipe_release() cannot be made atomic this way under a fully preemptible kernel.
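
Illustrated as a sketch (the wait-queue field name is illustrative; this is
not actual pipe.c code):

/* DON'T: wake_up_interruptible_all() may reschedule Linux-wise,
   so it must never run with the superlock held. */
xnlock_get_irqsave(&nklock, s);
wake_up_interruptible_all(&state->readq);
xnlock_put_irqrestore(&nklock, s);

/* DO: keep only atomic state manipulation inside the lock, and
   wake sleepers up after dropping it. */
xnlock_get_irqsave(&nklock, s);
__clrbits(state->status, XNPIPE_USER_CONN);
xnlock_put_irqrestore(&nklock, s);
wake_up_interruptible_all(&state->readq);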





---

Dmitry




--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization -> "flush the output queue upon closure"

2005-11-18 Thread Philippe Gerum

Philippe Gerum wrote:

Dmitry Adamushko wrote:

yep, it's a problem since data may be client-dependent. In such a 
case, old messages are simply irrelevant to a new client. And 
xnpipe_release() cleans up the queues but, well, does it too early.


so,

1) should xnpipe_open_handler() and xnpipe_close_handler() be called 
without holding a lock?




Yes, it's on purpose. I know this makes things a bit trickier since it 
breaks the overall atomicity of the caller, but open/close hooks are 
expected to initiate/finalize communication sessions, and that may take 
an unbounded amount of time, so we definitely don't want to do this with 
the superlock held.



they are not used currently so I can't see.

I intend to make xnpipe_open() completely atomic.

2) the cleaning of the queues (inq, outq) must take place atomically 
at the time when XNPIPE_USER_CONN is dropped.


it's about something like

lock();

__clrbits(state->status,XNPIPE_USER_CONN);

// clean up all the queues

unlock();

it looks like we can't make the whole xnpipe_release() atomic because 
of PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.



You must _never_ _ever_ reschedule


"reschedule" in the Linux sense here; entering Xenomai's rescheduling procedure with the 
superlock held is of course perfectly valid.


 with the nucleus lock held; this is a
major cause of jitter I recently stumbled upon, induced by 
xnpipe_read_wait() at that time. So indeed, xnpipe_release() cannot be 
made atomic this way under a fully preemptible kernel.





---

Dmitry







--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization -> "flush the output queue upon closure"

2005-11-18 Thread Philippe Gerum

Dmitry Adamushko wrote:

Philippe Gerum <[EMAIL PROTECTED]> wrote on 18.11.2005 11:14:26:

 > > ...
 > >
 > > it looks like we can't make the whole xnpipe_release() atomic 
because of

 > > PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.
 >
 > You must _never_ _ever_ reschedule with the nucleus lock held; this
 > is a major cause of
 > jittery I recently stumbled upon that was induced by
 > xnpipe_read_wait() at that time. So
 > indeed, xnpipe_release() cannot be made atomic this way under a
 > fully preemptible kernel.

Yep.

Now, keeping in mind the observation I made yesterday, it looks like there 
is in fact no need for the wake_up_*(readers) call in 
file_operations::release(): there is nobody to wake up at the time 
release() is called:


1) The reference counter of the "file" object is 0, i.e. there are no 
readers, since read() increases the counter before blocking.


2) no one else can reuse that "file" object, since close() does the 
following:


filp = files->fd[fd];
if (!filp)
goto out_unlock;
files->fd[fd] = NULL; <--- it's invalid from now on

so it's not possible for new readers to show up once the counter == 0.



Ack.

Hm... but we still have fasync_helper(-1,file,0,&state->asyncq), which is 
about sending a signal, and that's perfectly valid (the file counter is 
not involved here). And that call may lead to rescheduling (Linux 
rescheduling, of course), so we can't put it inside the locked section.


So the best way I see is to have something like:

xnpipe_drop_user_conn()
{
    xnlock_get_irqsave(&nklock,s);

    while ((holder = getq(&state->outq)) != NULL)
    {
        state->output_handler(minor,link2mh(holder),-EPIPE,state->cookie);
    }

    while ((holder = getq(&state->inq)) != NULL)
    {
        if (state->input_handler != NULL)
            state->input_handler(minor,link2mh(holder),-EPIPE,state->cookie);
        else if (state->alloc_handler == NULL)
            xnfree(link2mh(holder));
    }

    __clrbits(state->status,XNPIPE_USER_CONN);

    xnlock_put_irqrestore(&nklock,s);
}

and call it everywhere instead of clrbits(state->status,XNPIPE_USER_CONN);

This way we may be sure there are no pending messages left.




Sounds consistent, since USER_CONN flag is semantically bound to the active/inactive state 
of the associated data queues anyway.



 > --
 >
 > Philippe.

---

Dmitry




--

Philippe.



Re: [Xenomai-core] v2.1 status

2005-11-18 Thread Philippe Gerum

Jan Kiszka wrote:



Question: On recent roadmaps I'm so far missing the topic "RT-signal
support in userspace". Are there any concrete schedules (Qx 200y)?



Not currently, even if AFAIC, this is an interesting feature to have, so
that we don't always have to process async events over dedicated server
threads in user-space. But, I'm cowardly entirely leaving the core
design issues to Gilles regarding this; I need to keep focused on the
other pending stuff for now.


I think to remember that you said Adeos would be involved in such an
extension. Maybe it would be a good time to think about this also when
we extend the Adeos API by shareable IRQs.



Likely, yes.

--

Philippe.



Re: [Xenomai-core] v2.1 status

2005-11-19 Thread Philippe Gerum

Ignacio García Pérez wrote:

I had a first contact with the new build system. I really really don't
like the fact it touches my kernel source tree. Besides adeos, I like to
keep the kernel source independent of xenomai, because that tree is
shared for other projects.



At that point, I would really consider leaving the burden of keeping 
various users of a single code base in sync to a SCM, not to the filesystem.



Also, why does it default to a monolithic build of the xeno modules when
in 2.0 you always got them as modules?




Because it's a reasonable default:

o Most setups don't need to unload the real-time support, but rather 
load it once for all at startup.
o Given #1, in the embedded space, modules are often considered a pure 
annoyance. Among other things, you need to activate module support in the 
kernel just to load the RT system once, and this does not come for free, 
especially in terms of memory footprint.
o Modules are allocated in vmalloc space. Given that the vanilla kernel 
already has rather poor code locality (spatially speaking at least), 
things are not going to improve for time-critical code living in modules, 
which increases TLB misses.


For the rare cases where the arguments above are outweighed by a strong 
requirement to have modules, you can still switch them on in your kernel 
configuration; this is the kind of flexibility you did not have with the 
previous build system.
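
Concretely, this is now just a kernel configuration matter; the option name
below is an assumption based on the 2.1 Kconfig layout:

# default: skin support built into the kernel image
CONFIG_XENO_SKIN_NATIVE=y
# rare case: build it as an unloadable module instead
CONFIG_XENO_SKIN_NATIVE=m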


--

Philippe.



Re: [Xenomai-core] v2.1 status

2005-11-19 Thread Philippe Gerum

Wolfgang Grandegger wrote:

Philippe Gerum wrote:



Here is an update regarding the way things progress on the v2.1 branch:

o The build system has been deeply revamped, so that we now fully 
leave the burden of building Xenomai's kernel support to Linux. To 
this end, the code tree has been reorganized in two major sections, 
the first one contains the kernel-related sources (ksrc/), the other 
is hosting the user-space support (src/). Since the user-space portion 
does not need to know about the kernel sources anymore, several issues 
have been solved in the same move. All in all, things are way simpler 
than before, which seems to indicate that we are heading to the right 
direction. The documentation for the installation process needs to be 
updated though.



I like it a lot.

o Also as a matter of build system (and a bit more), v2.1 has been 
backported to Linux 2.4, starting with the ppc32 support. We rely on 
Denx's 2_4-devel tree as our 2.4 kernel of reference for this port 
[1]. A second backport to 2.4/x86 will happen when this combo is 
I-pipe ready. We are ahead of schedule regarding this backport, since 
it was initially planned for Q12006, but since we are going to add 
more supported architectures in the next months, it's better to have a 
stable build system for that.



Just tested Xenomai with adeos-ipipe-2.4.25-ppc-denx-0.9-02.patch on a 
recent DENX kernel tree. MPC 8xx support was missing. I have attached
the patch to fix it. Furthermore I have not found the "Machine" options 


Applied, thanks. 0.9-03 contains this fix.


(here from 2.6):

 #
 # Machine
 #
 # CONFIG_XENO_HW_FPU is not set
 CONFIG_XENO_HW_PERIODIC_TIMER=y
 CONFIG_XENO_HW_TIMER_LATENCY=1
 CONFIG_XENO_HW_SCHED_LATENCY=1



Now fixed.



The latency figures on my TQM860L board with Xenomai 2.1 and Liunx 2.4 
are close to the one I measured with Xenomail 2.0 with Linux 2.6:


LATENCY with load on TQM860L:

Linux   |-lat min|-lat avg|-lat max|-overrun|---test-time
2.6.13  |   60480|  120960|  224320|       0|    00:09:46
2.4.25  |   64080|   95120|  225040|       0|    00:10:00




Same observation here on a Freescale Icecube. I will reactivate the 
kernel-space latency test in order to have the full picture.


Thanks,

--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization -> "flush the output queue upon closure"

2005-11-20 Thread Philippe Gerum

Dmitry Adamushko wrote:

Philippe Gerum <[EMAIL PROTECTED]> wrote on 18.11.2005 12:07:22:

 >
 > > Hm.. but we still have fasync_helper(-1,file,0,&state->asyncq); 
which is

 > > about sending a signal and that's perfectly valid (a file::counter is
 > > not involved here). And that call may lead to re-scheduling (linux
 > > re-scheduling of course) so we can't put it in a blocked section.
 > >
 > > So the best way I see is to have something like():
 > >
 > > xnpipe_drop_user_conn()
 > > {
 > > xnlock_get_irqsave(&nklock,s);
 > >
 > > while ((holder = getq(&state->outq)) != NULL)
 > >  
 > >  state->output_handler(minor,link2mh(holder),-EPIPE,state->cookie);

 > >}
 > >
 > >while ((holder = getq(&state->inq)) != NULL)
 > >{
 > >if (state->input_handler != NULL)
 > >  
 > >  state->input_handler(minor,link2mh(holder),-EPIPE,state->cookie);

 > >else if (state->alloc_handler == NULL)
 > >xnfree(link2mh(holder));
 > >}
 > >
 > > __clrbits(state->status,XNPIPE_USER_CONN);
 > >
 > > xnlock_put_irqrestore(&nklock,s);
 > > }
 > >
 > > and call it everywhere instead of 
clrbits(state->status,XNPIPE_USER_CONN);

 > >
 > > This way we may be sure there are no pending messages left.
 > >
 > >
 >
 > Sounds consistent, since USER_CONN flag is semantically bound to the
 > active/inactive state
 > of the associated data queues anyway.
 >

Then a patch is enclosed.



Applied, thanks.



 > --
 >
 > Philippe.

---

Dmitry


/(See attached file: pipe.cleanup-user-conn.patch)//(See attached file: 
ChangeLog-diff.patch)/





--

Philippe.



Re: [Xenomai-core] [PATCH] xenomai 2.1 ppc64 i-pipe support

2005-11-20 Thread Philippe Gerum

Heikki Lindholm wrote:

Xenomai 2.1:

- Add ppc64 I-pipe kernel support



Applied, thanks.


-- Heikki Lindholm




diff -Nru xenomai/include/asm-powerpc/system.h 
xenomai-devel/include/asm-powerpc/system.h
--- xenomai/include/asm-powerpc/system.h2005-11-08 09:43:39.0 
+0200
+++ xenomai-devel/include/asm-powerpc/system.h  2005-11-17 12:51:47.0 
+0200
@@ -127,10 +127,17 @@
 #endif /* CONFIG_PPC64 */
 #else /* !CONFIG_ADEOS_CORE */
 #define xnarch_fault_pf_p(fi)   ((fi)->exception == IPIPE_TRAP_ACCESS)
+#ifdef CONFIG_PPC64
+#define xnarch_fault_bp_p(fi)   ((current->ptrace & PT_PTRACED) && \
+((fi)->exception == IPIPE_TRAP_IABR || \
+ (fi)->exception == IPIPE_TRAP_SSTEP || \
+ (fi)->exception == IPIPE_TRAP_PERFMON))
+#else /* !CONFIG_PPC64 */
 #define xnarch_fault_bp_p(fi)   ((current->ptrace & PT_PTRACED) && \
 ((fi)->exception == IPIPE_TRAP_IABR || \
  (fi)->exception == IPIPE_TRAP_SSTEP || \
  (fi)->exception == IPIPE_TRAP_DEBUG))
+#endif /* CONFIG_PPC64 */
 #endif /* CONFIG_ADEOS_CORE */
 
 #define xnarch_fault_notify(fi) (!xnarch_fault_bp_p(fi))

diff -Nru xenomai/ksrc/arch/powerpc/switch_64.S 
xenomai-devel/ksrc/arch/powerpc/switch_64.S
--- xenomai/ksrc/arch/powerpc/switch_64.S   2005-11-08 09:44:47.0 
+0200
+++ xenomai-devel/ksrc/arch/powerpc/switch_64.S 2005-11-17 12:52:06.0 
+0200
@@ -24,7 +24,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 
 #define RTHAL_SAVEREG(reg, pos)	std reg,STACK_FRAME_OVERHEAD+8*(pos)(r1)








--

Philippe.



Re: [Xenomai-core] [PATCH] xenomai 2.1 ppc64 build fix

2005-11-20 Thread Philippe Gerum

Heikki Lindholm wrote:

Xenomai 2.1:

- powerpc/atomic.h lost one 64-bit #ifdef during the merge - ppc64 
defines its own atomic mask functions.


Applied, thanks.



-- Heikki Lindholm




diff -Nru xenomai/include/asm-powerpc/atomic.h 
xenomai-devel/include/asm-powerpc/atomic.h
--- xenomai/include/asm-powerpc/atomic.h2005-11-08 09:43:39.0 
+0200
+++ xenomai-devel/include/asm-powerpc/atomic.h  2005-11-18 15:37:34.0 
+0200
@@ -210,6 +210,7 @@
 return t;
 }
 
+#ifndef __powerpc64__

 static __inline__ void atomic_set_mask(unsigned long mask,
   unsigned long *ptr)
 {
@@ -237,6 +238,7 @@
: "r" (ptr), "r" (mask)
: "r5", "cc", "memory");
 }
+#endif /* __powerpc64__ */
 
 #define xnarch_atomic_xchg(ptr,v)   atomic_xchg(ptr,v)

 #define xnarch_memory_barrier() __asm__ __volatile__("": : :"memory")







--

Philippe.



Re: [Xenomai-core] [RFC] define your own pipe heap

2005-11-22 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


...
A patch says more than thousand words. ;)

As a first approach, I picked the second variant and implemented a new
function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and
rt_pipe_free so that the right pool is used by them.




I thought about this variant again, and it seems to me rather unsafe in
case some buffer allocation takes place between rt_pipe_create and
rt_pipe_setpool. So, here is a patch which extends rt_pipe_create with a
new argument poolsize instead.



Yep, looks safer to me too.


Still untested...

Jan




Index: skins/native/pipe.h
===
--- skins/native/pipe.h (revision 165)
+++ skins/native/pipe.h (working copy)
@@ -37,6 +37,8 @@
 
 #ifdef __KERNEL__
 
+#include <nucleus/heap.h>

+
 #define XENO_PIPE_MAGIC 0x0202
 
 typedef xnpipe_mh_t RT_PIPE_MSG;

@@ -57,6 +59,10 @@
 
 RT_PIPE_MSG *buffer;	/* !< Buffer used in byte stream mode. */
 
+xnheap_t *bufpool;		/* !< Current buffer pool. */

+
+xnheap_t privpool; /* !< Private buffer pool. */
+
 size_t fillsz; /* !< Bytes written to the buffer.  */
 
 u_long flushable;		/* !< Flush request flag. */

@@ -85,8 +91,12 @@
 
 int rt_pipe_create(RT_PIPE *pipe,

   const char *name,
-  int minor);
+  int minor,
+  size_t poolsize);
 
+int rt_pipe_setpool(RT_PIPE *pipe,

+size_t poolsize);
+


Useless now, IIUC.


 int rt_pipe_delete(RT_PIPE *pipe);
 
 ssize_t rt_pipe_read(RT_PIPE *pipe,

@@ -113,9 +123,11 @@
 size_t size,
 int mode);
 
-RT_PIPE_MSG *rt_pipe_alloc(size_t size);

+RT_PIPE_MSG *rt_pipe_alloc(RT_PIPE *pipe,
+   size_t size);
-int rt_pipe_free(RT_PIPE_MSG *msg);
+int rt_pipe_free(RT_PIPE *pipe,
+ RT_PIPE_MSG *msg);
 
 ssize_t rt_pipe_flush(RT_PIPE *pipe);
 
Index: skins/native/syscall.c

===
--- skins/native/syscall.c  (revision 165)
+++ skins/native/syscall.c  (working copy)
@@ -3194,6 +3194,7 @@
 char name[XNOBJECT_NAME_LEN];
 RT_PIPE_PLACEHOLDER ph;
 int err, minor;
+size_t poolsize;
 RT_PIPE *pipe;
 
 if (!__xn_access_ok(curr,VERIFY_WRITE,__xn_reg_arg1(regs),sizeof(ph)))

@@ -3213,12 +3214,15 @@
 /* Device minor. */
 minor = (int)__xn_reg_arg3(regs);
 
+/* Buffer pool size. */

+poolsize = (size_t)__xn_reg_arg4(regs);
+
 pipe = (RT_PIPE *)xnmalloc(sizeof(*pipe));
 
 if (!pipe)

return -ENOMEM;
 
-err = rt_pipe_create(pipe,name,minor);

+err = rt_pipe_create(pipe,name,minor,poolsize);
 
 if (err == 0)

{
@@ -3332,7 +3336,7 @@
 /* Zero-sized messages are allowed, so we still need to free the
message buffer even if no data copy took place. */
 
-rt_pipe_free(msg);

+rt_pipe_free(pipe,msg);
 
 return err;

 }
@@ -3374,7 +3378,7 @@
 if (!__xn_access_ok(curr,VERIFY_READ,__xn_reg_arg2(regs),size))
return -EFAULT;
 
-msg = rt_pipe_alloc(size);

+msg = rt_pipe_alloc(pipe,size);

 if (!msg)
return -ENOMEM;
@@ -3386,7 +3390,7 @@
 if (err != size)
/* If the operation failed, we need to free the message buffer
   by ourselves. */
-   rt_pipe_free(msg);
+   rt_pipe_free(pipe,msg);
 
 return err;

 }
@@ -3436,7 +3440,7 @@
}
 else
{
-   msg = rt_pipe_alloc(size);
+   msg = rt_pipe_alloc(pipe,size);

if (!msg)
return -ENOMEM;
@@ -3449,7 +3453,7 @@
 err = rt_pipe_stream(pipe,buf,size);
 
 if (msg)

-   rt_pipe_free(msg);
+   rt_pipe_free(pipe,msg);
 
 return err;

 }


__xeno_pipe_create must be marked as lostage since we need to enter 
rt_pipe_create() in secondary mode now.



Index: skins/native/lib/pipe.c
===
--- skins/native/lib/pipe.c (revision 165)
+++ skins/native/lib/pipe.c (working copy)
@@ -23,7 +23,8 @@
 
 int rt_pipe_create (RT_PIPE *pipe,

const char *name,
-   int minor)
+   int minor,
+   size_t poolsize)
 {
 return XENOMAI_SKINCALL3(__xeno_muxid,
 __xeno_pipe_create,


  return XENOMAI_SKINCALL4(__xeno_muxid,
 __xeno_pipe_create,
 pipe,
 name,
 minor,
 poolsize);


Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 165)
+++ skins/native/pipe.c (working copy)
@@ -50,8 +50,6 @@
 #include 
 #include 
 
-static xnheap_t *__pipe_heap = &kheap;

-
 sta

Re: [Xenomai-core] [RFC] define your own pipe heap

2005-11-22 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Jan Kiszka wrote:



...
A patch says more than thousand words. ;)

As a first approach, I picked the second variant and implemented a new
function called rt_pipe_setpool. I also had to extend rt_pipe_alloc and
rt_pipe_free so that the right pool is used by them.




I thought about this variant again, and it seems to me rather unsafe in
case some buffer allocation takes place between rt_pipe_create and
rt_pipe_setpool. So, here is a patch which extends rt_pipe_create with a
new argument poolsize instead.



Yep, looks safer to me too.




Ok, I addressed most of your comments, and here is round 2 of variant 2.
The only question for me is whether we should allow rt_pipe_create in
kernel space from RT context with poolsize=0, when this is effectively
prevented for user-space tasks?


This is not prevented for user-space, since there is an automatic switch 
to secondary mode caused by the lostage exec bit.


 So far, I deny any non-RT invocation.




You likely mean, any non-Linux invocation.


Jan




Index: skins/native/pipe.h
===
--- skins/native/pipe.h (revision 165)
+++ skins/native/pipe.h (working copy)
@@ -37,6 +37,8 @@
 
 #ifdef __KERNEL__
 
+#include <nucleus/heap.h>

+
 #define XENO_PIPE_MAGIC 0x0202
 
 typedef xnpipe_mh_t RT_PIPE_MSG;

@@ -57,6 +59,10 @@
 
 RT_PIPE_MSG *buffer;	/* !< Buffer used in byte stream mode. */
 
+xnheap_t *bufpool;		/* !< Current buffer pool. */

+
+xnheap_t privpool; /* !< Private buffer pool. */
+
 size_t fillsz; /* !< Bytes written to the buffer.  */
 
 u_long flushable;		/* !< Flush request flag. */

@@ -85,7 +91,8 @@
 
 int rt_pipe_create(RT_PIPE *pipe,

   const char *name,
-  int minor);
+  int minor,
+  size_t poolsize);
 
 int rt_pipe_delete(RT_PIPE *pipe);
 
@@ -113,9 +120,11 @@

 size_t size,
 int mode);
 
-RT_PIPE_MSG *rt_pipe_alloc(size_t size);

+RT_PIPE_MSG *rt_pipe_alloc(RT_PIPE *pipe,
+   size_t size);
 
-int rt_pipe_free(RT_PIPE_MSG *msg);

+int rt_pipe_free(RT_PIPE *pipe,
+ RT_PIPE_MSG *msg);
 
 ssize_t rt_pipe_flush(RT_PIPE *pipe);
 
Index: skins/native/syscall.c

===
--- skins/native/syscall.c  (revision 165)
+++ skins/native/syscall.c  (working copy)
@@ -3194,6 +3194,7 @@
 char name[XNOBJECT_NAME_LEN];
 RT_PIPE_PLACEHOLDER ph;
 int err, minor;
+size_t poolsize;
 RT_PIPE *pipe;
 
 if (!__xn_access_ok(curr,VERIFY_WRITE,__xn_reg_arg1(regs),sizeof(ph)))

@@ -3213,12 +3214,15 @@
 /* Device minor. */
 minor = (int)__xn_reg_arg3(regs);
 
+/* Buffer pool size. */

+poolsize = (size_t)__xn_reg_arg4(regs);
+
 pipe = (RT_PIPE *)xnmalloc(sizeof(*pipe));
 
 if (!pipe)

return -ENOMEM;
 
-err = rt_pipe_create(pipe,name,minor);

+err = rt_pipe_create(pipe,name,minor,poolsize);
 
 if (err == 0)

{
@@ -3332,7 +3336,7 @@
 /* Zero-sized messages are allowed, so we still need to free the
message buffer even if no data copy took place. */
 
-rt_pipe_free(msg);

+rt_pipe_free(pipe,msg);
 
 return err;

 }
@@ -3374,7 +3378,7 @@
 if (!__xn_access_ok(curr,VERIFY_READ,__xn_reg_arg2(regs),size))
return -EFAULT;
 
-msg = rt_pipe_alloc(size);

+msg = rt_pipe_alloc(pipe,size);

 if (!msg)
return -ENOMEM;
@@ -3386,7 +3390,7 @@
 if (err != size)
/* If the operation failed, we need to free the message buffer
   by ourselves. */
-   rt_pipe_free(msg);
+   rt_pipe_free(pipe,msg);
 
 return err;

 }
@@ -3436,7 +3440,7 @@
}
 else
{
-   msg = rt_pipe_alloc(size);
+   msg = rt_pipe_alloc(pipe,size);

if (!msg)
return -ENOMEM;
@@ -3449,7 +3453,7 @@
 err = rt_pipe_stream(pipe,buf,size);
 
 if (msg)

-   rt_pipe_free(msg);
+   rt_pipe_free(pipe,msg);
 
 return err;

 }
@@ -3595,7 +3599,7 @@
 [__xeno_intr_enable ] = { &__rt_intr_enable, __xn_exec_any },
 [__xeno_intr_disable ] = { &__rt_intr_disable, __xn_exec_any },
 [__xeno_intr_inquire ] = { &__rt_intr_inquire, __xn_exec_any },
-[__xeno_pipe_create ] = { &__rt_pipe_create, __xn_exec_any },
+[__xeno_pipe_create ] = { &__rt_pipe_create, __xn_exec_lostage },
 [__xeno_pipe_bind ] = { &__rt_pipe_bind, __xn_exec_conforming },
 [__xeno_pipe_delete ] = { &__rt_pipe_delete, __xn_exec_any },
 [__xeno_pipe_read ] = { &__rt_pipe_read, __xn_exec_primary },
Index: skins/native/lib/pipe.c
===
--- skins/native/lib/pipe.c (revision 165)
+++ skins

Re: [Xenomai-core] fusion and its stack limit

2005-11-25 Thread Philippe Gerum

Marco Cavallini wrote:

Hi,

with fusion-0.9.1 and the VxWorks skin I am testing the creation of 255
different tasks, each with a different priority level from 0 to 254.
I am facing a problem creating more than 12 tasks with taskSpawn: after
the 12th task, the program fails in thread.c, in function
xnthread_init(), at the call to xnarch_alloc_stack(), which returns
-ENOMEM.

Can I increase the stack for fusion (and how)?


There is not enough space in your system heap. Try raising 
CONFIG_XENO_OPT_SYS_HEAPSZ in the nucleus config menu.
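For example, in the kernel .config (a sketch; the default value and the
Kb unit may differ across versions, so check the option's help text):

# Raise the nucleus system heap (in Kb) so more task stacks fit:
CONFIG_XENO_OPT_SYS_HEAPSZ=512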



TIA

Marco Cavallini
Koan s.a.s. - Bergamo - ITALIA
Embedded and Real-Time Software Engineering
www.koansoftware.com|www.klinux.org






--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] private heaps for native pipes

2005-11-25 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

this is basically the last version of my rt_pipe_create-ext patch to add
private, user-resizeable heaps to native pipes. Joerg has tested this
patch successfully under the 2.1 trunk, and I added a ChangeLog snippet.
Attached both 2.0.x and 2.1.x patches - please apply at least the 2.1
variant.



Applied to 2.1, thanks. We need to keep the 2.0 branch 100% compatible 
with the former RTAI/fusion API-wise, so I won't merge this in 2.0 though.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] ENOMEM detection fix in 16550A driver

2005-11-25 Thread Philippe Gerum

Jan Kiszka wrote:

Hi all,

that's what reusing drivers also gives us: Paolo found a small but ugly
bug in the 16550A driver, see attached fix and Changelog (for both 2.0.x
and 2.1.x).



Applied, thanks.



Jan





Index: ChangeLog
===
--- ChangeLog   (revision 182)
+++ ChangeLog   (working copy)
@@ -1,3 +1,8 @@
+2005-11-25  Jan Kiszka <[EMAIL PROTECTED]>
+
+   * drivers/16550A/16550A.c (RTSER_RTIOC_SET_CONFIG): Fix ENOMEM detection
+(once again). Found by Paolo Mantegazza through cross-usage.
+
 2005-11-21  Philippe Gerum  <[EMAIL PROTECTED]>
 
 	* skins/native, nucleus/pipe.c: Globally replace ENOSPC by

Index: drivers/16550A/16550A.c
===
--- drivers/16550A/16550A.c (revision 182)
+++ drivers/16550A/16550A.c (working copy)
@@ -587,10 +587,9 @@
 else
 hist_buf =
 rtdm_malloc(IN_BUFFER_SIZE * sizeof(uint64_t));
+if (!hist_buf)
+return -ENOMEM;
 }
-
-if (!hist_buf)
-return -ENOMEM;
 }
 
 rt_16550_set_config(ctx, config, &hist_buf);

@@ -1025,7 +1024,7 @@
 device_class:   RTDM_CLASS_SERIAL,
 device_sub_class:   RTDM_SUBCLASS_16550A,
 driver_name:"rt_16550A",
-driver_version: RTDM_DRIVER_VER(1, 2, 0),
+driver_version: RTDM_DRIVER_VER(1, 2, 1),
 peripheral_name:"UART 16550A",
 provider_name:  "Jan Kiszka",
 };




Index: ksrc/drivers/16550A/16550A.c
===
--- ksrc/drivers/16550A/16550A.c(revision 182)
+++ ksrc/drivers/16550A/16550A.c(working copy)
@@ -589,10 +589,9 @@
 else
 hist_buf =
 rtdm_malloc(IN_BUFFER_SIZE * sizeof(uint64_t));
+if (!hist_buf)
+return -ENOMEM;
 }
-
-if (!hist_buf)
-return -ENOMEM;
 }
 
 rt_16550_set_config(ctx, config, &hist_buf);

@@ -1027,7 +1026,7 @@
 device_class:   RTDM_CLASS_SERIAL,
 device_sub_class:   RTDM_SUBCLASS_16550A,
 driver_name:"rt_16550A",
-driver_version: RTDM_DRIVER_VER(1, 2, 0),
+driver_version: RTDM_DRIVER_VER(1, 2, 1),
 peripheral_name:"UART 16550A",
 provider_name:  "Jan Kiszka",
 };







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Re: Xenomai on PPC

2005-11-28 Thread Philippe Gerum

Andrea Iudiciani wrote:

Hi all, hi Paolo,

I would like to know if is scheduled the porting of Xenomai to
PPC-based platform and, if the answer is affermative, when.

I'm very interested about that.

best regards,




It's already there:
http://download.gna.org/xenomai/stable/xenomai-2.0.1.tar.bz2


Andrea Iudiciani


Ericsson Lab Italy S.p.A.
[EMAIL PROTECTED]






--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] NMI-watchdog related fixes

2005-11-28 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

some fixes related to warnings when you switch on Xenomai's NMI watchdog.



Applied, thanks.


Jan




Index: include/asm-i386/system.h
===
--- include/asm-i386/system.h   (Revision 191)
+++ include/asm-i386/system.h   (Arbeitskopie)
@@ -602,9 +602,11 @@
handler would simply occur after 4 billions ticks. */
 
rthal_timer_program_shot(rthal_imuldiv(delay,RTHAL_TIMER_FREQ,RTHAL_CPU_FREQ));
 #ifdef CONFIG_XENO_HW_NMI_DEBUG_LATENCY
-extern unsigned long rthal_maxlat_tsc;
-if (delay <= (ULONG_MAX - rthal_maxlat_tsc))
-rthal_nmi_arm(delay + rthal_maxlat_tsc);
+{
+extern unsigned long rthal_maxlat_tsc;
+if (delay <= (ULONG_MAX - rthal_maxlat_tsc))
+rthal_nmi_arm(delay + rthal_maxlat_tsc);
+}
 #endif /* CONFIG_XENO_HW_NMI_DEBUG_LATENCY */
 }
 
Index: ChangeLog

===
--- ChangeLog   (Revision 191)
+++ ChangeLog   (Arbeitskopie)
@@ -1,3 +1,11 @@
+2005-11-28  Jan Kiszka <[EMAIL PROTECTED]>
+
+   * ksrc/arch/i386/Kconfig: Fixed XENO_HW_NMI_DEBUG_LATENCY_MAX
+   kconfig type.
+
+   * include/asm-i386/system.h, ksrc/arch/i386/hal.c: Fixed
+   harmless compiler warnings.
+
 2005-11-25  Jan Kiszka <[EMAIL PROTECTED]>
 
 	* ksrc/skins/native/{pipe.c,syscall.c}, src/skins/native/pipe.c,

Index: ksrc/arch/i386/Kconfig
===
--- ksrc/arch/i386/Kconfig  (Revision 191)
+++ ksrc/arch/i386/Kconfig  (Arbeitskopie)
@@ -81,8 +81,8 @@
 
 config XENO_HW_NMI_DEBUG_LATENCY_MAX

 depends XENO_HW_NMI_DEBUG_LATENCY
-string "NMI watchdog latency threshold (us)"
-default "100"
+int "NMI watchdog latency threshold (us)"
+default 100
 help
 When the timer interrupt latency goes above this threshold, the
 NMI watchdog pulls the handbrake.
Index: ksrc/arch/i386/hal.c
===
--- ksrc/arch/i386/hal.c(Revision 191)
+++ ksrc/arch/i386/hal.c(Arbeitskopie)
@@ -175,6 +175,8 @@
 }
 
 #ifdef CONFIG_XENO_HW_NMI_DEBUG_LATENCY

+#include 
+
 unsigned long rthal_maxlat_tsc;
 EXPORT_SYMBOL(rthal_maxlat_tsc);
 








--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Latest snapshot (182) not compiling (kernel)

2005-11-28 Thread Philippe Gerum

Panagiotis Issaris wrote:

Hi,

I'm also having issues compiling the latest Xenomai tree.



Mm, dot-config would be great. TIA,



On 11/25/05, *Ignacio García Pérez* <[EMAIL PROTECTED]> wrote:


Hi,

I dunno what's changed, but I updated my xenomai snapshot to the latest
revision (182) and the kernel no longer compiles (fails due to some
xnpod_* undefined symbols).

Revision 179 compiled fine.



I'm using revision 199 on an x86 Ubuntu 5.10 system with GCC 4.0.1 (is
anyone else using that one?).



  LD  init/built-in.o
  LD  .tmp_vmlinux1
kernel/built-in.o: In function `disengage_irq_shield':
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:176: 
undefined reference to `__raw_write_lock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:207: 
undefined reference to `__raw_write_unlock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:186: 
undefined reference to `__raw_write_unlock'

kernel/built-in.o: In function `xnshadow_reset_shield':
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:152: 
undefined reference to `__raw_read_lock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:158: 
undefined reference to `__raw_read_unlock'

kernel/built-in.o: In function `lostage_handler':
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:152: 
undefined reference to `__raw_read_lock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:158: 
undefined reference to `__raw_read_unlock'

kernel/built-in.o: In function `xnshadow_relax':
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:152: 
undefined reference to `__raw_read_lock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:158: 
undefined reference to `__raw_read_unlock'

kernel/built-in.o: In function `schedule_event':
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:152: 
undefined reference to `__raw_read_lock'
/usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:158: 
undefined reference to `__raw_read_unlock'

make[1]: *** [.tmp_vmlinux1] Error 1
make: *** [_all] Error 2


With friendly regards,
Takis







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Latest snapshot (182) not compiling (kernel)

2005-11-28 Thread Philippe Gerum

Panagiotis Issaris wrote:

Hi Philippe,

On 11/28/05, *Philippe Gerum* <[EMAIL PROTECTED]> wrote:


Panagiotis Issaris wrote:
 > Hi,
 >
 > I'm also having issues compiling the latest Xenomai tree.
 >

Mm, dot-config would be great. TIA,

Sorry :)
The problem appears to be gone when disabling the interrupt shield. (An 
excerpt from my config can be found in my reply to Jan's e-mail.)




Confirmed here, there's indeed something fishy. Ok, I will look at this 
asap. Thanks,



With friendly regards,
Takis



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [RFC] latency tracing

2005-11-28 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

as I'm lazy, er busy, I'm pushing this idea into public instead of
hacking a patch on my own: wouldn't it be nice to have something like
the latency backtrace of PREEMPT_RT also in Xenomai?


Yes; we would have saved a lot of time with precise debug
instrumentation in place in the early fusion days. It's a lesson for the
future.


 Even once the core

is optimised ;), there can still be drivers with long IRQ locks
nuking the WCET.

I saw that there is already something for SMP spinlock debugging. Is it
a lot of work to extend this to UP and maybe even all IRQ-off locks?


AFAICS, it's basically a matter of decoupling CONFIG_SMP and 
CONFIG_XENO_SPINLOCK_DEBUG, so that we'd allow a dummy spinlock to exist 
even in UP, just to carry on with the statistics collection.
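A minimal sketch of that idea (hypothetical names, not the actual
nucleus code): on UP the lock word disappears, but the statistics fields
and the accounting code stay identical in both builds.

#include <stdint.h>

/* Dummy debug lock: the lock word only exists on SMP, while the
   hold-time statistics are collected in either case. */
typedef struct dbg_spinlock {
#ifdef CONFIG_SMP
    volatile int lock;
#endif
    uint64_t lock_tsc;   /* timestamp taken when locking */
    uint64_t max_held;   /* worst-case hold time observed */
} dbg_spinlock_t;

static inline void dbg_spin_lock(dbg_spinlock_t *l, uint64_t now)
{
#ifdef CONFIG_SMP
    /* stand-in for the arch-specific test-and-set primitive */
    while (__sync_lock_test_and_set(&l->lock, 1))
        ; /* spin */
#endif
    l->lock_tsc = now;
}

static inline void dbg_spin_unlock(dbg_spinlock_t *l, uint64_t now)
{
    uint64_t held = now - l->lock_tsc;
    if (held > l->max_held)
        l->max_held = held;  /* update the worst-case statistic */
#ifdef CONFIG_SMP
    __sync_lock_release(&l->lock);
#endif
}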


 Did

someone already look at the backtrace implementation of PREEMPT_RT in
detail?


There are different levels of support for this, but basically, mcount()
support has been crafted into the kernel so that gcc can be asked to
insert prologue/epilogue calls into every routine when -pg is passed.
Additionally, a global trace function keeps a copy of internal timings
and the callers' %eip when traversed.


 Is it complicated to port?

mcount() support should be fairly manageable to port.

 Does it require some changes at

ADEOS level?



Don't think so. gcc would do the job for the whole kernel anyway using 
mcount(), and trace calls could be spread as needed otherwise.



Jan








--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Coding style

2005-11-29 Thread Philippe Gerum

Ignacio García Pérez wrote:

Hi,

Some time ago someone mentioned the current Xenomai coding style, and
that maybe it would be a good idea to change it to something more
"standard". A good place to start would be to turn the tabs into spaces
to eliminate the editor configuration dependency. Anyway, my question
is: could indent be used to do this?



The discussion was about going for the kernel coding style and rules; in 
such a case, tabs would have to be hard ones.
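For reference, the kernel tree's scripts/Lindent wraps indent with
options approximating that style, along these lines (check your indent
version for supported flags):

indent -npro -kr -i8 -ts8 -sob -l80 -ss -ncs file.c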



Nacho.





--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] rt_intr_enable() requierd after rt_intr_create

2005-11-29 Thread Philippe Gerum

Ignacio García Pérez wrote:

Hi,

I noticed that when an interrupt object is created using
rt_intr_create(), it is created disabled, and a call to rt_intr_enable()
is necessary for the ISR to be called.

Question is: is this the expected behaviour?


Yes. You don't necessarily want to take interrupts immediately after the 
object is created, since you might have some additional housekeeping 
chores to perform before enabling the line.


 If so, I think this should

be mentioned somewhere in the rt_intr_create documentation. In fact,
from reading the docs one could infer the opposite.



Ok, will add a note to rt_intr_create().


On a related issue, I noticed that the rt_intr_enable() documentation says:

"Enables the hardware interrupt line associated with an interrupt
object. Over Adeos-based systems which mask and acknowledge IRQs upon
receipt, this operation is necessary to revalidate the interrupt channel
so that more interrupts from the same source can be notified."

Is this correct? I ask because the rt_intr_create() documentation tells
you to just return RT_INTR_ENABLE from the ISR if you want this. It's
confusing.



Well, the text seems pretty clear to me here: rt_intr_enable() specifies 
that the re-enabling _operation_ should be carried out after IRQ receipt, 
but not necessarily using rt_intr_enable(). Returning RT_INTR_ENABLE 
from the ISR is just the other way to do this.
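A sketch of the two equivalent ways (ISR prototype simplified; check the
native skin headers for the exact signatures and return flags):

/* Way 1: revalidate the line directly from the ISR... */
int my_isr(xnintr_t *cookie)
{
    /* ... acknowledge the device here ... */
    return RT_INTR_ENABLE;  /* IRQ line is re-enabled on return */
}

/* Way 2: ...or keep the line masked on return, and have a task call
   rt_intr_enable(&my_intr) once the event has been processed. */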




Nacho.





--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Latest snapshot (182) not compiling (kernel)

2005-11-29 Thread Philippe Gerum

Panagiotis Issaris wrote:

Hi Jan,

On 11/28/05, *Jan Kiszka* <[EMAIL PROTECTED]> wrote:


...
 > reference to `__raw_read_unlock'
 > kernel/built-in.o: In function `schedule_event':
 > /usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:152:
undefined
 > reference to `__raw_read_lock'
 > /usr/local/src/linux-2.6.14/kernel/xenomai/nucleus/shadow.c:158:
undefined
 > reference to `__raw_read_unlock'
 > make[1]: *** [.tmp_vmlinux1] Error 1
 > make: *** [_all] Error 2
 >



ipipe-2.6.14-1.0-11 for x86 and ipipe-2.6.14-1.0-07 for ppc fix this.
Those patches are available from the Xenomai trunk, or from the Adeos dl 
area as usual.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] fix SMI and proc cleanup

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

the smaller the bug is, the longer it takes to track. The first one in
this patch was such an issue. It prevented the SMI workaround from doing
its job because the related PCI ID table got "optimised" away (SVN trunk
only).


Good catch. Fact is that this table must survive free_initmem() now 
since we want to enable the SMI watchdog only when the nucleus is 
actually running a real-time interface.




The second one is also SVN-only. It fixes the proc-fs cleanup of the
nucleus when being used as a module.

I also removed some special character from a name in the ChangeLog.

Please apply!



Applied, thanks.


Jan





Index: ChangeLog
===
--- ChangeLog   (Revision 208)
+++ ChangeLog   (Arbeitskopie)
@@ -1,3 +1,11 @@
+2005-11-30  Jan Kiszka <[EMAIL PROTECTED]>
+
+   * ksrc/arch/i386/smi.c: Remove __initdata from rthal_smi_pci_tbl
+   to make table persistent.
+
+   * ksrc/nucleus/module.c (__xeno_sys_exit): Reorder proc-fs
+   cleanup to avoid stalled entries.
+
 2005-11-29  Philippe Gerum  <[EMAIL PROTECTED]>
 
 	* ksrc/nucleus/pipe.c (xnpipe_recv): Remove superfluous timeout

@@ -623,7 +631,7 @@
 
 	* nucleus/module.c: Revamp /proc/xenomai/{sched,stat} output format.
 
-2005-08-31  Stéphane Fillod  <[EMAIL PROTECTED]>

+2005-08-31  Stephane Fillod  <[EMAIL PROTECTED]>
 
 	* skins/rtai/fifo.c (__fifo_pkg_init): fix missing init of flush

queue handles.
Index: ksrc/arch/i386/smi.c
===
--- ksrc/arch/i386/smi.c(Revision 208)
+++ ksrc/arch/i386/smi.c(Arbeitskopie)
@@ -31,7 +31,7 @@
 #include 
 #include 
 
-static struct pci_device_id rthal_smi_pci_tbl[] __initdata = {

+static struct pci_device_id rthal_smi_pci_tbl[] = {
 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AA_0) },
 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801AB_0) },
 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0) },
Index: ksrc/nucleus/module.c
===
--- ksrc/nucleus/module.c   (Revision 208)
+++ ksrc/nucleus/module.c   (Arbeitskopie)
@@ -817,6 +817,10 @@
 {
 xnpod_shutdown(XNPOD_NORMAL_EXIT);
 
+#if defined(__KERNEL__) && defined(CONFIG_PROC_FS)

+xnpod_delete_proc();
+#endif /* __KERNEL__ && CONFIG_PROC_FS */
+
 xnarch_exit();
 
 #ifdef __KERNEL__

@@ -830,9 +834,6 @@
 #ifdef CONFIG_LTT
 xnltt_umount();
 #endif /* CONFIG_LTT */
-#ifdef CONFIG_PROC_FS
-xnpod_delete_proc();
-#endif /* CONFIG_PROC_FS */
 #endif /* __KERNEL__ */
 
 if (nkmsgbuf)









--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug] don't try this at home...

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


Hi Philippe,

I'm afraid this one is serious: let the attached migration stress test
run on likely any Xenomai since 2.0, preferably with
CONFIG_XENO_OPT_DEBUG on. Will give a nice crash sooner or later (I'm
trying to set up a serial console now).



Confirmed here. My test box went through some nifty triple salto out of 
the window running this frag for 2mn or so. Actually, the semop 
handshake is not even needed to cause the crash. At first sight, it 
looks like a migration issue taking place during the critical phase when 
a shadow thread switches back to Linux to terminate.





As it took some time to persuade my box to not just reboot but to give a
message, I'm posting here the kernel dump of the P-III running
nat_migration:

[...]
Xenomai: starting native API services.
ce649fb4 ce648000 0b17 0202 c0139246 cdf2819c cdf28070 0b12d310
   0037 ce648000  c02f0700 9a28  b7e94a70
bfed63c8
    ce648000 c0102fcb b7e94a70 bfed63dc b7faf4b0 bfed63c8

Call Trace:
 [] __ipipe_dispatch_event+0x96/0x130
 [] work_resched+0x6/0x1c
Xenomai: fatal: blocked thread migration[22175] rescheduled?!
(status=0x300010, sig=0, prev=watchdog/0[3])


This babe is awakened by Linux while Xeno sees it in a dormant state, 
likely after it has terminated. No wonder things are going wild 
after that... Ok, job queued. Thanks.



 CPU  PIDPRI  TIMEOUT  STAT  NAME


0  0  0000500080  ROOT


   0  22175  1000300110  migration
Timer: none

cea05ee4 d0842c62 cdcb cea6d030 c02f0700 c035cbec c02f0700 0286
   c0139246 0022 c02f0700 cdf28070 cdf28070 0022 0001
c02f0700
   cea6d030 cdf28070 cea6d158 cea05f78 c02b26c0 cea04000 0238
d1244537
Call Trace:
 [] __ipipe_dispatch_event+0x96/0x130
 [] schedule+0x2d0/0x720
 [] watchdog+0x0/0x80
 [] schedule_timeout+0x47/0xb0
 [] process_timeout+0x0/0x10
 [] msleep_interruptible+0x42/0x60
 [] watchdog+0x50/0x80
 [] kthread+0x8b/0x90
 [] kthread+0x0/0x90
 [] kernel_thread_helper+0x5/0x10


Jan




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug?] calling xnpod_delete_thread after self-termination

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Hi all,

as the subject already says: I face some warning of the nucleus (with
XENO_OPT_DEBUG on - useful switch) when I call xnpod_delete_thread for a
thread which has already terminated itself by leaving the thread
function. Is this double-deletion illegal? Or is it a cleanup-bug of the
nucleus?


Double-deletion is actually illegal.



To reproduce: this happens with RTnet over all Xenomai versions (i.e.
through the RTDM layer). I insert and remove the rtcfg module, and this
one cleans up some kernel tasks - and the nucleus complains.

Jan


PS: One open crash remaining on my list - likely RTnet-internal and
uncritical. ;)







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug?] calling xnpod_delete_thread after self-termination

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Hi all,

as the subject already says: I face some warning of the nucleus (with
XENO_OPT_DEBUG on - useful switch) when I call xnpod_delete_thread for a
thread which has already terminated itself by leaving the thread
function. Is this double-deletion illegal? Or is it a cleanup-bug of the
nucleus?


We could test the XNZOMBIE bit in xnpod_delete_thread() to prevent this, 
I guess.




To reproduce: this happens with RTnet over all Xenomai versions (i.e.
through the RTDM layer). I insert and remove the rtcfg module, and this
one cleans up some kernel tasks - and the nucleus complains.

Jan


PS: One open crash remaining on my list - likely RTnet-internal and
uncritical. ;)







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug?] calling xnpod_delete_thread after self-termination

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:



Hi all,

as the subject already says: I face some warning of the nucleus (with
XENO_OPT_DEBUG on - useful switch) when I call xnpod_delete_thread for a
thread which has already terminated itself by leaving the thread
function. Is this double-deletion illegal? Or is it a cleanup-bug of the
nucleus?



We could test the XNZOMBIE bit in xnpod_delete_thread() to prevent this,
I guess.




Would be better, I think.


Ok, fixed.

 Otherwise, at least xnpod_delete_thread and

also rtdm_task_destroy would require a clear warning sign.

I was not aware of this restriction, so I did not work around it in
RTnet - which is easy at RTDM level BTW: task wakeup for
self-termination + rtdm_task_join_nrt. This is cleaner anyway, so I will
patch RTnet now.

Jan



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [patch] minor doc clarification

2005-11-30 Thread Philippe Gerum

Jan Kiszka wrote:

Hi,

this patch clarifies the usage of rtdm_task_join_nrt.


Applied, thanks.



Typically, rtdm_task_join_nrt + target task wakeup should be preferred
over rtdm_task_destroy during cleanup of drivers. It's now intensively
used in RTnet.
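The resulting driver cleanup pattern might look like this (hypothetical
names; the task's main loop is assumed to check the stop flag whenever
the event wakes it up):

static rtdm_task_t my_task;
static rtdm_event_t my_event;
static volatile int stop;

void my_driver_cleanup(void)   /* non-RT context, e.g. module exit */
{
    stop = 1;                          /* tell the task to terminate */
    rtdm_event_signal(&my_event);      /* wake it up if it is blocked */
    rtdm_task_join_nrt(&my_task, 100); /* poll every 100 ms until gone */
}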

Jan




Index: ChangeLog
===
--- ChangeLog   (Revision 219)
+++ ChangeLog   (Arbeitskopie)
@@ -1,3 +1,7 @@
+2005-12-01  Jan Kiszka  <[EMAIL PROTECTED]>
+
+   * ksrc/skins/rtdm/drvlib.c (rtdm_task_join_nrt): Clarify doc.
+
 2005-11-30  Philippe Gerum  <[EMAIL PROTECTED]>
 
 	* ksrc/nucleus/pod.c (xnpod_delete_thread): Prevent double-deletion.

@@ -10,7 +14,7 @@
* ksrc/arch/powerpc/patches: Upgrade to Adeos
2.4.25-denx-0.9-04, 2.6.10-ppc64-r3.patch.
 
-2005-11-30  Ignacio García Pérez <[EMAIL PROTECTED]>

+2005-11-30  Ignacio Garcia Perez <[EMAIL PROTECTED]>
 
 	* ksrc/nucleus/timer.c: Make internal routines static.
 
Index: ksrc/skins/rtdm/drvlib.c

===
--- ksrc/skins/rtdm/drvlib.c(Revision 219)
+++ ksrc/skins/rtdm/drvlib.c(Arbeitskopie)
@@ -250,6 +250,11 @@
  * @param[in,out] task Task handle as returned by rtdm_task_init()
  * @param[in] poll_delay Polling delay in milliseconds
  *
+ * @note It is not required to call rtdm_task_destroy() for a task which has
+ * been passed to rtdm_task_join_nrt(). Moreover, don't forget to inform the
+ * targeted task that it has to terminate. Otherwise, this function will never
+ * return.
+ *
  * Environments:
  *
  * This service can be called from:







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug] don't try this at home...

2005-12-07 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Jan Kiszka wrote:



Hi Philippe,

I'm afraid this one is serious: let the attached migration stress test
run on likely any Xenomai since 2.0, preferably with
CONFIG_XENO_OPT_DEBUG on. Will give a nice crash sooner or later (I'm
trying to set up a serial console now).



Confirmed here. My test box went through some nifty triple salto out of
the window running this frag for 2mn or so. Actually, the semop
handshake is not even needed to cause the crash. At first sight, it
looks like a migration issue taking place during the critical phase when
a shadow thread switches back to Linux to terminate.




As it took some time to persuade my box to not just reboot but to give a
message, I'm posting here the kernel dump of the P-III running
nat_migration:

[...]
Xenomai: starting native API services.
ce649fb4 ce648000 0b17 0202 c0139246 cdf2819c cdf28070 0b12d310
  0037 ce648000  c02f0700 9a28  b7e94a70
bfed63c8
   ce648000 c0102fcb b7e94a70 bfed63dc b7faf4b0 bfed63c8

Call Trace:
[] __ipipe_dispatch_event+0x96/0x130
[] work_resched+0x6/0x1c
Xenomai: fatal: blocked thread migration[22175] rescheduled?!
(status=0x300010, sig=0, prev=watchdog/0[3])


This babe is awaken by Linux while Xeno sees it in a dormant state,
likely after it has terminated. No wonder why things are going wild
after that... Ok, job queued. Thanks.




I think I can explain this warning now: This happens during creation of
a new userspace real-time thread. In the context of the newly created
Linux pthread that is to become a real-time thread, Xenomai first sets
up the real-time part and then calls xnshadow_map. The latter function
does further init and then signals via xnshadow_signal_completion to the
parent Linux thread (the caller of rt_task_create e.g.) that the thread
is up. This happens before xnshadow_harden, i.e. still in preemptible
linux context.

The signalling should normally not cause a reschedule, as the caller -
the to-be-mapped linux pthread - has higher prio than the woken-up
thread.


Xeno never assumes this.

 And Xenomai implicitly assumes with this fatal-test above that

there is no preemption! But it can happen: the watchdog thread of linux
does preempt here. So, I think it's a false positive.



This is wrong. This check is not related to Linux preemption at all; it 
makes sure that control over any shadow is shared in a strictly 
_mutually exclusive_ way, so that a thread blocked at Xenomai level may 
not be seen as runnable by Linux either. Disabling it only makes 
things worse since the scheduling state is obviously corrupted when it 
triggers, and that's the root bug we are chasing right now. You should 
not draw any conclusion beyond that. Additionally, keep in mind that 
Xeno has already run over some PREEMPT_RT patches, for which an infinite 
number of CPUs is assumed over a fine-grained code base, which induces 
maximum preemption probabilities.



I disabled this particular warning and came a bit further:

I-pipe: Domain Xenomai registered.
Xenomai: hal/x86 started.
Xenomai: real-time nucleus v2.1 (Surfing With The Alien) loaded.
Xenomai: starting native API services.
Unable to handle kernel paging request at virtual address 75c08732
 printing eip:
d0acec80
*pde = 
Oops:  [#1]
PREEMPT
Modules linked in: xeno_native xeno_nucleus eepro100 mii
CPU:0
EIP:0060:[]Not tainted VLI
EFLAGS: 00010086   (2.6.14.3)
EIP is at xnpod_schedule+0x790/0xcf0 [xeno_nucleus]
eax: 8005003b   ebx: d09c1a60   ecx: 75c08500   edx: d0ae441c
esi: d0ae4210   edi: ceab1f28   ebp: ceab1f28   esp: ceab1ef4
ds: 007b   es: 007b   ss: 0068
I-pipe domain Xenomai
Stack: 0096 0001 c039cce0 000e ceab1f28 0002 ceab1f20
c010e080
    cee1ba90 000e 0004 c0103224  cee0
cee1ba90
   cee1ba90 ce86f700 0004 cee1b570 007b cee1007b 
c028450c
Call Trace:
 [] show_stack+0x86/0xc0
 [] show_registers+0x144/0x200
 [] die+0xd7/0x1e0
 [] do_page_fault+0x1e4/0x667
 [] __ipipe_handle_exception+0x34/0x80
 [] error_code+0x54/0x70
 [] 0xcee0
Code: b8 05 e4 01 00 00 39 82 18 02 00 00 74 68 0f 20 c0 83 c8 08 0f 22
c0 8b 4d e8 8b 7d c4 85 ff 8b 49 04 89 4d b8
0f 84 37 fa ff ff  81 32 02 00 00 40 0f 84 2a fa ff ff b8 00 e0 ff
ff 21 e0 8b
scheduling while atomic: migration/0x0002/17646
 [] dump_stack+0x15/0x20
 [] schedule+0x63b/0x720
 [] xnshadow_harden+0x83/0x140 [xeno_nucleus]
 [] xnshadow_wait_barrier+0x7a/0x130 [xeno_nucleus]
 [] exec_nucleus_syscall+0x77/0xa0 [xeno_nucleus]
 [] losyscall_event+0x139/0x1a0 [xeno_nucleus]
 [] __ipipe_dispatch_event+0x96/0x130
 [] __ipipe_syscall_root+0x27/0xc0
 [] sysenter_past_esp+0x3b/0x67
Xenomai: Switching migration to secondary mode after exception #14 from
user-space at 0xc028450c (pid 17646)
 <3>Debug: sleeping function called from invalid context at
include/linux/rwsem.h:43

[Xenomai-core] Re: I-pipe ppc64 0.9-02

2005-12-07 Thread Philippe Gerum

Heikki Lindholm wrote:

Updated ppc64 I-pipe patch for 2.6.14. Changes:
* sync with ppc 1.0-07
* send IPI to self fixed
* additional IPI (#4) for xenomai SMP timer implementation

Also at the usual
http://www.cs.helsinki.fi/group/nonsto/rtaippc64.html

Philippe, please put this in Xenomai 2.1.



Applied, thanks.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] Xenomai 2.1: Add SMP timer support for powerpc

2005-12-07 Thread Philippe Gerum

Heikki Lindholm wrote:
Add SMP timer support code for the powerpc arch in Xenomai 2.1. Still a 
bit rough, but I'll clean it up as I go. Compiled and tested on G5 UP, 
SMP (I-pipe 2.6.14 0.9-02) and G4 UP (I-pipe 2.6.14 1.0-07). Doesn't 
seem to break anything. On a G5, SMP seems to bring a 2-3usec latency 
penalty.




Applied, thanks.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug] don't try this at home...

2005-12-09 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Philippe Gerum wrote:



Jan Kiszka wrote:



Jan Kiszka wrote:




Hi Philippe,

I'm afraid this one is serious: let the attached migration stress test
run on likely any Xenomai since 2.0, preferably with
CONFIG_XENO_OPT_DEBUG on. Will give a nice crash sooner or later (I'm
trying to set up a serial console now).



Confirmed here. My test box went through some nifty triple salto out of
the window running this frag for 2mn or so. Actually, the semop
handshake is not even needed to cause the crash. At first sight, it
looks like a migration issue taking place during the critical phase when
a shadow thread switches back to Linux to terminate.




As it took some time to persuade my box to not just reboot but to
give a
message, I'm posting here the kernel dump of the P-III running
nat_migration:

[...]
Xenomai: starting native API services.
ce649fb4 ce648000 0b17 0202 c0139246 cdf2819c cdf28070 0b12d310
 0037 ce648000  c02f0700 9a28  b7e94a70
bfed63c8
  ce648000 c0102fcb b7e94a70 bfed63dc b7faf4b0 bfed63c8

Call Trace:
[] __ipipe_dispatch_event+0x96/0x130
[] work_resched+0x6/0x1c
Xenomai: fatal: blocked thread migration[22175] rescheduled?!
(status=0x300010, sig=0, prev=watchdog/0[3])


This babe is awaken by Linux while Xeno sees it in a dormant state,
likely after it has terminated. No wonder why things are going wild
after that... Ok, job queued. Thanks.




I think I can explain this warning now: This happens during creation of
a new userspace real-time thread. In the context of the newly created
Linux pthread that is to become a real-time thread, Xenomai first sets
up the real-time part and then calls xnshadow_map. The latter function
does further init and then signals via xnshadow_signal_completion to the
parent Linux thread (the caller of rt_task_create e.g.) that the thread
is up. This happens before xnshadow_harden, i.e. still in preemptible
linux context.

The signalling should normally do not cause a reschedule as the caller -
the to-be-mapped linux pthread - has higher prio than the woken up
thread.


Xeno never assumes this.

And Xenomai implicitly assumes with this fatal-test above that


there is no preemption! But it can happen: the watchdog thread of linux
does preempt here. So, I think it's a false positive.



This is wrong. This check is not related to Linux preemption at all; it
makes sure that control over any shadow is shared in a strictly
_mutually exclusive_ way, so that a thread blocked at Xenomai level may
not be seen as runnable by Linux either. Disabling it only makes
things worse since the scheduling state is obviously corrupted when it
triggers, and that's the root bug we are chasing right now. You should
not draw any conclusion beyond that. Additionally, keep in mind that
Xeno has already run over some PREEMPT_RT patches, for which an infinite
number of CPUs is assumed over a fine-grained code base, which induces
maximum preemption probabilities.




Ok, my explanation was a quick hack before some meeting here, I should
have elaborated it more thoroughly. Let's try to do it step by step so
that you can say where I go off the right path:

1. We enter xnshadow_map. The linux thread is happily running, the
   shadow thread is in XNDORMANT state and not yet linked to its linux
   mate. Any linux preemption hitting us here and causing a reactivation
   of this particular linux thread later will not cause any activity of
   do_schedule_event related to this thread because [1] is NULL. That's
   important, we will see later why.

2. After some init stuff, xnshadow_map links the shadow to the linux
   thread [2] and then calls xnshadow_signal_completion. This call would
   normally wake up the sleeping parent of our linux thread, performing
   a direct standard linux schedule from the new born thread to the
   parent. Again, nothing here about which do_schedule_event could
   complain.

3. Now let's consider some preemption by a third linux task after [2]
   but before [3]. Scheduling away the new linux thread is no issue. But
   when it comes back again, we will see those nice xnpod_fatal. The
   reason: our shadow thread is now linked to its linux mate, thus [1]
   will evaluate non-NULL, and later also [4] will hit as XNDORMANT is
   part of XNTHREAD_BLOCK_BITS (and the thread is not ptraced).

Ok, this is how I see THIS particular issue so far. For me the question
is now:

 a) Am I right?


Yes.


 b) If yes, is this preemption uncritical, thus the warning in the
described context a false positive?


No.


 c) If it is not, can this cause the following crash?



Since the only preemption opportunity that exists between [2] and [3] 
would come from a Linux IRQ, I remember now that we very recently played 
with the splhigh section around xnshadow_map() in native/syscall.c 
(rt_task_create)... Operations in xnshado

Re: [Xenomai-core] [RFC] rt_task_join?

2005-12-09 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


Hi all,

we ran into some issue where we have to wait on the termination of a
native real-time userspace thread during cleanup. This can be done in a
custom way of course, either via some polling on a flag or by blocking
on a standard posix semaphore that is signalled by the terminating
real-time thread. But maybe it is more useful to have a generic function
available with the native skin.

The problem is now that the pthreads underneath the real-time threads
are created with PTHREAD_CREATE_DETACHED. Changing this also changes the
semantics of other rt_task_xxx functions, as posix then requires the
creator to call pthread_join (i.e. a new rt_task_join) in any case. A
better option might be to introduce a new mode bit T_JOINABLE to decide
if the related pthread should be created detached or not. Default would
remain PTHREAD_CREATE_DETACHED, if rt_task_join is to be used,
T_JOINABLE could be passed to rt_task_create.

What do you think, worth the effort?
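Usage would then look like this in the application (a sketch of the
proposed API; task name, stack size 0 and priority 50 are arbitrary):

RT_TASK task;

rt_task_create(&task, "worker", 0, 50, T_JOINABLE);
rt_task_start(&task, &worker, NULL);
/* ... shutdown path: tell worker to return, then reap it ... */
rt_task_join(&task);   /* blocks until the task has terminated */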




Actually, the effort could be as simple as this (+ some docs) - as long
as I'm not overlooking some side effect right now.



Looks ok.






Index: include/native/task.h
===
--- include/native/task.h   (revision 245)
+++ include/native/task.h   (working copy)
@@ -35,18 +35,19 @@
 #define T_CPUMASK  0xff00
 
 /* Status/mode flags. */

-#define T_BLOCKED XNPEND
-#define T_DELAYED XNDELAY
-#define T_READY   XNREADY
-#define T_DORMANT XNDORMANT
-#define T_STARTED XNSTARTED
-#define T_BOOST   XNBOOST
-#define T_LOCKXNLOCK
-#define T_RRB XNRRB
-#define T_NOSIG   XNASDI
-#define T_SHIELD  XNSHIELD
-#define T_WARNSW  XNTRAPSW
-#define T_PRIMARY XNTHREAD_SPARE0
+#define T_BLOCKED  XNPEND
+#define T_DELAYED  XNDELAY
+#define T_READYXNREADY
+#define T_DORMANT  XNDORMANT
+#define T_STARTED  XNSTARTED
+#define T_BOOSTXNBOOST
+#define T_LOCK XNLOCK
+#define T_RRB  XNRRB
+#define T_NOSIGXNASDI
+#define T_SHIELD   XNSHIELD
+#define T_WARNSW   XNTRAPSW
+#define T_PRIMARY  XNTHREAD_SPARE0
+#define T_JOINABLE XNTHREAD_SPARE1
 
 /* Task hook types. */

 #define T_HOOK_START  XNHOOK_THREAD_START
@@ -268,6 +269,8 @@
 int rt_task_slice(RT_TASK *task,
  RTIME quantum);
 
+int rt_task_join(RT_TASK *task);

+


Should move to the user-space section of the header file (i.e. !(KERNEL 
|| XENO_SIM))



 #ifdef CONFIG_XENO_OPT_NATIVE_MPS
 
 ssize_t rt_task_send(RT_TASK *task,

Index: src/skins/native/task.c
===
--- src/skins/native/task.c (revision 245)
+++ src/skins/native/task.c (working copy)
@@ -127,7 +127,8 @@
stksize = PTHREAD_STACK_MIN;
 
 pthread_attr_setstacksize(&thattr,stksize);

-pthread_attr_setdetachstate(&thattr,PTHREAD_CREATE_DETACHED);
+if (!(mode & T_JOINABLE))
+   pthread_attr_setdetachstate(&thattr,PTHREAD_CREATE_DETACHED);
 pthread_attr_setschedpolicy(&thattr,SCHED_FIFO);
 param.sched_priority = sched_get_priority_max(SCHED_FIFO);
 pthread_attr_setschedparam(&thattr,&param);
@@ -331,6 +332,11 @@
 &quantum);
 }
 
+int rt_task_join (RT_TASK *task)

+{
+return -pthread_join((pthread_t)task->opaque2, NULL);
+}
+
 #ifdef CONFIG_XENO_OPT_NATIVE_MPS
 
 ssize_t rt_task_send (RT_TASK *task,








--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug?] set pthread stack size broken

2005-12-09 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


Jan Kiszka wrote:


Hi,

something for the night: Can someone explain why normal pthreads can be
restricted to initially use only the stack size provided via
pthread_attr_setstacksize(), while any rt-mapped thread (posix and
native) refuses to accept this? For a simple test, compile the attached
program one time as normal

   gcc -lpthread -o stacksize stacksize.c

and the other time against xeno's posix skin

   gcc `xeno-config --posix-cflags` `xeno-config --posix-ldflags` \
   -o stacksize.o stacksize.c

Then compare the memory requirements of both processes - they should
differ by 2M, the stack size when pthread_attr_setstacksize is not used.
Strange - and also critical when considering larger applications...
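The attached program is not reproduced in the archive; a minimal
equivalent of the described test might look like this:

#include <pthread.h>
#include <limits.h>
#include <unistd.h>

static void *thread_fn(void *arg)
{
    pause(); /* keep the thread alive so memory usage can be compared */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* restrict the stack: the process should shrink by ~2M */
    pthread_attr_setstacksize(&attr, PTHREAD_STACK_MIN);
    pthread_create(&tid, &attr, thread_fn, NULL);
    pthread_join(tid, NULL);
    return 0;
}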


This has been nailed down now to be some strange linker problem: while
the standard version of the stacksize demo calls the latest
pthread_create (__pthread_create_2_1 in glibc-2.3.x), the wrapped
real-time version and likely also applications linked against libnative
call an older pthread_create (__pthread_create_2_0). That variant
assumes that pthread_attr_t does not yet contain things like the stack
size and fills in the standard value again. :(

Can anyone with another build environment than my SuSE 10 confirm this?



So far I only tested against 2.1, but I don't see a reason why 2.0.x
should behave different. Will get checked, though.



Same behaviour on 2.0.x (SVN).

Jan




The attached patch fixes this issue for 2.1-SVN. A similar patch should
be applied to 2.0.x as well. And maybe it also hits the UVM, but this is
something I cannot assess ATM.

Well, you know, the smaller the bug... :-/

Jan


PS: For those you are interested why we need this, read

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=145941

Found via google:"pthread_create NOTYPE" after I noticed the differences
in "readelf -s libnative.so".




Index: src/skins/native/Makefile.am
===
--- src/skins/native/Makefile.am(revision 243)
+++ src/skins/native/Makefile.am(working copy)
@@ -1,6 +1,6 @@
 lib_LTLIBRARIES = libnative.la
 
-libnative_la_LDFLAGS = -module -version-info 0:0:0

+libnative_la_LDFLAGS = -module -version-info 0:0:0 -pthread
 
 libnative_la_SOURCES = \

alarm.c \
Index: src/skins/posix/Makefile.am
===
--- src/skins/posix/Makefile.am (revision 243)
+++ src/skins/posix/Makefile.am (working copy)
@@ -2,7 +2,7 @@
 
 lib_LTLIBRARIES = libpthread_rt.la
 
-libpthread_rt_la_LDFLAGS = -module -version-info 0:0:0

+libpthread_rt_la_LDFLAGS = -module -version-info 0:0:0 -lpthread

 libpthread_rt_la_SOURCES = \
init.c \


Applied (with -pthread everywhere), thanks.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [bug] user memory leakage on rt_task_delete

2005-12-09 Thread Philippe Gerum

Jan Kiszka wrote:

Hi all,

during my ongoing search for the init/cleanup issue of shadow threads, I
stumbled over another problem: Deleting a userspace native thread that
is blocked in primary mode does not let the NPTL clean up all resources
allocated in userspace. If you plan to do some rt_task_create/delete in
a loop, you will soon run out of memory (and Mr. oom-killer will show
up...).

I haven't found a solution for this beyond letting a rt-task always
terminate itself (or terminate the whole program after forced
deletions). If there is no solution, we should at least document this
fact somewhere.

Again, it's not a common use case, but it's also not the behaviour one
would expect from the native skin.



I see no possible workaround to allow a shadow thread deletion from 
kernel space while still leaving the opportunity for the NPTL thread to 
perform some user-space cleanups; recycling a previous Xenomai context 
to unwind a Linux context would lead to some terminally broken 
situation, so the nucleus must reap the terminated shadow at kernel 
level asap. However, the rt_task_delete() wrapper from the user-space 
support library might preferably pthread_kill() the thread, instead of 
asking the nucleus for that purpose.
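A sketch of that direction, substituting pthread_cancel() for a
dedicated signal to keep the example short (hypothetical wrapper, not
the actual library code; assumes the thread was created joinable):

#include <pthread.h>

/* Hypothetical user-space deletion path: unwind the NPTL thread in
   Linux context so glibc can release its user-space resources. */
static int delete_shadow(pthread_t tid)
{
    int err = pthread_cancel(tid);     /* request thread termination */
    if (!err)
        err = pthread_join(tid, NULL); /* reap user-space resources */
    return -err;
}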


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core

