[Xenomai-core] RTDM timerbench problems

2006-01-04 Thread Stelian Pop
Hi,

I'm trying to use xeno_timerbench as a replacement for the old 2.0
klatency module and I am running into some problems.

This is on my in-progress ARM Xenomai port. User space latency and the old
klatency work great (my board has some hardware latency problems though
- latencies can be as high as 500 us...). This is on a 2.6.14.5 kernel
using the latest ipipe patch and the latest SVN trunk Xenomai.

Note that I have never used the RTDM skin until now, so the problem could
just as well be in the core RTDM code.

1) When compiled statically into the kernel, it does not work at all:
# /usr/xenomai/testsuite/latency/latency -p 1 -t 1
== Sampling period: 1 us
== Test mode: in-kernel periodic task
latency: failed to open benchmark device, code -19
(modprobe xeno_timerbench?)

2) When loaded as a module, it does work in -t 2 mode (kernel timer
handler):
# /usr/xenomai/testsuite/latency/latency -p 1 -t 2
== Sampling period: 1 us
== Test mode: in-kernel timer handler
warming up...
RTT|  00:00:01  (in-kernel timer handler, 1 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD|6000|   14640|   72000|   0|6000|   72000
RTD|7000|   15060|   65000|   0|6000|   72000
RTD|7000|   15470|   72000|   0|6000|   72000

---|||||-
RTS|6000|   15056|   72000|   0|00:00:03/00:00:03

But in -t 1 mode (kernel periodic task) it hangs hard just after the warming 
period:

# /usr/xenomai/testsuite/latency/latency -p 1 -t 1
== Sampling period: 1 us
== Test mode: in-kernel periodic task
warming up...

Before hanging, sometimes it just prints:
Unable to handle kernel NULL pointer dereference at virtual address 004d

Sometimes the oops is more complete (note that sometimes it also hangs in
the middle of the printout):
Unable to handle kernel NULL pointer dereference at virtual address 004d
pgd = c0004000
[004d] *pgd=
Internal error: Oops: 817 [#1]
Modules linked in: xeno_timerbench
CPU: 0
PC is at xnpod_schedule+0x6b4/0x7f0
LR is at xnpod_schedule+0x54c/0x7f0
pc : []lr : []Not tainted
sp : c7d41830  ip : c79e8044  fp : c7d41860
r10: c0283c2c  r9 : c0282

Does this ring a bell for anyone?

Thanks.

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] RTDM timerbench problems

2006-01-04 Thread Stelian Pop
On Wednesday 04 January 2006 at 17:49 +0100, Jan Kiszka wrote:
> Stelian Pop wrote:
> > Hi,
> > 
> > I'm trying to use the xeno_timerbench as a replacement to the old 2.0
> > klatency module and I encounter some problems.
> > 
> > This is on my in-progress ARM Xenomai port. User space latency and old
> > klatency work great (my board has some hardware latency problems though
> > - latencies can be as high as 500 us..). This in on a 2.6.14.5 kernel
> > using the latest ipipe patch and the latest SVN trunk xenomai.
> > 
> > Note that I never used the RTDM skin until now, so the problem could be
> > as well in the core RTDM code.
> 
> Almost impossible - there are no bugs! ;)

there is no spoon either :)

 
> Well, seriously, I don't believe it's a RTDM-related issue as the
> invocation of the timer-handler and the kernel-task tests are quite
> similar. The former works, the latter fails inside the scheduler, this
> rather indicates some issue in the arch-specific part of the scheduler
> (the timer test runs in IRQ-context).
> 
> Did you test some RTDM or native kernel-only timed-task before? With
> RTDM this can be as trivial as this one:

As I said in the previous post, the old klatency works.

As for your rtdm example:

# insmod test-rtdm.ko
test_rtdm: module license 'unspecified' taints kernel.
# I'm alive.I'm alive.I'm alive.I'm alive.I'm alive.I'm alive.I'm
alive.I'm alive.I'm alive.I'm alive.I'm alive.I'm alive.I'm alive.I'm
alive.I'm alive.I'm alive.

It seems to work fine too (and I'm even able to rmmod the module).
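
For reference, a minimal kernel-only RTDM timed task of this kind, using
the RTDM task API (rtdm_task_init(), rtdm_task_wait_period(),
rtdm_task_destroy()), could look roughly like the sketch below; this is
an illustration with made-up names, not Jan's original module, and the
period/priority values are arbitrary:

/* Hypothetical sketch of a trivial kernel-only RTDM timed task. */
#include <linux/module.h>
#include <rtdm/rtdm_driver.h>

static rtdm_task_t demo_task;

static void demo_task_proc(void *arg)
{
    for (;;) {
        rtdm_printk("I'm alive.");
        /* Sleep until the next period; bail out on any error
         * (e.g. when the task is being destroyed). */
        if (rtdm_task_wait_period())
            break;
    }
}

static int __init demo_init(void)
{
    /* priority 50, period 100 ms -- arbitrary values */
    return rtdm_task_init(&demo_task, "demo_task", demo_task_proc,
                          NULL, 50, 100000000ULL);
}

static void __exit demo_exit(void)
{
    rtdm_task_destroy(&demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");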

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] RTDM timerbench problems

2006-01-04 Thread Stelian Pop
On Wednesday 04 January 2006 at 18:19 +0100, Gilles Chanteperdrix wrote:
> Stelian Pop wrote:
>  > Before hanging, sometimes it just prints:
>  >Unable to handle kernel NULL pointer dereference at virtual address 
> 004d
>  > 
>  > Sometimes the oops is more complete (note that sometimes it also hangs in 
> the middle of
>  > the printout):
>  >Unable to handle kernel NULL pointer dereference at virtual address 
> 004d
>  >pgd = c0004000
>  >[004d] *pgd=
>  >Internal error: Oops: 817 [#1]
>  >Modules linked in: xeno_timerbench
>  >CPU: 0
>  >PC is at xnpod_schedule+0x6b4/0x7f0
>  >LR is at xnpod_schedule+0x54c/0x7f0
>  >pc : []lr : []Not tainted
>  >sp : c7d41830  ip : c79e8044  fp : c7d41860
>  >r10: c0283c2c  r9 : c0282
>  > 
>  > Does this rings a bell to someone ? 
> 
> Not much, but two remarks:
> - last summer, there use to be such a problem on x86 because of some
>   wrong FPU switch. So, is FPU enabled ? Are you compiling latency with
>   -msoft-float ?

Yes, everything is soft float.

>   When announcing the ARM Adeos port, you told us that not all
>   exceptions were trapped by the Adeos patch. If one of the FPU
>   exceptions is not trapped, it would explain why we do not get the
>   usual message "Invalid use of FPU at ..." before the oops.

Time has passed and the exceptions are now trapped. 

> - as far as I know, xnpod_schedule never recurses, so, there is little
>   chance for PC and LR to be both in xnpod_schedule. If I am not wrong,
>   it means that you will have to have a look at the disassembly (sorry)
>   and see in really what functions the bugs happen. Sometimes the -r and
>   -S option of objdump help...

I'll take a look. 

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] latency kernel part crashes on ppc64

2006-01-08 Thread Stelian Pop
On Sunday 08 January 2006 at 18:56 +0200, Heikki Lindholm wrote:

> >>Some recent changes (*cough* RTDM benchmark driver *cough*) broke kernel
> >>mode benchmarking for ppc64. Previously klatency worked fine, but now
> >>latency -t 1 crashes somewhere in xnpod_schedule. Jan, any pending
> >>patches a comin'?

So it seems I'm not alone. 

I have done some additional debugging on this issue over the last few days.
I still haven't found the bug, but I have narrowed it down a bit.
> 
> > 
> > Nope, it should work as it is. But as Stelian also reported problems on
> > his fresh ARM port with the in-kernel test, I cannot exclude that there
> > /might/ be a problem in the benchmark.
> > 
> > As I don't have any ppc64 hanging around somewhere, we will have to go
> > through this together. Things I would like to know:
> 
> Dammit, I hoped you'd whip up a fix just from me noting a problem. Well, 
> all right then, I'll play along...;)
> 
> >  o When and how does it crash? At start-up immediately? Or after a
> >while?
> 
> I inserted some serial debug prints and it gets two passes to 
> eval_outer_loop done (enter/exit function). After that it freezes. 

It freezes exactly upon the invocation of rtdm_event_pulse(), which
triggers a reschedule. In xnpod_schedule(), the scheduler queue has been
corrupted, and this is what causes the illegal accesses.

> Without the debug printing it dies with kernel access of illegal memory 
> at xnpod_schedule, which btw. has been quite a common place to die.

Same for me.

> >  o Are there any details / backtraces available with the crash?
> 
> Becaktrace limits to xnpod_schedule if I remember right.

Same for me. But very often I don't even get a backtrace, it just hangs.

> >  o Does -t2 work?
> 
> Umm. Probably not. See below.

Heikki said in a later mail that it works for him, and so it does for me
too.

> >  o What happens if your disable "rtdm_event_pulse(&ctx->result_event);"
> >in eval_outer_loop (thus no signalling of intermediate results during
> >the test)? Does it still crash, maybe later during cleanup now?

> Doesn't freeze and can be exited with ctrl-c and even re-run.

Same for me.

Some additional information: I've disabled FPU handling in Xenomai and it
doesn't change anything; it still crashes.

As I said before, the old klatency test does work reliably for me, with
the latest Xenomai.

I tried moving the 'display' thread into the kernel, and in this
configuration it no longer crashes.

I've started simplifying the code, trying to get down to the simplest code
that still exhibits the problem. The result is at
http://www.popies.net/tmp/xenobug/bug.tgz if somebody wants to take a
look.

I'll be working on this again tomorrow...

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>
Open Wide


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Re: Xenomai broken on Linux 2.4

2006-01-12 Thread Stelian Pop


On 12 January 2006 at 10:12, Philippe Gerum wrote:



Hi Wolfgang,

Wolfgang Grandegger wrote:

Hi Philippe,
I just realized that recent changes in ksrc/arch/powerpc/switch.S  
have broken the build of Xenomai with linuxppc_2_4_devel on PPC:

 #include  does not exist
 Symbols like SAVE_NVGPRS do not exist



Ok, thanks for the info. I'm going to fix and check the 2.4/ppc  
port today.


The 2.6/ppc build fails in the same way. Correcting it to offsets.h> fixes it.


Stelian.


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai on PXA255

2006-05-29 Thread Stelian Pop
On Monday 29 May 2006 at 16:14 +0200, Bart Jonkers wrote:

> > The Ipipe patch for ARM only support the integrator platform for
> > now. There exist patch for another ARM platform, but it exist only as
> > a separated patch. Looking at the patch contents it seems that the only
> > patched files specific to the integrator architecture are :
> > arch/arm/mach-integrator/core.c
> > arch/arm/mach-integrator/integrator_cp.c
> > include/asm-arm/arch-integrator/entry-macro.S
> > include/asm-arm/arch-integrator/platform.h
> > include/asm-arm/arch-integrator/timex.h
> > 
> > Looking rapidly at these files, it seems that the machine specific
> > functions and variables are reduced to:
> > 
> > int __ipipe_mach_timerint;
> > int __ipipe_mach_timerstolen;
> > unsigned int __ipipe_mach_ticks_per_jiffy;
> > 
> > void __ipipe_mach_acktimer(void);
> > unsigned long long __ipipe_mach_get_tsc(void);
> > void __ipipe_mach_set_dec(unsigned long reload);
> > unsigned long __ipipe_mach_get_dec(void);
> > 
> > If you provide the same variables and functions for the PXA platform, I
> > think there is no modification to be done at Xenomai level.

Gilles is 100% correct. All the platform-specific code has to do is
provide the low-level timer manipulation functions.

> I found this out already. But it would be a easier to implement this
> functions if I know what they have to do. So could somebody give an
> explanation of this variables and functions?

Well, __ipipe_mach_acktimer() acks the timer, __ipipe_mach_get_tsc() returns
the platform's TSC, __ipipe_mach_set_dec() sets the decrementer, and so on.
Should I go on?
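
For reference, those hooks boil down to a skeleton like the one below;
the myplat_*() helpers and MYPLAT_TIMER_IRQ are purely hypothetical
placeholders for the platform's own timer registers (this is not the
Integrator code):

/* Hypothetical skeleton of the per-platform I-pipe timer hooks. */
int __ipipe_mach_timerint = MYPLAT_TIMER_IRQ;      /* IRQ number of the system timer */
int __ipipe_mach_timerstolen;                      /* non-zero once a domain (e.g. Xenomai) owns the timer */
unsigned int __ipipe_mach_ticks_per_jiffy = LATCH; /* timer ticks per Linux jiffy */

void __ipipe_mach_acktimer(void)
{
    /* Clear the timer interrupt condition in the timer unit. */
    myplat_timer_clear_irq();
}

void __ipipe_mach_set_dec(unsigned long reload)
{
    /* Program the timer so the next interrupt fires in 'reload' ticks. */
    myplat_timer_program(reload);
}

unsigned long __ipipe_mach_get_dec(void)
{
    /* Ticks remaining until the next timer interrupt. */
    return myplat_timer_remaining();
}

unsigned long long __ipipe_mach_get_tsc(void)
{
    /* Monotonic tick count; see the follow-up below on one way to build it. */
    return myplat_timer_tsc();
}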

If you have specific questions feel free to ask. But I suggest you read
and try to understand the code first.

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai on PXA255

2006-05-29 Thread Stelian Pop
On Monday 29 May 2006 at 17:45 +0200, Gilles Chanteperdrix wrote:
> Stelian Pop wrote:
>  > If you have specific questions feel free to ask. But I suggest you read
>  > and try to understand the code first.
> 
> Maybe we could provide a quick overview of how this works, Stelian,
> please correct me if I am wrong. 

Your description is accurate. However, such deep knowledge of the inner
workings should not be needed to port Xenomai to a new ARM platform
(well, assuming the port works immediately; debugging it could require
more knowledge :) ).

In the list of 'undocumented' functions there is also
__ipipe_mach_get_tsc(), which should return some accurate time
information. Most ARM platforms do not have a dedicated time stamp
counter register, so most of the time the timer tick count is used instead
(giving a TSC resolution of one microsecond). This is what standard
Linux also uses.

If the platform has something more appropriate than the core timer for
measuring time, this function is where you need to wire it in.
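
As an illustration of the tick-count approach, a __ipipe_mach_get_tsc()
built on the timer ticks could look roughly like this (a sketch reusing
the hypothetical myplat naming; races against the timer interrupt are
ignored for brevity and must be handled in real code):

/* Sketch: extend the elapsed timer ticks to a 64-bit "TSC" in software. */
static unsigned long long myplat_tsc;      /* ticks accumulated at each timer interrupt */
static unsigned long myplat_last_reload;   /* value last programmed by __ipipe_mach_set_dec() */

unsigned long long __ipipe_mach_get_tsc(void)
{
    /* Ticks elapsed since the last reload = programmed value minus
     * what is still pending in the (decrementing) timer. */
    return myplat_tsc + (myplat_last_reload - __ipipe_mach_get_dec());
}

/* To be called from the timer acknowledge path: fold the interval that
 * just expired into the software counter. */
static void myplat_tsc_update(void)
{
    myplat_tsc += myplat_last_reload;
}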

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>
Open Wide


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Some questions about the ARM port (Integrator vs. PXA)

2006-06-29 Thread Stelian Pop
On Thursday 29 June 2006 at 10:38 +0200, Detlef Vollmann wrote:
> Hello,

Hi,

> 
> looking at the ARM Integrator patch (which seems to be something
> like the reference port for ARM), I'm not really clear about some
> of the code:
> 
>  a) What's the difference between __ipipe_mach_ticks_per_jiffy
> and LATCH?

As a matter of fact there is no difference.

>  b) Is there some (hidden, intended future) semantics of tscok?
> Right now it just avoids that garbage is returned before
> the timer is initialized.

tscok is used to prevent __ipipe_mach_get_tsc() from returning bogus values
in the early boot stages (when the timer is not yet initialized but the
I-pipe is). IIRC this was mainly needed when enabling
CONFIG_IPIPE_STATS...

>  c) In the interrupt routine, the comment currently says:
> "If Linux is the only domain, ack the timer and reprogram it",
> but the actual code looks as if the comment should read:
> "If Linux is running natively, ack the timer.
> If Linux's the only domain, reprogram it."
> What's wrong, the code or the comment?

Always trust the code :)

The true meaning of that code is (see the sketch after this list):
* if Linux is running natively (no I-pipe), ack and reprogram the timer;
* if Linux is running under the I-pipe, but it still has control over
the timer (no Xenomai, for example), then reprogram the timer (the I-pipe
has already acked it);
* if some other domain has taken over the timer, then do nothing (the
I-pipe has acked it, and the other domain has reprogrammed it).
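
Expressed as code, that decision looks roughly like this (a pseudo-C
sketch, not the actual Integrator handler; the myplat_*() helpers are
placeholders):

/* Sketch of the timer interrupt logic described above. */
static void myplat_timer_tick(void)
{
#ifndef CONFIG_IPIPE
    /* Linux runs natively: ack and reprogram the timer ourselves. */
    myplat_timer_clear_irq();
    myplat_timer_program(LATCH);
#else
    /* Under the I-pipe the interrupt has already been acked early by
     * __ipipe_mach_acktimer().  Reprogram only if Linux still owns the
     * timer; if another domain (e.g. Xenomai) has taken it over, that
     * domain has already reprogrammed it. */
    if (!__ipipe_mach_timerstolen)
        myplat_timer_program(LATCH);
#endif
    /* ...then do the normal Linux tick processing. */
}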

>  d) __ipipe_mach_set_dec() sets the next match value of the timer.
> But the current counter isn't changed.  Correct?
> Or is setting the match value and setting the current counter
> the same operation on the Integrator?

Yes, the Integrator has a true decrementer and not a match counter.

__ipipe_mach_set_dec(x) must program the timer so that a timer interrupt
occurs after x ticks.

> __ipipe_mach_get_dec() doesn't return the next match value, but
> the current actual counter, i.e. some value between 0 and the
> next match value.  Correct?

Yes.

> And if so, which value?  The number of ticks elapsed since the
> last match or the number of ticks until the next match occurs?

The number of ticks until the next match occurs.

> The names "..._dec" suggest that the integrator provides a clock
> register that decrements.  The PXA provides clock registers
> that are incremented, and the timer used for the Linux ticks
> is (in Linux) never reset, but instead the match value at
> which an interrupt occurs is incremented on each interrupt.
> So, a port to the PXA wouldn't be straightforward, and so I
> want to make sure that I really understand the semantics
> of the ARM port.

Indeed, you will need to adapt the PXA incrementer to the ipipe
decrementer semantics.
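
A sketch of that adaptation, mapping the decrementer calls onto a
free-running up-counter with a match register (the counter/match helpers
below are placeholders; on the PXA the OS timer registers would play
these roles):

/* Sketch: emulate the I-pipe "decrementer" on an incrementing counter. */
static unsigned long myplat_next_match;

void __ipipe_mach_set_dec(unsigned long reload)
{
    /* Next interrupt 'reload' ticks from now.  A real implementation
     * must also handle the "match too close / already passed" case
     * (and the PXA erratum mentioned later in this thread). */
    myplat_next_match = myplat_counter_read() + reload;
    myplat_match_write(myplat_next_match);
}

unsigned long __ipipe_mach_get_dec(void)
{
    /* Ticks left until the next match, i.e. until the next interrupt. */
    return myplat_next_match - myplat_counter_read();
}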

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Some questions about the ARM port (Integrator vs. PXA)

2006-06-30 Thread Stelian Pop
On Friday 30 June 2006 at 08:29 +0200, Detlef Vollmann wrote:
> Stelian Pop wrote:
> > On Thursday 29 June 2006 at 10:38 +0200, Detlef Vollmann wrote:
> 
> > >  a) What's the difference between __ipipe_mach_ticks_per_jiffy
> > > and LATCH?
> > 
> > As a matter of fact there is no difference.
> Does this mean that __ipipe_mach_ticks_per_jiffy never changes?

Indeed.

> What about the correlation between __ipipe_mach_set_dec() and
> __ipipe_mach_ticks_per_jiffy?  __ipipe_mach_set_dec() seems
> to do a permanent change, and not only a one-time change.

__ipipe_mach_set_dec() sets the *next* timer occurrence. It works in a
one-shot way (like a real decrementer, not an auto-reloading one).

> Is the Linux timer interrupt still only called after LATCH ticks?

The I-pipe doesn't do anything special to the timer (except acking the
interrupt, because this must be done early in some cases).
If Linux handles the timer, then nothing changes, the timer frequency is
LATCH.

If Xenomai handles the timer, then it is its responsibility to propagate
the interrupt to Linux when it wants to (look for xnarch_relay_tick() in
Xenomai's nucleus).

> Now I have another question on this: on the PXA I have a hardware
> problem so that I must sometimes set the next match value to the
> match value after the next one, so effectively loosing one
> interrupt.

> If Linux is responsible for reprogramming the timer, I should tell
> ipipe about it, so that ipipe can tell any other domain.
> How can I do that?

If Linux is responsible for reprogramming the timer, there is a good
chance there is no other domain, so it doesn't matter much :)

But you will have a problem when Xenomai takes over the timer, because
its scheduler doesn't expect to lose timer ticks.

I can imagine adding a return code to __ipipe_mach_set_dec() which would
tell whether the hardware has been programmed successfully or not; in the
latter case Xenomai (or the I-pipe?) would have to busy-wait until the
next (calculated) timer occurrence... What do the experts think?
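
Just to illustrate the idea (this is speculation about a possible
interface, not existing code; all names are placeholders):

/* Speculative sketch of the proposal above. */
int __ipipe_mach_set_dec(unsigned long reload)
{
    if (!myplat_timer_can_program(reload))
        return -1;      /* hardware refused / window missed */
    myplat_match_write(myplat_counter_read() + reload);
    return 0;
}

/* Caller side (Xenomai or the I-pipe), as imagined above: */
static void program_next_shot(unsigned long reload)
{
    if (__ipipe_mach_set_dec(reload) < 0) {
        /* Busy-wait until the computed expiry, then handle the tick
         * as if the interrupt had fired. */
        unsigned long deadline = myplat_counter_read() + reload;
        while ((long)(deadline - myplat_counter_read()) > 0)
            cpu_relax();
        /* ...handle the timer event here... */
    }
}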

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Xenomai on PXA

2006-07-11 Thread Stelian Pop
On Tuesday 11 July 2006 at 08:20 +0200, Detlef Vollmann wrote:

> What is missing is a look at entry-macro.S.
> Stelian Pop has done something for the Integrator that I don't
> really understand and therefore I can't say whether the PXA needs
> something similar.

Well, you should have asked if you didn't understand. :)

The change in entry-macro.S optimizes the fast path for a timer
interrupt. Instead of looking at each interrupt controller's status and
computing the IRQ number, the code tests the timer interrupt status first
and returns immediately if it is pending.

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-15 Thread Stelian Pop
Hi,

I need to be able to map an IO memory buffer to userspace from a RTDM
driver.

rtdm_mmap_to_user() seems to do what I need, but it doesn't work. Its
code assumes that all virtual addresses between VMALLOC_START and
VMALLOC_END were obtained through vmalloc() and tries to call
xnarch_remap_vm_page() on them, which fails.

Virtual addresses coming from ioremap() need to go through
xnarch_remap_io_page_range(), and their physical address cannot be
obtained with a simple virt_to_phys().
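
For context, the driver-side call this is about looks roughly like the
sketch below (the device base address, size and surrounding code are made
up; the rtdm_mmap_to_user() argument list is the one that appears later in
this thread):

/* Sketch of the use case: hand an ioremap'ed I/O buffer to user space. */
static void *mydev_io_virt;    /* MYDEV_IO_BASE/MYDEV_IO_SIZE are hypothetical */

static int mydev_map_to_user(rtdm_user_info_t *user_info, void **upptr)
{
    mydev_io_virt = ioremap(MYDEV_IO_BASE, MYDEV_IO_SIZE);
    if (!mydev_io_virt)
        return -ENOMEM;

    /* Without the patch below this fails: the returned vaddr lies in
     * the vmalloc range, so RTDM wrongly treats it as vmalloc'ed. */
    return rtdm_mmap_to_user(user_info, mydev_io_virt, MYDEV_IO_SIZE,
                             PROT_READ | PROT_WRITE, upptr, NULL, NULL);
}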

A working patch is attached below, but there might (should?) be a
better way to do it. Some of the code may also belong in
asm-generic/system.h instead of the RTDM skin.

Note that you may also need to EXPORT_SYMBOL() vmlist and vmlist_lock in
mm/vmalloc.c if you want to build the RTDM skin as a module.

Comments ?

Stelian.

Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c(révision 1624)
+++ ksrc/skins/rtdm/drvlib.c(copie de travail)
@@ -1377,6 +1377,7 @@
 {
 struct rtdm_mmap_data *mmap_data = filp->private_data;
 unsigned long vaddr, maddr, size;
+struct vm_struct *vm;
 
 vma->vm_ops = mmap_data->vm_ops;
 vma->vm_private_data = mmap_data->vm_private_data;
@@ -1385,7 +1386,21 @@
 maddr = vma->vm_start;
 size  = vma->vm_end - vma->vm_start;
 
+write_lock(&vmlist_lock);
+for (vm = vmlist; vm != NULL; vm = vm->next) {
+   if (vm->addr == (void *)vaddr)
+   break;
+}
+write_unlock(&vmlist_lock);
+
+/* ioremap'ed memory */
+if (vm && vm->flags & VM_IOREMAP)
+return xnarch_remap_io_page_range(vma, maddr,
+ vm->phys_addr,
+  size, PAGE_SHARED);
+else
 #ifdef CONFIG_MMU
+/* vmalloc'ed memory */
 if ((vaddr >= VMALLOC_START) && (vaddr < VMALLOC_END)) {
 unsigned long mapped_size = 0;
 

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] pthread_attr_getinheritsched() doc fix

2006-09-18 Thread Stelian Pop
Hi,

The attached patchlet fixes the documentation of
pthread_attr_getinheritsched() (PTHREAD_INHERIT_SCHED and
PTHREAD_EXPLICIT_SCHED were reversed).
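
For reference, the attribute is normally used as in the fragment below
(standard POSIX calls; thread_fn, the policy and the priority are
arbitrary examples, and error checking is omitted):

/* With PTHREAD_EXPLICIT_SCHED, the policy/priority set in the attribute
 * object are used; with PTHREAD_INHERIT_SCHED they are inherited from
 * the creating thread instead. */
pthread_attr_t attr;
struct sched_param sp = { .sched_priority = 50 };
pthread_t tid;

pthread_attr_init(&attr);
pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
pthread_attr_setschedparam(&attr, &sp);
pthread_create(&tid, &attr, thread_fn, NULL);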

Stelian.

Index: ksrc/skins/posix/thread_attr.c
===
--- ksrc/skins/posix/thread_attr.c  (révision 1648)
+++ ksrc/skins/posix/thread_attr.c  (copie de travail)
@@ -305,10 +305,10 @@
  * This service returns at the address @a inheritsched the value of the @a
  * inheritsched attribute in the attribute object @a attr.
  *
- * Threads created with this attribute set to PTHREAD_EXPLICIT_SCHED will use
+ * Threads created with this attribute set to PTHREAD_INHERIT_SCHED will use
  * the same scheduling policy and priority as the thread calling
  * pthread_create(). Threads created with this attribute set to
- * PTHREAD_INHERIT_SCHED will use the value of the @a schedpolicy attribute as
+ * PTHREAD_EXPLICIT_SCHED will use the value of the @a schedpolicy attribute as
  * scheduling policy, and the value of the @a schedparam  attribute as 
scheduling
  * priority.
  *

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-18 Thread Stelian Pop
On Friday 15 September 2006 at 18:40 +0200, Jan Kiszka wrote:

> In case no one comes up with an easy, portable way to detect remapped
> memory as well: What about some flags the caller of rtdm_mmap_to_user
> has to pass, telling what kind of memory it is? Would simplify the RTDM
> part, and the user normally knows quite well where the memory came from.
> And I love to break APIs. :)

This would be perfect. We could even reuse the prot field for that
(PROT_READ | PROT_WRITE | PROT_VMALLOC | PROT_IOREMAP). Not the cleanest
solution, but it won't break the API this way.

Or maybe we should lower the API level a little bit, and let the user
specify the physical address of the mapping instead of the virtual
one...

Stelian.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Stelian Pop
ref RTDM_MEMTYPE_xxx
  * @param[in] prot Protection flags for the user's memory range, typically
  * either PROT_READ or PROT_READ|PROT_WRITE
  * @param[in,out] pptr Address of a pointer containing the desired user
@@ -1462,12 +1472,14 @@
  *
  * Rescheduling: possible.
  */
-int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
-  int prot, void **pptr,
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info,
+  void *src_vaddr, unsigned long src_paddr, size_t len,
+  int mem_type, int prot, void **pptr,
   struct vm_operations_struct *vm_ops,
   void *vm_private_data)
 {
-struct rtdm_mmap_data   mmap_data = {src_addr, vm_ops, vm_private_data};
+struct rtdm_mmap_data   mmap_data = { src_vaddr, src_paddr, mem_type,
+  vm_ops, vm_private_data };
 struct file     *filp;
 const struct file_operations*old_fops;
 void*old_priv_data;


-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Stelian Pop
 struct vm_operations_struct *vm_ops,
   void *vm_private_data)
 {
-struct rtdm_mmap_data   mmap_data = {src_addr, vm_ops, vm_private_data};
-struct file *filp;
-const struct file_operations*old_fops;
-void*old_priv_data;
-void*user_ptr;
+struct rtdm_mmap_data   mmap_data = { src_addr, 0,
+  vm_ops, vm_private_data };
 
+return __rtdm_do_mmap(user_info, &mmap_data, len, prot, pptr);
+}
 
-XENO_ASSERT(RTDM, xnpod_root_p(), return -EPERM;);
+EXPORT_SYMBOL(rtdm_mmap_to_user);
 
-filp = filp_open("/dev/zero", O_RDWR, 0);
-if (IS_ERR(filp))
-return PTR_ERR(filp);
+/**
+ * Map an I/O memory range into the address space of the user.
+ *
+ * @param[in] user_info User information pointer as passed to the invoked
+ * device operation handler
+ * @param[in] src_addr I/O physical address to be mapped
+ * @param[in] len Length of the memory range
+ * @param[in] prot Protection flags for the user's memory range, typically
+ * either PROT_READ or PROT_READ|PROT_WRITE
+ * @param[in,out] pptr Address of a pointer containing the desired user
+ * address or NULL on entry and the finally assigned address on return
+ * @param[in] vm_ops vm_operations to be executed on the vma_area of the
+ * user memory range or NULL
+ * @param[in] vm_private_data Private data to be stored in the vma_area,
+ * primarily useful for vm_operation handlers
+ *
+ * @return 0 on success, otherwise (most common values):
+ *
+ * - -EINVAL is returned if an invalid start address, size, or destination
+ * address was passed.
+ *
+ * - -ENOMEM is returned if there is insufficient free memory or the limit of
+ * memory mapping for the user process was reached.
+ *
+ * - -EAGAIN is returned if too much memory has been already locked by the
+ * user process.
+ *
+ * - -EPERM @e may be returned if an illegal invocation environment is
+ * detected.
+ *
+ * @note RTDM supports two models for unmapping the user memory range again.
+ * One is explicite unmapping via rtdm_munmap(), either performed when the
+ * user requests it via an IOCTL etc. or when the related device is closed.
+ * The other is automatic unmapping, triggered by the user invoking standard
+ * munmap() or by the termination of the related process. To track release of
+ * the mapping and therefore relinquishment of the referenced physical memory,
+ * the caller of rtdm_mmap_to_user() can pass a vm_operations_struct on
+ * invocation, defining a close handler for the vm_area. See Linux
+ * documentaion (e.g. Linux Device Drivers book) on virtual memory management
+ * for details.
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - User-space task (non-RT)
+ *
+ * Rescheduling: possible.
+ */
+int rtdm_iomap_to_user(rtdm_user_info_t *user_info,
+   unsigned long src_addr, size_t len,
+   int prot, void **pptr,
+   struct vm_operations_struct *vm_ops,
+   void *vm_private_data)
+{
+struct rtdm_mmap_data   mmap_data = { NULL, src_addr,
+  vm_ops, vm_private_data };
 
-old_fops = filp->f_op;
-filp->f_op = &rtdm_mmap_fops;
-
-old_priv_data = filp->private_data;
-filp->private_data = &mmap_data;
-
-down_write(&user_info->mm->mmap_sem);
-user_ptr = (void *)do_mmap(filp, (unsigned long)*pptr, len, prot,
-   MAP_SHARED, 0);
-up_write(&user_info->mm->mmap_sem);
-
-filp->f_op = (typeof(filp->f_op))old_fops;
-filp->private_data = old_priv_data;
-
-filp_close(filp, user_info->files);
-
-if (IS_ERR(user_ptr))
-return PTR_ERR(user_ptr);
-
-*pptr = user_ptr;
-    return 0;
+return __rtdm_do_mmap(user_info, &mmap_data, len, prot, pptr);
 }
 
-EXPORT_SYMBOL(rtdm_mmap_to_user);
+EXPORT_SYMBOL(rtdm_iomap_to_user);
 
-
 /**
  * Unmap a user memory range.
  *

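With the new call in place, the driver-side code for the I/O case reduces
to something like this sketch (hypothetical device constants):

/* Sketch: map a physical I/O range straight to user space. */
static int mydev_map_to_user(rtdm_user_info_t *user_info, void **upptr)
{
    /* MYDEV_IO_BASE/MYDEV_IO_SIZE are hypothetical physical base/size. */
    return rtdm_iomap_to_user(user_info, MYDEV_IO_BASE, MYDEV_IO_SIZE,
                              PROT_READ | PROT_WRITE, upptr, NULL, NULL);
}
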
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH, RFC] Make ioremap'ed memory available through rtdm_mmap_to_user()

2006-09-22 Thread Stelian Pop
On Friday 22 September 2006 at 16:41 +0200, Jan Kiszka wrote:
> Stelian Pop wrote:
> > On Friday 22 September 2006 at 10:58 +0200, Jan Kiszka wrote:
> > 
> >>>   d) make a special rtdm_mmap_iomem_to_user() function...
> >> Also an option. Specifically, it wouldn't break the existing API... What
> >> about rtdm_iomap_to_user? Would you like to work out a patch in this
> >> direction?
> > 
> > Here it comes.
> 
> Your patch looks very good. Assuming that you have tested it
> successfully,

I did.

>  I'm going to merge it soon.

Thanks.

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] Clarify pthread_make_periodic_np() usage

2006-09-29 Thread Stelian Pop
Hi,

The attached patch makes the documentation of pthread_make_periodic_np()
explicit about the rescheduling which takes place on invocation, until
the start time has been reached.

I for one thought that pthread_make_periodic_np() would not sleep, and
that pthread_wait_np() would do the wait instead when invoked for the
first time...
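
To make the documented behaviour concrete, the usual pattern looks like
the sketch below (arbitrary start offset and period, error handling
omitted; the exact pthread_wait_np() prototype is left to the skin
headers):

/* Sketch of a periodic thread using the POSIX skin extensions. */
void *periodic_thread(void *arg)
{
    struct timespec start, period;

    clock_gettime(CLOCK_REALTIME, &start);
    start.tv_sec += 1;                     /* first release point: now + 1 s */
    period.tv_sec = 0;
    period.tv_nsec = 10 * 1000 * 1000;     /* 10 ms period */

    /* As documented by the patch below, this call already blocks the
     * target thread (here: ourselves) until 'start' is reached. */
    pthread_make_periodic_np(pthread_self(), &start, &period);

    for (;;) {
        /* ... periodic work ... */
        /* then block until the next release point with pthread_wait_np()
         * (see the skin headers for its exact prototype). */
    }
    return NULL;
}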

Stelian.

Index: ksrc/skins/posix/thread.c
===
--- ksrc/skins/posix/thread.c   (révision 1680)
+++ ksrc/skins/posix/thread.c   (copie de travail)
@@ -503,10 +503,12 @@
  *
  * This service is a non-portable extension of the POSIX interface.
  *
- * @param thread thread identifier;
+ * @param thread thread identifier. This thread is immediately delayed
+ * until the first periodic release point is reached.
  *
  * @param starttp start time, expressed as an absolute value of the
- * CLOCK_REALTIME clock;
+ * CLOCK_REALTIME clock. The affected thread will be delayed until
+ * this point is reached.
  *
  * @param periodtp period, expressed as a time interval.
  *
@@ -514,6 +516,8 @@
  * @return an error number if:
  * - ESRCH, @a thread is invalid;
  * - ETIMEDOUT, the start time has already passed.
+ *
+ * Rescheduling: always, until the @starttp start time has been reached.
  */
 int pthread_make_periodic_np(pthread_t thread,
 struct timespec *starttp,
 
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] automatically run ldconfig on installation

2006-09-29 Thread Stelian Pop
Hi,

When installing the user-space skin libraries, 'ldconfig' is not
automatically run to update the dynamic library loader cache.

The attached patch does just that, but only if the user is not
installing cross-compiled libraries, since running ldconfig does not make
sense in that case.

Stelian.
Index: configure.in
===
--- configure.in(révision 1680)
+++ configure.in(copie de travail)
@@ -593,6 +593,7 @@
 AC_SUBST([CONFIG_STATUS_DEPENDENCIES],
 ['$(top_srcdir)/src/skins/posix/posix.wrappers'])
 AC_SUBST(XENO_POSIX_WRAPPERS)
+AC_SUBST(cross_compiling)
 
 base=asm-$XENO_TARGET_ARCH
 AC_CONFIG_LINKS([src/include/asm/xenomai:include/$base])
Index: src/skins/Makefile.am
===
--- src/skins/Makefile.am   (révision 1680)
+++ src/skins/Makefile.am   (copie de travail)
@@ -1,2 +1,4 @@
+install-data-local:
+   if test "@cross_compiling@" = "no"; then ldconfig; fi
 
 SUBDIRS = native posix rtdm vxworks vrtx rtai

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] Enable usage of pthread_set_{mode, name}_np from kernel space

2007-02-08 Thread Stelian Pop
Hi,

Is there a reason why pthread_set_{mode,name}_np are not allowed to be
called from a kernel-space POSIX thread?

If there is none, please apply the patch below.

Thanks.

Index: ksrc/skins/posix/thread.c
===
--- ksrc/skins/posix/thread.c   (révision 2162)
+++ ksrc/skins/posix/thread.c   (copie de travail)
@@ -745,3 +745,5 @@
 EXPORT_SYMBOL(pthread_self);
 EXPORT_SYMBOL(pthread_make_periodic_np);
 EXPORT_SYMBOL(pthread_wait_np);
+EXPORT_SYMBOL(pthread_set_name_np);
+EXPORT_SYMBOL(pthread_set_mode_np);

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH - ARM] ldrex/strex syntax errors with recent compilers

2007-03-15 Thread Stelian Pop
Hi,

Trying to build a xenomai-enabled kernel using a recent compiler (tried
with gcc version 4.1.1 (CodeSourcery ARM Sourcery G++ 2006q3-26), but
all gcc > 4.1 might be affected) results in the following:

  CC  kernel/xenomai/nucleus/shadow.o
/tmp/cc0XooxH.s: Assembler messages:
/tmp/cc0XooxH.s:1464: Error: instruction does not accept this addressing mode 
-- `ldrex r1,r2'
/tmp/cc0XooxH.s:1466: Error: instruction does not accept this addressing mode 
-- `strex r3,r1,r2'

Older gcc versions (like gcc 4.0.0 (DENX ELDK 4.1 4.0.0)) have no problem
with this.

It appears that the patch below fixes the compile error. I also verified 
that gcc-4.0.0 generates identical code using both forms.

Index: include/asm-arm/atomic.h
===
--- include/asm-arm/atomic.h(révision 2299)
+++ include/asm-arm/atomic.h(copie de travail)
@@ -40,9 +40,9 @@
 unsigned long tmp, tmp2;
 
 __asm__ __volatile__("@ atomic_set_mask\n"
-"1: ldrex   %0, %2\n"
+"1: ldrex   %0, [%2]\n"
 "   orr %0, %0, %3\n"
-"   strex   %1, %0, %2\n"
+"   strex   %1, %0, [%2]\n"
 "   teq %1, #0\n"
 "   bne 1b"
 : "=&r" (tmp), "=&r" (tmp2)
@@ -170,9 +170,9 @@
 unsigned long tmp, tmp2;
 
 __asm__ __volatile__("@ atomic_set_mask\n"
-"1: ldrex   %0, %2\n"
+"1: ldrex   %0, [%2]\n"
 "   orr %0, %0, %3\n"
-"   strex   %1, %0, %2\n"
+"   strex   %1, %0, [%2]\n"
 "   teq %1, #0\n"
 "   bne 1b"
 : "=&r" (tmp), "=&r" (tmp2)
@@ -185,9 +185,9 @@
 unsigned long tmp, tmp2;
 
 __asm__ __volatile__("@ atomic_clear_mask\n"
-"1: ldrex   %0, %2\n"
+"1: ldrex   %0, [%2]\n"
 "   bic %0, %0, %3\n"
-"   strex   %1, %0, %2\n"
+"   strex   %1, %0, [%2]\n"
 "   teq %1, #0\n"
 "   bne 1b"
 : "=&r" (tmp), "=&r" (tmp2)

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH - ARM] ldrex/strex syntax errors with recent compilers

2007-03-15 Thread Stelian Pop
On Thursday 15 March 2007 at 14:53 +0100, Stelian Pop wrote:

> It appears that the patch below fixes the compile error. I also verified 
> that gcc-4.0.0 generates identical code using both forms.

FWIW, the same fix at a different place in the mainline kernel has been
acked by Catalin Marinas from ARM.

Ah, and the patch I submitted was against the 2.3.x branch although it
will probably apply more or less cleanly on trunk.

Thanks,

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] Support EABI enabled kernels on ARM

2007-04-14 Thread Stelian Pop
Hi,

The attached patch adds an option to make Xenomai userspace issue EABI
syscalls. This is needed to make Xenomai work with kernels compiled with
CONFIG_EABI.

Note that due to a change in syscall handling when the EABI layer was
added in the kernel, this patch is needed for all EABI enabled kernels,
even if the CONFIG_OABI_COMPAT compatibility layer has been enabled.

All sensible combinations should be supported: old ABI userspace with
old ABI kernel, old ABI userspace with CONFIG_OABI_COMPAT kernels, EABI
userspace with EABI kernels. The other combinations will fail with a
SIGILL signal.

Don't forget to run 'scripts/bootstrap' after applying this patch...

Signed-off-by: Stelian Pop <[EMAIL PROTECTED]>

Index: include/asm-arm/syscall.h
===
--- include/asm-arm/syscall.h   (révision 2385)
+++ include/asm-arm/syscall.h   (copie de travail)
@@ -24,11 +24,12 @@
 #define _XENO_ASM_ARM_SYSCALL_H
 
 #include 
+#include 
 
 #define __xn_mux_code(shifted_id,op) ((op << 24)|shifted_id|(__xn_sys_mux & 
0x))
 #define __xn_mux_shifted_id(id) ((id << 16) & 0xff)
 
-#define XENO_ARM_SYSCALL0x009F0042 /* carefully chosen... */
+#define XENO_ARM_SYSCALL0x000F0042 /* carefully chosen... */
 
 #ifdef __KERNEL__
 
@@ -46,7 +47,13 @@
 #define __xn_reg_arg4(regs) ((regs)->ARM_r4)
 #define __xn_reg_arg5(regs) ((regs)->ARM_r5)
 
-#define __xn_reg_mux_p(regs)((regs)->ARM_r7 == XENO_ARM_SYSCALL)
+/* In OABI_COMPAT mode, handle both OABI and EABI userspace syscalls */
+#ifdef CONFIG_OABI_COMPAT
+#define __xn_reg_mux_p(regs)( ((regs)->ARM_r7 == __NR_OABI_SYSCALL_BASE + 
XENO_ARM_SYSCALL) || \
+  ((regs)->ARM_r7 == __NR_SYSCALL_BASE + 
XENO_ARM_SYSCALL) )
+#else
+#define __xn_reg_mux_p(regs)  ((regs)->ARM_r7 == __NR_SYSCALL_BASE + 
XENO_ARM_SYSCALL)
+#endif
 
 #define __xn_mux_id(regs)   ((__xn_reg_mux(regs) >> 16) & 0xff)
 #define __xn_mux_op(regs)   ((__xn_reg_mux(regs) >> 24) & 0xff)
@@ -134,17 +141,28 @@
 #define __sys2(x)  #x
 #define __sys1(x)  __sys2(x)
 
+#ifdef CONFIG_XENO_ARM_EABI
+#define __SYS_REG register unsigned long __r7 __asm__ ("r7") = 
XENO_ARM_SYSCALL;
+#define __SYS_REG_LIST ,"r" (__r7)
+#define __syscall "swi\t0"
+#else
+#define __SYS_REG
+#define __SYS_REG_LIST
+#define __syscall "swi\t" __sys1(XENO_ARM_SYSCALL) ""
+#endif
+
 #define XENOMAI_DO_SYSCALL(nr, shifted_id, op, args...)\
   ({   \
 unsigned long __res;   \
register unsigned long __res_r0 __asm__ ("r0"); \
ASM_INDECL_##nr;\
+__SYS_REG;  \
\
LOADARGS_##nr(__xn_mux_code(shifted_id,op), args);  \
-   __asm__ __volatile__ (  \
-"   swi " __sys1(XENO_ARM_SYSCALL) \
+   __asm__ __volatile__ (  \
+__syscall   \
: "=r" (__res_r0)   \
-   : ASM_INPUT_##nr\
+   : ASM_INPUT_##nr __SYS_REG_LIST \
: "memory");\
__res = __res_r0;   \
(int) __res;\
Index: include/asm-arm/features.h
===
--- include/asm-arm/features.h  (révision 2385)
+++ include/asm-arm/features.h  (copie de travail)
@@ -30,12 +30,17 @@
 #define CONFIG_XENO_ARM_SA1000 1
 #endif
 
+#ifdef CONFIG_AEABI
+#define CONFIG_XENO_ARM_EABI1
+#endif
+
 #else /* !__KERNEL__ */
 #define __LINUX_ARM_ARCH__  CONFIG_XENO_ARM_ARCH
 #endif /* __KERNEL__ */
 
 #define __xn_feat_arm_atomic_xchg  0x0001
 #define __xn_feat_arm_atomic_atomic0x0002
+#define __xn_feat_arm_eabi  0x0004
 
 /* The ABI revision level we use on this arch. */
 #define XENOMAI_ABI_REV   1UL
@@ -53,10 +58,12 @@
 #endif
 #define __xn_feat_arm_atomic_atomic_mask   0
 #endif
+#define __xn_feat_arm_eabi_mask__xn_feat_arm_eabi
 
-#define XENOMAI_FEAT_DEP  ( __xn_feat_generic_mask | \
-   __xn_feat_arm_atomic_xchg_mask |\
-   __xn_feat_arm_atomic_atomic_mask)
+#define XENOMAI_FEAT_DEP  ( __xn_feat_generic_mask  | \
+__xn_feat_arm_atomic_xchg_mask  | \
+__xn_feat_arm_atomic_atomic_mask| \
+__xn_feat_arm_eabi_mask )
 
 #de

Re: [Xenomai-core] [PATCH] Support EABI enabled kernels on ARM

2007-04-17 Thread Stelian Pop
On Tuesday 17 April 2007 at 00:36 +0200, Gilles Chanteperdrix wrote:
> Stelian Pop wrote:
>  > Hi,
>  > 
>  > The attached patch adds an option to make Xenomai userspace issue EABI
>  > syscalls. This is needed to make Xenomai work with kernels compiled with
>  > CONFIG_EABI.
>  >
> Applied, thanks.

Thanks.

>  And by the way, I am running Debian Etch, and its
> version of autotools seems to complain that we do not use datarootdir in
> xeno-config.in... 

Hmm, things are going well on an Ubuntu Edgy here, but I suppose the
next Ubuntu version (7.04, aka Feisty Fawn, due in 3 days) will have
almost the same package versions as Debian Etch...

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] Support EABI enabled kernels on ARM

2007-04-18 Thread Stelian Pop
On Thursday 19 April 2007 at 01:35 +0200, Gilles Chanteperdrix wrote:
> Stelian Pop wrote:
>  > Hi,
>  > 
>  > The attached patch adds an option to make Xenomai userspace issue EABI
>  > syscalls. This is needed to make Xenomai work with kernels compiled with
>  > CONFIG_EABI.
>  > 
> I get a problem with this patch: I am in the no EABI case in user-space
> and kernel-space, and when starting latency, I get an "Illegal
> instruction" message. If I revert this patch, latency starts
> correctly. Any idea ?

Hmm, I might have screwed up here. A quick look points me to this
change:

+#define __SYS_REG
+#define __SYS_REG_LIST
+#define __syscall "swi\t" __sys1(XENO_ARM_SYSCALL) ""

Does replacing XENO_ARM_SYSCALL with (0x0090 + XENO_ARM_SYSCALL)
fix it?

I may be able to set up a test platform later today if this needs further
debugging...

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] Support EABI enabled kernels on ARM

2007-04-19 Thread Stelian Pop
On Thursday 19 April 2007 at 08:38 +0200, Stelian Pop wrote:
> On Thursday 19 April 2007 at 01:35 +0200, Gilles Chanteperdrix wrote:
> > Stelian Pop wrote:
> >  > Hi,
> >  > 
> >  > The attached patch adds an option to make Xenomai userspace issue EABI
> >  > syscalls. This is needed to make Xenomai work with kernels compiled with
> >  > CONFIG_EABI.
> >  > 
> > I get a problem with this patch: I am in the no EABI case in user-space
> > and kernel-space, and when starting latency, I get an "Illegal
> > instruction" message. If I revert this patch, latency starts
> > correctly. Any idea ?
> 
> Hmm, I might have screwed up here. A quick looking points me to this
> change:
> 
> +#define __SYS_REG
> +#define __SYS_REG_LIST
> +#define __syscall "swi\t" __sys1(XENO_ARM_SYSCALL) ""
> 
> Does replacing XENO_ARM_SYSCALL with (0x0090 + XENO_ARM_SYSCALL)
> fixes it ?
> 
> I may be able to mount a test platform later today if this needs further
> debugging...

I have finally managed to get my old-ABI toolchain to work and can
confirm that the modification above fixes the problem.

The whole section of include/asm-arm/syscall.h must read:

#ifdef CONFIG_XENO_ARM_EABI
#define __SYS_REG register unsigned long __r7 __asm__ ("r7") = XENO_ARM_SYSCALL;
#define __SYS_REG_LIST ,"r" (__r7)
#define __syscall "swi\t0"
#else
#define __SYS_REG
#define __SYS_REG_LIST
#define __NR_OABI_SYSCALL_BASE  0x90
#define __syscall "swi\t" __sys1(__NR_OABI_SYSCALL_BASE + XENO_ARM_SYSCALL) ""
#endif

Sorry for the annoyance.

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-12 Thread Stelian Pop
Hi Jan,

[taking this on the list after several mails with Philippe...]

On Thursday 11 October 2007 at 22:47 +0200, Jan Kiszka wrote:
> This patch for SVN trunk fixes most of the current bugs around hardware
> timer takeover and release from/to Linux.
[...]

I have a problem with the timer on my MacBook Pro (Core2Duo, used in
_32_ bit mode)(*): when Xenomai takes over the timer (at 'modprobe
xeno_native' time), the Linux timer stops.

Looking into /proc/xenomai/irq shows that Xenomai does receive the
hardware interrupts, and /proc/interrupts shows that they are no longer
forwarded to Linux. Before loading xeno_native, everything is ok.

Linux userspace continues to work somewhat: I can issue commands, and,
depending on the syscalls they make, I suppose (no, strace doesn't work),
sometimes they complete correctly and sometimes they hang (and I cannot
interrupt them with ^C or other signals).

I tried several .config variations, without any change in behaviour: my
current test config has SMP, NO_HZ, APIC, PREEMPT and HIRES all disabled.

This happens with a 2.6.22.9 kernel, adeos-ipipe-2.6.22-i386-1.10-07,
Xenomai SVN HEAD (rev 3050), with or without your current patch. It is
quite possible that this is not a new problem, since I have had this
laptop for only a few weeks and had never run Xenomai on it before.

I'll happily provide any further information or test results if you
need.

Thanks,

Stelian.

(*) yeah, I know I could install an x86_64 distribution, but I had some
terrible experiences in the past - mainly due to the use of some
proprietary components like Flash, Java etc. I know some of those issues
have been resolved today, and one of these days I should try once more,
but now is not a good time for this.
-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-12 Thread Stelian Pop
   35    0.095  ipipe_check_context+0xc (clocksource_get_next+0x14)
 #func   35    0.085  ipipe_check_context+0xc (clocksource_get_next+0x30)
 #func   35    0.090  __ipipe_restore_root+0x8 (clocksource_get_next+0x4a)
 #func   35    0.120  ipipe_check_context+0xc (clocksource_get_next+0x54)
 #func   35    0.100  ipipe_check_context+0xc (tick_periodic+0x78)
 #func   35    0.095  update_process_times+0x12 (tick_periodic+0x29)
 #func   35    0.105  account_system_time+0x16 (update_process_times+0x71)
 #func   35    0.090  run_local_timers+0x8 (update_process_times+0x2f)
 #func   35    0.080  raise_softirq+0x16 (run_local_timers+0x12)
 #func   35    0.085  ipipe_check_context+0xc (raise_softirq+0x22)
 #func   35    0.100  __ipipe_restore_root+0x8 (raise_softirq+0x6c)
 #func   36    0.110  softlockup_tick+0x14 (run_local_timers+0x17)
 #func   36    0.075  rcu_pending+0x8 (update_process_times+0x36)
 #func   36    0.110  __rcu_pending+0x8 (rcu_pending+0x17)
 #func   36    0.085  rcu_check_callbacks+0x8 (update_process_times+0x43)
 #func   36    0.120  idle_cpu+0x8 (rcu_check_callbacks+0x45)
 #func   36    0.080  __tasklet_schedule+0x16 (rcu_check_callbacks+0x3a)
 #func   36    0.080  ipipe_check_context+0xc (__tasklet_schedule+0x22)
 #func   36    0.115  __ipipe_restore_root+0x8 (__tasklet_schedule+0x77)
 #func   36    0.090  scheduler_tick+0x16 (update_process_times+0x48)
 #func   36    0.110  sched_clock+0x12 (scheduler_tick+0x1b)
 #func   37    0.095  idle_cpu+0x8 (scheduler_tick+0x2c)
 #func   37    0.075  task_running_tick+0x14 (scheduler_tick+0x5e)
 #func   37    0.090  ipipe_check_context+0xc (task_running_tick+0x3f)
 #func   37    0.090  ipipe_check_context+0xc (task_running_tick+0x95)
 #func   37    0.115  run_posix_cpu_timers+0xe (update_process_times+0x4f)
 #func   37    0.090  profile_tick+0x12 (tick_periodic+0x33)
 #func   37    0.120  profile_pc+0x8 (profile_tick+0x47)
 #func   37    0.125  ipipe_check_context+0xc (handle_IRQ_event+0x59)
 #func   37    0.120  note_interrupt+0xe (handle_level_irq+0xbb)
 #func   37    0.125  ipipe_check_context+0xc (handle_level_irq+0x79)
 #func   38    0.100  enable_8259A_irq+0x16 (handle_level_irq+0x97)
 #func   38    0.100  __ipipe_spin_lock_irqsave+0x9 (enable_8259A_irq+0x2b)
 |   #begin   0x8001   38    0.105  __ipipe_spin_lock_irqsave+0x4b (enable_8259A_irq+0x2b)
 |   #func    38    0.105  __ipipe_spin_unlock_irqrestore+0x9 (enable_8259A_irq+0x6c)
 |   #end     0x8001   38    0.140  __ipipe_spin_unlock_irqrestore+0x36 (enable_8259A_irq+0x6c)
 #func   38    0.110  ipipe_check_context+0xc (handle_level_irq+0xa1)
 #func   38    0.115  irq_exit+0x8 (do_IRQ+0x3d)
 #func   38    0.095  do_softirq+0x12 (irq_exit+0x39)
 #func   38    0.120  ipipe_check_context+0xc (do_softirq+0x3a)
 #func   39    0.110  __do_softirq+0xb (do_softirq+0x6d)
 #func   39    0.090  __ipipe_unstall_root+0x8 (__do_softirq+0x33)
 |   #begin   0x8000   39    0.090  __ipipe_unstall_root+0x4d (__do_softirq+0x33)
 |   +end     0x8000   39    0.135  __ipipe_unstall_root+0x3f (__do_softirq+0x33)
 +func   39    0.110  run_timer_softirq+0xe (__do_softirq+0x52)
 +func   39    0.120  hrtimer_run_queues+0xe (run_timer_softirq+0x19)
 +func   39    0.095  ipipe_check_context+0xc (hrtimer_run_queues+0xc5)
 #func   39    0.000  ipipe_check_context+0xc (hrtimer_run_queues+0xd7)


-- 
Stelian Pop <[EMAIL PROTECTED]>

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-12 Thread Stelian Pop
nterrupt+0x8 (handle_IRQ_event+0x31)
 #func   68    0.074  tick_handle_periodic+0xe (timer_interrupt+0x13)
 #func   68    0.079  tick_periodic+0x8 (tick_handle_periodic+0x17)
 #func   68    0.089  ipipe_check_context+0xc (tick_periodic+0x4a)
 #func   69    0.079  do_timer+0xe (tick_periodic+0x71)
 #func   69    0.094  update_wall_time+0xe (do_timer+0x23)
 #func   69    0.184  read_tsc+0x8 (update_wall_time+0x24)
 #func   69    0.089  clocksource_get_next+0xa (update_wall_time+0x220)
 #func   69    0.084  ipipe_check_context+0xc (clocksource_get_next+0x14)
 #func   69    0.099  ipipe_check_context+0xc (clocksource_get_next+0x2e)
 #func   69    0.089  __ipipe_restore_root+0x8 (clocksource_get_next+0x48)
 #func   69    0.089  ipipe_check_context+0xc (clocksource_get_next+0x52)
 #func   69    0.094  ipipe_check_context+0xc (tick_periodic+0x81)
 #func   69    0.089  update_process_times+0xa (tick_periodic+0x2b)
 #func   70    0.114  account_system_time+0xb (update_process_times+0x61)
 #func   70    0.099  run_local_timers+0x8 (update_process_times+0x27)
 #func   70    0.089  raise_softirq+0xb (run_local_timers+0x12)
 #func   70    0.089  ipipe_check_context+0xc (raise_softirq+0x17)
 #func   70    0.114  __ipipe_restore_root+0x8 (raise_softirq+0x67)
 #func   70    0.099  softlockup_tick+0xe (run_local_timers+0x17)
 #func   70    0.084  rcu_pending+0x8 (update_process_times+0x2e)
 #func   70    0.094  __rcu_pending+0x8 (rcu_pending+0x17)
 #func   70    0.104  __rcu_pending+0x8 (rcu_pending+0x3f)
 #func   70    0.099  scheduler_tick+0xb (update_process_times+0x40)
 #func   71    0.124  sched_clock+0xa (scheduler_tick+0x10)
 #func   71    0.099  idle_cpu+0x8 (scheduler_tick+0x21)
 #func   71    0.114  task_running_tick+0xe (scheduler_tick+0x53)
 #func   71    0.154  ipipe_check_context+0xc (task_running_tick+0x34)
 #func   71    0.124  dequeue_task+0xa (task_running_tick+0x13a)
 #func   71    0.124  effective_prio+0x8 (task_running_tick+0x149)
 #func   71    0.154  static_prio_timeslice+0x8 (task_running_tick+0x156)
 #func   71    0.099  enqueue_task+0xa (task_running_tick+0x1d1)
 #func   72    0.114  ipipe_check_context+0xc (task_running_tick+0x89)
 #func   72    0.114  run_posix_cpu_timers+0xe (update_process_times+0x47)
 #func   72    0.089  profile_tick+0xa (tick_periodic+0x35)
 #func   72    0.124  profile_pc+0x8 (profile_tick+0x37)
 #func   72    0.109  ipipe_check_context+0xc (handle_IRQ_event+0x67)
 #func   72    0.099  note_interrupt+0xe (handle_level_irq+0xab)
 #func   72    0.099  ipipe_check_context+0xc (handle_level_irq+0x6d)
 #func   72    0.084  enable_8259A_irq+0xb (handle_level_irq+0x8b)
 #func   72    0.094  __ipipe_spin_lock_irqsave+0x9 (enable_8259A_irq+0x20)
 |   #begin   0x8001   72    0.109  __ipipe_spin_lock_irqsave+0x4b (enable_8259A_irq+0x20)
 |   #func    73    0.104  __ipipe_spin_unlock_irqrestore+0x9 (enable_8259A_irq+0x61)
 |   #end     0x8001   73    0.114  __ipipe_spin_unlock_irqrestore+0x36 (enable_8259A_irq+0x61)
 #func   73    0.104  ipipe_check_context+0xc (handle_level_irq+0x95)
 #func   73    0.124  irq_exit+0x8 (do_IRQ+0x41)
 #func   73    0.079  do_softirq+0xa (irq_exit+0x45)
 #func   73    0.104  ipipe_check_context+0xc (do_softirq+0x2a)
 #func   73    0.089  __do_softirq+0xb (do_softirq+0x65)
 #func   73    0.000  __ipipe_unstall_root+0x8 (__do_softirq+0x3f)
[EMAIL PROTECTED]:~#

> 
> > 
> >> BTW, does the latency test of Xenomai work?
> > 
> > No. It hangs after "warming up". I'm able to interrupt with ^C and then
> > it prints a single line showing a max latency of 208983.492 ms (same value
> > on several invocations).
> 
> Ah, then we may fail to program the APIC appropriately.

[EMAIL PROTECTED]:# zgrep APIC /proc/config.gz
# CONFIG_X86_UP_APIC is not set

Do you mean the PIT?

> That would need
> a closer look if you want to dig into this.

I'm not sure how much time I have to investigate this, but yes, I can take a
look if you tell me what I should look for.

> /me is going to be
> d

Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-12 Thread Stelian Pop
Jan Kiszka wrote:
> Stelian Pop wrote:
>> # Automatically generated make config: don't edit
>> # Linux kernel version: 2.6.22.9-xeno
>> # Fri Oct 12 11:45:32 2007
>> #
> ...
>> # CONFIG_X86_UP_APIC is not set
> 
> As long as APIC is off...
> 
>> CONFIG_HPET_TIMER=y
> 
> ...HPET_TIMER must be off as well. Otherwise, Linux may actually pick up
> the HPET, which blocks the PIT for Xenomai usage.

Right. Disabling CONFIG_HPET makes Xenomai work for me on this laptop. 
Hurray!
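
For reference, the timer-related part of the config that works here now
boils down to this (a reconstructed excerpt for illustration, not a
verbatim copy of my .config):

# CONFIG_X86_UP_APIC is not set
# CONFIG_HPET_TIMER is not set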

I still have a strange problem though: after loading xeno_nucleus and 
before loading xeno_native, the keyboard reacts strangely: each key 
press results in at least 6 (and up to 20) letters on the terminal.

Before loading the nucleus, and after the skin is loaded, everything is 
OK. And even while the keyboard reacts strangely, I can log in over ssh 
and the system seems to be fine.

What should I do next? Do you want to try looking into the keyboard 
issue, or should I rather enable UP_APIC? Or even go crazy and 
activate SMP again?

Stelian.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-12 Thread Stelian Pop
On Fri, Oct 12, 2007 at 10:22:45PM +0200, Stelian Pop wrote:

> Or even go crazy and activate SMP again ?

Would have been too easy:

# modprobe xeno_native
Xenomai: native skin init failed, code -19.

This is with all relevant options on (HPET, HIRES, SMP, NO_HZ).
I guess I'll have to take one step at a time.

-- 
Stelian Pop <[EMAIL PROTECTED]>

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-13 Thread Stelian Pop
On Friday, October 12, 2007 at 23:58 +0200, Philippe Gerum wrote:
> On Fri, 2007-10-12 at 23:51 +0200, Stelian Pop wrote:
> > On Fri, Oct 12, 2007 at 10:22:45PM +0200, Stelian Pop wrote:
> > 
> > > Or even go crazy and activate SMP again ?
> > 
> > Would have been too easy:
> > 
> > # modprobe xeno_native
> > Xenomai: native skin init failed, code -19.
> > 
> 
> The I-pipe likely told Xenomai that the LAPIC was unusable, because it
> has been put in dummy state by the clock event layer. Must be something
> silly going on at I-pipe level. Please switch HPET off just for kicks.

No, it doesn't change a thing.

What does change something, however, is CONFIG_X86_UP_IOAPIC. Without it
(UP or UP+APIC), latency works, albeit with higher latencies (up to 30
us) than with 2.6.20+xeno-2.3.4, where I saw latencies of up to 20-25 us
_in_SMP_mode_. With IOAPIC I get code -19.

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-13 Thread Stelian Pop
On Friday, October 12, 2007 at 22:22 +0200, Stelian Pop wrote:

> I still have a strange problem though: after loading xeno_nucleus and 
> before loading xeno_native, the keyboard reacts strangely: each key 
> press results in at least 6 (and up to 20) letters on the terminal.

FYI, I am no longer able to reproduce this keyboard problem today. It
could be because of a temporary USB problem, or because I did a cold boot
this morning... Or it could be anything else...

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-13 Thread Stelian Pop
On Sat, Oct 13, 2007 at 04:42:27PM +0200, Jan Kiszka wrote:

> Where do you get ENODEV? On nucleus startup? Please provide
> /proc/timer_list output of the working and non-working setups.

It turns out that I had the Linux NMI watchdog enabled (nmi_watchdog=1
on the command line) and this was causing the -ENODEV problems. Once
removed, I'm able to boot and successfully run all configurations: UP,
UP + APIC, UP + APIC + IO_APIC, SMP. And the latencies are back to normal.

Maybe we should detect that the NMI watchdog is enabled and issue a
warning message; this would save others a few hours and many kernel
builds...
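
Something along these lines would do it - just a rough sketch, not actual
Xenomai code; the helper name is made up, and I'm assuming the kernel's
'nmi_watchdog' variable and the NMI_* constants from <asm/nmi.h> are
visible from where the i386 timer grab is done:

/*
 * Hypothetical sketch: warn at timer-setup time when the Linux NMI
 * watchdog owns the APIC/perf counter resources (nmi_watchdog=1 selects
 * NMI_IO_APIC, nmi_watchdog=2 selects NMI_LOCAL_APIC on i386).
 */
#include <linux/kernel.h>
#include <asm/nmi.h>

static void warn_if_nmi_watchdog_active(void)
{
	if (nmi_watchdog == NMI_IO_APIC || nmi_watchdog == NMI_LOCAL_APIC)
		printk(KERN_WARNING
		       "Xenomai: the Linux NMI watchdog is enabled "
		       "(nmi_watchdog=%d); grabbing the hardware timer "
		       "will likely fail with -ENODEV. Boot with "
		       "nmi_watchdog=0 or drop the parameter.\n",
		       nmi_watchdog);
}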

This is with your timer cleanup patch, of course.

> PS: For unknown reasons your mails don't make it to my web.de address,
> only to the list. Do you get any error messages?

I did get one saying:
<[EMAIL PROTECTED]>: host mx-ha02.web.de[217.72.192.188] refused to 
talk to me:
554 Transaction failed. For explanation visit
http://freemail.web.de/reject/?ip=88.191.70.230

I haven't investigated yet what's happening.

-- 
Stelian Pop <[EMAIL PROTECTED]>

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] fix hw-timer setup/cleanup for i386

2007-10-13 Thread Stelian Pop
On Saturday, October 13, 2007 at 18:38 +0200, Jan Kiszka wrote:

> Your IP is probably associated with some DSL access, and a lot of
> providers block such senders categorically due to all the spam robots
> running on hijacked user PCs.

No, my mail server has a fixed IP and is sitting somewhere in a
datacenter; it shouldn't be in a dynamic-IP DSL provider range.

-- 
Stelian Pop <[EMAIL PROTECTED]>


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core