Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Anders Blomdell

Philippe Gerum wrote:

Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the




Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED



ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.
This says that the interrupt will be ->end'ed at some later time (perhaps in the 
interrupt handler task)



- RT_INTR_CHAINED



ISR says it wants the IRQ to be propagated down the pipeline. Nothing is 
said about the fact that the last ISR did or did not handle the IRQ 
locally; this is irrelevant.
This says that the interrupt will eventually be ->end'ed by some later stage in 
the pipeline.



- RT_INTR_ENABLE



ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).

This says that the interrupt will be ->end'ed when this interrupt handler 
returns.




- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. It 
would mean to continue processing the chain of handlers because the last 
ISR invoked was not concerned by the outstanding IRQ.

Sounds like RT_INTR_CHAINED, except that it's for the current pipeline stage?

Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)
  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!
__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
My distinct feeling is that the return value should be a scalar and not a set!

...

I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.



I support that. Shared interrupts should be handled properly by Xeno 
since such - I'd say "last resort" - configuration could be needed; this 
said, we should not see this as the rule but rather as the exception, 
since this is basically required to solve some underlying hw limitations 
wrt interrupt management, and definitely has a significant cost on 
processing each shared IRQ wrt determinism.


Incidentally, there is an interesting optimization on the project's todo 
list 

Is this todo list accessible anywhere?

> that would allow non-RT interrupts to be masked at IC level when
the Xenomai domain is active. We could do that on any arch with 
civilized interrupt management, and that would prevent any asynchronous 
diversion from the critical code when Xenomai is running RT tasks 
(kernel or user-space). Think of this as some hw-controlled interrupt 
shield. Since this feature requires to be able to individually mask each 
interrupt source at IC level, there should be no point in sharing fully 
vectored interrupts in such a configuration anyway. This fact also 
pleads for having the shared IRQ support as a build-time option.


--
Anders Blomdell

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Jan Kiszka
Anders Blomdell wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>
>>> Wolfgang Grandegger wrote:
>>>
 Hello,

 Dmitry Adamushko wrote:

> Hi,
>
> this is the final set of patches against the SVN trunk of 2006-02-03.
>
> It addresses mostly remarks concerning naming (XN_ISR_ISA ->
> XN_ISR_EDGE), a few cleanups and updated comments.
>
> Functionally, the support for shared interrupts (a few flags) to the
>>>
>>>
>>>
>>> Not directly your fault: the increasing number of return flags for IRQ
>>> handlers makes me worry that they are used correctly. I can figure out
>>> what they mean (not yet that clearly from the docs), but does someone
>>> else understand all this:
>>>
>>> - RT_INTR_HANDLED
>>
>>
>> ISR says it has handled the IRQ, and does not want any propagation to
>> take place down the pipeline. IOW, the IRQ processing stops there.
> This says that the interrupt will be ->end'ed at some later time
> (perhaps in the interrupt handler task)
> 
>>> - RT_INTR_CHAINED
>>
>>
>> ISR says it wants the IRQ to be propagated down the pipeline. Nothing
>> is said about the fact that the last ISR did or did not handle the IRQ
>> locally; this is irrelevant.
> This says that the interrupt will eventually be ->end'ed by some later
> stage in the pipeline.
> 
>>> - RT_INTR_ENABLE
>>
>>
>> ISR requests the interrupt dispatcher to re-enable the IRQ line upon
>> return (cumulable with HANDLED/CHAINED).
> This says that the interrupt will be ->end'ed when this interrupt
> handler returns.
> 
>>
>>> - RT_INTR_NOINT
>>>
>>
>> This new one comes from Dmitry's patch for shared IRQ support AFAICS.
>> It would mean to continue processing the chain of handlers because the
>> last ISR invoked was not concerned by the outstanding IRQ.
> Sounds like RT_INTR_CHAINED, except that it's for the current pipeline
> stage?
> 
> Now for the quiz question (powerpc arch):
> 
>   1. Assume an edge triggered interrupt
>   2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared

Kind of redundant. What did you really mean?

>  interrupt, but no problem since it's edge-triggered)
>   3. Interrupt gets ->end'ed right after RT-handler has returned
>   4. The Linux interrupt handler eventually starts its ->end() handler:
> local_irq_save_hw(flags);
> if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
>   ipipe_irq_unlock(irq);
> // Next interrupt occurs here!
> __ipipe_std_irq_dtype[irq].end(irq);
> local_irq_restore_hw(flags);
> 
> 
> Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
> My distinct feeling is that the return value should be a scalar and not
> a set!

That's a good idea: only provide valid and reasonable flag combinations
to the user!

>> ...
>>> I would vote for the (already scheduled?) extension to register an
>>> optimised IRQ trampoline in case there is actually no sharing taking
>>> place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
>>> obsolete.
>>
>>
>> I support that. Shared interrupts should be handled properly by Xeno
>> since such - I'd say "last resort" - configuration could be needed;
>> this said, we should not see this as the rule but rather as the
>> exception, since this is basically required to solve some underlying
>> hw limitations wrt interrupt management, and definitely has a
>> significant cost on processing each shared IRQ wrt determinism.
>>
>> Incidentally, there is an interesting optimization on the project's
>> todo list 
> Is this todo list accessible anywhere?

I did not know of such interesting plans either. Maybe we should start
using more of the features GNA provides to us (task lists?)...

> 
>> that would allow non-RT interrupts to be masked at IC level when
>> the Xenomai domain is active. We could do that on any arch with
>> civilized interrupt management, and that would prevent any
>> asynchronous diversion from the critical code when Xenomai is running
>> RT tasks (kernel or user-space). Think of this as some hw-controlled
>> interrupt shield. Since this feature requires to be able to
>> individually mask each interrupt source at IC level, there should be
>> no point in sharing fully vectored interrupts in such a configuration
>> anyway. This fact also pleads for having the shared IRQ support as a
>> build-time option.
> 

This concept sounds really thrilling. I already wondered if this is
possible after seeing how many non-RT IRQ stubs can hit between an RT
event and the RT task invocation: HD, network, keyboard, mouse, sound,
graphic card - and if you are "lucky", a lot of them chain up at the
same time. But I thought that such disabling is too costly to be used
at every domain switch. Is it not?

Jan






Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Anders Blomdell wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the





Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED




ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.


This says that the interrupt will be ->end'ed at some later time 
(perhaps in the interrupt handler task)




The ISR may end the IRQ before returning, or leave it to the nucleus upon return 
by adding the ENABLE bit.



- RT_INTR_CHAINED




ISR says it wants the IRQ to be propagated down the pipeline. Nothing 
is said about the fact that the last ISR did or did not handle the IRQ 
locally; this is irrelevant.


This says that the interrupt will eventually be ->end'ed by some later 
stage in the pipeline.



- RT_INTR_ENABLE




ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).




This is wrong; we should only associate this to HANDLED; sorry.

This says that the interrupt will be ->end'ed when this interrupt 
handler returns.





- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. 
It would mean to continue processing the chain of handlers because the 
last ISR invoked was not concerned by the outstanding IRQ.


Sounds like RT_INTR_CHAINED, except that it's for the current pipeline 
stage?




Basically, yes.


Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)


( Assuming RT_INTR_CHAINED | RT_INTR_ENABLE )


  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!


It can't occur here: hw interrupts are off after local_irq_save_hw, and unlocking 
some IRQ does not flush the IRQ log.



__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?


This could happen, yep. Actually, this would be a possible misuse of the ISR 
return values.
If one chains the handler Adeos-wise, it is expected to leave the IC in its 
original state wrt the processed interrupt. CHAINED should be seen as mutually 
exclusive with ENABLE.


My distinct feeling is that the return value should be a scalar and not 
a set!




To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*), HANDLED | 
CHAINED and CHAINED. It's currently a set because I once thought that the 
"handled" indication (or lack of) could be a valuable information to gather at 
nucleus level to detect unhandled RT interrupts. Fact is that we currently don't 
use this information, though. IOW, we could indeed define some enum and have a 
scalar there instead of a set; or we could just leave this as a set, but whine 
when detecting the invalid ENABLE | CHAINED combination.


(*) because the handler does not necessarily know how to ->end() the current IRQ at 
IC level, but Xenomai always does.



...


I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.




I support that. Shared interrupts should be handled properly by Xeno 
since such - I'd say "last resort" - configuration could be needed; 
this said, we should not see this as the rule but rather as the 
exception, since this is basically required to solve some underlying 
hw limitations wrt interrupt management, and definitely has a 
significant cost on processing each shared IRQ wrt determinism.


Incidentally, there is an interesting optimization on the project's 
todo list 


Is this todo list accessible anywhere?



There's a roadmap for v2.1 that has been posted to the -core list in 
October/November. Aside from that, the todos are not maintained in a centralized and 
accessible way yet. We could perhaps use GNA's task manager for that 
(http://gna.org/task/?group=xenomai), even if not to the full extent of its features.



 > that would allow non-RT interrupts to be masked at IC level when

the Xenomai domain is active. We could do that on any arch with 
civilized int

Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Jan Kiszka wrote:

Anders Blomdell wrote:


Philippe Gerum wrote:


Jan Kiszka wrote:



Wolfgang Grandegger wrote:



Hello,

Dmitry Adamushko wrote:



Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the




Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED



ISR says it has handled the IRQ, and does not want any propagation to
take place down the pipeline. IOW, the IRQ processing stops there.


This says that the interrupt will be ->end'ed at some later time
(perhaps in the interrupt handler task)



- RT_INTR_CHAINED



ISR says it wants the IRQ to be propagated down the pipeline. Nothing
is said about the fact that the last ISR did or did not handle the IRQ
locally; this is irrelevant.


This says that the interrupt will eventually be ->end'ed by some later
stage in the pipeline.



- RT_INTR_ENABLE



ISR requests the interrupt dispatcher to re-enable the IRQ line upon
return (cumulable with HANDLED/CHAINED).


This says that the interrupt will be ->end'ed when this interrupt
handler returns.



- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS.
It would mean to continue processing the chain of handlers because the
last ISR invoked was not concerned by the outstanding IRQ.


Sounds like RT_INTR_CHAINED, except that it's for the current pipeline
stage?

Now for the quiz question (powerpc arch):

 1. Assume an edge triggered interrupt
 2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared



Kind of redundant. What did you really mean?



interrupt, but no problem since it's edge-triggered)
 3. Interrupt gets ->end'ed right after RT-handler has returned
 4. The Linux interrupt handler eventually starts its ->end() handler:
   local_irq_save_hw(flags);
   if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
 ipipe_irq_unlock(irq);
   // Next interrupt occurs here!
   __ipipe_std_irq_dtype[irq].end(irq);
   local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
My distinct feeling is that the return value should be a scalar and not
a set!



That's a good idea: only provide valid and reasonable flag combinations
to the user!



...


I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.



I support that. Shared interrupts should be handled properly by Xeno
since such - I'd say "last resort" - configuration could be needed;
this said, we should not see this as the rule but rather as the
exception, since this is basically required to solve some underlying
hw limitations wrt interrupt management, and definitely has a
significant cost on processing each shared IRQ wrt determinism.

Incidentally, there is an interesting optimization on the project's
todo list 


Is this todo list accessible anywhere?



I did not know of such interesting plans either. Maybe we should start
using more of the features GNA provides to us (task lists?)...



that would allow non-RT interrupts to be masked at IC level when
the Xenomai domain is active. We could do that on any arch with
civilized interrupt management, and that would prevent any
asynchronous diversion from the critical code when Xenomai is running
RT tasks (kernel or user-space). Think of this as some hw-controlled
interrupt shield. Since this feature requires to be able to
individually mask each interrupt source at IC level, there should be
no point in sharing fully vectored interrupts in such a configuration
anyway. This fact also pleads for having the shared IRQ support as a
build-time option.




This concept sounds really thrilling. I already wondered if this is
possible after seeing how many non-RT IRQ stubs can hit between an RT
event and the RT task invocation: HD, network, keyboard, mouse, sound,
graphic card - and if you are "lucky", a lot of them chain up at the
same time. But I thought that such disabling is too costly to be used
at every domain switch. Is it not?



It all depends on the underlying arch. I started to think about this when working 
with the Blackfin, which provides an efficient and fine-grained control over the 
interrupt system (hey, it's a DSP after all). Anders recently brought up the issue 
too, waking up the sleeper. Of course, one would not want to try that with an 8259 
chip on x86...



Jan





--

Philippe.


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Jan Kiszka
Philippe Gerum wrote:
> Anders Blomdell wrote:
> 
>> My distinct feeling is that the return value should be a scalar and
>> not a set!
>>
> 
> To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
> HANDLED | CHAINED and CHAINED. It's currently a set because I once
> thought that the "handled" indication (or lack of) could be a valuable
> information to gather at nucleus level to detect unhandled RT
> interrupts. Fact is that we currently don't use this information,

But it is required for the edge-triggered case to detect when the IRQ
line was at least shortly released. I guess Dmitry introduced that NOINT
just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
we could avoid this new flag by just making HANDLED non-zero.

> though. IOW, we could indeed define some enum and have a scalar there
> instead of a set; or we could just leave this as a set, but whine when
> detecting the invalid ENABLE | CHAINED combination.

In combination with the change above and some doc improvement ("valid
combinations are: ..."), I could also live with keeping the flags. The
advantage would be that we wouldn't break existing applications.

> 
> (*) because the handler does not necessarily know how to ->end() the
> current IRQ at IC level, but Xenomai always does.
> 

Jan





Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Anders Blomdell

Philippe Gerum wrote:

Anders Blomdell wrote:


Philippe Gerum wrote:


Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the






Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED





ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.



This says that the interrupt will be ->end'ed at some later time 
(perhaps in the interrupt handler task)




The ISR may end the IRQ before returning, or leave it to the nucleus 
upon return by adding the ENABLE bit.



- RT_INTR_CHAINED





ISR says it wants the IRQ to be propagated down the pipeline. Nothing 
is said about the fact that the last ISR did or did not handle the 
IRQ locally; this is irrelevant.



This says that the interrupt will eventually be ->end'ed by some later 
stage in the pipeline.



- RT_INTR_ENABLE





ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).





This is wrong; we should only associate this to HANDLED; sorry.

This says that the interrupt will be ->end'ed when this interrupt 
handler returns.





- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. 
It would mean to continue processing the chain of handlers because 
the last ISR invoked was not concerned by the outstanding IRQ.



Sounds like RT_INTR_CHAINED, except that it's for the current pipeline 
stage?




Basically, yes.


Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)



( Assuming RT_INTR_CHAINED | RT_INTR_ENABLE )


  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!



It can't occur here: hw interrupts are off after local_irq_save_hw, and 
unlocking some IRQ does not flush the IRQ log.



__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?



This could happen, yep. Actually, this would be a possible misuse of the 
ISR return values.
If one chains the handler Adeos-wise, it is expected to leave the IC in 
its original state wrt the processed interrupt. CHAINED should be seen 
as mutually exclusive with ENABLE.


My distinct feeling is that the return value should be a scalar and 
not a set!




To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*), 
HANDLED | CHAINED and CHAINED. It's currently a set because I once 
thought that the "handled" indication (or lack of) could be a valuable 
information to gather at nucleus level to detect unhandled RT 
interrupts. Fact is that we currently don't use this information, 
though. IOW, we could indeed define some enum and have a scalar there 
instead of a set; or we could just leave this as a set, but whine when 
detecting the invalid ENABLE | CHAINED combination.


agile_programmer_mode_off();
realtime_programmer_hat_on();

I prefer programming errors to show up at compile time!

goto todays_work;

// Will never come here :-(
realtime_programmer_hat_off();
agile_programmer_mode_on();

--

Anders



Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Dmitry Adamushko
On 09/02/06, Jan Kiszka <[EMAIL PROTECTED]> wrote:
Philippe Gerum wrote:
> Anders Blomdell wrote:
>
>> My distinct feeling is that the return value should be a scalar and
>> not a set!
>
> To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
> HANDLED | CHAINED and CHAINED. It's currently a set because I once
> thought that the "handled" indication (or lack of) could be a valuable
> information to gather at nucleus level to detect unhandled RT
> interrupts. Fact is that we currently don't use this information,

But it is required for the edge-triggered case to detect when the IRQ
line was at least shortly released. I guess Dmitry introduced that NOINT
just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
we could avoid this new flag by just making HANDLED non-zero.
That's it.

I was about to make a comment on Philippe's list of possible return values, but you outran me.

HANDLED is 0, so we cannot distinguish between the HANDLED | CHAINED and
CHAINED cases. NOINT explicitly denotes the handler's answer "this IRQ
was not raised by my hw!" and it's needed (at least) for implementing the
edge-triggered IRQ sharing.

It's not necessary in case HANDLED becomes non-zero and, actually, HANDLED is not necessary with NOINT.

As far as I can see, Philippe's list can be mapped as follows:

HANDLED           ->  0
HANDLED | ENABLE  ->  ENABLE
HANDLED | CHAINED ->  CHAINED
CHAINED           ->  CHAINED | NOINT

and NOINT as a separate use case (?). Could be useful at least for the
edge-triggered stuff.

--
Best regards,
Dmitry Adamushko


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Anders Blomdell wrote:



My distinct feeling is that the return value should be a scalar and
not a set!



To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
HANDLED | CHAINED and CHAINED. It's currently a set because I once
thought that the "handled" indication (or lack of) could be a valuable
information to gather at nucleus level to detect unhandled RT
interrupts. Fact is that we currently don't use this information,



But it is required for the edge-triggered case to detect when the IRQ
line was at least shortly released. I guess Dmitry introduced that NOINT
just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
we could avoid this new flag by just making HANDLED non-zero.



Yes, we could. HANDLED is currently zero only because the nucleus does not care 
about the !handled case yet.





though. IOW, we could indeed define some enum and have a scalar there
instead of a set; or we could just leave this as a set, but whine when
detecting the invalid ENABLE | CHAINED combination.



In combination with the change above and some doc improvement ("valid
combinations are: ..."), I could also live with keeping the flags. The
advantage would be that we wouldn't break existing applications.



(*) because the handler does not necessarily know how to ->end() the
current IRQ at IC level, but Xenomai always does.




Jan




--

Philippe.



[Xenomai-core] xenomai on SPARC V8

2006-02-09 Thread Frederic Pont

Dear all,

I would be interested in running xenomai on a SPARC V8 (LEON) CPU. 
Any information on a SPARC port of xenomai, anybody working on it? 
Could anyone evaluate the complexity of such a port?


Thanks for your feedback
Fred

--
Frederic Pont
http://asl.epfl.ch
tel: +41 21 693 78 27



[Xenomai-core] Benchmarks

2006-02-09 Thread Dmitry Adamushko

Hi there,

after a preliminary discussion with Philippe and, well, a few days later
than I expected, I'm starting a new effort of writing some simple (i.e. not 
too complex :) yet, hopefully, useful benchmarking utilities.

The idea of each utility is to emulate a certain use case but
at the level which is significant enough to prove that
the system is (or is not) working properly latency-wise.
This, hopefully, will help to determine some bottlenecks and 
the parts of code that need to be reworked/tweaked.
Then we may use such tests on a release-by-release basis as indicators
of the progress or regression we are making with a certain release.

As an example, the first utility would implement the following use case :

(based on the latency program)

- a given number of periodic threads are running;

- configurable periods (so that e.g. a few threads can become active
  at the same moment of time). Actually, that's what we would need.

- timer : periodic or aperiodic;

...

the utility will likely not produce any screen output while running, but
rather comprehensive statistics in a handy form after finishing.

---

other utils could make use of some scenarios where synch. primitives /
rt_queues / pipes could be heavily used, etc.


I guess Xenomai already provides a solid amount of functionality and is
quite stable for the time being. So it's time to work on optimizing it!

Everyone is welcome to come up with any scenarios on which such utilities
could be based.

Any comments on the one with a given number of threads are welcome too.
--
Best regards,
Dmitry Adamushko


Re: [Xenomai-core] Benchmarks

2006-02-09 Thread Luotao Fu
Hi folks,

Dmitry Adamushko schrieb:
> 
> Hi there,
> 
> after a preliminary discussion with Philipe and, well, a few days later
> than I expected, I'm starting a new effort of writting some simple (i.e.
> not
> too complex :) yet, hopefully, useful benchmarking utilites.
> 
> The idea of each utility is to emulate a certain use case but
> at the level which is significant enough to prove that
> the system is (or is not) working properly latency-wise.
> This, hopefully, will help to determine some bottlenecks and
> the parts of code that need to be reworked/tweaked.
> Then we may use such tests on release-by-release basis as indicators
> of either progress or regress we are making with a certain release.

Actually, I'm doing some measurement work here to compare the realtime
performance of Xenomai and Preempt-RT. :-)

> 
> As an example, the first utility would implement the following use case :
> 
> (based on the latency program)
> 
> - a given number of periodic threads are running;
> 
> - configurable periods (so that e.g. a few threads can become active
>   at the same moment of time). Actually, that's what we would need.
> 
> - timer : periodic or apperiodic;

I've already implemented something along these lines in POSIX. I took the
accuracy.c out of the POSIX demo from Gilles and changed it so that you
can start a few threads with different nsleep durations. In addition, it
writes a log which can be plotted. The util does quite the same things
as the cyclictest by Thomas Gleixner.

Further, I implemented a tool for interrupt measurement with RTDM. I'm
still tuning it, because the Preempt-RT kernel occasionally has problems
with stability.

I even implemented the Rhealstone benchmark with Xenomai-compliant POSIX;
it however provides only mean values and might not be very interesting.
> 
> ...
> 
> the utility will not likely produce any screen output during its work, but
> rather comprehensive statistics in a handy form after finishing.
> 

Exactly what I thought :-)

> ---
> 
> other utils could make use of some scenarios where synch. primitives/
> rt_queue's/pipes could be heavily used etc.
> 

Generally, I'm quite interested in some Xenomai-specific latency
behaviour caused by e.g. domain switching, function wrapping and so on.
I'm still thinking about some concrete workload scenarios.

> 
> I guess Xenomai already provides a fair amount of functionality and it's
> quite stable for the time being. So it's time to work on optimizing it!
> 
> Everyone is welcome to come up with any scenarios on which such utilities
> could be based.
> 
> Any comments on the one with a given number of threads are welcome too.
> 

I'm now busy writing up my stuff, with no time to debug my hacks, so I
think I'll release them some time later, after first giving them to Jan
for a quick code review.

> 
> -- 
> Best regards,
> Dmitry Adamushko
> 
> 
> 
> 

Cheers
Luotao Fu



[Xenomai-core] More on Shared interrupts

2006-02-09 Thread Anders Blomdell
For the last few days, I have tried to figure out a good way to share interrupts 
between RT and non-RT domains. This has included looking through Dmitry's patch, 
correcting bugs and testing what is possible in my specific case. I'll therefore 
try to summarize at least a few of my thoughts.


1. When looking through Dmitry's patch I get the impression that the iack 
handler has very little to do with each interrupt (the test 'prev->iack != 
intr->iack' is a dead giveaway), but is more of a domain-specific function (or 
perhaps even just a placeholder for the hijacked Linux ack-function).



2. Somewhat inspired by the figure in "Life with Adeos", I have identified the 
following cases:


  irq K  | --- | ---o|   // Linux only
  ...
  irq L  | ---o| |   // RT-only
  ...
  irq M  | ---o--- | ---o|   // Shared between domains
  ...
  irq N  | ---o---o--- | |   // Shared inside single domain
  ...
  irq O  | ---o---o--- | ---o|   // Shared between and inside single domain

Xenomai currently handles the K & L cases, and Dmitry's patch addresses the N 
case. With edge-triggered interrupts, the M (and, after Dmitry's patch, O) case(s) 
might be handled by returning RT_INTR_CHAINED | RT_INTR_ENABLE from the interrupt 
handler; for level-triggered interrupts, the M and O cases can't be handled.


If one looks more closely at the K case (Linux-only interrupt), it works as 
follows: when an interrupt occurs, the call to irq_end is postponed until the 
Linux interrupt handler has run, i.e. further interrupts are disabled. This can 
be seen as a lazy version of Philippe's idea of disabling all non-RT interrupts 
until the RT-domain is idle, i.e. the interrupt is disabled only if it indeed occurs.


If this idea should be generalized to the M (and O) case(s), one can't rely on 
postponing the irq_end call (since the interrupt is still needed in the 
RT-domain), but has to rely on some function that disables all non-RT hardware 
that generates interrupts on that irq-line; such a function naturally has to 
have intimate knowledge of all hardware that can generate interrupts in order to 
be able to disable those interrupt sources that are non-RT.


If we then take Jan's observation about the many (Linux-only) interrupts present 
in an ordinary PC and add it to Philippe's idea of disabling all non-RT 
interrupts while executing in the RT-domain, I think that the following is a 
workable (and fairly efficient) way of handling this:


Add hardware-dependent enable/disable functions, where enable is called just 
before normal execution in a domain starts (i.e. when playing back interrupts, 
the disable is still in effect), and disable is called when normal domain 
execution ends. This effectively handles the K case above, with the added 
benefit that NO non-RT interrupts will occur during RT execution.


In the 8259 case, the disable function could look something like:

  domain_irq_disable(uint irqmask) {
    if ((irqmask & 0xff00) != 0xff00) {
      irqmask &= ~0x0004; // Cascaded interrupt is still needed
      outb(irqmask >> 8, PIC_SLAVE_IMR);
    }
    outb(irqmask, PIC_MASTER_IMR);
  }

If we should extend this to handle the M (and O) case(s), the disable function 
could look like:


  domain_irq_disable(uint irqmask, shared_irq_t *shared[]) {
    int i;

    for (i = 0 ; i < MAX_IRQ ; i++) {
      if (shared[i]) {
        shared_irq_t *next = shared[i];
        irqmask &= ~(1 << i);
        while (next) {
          next->disable();
          next = next->next;
        }
      }
    }
    if ((irqmask & 0xff00) != 0xff00) {
      irqmask &= ~0x0004; // Cascaded interrupt is still needed
      outb(irqmask >> 8, PIC_SLAVE_IMR);
    }
    outb(irqmask, PIC_MASTER_IMR);
  }

An obvious optimization of the above scheme is to never call the disable (or 
enable) function for the RT-domain, since there all interrupt processing is 
protected by the hardware.
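That skip-the-RT-domain optimization can be modeled in a few lines; everything below (domain ids, hook names) is purely illustrative, not Adeos/Xenomai API:

```c
#include <assert.h>

/* Toy model: per-domain disable hooks are invoked when normal execution
 * in a domain ends, but skipped for the RT domain, whose interrupt
 * processing is already protected by the hardware. */
enum { DOM_RT, DOM_LINUX, NDOMAINS };

static int disable_calls[NDOMAINS];

static void domain_irq_disable_hook(int dom)
{
    disable_calls[dom]++;       /* stands in for masking at the PIC */
}

static void domain_exec_end(int dom)
{
    if (dom == DOM_RT)
        return;                 /* optimization: never mask for RT */
    domain_irq_disable_hook(dom);
}
```

The design point is simply that the (potentially slow) hardware masking is paid only when leaving a non-RT domain.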


Comments, anyone?

--

Anders




Re: [Xenomai-core] More on Shared interrupts

2006-02-09 Thread Jan Kiszka
Anders Blomdell wrote:
> For the last few days, I have tried to figure out a good way to share
> interrupts between RT and non-RT domains. This has included looking
> through Dmitry's patch, correcting bugs and testing what is possible in
> my specific case. I'll therefore try to summarize at least a few of my
> thoughts.
> 
> 1. When looking through Dmitry's patch I get the impression that the
> iack handler has very little to do with each interrupt (the test
> 'prev->iack != intr->iack' is a dead giveaway), but is more of a
> domain-specific function (or perhaps even just a placeholder for the
> hijacked Linux ack-function).
> 
> 
> 2. Somewhat inspired by the figure in "Life with Adeos", I have
> identified the following cases:
> 
>   irq K  | --- | ---o|   // Linux only
>   ...
>   irq L  | ---o| |   // RT-only
>   ...
>   irq M  | ---o--- | ---o|   // Shared between domains
>   ...
>   irq N  | ---o---o--- | |   // Shared inside single domain
>   ...
>   irq O  | ---o---o--- | ---o|   // Shared between and inside single
> domain
> 
> Xenomai currently handles the K & L cases, Dmitrys patch addresses the N
> case, with edge triggered interrupts the M (and O after Dmitry's patch)
> case(s) might be handled by returning RT_INTR_CHAINED | RT_INTR_ENABLE
> from the interrupt handler, for level triggered interrupt the M and O
> cases can't be handled.

I guess you mean it the other way around: for the edge-triggered
cross-domain case we would actually have to loop over both the RT and
the Linux handlers until we are sure that the IRQ line was released once.
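The looping Jan describes amounts to repeated passes over all handlers until one clean pass, so that no edge raised mid-pass is lost while the line was still asserted. A toy model of that termination condition (names are made up, not nucleus code):

```c
#include <assert.h>

/* Toy model of draining an edge-triggered shared line: poll every
 * handler's device; repeat until one full pass finds no pending work,
 * since a device could have raised a new edge during the pass. */
#define NHANDLERS 2

static int pending[NHANDLERS] = { 1, 1 };

static int poll_device(int i)           /* 1 if the device had work */
{
    int was_pending = pending[i];
    pending[i] = 0;
    return was_pending;
}

static int drain_line(void)
{
    int passes = 0, again;

    do {
        again = 0;
        for (int i = 0; i < NHANDLERS; i++)
            if (poll_device(i))
                again = 1;
        passes++;
    } while (again);
    return passes;                      /* passes until line known idle */
}
```

With both devices initially pending, one pass services them and a second, clean pass proves the line is released.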

Luckily, I never saw such a scenario that was unavoidable (it hits you
with ISA hardware, which tends to have nice IRQ jumpers or other means -
it's just that you often cannot separate several controllers on the same
extension card IRQ-wise).

> 
> If one looks more closely at the K case (Linux only interrupt), it works
> by when an interrupt occurs, the call to irq_end is postponed until the
> Linux interrupt handler has run, i.e. further interrupts are disabled.
> This can be seen as a lazy version of Philippe's idea of disabling all
> non-RT interrupts until the RT-domain is idle, i.e. the interrupt is
> disabled only if it indeed occurs.
> 
> If this idea should be generalized to the M (and O) case(s), one can't
> rely on postponing the irq_end call (since the interrupt is still needed
> in the RT-domain), but has to rely on some function that disables all
> non-RT hardware that generates interrupts on that irq-line; such a
> function naturally has to have intimate knowledge of all hardware that
> can generate interrupts in order to be able to disable those interrupt
> sources that are non-RT.
> 
> If we then take Jan's observation about the many (Linux-only) interrupts
> present in an ordinary PC and add it to Philippe's idea of disabling all
> non-RT interrupts while executing in the RT-domain, I think that the
> following is a workable (and fairly efficient) way of handling this:
> 
> Add hardware dependent enable/disable functions, where the enable is
> called just before normal execution in a domain starts (i.e. when
> playing back interrupts, the disable is still in effect), and disable is
> called when normal domain execution end. This does effectively handle
> the K case above, with the added benefit that NO non-RT interrupts will
> occur during RT execution.
> 
> In the 8259 case, the disable function could look something like:
> 
>   domain_irq_disable(uint irqmask) {
> if ((irqmask & 0xff00) != 0xff00) {
>   irqmask &= ~0x0004; // Cascaded interrupt is still needed
>   outb(irqmask >> 8, PIC_SLAVE_IMR);
> }
> outb(irqmask, PIC_MASTER_IMR);
>   }
> 
> If we should extend this to handle the M (and O) case(s), the disable
> function could look like:
> 
>   domain_irq_disable(uint irqmask, shared_irq_t *shared[]) {
> int i;
> 
> for (i = 0 ; i < MAX_IRQ ; i++) {
>   if (shared[i]) {
> shared_irq_t *next = shared[i];
> irqmask &= ~(1 << i);
> while (next) {
>   next->disable();
>   next = next->next;
> }

This obviously means that all non-RT IRQ handlers sharing a line with
the RT domain would have to be registered in that shared[] list. This
gets close to my old suggestion. It just raises the question of how to
organise this interface, both on the RT and the Linux side.

>   }
> }
> if ((irqmask & 0xff00) != 0xff00) {
>   irqmask &= ~0x0004; // Cascaded interrupt is still needed
>   outb(irqmask >> 8, PIC_SLAVE_IMR);
> }
> outb(irqmask, PIC_MASTER_IMR);
>   }
> 
> An obvious optimization of the above scheme, is to never call the
> disable (or enable) function for the RT-domain, since there all
> interrupt processing is protected by the hardware.

Another point is to avoid looping over disable handlers for IRQs of
the K case. Otherwise, too many device-specific disable handlers would have to
be implemented e

[Xenomai-core] Isolated CPU, SMI problems an thoughts

2006-02-09 Thread John Schipper

Hello,
I'm new to this list but have in the past used RTAI in a single 
processor, off-the-shelf solution. I'm looking to switch to the native 
Xenomai API but have a general problem...
 The problem is SMI on new systems, and other latency killers that 
sometimes are not controllable by software, always popping up when trying 
to migrate to a newer platform.  Can a dual-core processor using 
isolcpus, preempt-rt and Xenomai effectively future-proof against 
SMI/chipset issues (specifically AMD or Intel dual-core solutions) by 
isolating a CPU for exclusive Xenomai/realtime use?


Some background information: 
 The realtime software I've developed in the past (with RTAI/Adeos in 
user space) is a simple high-speed serializer driver (mmap) to 
communicate with outside hardware and is responsible for synchronizing 
(with a semaphore/mutex) a linux process (soft realtime) at ~60Hz.  The 
realtime process is periodic at 1.2kHz or 2.4kHz and calculates/filters 
the data before sending commands back down the serializer interface and 
to the linux process for soft realtime network access.


 Generally we like to use "off the shelf" business PCs (Dell 170s and 
Dell 270s, HP 5000 with 1 GB memory) and find that 20-30us latency is 
achievable.  We use "off the shelf" hardware because availability 
(receive within a week) and low cost are desired.  Whenever looking for 
an alternative solution, either availability or cost becomes a show 
stopper.  I'm open to suggestions and invite anyone's thoughts on the 
subject.


Thanks for your time !

JKS





Re: [Xenomai-core] Isolated CPU, SMI problems an thoughts

2006-02-09 Thread Jan Kiszka
John Schipper wrote:
> Hello,
> I'm new to this list but have in the past used RTAI in a single
> processor off the shelf solution. I'm looking to switch to native
> Xenomai api but have a general problem...
>  The problem is SMI on new systems, and other latency killers that

What do you mean with "other" precisely?

> sometimes are not controllable by software always popping up when trying
> to migrate to a newer platform.  Can a dual core processor using
> isolcpus, preempt-rt and xenomai effectively future proof against smi/chip
> set issues (specifically AMD or Intel dual core solutions) by isolating a
> cpu for exclusively xenomai/realtime use?

SMI can only be addressed with CPU isolation if you are able to redirect
the related event to only one CPU. I don't know if this is possible / the default.

There are tricks to disable SMI on modern Intel chipsets. Xenomai
implements this; you can select the workaround during system
configuration. Doesn't this work for your particular systems? Then please
report details.

"Other", more subtle latency issues can only be addressed when the
mechanisms behind them are understood. Depends on the chipset
manufacturer's documentation. So, no general answer is possible.

> 
> Some background information:  The realtime software I've developed in
> the past (with RTAI/adeos in user space) is a simple high speed
> serializer driver (mmap) to communicate with outside hardware and is
> responsible for synchronizing (with a semaphore/mutex) a linux process
> (soft realtime) at ~60Hz.  The realtime process is periodic at 1.2Khz or
> 2.4Khz and calculates/filters the data before sending commands back down
> the serializer interface and to the linux process for soft realtime
> network access.
> 
>  Generally we like to use "off the shelf" business PC's (Dell 170's and
> Dell 270's, HP 5000 with 1Gig memory) and find that 20-30us latency is
> achievable.  We use "off the shelf" hardware because availability
> (receive within a week) and low cost are desired.   Whenever looking for
> an alternative solution either availability or cost becomes a show
> stopper.  I'm open to suggestions and invite anyone's thoughts on the
> subject.

Using off-the-shelf standard systems is always risky. I've heard of a
larger German automation company ordering standard PC hardware for
industrial control purposes only when initial latency tests on a
specific batch were successful. OK, they are ordering large enough
quantities, so they can dictate certain conditions...

Jan





Re: [Xenomai-core] Isolated CPU, SMI problems an thoughts

2006-02-09 Thread John Schipper

Jan Kiszka wrote:


John Schipper wrote:
 


Hello,
I'm new to this list but have in the past used RTAI in a single
processor off the shelf solution. I'm looking to switch to native
Xenomai api but have a general problem...
The problem is SMI on new systems, and other latency killers that
  



What do you mean with "other" precisely?
 

I have seen on some systems the need to disable USB legacy emulation and 
only use a PS/2 keyboard.  The USB interface (tied to a USB keyboard and 
mouse) is standard on a Dell GX280 without a PS/2 interface, and I have 
seen real-time having problems there.  This is, I believe, due to SMM mode 
being a latency killer (not sure if this is SMI related).


In regard to SMI, I poked around the Intel chipset registers (sorry, can't 
remember what the GX280 chipset was, but I could get it if important!) 
and I found that there are lock bits that do not allow disabling global 
SMI or the watchdog capability.  This was at least 6 months ago, so I 
have not tested the latest SMI workaround module (in RTAI at least, 
because I have not used Xenomai yet but plan to :) )


 


sometimes are not controllable by software always popping up when trying
to migrate to a newer platform.  Can a dual core processor using
isolcpus, preempt-rt and xenomai effectively future proof against smi/chip
set issues (specifically AMD or Intel dual core solutions) by isolating a
cpu for exclusively xenomai/realtime use?
  



SMI can only be addressed with CPU isolation if you are able to redirect
the related event only to one CPU. Don't know if this is possible / 
default.


 

This redirection has occurred to me also.  A couple of months ago I 
inquired and did not get any confirmation either way from Dell about 
doing this; maybe they do not know and I need to ask 
Intel/Via/AMD.  It's not clear to me whether this (if even possible) 
would be a BIOS feature or only controlled by using a special chipset 
kernel patch or module.



There are tricks to disable SMI on modern Intel chipsets. Xenomai
implements this, you can select the workaround during system
configuration. Don't this work for your particular systems? Then please
report details.

 

SMI tricks had not worked on a Dell GX280 system, but I plan to start 
some more testing again with Xenomai.  The fact that it did not work 
prompted me to dig into the SMI/SMM registers, which is where I discovered 
that I could not get past the lock-bit capability of the SMI/SMM chipset 
that was being set by the BIOS (not positive this is where it was 
happening), and after some failed attempts I just continued with the 
Dell 170/270 solution.  If there is an advantage, or maybe even a hope :), 
that an isolated CPU will help in regard to SMI/SMM, then I plan to go 
ahead and order some systems to test; otherwise I may continue 
testing single-core solutions.



"Other", more subtle latency issues can only be addressed when the
mechanisms behind them are understood. Depends on the chipset
manufacturer's documentation. So, no general answer is possible.

 


Some background information:  The realtime software I've developed in
the past (with RTAI/adeos in user space) is a simple high speed
serializer driver (mmap) to communicate with outside hardware and is
responsible for synchronizing (with a semaphore/mutex) a linux process
(soft realtime) at ~60Hz.  The realtime process is periodic at 1.2Khz or
2.4Khz and calculates/filters the data before sending commands back down
the serializer interface and to the linux process for soft realtime
network access.

Generally we like to use "off the shelf" business PC's (Dell 170's and
Dell 270's, HP 5000 with 1Gig memory) and find that 20-30us latency is
achievable.  We use "off the shelf" hardware because availability
(receive within a week) and low cost are desired.   Whenever looking for
an alternative solution either availability or cost becomes a show
stopper.  I'm open to suggestions and invite anyone's thoughts on the
subject.
  



Using off the shelf standard systems is always risky. I've heard of a
larger German automation company ordering standard PC hardware for
industrial control purposes only when initial latency tests on a
specific charge were successful. Ok, they are ordering large enough
quantities, so they can dictate certain conditions...

 



Testing the system is done to verify latency and expected real-time 
isolation from the normal linux tasks. It's done on a standard system and, 
once verified, used until the system is no longer available, which seems 
to be happening at an increasing rate.  PCI Express may be a reason, 
maybe money... but systems don't last much longer than a year and a 
half.  Either way, the motherboards themselves don't stay around too 
long.  Our quantity is not there for the needed leverage with providers 
:( .  The system is used for an industrial purpose which is NOT a 
life saving/risking situation.  It's used for simulation or simulator 
purposes.



Jan

 


John


[Xenomai-core] [PATCH] provide rtdm_mmap_to_user / rtdm_munmap

2006-02-09 Thread Jan Kiszka
Hi all,

this is a first attempt to add the requested mmap functionality to the
RTDM driver API. Anyone interested in this feature is invited to test my
patch (Rodrigo... ;) ). Comments on the implementation are welcome as well.

Philippe, I need xnarch_remap_page_range for this and therefore added a
#define XENO_HEAP_MODULE hack to rtdm/drvlib.c. What would you suggest
as a cleaner approach? Make this function available unconditionally or
introduce something like XENO_RTDM_MODULE? Moreover, isn't this
implementation also interesting for nucleus/heap.c?

Jan
Index: include/rtdm/rtdm_driver.h
===
--- include/rtdm/rtdm_driver.h	(Revision 551)
+++ include/rtdm/rtdm_driver.h	(Arbeitskopie)
@@ -995,6 +995,10 @@
 xnfree(ptr);
 }
 
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+  int prot, void **pptr);
+int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len);
+
 static inline int rtdm_read_user_ok(rtdm_user_info_t *user_info,
 const void __user *ptr, size_t size)
 {
Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c	(Revision 551)
+++ ksrc/skins/rtdm/drvlib.c	(Arbeitskopie)
@@ -31,7 +31,9 @@
 
 
 #include 
+#include 
 
+#define XENO_HEAP_MODULE
 #include 
 
 
@@ -1286,7 +1288,7 @@
  * Rescheduling: never.
  */
 int rtdm_irq_disable(rtdm_irq_t *irq_handle);
-/** @} */
+/** @} Interrupt Management Services */
 
 
 /*!
@@ -1358,16 +1360,127 @@
  * environments.
  */
 void rtdm_nrtsig_pend(rtdm_nrtsig_t *nrt_sig);
-/** @} */
+/** @} Non-Real-Time Signalling Services */
 
+#endif /* DOXYGEN_CPP */
 
+
 /*!
  * @ingroup driverapi
  * @defgroup util Utility Services
  * @{
  */
 
+static int rtdm_mmap_buffer(struct file *filp, struct vm_area_struct *vma)
+{
+return xnarch_remap_page_range(vma, vma->vm_start,
+   virt_to_phys(filp->private_data),
+   vma->vm_end - vma->vm_start, PAGE_SHARED);
+}
+
+static struct file_operations rtdm_mmap_fops = {
+.mmap = rtdm_mmap_buffer,
+};
+
 /**
+ * Map a kernel memory range into the address space of the user.
+ *
+ * @param[in] user_info User information pointer as passed to the invoked
+ * device operation handler
+ * @param[in] src_addr Kernel address to be mapped
+ * @param[in] len Length of the memory range
+ * @param[in] prot Protection flags for the user's memory range, typically
+ * either PROT_READ or PROT_READ|PROT_WRITE
+ * @param[in,out] pptr Address of a pointer containing the desired user
+ * address or NULL on entry and the finally assigned address on return
+ *
+ * @return 0 on success, otherwise:
+ *
+ * - -EXXX is returned if .
+ *
+ * @note An RTDM driver is expected to invoke rtdm_munmap on every mapped
+ * memory range either when the user requests it explicitly or when the
+ * related device is closed.
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - User-space task (non-RT)
+ *
+ * Rescheduling: possible.
+ */
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+  int prot, void **pptr)
+{
+struct file *filp;
+struct file_operations  *old_fops;
+void*old_priv_data;
+void*user_ptr;
+
+filp = filp_open("/dev/zero", O_RDWR, 0);
+if (IS_ERR(filp))
+return PTR_ERR(filp);
+
+old_fops = filp->f_op;
+filp->f_op = &rtdm_mmap_fops;
+
+old_priv_data = filp->private_data;
+filp->private_data = src_addr;
+
+down_write(&user_info->mm->mmap_sem);
+user_ptr = (void *)do_mmap(filp, (unsigned long)*pptr, len, prot,
+   MAP_SHARED, 0);
+up_write(&user_info->mm->mmap_sem);
+
+filp->f_op = old_fops;
+filp->private_data = old_priv_data;
+
+filp_close(filp, user_info->files);
+
+if (IS_ERR(user_ptr))
+return PTR_ERR(user_ptr);
+
+*pptr = user_ptr;
+return 0;
+}
+
+/**
+ * Unmap a user memory range.
+ *
+ * @param[in] user_info User information pointer as passed to
+ * rtdm_mmap_to_user() when requesting to map the memory range
+ * @param[in] ptr User address or the memory range
+ * @param[in] len Length of the memory range
+ *
+ * @return 0 on success, otherwise:
+ *
+ * - -EXXX is returned if .
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - User-space task (non-RT)
+ *
+ * Rescheduling: possible.
+ */
+int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len)
+{
+int err;
+
+down_write(&user_info->mm->mmap_sem);
+err = do_munmap(user_info->mm, (unsigned long)ptr, len);
+up_write(&user_info->mm->mmap_sem);
+
+return err;
+}
+
+#ifdef DOXYGEN_CPP /* Only used for doxygen doc gener

Re: [Xenomai-core] [PATCH] provide rtdm_mmap_to_user / rtdm_munmap

2006-02-09 Thread Jan Kiszka
Jan Kiszka wrote:
> Hi all,
> 
> this is a first attempt to add the requested mmap functionality to the
> RTDM driver API.

... and this version is even more useful than the previous one (now with
EXPORT_SYMBOL!). Be warned: I just compiled it; I'm counting on third-party
testers.

Jan
Index: include/rtdm/rtdm_driver.h
===
--- include/rtdm/rtdm_driver.h  (Revision 556)
+++ include/rtdm/rtdm_driver.h  (Arbeitskopie)
@@ -995,6 +995,10 @@
 xnfree(ptr);
 }
 
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+  int prot, void **pptr);
+int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len);
+
 static inline int rtdm_read_user_ok(rtdm_user_info_t *user_info,
 const void __user *ptr, size_t size)
 {
Index: ksrc/skins/rtdm/drvlib.c
===
--- ksrc/skins/rtdm/drvlib.c(Revision 556)
+++ ksrc/skins/rtdm/drvlib.c(Arbeitskopie)
@@ -31,7 +31,9 @@
 
 
 #include 
+#include 
 
+#define XENO_HEAP_MODULE
 #include 
 
 
@@ -1286,7 +1288,7 @@
  * Rescheduling: never.
  */
 int rtdm_irq_disable(rtdm_irq_t *irq_handle);
-/** @} */
+/** @} Interrupt Management Services */
 
 
 /*!
@@ -1358,16 +1360,133 @@
  * environments.
  */
 void rtdm_nrtsig_pend(rtdm_nrtsig_t *nrt_sig);
-/** @} */
+/** @} Non-Real-Time Signalling Services */
 
+#endif /* DOXYGEN_CPP */
 
+
 /*!
  * @ingroup driverapi
  * @defgroup util Utility Services
  * @{
  */
 
+static int rtdm_mmap_buffer(struct file *filp, struct vm_area_struct *vma)
+{
+return xnarch_remap_page_range(vma, vma->vm_start,
+   virt_to_phys(filp->private_data),
+   vma->vm_end - vma->vm_start, PAGE_SHARED);
+}
+
+static struct file_operations rtdm_mmap_fops = {
+.mmap = rtdm_mmap_buffer,
+};
+
 /**
+ * Map a kernel memory range into the address space of the user.
+ *
+ * @param[in] user_info User information pointer as passed to the invoked
+ * device operation handler
+ * @param[in] src_addr Kernel address to be mapped
+ * @param[in] len Length of the memory range
+ * @param[in] prot Protection flags for the user's memory range, typically
+ * either PROT_READ or PROT_READ|PROT_WRITE
+ * @param[in,out] pptr Address of a pointer containing the desired user
+ * address or NULL on entry and the finally assigned address on return
+ *
+ * @return 0 on success, otherwise:
+ *
+ * - -EXXX is returned if .
+ *
+ * @note An RTDM driver is expected to invoke rtdm_munmap on every mapped
+ * memory range either when the user requests it explicitly or when the
+ * related device is closed.
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - User-space task (non-RT)
+ *
+ * Rescheduling: possible.
+ */
+int rtdm_mmap_to_user(rtdm_user_info_t *user_info, void *src_addr, size_t len,
+  int prot, void **pptr)
+{
+struct file *filp;
+struct file_operations  *old_fops;
+void*old_priv_data;
+void*user_ptr;
+
+filp = filp_open("/dev/zero", O_RDWR, 0);
+if (IS_ERR(filp))
+return PTR_ERR(filp);
+
+old_fops = filp->f_op;
+filp->f_op = &rtdm_mmap_fops;
+
+old_priv_data = filp->private_data;
+filp->private_data = src_addr;
+
+down_write(&user_info->mm->mmap_sem);
+user_ptr = (void *)do_mmap(filp, (unsigned long)*pptr, len, prot,
+   MAP_SHARED, 0);
+up_write(&user_info->mm->mmap_sem);
+
+filp->f_op = old_fops;
+filp->private_data = old_priv_data;
+
+filp_close(filp, user_info->files);
+
+if (IS_ERR(user_ptr))
+return PTR_ERR(user_ptr);
+
+*pptr = user_ptr;
+return 0;
+}
+
+EXPORT_SYMBOL(rtdm_mmap_to_user);
+
+
+/**
+ * Unmap a user memory range.
+ *
+ * @param[in] user_info User information pointer as passed to
+ * rtdm_mmap_to_user() when requesting to map the memory range
+ * @param[in] ptr User address or the memory range
+ * @param[in] len Length of the memory range
+ *
+ * @return 0 on success, otherwise:
+ *
+ * - -EXXX is returned if .
+ *
+ * Environments:
+ *
+ * This service can be called from:
+ *
+ * - Kernel module initialization/cleanup code
+ * - User-space task (non-RT)
+ *
+ * Rescheduling: possible.
+ */
+int rtdm_munmap(rtdm_user_info_t *user_info, void *ptr, size_t len)
+{
+int err;
+
+down_write(&user_info->mm->mmap_sem);
+err = do_munmap(user_info->mm, (unsigned long)ptr, len);
+up_write(&user_info->mm->mmap_sem);
+
+return err;
+}
+
+EXPORT_SYMBOL(rtdm_munmap);
+
+
+#ifdef DOXYGEN_CPP /* Only used for doxygen doc generation */
+
+/**
  * Real-time safe message printing on kernel console
  *
  * @param[in] format Format string (conforming standard @c printf())
@@ -1583,6 +1702,6 @@
  */
 int rtdm_in_rt_context(void);
 

Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Anders Blomdell

Philippe Gerum wrote:

Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the




Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED



ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.
This says that the interrupt will be ->end'ed at some later time (perhaps in the 
interrupt handler task)



- RT_INTR_CHAINED



ISR says it wants the IRQ to be propagated down the pipeline. Nothing is 
said about the fact that the last ISR did or did not handle the IRQ 
locally; this is irrelevant.
This says that the interrupt will eventually be ->end'ed by some later stage in 
the pipeline.



- RT_INTR_ENABLE



ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).

This says that the interrupt will be ->end'ed when this interrupt handler 
returns.




- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. It 
would mean to continue processing the chain of handlers because the last 
ISR invoked was not concerned by the outstanding IRQ.

Sounds like RT_INTR_CHAINED, except that it's for the current pipeline stage?
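To make the distinction between the four flags concrete, the dispatcher's reaction to an ISR's return value can be modeled as below. The flag values and the `action` struct are made up for illustration; they are not the real Xenomai constants or dispatcher code:

```c
#include <assert.h>

/* Illustrative flag values (NOT the real Xenomai definitions). */
#define RT_INTR_HANDLED 0x1   /* processing stops in this domain      */
#define RT_INTR_CHAINED 0x2   /* propagate IRQ down the pipeline      */
#define RT_INTR_ENABLE  0x4   /* ->end the IRQ line upon return       */
#define RT_INTR_NOINT   0x8   /* not mine: try the next shared ISR    */

struct action {
    int propagate;      /* pass IRQ to the next pipeline stage        */
    int end_now;        /* re-enable (->end) the line immediately     */
    int next_handler;   /* continue with the next shared handler      */
};

/* Interpret an ISR's return flags the way the thread describes them. */
static struct action dispatch(int ret)
{
    struct action a = { 0, 0, 0 };

    if (ret & RT_INTR_CHAINED)
        a.propagate = 1;
    if (ret & RT_INTR_ENABLE)
        a.end_now = 1;
    if (ret & RT_INTR_NOINT)
        a.next_handler = 1;
    return a;
}
```

Seen this way, HANDLED alone really means "do nothing further here", which is why combining the others as a flag set (rather than a scalar) invites the ambiguities discussed below.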

Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_CHAINED | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)
  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!
__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
My distinct feeling is that the return value should be a scalar and not a set!

...

I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.



I support that. Shared interrupts should be handled properly by Xeno 
since such - I'd say "last resort" - configurations could be needed; this 
said, we should not see this as the rule but rather as the exception, 
since it is basically required to work around some underlying hw limitations 
wrt interrupt management, and definitely has a significant cost on 
processing each shared IRQ wrt determinism.


Incidentally, there is an interesting optimization on the project's todo 
list 

Is this todo list accessible anywhere?

> that would allow non-RT interrupts to be masked at IC level when
the Xenomai domain is active. We could do that on any arch with 
civilized interrupt management, and that would prevent any asynchronous 
diversion from the critical code when Xenomai is running RT tasks 
(kernel or user-space). Think of this as some hw-controlled interrupt 
shield. Since this feature requires to be able to individually mask each 
interrupt source at IC level, there should be no point in sharing fully 
vectored interrupts in such a configuration anyway. This fact also 
pleads for having the shared IRQ support as a build-time option.


--
Anders Blomdell



Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Jan Kiszka
Anders Blomdell wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>
>>> Wolfgang Grandegger wrote:
>>>
 Hello,

 Dmitry Adamushko wrote:

> Hi,
>
> this is the final set of patches against the SVN trunk of 2006-02-03.
>
> It addresses mostly remarks concerning naming (XN_ISR_ISA ->
> XN_ISR_EDGE), a few cleanups and updated comments.
>
> Functionally, the support for shared interrupts (a few flags) to the
>>>
>>>
>>>
>>> Not directly your fault: the increasing number of return flags for IRQ
>>> handlers makes me worry that they are used correctly. I can figure out
>>> what they mean (not yet that clearly from the docs), but does someone
>>> else understand all this:
>>>
>>> - RT_INTR_HANDLED
>>
>>
>> ISR says it has handled the IRQ, and does not want any propagation to
>> take place down the pipeline. IOW, the IRQ processing stops there.
> This says that the interrupt will be ->end'ed at some later time
> (perhaps in the interrupt handler task)
> 
>>> - RT_INTR_CHAINED
>>
>>
>> ISR says it wants the IRQ to be propagated down the pipeline. Nothing
>> is said about the fact that the last ISR did or did not handle the IRQ
>> locally; this is irrelevant.
> This says that the interrupt will eventually be ->end'ed by some later
> stage in the pipeline.
> 
>>> - RT_INTR_ENABLE
>>
>>
>> ISR requests the interrupt dispatcher to re-enable the IRQ line upon
>> return (cumulable with HANDLED/CHAINED).
> This says that the interrupt will be ->end'ed when this interrupt
> handler returns.
> 
>>
>>> - RT_INTR_NOINT
>>>
>>
>> This new one comes from Dmitry's patch for shared IRQ support AFAICS.
>> It would mean to continue processing the chain of handlers because the
>> last ISR invoked was not concerned by the outstanding IRQ.
> Sounds like RT_INTR_CHAINED, except that it's for the current pipeline
> stage?
> 
> Now for the quiz question (powerpc arch):
> 
>   1. Assume an edge triggered interrupt
>   2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared

Kind of redundant. What did you really mean?

>  interrupt, but no problem since it's edge-triggered)
>   3. Interrupt gets ->end'ed right after RT-handler has returned
>   4. The Linux interrupt handler eventually starts its ->end() handler:
> local_irq_save_hw(flags);
> if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
>   ipipe_irq_unlock(irq);
> // Next interrupt occurs here!
> __ipipe_std_irq_dtype[irq].end(irq);
> local_irq_restore_hw(flags);
> 
> 
> Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
> My distinct feeling is that the return value should be a scalar and not
> a set!

That's a good idea: only provide valid and reasonable flag combinations
to the user!

>> ...
>>> I would vote for the (already scheduled?) extension to register an
>>> optimised IRQ trampoline in case there is actually no sharing taking
>>> place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
>>> obsolete.
>>
>>
>> I support that. Shared interrupts should be handled properly by Xeno
>> since such - I'd say "last resort" - configuration could be needed;
>> this said, we should not see this as the rule but rather as the
>> exception, since this is basically required to solve some underlying
>> hw limitations wrt interrupt management, and definitely has a
>> significant cost on processing each shared IRQ wrt determinism.
>>
>> Incidentally, there is an interesting optimization on the project's
>> todo list 
> Is this todo list accessible anywhere?

I did not know of such interesting plans either. Maybe we should start
using more of the features GNA provides to us (task lists?)...

> 
>> that would allow non-RT interrupts to be masked at IC level when
>> the Xenomai domain is active. We could do that on any arch with
>> civilized interrupt management, and that would prevent any
>> asynchronous diversion from the critical code when Xenomai is running
>> RT tasks (kernel or user-space). Think of this as some hw-controlled
>> interrupt shield. Since this feature requires to be able to
>> individually mask each interrupt source at IC level, there should be
>> no point in sharing fully vectored interrupts in such a configuration
>> anyway. This fact also pleads for having the shared IRQ support as a
>> build-time option.
> 

This concept sounds really thrilling. I already wondered if this is
possible after seeing how many non-RT IRQ stubs can hit between an RT
event and the RT task invocation: HD, network, keyboard, mouse, sound,
graphic card - and if you are "lucky", a lot of them chain up at the
same time. But I thought that such disabling is too costly for being
used at every domain switch. Is it not?

Jan




signature.asc
Description: OpenPGP digital signature


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Anders Blomdell wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the





Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED




ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.


This says that the interrupt will be ->end'ed at some later time 
(perhaps in the interrupt handler task)




The ISR may end the IRQ before returning, or leave it to the nucleus upon return 
by adding the ENABLE bit.



- RT_INTR_CHAINED




ISR says it wants the IRQ to be propagated down the pipeline. Nothing 
is said about the fact that the last ISR did or did not handle the IRQ 
locally; this is irrelevant.


This says that the interrupt will eventually be ->end'ed by some later 
stage in the pipeline.



- RT_INTR_ENABLE




ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).




This is wrong; we should only associate this to HANDLED; sorry.

This says that the interrupt will be ->end'ed when this interrupt 
handler returns.





- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. 
It would mean to continue processing the chain of handlers because the 
last ISR invoked was not concerned by the outstanding IRQ.


Sounds like RT_INTR_CHAINED, except that it's for the current pipeline 
stage?




Basically, yes.


Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)


( Assuming RT_INTR_CHAINED | RT_INTR_ENABLE )


  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!


It can't occur here: hw interrupts are off after local_irq_save_hw, and unlocking 
some IRQ does not flush the IRQ log.



__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?


This could happen, yep. Actually, this would be a possible misuse of the ISR 
return values.
If one chains the handler Adeos-wise, it is expected to leave the IC in its 
original state wrt the processed interrupt. CHAINED should be seen as mutually 
exclusive with ENABLE.


My distinct feeling is that the return value should be a scalar and not 
a set!




To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*), HANDLED | 
CHAINED and CHAINED. It's currently a set because I once thought that the 
"handled" indication (or lack of) could be a valuable information to gather at 
nucleus level to detect unhandled RT interrupts. Fact is that we currently don't 
use this information, though. IOW, we could indeed define some enum and have a 
scalar there instead of a set; or we could just leave this as a set, but whine 
when detecting the invalid ENABLE | CHAINED combination.


(*) because the handler does not necessarily know how to ->end() the current IRQ at 
IC level, but Xenomai always does.



...


I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.




I support that. Shared interrupts should be handled properly by Xeno 
since such - I'd say "last resort" - configuration could be needed; 
this said, we should not see this as the rule but rather as the 
exception, since this is basically required to solve some underlying 
hw limitations wrt interrupt management, and definitely has a 
significant cost on processing each shared IRQ wrt determinism.


Incidentally, there is an interesting optimization on the project's 
todo list 


Is this todo list accessible anywhere?



There's a roadmap for v2.1 that has been posted to the -core list in 
October/November. Aside of that, the todos are not maintained in a centralized and 
accessible way yet. We could perhaps use GNA's task manager for that 
(http://gna.org/task/?group=xenomai), even if not to the full extent of its features.



 > that would allow non-RT interrupts to be masked at IC level when

the Xenomai domain is active. We could do that on any arch with 
civilized int

Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Jan Kiszka wrote:

Anders Blomdell wrote:


Philippe Gerum wrote:


Jan Kiszka wrote:



Wolfgang Grandegger wrote:



Hello,

Dmitry Adamushko wrote:



Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the




Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED



ISR says it has handled the IRQ, and does not want any propagation to
take place down the pipeline. IOW, the IRQ processing stops there.


This says that the interrupt will be ->end'ed at some later time
(perhaps in the interrupt handler task)



- RT_INTR_CHAINED



ISR says it wants the IRQ to be propagated down the pipeline. Nothing
is said about the fact that the last ISR did or did not handle the IRQ
locally; this is irrelevant.


This says that the interrupt will eventually be ->end'ed by some later
stage in the pipeline.



- RT_INTR_ENABLE



ISR requests the interrupt dispatcher to re-enable the IRQ line upon
return (cumulable with HANDLED/CHAINED).


This says that the interrupt will be ->end'ed when this interrupt
handler returns.



- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS.
It would mean to continue processing the chain of handlers because the
last ISR invoked was not concerned by the outstanding IRQ.


Sounds like RT_INTR_CHAINED, except that it's for the current pipeline
stage?

Now for the quiz question (powerpc arch):

 1. Assume an edge triggered interrupt
 2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared



Kind of redundant. What did you really mean?



interrupt, but no problem since it's edge-triggered)
 3. Interrupt gets ->end'ed right after RT-handler has returned
 4. The Linux interrupt handler eventually starts its ->end() handler:
   local_irq_save_hw(flags);
   if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
 ipipe_irq_unlock(irq);
   // Next interrupt occurs here!
   __ipipe_std_irq_dtype[irq].end(irq);
   local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
My distinct feeling is that the return value should be a scalar and not
a set!



That's a good idea: only provide valid and reasonable flag combinations
to the user!



...


I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
obsolete.



I support that. Shared interrupts should be handled properly by Xeno
since such - I'd say "last resort" - configuration could be needed;
this said, we should not see this as the rule but rather as the
exception, since this is basically required to solve some underlying
hw limitations wrt interrupt management, and definitely has a
significant cost on processing each shared IRQ wrt determinism.

Incidentally, there is an interesting optimization on the project's
todo list 


Is this todo list accessible anywhere?



I did not know of such interesting plans either. Maybe we should start
using more of the features GNA provides to us (task lists?)...



that would allow non-RT interrupts to be masked at IC level when
the Xenomai domain is active. We could do that on any arch with
civilized interrupt management, and that would prevent any
asynchronous diversion from the critical code when Xenomai is running
RT tasks (kernel or user-space). Think of this as some hw-controlled
interrupt shield. Since this feature requires to be able to
individually mask each interrupt source at IC level, there should be
no point in sharing fully vectored interrupts in such a configuration
anyway. This fact also pleads for having the shared IRQ support as a
build-time option.




This concept sounds really thrilling. I already wondered if this is
possible after seeing how many non-RT IRQ stubs can hit between an RT
event and the RT task invocation: HD, network, keyboard, mouse, sound,
graphic card - and if you are "lucky", a lot of them chain up at the
same time. But I thought that such disabling is too costly for being
used at every domain switch. Is it not?



It all depends on the underlying arch. I started to think about this when working
with the Blackfin, which provides an efficient and fine-grained control over the
interrupt system (hey, it's a DSP after all). Anders recently brought up the issue
too, waking up the sleeper. Of course, one would not want to try that with an 8259
chip on x86...



Jan





--

Philippe.



Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Jan Kiszka
Philippe Gerum wrote:
> Anders Blomdell wrote:
> 
>> My distinct feeling is that the return value should be a scalar and
>> not a set!
>>
> 
> To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
> HANDLED | CHAINED and CHAINED. It's currently a set because I once
> thought that the "handled" indication (or lack of) could be a valuable
> information to gather at nucleus level to detect unhandled RT
> interrupts. Fact is that we currently don't use this information,

But it is required for the edge-triggered case to detect when the IRQ
line was at least shortly released. I guess Dmitry introduced that NOINT
just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
we could avoid this new flag by just making HANDLED non-zero.

> though. IOW, we could indeed define some enum and have a scalar there
> instead of a set; or we could just leave this as a set, but whine when
> detecting the invalid ENABLE | CHAINED combination.

In combination with the change above and some doc improvement ("valid
combinations are: ..."), I could also live with keeping the flags. The
advantage would be that we don't break existing applications.

> 
> (*) because the handler does not necessarily know how to ->end() the
> current IRQ at IC level, but Xenomai always does.
> 

Jan



signature.asc
Description: OpenPGP digital signature


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Anders Blomdell

Philippe Gerum wrote:

Anders Blomdell wrote:


Philippe Gerum wrote:


Jan Kiszka wrote:


Wolfgang Grandegger wrote:


Hello,

Dmitry Adamushko wrote:


Hi,

this is the final set of patches against the SVN trunk of 2006-02-03.

It addresses mostly remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), a few cleanups and updated comments.

Functionally, the support for shared interrupts (a few flags) to the






Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry that they are used correctly. I can figure out
what they mean (not yet that clearly from the docs), but does someone
else understand all this:

- RT_INTR_HANDLED





ISR says it has handled the IRQ, and does not want any propagation to 
take place down the pipeline. IOW, the IRQ processing stops there.



This says that the interrupt will be ->end'ed at some later time 
(perhaps in the interrupt handler task)




The ISR may end the IRQ before returning, or leave it to the nucleus 
upon return by adding the ENABLE bit.



- RT_INTR_CHAINED





ISR says it wants the IRQ to be propagated down the pipeline. Nothing 
is said about the fact that the last ISR did or did not handle the 
IRQ locally; this is irrelevant.



This says that the interrupt will eventually be ->end'ed by some later 
stage in the pipeline.



- RT_INTR_ENABLE





ISR requests the interrupt dispatcher to re-enable the IRQ line upon 
return (cumulable with HANDLED/CHAINED).





This is wrong; we should only associate this to HANDLED; sorry.

This says that the interrupt will be ->end'ed when this interrupt 
handler returns.





- RT_INTR_NOINT



This new one comes from Dmitry's patch for shared IRQ support AFAICS. 
It would mean to continue processing the chain of handlers because 
the last ISR invoked was not concerned by the outstanding IRQ.



Sounds like RT_INTR_CHAINED, except that it's for the current pipeline 
stage?




Basically, yes.


Now for the quiz question (powerpc arch):

  1. Assume an edge triggered interrupt
  2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
 interrupt, but no problem since it's edge-triggered)



( Assuming RT_INTR_CHAINED | RT_INTR_ENABLE )


  3. Interrupt gets ->end'ed right after RT-handler has returned
  4. The Linux interrupt handler eventually starts its ->end() handler:
local_irq_save_hw(flags);
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
  ipipe_irq_unlock(irq);
// Next interrupt occurs here!



It can't occur here: hw interrupts are off after local_irq_save_hw, and 
unlocking some IRQ does not flush the IRQ log.



__ipipe_std_irq_dtype[irq].end(irq);
local_irq_restore_hw(flags);


Wouldn't this lead to a lost interrupt? Or am I overly paranoid?



This could happen, yep. Actually, this would be a possible misuse of the 
ISR return values.
If one chains the handler Adeos-wise, it is expected to leave the IC in 
its original state wrt the processed interrupt. CHAINED should be seen 
as mutually exclusive with ENABLE.


My distinct feeling is that the return value should be a scalar and 
not a set!




To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*), 
HANDLED | CHAINED and CHAINED. It's currently a set because I once 
thought that the "handled" indication (or lack of) could be a valuable 
information to gather at nucleus level to detect unhandled RT 
interrupts. Fact is that we currently don't use this information, 
though. IOW, we could indeed define some enum and have a scalar there 
instead of a set; or we could just leave this as a set, but whine when 
detecting the invalid ENABLE | CHAINED combination.


agile_programmer_mode_off();
realtime_programmer_hat_on();

I prefer programming errors to show up at compile time!

goto todays_work;

// Will never come here :-(
realtime_programmer_hat_off();
agile_programmer_mode_on();

--

Anders



Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Dmitry Adamushko
On 09/02/06, Jan Kiszka <[EMAIL PROTECTED]> wrote:
> Philippe Gerum wrote:
>> Anders Blomdell wrote:
>>
>>> My distinct feeling is that the return value should be a scalar and
>>> not a set!
>>
>> To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
>> HANDLED | CHAINED and CHAINED. It's currently a set because I once
>> thought that the "handled" indication (or lack of) could be a valuable
>> information to gather at nucleus level to detect unhandled RT
>> interrupts. Fact is that we currently don't use this information,
>
> But it is required for the edge-triggered case to detect when the IRQ
> line was at least shortly released. I guess Dmitry introduced that NOINT
> just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
> we could avoid this new flag by just making HANDLED non-zero.
That's it.

I was about to comment on Philippe's list of possible return values, but you beat me to it.

HANDLED is 0, so we cannot distinguish between the HANDLED | CHAINED and
CHAINED cases. NOINT explicitly denotes the handler's answer "this IRQ
is not raised by my hw!" and it's needed (at least) for implementing
edge-triggered irq sharing.

It's not necessary if HANDLED becomes non-zero; conversely, HANDLED is not necessary with NOINT.

As far as I can see, Philippe's list can be mapped as follows:

HANDLED           ->  0
HANDLED | ENABLE  ->  ENABLE
HANDLED | CHAINED ->  CHAINED
CHAINED           ->  CHAINED | NOINT

and 

NOINT as a separate use case (?). Could be useful at least for the edge-triggered stuff.

--
Best regards,
Dmitry Adamushko


Re: [Xenomai-core] [Combo-PATCH] Shared interrupts (final)

2006-02-09 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Anders Blomdell wrote:



My distinct feeling is that the return value should be a scalar and
not a set!



To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*),
HANDLED | CHAINED and CHAINED. It's currently a set because I once
thought that the "handled" indication (or lack of) could be a valuable
information to gather at nucleus level to detect unhandled RT
interrupts. Fact is that we currently don't use this information,



But it is required for the edge-triggered case to detect when the IRQ
line was at least shortly released. I guess Dmitry introduced that NOINT
just because HANDLED equals 0 so far. As I would say HANDLED == !NOINT,
we could avoid this new flag by just making HANDLED non-zero.



Yes, we could. HANDLED is currently zero only because the nucleus does not care 
about the !handled case yet.





though. IOW, we could indeed define some enum and have a scalar there
instead of a set; or we could just leave this as a set, but whine when
detecting the invalid ENABLE | CHAINED combination.



In combination with the change above and some doc improvement ("valid
combinations are: ..."), I could also live with keeping the flags. The
advantage would be that we don't break existing applications.



(*) because the handler does not necessarily know how to ->end() the
current IRQ at IC level, but Xenomai always does.




Jan




--

Philippe.



[Xenomai-core] xenomai on SPARC V8

2006-02-09 Thread Frederic Pont

Dear all,

I would be interested in running Xenomai on a SPARC V8 (LEON) CPU. 
Is there any information on a SPARC port of Xenomai? Is anybody working on it? 
Could anyone evaluate the complexity of such a port?


Thanks for your feedback
Fred

--
Frederic Pont
http://asl.epfl.ch
tel: +41 21 693 78 27



[Xenomai-core] Benchmarks

2006-02-09 Thread Dmitry Adamushko

Hi there,

after a preliminary discussion with Philippe and, well, a few days later
than I expected, I'm starting a new effort of writing some simple (i.e. not 
too complex :) yet, hopefully, useful benchmarking utilities.

The idea of each utility is to emulate a certain use case, but
at a level significant enough to show whether
the system is (or is not) working properly latency-wise.
This, hopefully, will help to identify bottlenecks and 
the parts of code that need to be reworked/tweaked.
Then we may use such tests on a release-by-release basis as indicators
of the progress or regression we are making with a certain release.

As an example, the first utility would implement the following use case :

(based on the latency program)

- a given number of periodic threads are running;

- configurable periods (so that e.g. a few threads can become active
  at the same moment in time). Actually, that's what we would need.

- timer: periodic or aperiodic;

...

the utility will likely not produce any screen output while running, but
rather print comprehensive statistics in a handy form after finishing.

---

other utils could make use of scenarios where synchronization primitives/
rt_queues/pipes are heavily used, etc.


I guess Xenomai already provides a fair amount of functionality and is
quite stable for the time being. So it's time to work on optimizing it!

Everyone is welcome to come up with any scenarios on which such utilities
could be based.

Any comments on the one with a given number of threads are welcome too.

--
Best regards,
Dmitry Adamushko


Re: [Xenomai-core] Benchmarks

2006-02-09 Thread Luotao Fu
Hi folks,

Dmitry Adamushko schrieb:
> 
> Hi there,
> 
> after a preliminary discussion with Philippe and, well, a few days later
> than I expected, I'm starting a new effort of writing some simple (i.e.
> not
> too complex :) yet, hopefully, useful benchmarking utilities.
> 
> The idea of each utility is to emulate a certain use case but
> at the level which is significant enough to prove that
> the system is (or is not) working properly latency-wise.
> This, hopefully, will help to determine some bottlenecks and
> the parts of code that need to be reworked/tweaked.
> Then we may use such tests on release-by-release basis as indicators
> of either progress or regress we are making with a certain release.

Actually, I'm doing some measurements here to compare the real-time
performance of Xenomai and Preempt-RT. :-)

> 
> As an example, the first utility would implement the following use case :
> 
> (based on the latency program)
> 
> - a given number of periodic threads are running;
> 
> - configurable periods (so that e.g. a few threads can become active
>   at the same moment of time). Actually, that's what we would need.
> 
> - timer: periodic or aperiodic;

I've already implemented something along these lines in POSIX. I took
accuracy.c from Gilles' POSIX demo and changed it so that you can start a
few threads with different nanosleep durations. In addition, it writes a
log which can be plotted. The utility does much the same thing as the
cyclictest by Thomas Gleixner.

Furthermore, I implemented a tool for interrupt measurement with RTDM. I'm
still tuning it, because the Preempt-RT kernel occasionally has stability
problems.

I even implemented the Rhealstone benchmark with Xenomai-compliant POSIX;
however, it provides only mean values and might not be very interesting.

> 
> ...
> 
> the utility will likely not produce any screen output while running, but
> rather print comprehensive statistics in a handy form after finishing.
> 

Exactly what I thought :-)

> ---
> 
> other utils could make use of scenarios where synchronization primitives/
> rt_queues/pipes are heavily used, etc.
> 

Generally I'm quite interested in some Xenomai-specific latency
behaviour caused by e.g. domain switching, function wrapping and so on.
I'm still thinking about some concrete workload scenarios.

> 
> I guess Xenomai already provides a fair amount of functionality and is
> quite stable for the time being. So it's time to work on optimizing it!
> 
> Everyone is welcome to come up with any scenarios on which such utilities
> could be based.
> 
> Any comments on the one with a given number of threads are welcome too.
> 

I'm now busy writing up my stuff and have no time to debug my hacks, so I
think I'll release them somewhat later, after I've first given them to Jan
for a quick code review.

> 
> -- 
> Best regards,
> Dmitry Adamushko
> 
> 
> 
> 
> ___
> Xenomai-core mailing list
> Xenomai-core@gna.org
> https://mail.gna.org/listinfo/xenomai-core

Cheers
Luotao Fu



[Xenomai-core] More on Shared interrupts

2006-02-09 Thread Anders Blomdell
For the last few days, I have tried to figure out a good way to share interrupts 
between RT and non-RT domains. This has included looking through Dmitry's patch, 
correcting bugs and testing what is possible in my specific case. I'll therefore 
try to summarize at least a few of my thoughts.


1. When looking through Dmitry's patch I get the impression that the iack 
handler has very little to do with each interrupt (the test 'prev->iack != 
intr->iack' is a dead giveaway), but is more of a domain-specific function (or 
perhaps even just a placeholder for the hijacked Linux ack-function).



2. Somewhat inspired by the figure in "Life with Adeos", I have identified the 
following cases:


  irq K  | --- | ---o|   // Linux only
  ...
  irq L  | ---o| |   // RT-only
  ...
  irq M  | ---o--- | ---o|   // Shared between domains
  ...
  irq N  | ---o---o--- | |   // Shared inside single domain
  ...
  irq O  | ---o---o--- | ---o|   // Shared between and inside single domain

Xenomai currently handles the K & L cases, and Dmitry's patch addresses the N 
case; with edge-triggered interrupts the M (and O, after Dmitry's patch) case(s) 
might be handled by returning RT_INTR_CHAINED | RT_INTR_ENABLE from the interrupt 
handler; for level-triggered interrupts the M and O cases can't be handled.


If one looks more closely at the K case (Linux-only interrupt): when an interrupt 
occurs, the call to irq_end is postponed until the Linux interrupt handler has 
run, i.e. further interrupts are disabled. This can be seen as a lazy version of 
Philippe's idea of disabling all non-RT interrupts until the RT domain is idle, 
i.e. the interrupt is disabled only if it indeed occurs.


If this idea is to be generalized to the M (and O) case(s), one can't rely on 
postponing the irq_end call (since the interrupt is still needed in the 
RT-domain), but has to rely on some function that disables all non-RT hardware 
that generates interrupts on that irq-line; such a function naturally has to 
have intimate knowledge of all hardware that can generate interrupts in order to 
be able to disable those interrupt sources that are non-RT.


If we then take Jan's observation about the many (Linux-only) interrupts present 
in an ordinary PC and add it to Philippe's idea of disabling all non-RT 
interrupts while executing in the RT-domain, I think that the following is a 
workable (and fairly efficient) way of handling this:


Add hardware-dependent enable/disable functions, where enable is called just 
before normal execution in a domain starts (i.e. while interrupts are being 
played back, the disable is still in effect), and disable is called when normal 
domain execution ends. This effectively handles the K case above, with the 
added benefit that NO non-RT interrupts will occur during RT execution.


In the 8259 case, the disable function could look something like:

  domain_irq_disable(uint irqmask) {
    if ((irqmask & 0xff00) != 0xff00) { // parenthesized: != binds tighter than &
      irqmask &= ~0x0004; // Cascaded interrupt (IRQ2) is still needed
      outb(irqmask >> 8, PIC_SLAVE_IMR);
    }
    outb(irqmask, PIC_MASTER_IMR);
  }

If we should extend this to handle the M (and O) case(s), the disable function 
could look like:


  domain_irq_disable(uint irqmask, shared_irq_t *shared[]) {
    int i;

    for (i = 0; i < MAX_IRQ; i++) {
      if (shared[i]) {
        shared_irq_t *next = shared[i];
        irqmask &= ~(1 << i);
        while (next) {
          next->disable();
          next = next->next;
        }
      }
    }
    if ((irqmask & 0xff00) != 0xff00) {
      irqmask &= ~0x0004; // Cascaded interrupt is still needed
      outb(irqmask >> 8, PIC_SLAVE_IMR);
    }
    outb(irqmask, PIC_MASTER_IMR);
  }

An obvious optimization of the above scheme is to never call the disable (or 
enable) function for the RT domain, since there all interrupt processing is 
protected by the hardware.


Comments, anyone?

--

Anders




Re: [Xenomai-core] More on Shared interrupts

2006-02-09 Thread Jan Kiszka
Anders Blomdell wrote:
> For the last few days, I have tried to figure out a good way to share
> interrupts between RT and non-RT domains. This has included looking
> through Dmitry's patch, correcting bugs and testing what is possible in
> my specific case. I'll therefore try to summarize at least a few of my
> thoughts.
> 
> 1. When looking through Dmitry's patch I get the impression that the
> iack handler has very little to do with each interrupt (the test
> 'prev->iack != intr->iack' is a dead giveaway), but is more of a
> domain-specific function (or perhaps even just a placeholder for the
> hijacked Linux ack-function).
> 
> 
> 2. Somewhat inspired by the figure in "Life with Adeos", I have
> identified the following cases:
> 
>   irq K  | --- | ---o|   // Linux only
>   ...
>   irq L  | ---o| |   // RT-only
>   ...
>   irq M  | ---o--- | ---o|   // Shared between domains
>   ...
>   irq N  | ---o---o--- | |   // Shared inside single domain
>   ...
>   irq O  | ---o---o--- | ---o|   // Shared between and inside single
> domain
> 
> Xenomai currently handles the K & L cases, Dmitrys patch addresses the N
> case, with edge triggered interrupts the M (and O after Dmitry's patch)
> case(s) might be handled by returning RT_INTR_CHAINED | RT_INTR_ENABLE
> from the interrupt handler, for level triggered interrupt the M and O
> cases can't be handled.

I guess you mean it the other way around: for the edge-triggered
cross-domain case we would actually have to loop over both the RT and
the Linux handlers until we are sure that the IRQ line was released once.

Luckily, I have never seen such a scenario that was unavoidable (it hits you
with ISA hardware, which tends to have nice IRQ jumpers or other
workarounds - it's just that you often cannot separate several controllers
on the same extension card IRQ-wise).

> 
> If one looks more closely at the K case (Linux only interrupt), it works
> by when an interrupt occurs, the call to irq_end is postponed until the
> Linux interrupt handler has run, i.e. further interrupts are disabled.
> This can be seen as a lazy version of Philippe's idea of disabling all
> non-RT interrupts until the RT-domain is idle, i.e. the interrupt is
> disabled only if it indeed occurs.
> 
> If this idea should be generalized to the M (and O) case(s), one can't
> rely on postponing the irq_end call (since the interrupt is still needed
> in the RT-domain), but has to rely on some function that disables all
> non-RT hardware that generates interrupts on that irq-line; such a
> function naturally has to have intimate knowledge of all hardware that
> can generate interrupts in order to be able to disable those interrupt
> sources that are non-RT.
> 
> If we then take Jan's observation about the many (Linux-only) interrupts
> present in an ordinary PC and add it to Philippe's idea of disabling all
> non-RT interrupts while executing in the RT-domain, I think that the
> following is a workable (and fairly efficient) way of handling this:
> 
> Add hardware dependent enable/disable functions, where the enable is
> called just before normal execution in a domain starts (i.e. when
> playing back interrupts, the disable is still in effect), and disable is
> called when normal domain execution end. This does effectively handle
> the K case above, with the added benefit that NO non-RT interrupts will
> occur during RT execution.
> 
> In the 8259 case, the disable function could look something like:
> 
>   domain_irq_disable(uint irqmask) {
> if (irqmask & 0xff00 != 0xff00) {
>   irqmask &= ~0x0004; // Cascaded interrupt is still needed
>   outb(irqmask >> 8, PIC_SLAVE_IMR);
> }
> outb(irqmask, PIC_MASTER_IMR);
>   }
> 
> If we should extend this to handle the M (and O) case(s), the disable
> function could look like:
> 
>   domain_irq_disable(uint irqmask, shared_irq_t *shared[]) {
> int i;
> 
> for (i = 0 ; i < MAX_IRQ ; i++) {
>   if (shared[i]) {
> shared_irq_t *next = shared[i];
> irqmask &= ~(1 << i);
> while (next) {
>   next->disable();
>   next = next->next;
> }

This obviously means that all non-RT IRQ handlers sharing a line with
the RT domain would have to be registered in that shared[] list. This
gets close to my old suggestion. It just raises the question of how to
organise this interface, both on the RT and the Linux side.

>   }
> }
> if (irqmask & 0xff00 != 0xff00) {
>   irqmask &= ~0x0004; // Cascaded interrupt is still needed
>   outb(irqmask >> 8, PIC_SLAVE_IMR);
> }
> outb(irqmask, PIC_MASTER_IMR);
>   }
> 
> An obvious optimization of the above scheme, is to never call the
> disable (or enable) function for the RT-domain, since there all
> interrupt processing is protected by the hardware.

Another point is to avoid looping over disable handlers for IRQs of the
K case. Otherwise, too many device-specific disable handlers would have
to be implemented e

[Xenomai-core] Isolated CPU, SMI problems and thoughts

2006-02-09 Thread John Schipper

Hello,
I'm new to this list but have in the past used RTAI in a single 
processor, off-the-shelf solution. I'm looking to switch to the native 
Xenomai API but have a general problem...
 The problem is SMI on new systems, and other latency killers that 
sometimes are not controllable by software, always popping up when trying 
to migrate to a newer platform.  Can a dual-core processor using 
isolcpus, preempt-rt and Xenomai effectively future-proof against SMI/chipset 
issues (specifically AMD or Intel dual-core solutions) by isolating a 
CPU for exclusive Xenomai/realtime use?


Some background information: 
 The realtime software I've developed in the past (with RTAI/Adeos in 
user space) is a simple high-speed serializer driver (mmap) to 
communicate with outside hardware; it is responsible for synchronizing 
(with a semaphore/mutex) a Linux process (soft realtime) at ~60 Hz.  The 
realtime process is periodic at 1.2 kHz or 2.4 kHz and calculates/filters 
the data before sending commands back down the serializer interface and 
to the Linux process for soft-realtime network access.


 Generally we like to use "off the shelf" business PCs (Dell 170s and 
Dell 270s, HP 5000 with 1 GB memory) and find that 20-30 us latency is 
achievable.  We use "off the shelf" hardware because availability 
(receive within a week) and low cost are desired.  Whenever we look for 
an alternative solution, either availability or cost becomes a show 
stopper.  I'm open to suggestions and invite anyone's thoughts on the 
subject.


Thanks for your time !

JKS





Re: [Xenomai-core] Isolated CPU, SMI problems and thoughts

2006-02-09 Thread Jan Kiszka
John Schipper wrote:
> Hello,
> I'm new to this list but have in the past used RTAI in a single
> processor, off-the-shelf solution. I'm looking to switch to the native
> Xenomai API but have a general problem...
>  The problem is SMI on new systems, and other latency killers that

What do you mean with "other" precisely?

> sometimes are not controllable by software, always popping up when trying
> to migrate to a newer platform.  Can a dual-core processor using
> isolcpus, preempt-rt and Xenomai effectively future-proof against
> SMI/chipset issues (specifically AMD or Intel dual-core solutions) by
> isolating a CPU for exclusive Xenomai/realtime use?

SMI can only be addressed with CPU isolation if you are able to redirect
the related event only to one CPU. Don't know if this is possible / default.

There are tricks to disable SMI on modern Intel chipsets. Xenomai
implements this; you can select the workaround during system
configuration. Doesn't this work for your particular systems? Then please
report details.

"Other", more subtle latency issues can only be addressed when the
mechanisms behind them are understood. Depends on the chipset
manufacturer's documentation. So, no general answer is possible.

> 
> Some background information:  The realtime software I've developed in
> the past (with RTAI/Adeos in user space) is a simple high-speed
> serializer driver (mmap) to communicate with outside hardware and is
> responsible for synchronizing (with a semaphore/mutex) a Linux process
> (soft realtime) at ~60 Hz.  The realtime process is periodic at 1.2 kHz
> or 2.4 kHz and calculates/filters the data before sending commands back
> down the serializer interface and to the Linux process for soft-realtime
> network access.
> 
> Generally we like to use "off the shelf" business PCs (Dell 170s and
> Dell 270s, HP 5000 with 1 GB memory) and find that 20-30 us latency is
> achievable.  We use "off the shelf" hardware because availability
> (receive within a week) and low cost are desired.  Whenever we look for
> an alternative solution, either availability or cost becomes a show
> stopper.  I'm open to suggestions and invite anyone's thoughts on the
> subject.

Using off-the-shelf standard systems is always risky. I've heard of a
larger German automation company ordering standard PC hardware for
industrial control purposes only after initial latency tests on a
specific batch were successful. Ok, they are ordering large enough
quantities, so they can dictate certain conditions...

Jan





Re: [Xenomai-core] Isolated CPU, SMI problems and thoughts

2006-02-09 Thread John Schipper

Jan Kiszka wrote:
> John Schipper wrote:
>> Hello,
>> I'm new to this list but have in the past used RTAI in a single
>> processor, off-the-shelf solution. I'm looking to switch to the native
>> Xenomai API but have a general problem...
>>  The problem is SMI on new systems, and other latency killers that
>
> What do you mean with "other" precisely?

I have seen on some systems the need to disable USB legacy emulation and 
to only use a PS/2 keyboard. The USB interface (tied to a USB keyboard 
and mouse) is standard on a Dell GX280, which has no PS/2 interface, and 
I have seen real-time having problems there. This is due, I believe, to 
SMM mode being a latency killer (not sure whether this is SMI-related).


In regard to SMI, I poked around the Intel chipset registers (sorry, I 
can't remember what the GX280 chipset was, but I could get it if 
important!) and found that there are lock bits that do not allow disabling 
global SMI or the watchdog capability. This was at least 6 months ago, so 
I have not tested the latest SMI workaround module (in RTAI at least, 
because I have not used Xenomai yet but plan to :) )


 


>>  The problem is ... sometimes not controllable by software, always
>> popping up when trying to migrate to a newer platform.  Can a dual-core
>> processor using isolcpus, preempt-rt and Xenomai effectively
>> future-proof against SMI/chipset issues (specifically AMD or Intel
>> dual-core solutions) by isolating a CPU for exclusive Xenomai/realtime
>> use?
>
> SMI can only be addressed with CPU isolation if you are able to redirect
> the related event only to one CPU. Don't know if this is possible /
> default.

This redirection has occurred to me also. A couple of months ago I 
inquired with Dell about doing this and did not get confirmation either 
way; maybe they do not know and I need to ask Intel/Via/AMD?.. It's not 
clear to me whether this would (if even possible) be a BIOS feature, or 
only controllable by a special chipset kernel patch or module.



> There are tricks to disable SMI on modern Intel chipsets. Xenomai
> implements this, you can select the workaround during system
> configuration. Doesn't this work for your particular systems? Then
> please report details.

The SMI tricks had not worked on a Dell GX280 system, but I plan to 
start some more testing again with Xenomai. The fact that they did not 
work prompted me to dig into the SMI/SMM registers, which is where I 
discovered that I could not get past the lock-bit capability of the 
SMI/SMM chipset being set by the BIOS (not positive this is where it was 
happening), and after some failed attempts I just continued with the 
Dell 170/270 solution. If there is an advantage, or maybe even a hope :), 
that an isolated CPU will help in regard to SMI/SMM, then I plan to go 
ahead and order some systems to test; otherwise I may continue testing 
single-core solutions.



> "Other", more subtle latency issues can only be addressed when the
> mechanisms behind them are understood. Depends on the chipset
> manufacturer's documentation. So, no general answer is possible.
>
>> Some background information:  The realtime software I've developed in
>> the past (with RTAI/Adeos in user space) is a simple high-speed
>> serializer driver (mmap) to communicate with outside hardware and is
>> responsible for synchronizing (with a semaphore/mutex) a Linux process
>> (soft realtime) at ~60 Hz.  The realtime process is periodic at 1.2 kHz
>> or 2.4 kHz and calculates/filters the data before sending commands back
>> down the serializer interface and to the Linux process for soft-realtime
>> network access.
>>
>> Generally we like to use "off the shelf" business PCs (Dell 170s and
>> Dell 270s, HP 5000 with 1 GB memory) and find that 20-30 us latency is
>> achievable.  We use "off the shelf" hardware because availability
>> (receive within a week) and low cost are desired.  Whenever we look for
>> an alternative solution, either availability or cost becomes a show
>> stopper.  I'm open to suggestions and invite anyone's thoughts on the
>> subject.
  



> Using off the shelf standard systems is always risky. I've heard of a
> larger German automation company ordering standard PC hardware for
> industrial control purposes only after initial latency tests on a
> specific batch were successful. Ok, they are ordering large enough
> quantities, so they can dictate certain conditions...

Testing the system is done to verify latency and the expected real-time 
isolation from the normal Linux tasks. It's done on a standard system, 
and once verified that system is used until it is no longer available, 
which seems to be happening at an increasing rate. PCI Express may be a 
reason, maybe money... but systems don't last much longer than a year 
and a half. Either way, the motherboards themselves don't stay around 
too long. Our quantity is not there for the needed leverage with 
providers :( . The system is used for an industrial purpose which is NOT 
a life saving/risking situation; it's used for simulation or simulator 
purposes.



> Jan

John