Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 > 
 > And another here, whereas if I understand correctly, the mm did not
 > change. So, this is probably an unwanted effect of the cache flush
 > "optimization" in the arm patch.
 > 
 > I will now try to understand if this second cache flush is really normal.

Yes, it is normal: the first context switch, which xnshadow_relax does,
is a switch to whatever task Linux was running when preempted, not
necessarily latency (and it turns out never to be latency when we
capture the worst case), hence the first cache flush. We then interrupt
Linux again after this context switch, switch back to latency, and get
a second cache flush.

So, the conclusion is: everything is normal. What we observe when
pressing the enter key while latency is running in the background is a
wakeup of the shell process; that process uses the cache, so the next
context switches back to latency need a cache flush.

In other words: pressing the enter key yields the same latency as
running the cache calibrator because it has the same effect: it fills
the cache.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Jan Kiszka

Gilles Chanteperdrix wrote:

Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Jan Kiszka wrote:
 > >  > Gilles Chanteperdrix wrote:
 > >  > > Jan Kiszka wrote:
 > >  > >  > Gilles Chanteperdrix wrote:
 > >  > >  > > Would it not be simpler to put a pointer to the task_struct? After all,
 > >  > >  > > it already has a pid, comm and mm, and a file descriptor will not
 > >  > >  > > survive a task_struct thanks to automatic closing of file descriptors.
 > >  > > 
 > >  > >  > Hmm, hmm, hmm... Sounds reasonable, should be safe.
 > >  > > 
 > >  > > Actually no, we cannot do that, because a task_struct may well disappear
 > >  > > while the rtdm_process continues to exist, as long as another thread uses
 > >  > > the same mm.
 > >  > 
 > >  > Because we clean up on mm exit, not task exit, right? OK, looks like I
 > >  > originally gave this a few more thoughts than this time :-/.
 > >  > 
 > >  > > 
 > >  > >  > 
 > >  > >  > > 
 > >  > >  > >  > Could you
 > >  > >  > >  > live without the check until we have per-process fd tables, or was it
 > >  > >  > >  > essential for the select thing?
 > >  > >  > > 
 > >  > >  > > An application which I ported to Xenomai (and which uses the select
 > >  > >  > > call) closes all file descriptors in a for loop. The purpose of this
 > >  > >  > > loop is, I guess, to avoid leaving open a file descriptor that was
 > >  > >  > > passed through exec.
 > >  > >  > 
 > >  > >  > OK.
 > >  > >  > 
 > >  > >  > So, will you change rtdm_process too? Thanks.
 > >  > > 
 > >  > > I committed the select support, without any change to rtdm_context_get or
 > >  > > rtdm_process. So, now, how do you prefer this to be fixed, by adding an
 > >  > > mm struct to the rtdm_process struct? By the way, after thinking about
 > >  > > it, I can live without this fix: I just have to stop the loop closing
 > >  > > file descriptors at 768, so that it will not try to close RTDM file
 > >  > > descriptors.
 > >  > 
 > >  > If you can live with it, I would vote for fixing it by the intended 
 > >  > redesign via per-process fds.
 > > 
 > > Ok.
 > > 
 > >  > 
 > >  > > 
 > >  > > While committing the support for select, I also had a dependency problem
 > >  > > in Kconfig: when support for posix select is enabled, the posix module
 > >  > > uses a function defined in the RTDM module. So, there is one invalid
 > >  > > configuration: posix built-in with support for select, and rtdm built as
 > >  > > a module. I could not find a way to express this condition in the
 > >  > > Kconfig language, so I just made a comment depend on this condition, but
 > >  > > I would be happy if anyone found a better solution.
 > >  > 
 > >  > I would say:
 > >  > 
 > >  > config POSIX
 > >  >     select RTDM if OPT_SELECT
 > > 
 > > But in this case, I cannot have posix with select and without rtdm.
 > 
 > Then you need to isolate those services that POSIX needs from RTDM. The 
 > above just expresses the dependency you described. 


Then I was wrong in what I described: the problem is that if posix is
built-in and RTDM is enabled, then RTDM must be built-in.


That should be covered by the kconfig rule automatically.
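For reference, a hedged sketch of how that constraint could be stated
directly in Kconfig; the symbol names below are assumed for
illustration, not necessarily Xenomai's actual ones:

config XENO_OPT_POSIX_SELECT
	bool "Select syscall support"
	depends on XENO_SKIN_POSIX
	depends on XENO_SKIN_RTDM!=m || XENO_SKIN_POSIX!=y

The second "depends on" rules out exactly the invalid combination
(posix built-in, RTDM modular) while still permitting select support
when RTDM is disabled entirely - which in turn assumes the RTDM call
is ifdef'ed out or routed through a callback, as discussed below.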



 > In the end this just
 > shows that we have to define the common fd-ground for both skins in the
 > core.


I also have the choice of defining the service needed
(rt_dev_select_bind) as a callback in the posix module, the RTDM module
setting this callback when loaded (like what the rtcap module does with
rtnet). But I wanted something simple, so I aimed at Kconfig stuff.



If you can live with the callback being NULL, you could also perfectly
well wrap some #ifdef CONFIG_...RTDM[_MODULE] around the current
invocations. Then you don't need the dependency above. I think I have
to look at the code...


Jan





Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > I-pipe frozen back-tracing service on 2.6.20/ipipe-1.8-04
 > > 
 > > CPU: 0, Freeze: 450692973 cycles, Trace Points: 1000 (+10)
 > > Calibrated minimum trace-point overhead: 1.000 us
 > 
 > That is interesting. It tells us that we might subtract 1 us
 > _per_tracepoint_ from the given latencies due to the inherent tracer
 > overhead. We have about 50 entries in the critical path, so 50 us
 > compared to 220 us that were measured - roughly 170 us real latency.
 > 
 > What is the clock resolution btw? 500 ns?
 > 
 > So here is the interesting block, starting with the last larger IRQs-on
 > window.
 > 
 > > :   + func-447+   2.500  xnshadow_relax+0x14 
 > > (hisyscall_event+0x210)
 > > :|  + begin   0x8000  -445+   3.000  xnshadow_relax+0xd4 
 > > (hisyscall_event+0x210)
 > > :|  # func-442+   5.000  schedule_linux_call+0x10 
 > > (xnshadow_relax+0x114)
 > > :|  # func-437+   4.000  rthal_apc_schedule+0x10 
 > > (schedule_linux_call+0x1e8)
 > > :|  # func-433+   5.000  __ipipe_schedule_irq+0x10 
 > > (rthal_apc_schedule+0xac)
 > > :|  # func-428+   4.500  __ipipe_set_irq_pending+0x10 
 > > (__ipipe_schedule_irq+0xa4)
 > > :|  # func-423+   3.500  rpi_push+0x14 
 > > (xnshadow_relax+0x11c)
 > > :|  # func-420+   5.500  xnpod_suspend_thread+0x14 
 > > (xnshadow_relax+0x148)
 > > :|  # func-414+   4.000  xnpod_schedule+0x14 
 > > (xnpod_suspend_thread+0x60c)
 > > :|  # [  752] --0  -410+   7.000  xnpod_schedule+0xc8 
 > > (xnpod_suspend_thread+0x60c)
 > > :|  # func-403!  56.000  xnheap_finalize_free_inner+0x10 
 > > (xnpod_schedule+0x82c)

Ok, we get a cache flush here

 > > :|  # [0] --   -1  -347!  20.000  xnpod_schedule+0xb14 
 > > (xnintr_clock_handler+0xa0)
 > > :|   +func-327+   3.000  __ipipe_walk_pipeline+0x10 
 > > (__ipipe_handle_irq+0x124)
 > > :|   +func-324+   7.500  __ipipe_sync_stage+0x14 
 > > (__ipipe_walk_pipeline+0xa8)
 > > :|   #end 0x8000  -317+   7.000  __ipipe_sync_stage+0x250 
 > > (__ipipe_walk_pipeline+0xa8)
 > 
 > OK, the clock starts ticking...
 > 
 > > :|   #func-310+   2.500  __ipipe_grab_irq+0x10 
 > > (__irq_svc+0x28)
 > > :|   #begin   0x  -307+   7.000  __ipipe_grab_irq+0x20 
 > > (__irq_svc+0x28)
 > > :|   #(0x2a)  0x0012  -300+   5.000  __ipipe_grab_irq+0x2c 
 > > (__irq_svc+0x28)
 > > :|   #func-295+   4.000  __ipipe_handle_irq+0x10 
 > > (__ipipe_grab_irq+0x104)
 > > :|   #func-291+   2.500  __ipipe_ack_timerirq+0x10 
 > > (__ipipe_handle_irq+0x74)
 > > :|   #func-289+   3.000  __ipipe_ack_level_irq+0x10 
 > > (__ipipe_ack_timerirq+0x30)
 > > :|   #func-286+   2.000  at91_aic_mask_irq+0x10 
 > > (__ipipe_ack_level_irq+0x3c)
 > > :|   #func-284+   2.000  at91_aic_mask_irq+0x10 
 > > (__ipipe_ack_level_irq+0x4c)
 > 
 > (Without looking at the arm code: Is this double invocation of
 > at91_aic_mask_irq correct and required?)
 > 
 > > :|   #func-282+   4.000  __ipipe_mach_acktimer+0x10 
 > > (__ipipe_ack_timerirq+0x40)
 > > :|   #func-278+   2.000  __ipipe_end_level_irq+0x10 
 > > (__ipipe_ack_timerirq+0x50)
 > > :|   #func-276+   2.500  at91_aic_unmask_irq+0x10 
 > > (__ipipe_end_level_irq+0x28)
 > > :|   #func-273+   3.500  __ipipe_dispatch_wired+0x14 
 > > (__ipipe_handle_irq+0x80)
 > > :|  #*func-270+   3.500  xnintr_clock_handler+0x10 
 > > (__ipipe_dispatch_wired+0xe4)
 > > :|  #*func-266+   9.500  xntimer_tick_aperiodic+0x14 
 > > (xnintr_clock_handler+0x34)
 > > :|  #*func-257+   3.500  xnthread_periodic_handler+0x10 
 > > (xntimer_tick_aperiodic+0x354)
 > > :|  #*func-253+   4.000  xnpod_resume_thread+0x14 
 > > (xnthread_periodic_handler+0x34)
 > > :|  #*[  753] --   99  -249!  15.000  xnpod_resume_thread+0x84 
 > > (xnthread_periodic_handler+0x34)
 > 
 > Hmm, comparably costly, this simple resume. Hope it's not the
 > instrumentation (ipipe_trace_pid?) itself.
 > 
 > > :|  #*func-234+   6.500  xntimer_next_local_shot+0x10 
 > > (xntimer_tick_aperiodic+0x7c0)
 > > :|  #*func-228+   4.000  __ipipe_mach_set_dec+0x10 
 > > (xntimer_next_local_shot+0xbc)
 > > :|  #*func-224+   3.500  xnpod_schedule+0x14 
 > > (xnintr_clock_handler+0xa0)
 > > :|  #*[0] --   -1  -220!  59.500  xnpod_schedule+0xc8 
 > > (xnintr_clock_handler+0xa0)
 > 
 > OK, this is the cache flushing thing, I guess. Expected.

And another here, whereas if I understand correctly, the mm did not
change. So, this is probably an unwanted effect of the cache flush
"optimization" in the arm patch.

I will now try to understand if this second cache flush is really normal.

Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Jan Kiszka wrote:
 > >  > Gilles Chanteperdrix wrote:
 > >  > > Jan Kiszka wrote:
 > >  > >  > Gilles Chanteperdrix wrote:
 > >  > >  > > Would it not be simpler to put a pointer to the task_struct? After all,
 > >  > >  > > it already has a pid, comm and mm, and a file descriptor will not
 > >  > >  > > survive a task_struct thanks to automatic closing of file descriptors.
 > >  > >  > 
 > >  > >  > Hmm, hmm, hmm... Sounds reasonable, should be safe.
 > >  > > 
 > >  > > Actually no, we cannot do that, because a task_struct may well disappear
 > >  > > while the rtdm_process continues to exist, as long as another thread uses
 > >  > > the same mm.
 > >  > 
 > >  > Because we clean up on mm exit, not task exit, right? OK, looks like I
 > >  > originally gave this a few more thoughts than this time :-/.
 > >  > 
 > >  > > 
 > >  > >  > 
 > >  > >  > > 
 > >  > >  > >  > Could you
 > >  > >  > >  > live without the check until we have per-process fd tables, or was it
 > >  > >  > >  > essential for the select thing?
 > >  > >  > > 
 > >  > >  > > An application which I ported to Xenomai (and which uses the select
 > >  > >  > > call) closes all file descriptors in a for loop. The purpose of this
 > >  > >  > > loop is, I guess, to avoid leaving open a file descriptor that was
 > >  > >  > > passed through exec.
 > >  > >  > 
 > >  > >  > OK.
 > >  > >  > 
 > >  > >  > So, will you change rtdm_process too? Thanks.
 > >  > > 
 > >  > > I committed the select support, without any change to rtdm_context_get or
 > >  > > rtdm_process. So, now, how do you prefer this to be fixed, by adding an
 > >  > > mm struct to the rtdm_process struct? By the way, after thinking about
 > >  > > it, I can live without this fix: I just have to stop the loop closing
 > >  > > file descriptors at 768, so that it will not try to close RTDM file
 > >  > > descriptors.
 > >  > 
 > >  > If you can live with it, I would vote for fixing it by the intended 
 > >  > redesign via per-process fds.
 > > 
 > > Ok.
 > > 
 > >  > 
 > >  > > 
 > >  > > While committing the support for select, I also had a dependency problem
 > >  > > in Kconfig: when support for posix select is enabled, the posix module
 > >  > > uses a function defined in the RTDM module. So, there is one invalid
 > >  > > configuration: posix built-in with support for select, and rtdm built as
 > >  > > a module. I could not find a way to express this condition in the
 > >  > > Kconfig language, so I just made a comment depend on this condition, but
 > >  > > I would be happy if anyone found a better solution.
 > >  > 
 > >  > I would say:
 > >  > 
 > >  > config POSIX
 > >  > select RTDM if OPT_SELECT
 > > 
 > > But in this case, I cannot have posix with select and without rtdm.
 > 
 > Then you need to isolate those services that POSIX needs from RTDM. The 
 > above just expresses the dependency you described. 

Then I was wrong in what I described: the problem is that if posix is
built-in and RTDM is enabled, then RTDM must be built-in.

 > In the end this just
 > shows that we have to define the common fd-ground for both skins in the
 > core.

I also have the choice of defining the service needed
(rt_dev_select_bind) as a callback in the posix module, the RTDM module
setting this callback when loaded (like what the rtcap module does with
rtnet). But I wanted something simple, so I aimed at Kconfig stuff.
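A hedged sketch of that callback wiring; the hook name, the signature
and the use of struct xnselector are assumptions for illustration, not
the committed code:

#include <linux/module.h>	/* EXPORT_SYMBOL_GPL */

/* posix module: NULL until the RTDM module loads and fills it in. */
int (*rtdm_select_bind_hook)(int fd, struct xnselector *selector,
			     unsigned type, unsigned index);
EXPORT_SYMBOL_GPL(rtdm_select_bind_hook);

/* posix select path: degrade gracefully while RTDM is not loaded. */
static int bind_rtdm_fd(int fd, struct xnselector *selector,
			unsigned type, unsigned index)
{
	if (rtdm_select_bind_hook == NULL)
		return -EBADF;	/* no RTDM loaded, cannot be an RTDM fd */
	return rtdm_select_bind_hook(fd, selector, type, index);
}

/* The RTDM module would set the hook in its init code and clear it
 * on exit:
 *	rtdm_select_bind_hook = rt_dev_select_bind;	(on load)
 *	rtdm_select_bind_hook = NULL;			(on unload)
 */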

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Jan Kiszka

Gilles Chanteperdrix wrote:

Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Jan Kiszka wrote:
 > >  > Gilles Chanteperdrix wrote:
 > >  > > Would it not be simpler to put a pointer to the task_struct? After all,
 > >  > > it already has a pid, comm and mm, and a file descriptor will not
 > >  > > survive a task_struct thanks to automatic closing of file descriptors.
 > >  > 
 > >  > Hmm, hmm, hmm... Sounds reasonable, should be safe.
 > > 
 > > Actually no, we cannot do that, because a task_struct may well disappear
 > > while the rtdm_process continues to exist, as long as another thread uses the
 > > same mm.
 > 
 > Because we clean up on mm exit, not task exit, right? OK, looks like I
 > originally gave this a few more thoughts than this time :-/.
 > 
 > > 
 > >  > 
 > >  > > 
 > >  > >  > Could you
 > >  > >  > live without the check until we have per-process fd tables, or was it
 > >  > >  > essential for the select thing?
 > >  > > 
 > >  > > An application which I ported to Xenomai (and which uses the select
 > >  > > call) closes all file descriptors in a for loop. The purpose of this
 > >  > > loop is, I guess, to avoid leaving open a file descriptor that was
 > >  > > passed through exec.
 > >  > 
 > >  > OK.
 > >  > 
 > >  > So, will you change rtdm_process too? Thanks.
 > > 
 > > I committed the select support, without any change to rtdm_context_get or
 > > rtdm_process. So, now, how do you prefer this to be fixed, by adding an
 > > mm struct to the rtdm_process struct? By the way, after thinking about
 > > it, I can live without this fix: I just have to stop the loop closing
 > > file descriptors at 768, so that it will not try to close RTDM file
 > > descriptors.
 > 
 > If you can live with it, I would vote for fixing it by the intended 
 > redesign via per-process fds.


Ok.

 > 
 > > 
 > > While committing the support for select, I also had a dependency problem
 > > in Kconfig: when support for posix select is enabled, the posix module
 > > uses a function defined in the RTDM module. So, there is one invalid
 > > configuration: posix built-in with support for select, and rtdm built as
 > > a module. I could not find a way to express this condition in the
 > > Kconfig language, so I just made a comment depend on this condition, but
 > > I would be happy if anyone found a better solution.
 > 
 > I would say:
 > 
 > config POSIX
 >   select RTDM if OPT_SELECT

But in this case, I cannot have posix with select and without rtdm.


Then you need to isolate those services that POSIX needs from RTDM. The 
above just expresses the dependency you described. In the end this just 
shows that we have to define the common fd-ground for both skins in the 
core.


Jan





Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Jan Kiszka wrote:
 > >  > Gilles Chanteperdrix wrote:
 > >  > > Would it not be simpler to put a pointer to the task_struct? After all,
 > >  > > it already has a pid, comm and mm, and a file descriptor will not
 > >  > > survive a task_struct thanks to automatic closing of file descriptors.
 > >  > 
 > >  > Hmm, hmm, hmm... Sounds reasonable, should be safe.
 > > 
 > > Actually no, we cannot do that, because a task_struct may well disappear
 > > while the rtdm_process continues to exist, as long as another thread uses the
 > > same mm.
 > 
 > Because we clean up on mm exit, not task exit, right? OK, looks like I
 > originally gave this a few more thoughts than this time :-/.
 > 
 > > 
 > >  > 
 > >  > > 
 > >  > >  > Could you
 > >  > >  > live without the check until we have per-process fd tables, or was it
 > >  > >  > essential for the select thing?
 > >  > > 
 > > An application which I ported to Xenomai (and which uses the select
 > > call) closes all file descriptors in a for loop. The purpose of this
 > > loop is, I guess, to avoid leaving open a file descriptor that was
 > > passed through exec.
 > >  > 
 > >  > OK.
 > >  > 
 > >  > So, will you change rtdm_process too? Thanks.
 > > 
 > > I committed the select support, without any change to rtdm_context_get or
 > > rtdm_process. So, now, how do you prefer this to be fixed, by adding an
 > > mm struct to the rtdm_process struct? By the way, after thinking about
 > > it, I can live without this fix: I just have to stop the loop closing
 > > file descriptors at 768, so that it will not try to close RTDM file
 > > descriptors.
 > 
 > If you can live with it, I would vote for fixing it by the intended 
 > redesign via per-process fds.

Ok.

 > 
 > > 
 > > While committing the support for select, I also had a dependency problem
 > > in Kconfig: when support for posix select is enabled, the posix module
 > > uses a function defined in the RTDM module. So, there is one invalid
 > > configuration: posix built-in with support for select, and rtdm built as
 > > a module. I could not find a way to express this condition in the
 > > Kconfig language, so I just made a comment depend on this condition, but
 > > I would be happy if anyone found a better solution.
 > 
 > I would say:
 > 
 > config POSIX
 >  select RTDM if OPT_SELECT

But in this case, I cannot have posix with select and without rtdm.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Jan Kiszka

Gilles Chanteperdrix wrote:

Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Would it not be simpler to put a pointer to the task_struct? After all,
 > > it already has a pid, comm and mm, and a file descriptor will not
 > > survive a task_struct thanks to automatic closing of file descriptors.
 > 
 > Hmm, hmm, hmm... Sounds reasonable, should be safe.


Actually no, we cannot do that, because a task_struct may well disappear
while the rtdm_process continues to exist, as long as another thread uses
the same mm.


Because we clean up on mm exit, not task exit, right? OK, looks like I
originally gave this a few more thoughts than this time :-/.




 > 
 > > 
 > >  > Could you
 > >  > live without the check until we have per-process fd tables, or was it
 > >  > essential for the select thing?
 > > 
 > > An application which I ported to Xenomai (and which uses the select
 > > call) closes all file descriptors in a for loop. The purpose of this
 > > loop is, I guess, to avoid leaving open a file descriptor that was
 > > passed through exec.
 > 
 > OK.
 > 
 > So, will you change rtdm_process too? Thanks.


I committed the select support, without any change to rtdm_context_get or
rtdm_process. So, now, how do you prefer this to be fixed, by adding an
mm struct to the rtdm_process struct? By the way, after thinking about
it, I can live without this fix: I just have to stop the loop closing
file descriptors at 768, so that it will not try to close RTDM file
descriptors.


If you can live with it, I would vote for fixing it by the intended 
redesign via per-process fds.




While committing the support for select, I also had a dependency problem
in Kconfig: when support for posix select is enabled, the posix module
uses a function defined in the RTDM module. So, there is one invalid
configuration: posix built-in with support for select, and rtdm built as
a module. I could not find a way to express this condition in the
Kconfig language, so I just made a comment depend on this condition, but
I would be happy if anyone found a better solution.


I would say:

config POSIX
	select RTDM if OPT_SELECT

Jan





Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > There is a shadow relax procedure running before the timer IRQ fires,
 > and that takes another context switch. So the latency sum is:
 > 
 >  - unrelated context switch
 >  - timer IRQ
 >  - switch to woken up RT process
 >  - serial IRQ
 > 
 > Almost the theoretical worst case.

The problem is that it does not happen when launching latency with -t 1
or -t 2, or if latency is not run in the background.

Anyway, I will investigate the double-ack issue, because it could
mean that there is a double EOI, which, I guess, could cause some
trouble at the interrupt controller level.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [patch 2/4] RTDM support for select-like service.

2008-02-11 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Gilles Chanteperdrix wrote:
 > > Would it not be simpler to put a pointer to the task_struct? After all,
 > > it already has a pid, comm and mm, and a file descriptor will not
 > > survive a task_struct thanks to automatic closing of file descriptors.
 > 
 > Hmm, hmm, hmm... Sounds reasonable, should be safe.

Actually no, we cannot do that, because a task_struct may well disappear
while the rtdm_process continues to exist, as long as another thread uses
the same mm.

 > 
 > > 
 > >  > Could you
 > >  > live without the check until we have per-process fd tables, or was it
 > >  > essential for the select thing?
 > > 
 > > An application which I ported to Xenomai (and which uses the select
 > > call) closes all file descriptors in a for loop. The purpose of this
 > > loop is, I guess, to avoid leaving open a file descriptor that was
 > > passed through exec.
 > 
 > OK.
 > 
 > So, will you change rtdm_process too? Thanks.

I committed the select support, without any change to rtdm_context_get or
rtdm_process. So, now, how do you prefer this to be fixed, by adding an
mm struct to the rtdm_process struct? By the way, after thinking about
it, I can live without this fix: I just have to stop the loop closing
file descriptors at 768, so that it will not try to close RTDM file
descriptors.
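For illustration, the kind of cleanup loop meant here; the 768 cutoff
is the assumed start of the RTDM descriptor range in this setup:

#include <unistd.h>

/* Close every inherited descriptor above stdin/stdout/stderr, but
 * stop below 768 so RTDM file descriptors are left alone. */
static void close_inherited_fds(void)
{
	int fd;

	for (fd = 3; fd < 768; fd++)
		close(fd);
}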

While committing the support for select, I also had a dependency problem
in Kconfig: when support for posix select is enabled, the posix module
uses a function defined in the RTDM module. So, there is one invalid
configuration: posix built-in with support for select, and rtdm built as
a module. I could not find a way to express this condition in the
Kconfig language, so I just made a comment depend on this condition, but
I would be happy if anyone found a better solution.

-- 


Gilles Chanteperdrix.



[Xenomai-core] [PATCHES] LTTng for Xenomai

2008-02-11 Thread Jan Kiszka
[Steven, I promised you this tool earlier, and now it runs. It /may/
help to understand some of your problems, at least it should give an
overview of your schedule...]


This is an update on how to get latest LTTng running with latest Xenomai!

For those who don't know what I'm talking about: LTTng [1] is an event
tracing framework for Linux. It is fairly lightweight at runtime, and
its hooks into interesting spots of the system can easily be turned
into (almost) zero-overhead stubs when disabled. When enabled, the
recorded events are written to a log file and can later be analyzed
with the help of LTTV [1], a textual or graphical trace viewer.
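To picture those "zero-overhead stubs", the general shape is the
compile-time switch below; this is an illustrative pattern only, not
LTTng's actual macros, and ltt_log() is a hypothetical logger:

#ifdef CONFIG_LTT
#define trace_hook(event, data)	ltt_log(event, data)
#else
/* Tracing compiled out: each hook becomes an empty statement. */
#define trace_hook(event, data)	do { } while (0)
#endif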

Xenomai 2.4 already comes with the required instrumentation, all you
additionally need is a (fitting) set of LTTng patches + some minor
adjustments for the I-pipe environment.

This is what to do in order to marry LTTng with an I-pipe 2.6.24 kernel:
 - Download patch-2.6.24-lttng-0.10-pre43.tar.bz2 from [1]
 - Unpack it, move its folder to "patches" in your Xenomai/I-pipe
   kernel source tree
 - Copy the attached files into that "patches" folder as well
 - Use quilt to apply all required patches to your kernel (if ipipe was
   already applied, comment out the first line in "series"; see the
   commands sketched below)
 - Run prepare-kernel.sh if not done yet
 - Enable LTT in your config, rebuild, install, and boot the kernel
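Assuming quilt is installed, the patch-application step boils down to
something like:

  # from the top of the kernel source tree that now contains patches/
  quilt push -a     # apply every patch listed in patches/series
  quilt applied     # optional: list what has been applied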

To use LTTng, you need the control tools (currently ltt-control-0.47)
and the related viewer (lttv-0.10.0-pre10 for this combination). Build
and install both.

Now on your target, run the following to enable LTTng tracing:
 1. ltt-armall
 2. lttctl -n trace -d -l /sys/kernel/debug/ltt -t /path/to/your/trace

After running your application, stop the trace and dump it:
 1. lttctl -n trace -R
 2. lttv -m textDump -t /path/to/your/trace
(or use lttv-gui, but text dumps are IMO easier to browse at the
 moment)

I'm looking forward to feedback and hope some of you gain interesting
insights into your systems. Feel free to share your findings here and
discuss them with us - we may all benefit from this. :)

Jan

[1] http://ltt.polymtl.ca/

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
---
 arch/x86/kernel/Makefile_64 |2 
 include/linux/kernel.h  |1 
 kernel/exit.c   |1 
 kernel/sched.c  |   63 -
 mm/memory.c |  108 
 5 files changed, 175 deletions(-)

Index: b/mm/memory.c
===================================================================
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -50,7 +50,6 @@
 #include 
 #include 
 #include 
-#include 
 
 #include 
 #include 
@@ -2799,110 +2798,3 @@ int access_process_vm(struct task_struct
 
 	return buf - old_buf;
 }
-
-#ifdef CONFIG_IPIPE
-
-static inline int ipipe_pin_pte_range(struct mm_struct *mm, pmd_t *pmd,
-				      struct vm_area_struct *vma,
-				      unsigned long addr, unsigned long end)
-{
-	spinlock_t *ptl;
-	pte_t *pte;
-	
-	do {
-		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
-		if (!pte)
-			continue;
-
-		if (!pte_present(*pte)) {
-			pte_unmap_unlock(pte, ptl);
-			continue;
-		}
-
-		if (do_wp_page(mm, vma, addr, pte, pmd, ptl, *pte) == VM_FAULT_OOM)
-			return -ENOMEM;
-	} while (addr += PAGE_SIZE, addr != end);
-	return 0;
-}
-
-static inline int ipipe_pin_pmd_range(struct mm_struct *mm, pud_t *pud,
-				      struct vm_area_struct *vma,
-				      unsigned long addr, unsigned long end)
-{
-	unsigned long next;
-	pmd_t *pmd;
-
-	pmd = pmd_offset(pud, addr);
-	do {
-		next = pmd_addr_end(addr, end);
-		if (pmd_none_or_clear_bad(pmd))
-			continue;
-		if (ipipe_pin_pte_range(mm, pmd, vma, addr, next))
-			return -ENOMEM;
-	} while (pmd++, addr = next, addr != end);
-	return 0;
-}
-
-static inline int ipipe_pin_pud_range(struct mm_struct *mm, pgd_t *pgd,
-				      struct vm_area_struct *vma,
-				      unsigned long addr, unsigned long end)
-{
-	unsigned long next;
-	pud_t *pud;
-
-	pud = pud_offset(pgd, addr);
-	do {
-		next = pud_addr_end(addr, end);
-		if (pud_none_or_clear_bad(pud))
-			continue;
-		if (ipipe_pin_pmd_range(mm, pud, vma, addr, next))
-			return -ENOMEM;
-	} while (pud++, addr = next, addr != end);
-	return 0;
-}
-
-int ipipe_disable_ondemand_mappings(struct task_struct *tsk)
-{
-	unsigned long addr, next, end;
-	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	int result = 0;
-	pgd_t *pgd;
-
-	mm = get_task_mm(tsk);
-	if (!mm)
-		return -EPERM;
-
-	down_write(&mm->mmap_sem);
-	if (mm->def_flags & VM_PINNED)
-		goto done_mm;
-
-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
-		if (!is_cow_mapping(vma->vm_flags))
-			continue;
-
-		addr = vma->vm_start;
-		end = vma->vm_end;
-		
-		pgd = pgd_offset(mm, addr);
-		do {
-			next = pgd_addr_end(addr, end);
-			if (pgd_none_or_clear_bad(pgd))
-				continue;
-			if (ipipe_pin_pud_range(mm, pgd, vma, addr, next)) {
-				result = -ENOMEM;
-				goto done_mm;
-			}
-		} while (pgd++, addr = next, addr != end);

Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> On Mon, Feb 11, 2008 at 2:41 PM, Jan Kiszka <[EMAIL PROTECTED]> wrote:
>> Gilles Chanteperdrix wrote:
>>  > Juan Antonio Garcia Redondo wrote:
>>  >  > On 23/01/08 14:15, Gilles Chanteperdrix wrote:
>>  >  > > On Jan 23, 2008 11:04 AM, Gilles Chanteperdrix
>>  >  > > <[EMAIL PROTECTED]> wrote:
>>  >  > > > On Jan 23, 2008 7:52 AM, Juan Antonio Garcia Redondo
>>  >  > > >
>>  >  > > > <[EMAIL PROTECTED]> wrote:
>>  >  > > > > I see everything OK except for the first samples of cyclictests. 
>> Any comments ?
>>  >  > > >
>>  >  > > > The load you apply does not load the cache, which is a source of
>>  >  > > > jitter. You should run the cache calibrator (I do not find the 
>> cache
>>  >  > > > calibrator URL, but it is somewhere in Xenomai distribution or 
>> wiki).
>>  >  > >
>>  >  > > It is in the TROUBLESHOOTING guide, question "How do I adequately 
>> stress test".
>>  >  > >
>>  >  > > --
>>  >  > >Gilles Chanteperdrix
>>  >  >
>>  >  > Thanks Gilles, I've done more tests using the cache calibrator from
>>  >  > http://www.cwi.nl/~manegold/Calibrator. The latency numbers are very
>>  >  > similar, although I've found a strange behaviour related to telnet
>>  >  > sessions.
>>  >  >
>>  >  > Environment:
>>  >  >o Tests running from console over atmel serial port.
>>  >  >o A telnet session over on-chip ethernet.
>>  >  > o System without load.
>>  >  >
>>  >  > ./latency -p 500 -t0
>>  >  > == All results in microseconds
>>  >  > warming up...
>>  >  > RTT|  00:00:01  (periodic user-mode task, 500 us period, priority 99)
>>  >  > RTH|-RTHlat min|-lat avg|-lat max|-overrun|lat 
>> best|---lat
>>  >  > worst
>>  >  > RTD|  49.613|  52.190|  62.822|   0|  49.613| 
>> 62.822
>>  >  > RTD|  42.203|  52.512|  66.365|   0|  42.203| 
>> 66.365
>>  >  >
>>  >  >
>>  >  > Now if I hit a key on the telnet session:
>>  >  >
>>  >  > RTD|  36.726|  57.989| 109.536|   0|  31.572| 
>> 109.536  < Here I've hit the key.
>>  >  > RTD|  36.404|  51.868|  69.587|   0|  31.572| 
>> 109.536
>>  >  > RTD|  35.760|  51.868|  73.775|   0|  31.572| 
>> 109.536
>>  >  >
>>  >  > Now, I launch a script which executes four instances of cache
>>  >  > calibrator.
>>  >  >
>>  >  > RTD|  45.103|  57.667|  75.708|   0|  32.538| 
>> 122.422
>>  >  > RTD|  45.425|  57.023|  76.030|   0|  32.538| 
>> 122.422
>>  >  > RTD|  46.069|  57.023|  75.708|   0|  32.538| 
>> 122.422
>>  >  >
>>  >  > Now, I can hit a key on the telnet session without effect on the latency
>>  >  > numbers:
>>  >  >
>>  >  > RTD|  44.136|  57.989|  75.386|   0|  27.384| 
>> 128.221
>>  >  > RTD|  46.713|  57.345|  76.353|   0|  27.384| 
>> 128.221
>>  >  > RTD|  44.780|  57.345|  76.675|   0|  27.384| 
>> 128.221
>>  >  > RTD|  43.492|  56.701|  76.997|   0|  27.384| 
>> 128.221
>>  >  >
>>  >  > Now I stop the calibrator process and launch 'ping -f -s2048 
>> 192.168.2.82' from an external
>>  >  > machine.
>>  >  >
>>  >  > RTD|  40.270|  68.621|  90.850|   0|  27.384| 
>> 128.221
>>  >  > RTD|  36.082|  68.621|  88.273|   0|  27.384| 
>> 128.221
>>  >  > RTD|  40.592|  67.976|  91.494|   0|  27.384| 
>> 128.221
>>  >  > RTD|  41.237|  68.298|  89.239|   0|  27.384| 
>> 128.221
>>  >  >
>>  >  >
>>  >  > Now if I hit a key on the telnet session:
>>  >  >
>>  >  > RTD|  42.203|  67.976|  88.273|   0|  27.384| 
>> 128.221
>>  >  > RTD|  32.216|  93.427| 128.543|   0|  27.384| 
>> 128.543 <-- Here I've hit the key.
>>  >  > RTD|  42.203|  68.298|  87.628|   0|  27.384| 
>> 128.543
>>  >  >
>>  >  > And again, running the calibrator eliminates the strange
>>  >  > behaviour with the telnet session.
>>  >  >
>>  >  > Any clues ?
>>  >
>>  > Here is an update, follow-up on xenomai-core. I was finally able to
>>  > reproduce this behaviour: I run latency in the background and hit the
>>  > "Enter" key on my serial console, and get high latency figures.
>>  >
>>  > I enabled the tracer, set xenomai latency to 300us and managed to get a
>>  > trace (220us latency). However, I do not understand what is going wrong
>>  > from reading the trace, so I post it here in case someone sees something.
>>  >
>>  > Ah, and I added an ipipe_trace_special in ipipe_grab_irq to log the
>>  > number of the received irq: 1 is the serial interrupt, 18 (0x12) is
>>  > the timer interrupt.
>>  >
>>  > Inline, so that Jan can comment it.
>>
>>  Thanks, but TB is too "smart" - it cuts off everything that is marked as
>>  footer ("--"). :-/
>>
>>
>>  > I-pipe frozen back-tracing service on 2.6.20/ipipe-1.8-04

Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Gilles Chanteperdrix
On Mon, Feb 11, 2008 at 2:41 PM, Jan Kiszka <[EMAIL PROTECTED]> wrote:
>
> Gilles Chanteperdrix wrote:
>  > Juan Antonio Garcia Redondo wrote:
>  >  > On 23/01/08 14:15, Gilles Chanteperdrix wrote:
>  >  > > On Jan 23, 2008 11:04 AM, Gilles Chanteperdrix
>  >  > > <[EMAIL PROTECTED]> wrote:
>  >  > > > On Jan 23, 2008 7:52 AM, Juan Antonio Garcia Redondo
>  >  > > >
>  >  > > > <[EMAIL PROTECTED]> wrote:
>  >  > > > > I see everything OK except for the first samples of cyclictests. 
> Any comments ?
>  >  > > >
>  >  > > > The load you apply does not load the cache, which is a source of
>  >  > > > jitter. You should run the cache calibrator (I do not find the cache
>  >  > > > calibrator URL, but it is somewhere in Xenomai distribution or 
> wiki).
>  >  > >
>  >  > > It is in the TROUBLESHOOTING guide, question "How do I adequately 
> stress test".
>  >  > >
>  >  > > --
>  >  > >Gilles Chanteperdrix
>  >  >
>  >  > Thanks Gilles, I've done more tests using the cache calibrator from
>  >  > http://www.cwi.nl/~manegold/Calibrator. The latency numbers are very
>  >  > similar, although I've found a strange behaviour related to telnet
>  >  > sessions.
>  >  >
>  >  > Environment:
>  >  >o Tests running from console over atmel serial port.
>  >  >o A telnet session over on-chip ethernet.
>  >  > o System without load.
>  >  >
>  >  > ./latency -p 500 -t0
>  >  > == All results in microseconds
>  >  > warming up...
>  >  > RTT|  00:00:01  (periodic user-mode task, 500 us period, priority 99)
>  >  > RTH|-RTHlat min|-lat avg|-lat max|-overrun|lat 
> best|---lat
>  >  > worst
>  >  > RTD|  49.613|  52.190|  62.822|   0|  49.613| 62.822
>  >  > RTD|  42.203|  52.512|  66.365|   0|  42.203| 66.365
>  >  >
>  >  >
>  >  > Now if I hit a key on the telnet session:
>  >  >
>  >  > RTD|  36.726|  57.989| 109.536|   0|  31.572| 
> 109.536  < Here I've hit the key.
>  >  > RTD|  36.404|  51.868|  69.587|   0|  31.572| 
> 109.536
>  >  > RTD|  35.760|  51.868|  73.775|   0|  31.572| 
> 109.536
>  >  >
>  >  > Now, I launch a script which executes four instances of cache
>  >  > calibrator.
>  >  >
>  >  > RTD|  45.103|  57.667|  75.708|   0|  32.538| 
> 122.422
>  >  > RTD|  45.425|  57.023|  76.030|   0|  32.538| 
> 122.422
>  >  > RTD|  46.069|  57.023|  75.708|   0|  32.538| 
> 122.422
>  >  >
>  >  > Now, I can hit a key on the telnet session without effect on the latency
>  >  > numbers:
>  >  >
>  >  > RTD|  44.136|  57.989|  75.386|   0|  27.384| 
> 128.221
>  >  > RTD|  46.713|  57.345|  76.353|   0|  27.384| 
> 128.221
>  >  > RTD|  44.780|  57.345|  76.675|   0|  27.384| 
> 128.221
>  >  > RTD|  43.492|  56.701|  76.997|   0|  27.384| 
> 128.221
>  >  >
>  >  > Now I stop the calibrator process and launch 'ping -f -s2048 
> 192.168.2.82' from an external
>  >  > machine.
>  >  >
>  >  > RTD|  40.270|  68.621|  90.850|   0|  27.384| 
> 128.221
>  >  > RTD|  36.082|  68.621|  88.273|   0|  27.384| 
> 128.221
>  >  > RTD|  40.592|  67.976|  91.494|   0|  27.384| 
> 128.221
>  >  > RTD|  41.237|  68.298|  89.239|   0|  27.384| 
> 128.221
>  >  >
>  >  >
>  >  > Now if I hit a key on the telnet session:
>  >  >
>  >  > RTD|  42.203|  67.976|  88.273|   0|  27.384| 
> 128.221
>  >  > RTD|  32.216|  93.427| 128.543|   0|  27.384| 
> 128.543 <-- Here I've hit the key.
>  >  > RTD|  42.203|  68.298|  87.628|   0|  27.384| 
> 128.543
>  >  >
>  >  > And again, running the calibrator eliminates the strange
>  >  > behaviour with the telnet session.
>  >  >
>  >  > Any clues ?
>  >
>  > Here is an update, follow-up on xenomai-core. I was finally able to
>  > reproduce this behaviour: I run latency in the background and hit the
>  > "Enter" key on my serial console, and get high latency figures.
>  >
>  > I enabled the tracer, set xenomai latency to 300us and managed to get a
>  > trace (220us latency). However, I do not understand what is going wrong
>  > from reading the trace, so I post it here in case someone sees something.
>  >
>  > Ah, and I added an ipipe_trace_special in ipipe_grab_irq to log the
>  > number of the received irq: 1 is the serial interrupt, 18 (0x12) is
>  > the timer interrupt.
>  >
>  > Inline, so that Jan can comment it.
>
>  Thanks, but TB is too "smart" - it cuts off everything that is marked as
>  footer ("--"). :-/
>
>
>  > I-pipe frozen back-tracing service on 2.6.20/ipipe-1.8-04
>  > 
>  > CPU: 0, Freeze: 450692973 cycles, Trace Points: 1000 (+10)
>  > Calibrated minimum trace-point overhead: 1.000 us

Re: [Xenomai-core] [Xenomai-help] AT91SAM9260 latency

2008-02-11 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Juan Antonio Garcia Redondo wrote:
>  > On 23/01/08 14:15, Gilles Chanteperdrix wrote:
>  > > On Jan 23, 2008 11:04 AM, Gilles Chanteperdrix
>  > > <[EMAIL PROTECTED]> wrote:
>  > > > On Jan 23, 2008 7:52 AM, Juan Antonio Garcia Redondo
>  > > >
>  > > > <[EMAIL PROTECTED]> wrote:
>  > > > > I see everything OK except for the first samples of cyclictests. Any 
> comments ?
>  > > >
>  > > > The load you apply does not load the cache, which is a source of
>  > > > jitter. You should run the cache calibrator (I do not find the cache
>  > > > calibrator URL, but it is somewhere in Xenomai distribution or wiki).
>  > > 
>  > > It is in the TROUBLESHOOTING guide, question "How do I adequately stress 
> test".
>  > > 
>  > > -- 
>  > >Gilles Chanteperdrix
>  > 
>  > Thanks Gilles, I've done more tests using the cache calibrator from
>  > http://www.cwi.nl/~manegold/Calibrator. The latency numbers are very
>  > similar, although I've found a strange behaviour related to telnet
>  > sessions.
>  > 
>  > Environment:
>  >o Tests running from console over atmel serial port.
>  >o A telnet session over on-chip ethernet. 
>  > o System without load.
>  > 
>  > ./latency -p 500 -t0
>  > == All results in microseconds
>  > warming up...
>  > RTT|  00:00:01  (periodic user-mode task, 500 us period, priority 99)
>  > RTH|-RTHlat min|-lat avg|-lat max|-overrun|lat best|---lat
>  > worst
>  > RTD|  49.613|  52.190|  62.822|   0|  49.613| 62.822
>  > RTD|  42.203|  52.512|  66.365|   0|  42.203| 66.365
>  > 
>  > 
>  > Now if I hit a key on the telnet session:
>  > 
>  > RTD|  36.726|  57.989| 109.536|   0|  31.572| 109.536  
> < Here I've hit the key.
>  > RTD|  36.404|  51.868|  69.587|   0|  31.572| 109.536
>  > RTD|  35.760|  51.868|  73.775|   0|  31.572| 109.536
>  > 
>  > Now, I launch a script which executes four instances of cache
>  > calibrator.
>  > 
>  > RTD|  45.103|  57.667|  75.708|   0|  32.538| 122.422
>  > RTD|  45.425|  57.023|  76.030|   0|  32.538| 122.422
>  > RTD|  46.069|  57.023|  75.708|   0|  32.538| 122.422
>  > 
>  > Now, I can hit a key on the telnet session without effect on the latency
>  > numbers:
>  > 
>  > RTD|  44.136|  57.989|  75.386|   0|  27.384| 128.221
>  > RTD|  46.713|  57.345|  76.353|   0|  27.384| 128.221
>  > RTD|  44.780|  57.345|  76.675|   0|  27.384| 128.221
>  > RTD|  43.492|  56.701|  76.997|   0|  27.384| 128.221
>  > 
>  > Now I stop the calibrator process and launch 'ping -f -s2048 192.168.2.82' 
> from an external
>  > machine.
>  > 
>  > RTD|  40.270|  68.621|  90.850|   0|  27.384| 128.221
>  > RTD|  36.082|  68.621|  88.273|   0|  27.384| 128.221
>  > RTD|  40.592|  67.976|  91.494|   0|  27.384| 128.221
>  > RTD|  41.237|  68.298|  89.239|   0|  27.384| 128.221
>  > 
>  > 
>  > Now if I hit a key on the telnet session:
>  > 
>  > RTD|  42.203|  67.976|  88.273|   0|  27.384| 128.221
>  > RTD|  32.216|  93.427| 128.543|   0|  27.384| 128.543 
> <-- Here I've hit the key.
>  > RTD|  42.203|  68.298|  87.628|   0|  27.384| 128.543
>  > 
>  > And again, running the calibrator eliminates the strange
>  > behaviour with the telnet session.
>  > 
>  > Any clues ?
> 
> Here is an update, follow-up on xenomai-core. I was finally able to
> reproduce this behaviour: I run latency in the background and hit the
> "Enter" key on my serial console, and get high latency figures.
> 
> I enabled the tracer, set xenomai latency to 300us and managed to get a
> trace (220us latency). However, I do not understand what is going wrong
> from reading the trace, so I post it here in case someone sees something.
> 
> Ah, and I added an ipipe_trace_special in ipipe_grab_irq to log the
> number of the received irq: 1 is the serial interrupt, 18 (0x12) is
> the timer interrupt.
> 
> Inline, so that Jan can comment it.

Thanks, but TB is too "smart" - it cuts off everything that is marked as
footer ("--"). :-/

> I-pipe frozen back-tracing service on 2.6.20/ipipe-1.8-04
> 
> CPU: 0, Freeze: 450692973 cycles, Trace Points: 1000 (+10)
> Calibrated minimum trace-point overhead: 1.000 us

That is interesting. It tells us that we might subtract 1 us
_per_tracepoint_ from the given latencies due to the inherent tracer
overhead. We have about 50 entries in the critical path, so 50 us
compared to the 220 us that were measured - roughly 170 us real latency.

What is the clock resolution btw? 500 ns?

So here is the interesting block, starting with the last larger IRQs-on
window.

[Xenomai-core] Xenomai v2.4.2

2008-02-11 Thread Philippe Gerum

Here is the second maintenance release for the v2.4.x branch.  Short
log follows:

[x86]

* Fix tick interrupt setup and related accounting when
  CONFIG_GENERIC_CLOCKEVENTS is disabled.
* Fix race when releasing the timer.
* Update Adeos support for 2.6.20.21/i386, 2.6.23/i386 and
  2.6.23/x86_64.
* Upgrade Adeos support to 2.6.24/x86 final.

[powerpc]

* Update Adeos support for 2.6.20/powerpc and 2.6.23/powerpc.
* Upgrade Adeos support to 2.6.24/powerpc over
  DENX-v2.6.24-stable (all-in-one patch also supporting the
  legacy ppc32 arch).

[16550]

* Set correct bit in IER to enable modem status IRQs.

[clocktest]

* Fix soft-lockups due to randomization of measurement thread
  delays.
* Avoid races when storing time warps.

See the ChangeLog for details.

http://download.gna.org/xenomai/stable/xenomai-2.4.2.tar.bz2

-- 
Philippe.



Re: [Xenomai-core] [patch 0/4] Support for select-like services.

2008-02-11 Thread Gilles Chanteperdrix
On Mon, Feb 11, 2008 at 8:28 AM, Johan Borkhuis
<[EMAIL PROTECTED]> wrote:
> Gilles,
>
>
>  Gilles Chanteperdrix wrote:
>  > Hi,
>  >
>  > here comes a third edition of the patchset adding support for select.
>  >
>  Could you tell me against which version this patch is tested? Would it
>  work with 2.4.1, or do I need the latest version from SVN?

The patch is against trunk. We would need to rework it a bit to adapt
it to 2.4.1, but since it adds a new syscall, I do not know if it will
be backported, though adding a syscall does not really break the ABI.

-- 
   Gilles Chanteperdrix
