Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Philippe Gerum
On Wed, 2007-06-20 at 19:53 +0200, Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>  > On Mon, 2007-06-18 at 10:27 +0200, Jan Kiszka wrote: 
>  > > Jan Kiszka wrote:
>  > > > ...
>  > > > The answer I found is to synchronise all time bases as well as possible.
>  > > > That means if one base changes its wall clock offset, all others need to
>  > > > be adjusted as well. At this chance, we would also implement
>  > > > synchronisation of the time bases on the system clock when they get
>  > > > started. Because skins may work with different type widths to represent
>  > > > time, relative changes have to be applied, i.e. the core API changes
>  > > > from xntbase_set_time(new_time) to xntbase_adjust_time(relative_change).
>  > > > The patch (global-wallclock.patch) finally touches more parts than I was
>  > > > first hoping. Here is the full list:
>  > > > 
>  > > >  - synchronise slave time bases on the master on xntbase_start
>  > > >  - xntbase_set_time -> xntbase_adjust_time, fixing all time bases
>  > > >currently registered
>  > > >  - make xnarch_start_timer return the nanos since the last host tick
>  > > >(only ia64 affected, all others return 0 anyway, causing one tick
>  > > >off when synchronising on system time -- but this fiddling becomes
>  > > >pointless in the long term due to better clocksources on all archs)
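The xntbase_set_time -> xntbase_adjust_time change named in the list above can be sketched as follows. This is an illustrative model only, with made-up structures rather than the real nucleus code:

```c
#include <stdint.h>

#define MAX_BASES 8

/* Illustrative model of a timebase: only the wallclock offset matters
 * here, expressed in nanoseconds on top of the monotonic clock. */
struct tbase {
    int64_t wallclock_offset;
};

static struct tbase bases[MAX_BASES];
static int nbases;

/* Old scheme: an absolute date is written into ONE base; the other
 * bases keep their possibly different idea of the Epoch. */
static void tbase_set_time(struct tbase *b, int64_t now_mono, int64_t new_date)
{
    b->wallclock_offset = new_date - now_mono;
}

/* New scheme: a signed relative change is applied to ALL registered
 * bases. This also suits skins using narrower tick types, since only
 * the delta, not an absolute date, needs to be representable. */
static void tbase_adjust_time(int64_t delta)
{
    for (int i = 0; i < nbases; i++)
        bases[i].wallclock_offset += delta;
}
```

The key point is that the adjustment keeps all bases in step: whatever their individual offsets were, they all move by the same delta.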
>  > 
>  > Support for 2.4 kernels will still be around for the Xenomai 2.x series
>  > though, and those will likely never support clocksources. Support for
>  > Linux 2.4 will be discontinued starting from x3.
>  > 
>  > > >  - adapt vrtx, vxworks, and psos+ skin to new scheme, fixing sc_sclock
>  > > >at this chance
>  > > >  - make xnarch_get_sys_time internal, no skin should (need to) touch
>  > > >this anymore
>  > > 
>  > 
>  > This interface has not been meant to be part of the skin building
>  > interface, but for internal support code that needs to get the host
>  > time. For instance, one may want this information for obscure data
>  > logging from within a module, independently of any wallclock offset
>  > fiddling Xenomai may do on its timebases (so nktbase is not an option
>  > here if timebases start being tightly coupled). And this should work in
>  > real execution mode, or in virtual simulation mode. IOW,
>  > xnarch_get_sys_time() has to remain part of the exported internal
>  > interface (even if as some inline routine, that's not the main issue
>  > here).
>  > 
>  > > Forgot to mention two further aspects:
>  > > 
>  > >  - The semantic of XNTBSET was kept time base-local. But I wonder if
>  > >this flag is still required. Unless it was introduced to emulate
>  > >some special RTOS behaviour, we now have the time bases automatically
>  > >set on startup. Comments welcome.
>  > > 
>  > 
>  > That might be a problem wrt pSOS for instance. In theory, tm_set() has
>  > to be issued to set the initial time, so there is indeed the notion of
>  > unset/invalid state for the pSOS wallclock time when the system starts.
>  > This said, in the real world, such initialization better belongs to the
>  > BSP rather than to the application itself, and in our case, the BSP is
>  > Linux/Xenomai's business, so it would still make sense to assume that
>  > a timebase has no unset state from the application POV, and XNTBSET
>  > could therefore go away.
>  > 
>  > The main concern I have right now regarding this patch is that it
>  > changes a key aspect of Xenomai's current time management scheme:
>  > timebases would be tightly coupled, whilst they aren't right now. For
>  > instance, two timebases could have a very different idea of the Epoch in
>  > the current implementation, and this patch is precisely made to kill
>  > that aspect. This is a key issue if one considers how Xenomai should
>  > deal with concurrent skins: either 1) as isolated virtual RTOS machines
>  > with only a few bridges allowing very simple interfaces, or 2) as
>  > possibly cooperating interfaces. It's all a matter of design; actually,
>  > the user/customer experience I know of clearly shows that #2 makes a lot
>  > of sense, but this point can still be discussed if needed.
>  > 
>  > So, two questions arise:
>  > 
>  > - what's the short term impact on the common - or not that common - use
>  > case involving multiple concurrent skins? I tend to think that not that
>  > many people are actually leveraging the current decoupling between
>  > timebases. But should some do, they should definitely speak up NOW.
> 
> There is a special concern with the POSIX spec: it states that when the
> time is set, absolute timers should keep their absolute tick date (so,
> when the time is set to a later date, absolute timers that should have
> elapsed in the interval should elapse asap), and relative timers should
> be changed to elapse at the correct date (new_elapse_date = new_date +
> previous_elapse_date - old_date). The fact that the nucleus did
> not implement relative and absolute timers (now it does) and that
> xnpod_settime does not do what the posix spec wants is the reason why
> clock_settime is still not implemented.

Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Philippe Gerum
On Wed, 2007-06-20 at 19:08 +0200, Jan Kiszka wrote: 
> Philippe Gerum wrote:
> > On Mon, 2007-06-18 at 10:27 +0200, Jan Kiszka wrote: 
> >> Jan Kiszka wrote:
> >>> ...
> >>> The answer I found is to synchronise all time bases as well as possible.
> >>> That means if one base changes its wall clock offset, all others need to
> >>> be adjusted as well. At this chance, we would also implement
> >>> synchronisation of the time bases on the system clock when they get
> >>> started. Because skins may work with different type widths to represent
> >>> time, relative changes have to be applied, i.e. the core API changes
> >>> from xntbase_set_time(new_time) to xntbase_adjust_time(relative_change).
> >>> The patch (global-wallclock.patch) finally touches more parts than I was
> >>> first hoping. Here is the full list:
> >>>
> >>>  - synchronise slave time bases on the master on xntbase_start
> >>>  - xntbase_set_time -> xntbase_adjust_time, fixing all time bases
> >>>currently registered
> >>>  - make xnarch_start_timer return the nanos since the last host tick
> >>>(only ia64 affected, all others return 0 anyway, causing one tick
> >>>off when synchronising on system time -- but this fiddling becomes
> >>>pointless in the long term due to better clocksources on all archs)
> > 
> > Support for 2.4 kernels will still be around for the Xenomai 2.x series
> > though, and those will likely never support clocksources. Support for
> > Linux 2.4 will be discontinued starting from x3.
> 
> Again: As the code looks right now, only ia64 made use of this feature.
> We have i386 and PPC for 2.4, and neither bothered to synchronise
> that precisely so far (so, for them, this interface is pointless).
> 
> And on x86 with recent 2.6 kernels, simply returning 0 on success of the
> timer setup made the master clock deviate from the real timeofday by one
> tick.
> 

My remark was actually a general one: what happens within Linux 2.6
right now cannot be used to generalize anything for Xenomai 2, so we
cannot use anything related as an argument for what should happen in
this series. For the same reason, we do need wrappers for 2.4, even
though the latest incarnations might backport some 2.6 features: the
whole point, for people remaining with some oldish Linux 2.4 release,
is precisely that most of them will _never_ want to upgrade their
current setup (2.4.25 to 2.4.34, 2.4.x to 2.6, whatever), just because
it works as it is, and they don't want to take the upgrade hit once
again for their product in the field.

> > 
> >>>  - adapt vrtx, vxworks, and psos+ skin to new scheme, fixing sc_sclock
> >>>at this chance
> >>>  - make xnarch_get_sys_time internal, no skin should (need to) touch
> >>>this anymore
> > 
> > This interface has not been meant to be part of the skin building
> > interface, but for internal support code that needs to get the host
> > time. For instance, one may want this information for obscure data
> > logging from within a module, independently of any wallclock offset
> > fiddling Xenomai may do on its timebases (so nktbase is not an option
> > here if timebases start being tightly coupled). And this should work in
> > real execution mode, or in virtual simulation mode. IOW,
> > xnarch_get_sys_time() has to remain part of the exported internal
> > interface (even if as some inline routine, that's not the main issue
> > here).
> 
> As I still haven't been able to see real code using it like this, I
> can't comment on it.
> 

It's pretty simple to sketch some of it: you want to add some debugging
facility that needs timestamps, but you don't want to depend on any
timebase, because the timebase is part of what is being observed. What
you want is raw, silly, purely host-based timestamping. A bunch of
software models attached to Xenomai systems used for real-time
simulation I've seen do rely on that kind of facility.
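The facility Philippe sketches could look roughly like this. xnarch_get_sys_time() is stubbed so the sketch stays self-contained, and every other name here is hypothetical:

```c
#include <stdint.h>

/* Stub standing in for the real xnarch_get_sys_time(): the point is
 * only that timestamps come straight from the host clock, bypassing
 * every Xenomai timebase. Here it just advances 1 ms per call. */
static uint64_t xnarch_get_sys_time(void)
{
    static uint64_t fake_ns;
    return fake_ns += 1000000;
}

/* A trivial ring buffer of timestamped debug events, independent of
 * any timebase, so the timebases themselves can be observed. */
struct trace_entry {
    uint64_t stamp_ns;
    const char *msg;
};

#define TRACE_DEPTH 64
static struct trace_entry trace_buf[TRACE_DEPTH];
static unsigned trace_pos;

static void trace_event(const char *msg)
{
    struct trace_entry *e = &trace_buf[trace_pos++ % TRACE_DEPTH];
    e->stamp_ns = xnarch_get_sys_time(); /* raw host time, no wallclock offset */
    e->msg = msg;
}
```

Because nothing here touches a timebase, the recorded timestamps stay meaningful even while the observed code is adjusting wallclock offsets.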

> > 
> >> Forgot to mention two further aspects:
> >>
> >>  - The semantic of XNTBSET was kept time base-local. But I wonder if
> >>this flag is still required. Unless it was introduced to emulate
> >>some special RTOS behaviour, we now have the time bases automatically
> >>set on startup. Comments welcome.
> >>
> > 
> > That might be a problem wrt pSOS for instance. In theory, tm_set() has
> > to be issued to set the initial time, so there is indeed the notion of
> > unset/invalid state for the pSOS wallclock time when the system starts.
> > This said, in the real world, such initialization better belongs to the
> > BSP rather than to the application itself, and in our case, the BSP is
> > Linux/Xenomai's business, so it would still make sense to assume that
> > a timebase has no unset state from the application POV, and XNTBSET
> > could therefore go away.
> 
> That was my first impression as well, but I cannot assess the impact as I
> don't know real pSOS porting scenarios.
> 

The impact is basically that you won't be able to emulate some error
condition, because

Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-20 Thread Jan Kiszka
Dmitry Adamushko wrote:
> Hello Jan,
> 
>> Well, I hate nested locks when it comes to real-time, but in this case I
>> really found no simpler approach. There is the risk of deadlocks via
>>
>> IRQ:xnintr_shirq::lock -> handler -> nklock vs.
>> Application:nklock -> xnintr_attach/detach -> xnintr_shirq::lock,
> 
> it's also relevant for the current code - xnintr_attach/detach() must
> not be called while holding the 'nklock'.

That's good, no new restriction (the existing one will be documented now).

> 
> I think, your approach should work as well.. provided we have only a
> single reader vs. a single writer contention case, which seems to be
> the case here ('intrlock' is responsible for synchronization between

Single writer is ensured by intrlock, single reader comes from the
per-IRQ scope of the inner lock.
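The risk Jan names above is the classic ABBA inversion: the IRQ path takes the per-IRQ lock and may then take nklock from a handler, while attach/detach called under nklock would take the per-IRQ lock in the opposite order. A tiny lock-order checker makes the conflict visible; all names here are illustrative, not the real Xenomai locking code:

```c
/* Two-lock order checker: records which lock was held while another
 * was taken, and flags the moment the opposite order is attempted. */
enum { LOCK_SHIRQ, LOCK_NKLOCK, NLOCKS };

static int held[NLOCKS];
/* order[a][b] != 0 means "a was held while taking b" was observed */
static int order[NLOCKS][NLOCKS];

/* Returns 1 if taking 'lk' now inverts a previously seen order. */
static int take(int lk)
{
    int inverted = 0;
    for (int o = 0; o < NLOCKS; o++)
        if (held[o]) {
            order[o][lk] = 1;
            if (order[lk][o]) /* opposite order seen before: ABBA */
                inverted = 1;
        }
    held[lk] = 1;
    return inverted;
}

static void drop(int lk) { held[lk] = 0; }
```

Running the IRQ path (shirq lock, then nklock) followed by the forbidden application path (nklock held across attach/detach) trips the checker, which is exactly why the restriction on calling xnintr_attach/detach under nklock must be documented.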

> xnintr_attach/detach()).. your approach does increase a worst case
> length of the lock(&intrlock) --> unlock(&intrlock) section... but
> that's unlikely to be critical.
> 
> I think, the patch I sent before addresses a reported earlier case
> with xnintr_edge_shirq_handler().. but your approach does make code
> shorter (and likely simpler), right? I don't see any problems it would
> possibly cause (maybe I'm missing smth yet :)

I must confess I didn't get your idea immediately. Later on (cough,
after hacking my own patch, cough) I had a closer look, and it first
appeared fairly nice, despite the additional "ifs". But then I realised
that "end == old_end" may produce false positives when the same IRQ
(and only that one) fires several times in a row. So, I'm afraid we
have to live with only one candidate. :->

OK, will point our bug reporter to that patch now and ask for testing.

Jan



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>  > I was afraid you would insist on this support. ;)
> 
> Well, clock_settime is almost the only service missing in the posix
> skin, and since I saw in Thomas Gleixner's slides that one reason for
> not using Xenomai is that its posix support is incomplete, I am eager
> to implement

Well, those slides are from a time when Thomas already made up his plans
(I watched his first LibeRTOS presentation in 2003...).

> the missing services (and to remove the sentence "xenomai posix skin is
> a work in progress" from the posix skin text file).

Anyway, your goal is valid.

> 
>  > 
>  > There are two ways to implement this:
>  > 
>  >  A) The poor man's variant
>  > 
>  > On xntbase_adjust_time() (the code will change again, pay attention!
>  > ;) ), iterate over all pending timers (or over all timers in the
>  > base that POSIX uses?) and fix those which do not have the recently
>  > introduced XNTIMER_MONOTONIC flag set. "Poor man" because it's
>  > simple, but it scales poorly.
>  > 
>  >  B) The scalable but complex one
>  > 
>  > Introduce a second time base for each existing one (or for the one
>  > that POSIX uses?), put in all the adjustable (realtime) timers. We
>  > then only need to play with the base's clock offset on adjustment,
>  > but we would also have to include that offset into timeout
>  > considerations inside the timer interrupt handler.
>  > 
>  > I wonder now if the number of use cases where people are playing with
>  > the wallclock all the time while a significant number of timers are
>  > pending is actually worth the trouble of B)... What do you think?
> 
> I think one use of clock_settime would be to resync the Xenomai clock
> with the Linux one from time to time, but even if we implemented B, that
> would be a bad idea because of the effect on timers. So, A would be
> enough for me.

For the resync with any kind of external time source, I rather have a
scheme of one set-time during startup + continuous clock frequency
tuning in mind. As you say, permanently playing with the offset is _bad_.
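The scheme Jan prefers, one set-time at startup plus continuous frequency tuning, can be sketched as a simple slewing servo in the style of NTP's adjtime; the gain and all names below are illustrative assumptions, not Xenomai code:

```c
#include <stdint.h>

/* Current frequency correction, in parts per billion. A positive value
 * means the local clock is being sped up to catch up with the
 * reference; the wallclock offset itself is never stepped again after
 * the initial set-time. */
static int64_t freq_ppb;

/* Called periodically with the measured error against the reference
 * clock over the last interval; nudges the frequency by a fraction of
 * the error (a plain proportional servo). */
static void clock_servo(int64_t error_ns, int64_t interval_ns)
{
    /* express the error as parts per billion of the interval */
    int64_t err_ppb = error_ns * 1000000000LL / interval_ns;
    freq_ppb += err_ppb / 4; /* gentle gain: correct 25% per period */
}
```

Because corrections are applied to the rate rather than the offset, armed timers drift smoothly toward the reference instead of being yanked around on every resync, which is precisely the objection to calling clock_settime periodically.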

So the only remaining question is whether we should apply the timer
adjustment on all bases or only the POSIX-related one.

Jan





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-20 Thread Dmitry Adamushko
Hello Jan,

> Well, I hate nested locks when it comes to real-time, but in this case I
> really found no simpler approach. There is the risk of deadlocks via
>
> IRQ:xnintr_shirq::lock -> handler -> nklock vs.
> Application:nklock -> xnintr_attach/detach -> xnintr_shirq::lock,

it's also relevant for the current code - xnintr_attach/detach() must
not be called while holding the 'nklock'.

I think, your approach should work as well.. provided we have only a
single reader vs. a single writer contention case, which seems to be
the case here ('intrlock' is responsible for synchronization between
xnintr_attach/detach()).. your approach does increase a worst case
length of the lock(&intrlock) --> unlock(&intrlock) section... but
that's unlikely to be critical.

I think, the patch I sent before addresses a reported earlier case
with xnintr_edge_shirq_handler().. but your approach does make code
shorter (and likely simpler), right? I don't see any problems it would
possibly cause (maybe I'm missing smth yet :)


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > I was afraid you would insist on this support. ;)

Well, clock_settime is almost the only service missing in the posix
skin, and since I saw in Thomas Gleixner's slides that one reason for
not using Xenomai is that its posix support is incomplete, I am eager
to implement
the missing services (and to remove the sentence "xenomai posix skin is
a work in progress" from the posix skin text file).

 > 
 > There are two ways to implement this:
 > 
 >  A) The poor man's variant
 > 
 > On xntbase_adjust_time() (the code will change again, pay attention!
 > ;) ), iterate over all pending timers (or over all timers in the
 > base that POSIX uses?) and fix those which do not have the recently
 > introduced XNTIMER_MONOTONIC flag set. "Poor man" because it's
 > simple, but it scales poorly.
 > 
 >  B) The scalable but complex one
 > 
 > Introduce a second time base for each existing one (or for the one
 > that POSIX uses?), put in all the adjustable (realtime) timers. We
 > then only need to play with the base's clock offset on adjustment,
 > but we would also have to include that offset into timeout
 > considerations inside the timer interrupt handler.
 > 
 > I wonder now if the number of use cases where people are playing with
 > the wallclock all the time while a significant number of timers are
 > pending is actually worth the trouble of B)... What do you think?

I think one use of clock_settime would be to resync the Xenomai clock
with the Linux one from time to time, but even if we implemented B, that
would be a bad idea because of the effect on timers. So, A would be
enough for me.
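Jan's variant A, quoted above, could be sketched roughly like this. XNTIMER_MONOTONIC is the flag named in the thread; everything else is made up for the sketch:

```c
#include <stdint.h>

#define XNTIMER_MONOTONIC 0x1

/* Hypothetical pending timer: 'expiry' is the internal (monotonic)
 * due time in nanoseconds; wall due date = expiry + wallclock offset. */
struct timer {
    uint64_t expiry;
    int flags;
};

/* Variant A: when the wallclock offset grows by 'delta', walk the
 * pending timers and pull the internal expiry of every timer NOT
 * marked monotonic back by delta, so its absolute wall-clock due date
 * is preserved (a now-past expiry simply fires asap). Monotonic
 * timers are left alone, as they never reference the wallclock. */
static void adjust_pending(struct timer *timers, int n, int64_t delta)
{
    for (int i = 0; i < n; i++)
        if (!(timers[i].flags & XNTIMER_MONOTONIC))
            timers[i].expiry -= delta;
}
```

The linear walk over all pending timers is exactly why this is the "poor man's" variant: simple, correct, but O(n) on every wallclock change.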

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 > xnpod_settime does not do what the posix spec wants is the reason why

s/xnpod_settime/xntbase_set_time/

The code changed when I was not looking.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 > On Mon, 2007-06-18 at 10:27 +0200, Jan Kiszka wrote: 
 > > Jan Kiszka wrote:
 > > > ...
 > > > The answer I found is to synchronise all time bases as well as possible.
 > > > That means if one base changes its wall clock offset, all others need to
 > > > be adjusted as well. At this chance, we would also implement
 > > > synchronisation of the time bases on the system clock when they get
 > > > started. Because skins may work with different type widths to represent
 > > > time, relative changes have to be applied, i.e. the core API changes
 > > > from xntbase_set_time(new_time) to xntbase_adjust_time(relative_change).
 > > > The patch (global-wallclock.patch) finally touches more parts than I was
 > > > first hoping. Here is the full list:
 > > > 
 > > >  - synchronise slave time bases on the master on xntbase_start
 > > >  - xntbase_set_time -> xntbase_adjust_time, fixing all time bases
 > > >currently registered
 > > >  - make xnarch_start_timer return the nanos since the last host tick
 > > >(only ia64 affected, all others return 0 anyway, causing one tick
 > > >off when synchronising on system time -- but this fiddling becomes
 > > >pointless in the long term due to better clocksources on all archs)
 > 
 > Support for 2.4 kernels will still be around for the Xenomai 2.x series
 > though, and those will likely never support clocksources. Support for
 > Linux 2.4 will be discontinued starting from x3.
 > 
 > > >  - adapt vrtx, vxworks, and psos+ skin to new scheme, fixing sc_sclock
 > > >at this chance
 > > >  - make xnarch_get_sys_time internal, no skin should (need to) touch
 > > >this anymore
 > > 
 > 
 > This interface has not been meant to be part of the skin building
 > interface, but for internal support code that needs to get the host
 > time. For instance, one may want this information for obscure data
 > logging from within a module, independently of any wallclock offset
 > fiddling Xenomai may do on its timebases (so nktbase is not an option
 > here if timebases start being tightly coupled). And this should work in
 > real execution mode, or in virtual simulation mode. IOW,
 > xnarch_get_sys_time() has to remain part of the exported internal
 > interface (even if as some inline routine, that's not the main issue
 > here).
 > 
 > > Forgot to mention two further aspects:
 > > 
 > >  - The semantic of XNTBSET was kept time base-local. But I wonder if
 > >this flag is still required. Unless it was introduced to emulate
 > >some special RTOS behaviour, we now have the time bases automatically
 > >set on startup. Comments welcome.
 > > 
 > 
 > That might be a problem wrt pSOS for instance. In theory, tm_set() has
 > to be issued to set the initial time, so there is indeed the notion of
 > unset/invalid state for the pSOS wallclock time when the system starts.
 > This said, in the real world, such initialization better belongs to the
 > BSP rather than to the application itself, and in our case, the BSP is
 > Linux/Xenomai's business, so it would still make sense to assume that
 > a timebase has no unset state from the application POV, and XNTBSET
 > could therefore go away.
 > 
 > The main concern I have right now regarding this patch is that it
 > changes a key aspect of Xenomai's current time management scheme:
 > timebases would be tightly coupled, whilst they aren't right now. For
 > instance, two timebases could have a very different idea of the Epoch in
 > the current implementation, and this patch is precisely made to kill
 > that aspect. This is a key issue if one considers how Xenomai should
 > deal with concurrent skins: either 1) as isolated virtual RTOS machines
 > with only a few bridges allowing very simple interfaces, or 2) as
 > possibly cooperating interfaces. It's all a matter of design; actually,
 > user/customer experience I know of clearly proves that #2 makes a lot of
 >  > sense, but still, this point needs to be discussed.
 > 
 > So, two questions arise:
 > 
 > - what's the short term impact on the common - or not that common - use
 > case involving multiple concurrent skins? I tend to think that not that
 > many people are actually leveraging the current decoupling between
 >  > timebases. But if some do, they should definitely speak up NOW.

There is a special concern with the POSIX spec: it states that when the
time is set, absolute timers should keep their absolute tick date (so,
when the time is set to a later date, absolute timers that should have
elapsed in the interval should elapse asap), and relative timers should
be changed to elapse at the correct date (new_elapse_date = new_date +
previous_elapse_date - old_date). The fact that the nucleus did
not implement relative and absolute timers (now it does) and that
xnpod_settime does not do what the POSIX spec wants is the reason why
clock_settime is still not implemented. Now, if another skin is allowed
to change the nucleus time, I guess

Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Jan Kiszka
Philippe Gerum wrote:
> On Mon, 2007-06-18 at 10:27 +0200, Jan Kiszka wrote: 
>> Jan Kiszka wrote:
>>> ...
>>> The answer I found is to synchronise all time bases as well as possible.
>>> That means if one base changes its wall clock offset, all others need to
>>> be adjusted as well. At this chance, we would also implement
>>> synchronisation of the time bases on the system clock when they get
>>> started. Because skins may work with different type widths to represent
>>> time, relative changes have to be applied, i.e. the core API changes
>>> from xntbase_set_time(new_time) to xntbase_adjust_time(relative_change).
>>> The patch (global-wallclock.patch) finally touches more parts than I was
>>> first hoping. Here is the full list:
>>>
>>>  - synchronise slave time bases on the master on xntbase_start
>>>  - xntbase_set_time -> xntbase_adjust_time, fixing all time bases
>>>currently registered
>>>  - make xnarch_start_timer return the nanos since the last host tick
>>>(only ia64 affected, all others return 0 anyway, causing one tick
>>>off when synchronising on system time -- but this fiddling becomes
>>>pointless in the long term due to better clocksources on all archs)
> 
> Support for 2.4 kernels will still be around for the Xenomai 2.x series
> though, and those will likely never support clocksources. Support for
> Linux 2.4 will be discontinued starting from x3.

Again: as the code looks right now, only ia64 made use of this feature.
We have i386 and PPC for 2.4, and neither bothered to synchronise
that precisely so far (there, this interface is pointless).

And on x86 with recent 2.6 kernels, simply returning 0 on success of the
timer setup made the master clock deviate from the real timeofday by one
tick.

> 
>>>  - adapt vrtx, vxworks, and psos+ skin to new scheme, fixing sc_sclock
>>>at this chance
>>>  - make xnarch_get_sys_time internal, no skin should (need to) touch
>>>this anymore
> 
> This interface has not been meant to be part of the skin building
> interface, but for internal support code that needs to get the host
> time. For instance, one may want this information for obscure data
> logging from within a module, independently of any wallclock offset
> fiddling Xenomai may do on its timebases (so nktbase is not an option
> here if timebases start being tightly coupled). And this should work in
> real execution mode, or in virtual simulation mode. IOW,
> xnarch_get_sys_time() has to remain part of the exported internal
> interface (even if as some inline routine, that's not the main issue
> here).

As I still haven't been able to see real code using it like this, I
can't comment on it.

> 
>> Forgot to mention two further aspects:
>>
>>  - The semantics of XNTBSET were kept time base-local. But I wonder if
>>this flag is still required. Unless it was introduced to emulate
>>some special RTOS behaviour, we now have the time bases automatically
>>set on startup. Comments welcome.
>>
> 
> That might be a problem wrt pSOS for instance. In theory, tm_set() has
> to be issued to set the initial time, so there is indeed the notion of
> unset/invalid state for the pSOS wallclock time when the system starts.
> This said, in the real world, such initialization better belongs to the
> BSP rather than to the application itself, and in our case, the BSP is
> Linux/Xenomai's business, so it would still make sense to assume that
> a timebase has no unset state from the application POV, and XNTBSET
> could therefore go away.

That was my first impression as well, but I cannot assess the impact as I
don't know real pSOS porting scenarios.

> 
> The main concern I have right now regarding this patch is that it
> changes a key aspect of Xenomai's current time management scheme:
> timebases would be tightly coupled, whilst they aren't right now. For

Which already caused trouble when dealing with RTDM, you remember?

> instance, two timebases could have a very different idea of the Epoch in
> the current implementation, and this patch is precisely made to kill
> that aspect. This is a key issue if one considers how Xenomai should
> deal with concurrent skins: either 1) as isolated virtual RTOS machines
> with only a few bridges allowing very simple interfaces, or 2) as
> possibly cooperating interfaces. It's all a matter of design; actually,
> user/customer experience I know of clearly proves that #2 makes a lot of
> sense, but still, this point needs to be discussed.
> 
> So, two questions arise:
> 
> - what's the short term impact on the common - or not that common - use
> case involving multiple concurrent skins? I tend to think that not that
> many people are actually leveraging the current decoupling between
> timebases. But if some do, they should definitely speak up NOW.
> 
> - regarding the initial RTDM-related motivation now, why require all
> timebases to be in sync Epoch-wise, instead of asking the software
> wanting to exchange time

Re: [Xenomai-core] [PATCH-STACK] Synchronised timebases and more

2007-06-20 Thread Philippe Gerum
On Mon, 2007-06-18 at 10:27 +0200, Jan Kiszka wrote: 
> Jan Kiszka wrote:
> > ...
> > The answer I found is to synchronise all time bases as well as possible.
> > That means if one base changes its wall clock offset, all others need to
> > be adjusted as well. At this chance, we would also implement
> > synchronisation of the time bases on the system clock when they get
> > started. Because skins may work with different type widths to represent
> > time, relative changes have to be applied, i.e. the core API changes
> > from xntbase_set_time(new_time) to xntbase_adjust_time(relative_change).
> > The patch (global-wallclock.patch) finally touches more parts than I was
> > first hoping. Here is the full list:
> > 
> >  - synchronise slave time bases on the master on xntbase_start
> >  - xntbase_set_time -> xntbase_adjust_time, fixing all time bases
> >currently registered
> >  - make xnarch_start_timer return the nanos since the last host tick
> >(only ia64 affected, all others return 0 anyway, causing one tick
> >off when synchronising on system time -- but this fiddling becomes
> >pointless in the long term due to better clocksources on all archs)

Support for 2.4 kernels will still be around for the Xenomai 2.x series
though, and those will likely never support clocksources. Support for
Linux 2.4 will be discontinued starting from x3.

> >  - adapt vrtx, vxworks, and psos+ skin to new scheme, fixing sc_sclock
> >at this chance
> >  - make xnarch_get_sys_time internal, no skin should (need to) touch
> >this anymore
> 

This interface has not been meant to be part of the skin building
interface, but for internal support code that needs to get the host
time. For instance, one may want this information for obscure data
logging from within a module, independently of any wallclock offset
fiddling Xenomai may do on its timebases (so nktbase is not an option
here if timebases start being tightly coupled). And this should work in
real execution mode, or in virtual simulation mode. IOW,
xnarch_get_sys_time() has to remain part of the exported internal
interface (even if as some inline routine, that's not the main issue
here).

> Forgot to mention two further aspects:
> 
>  - The semantics of XNTBSET were kept time base-local. But I wonder if
>this flag is still required. Unless it was introduced to emulate
>some special RTOS behaviour, we now have the time bases automatically
>set on startup. Comments welcome.
> 

That might be a problem wrt pSOS for instance. In theory, tm_set() has
to be issued to set the initial time, so there is indeed the notion of
unset/invalid state for the pSOS wallclock time when the system starts.
This said, in the real world, such initialization better belongs to the
BSP rather than to the application itself, and in our case, the BSP is
Linux/Xenomai's business, so it would still make sense to assume that
a timebase has no unset state from the application POV, and XNTBSET
could therefore go away.

The main concern I have right now regarding this patch is that it
changes a key aspect of Xenomai's current time management scheme:
timebases would be tightly coupled, whilst they aren't right now. For
instance, two timebases could have a very different idea of the Epoch in
the current implementation, and this patch is precisely made to kill
that aspect. This is a key issue if one considers how Xenomai should
deal with concurrent skins: either 1) as isolated virtual RTOS machines
with only a few bridges allowing very simple interfaces, or 2) as
possibly cooperating interfaces. It's all a matter of design; actually,
user/customer experience I know of clearly proves that #2 makes a lot of
sense, but still, this point needs to be discussed.

So, two questions arise:

- what's the short term impact on the common - or not that common - use
case involving multiple concurrent skins? I tend to think that not that
many people are actually leveraging the current decoupling between
timebases. But if some do, they should definitely speak up NOW.

- regarding the initial RTDM-related motivation now, why require all
timebases to be in sync Epoch-wise, instead of asking the software
wanting to exchange timestamps to always use the master timebase for
that purpose? By definition, nktbase is the most accurate, always valid
and running, and passing the timebase id along with the jiffy value when
exchanging timestamps would no longer be needed.

> - This patch is a nice foundation for the stuff I have in mind post 2.4
>release: an infrastructure to synchronise the Xenomai clock on
>arbitrary external sources like local ToD, RTCs, sync packets sent
>over CAN (CANopen...), Ethernet (RTnet, IEEE1588, ...), or whatever
>media (poor-man's nullmodem links...)
> 
> 
> 
> Jan
> 
> ___
> Xenomai-core mailing list
> Xenomai-core@gna.org
> https://mail.gna.org/listinfo/xenomai-core
-- 
Philippe.