Wolfgang Mauerer wrote:
> Gilles Chanteperdrix wrote:
>> Wolfgang Mauerer wrote:
>>> Hi,
>>> On 03.12.2009, at 14:14, Gilles Chanteperdrix
>>> <gilles.chanteperd...@xenomai.org> wrote:
>>>> Wolfgang Mauerer wrote:
>>>>> Hi,
>>>>> Gilles Chanteperdrix wrote:
>>>>>> Wolfgang Mauerer wrote:
>>>>>>> So that means, in essence, that you would accept probabilistic
>>>>>>> algorithms in realtime context?
>>>>>> Ah, today's troll!
>>>>> though it seems that I have to replace Jan this time ;-)
>>>>>> As I think I explained, the use of a seqlock in real-time context
>>>>>> when the seqlock writer only runs in linux context is not
>>>>>> probabilistic. It will work on the first pass every time.
>>>>> I still don't see why it should succeed every time: What about
>>>>> the case that the Linux kernel on CPU0 updates the data, while
>>>>> Xenomai accesses them on another CPU? This can lead to
>>>>> inconsistent data, and they must be reread on the Xenomai side.
>>>> Yeah, right. I was not thinking about SMP. But admit that in this
>>>> case, there will be only one retry; there is nothing pathological.
> that's right. Which makes it bounded again, so it's maybe
> the best way to go.
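Agreed. For illustration, here is a minimal single-writer seqlock model showing why the retry count stays bounded when only Linux ever writes the data; the names and barriers below are mine for the sketch, not actual Xenomai or kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal single-writer seqlock model (illustrative only). */
struct seq_data {
    volatile uint32_t seq;   /* odd while the writer is updating */
    uint64_t sec;
    uint64_t nsec;
};

static void writer_update(struct seq_data *d, uint64_t sec, uint64_t nsec)
{
    d->seq++;                 /* becomes odd: readers will retry */
    __sync_synchronize();     /* full memory barrier (GCC builtin) */
    d->sec = sec;
    d->nsec = nsec;
    __sync_synchronize();
    d->seq++;                 /* even again: data stable */
}

static void reader_snapshot(const struct seq_data *d,
                            uint64_t *sec, uint64_t *nsec)
{
    uint32_t start;
    do {
        start = d->seq;
        __sync_synchronize();
        *sec = d->sec;
        *nsec = d->nsec;
        __sync_synchronize();
        /* Retry only if a write was in flight (odd) or completed
         * meanwhile (seq changed) -- at most once per concurrent
         * writer update, hence bounded with a single rare writer. */
    } while ((start & 1) || start != d->seq);
}
```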
>>>>> I'm asking because if this case can not happen, then there's
>>>>> nothing left to do, as I have the code already at hand.
>>>> You have reworked the nucleus timers handling to adapt to this new
>>>> real-time clock ?
>>> Nope. Sorry, I was a bit unclear: I'm just referring to the gtod
>>> syscall that does the timer handling, not any other adaptations.
>> Ok, but what good is the gtod syscall if you can not use it as a time
>> reference for other timing related services?
> it suffices for our project's current purposes ;-)
> But it's certainly not the full solution. Before that, we
> should have a decision wrt. the design issues, and I
> won't be able to continue working on this before the
> middle of next week, when I can look at the changes
> required for timer handling and come up with code.

Ok. To summarize what we have said, here is how I see we could implement
the NTP-synchronized clock fully and portably:
1- allocate, at nucleus init time, an area in the global sem heap for
this clock's house-keeping
2- add an event to the I-pipe patch when vsyscall_update is called
3- implement the nucleus callback for the I-pipe event which copies
relevant data with our own version of seqlock called with hardware irqs
off, to the area allocated in 1 if the current clock source is the tsc
4- rework the nucleus clocks and timers handling to use these data
5- pass the offset of the data allocated in 1 to user-space through the
xnsysinfo, or xnfeatinfo structures
6- rework clock_gettime to use these data, using the user-space
counterpart of the seqlock used in 3
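To make steps 1, 3 and 6 a bit more concrete, here is a rough sketch of what the shared house-keeping data and the user-space time conversion could look like; all field and function names are invented for illustration, and the real layout would mirror whatever vsyscall_update provides:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Hypothetical layout of the area allocated in the global sem heap
 * (step 1); filled by the nucleus from the vsyscall_update event
 * (steps 2-3) under our own seqlock with hardware irqs off. */
struct hostrt_data {
    uint32_t seq;          /* seqlock counter, odd while updating */
    uint64_t wall_sec;     /* NTP-corrected wall time at last update */
    uint64_t wall_nsec;
    uint64_t cycle_last;   /* tsc value at last update */
    uint32_t mult;         /* tsc -> ns scale, NTP-adjusted by Linux */
    uint32_t shift;
};

/* Step 6: turn a raw tsc reading into wall time from a consistent
 * snapshot (the surrounding seqlock read loop is omitted here). */
static void hostrt_wall_time(const struct hostrt_data *d, uint64_t tsc,
                             uint64_t *sec, uint64_t *nsec)
{
    uint64_t delta_ns = ((tsc - d->cycle_last) * d->mult) >> d->shift;
    uint64_t total_nsec = d->wall_nsec + delta_ns;

    *sec = d->wall_sec + total_nsec / NSEC_PER_SEC;
    *nsec = total_nsec % NSEC_PER_SEC;
}
```

Since Linux keeps mult adjusted by NTP, reading through this data gives the corrected time without any Xenomai-side frequency steering.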

The real hard work is 4. Note also something I did not mention
yesterday: we not only have to change the real-time clock
implementation, we also have to change the monotonic clock
implementation, otherwise the two clocks will drift apart.

I think making such a change now is unreasonable.

So, solution 1: we implement 5 now, passing a null offset to mean that
the support is unimplemented by the kernel, and do not even use it in
user-space yet, keeping the rest of the work for later in the 2.5 life
cycle.
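With that convention, the user-space side of 5 reduces to a trivial capability check at bind time; a sketch with invented names:

```c
#include <assert.h>

/* Hypothetical: the offset into the global sem heap reported through
 * xnsysinfo/xnfeatinfo; 0 means the kernel has no support. */
struct sysinfo_sketch {
    unsigned long hostrt_data_offset;
};

/* Returns 1 if the NTP-synchronized clock data can be mapped. */
static int hostrt_supported(const struct sysinfo_sketch *si)
{
    return si->hostrt_data_offset != 0;
}
```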

Solution 2: we keep this whole change for 3.0.

Solution 3: we implement a way to read that clock without synchronizing
the nucleus with it (that is, everything but 4). One way to do this,
which I do not like, is to add a dummy clock id to the posix skin, for
instance CLOCK_LINUX_NTP_REALTIME, and implement the reading of that
clock in clock_gettime. This clock id, when passed to any other
service, causes EINVAL to be returned, making it clear that this clock
cannot be used for anything else. Note that if we do that, even if we
implement the full support later, we will have to keep that dummy clock
id forever.
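For completeness, the dispatch for that dummy clock id could look roughly like this; the clock id value and function names are invented for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <time.h>

/* Hypothetical extra clock id; the numeric value is illustrative. */
#define CLOCK_LINUX_NTP_REALTIME 42

/* Sketch of the dispatch: only clock_gettime understands the new id;
 * every other timing service rejects it with EINVAL. */
static int skin_clock_settime(int clock_id, const struct timespec *ts)
{
    (void)ts;
    if (clock_id == CLOCK_LINUX_NTP_REALTIME)
        return -EINVAL;   /* usable for reading only */
    /* ... normal handling of CLOCK_REALTIME / CLOCK_MONOTONIC ... */
    return 0;
}

static int skin_clock_gettime(int clock_id, struct timespec *ts)
{
    if (clock_id == CLOCK_LINUX_NTP_REALTIME) {
        /* read the NTP-corrected data mirrored from Linux; stubbed
         * out here, since that part depends on the shared area */
        ts->tv_sec = 0;
        ts->tv_nsec = 0;
        return 0;
    }
    /* ... normal handling ... */
    return 0;
}
```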

My preference goes to solution 1. Philippe, what do you think?


Xenomai-core mailing list
