Rodrigo Rosenfeld Rosas wrote:
> On Friday, 10 March 2006 at 15:32, Jan Kiszka wrote:
> 
>> Rodrigo Rosenfeld Rosas wrote:
>>> On Thursday, 09 March 2006 at 17:33, Jan Kiszka wrote:
>>>> Rodrigo Rosenfeld Rosas wrote:
>>>>> Hi Jan,
>>>>>
>>>>> I'm still concerned about the future of RTDM and timer functions. I
>>>>> think there should be some function for starting the timer manually,
>>>>> since the automatic feature doesn't work well for RTDM drivers.
>>>>>
>>>>> It is not nice to have to run the latency (or any other) program to
>>>>> start the timer before I can load my driver. And it is not sufficient
>>>>> to run it just once after booting. After I open/close my rtdm device
>>>>> and reload my driver, the problem occurs again and I have to re-run
>>>>> the latency program.
>>>> Sorry, I don't see the problem here.
>>>>
>>>> # modprobe xeno_nucleus; cat /proc/xenomai/timer
>>>> status=off:setup=1392:tickval=0:jiffies=0
>>>>
>>>> # modprobe xeno_rtdm; cat /proc/xenomai/timer
>>>> status=oneshot:setup=1392:tickval=1:jiffies=8113917792696
>>>>
>>>> So the timer is running fine right from the moment rtdm is loaded?!
>>> Yes, here too.
>>>
>>>> And that simple heartbeat rtdm example on my rt-addon homepage now
>>>> cleanly runs even without any further helper to start some timer.
>>> Yes, here too. You are right, as long as the timer is in oneshot mode. My
>>> driver loads correctly without the helper. Then I start a user application
>>> that changes the timer to periodic mode and uses my driver. When I reload
>>> my driver, now in periodic mode, the problem arises.
>> What happens if you make the periodic timer the default one in the
>> kernel configuration?
> 
> The same behaviour.
> 
>>> It seems there is no problem when the timer is set to oneshot. But when
>>> it is turned to periodic, at least one of rtdm_task_busy_sleep() or
>>> rtdm_clock_read() doesn't seem to work. See below:
>>>
>>> cat /proc/xenomai/timer
>>>   status=periodic:setup=188:tickval=100000:jiffies=19972453
>>>
>>> start_time = rtdm_clock_read();
>>> rtdm_task_busy_sleep(84000);
>>> temp_time = rtdm_clock_read();
>>> rtdm_printk(KERN_INFO "Should be near 84000: %u\n",
>>>             (unsigned int)(temp_time - start_time));
>>>
>>> Sometimes the result is "Should be near 84000: 100000", which is kind of
>>> correct, since the tickval is 100000. Still, I think those functions
>>> should, in the RTDM driver context, be independent of the tick value set
>>> by the user program... Maybe using oneshot for the driver calls and
>>> periodic for the application... I really don't know what the best
>>> approach would be here...
>> rtdm_clock_read always uses the nucleus clock.
> 
> Do you mean that rtdm_clock_read will always return a multiple of the
> tickval?
> If so, I think it would be good to make that clear in its documentation.
> "Get system time" isn't enough to convey this information, IMHO.

Please have a look at the two sentences of documentation I added to SVN
(didn't make it into the release).

I gave your scenario a try, and I was able to verify that
rtdm_task_busy_sleep works correctly under both timer modes. Indeed, the
behaviour of rtdm_clock_read may have been confusing due to the missing
documentation, but it was correct as well.
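
For reference, here is a minimal, self-contained sketch of the measurement
discussed above. It is not the original driver code; the module name, task
name, priority and the cleanup pattern are assumptions, and it presumes a
Xenomai 2.x kernel built with RTDM support:

/* clock_probe.c - hypothetical RTDM module reproducing the measurement. */

#include <linux/module.h>
#include <rtdm/rtdm_driver.h>

static rtdm_task_t probe_task;

static void probe_fn(void *arg)
{
    nanosecs_abs_t start, end;

    start = rtdm_clock_read();       /* nucleus time, in ns */
    rtdm_task_busy_sleep(84000);     /* busy-wait for ~84 us */
    end = rtdm_clock_read();

    /* In oneshot mode the delta comes out close to 84000 ns. In periodic
     * mode with tickval=100000 both timestamps are multiples of the tick,
     * so the delta prints as 0 or 100000, depending on whether a tick
     * boundary was crossed during the busy-wait. */
    rtdm_printk(KERN_INFO "delta = %llu ns\n",
                (unsigned long long)(end - start));
}

static int __init probe_init(void)
{
    /* priority 99, period 0 (non-periodic task) */
    return rtdm_task_init(&probe_task, "clock_probe", probe_fn, NULL,
                          99, 0);
}

static void __exit probe_exit(void)
{
    /* The task terminates on its own; poll until it is gone. */
    rtdm_task_join_nrt(&probe_task, 100);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");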

> 
>> Using something different 
>> (e.g. always TSC) would break applications specifying absolute times
>> derived from the return values of other skins' functions.
> 
> I did not understand. I'm talking about using the TSC only for these two
> functions. I cannot see why it shouldn't be possible... I mean, I think the
> driver should not depend on the user-space program's timer for these two
> functions.
> 
>>> But the worst case is that sometimes I get "Should be near 84000: 0",
>>> which is clearly an incorrect result.
>> That might be a rounding issue somewhere, as the sleep then clearly did
>> not wait at least one tick. Will have to check this when time permits.
>>
>>> After I run the latency program, the timer turns back to oneshot and
>>> everything works fine again.
>>>
>>> What can I do to solve this problem?
>> Use oneshot mode in the meantime - or even longer ;).
> 
> That is what I'm going to do, but I know it is not a definitive solution.
> Since I'm providing a framework, the user should decide which approach,
> oneshot or periodic mode, is better for him/her.
> 
>> Why do you prefer 
>> periodic mode for your application? Another workaround: reduce the tick
>> interval.
> 
> I have some loops in my user-space programs that a common 100us tick would
> satisfy. I think the overhead would be lower than with the aperiodic
> oneshot mode... I'm not quite sure about that. But that is not the
> question. My application is just one use case of my framework (actually I
> haven't even started building it). The final user should decide what the
> best approach is for him/her, not me. So, I would prefer that the driver
> be independent of the timer source chosen by the user program.
> 

I see your point. But when the user decides to pick a low-precision
timer source to reduce overhead, (s)he has to live with the side effect.
There is no such thing as a "user vs. kernel" timer source - it is
always the same one. Thus, the precision of time stamping in drivers
suffers as well.
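
To illustrate the point with a sketch (assuming the Xenomai 2.x native
skin, whose rt_timer_set_mode() takes, to my understanding, the tick period
in nanoseconds or TM_ONESHOT; the task name and priority are made up):
switching the mode here reconfigures the one and only system timer, so
driver-side rtdm_clock_read() timestamps inherit the chosen resolution.

/* timer_mode.c - hypothetical example; link against the native skin
 * (e.g. -lnative) of a Xenomai 2.x installation. */

#include <stdio.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/timer.h>

int main(void)
{
    RT_TASK main_task;

    mlockall(MCL_CURRENT | MCL_FUTURE);

    /* Attach the calling thread to the native skin. */
    rt_task_shadow(&main_task, "timer-setup", 10, 0);

    /* Periodic mode, 100 us tick: from now on every nucleus timestamp,
     * including rtdm_clock_read() inside drivers, has 100 us granularity. */
    if (rt_timer_set_mode(100000))
        fprintf(stderr, "rt_timer_set_mode(periodic) failed\n");

    /* ... run the periodic application loops here ... */

    /* Back to aperiodic (oneshot) mode: nanosecond resolution again,
     * for user space and drivers alike. */
    rt_timer_set_mode(TM_ONESHOT);

    return 0;
}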

Jan

