I ran a 5-day stress test on 2.0.36-9J, throwing several huge web accesses
at it (to force the memory to start swapping), as well as a lot of X Windows
eye candy to bog down the CPU.  I ran the "monitor" RT app and passed its
output through a perl script to capture the global max and min.
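(For the curious: the perl script just tracks the extremes of the samples.
A rough C equivalent, assuming "monitor" prints one sample value per line,
would be:

#include <stdio.h>
#include <limits.h>

/* Read one sample value per line from stdin and report the global
 * extremes -- the same job my perl filter does. */
int main(void)
{
        long sample;
        long min = LONG_MAX;
        long max = LONG_MIN;

        while (scanf("%ld", &sample) == 1) {
                if (sample < min)
                        min = sample;
                if (sample > max)
                        max = sample;
        }
        printf("global min = %ld, global max = %ld\n", min, max);
        return 0;
}
)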

What I got was a global min of 5 and a global max of 60.

What do these numbers mean?
In the Monitor source, they are differences of rt_get_time() values, which
is a macro that calls rt_get_time_pentium().

I was able to trace the logs and find that the Pentium scalar for my machine
was calculated to be 38544535.

It appears the code for rt_get_time_pentium() basically reads the Pentium's
real-time counter (incremented on every clock tick) and scales it.

The formula appears to be Time =  counter * scalar / 2^32.
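
If I'm reading it right, that amounts to a 32x32 -> 64-bit multiply that
keeps the high word, something like this (paraphrased from my reading, not
the actual RTLinux source):

#include <stdint.h>

#define PENTIUM_SCALAR 38544535UL  /* calibration value logged on my machine */

/* Sketch of the scaling as I understand it: time = counter * scalar / 2^32,
 * i.e. multiply the counter by the scalar and keep the high 32 bits. */
static inline uint32_t scale_counter(uint32_t counter)
{
        return (uint32_t)(((uint64_t)counter * PENTIUM_SCALAR) >> 32);
}

With my scalar that is 38544535 / 2^32, or roughly 0.009 per counter tick,
which is where the 8.9 ms figure below comes from if the scaled unit is read
as seconds.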

If that is true, then the resolution would be 8.9 ms, and the monitor RT app
has a min/max response of 45 to 300 ms?  That seems a far cry from the
120 us claimed in the paper.

Could someone please explain what the monitor program is actually measuring,
and what the time resolution of rt_get_time_pentium() is?  (It appears it is
trying to calibrate to some predefined unit of measure, but the source doesn't
say what it is calibrating to...)

Thanks,
        Kirk Smith
        Micro Systems Engineering
        [EMAIL PROTECTED]