Re: [rtl] Time Stamped Counter (TSC) jitter
Shel -- take a look at the paper by Fred Proctor; he addresses some of
this. See:

  http://www.realtimelinuxfoundation.org/events/rtlws-2001/keynotes.html

This was a keynote at the recent Real-Time Linux Workshop.

Regards,
Chris

Christopher D. Carothers
Assistant Professor
Department of Computer Science
Rensselaer Polytechnic Institute
110 8th Street
Troy, New York 12180-3590
e-mail: [EMAIL PROTECTED]
web page: www.cs.rpi.edu/~chrisc
phone: (518) 276-2930
fax: (518) 276-4033

On Tue, 11 Dec 2001, Sheldon Hoffman wrote:

> Hello all,
>
> I would like to use the Pentium processor's Time Stamped Counter (TSC)
> to measure elapsed time with an accuracy of <= 1 usec. The TSC ticks
> at the processor's clock frequency, so on a 333 MHz machine the TSC
> ticks at (approximately) 333,000,000 Hz.
>
> In order to use the TSC as an elapsed-time clock, I need to determine
> the frequency of the TSC in Hz (i.e., how many ticks the TSC makes
> per second).
>
> I tried using the PC's 8254 timer, which ticks at 1,193,180 Hz, to
> determine the frequency of the TSC, but I have encountered a problem:
> I am seeing a large variance in the measured rate over multiple tests,
> and I don't understand why it occurs or what I can do to eliminate it.
>
> I was hoping someone might have some insight into how I can accurately
> determine the frequency of the TSC.
>
> As another note, I'm doing this testing under DOS (16-bit virtual 8086
> mode) with interrupts disabled. I believe the same method should apply
> to RTLinux.
>
> I am seeing a TSC rate of 331,113,713 +/- 35,000 TSC ticks per second
> on a 333 MHz Pentium II processor.
>
> This variance seems excessively large. Here are some other facts:
>
> 1. As recommended by Intel's IA-32 Intel Architecture Software
>    Developer's Manual, I execute the CPUID instruction before
>    executing RDTSC to serialize the TSC read.
>
> 2. I use simple inp/outp instructions to read counter 0 of the 8254
>    (port 0x40). This returns a 15-bit countdown value which ticks at
>    a frequency of 1,193,180 Hz.
>
> Here is the algorithm I use to "calibrate" (determine the frequency
> of) the TSC:
>
> 1. Synchronize the CPU with the 8254 by reading it until it wraps
>    (from 0x7FFF to 0).
>
> 2. Execute CPUID and RDTSC. This is the starting TSC count.
>
> 3. Loop 36 times (36 ~= 1,193,180 / 32,768 ~= one second's worth of
>    8254 wraps):
>
>    A. Wait for the 8254 to wrap.
>    B. Execute CPUID and RDTSC (and store the result as the final TSC
>       count).
>
> 4. tscHz = (final TSC count - starting TSC count) * 1193180.0 /
>    (36.0 * 32768.0);
>
> I know I'm pretty close to the right answer, because when I run the
> program on various Pentiums I see approximately the correct TSC rate
> for the MHz of the processor. It is the magnitude of the variance
> (jitter?) that I don't understand.
>
> Any insights would be much appreciated! I'm happy to share code with
> anyone interested in this.
>
> Shel Hoffman
Re: [rtl] Time Stamped Counter (TSC) jitter
On Tue, Dec 11, 2001 at 06:55:37AM -0600, Sheldon Hoffman wrote:

> Hello all,
>
> I would like to use the Pentium processor's Time Stamped Counter (TSC)
> to measure elapsed time with an accuracy of <= 1 usec. The TSC ticks
> at the processor's clock frequency, so on a 333 MHz machine the TSC
> ticks at (approximately) 333,000,000 Hz.
>
> In order to use the TSC as an elapsed-time clock, I need to determine
> the frequency of the TSC in Hz (i.e., how many ticks the TSC makes
> per second).
>
> I was hoping someone might have some insight into how I can accurately
> determine the frequency of the TSC.
>
> As another note, I'm doing this testing under DOS (16-bit virtual 8086
> mode) with interrupts disabled. I believe the same method should apply
> to RTLinux.
>
> I am seeing a TSC rate of 331,113,713 +/- 35,000 TSC ticks per second
> on a 333 MHz Pentium II processor.
>
> This variance seems excessively large. Here are some other facts:
>
> 1. As recommended by Intel's IA-32 Intel Architecture Software
>    Developer's Manual, I execute the CPUID instruction before
>    executing RDTSC to serialize the TSC read.

The point of this is that CPUID actually flushes the pipeline. With
modern processors doing out-of-order execution and the like, the CPUID
forces your RDTSC instruction to be executed _exactly_ in place instead
of at some other, essentially random, point in your program. [give or
take...]

> Here is the algorithm I use to "calibrate" (determine the frequency
> of) the TSC:
>
> 1. Synchronize the CPU with the 8254 by reading it until it wraps
>    (from 0x7FFF to 0).
>
> 2. Execute CPUID and RDTSC. This is the starting TSC count.
>
> 3. Loop 36 times (36 ~= 1,193,180 / 32,768 ~= one second's worth of
>    8254 wraps):
>
>    A. Wait for the 8254 to wrap.
>    B. Execute CPUID and RDTSC (and store the result as the final TSC
>       count).
>
> 4. tscHz = (final TSC count - starting TSC count) * 1193180.0 /
>    (36.0 * 32768.0);
>
> I know I'm pretty close to the right answer, because when I run the
> program on various Pentiums I see approximately the correct TSC rate
> for the MHz of the processor. It is the magnitude of the variance
> (jitter?) that I don't understand.
>
> Any insights would be much appreciated! I'm happy to share code with
> anyone interested in this.

What I have to question is... why? Why are you going to all this
trouble? The TSC is a simple register: it ticks once every clock cycle,
by definition. It's on the chip and hard-wired to do this. If you want
to know how many times a second it's going to tick, why not ask the CPU
how many times a second _it_ ticks?

The mainline Linux kernel has already worked this out once [you might
like to borrow some of that code...], but why not use something like
CPUID? It should tell you exactly what you want to know.

And for reference, you might like to try adding bits of this to the
RTLinux scheduler [easy; I've done it] instead of running it as just a
normal RT task. RTLinux is pre-emptive and so on; you're asking to be
put in a world of hurt if you try to measure things this accurately
from the RT equivalent of user space.

Gary (-;

PS Can I have a shufty at your source? I'm interested...
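As a concrete illustration of the CPUID/RDTSC ordering described above,
here is a minimal sketch (not code from this thread) in C with
GCC-style inline assembly; it assumes a 32-bit x86 target, not the
original 16-bit DOS environment:

    #include <stdint.h>

    /* Read the TSC with a serializing CPUID in front of it, so the
     * RDTSC cannot be hoisted ahead of earlier instructions by the
     * out-of-order core. */
    static inline uint64_t read_tsc_serialized(void)
    {
        uint32_t lo, hi;

        __asm__ __volatile__(
            "cpuid\n\t"          /* serialize: drain the pipeline   */
            "rdtsc"              /* then read the TSC into EDX:EAX  */
            : "=a"(lo), "=d"(hi)
            : "a"(0)             /* CPUID leaf 0                    */
            : "ebx", "ecx", "memory");

        return ((uint64_t)hi << 32) | lo;
    }

On later CPUs the same effect is usually obtained with RDTSCP or an
LFENCE; RDTSC pair, but CPUID followed by RDTSC is the sequence the
IA-32 manual cited in the original post recommends and the one used in
the measurements above.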
[rtl] Time Stamped Counter (TSC) jitter
Hello all,

I would like to use the Pentium processor's Time Stamped Counter (TSC)
to measure elapsed time with an accuracy of <= 1 usec. The TSC ticks at
the processor's clock frequency, so on a 333 MHz machine the TSC ticks
at (approximately) 333,000,000 Hz.

In order to use the TSC as an elapsed-time clock, I need to determine
the frequency of the TSC in Hz (i.e., how many ticks the TSC makes per
second).

I tried using the PC's 8254 timer, which ticks at 1,193,180 Hz, to
determine the frequency of the TSC, but I have encountered a problem:
I am seeing a large variance in the measured rate over multiple tests,
and I don't understand why it occurs or what I can do to eliminate it.

I was hoping someone might have some insight into how I can accurately
determine the frequency of the TSC.

As another note, I'm doing this testing under DOS (16-bit virtual 8086
mode) with interrupts disabled. I believe the same method should apply
to RTLinux.

I am seeing a TSC rate of 331,113,713 +/- 35,000 TSC ticks per second
on a 333 MHz Pentium II processor.

This variance seems excessively large. Here are some other facts:

1. As recommended by Intel's IA-32 Intel Architecture Software
   Developer's Manual, I execute the CPUID instruction before executing
   RDTSC to serialize the TSC read.

2. I use simple inp/outp instructions to read counter 0 of the 8254
   (port 0x40). This returns a 15-bit countdown value which ticks at a
   frequency of 1,193,180 Hz.

Here is the algorithm I use to "calibrate" (determine the frequency of)
the TSC:

1. Synchronize the CPU with the 8254 by reading it until it wraps (from
   0x7FFF to 0).

2. Execute CPUID and RDTSC. This is the starting TSC count.

3. Loop 36 times (36 ~= 1,193,180 / 32,768 ~= one second's worth of
   8254 wraps):

   A. Wait for the 8254 to wrap.
   B. Execute CPUID and RDTSC (and store the result as the final TSC
      count).

4. tscHz = (final TSC count - starting TSC count) * 1193180.0 /
   (36.0 * 32768.0);

I know I'm pretty close to the right answer, because when I run the
program on various Pentiums I see approximately the correct TSC rate
for the MHz of the processor. It is the magnitude of the variance
(jitter?) that I don't understand.

Any insights would be much appreciated! I'm happy to share code with
anyone interested in this.

Shel Hoffman
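For reference, here is a rough C sketch of the calibration loop
described above (this is not the poster's actual code). It assumes
DOS-style port I/O helpers named inportb/outportb (provided by e.g.
Borland C or DJGPP, or your own equivalents), the read_tsc_serialized()
helper sketched earlier in the thread, interrupts already disabled, and
that counter 0 of the 8254 has been set up so the observed countdown
runs from 0x7FFF down to 0, i.e. 32,768 PIT ticks per wrap, as stated
in the post:

    /* Sketch of the calibration described above; the port-I/O helpers
     * and names are assumptions, not code from this thread. */
    #include <stdint.h>

    #define PIT_CH0  0x40        /* 8254 counter 0 data port */
    #define PIT_CMD  0x43        /* 8254 mode/command port   */
    #define PIT_HZ   1193180.0   /* 8254 input clock in Hz   */

    extern unsigned char inportb(unsigned short port);           /* assumed */
    extern void outportb(unsigned short port, unsigned char v);  /* assumed */
    extern uint64_t read_tsc_serialized(void);   /* sketched earlier */

    static uint16_t read_pit0(void)
    {
        /* Latch counter 0, then read low and high bytes. */
        outportb(PIT_CMD, 0x00);
        uint8_t lo = inportb(PIT_CH0);
        uint8_t hi = inportb(PIT_CH0);
        return (uint16_t)((hi << 8) | lo);
    }

    /* Spin until the countdown value jumps back up, i.e. the counter
     * has wrapped. */
    static void wait_for_pit_wrap(void)
    {
        uint16_t prev = read_pit0();
        for (;;) {
            uint16_t cur = read_pit0();
            if (cur > prev)
                return;
            prev = cur;
        }
    }

    double calibrate_tsc_hz(void)
    {
        const int wraps = 36;    /* ~1 second's worth of 8254 wraps */

        wait_for_pit_wrap();                     /* step 1: sync with the 8254 */
        uint64_t start = read_tsc_serialized();  /* step 2: starting TSC count */

        for (int i = 0; i < wraps; i++)          /* step 3: count 36 more wraps */
            wait_for_pit_wrap();
        uint64_t end = read_tsc_serialized();    /*         final TSC count */

        /* step 4: TSC ticks elapsed divided by elapsed time in seconds */
        return (double)(end - start) * PIT_HZ / (wraps * 32768.0);
    }

The wrap test simply treats any increase in the latched countdown as a
wrap; with interrupts disabled, as in the original DOS test, the
polling loop runs far faster than the 1.19 MHz tick rate, so wraps are
not missed.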