Hi Fred,

On 05/15/2011 10:01 PM, Tijd Dingen wrote:
Check. That is what I understood the "Overlapped variable tau estimators"
bit on Wikipedia to be about. Same raw data, smarter processing.

Indeed.
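
Just to put numbers on "smarter processing": the overlapping estimator
forms second differences of the phase at every start index, not just
every m-th one. A minimal Python sketch (function and variable names
are mine, not from any particular library):

import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation from phase samples x (seconds),
    sampled at interval tau0 (seconds), for tau = m * tau0."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n < 2 * m + 1:
        raise ValueError("not enough samples for this averaging factor")
    # Second differences at EVERY offset -- this is the overlap.
    d2 = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
    avar = np.sum(d2 * d2) / (2.0 * (m * tau0) ** 2 * len(d2))
    return float(np.sqrt(avar))

Same raw data as the non-overlapped version, just many more second
differences per tau, which is where the improved confidence comes from.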

 Notice that you need to adjust your data for cycle-slips. If you don't
 do that you will get a significant performance hit, typically an ADEV
 curve several decades higher than expected.

"Adjust for cycle-slips"... You mean the following ... ?

Your processing back-end receives a series of timestamps from the
timestamper. The timestamper claims "this is the timestamp for cycle
number XYZ. No, really!". However you notice that, given the
distribution of all the other (cycle_no, time) pairs, this would be
hard to believe. If however you add +1 to that "claimed" cycle number,
then it would fit perfectly. So you adjust the cycle number by one,
under the assumption that /somewhere/ 1 cycle got lost. Somewhere
being a PLL cycle slip, an FPGA counter missing a count, etc...

That sort of adjustment I take it? If yes, then understood. If not,
I'm afraid I don't follow. :)

Consider that your time-base and your measurement signal are 1E-10 away
from each other in the case I gave; let's assume 10 MHz clocks. This
means that their phases will shift by 1E-10 * 1E7 = 1E-3 of a cycle
every second (0.1 ns of a 100 ns period), so after 1000 seconds they
will have shifted a full cycle, and somewhere in that data there will
be a 99.9 ns to 0.0 ns (or vice-versa) transition. This is not because
the counter missed a cycle, but because the signals actually beat
against each other.

Now, if you take that time-series and do an ADEV on it you will get a
surprise. If you correct it such that you extend the curve by +100 ns
or -100 ns (i.e. add or subtract a cycle, but expressed as a time
shift) you will get a more correct TI curve, which will give you the
results you expect.
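
In code, the correction I have in mind is something like this (Python,
illustrative names; the 100 ns period matches the 10 MHz example):

import numpy as np

def unwrap_beat(ti_ns, period_ns=100.0):
    """Remove +/- one-period wraps from a time-interval series. A jump
    of more than half a period between consecutive readings is taken
    as the beat wrapping around, not as a real phase step, so a whole
    period is added or subtracted to keep the curve continuous."""
    ti = np.asarray(ti_ns, dtype=float)
    out = np.empty_like(ti)
    out[0] = ti[0]
    offset = 0.0
    for i in range(1, len(ti)):
        step = ti[i] - ti[i - 1]
        if step > period_ns / 2:      # e.g. 0.1 ns -> 99.9 ns
            offset -= period_ns
        elif step < -period_ns / 2:   # e.g. 99.9 ns -> 0.0 ns
            offset += period_ns
        out[i] = ti[i] + offset
    return out

Run the ADEV on the unwrapped series and the artificial cliff in the
curve disappears.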

 Do you mean to say that your low resolution time-stamping rate is 400
 MS/s and high resolution time-stamping rate is 20 MS/s?

That is what I mean to say. There are still design issues with both
modes, so it could become better, could become worse. Knowing how
reality works, probably worse. ;-> But those numbers are roughly it,
yes.

Great, now we speak the same language on that aspect.

At the current stage: 200 MS/s at the lower resolution is easy. 400 MS/s
is trickier.

The reason: 400 MHz is the full tap sampling speed, and I can barely keep
up with the data. The data is from a 192-tap delay line incidentally.
Active length is typically about 130 taps, but I have to design for the
worst case. Or rather the best case, because needing all those taps to
fit a 2.5 ns cycle would be really good news. ;) But hey, we can always
hope for fast silicon, right?

See what a little fiddling with temperature and core voltage can do for you :)
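
Incidentally, in case a cross-check is useful, here is how I would
model the tap decode in software. Pure Python, and it assumes a clean
thermometer code plus a calibrated active length, which a real carry
chain won't quite give you:

def decode_taps(taps, coarse_count, period_ps=2500, active_taps=130):
    """Model of a delay-line TDC decode. The number of set taps says
    how far into the 2.5 ns coarse cycle the edge landed. Counting set
    bits (popcount) rather than locating the first transition keeps
    the decode tolerant of isolated bubbles in the thermometer code."""
    position = min(sum(taps), active_taps)
    fine_ps = position * period_ps / active_taps  # interpolate in-cycle
    return coarse_count * period_ps + fine_ps

The active_taps value would of course come from calibrating the line
against the 2.5 ns cycle rather than being a constant.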

Anyways, the first 5 pipeline stages right after the taps work at 400
MHz. The second part (as it happens, also 5 stages) works at 200 MHz,
if only for the simple reason that the block RAM in a Spartan-6 has a
max frequency of about 280 MHz. So the 200 MHz pipeline processes 2
timestamps in parallel.

For this part of the processing I have spent more design effort on the
modules that are responsible for the high resolution timestamps. So
low resolution, 200 MS/s == done. 400 MS/s == working on it. :P

I guess you learn by doing, right? :)

 It is perfectly respectable to skip a number of cycles, but the number
 of cycles must be known. One way is to have an event-counter which is
 sampled, or you always provide samples at a fixed distance
 event-counter-wise such that the event-counter can be rebuilt
 afterwards. The latter method saves data, but has the drawback that
 your observation period becomes dependent on the frequency of the
 signal, which may or may not be what you want, depending on your
 application.

What I have now is an event counter which is sampled.

Great.
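
For completeness, the "rebuild afterwards" variant is trivial on the
processing side, assuming the hardware emits one timestamp per n input
events (a Python sketch, names mine):

def rebuild_events(timestamps, n):
    """One timestamp per n input events: the event counter is implicit,
    sample k corresponds to event k * n."""
    return [(k * n, t) for k, t in enumerate(timestamps)]

The drawback is visible right there: the spacing of the timestamps,
and hence your observation period, scales with the input frequency.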

 Recall, you will have to store and process this flood of data. For
 higher tau plots you will be wading through large amounts of data
 anyway, so dropping high frequency data to achieve a more manageable
 data rate is needed in order to be able to store and process the
 longer tau data.

Heh, "store and process this flood of data" is the reason why I'm at
revision numero 3 for the frigging taps processor. :P But oh well,
good for my pipeline design skills.

Definitely. It's a learning process to unroll things one understands
well in sequential processing but all of a sudden needs to do in
parallel.
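
On taming the flood for the longer taus: the nice property of phase
(time-error) data is that plain decimation is enough, since ADEV at
tau = m * tau0 only looks at samples spaced m apart. A sketch:

import numpy as np

def decimate_phase(x, m):
    """Keep every m-th phase sample. For time-error data no averaging
    or filtering is needed; ADEV at tau = m * tau0 and its multiples
    only uses samples spaced m apart anyway."""
    return np.asarray(x, dtype=float)[::m]

You give up some of the overlap for the overlapping estimator at the
decimated rate, but nothing else.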

 For most of the ADEV plots on stability, starting at 100 ms or 1 s is
 perfectly useful, so a measurement rate of 10 S/s is acceptable.

Well, that would be too easy. Where's the fun in that?

True, where's the fun in that :)

 For high speed things like startup burps etc. you have a different
 set of requirements. A counter capable of doing both would be great,
 but they usually don't do it.

Check. For me the main purpose of this thing is:
1 - learn new things
2 - be able to measure frequency with accuracy comparable to current commercial counters
3 - monitor frequency stability

All good goals. Don't let me get in the way of them... :)
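
On goal 2, the standard trick once you have a stream of (cycle_no,
timestamp) pairs is a least-squares fit over the whole gate: the slope
is the frequency, and using every timestamp instead of just the two
endpoints is where the accuracy gain over a plain reciprocal counter
comes from. Roughly (Python, assuming numpy; names are mine):

import numpy as np

def estimate_frequency(cycle_no, t):
    """Least-squares frequency from (cycle_no, timestamp) pairs.
    Fit cycle_no = f * t + c; the slope f is the frequency in Hz."""
    f, _c = np.polyfit(np.asarray(t, float), np.asarray(cycle_no, float), 1)
    return f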

Anyways, for now the mandate is to be able to spit out as many
timestamps as I can get away with, and then figure out fun ways to
process them. ;)

Also interesting. I'm considering similar projects. I don't have a
Spartan-6 lying around on a board, so I will have to make do with less.
I've got some Spartan-3E boards which should suffice for some initial
experience.

Cheers,
Magnus
