Re: [time-nuts] Question about frequency counter testing

2018-06-06 Thread Magnus Danielson
Hi Oleg,

On 06/06/2018 02:53 PM, Oleg Skydan wrote:
> Hi, Magnus!
> 
> Sorry for the late answer, I injured my left eye last Monday, so I have
> had very limited ability to use a computer.

Sorry to hear that. Hope you heal up well and quickly.

> From: "Magnus Danielson" 
>> As long as the sums C and D come out correct, your
>> path to them can be whatever.
> 
> Yes. It produces the same sums.
> 
>> Yes, please do, then I can double-check it.
> 
> I have written a note and attached it. The described modifications to the
> original method were successfully tested on my experimental HW.

You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase-ramp within the block
will have a common offset x_{N_1}, and that the x_n^0 series is the
series of values with that offset removed. This is fine, it should just
be introduced before it is applied in (5).

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.
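
If E is essentially the sum of the block-local indices (my reading of the
note, not necessarily its exact notation), that reduction is just the
closed-form index sum: inserting the offset split above into the
index-weighted sum over the second block gives (writing D_2^0 for the
index-weighted sum of the offset-free series)

sum_{n=0}^{N_2-1} n x_{N_1+n} = x_{N_1} sum_{n=0}^{N_2-1} n + D_2^0
                              = x_{N_1} N_2(N_2-1)/2 + D_2^0

so no separate E needs to be carried along.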

Anyway, you have surely understood the toolbox given to you, and your
contribution is to play the same game but reduce the needed dynamics
of the blocks. Neat. I may include that with due reference.

>> Yeah, now you can move your hardware focus on considering interpolation
>> techniques beyond the processing power of least-square estimation, which
>> integrates noise way down.
> 
> If you are talking about adding traditional HW interpolation of the
> trigger events, I have no plans to do it. It is not possible while
> keeping the 2.5 ns base counter resolution (there is no way to output a
> 400 MHz clock signal out of the chip), and I do not want to add extra
> complexity to the HW of this project.
> 
> But the HW I use can simultaneously sample up to 10 timestamps. So I
> can push the one-shot resolution down to 250 ps using several delay
> lines (theoretically). I do not think that going down to 250 ps makes
> much sense (also I have other plans for that additional HW), but a 2x or
> 4x one-shot resolution improvement (down to 1.25 ns or 625 ps) is
> relatively simple to implement in HW and should be a good idea to try.

Sounds fun!

>>> I will probably throw out the power hungry and expensive SDRAM chip or
>>> use a much smaller one :).
>>
>> Yeah, it would only be if you build multi-tau PDEV plots that you would
>> need much memory; other than that it is just buffer memory before the
>> data goes to off-board processing, at which time you would need to
>> convey the C, D, N and tau0 values.
> 
> Yes, I want to produce multi-tau PDEV plots :).

Makes good sense. :)

> They can be computed with a small memory footprint, but they will be non
> overlapped PDEVs, so the confidence level at large taus will be poor
> (with practical durations of the measurements). I have working code
> that implements such an algorithm. It uses only 272 bytes of memory for
> each decade (1-2-5 values).

Seems very reasonable. If you are willing to use more memory, you can do
overlapping once decimated down to a suitable rate. On the other hand,
considering the rate of samples, there is a lot of gain already there.

> I need to think about how to do the overlapping PDEV calculations with
> minimal memory/processing power requirements (I am aware that decimation
> routines should not use the overlapped calculations).

It's fairly simple: as you decimate samples and/or blocks, the produced
blocks overlap one way or another. The multiple overlap variants should
each behave as a complete PDEV stream, and the variances can then be
added safely.

> BTW, is there any "optimal overlapping"? Or should I just use as much
> data as I can process?

"optimal overlapping" would be when all overlapping variants is used,
that is all with tau0 offsets available. When done for Allan Deviation
some refer to this as OADEV. This is however an misnomer as it is an
ADEV estimator which just has better confidence intervals than the
non-overlapping ADEV estimator. Thus, both estimator algorithms have the
same scale of measure, that of ADEV, but different amount of Equivalent
Degrees of Freedom (EDF) which has direct implications on the confidence
interval bounds. The more EDF, the better confidence interval. The more
overlapping, the more EDF. Further improvements would be TOTAL ADEV and
Theo, which both aim to squeeze out as much EDF as possible from the
dataset, in an attempt of reducing the length of measurement.
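
For concreteness, a minimal C sketch of the maximally overlapping ADEV
estimator over phase data (the standard textbook form, not code from
either counter; the names are mine):

#include <math.h>
#include <stddef.h>

/* Overlapping ADEV estimate from phase samples x[0..n-1] (seconds),
   taken at interval tau0, evaluated at tau = m * tau0. Every available
   tau0 offset is used, i.e. maximum overlap and thus maximum EDF. */
double oadev(const double *x, size_t n, double tau0, size_t m)
{
    if (n < 2 * m + 1)
        return 0.0;                   /* not enough data at this tau */

    size_t terms = n - 2 * m;         /* overlapping second differences */
    double sum = 0.0;

    for (size_t i = 0; i < terms; i++) {
        double d = x[i + 2 * m] - 2.0 * x[i + m] + x[i];
        sum += d * d;
    }

    double tau = (double)m * tau0;
    return sqrt(sum / (2.0 * tau * tau * (double)terms));
}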

>> Please report on that progress! Sounds fun!
> 
> I will drop a note when I move on to the next step. Things are a bit
> slower now.

Take care. Heal up properly. It's a hobby after all. :)

Good work there.

Cheers,
Magnus

> Thanks!
> Oleg

Re: [time-nuts] Question about frequency counter testing

2018-06-06 Thread Oleg Skydan

Hi, Magnus!

Sorry for the late answer, I injured my left eye last Monday, so I have had
very limited ability to use a computer.


From: "Magnus Danielson" 

As long as the sums C and D come out correct, your
path to them can be whatever.


Yes. It produces the same sums.


Yes, please do, then I can double-check it.


I have written a note and attached it. The described modifications to the
original method were successfully tested on my experimental HW.



Yeah, now you can move your hardware focus on considering interpolation
techniques beyond the processing power of least-square estimation, which
integrates noise way down.


If you are talking about adding traditional HW interpolation of the trigger
events, I have no plans to do it. It is not possible while keeping the
2.5 ns base counter resolution (there is no way to output a 400 MHz clock
signal out of the chip), and I do not want to add extra complexity to the
HW of this project.


But the HW I use can simultaneously sample up to 10 timestamps. So I can
push the one-shot resolution down to 250 ps using several delay lines
(theoretically). I do not think that going down to 250 ps makes much sense
(also I have other plans for that additional HW), but a 2x or 4x one-shot
resolution improvement (down to 1.25 ns or 625 ps) is relatively simple to
implement in HW and should be a good idea to try.



I will probably throw out the power hungry and expensive SDRAM chip or
use a much smaller one :).


Yeah, it would only be if you build multi-tau PDEV plots that you would
need much memory; other than that it is just buffer memory before the
data goes to off-board processing, at which time you would need to
convey the C, D, N and tau0 values.


Yes, I want to produce multi-tau PDEV plots :).

They can be computed with a small memory footprint, but they will be non
overlapped PDEVs, so the confidence level at large taus will be poor (with
practical durations of the measurements). I have working code that
implements such an algorithm. It uses only 272 bytes of memory for each
decade (1-2-5 values).


I need to think about how to do the overlapping PDEV calculations with
minimal memory/processing power requirements (I am aware that decimation
routines should not use the overlapped calculations).


BTW, is there any "optimal overlapping"? Or should I just use as much data
as I can process?



Please report on that progress! Sounds fun!


I will drop a note when I move on to the next step. Things are a bit
slower now.


Thanks!
Oleg 


Efficient C and D sums calculation for least square estimation of phase, frequency and PDEV.pdf
Description: Adobe PDF document

Re: [time-nuts] Question about frequency counter testing

2018-05-27 Thread Glenn Little WB4UIV

It appears that I replied to the wrong message, please ignore.

Glenn




--
---
Glenn LittleARRL Technical Specialist   QCWA  LM 28417
Amateur Callsign:  WB4UIVwb4...@arrl.netAMSAT LM 2178
QTH:  Goose Creek, SC USA (EM92xx)  USSVI LM   NRA LM   SBE ARRL TAPR
"It is not the class of license that the Amateur holds but the class
of the Amateur that holds the license"



Re: [time-nuts] Question about frequency counter testing

2018-05-27 Thread Glenn Little WB4UIV

The MSDS is here:
https://simplegreen.com/data-sheets/

They claim that it is non-reactive and chemically stable.
It is for water-tolerant surfaces and should be rinsed,
probably due to the citric acid.

Glenn



Re: [time-nuts] Question about frequency counter testing

2018-05-27 Thread Magnus Danielson
Hi Oleg,

On 05/27/2018 05:52 PM, Oleg Skydan wrote:
> Hi!
> 
>>> It looks like the proposed method of decimation can be
>>> efficiently realized on the current HW.
> 
> I had some free time yesterday and today, so I decided to test the new
> algorithms on the real hardware (the HW is still an old "ugly
> construction" one, but I hope I will have some time to make a normal HW -
> I have already got almost all the components I need).
> 
> I had to modify the original decimation scheme you proposed in the paper
> so it better fits my HW; the calculation precision and speed should also
> be higher now.

The point of the decimation scheme I did was to provide a toolbox,
and as long as you respect the rules within that toolbox you can adapt
it just as you like. As long as the sums C and D come out correct, your
path to them can be whatever.

> A nice side effect is that I do not need to care about phase
> unwrapping anymore.

You should always care about how that works out, and if you play your
cards right, it works out very smoothly.

> I can prepare a short description of the
> modifications and post it here, if it is of interest.

Yes, please do, then I can double-check it.

> It works like a charm!

Good. :)

> The new algorithm (based on C and D sums calculation and decimation) uses
> much less memory (less than 256 KB for any gating time/sampling speed);
> the old one (direct LR calculation) was very memory hungry - it used
> 4 x Sampling_Rate bytes/s, i.e. 20 MB per second of gate time at 5 MSPS.

This is one of the benefits of this approach. Assuming the same tau0, it
is all contained in the C, D and N triplet; the memory needs of these
values can be trivially analyzed and are very small, so it's a really
effective decimation technique while maintaining the least-square
properties.
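
For instance, the complete per-block state that has to be stored or
conveyed could look like this (a hypothetical layout; the field widths
are my assumption, not the project's):

#include <stdint.h>

/* Everything downstream processing needs from one decimated block:
   the plain sum, the index-weighted sum, the block length and the
   underlying sampling interval. */
typedef struct {
    double   C;     /* sum of phase samples x_n over the block   */
    double   D;     /* sum of n * x_n over the block             */
    uint32_t N;     /* number of samples in the block            */
    double   tau0;  /* sampling interval of the undecimated data */
} block_sums_t;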

> Now I can fit all data into the internal memory and have a single-chip
> digital part of the frequency counter, well, almost single-chip ;) The
> timestamping speed has increased and is limited now by the bus/bus
> matrix switch/DMA unit at a bit more than 24 MSPS with continuous real
> time data processing. It looks like that is the limit for the used chip
> (I expected slightly higher numbers).

Yeah, now you can move your hardware focus on considering interpolation
techniques beyond the processing power of least-square estimation, which
integrates noise way down.

> The calculation speed is also much higher now (approx. 23 ns per
> timestamp, so up to 43 MSPS can be processed in real time).

Just to indicate that my claim for "High speed" is not completely wrong.

For each time-stamp, the pseudo-code becomes:

C = C + x_0
D = D + n*x_0
n = n + 1

Whenever n reaches N, C and D are output, and the values C, D and n are
reset to 0.

However, this may be varied in several fun ways, but that is left as an
exercise for the implementer. Much of the other complexity is gone, so
this is the fun problem.
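
As a minimal runnable C rendering of that loop (a sketch; the integer
widths are my assumption, and for large N the D accumulator needs more
headroom than shown - exactly the dynamics that Oleg's modification
reduces):

#include <stdint.h>

/* Feed one timestamp x (the x_0 of the pseudo-code above) into the
   running block sums. Returns 1 when a block of N timestamps is
   complete and *C_out/*D_out hold its sums. */
static int accumulate(uint64_t x, uint32_t N,
                      uint64_t *C_out, uint64_t *D_out)
{
    static uint64_t C = 0, D = 0;
    static uint32_t n = 0;

    C += x;                  /* plain sum of timestamps          */
    D += (uint64_t)n * x;    /* index-weighted sum of timestamps */
    n++;

    if (n == N) {
        *C_out = C;
        *D_out = D;
        C = D = 0;           /* start the next block from zero */
        n = 0;
        return 1;
    }
    return 0;
}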

> I plan to stay at a 20 MSPS rate, or 10 MSPS with double time
> resolution (1.25 ns). It will leave plenty of CPU time for the
> UI/communication/GPS/statistics stuff.

Sounds like a good plan.

> I will probably throw out the power hungry and expensive SDRAM chip or
> use a much smaller one :).

Yeah, it would only be if you build multi-tau PDEV plots that you would
need much memory; other than that it is just buffer memory before the
data goes to off-board processing, at which time you would need to
convey the C, D, N and tau0 values.

> I have some plans to experiment with doubling the one-shot resolution
> down to 1.25 ns. I see not much benefit from it, but it can be made with
> just a piece of coax and a couple of resistors, so it is interesting to
> try :).

Please report on that progress! Sounds fun!

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-27 Thread Oleg Skydan

Hi!

From: "Magnus Danielson" 

You build two sums C and D: one is the phase samples and the other is
the phase samples scaled with their index n in the block. From this you
can then, using the formulas I provided, calculate the least-square phase
and frequency, and using the least-square frequency measures you can do
PDEV. The up-front processing is thus cheap, and there are methods to
combine measurement blocks into longer measurement blocks, thus
decimation, using relatively simple linear processing on the block sums
C and D, with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrarily long observation intervals using cheap software processing and
create unbiased least-square measurements this way. Once the linear
algebra of least-square processing has vanished in a puff of logic, it
is fairly simple processing with very little memory requirements at
hand. For multi-tau, you can reach O(N log N) type of processing rather
than O(N^2), which is pretty cool.


I had some free time today to study the document you suggested and do
some experiments in Matlab - it was very useful reading and
experimenting, thanks!


Thanks for the kind words!


It looks like the proposed method of decimation can be
efficiently realized on the current HW.


I had some free time yesterday and today, so I decided to test the new
algorithms on the real hardware (the HW is still an old "ugly construction"
one, but I hope I will have some time to make a normal HW - I have already
got almost all the components I need).


I had to modify the original decimation scheme you proposed in the paper so
it better fits my HW; the calculation precision and speed should also be
higher now. A nice side effect is that I do not need to care about phase
unwrapping anymore. I can prepare a short description of the modifications
and post it here, if it is of interest.


It works like a charm!

The new algorithm (based on C and D sums calculation and decimation) uses
much less memory (less than 256 KB for any gating time/sampling speed); the
old one (direct LR calculation) was very memory hungry - it used
4 x Sampling_Rate bytes/s, i.e. 20 MB per second of gate time at 5 MSPS.
Now I can fit all data into the internal memory and have a single-chip
digital part of the frequency counter, well, almost single-chip ;) The
timestamping speed has increased and is limited now by the bus/bus matrix
switch/DMA unit at a bit more than 24 MSPS with continuous real time data
processing. It looks like that is the limit for the used chip (I expected
slightly higher numbers). The calculation speed is also much higher now
(approx. 23 ns per timestamp, so up to 43 MSPS can be processed in real
time). I plan to stay at a 20 MSPS rate, or 10 MSPS with double time
resolution (1.25 ns). It will leave plenty of CPU time for the
UI/communication/GPS/statistics stuff.


I will probably throw out the power hungry and expensive SDRAM chip or use
a much smaller one :).


I have some plans to experiment with doubling the one-shot resolution down
to 1.25 ns. I see not much benefit from it, but it can be made with just a
piece of coax and a couple of resistors, so it is interesting to try :).


All the best!
Oleg UR3IQO




Re: [time-nuts] Question about frequency counter testing

2018-05-18 Thread Oleg Skydan

Hi!

--
From: "Magnus Danielson" 
From the 2.5 ns single shot resolution, I deduce a 400 MHz count 
clock.


Yes. It is approx. 400MHz.


OK, good to have that verified. Free-running or locked to a 10 MHz
reference?


Locked to an OCXO (10 MHz).


OK. I saw some odd frequencies, and I agree with Bob that if you can,
using two of those with a non-trivial relationship can get you really
good performance.

I can use two or more, but unfortunately not simultaneously. So I will
switch the frequency if the problem is detected. Switching will interact
with GPS data processing, but that can probably be fixed in software (I
have not had time to investigate the possible solutions and find the best
one yet).


BTW, the single-shot resolution can be doubled (to 1.25 ns) with almost no
additional HW (just a delay line for a bit more than 1.25 ns and some
resistors). Not sure it is worth doing (it will also halve the timestamping
speed and double the timestamp memory requirements, so in averaging modes
it will give only a sqrt(2) improvement).


All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-17 Thread Magnus Danielson
Hi Oleg,

On 05/18/2018 12:25 AM, Oleg Skydan wrote:
> Hi, Magnus!
> 
> --
> From: "Magnus Danielson" 
>>> 2. Study how PDEV calculation fits on the used HW. If it is possible to
>>> do in real time, a PDEV option can be added.
>>
>> You build two sums C and D: one is the phase samples and the other is
>> the phase samples scaled with their index n in the block. From this you
>> can then, using the formulas I provided, calculate the least-square phase
>> and frequency, and using the least-square frequency measures you can do
>> PDEV. The up-front processing is thus cheap, and there are methods to
>> combine measurement blocks into longer measurement blocks, thus
>> decimation, using relatively simple linear processing on the block sums
>> C and D, with their respective lengths. The end result is that you can
>> very cheaply decimate data in HW/FW and then extend the properties to
>> arbitrarily long observation intervals using cheap software processing and
>> create unbiased least-square measurements this way. Once the linear
>> algebra of least-square processing has vanished in a puff of logic, it
>> is fairly simple processing with very little memory requirements at
>> hand. For multi-tau, you can reach O(N log N) type of processing rather
>> than O(N^2), which is pretty cool.
> 
> I had some free time today to study the document you suggested and do
> some experiments in Matlab - it was very useful reading and
> experimenting, thanks!

Thanks for the kind words!

> It looks like the proposed method of decimation can be
> efficiently realized on the current HW.

The algorithm was crafted with the aim of achieving just that. It's
really a powerful method.

> Also, as a side effect, calculating large averages in several blocks
> should reduce floating-point errors, which can reach significant values
> with careless coding.

Indeed. The framework provided should allow numerical precision to be
crafted without too much difficulty, which is another goal.

> Also all modes can be unified and can reuse the same acquisition code,
> nice... :)

As intended. :)

The C sums are what you use for MDEV-type processing.

>> I hope to have an updated version of that article available soon.
> 
> Please share the link if it becomes publicly available.

Will do.

>>>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
>>>
>>> Yes. It is approx. 400MHz.
>>
>> OK, good to have that verified. Free-running or locked to a 10 MHz
>> reference?
> 
> Locked to an OCXO (10 MHz).

OK. I saw some odd frequencies, and I agree with Bob that if you can,
using two of those with a non-trivial relationship can get you really
good performance.

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-17 Thread Oleg Skydan

Hi, Magnus!

--
From: "Magnus Danielson" 

2. Study how PDEV calculation fits on the used HW. If it is possible to
do in real time, a PDEV option can be added.


You build two sums C and D: one is the phase samples and the other is
the phase samples scaled with their index n in the block. From this you
can then, using the formulas I provided, calculate the least-square phase
and frequency, and using the least-square frequency measures you can do
PDEV. The up-front processing is thus cheap, and there are methods to
combine measurement blocks into longer measurement blocks, thus
decimation, using relatively simple linear processing on the block sums
C and D, with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrarily long observation intervals using cheap software processing and
create unbiased least-square measurements this way. Once the linear
algebra of least-square processing has vanished in a puff of logic, it
is fairly simple processing with very little memory requirements at
hand. For multi-tau, you can reach O(N log N) type of processing rather
than O(N^2), which is pretty cool.


I had some free time today to study the document you suggested and do some
experiments in Matlab - it was very useful reading and experimenting,
thanks! It looks like the proposed method of decimation can be efficiently
realized on the current HW. Also, as a side effect, calculating large
averages in several blocks should reduce floating-point errors, which can
reach significant values with careless coding.


Also all modes can be unified and can reuse the same acquisition code, 
nice... :)



I hope to have an updated version of that article available soon.


Please share the link if it becomes publicly available.


From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.


Yes. It is approx. 400MHz.


OK, good to have that verified. Free-running or locked to a 10 MHz
reference?


Locked to an OCXO (10 MHz).

All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-17 Thread Magnus Danielson
Hi,

On 05/13/2018 11:13 PM, Oleg Skydan wrote:
> Hi Magnus,
> 
> From: "Magnus Danielson" 
>> I would be inclined to just continue the MDEV compliant processing
>> instead. If you want the matching ADEV, rescale it using the
>> bias-function, which can be derived out of p.51 of that presentation.
>> You just need to figure out the dominant noise type for each range of
>> tau, something which is much simpler in MDEV since White PM and Flicker
>> PM separate more clearly than under the weak separation of ADEV.
> 
> 
>> As you measure a DUT, the noise of the DUT, the noise of the counter and
>> the systematics of the counter add up, and we cannot distinguish them in
>> that measurement.
> 
> Probably I did not express what I meant clearly. I understand that we
> cannot separate them, but if the DUT noise has most of its power inside
> the filter BW while the instrument noise is wideband, we can filter out
> part of the instrument noise with minimal influence on the DUT noise.

Yes, if you can show for a certain range that the instrument's noise is
not dominant, then you measure the DUT. This is what happens as the 1/tau
slope on the ADEV reaches down to the DUT noise, where the resulting
curve is mostly DUT noise.

We may then hunt better counters to shift that slope leftwards on the
plot to see more of the DUT noise.

>> There are measurement setups, such as
>> cross-correlation, which make multiple measurements in parallel and
>> can start to combat the noise separation issue.
> 
> Yes, I am aware of that technique. I even did some experiments with
> cross-correlation phase noise measurements.

Check.

>> Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
>> This is the hard lesson to learn. You can't really cheat if you aim to
>> get proper ADEV.
>>
>> You can use averaging, and it will cause biased values, so you might use
>> the part with less bias, but there are safer ways of doing that, by going
>> full MDEV or PDEV instead.
>>
>> With biases, you have something similar to, but not being _the_ ADEV.
> 
> OK. It looks like the last sentence very precisely describes what I was
> going to do, so we understood each other right. Summarizing the
> discussion, as far as I understand, the best strategy regarding *DEV
> calculations is:
> 1. Make MDEV the primary variant. It is suitable for calculation inside
> counter as well as for exporting data for the following post processing.

Doable.

> 2. Study how PDEV calculation fits on the used HW. If it is possible to
> do in real time, a PDEV option can be added.

You build two sums C and D: one is the phase samples and the other is
the phase samples scaled with their index n in the block. From this you
can then, using the formulas I provided, calculate the least-square phase
and frequency, and using the least-square frequency measures you can do
PDEV. The up-front processing is thus cheap, and there are methods to
combine measurement blocks into longer measurement blocks, thus
decimation, using relatively simple linear processing on the block sums
C and D, with their respective lengths. The end result is that you can
very cheaply decimate data in HW/FW and then extend the properties to
arbitrarily long observation intervals using cheap software processing and
create unbiased least-square measurements this way. Once the linear
algebra of least-square processing has vanished in a puff of logic, it
is fairly simple processing with very little memory requirements at
hand. For multi-tau, you can reach O(N log N) type of processing rather
than O(N^2), which is pretty cool.
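
Concretely, with C = sum_{n=0}^{N-1} x_n and D = sum_{n=0}^{N-1} n x_n as
above, concatenating two adjacent blocks (C_1, D_1, N_1) and (C_2, D_2,
N_2) reduces to (a sketch of the combination step under those definitions;
the article spells out the full least-square bookkeeping):

C = C_1 + C_2
D = D_1 + N_1 C_2 + D_2
N = N_1 + N_2

since every index in the second block is shifted by N_1.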

I hope to have an updated version of that article available soon.

> 3. ADEV can be safely calculated only from the Pi mode counter data.
> Probably it will not be very useful because of the low single-shot
> resolution, but Pi mode and corresponding data export can be easily added.

You will be assured it is bias-free. You want to have that option.

> I think it will be more than enough for my needs, at least now.
> 
>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
> 
> Yes. It is approx. 400MHz.

OK, good to have that verified. Free-running or locked to a 10 MHz
reference?

>>> I have no FPGA also :) All processing is in the FW, I will see how it
>>> fits the used HW architecture.
>>>
>>> Doing it all in FPGA has many benefits, but the HW will be more
>>> complicated and pricier with minimal benefits for my main goals.
>>
>> Exactly what you mean by FW now I don't get, for me that is FPGA code.
> 
> I meant MCU code, to make things clearer I can use the SW term for it.
> 
> Thank you for the answers and explanations, they are highly appreciated!

Nice! Really hope you can make sense out of them and apply them. I hope I
have contributed some insight about what to do, and when, to make good
measurements.

Cheers,
Magnus

Re: [time-nuts] Question about frequency counter testing

2018-05-15 Thread Oleg Skydan

Hi

From: "Bob kb8tq" 
What I’m suggesting is that if the hardware is very simple and very cheap,
simply put two chips on the board. One runs at Clock A and the other runs
at Clock B. At some point in the process you move the decimated data from
B over to A and finish out all the math there ….


The hardware is simple and cheap because it is all digital, requires no
calibration, and the same HW is capable of driving the TFT, handling the
UI, and providing all control functions for the input conditioning
circuits, GPS module, etc. It also provides a USB interface for data
exchange or remote control. So doubling it is not the way to go if I want
to keep things simple and relatively cheap.


I think I will stay with the current plans for the HW and try to handle
the GPS timing troubles in software. I have to make an initial variant of
the HW so I can move on with the SW part towards a useful counter. Then I
will see how well it performs and decide whether it satisfies the
requirements or I need to change something.


BTW, after a quick check of the GPS module specs and the OCXO's, it looks
like a very simple algorithm can be used for frequency correction. The
OCXO frequency can be measured against GPS over a long enough period (some
thousands of seconds; the LR algorithm can be used here also) to get a
correction coefficient. It can be updated at a rate of once per second (we
probably do not need to do it that fast). I do not believe it can be that
simple; I feel I have missed something :)…


That is one way it is done. A lot depends on the accuracy of the GPS PPS 
on your module.


The module is a uBlox NEO-6M. I know the NEO-6T is better suited for my
needs, but the former was easy to get and insanely cheap. It should be
enough to start with.


More or less, with a thousand-second observation time you will likely get
below parts in 10^-10, but maybe not to the 1x10^-11 level.


1e-10 should satisfy my requirements. A more sophisticated algorithm can
be developed and used later, if needed.
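
A minimal C sketch of such an LR (least-squares slope) correction,
assuming per-second phase-error samples between the OCXO-derived PPS and
the GPS PPS (the names and sample format are illustrative only):

#include <stddef.h>

/* Least-squares slope of phase error e[0..k-1] (seconds) versus time,
   one sample per second. The slope is dimensionless (s/s), i.e. the
   fractional frequency correction coefficient for the OCXO. */
double lr_freq_correction(const double *e, size_t k)
{
    double st = 0.0, se = 0.0, ste = 0.0, stt = 0.0;

    for (size_t i = 0; i < k; i++) {
        double t = (double)i;        /* sample time in seconds */
        st  += t;
        se  += e[i];
        ste += t * e[i];
        stt += t * t;
    }

    double denom = (double)k * stt - st * st;
    return (denom != 0.0) ? ((double)k * ste - st * se) / denom : 0.0;
}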


Thanks!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-14 Thread Bob kb8tq
Hi

> On May 14, 2018, at 1:50 PM, Oleg Skydan  wrote:
> 
> Hi!
> 
> From: "Bob kb8tq" 
>>> If such conditions are detected, I avoid the problem by changing the
>>> counter clock. But it does not solve the effects at "about OCXO" * N or
>>> "about OCXO" / M. It is related to the HW and I can probably control it
>>> only partially. I will try to improve the clock and reference isolation
>>> in the "normal" HW, and of course I will thoroughly test such
>>> frequencies when that HW is ready.
>> 
>> It’s a very common problem in this sort of counter. The “experts” have a
>> lot of trouble with it on their designs. One answer with simple enough
>> hardware could be to run *two* clocks all the time. Digitize them both
>> and process the results from both.
> 
> I thought about such a solution; unfortunately it cannot be implemented
> because of HW limitations. Switching the 400 MHz clock is also not an
> ideal solution, because it will make trouble for the GPS correction
> calculations; the latter can be fixed in software, but it is not an
> elegant solution. It all still needs some polishing...
> 
>> still have the issue of a frequency that is a multiple (or sub-multiple)
>> of both clocks.
> 
> The clocks (if we are talking about 400 MHz) have very interesting values
> like 397501220.703 Hz or 395001831.055 Hz, so it will really occur very
> rarely. Also, I am not limited to two or three values, so clock switching
> should solve the problem, but not in an elegant way, because it breaks
> the normal work of the GPS frequency correction algorithm, so additional
> steps to fix that will be required :-\.
> 

What I’m suggesting is that if the hardware is very simple and very cheap,
simply put two chips on the board. One runs at Clock A and the other runs
at Clock B. At some point in the process you move the decimated data from
B over to A and finish out all the math there ….


> BTW, after a quick check of the GPS module specs and the OCXO's, it looks
> like a very simple algorithm can be used for frequency correction. The
> OCXO frequency can be measured against GPS over a long enough period
> (some thousands of seconds; the LR algorithm can be used here also) to
> get a correction coefficient. It can be updated at a rate of once per
> second (we probably do not need to do it that fast). I do not believe it
> can be that simple; I feel I have missed something :)…

That is one way it is done. A lot depends on the accuracy of the GPS PPS on
your module. It is unfortunately fairly easy to find modules that have tens
of ns of error on a second-to-second basis. Sawtooth correction can help
this a bit. OCXOs have warmup characteristics that can also move them a bit
in the first hours of use.

More or less, with a thousand-second observation time you will likely get
below parts in 10^-10, but maybe not to the 1x10^-11 level.

Bob

> 
> All the best!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-14 Thread Oleg Skydan

Hi!

From: "Bob kb8tq" 
If such conditions are detected, I avoid the problem by changing the
counter clock. But it does not solve the effects at "about OCXO" * N or
"about OCXO" / M. It is related to the HW and I can probably control it
only partially. I will try to improve the clock and reference isolation in
the "normal" HW, and of course I will thoroughly test such frequencies
when that HW is ready.


It’s a very common problem in this sort of counter. The “experts” have a
lot of trouble with it on their designs. One answer with simple enough
hardware could be to run *two* clocks all the time. Digitize them both and
process the results from both.


I thought about such a solution; unfortunately it cannot be implemented
because of HW limitations. Switching the 400 MHz clock is also not an
ideal solution, because it will make trouble for the GPS correction
calculations; the latter can be fixed in software, but it is not an
elegant solution. It all still needs some polishing...


still have the issue of a frequency that is a multiple (or sub-multiple)
of both clocks.


The clocks (if we are talking about 400 MHz) have very interesting values
like 397501220.703 Hz or 395001831.055 Hz, so it will really occur very
rarely. Also, I am not limited to two or three values, so clock switching
should solve the problem, but not in an elegant way, because it breaks the
normal work of the GPS frequency correction algorithm, so additional steps
to fix that will be required :-\.


BTW, after a quick check of the GPS module specs and the OCXO's, it looks
like a very simple algorithm can be used for frequency correction. The
OCXO frequency can be measured against GPS over a long enough period (some
thousands of seconds; the LR algorithm can be used here also) to get a
correction coefficient. It can be updated at a rate of once per second (we
probably do not need to do it that fast). I do not believe it can be that
simple; I feel I have missed something :)...


All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-14 Thread Bob kb8tq
Hi

> On May 14, 2018, at 5:25 AM, Oleg Skydan  wrote:
> 
> Hi Bob!
> 
> From: "Bob kb8tq" 
>>> I think it will be more than enough for my needs, at least now.
>>> 
>>>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
>>> 
>>> Yes. It is approx. 400MHz.
>> 
>> I think I would spend more time working out what happens at “about
>> 400 MHz” X N or “about 400 MHz / M” …….
> 
> If such conditions are detected, I avoid the problem by changing the
> counter clock. But it does not solve the effects at "about OCXO" * N or
> "about OCXO" / M. It is related to the HW and I can probably control it
> only partially. I will try to improve the clock and reference isolation
> in the "normal" HW, and of course I will thoroughly test such frequencies
> when that HW is ready.

It’s a very common problem in this sort of counter. The “experts” have a
lot of trouble with it on their designs. One answer with simple enough
hardware could be to run *two* clocks all the time. Digitize them both and
process the results from both. …. just a thought …. You still have the
issue of a frequency that is a multiple (or sub-multiple) of both clocks.
With some care in clock selection you could make that a pretty rare
occurrence (thus making it easy to identify in firmware ….).


Bob



> 
> All the best!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-14 Thread Oleg Skydan

Hi Bob!

From: "Bob kb8tq" 

I think it will be more than enough for my needs, at least now.


From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.


Yes. It is approx. 400MHz.


I think I would spend more time working out what happens at “about
400 MHz” X N or “about 400 MHz / M” …….


If such conditions are detected, I avoid the problem by changing the
counter clock. But it does not solve the effects at "about OCXO" * N or
"about OCXO" / M. It is related to the HW and I can probably control it
only partially. I will try to improve the clock and reference isolation in
the "normal" HW, and of course I will thoroughly test such frequencies
when that HW is ready.


All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Bob kb8tq
Hi



> On May 13, 2018, at 5:13 PM, Oleg Skydan  wrote:
> 
> Hi Magnus,
> 
> From: "Magnus Danielson" 
>> I would be inclined to just continue the MDEV compliant processing
>> instead. If you want the matching ADEV, rescale it using the
>> bias-function, which can be derived out of p.51 of that presentation.
>> You just need to figure out the dominant noise type for each range of
>> tau, something which is much simpler in MDEV since White PM and Flicker
>> PM separate more clearly than under the weak separation of ADEV.
> 
> 
>> As you measure a DUT, the noise of the DUT, the noise of the counter and
>> the systematics of the counter add up, and we cannot distinguish them in
>> that measurement.
> 
> Probably I did not express what I meant clearly. I understand that we
> cannot separate them, but if the DUT noise has most of its power inside
> the filter BW while the instrument noise is wideband, we can filter out
> part of the instrument noise with minimal influence on the DUT noise.
> 
>> There are measurement setups, such as
>> cross-correlation, which make multiple measurements in parallel and
>> can start to combat the noise separation issue.
> 
> Yes, I am aware of that technique. I even did some experiments with
> cross-correlation phase noise measurements.
> 
>> Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
>> This is the hard lesson to learn. You can't really cheat if you aim to
>> get proper ADEV.
>> 
>> You can use averaging, and it will cause biased values, so you might use
>> the part with less bias, but there are safer ways of doing that, by going
>> full MDEV or PDEV instead.
>> 
>> With biases, you have something similar to, but not being _the_ ADEV.
> 
> OK. It looks like the last sentence very precisely describes what I was going 
> to do, so we understood each other right. Summarizing the discussion, as far 
> as I understand, the best strategy regarding *DEV calculations is:
> 1. Make MDEV the primary variant. It is suitable for calculation inside 
> counter as well as for exporting data for the following post processing.
> 2. Study how PDEV calculation fits on the used HW. If it is possible to do
> in real time, a PDEV option can be added.
> 3. ADEV can be safely calculated only from the Pi mode counter data.
> Probably it will not be very useful because of the low single-shot
> resolution, but Pi mode and corresponding data export can be easily added.
> 
> I think it will be more than enough for my needs, at least now.
> 
>> From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.
> 
> Yes. It is approx. 400MHz.

I think I would spend more time working out what happens at “about
400 MHz” X N or “about 400 MHz / M” …….

Bob


> 
>>> I have no FPGA also :) All processing is in the FW, I will see how it
>>> fits the used HW architecture.
>>> 
>>> Doing it all in FPGA has many benefits, but the HW will be more
>>> complicated and pricier with minimal benefits for my main goals.
>> 
>> Exactly what you mean by FW now I don't get, for me that is FPGA code.
> 
> I meant MCU code, to make things clearer I can use the SW term for it.
> 
> Thank you for the answers and explanations, they are highly appreciated!
> 
> All the best!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Oleg Skydan

Hi Magnus,

From: "Magnus Danielson" 

I would be inclined to just continue the MDEV compliant processing
instead. If you want the matching ADEV, rescale it using the
bias-function, which can be derived out of p.51 of that presentation.
You just need to figure out the dominant noise type for each range of
tau, something which is much simpler in MDEV since White PM and Flicker
PM separate more clearly than under the weak separation of ADEV.




As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter add up, and we cannot distinguish them in
that measurement.


Probably I did not express what I meant clearly. I understand that we
cannot separate them, but if the DUT noise has most of its power inside
the filter BW while the instrument noise is wideband, we can filter out
part of the instrument noise with minimal influence on the DUT noise.



There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and
can start to combat the noise separation issue.


Yes, I am aware of that technique. I even did some experiments with
cross-correlation phase noise measurements.



Ehm no. The optimal averaging strategy for ADEV is to do no averaging.
This is the hard lesson to learn. You can't really cheat if you aim to
get proper ADEV.

You can use averaging, and it will cause biased values, so you might use
the part with less bias, but there are safer ways of doing that, by going
full MDEV or PDEV instead.

With biases, you have something similar to, but not being _the_ ADEV.


OK. It looks like the last sentence very precisely describes what I was 
going to do, so we understood each other right. Summarizing the discussion, 
as far as I understand, the best strategy regarding *DEV calculations is:
1. Make MDEV the primary variant. It is suitable for calculation inside 
counter as well as for exporting data for the following post processing.
2. Study how PDEV calculation fits on the used HW. If it is possible to do
in real time, a PDEV option can be added.
3. ADEV can be safely calculated only from the Pi mode counter data.
Probably it will not be very useful because of the low single-shot
resolution, but Pi mode and corresponding data export can be easily added.


I think it will be more than enough for my needs, at least now.


From the 2.5 ns single shot resolution, I deduce a 400 MHz count clock.


Yes. It is approx. 400MHz.


I have no FPGA also :) All processing is in the FW, I will see how it
fits the used HW architecture.

Doing it all in FPGA has many benefits, but the HW will be more
complicated and pricier with minimal benefits for my main goals.


Exactly what you mean by FW now I don't get, for me that is FPGA code.


I meant MCU code, to make things clearer I can use the SW term for it.

Thank you for the answers and explanations, they are highly appreciated!

All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Magnus Danielson
Hi,

On 05/13/2018 08:09 PM, Bob kb8tq wrote:
>>> If so, that raises a whole added layer to this discussion in terms of
>>> “does it do what it says it does?”.
>>
>> This question is also important for amateur/hobby measurement equipment.
>> I do not need equipment that "does not do what it says it does" even if
>> it is built for hobby use.
>>
>> The theme of *DEV calculations has many important details I want to
>> understand correctly; sorry if I asked too many questions (some of them
>> were probably naive), and thank you for the help, it is much appreciated!
>> I hope our discussion is useful not only for me.
>>
> 
> You are very much *not* the first person to run into these issues. They
> date back to the very early use of things like ADEV. The debate has been
> active ever since. There are a few other sub-debates that also come up.
> The proper definition of ADEV allows “drift correction” to be used. Just
> how you do drift correction is up to you. As with filtering, drift
> elimination impacts the results. It also needs to be defined (if used).

There are actually two uses of ADEV: one is to represent the amplitude of
the various noise types, and the other is to represent the behavior of
the frequency measure. The classical use is the former, and you do not
want to fool those estimates, but for the latter pre-filtering is not
only allowed, but encouraged!

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Bob kb8tq
Hi

> On May 13, 2018, at 1:31 PM, Oleg Skydan  wrote:
> 
> Hi Bob!
> 
> From: "Bob kb8tq" 
>> I guess it is time to ask:
>> 
>> Is this a commercial product you are designing?
> 
> No. I have no ability to produce it commercially and I see no market for
> such a product. I will build one unit for myself; I may build several more
> units for friends or if somebody likes it. I will show the HW details
> when it is ready.
> 
> What do I gain by doing it?
> 1. I will have a new counter that suits my current needs.
> 2. I will study something new.
> 
>> If so, that raises a whole added layer to this discussion in terms of
>> “does it do what it says it does?”.
> 
> This question is also important for amateur/hobby measurement equipment.
> I do not need equipment that "does not do what it says it does" even if
> it is built for hobby use.
> 
> The theme of *DEV calculations has many important details I want to
> understand correctly; sorry if I asked too many questions (some of them
> were probably naive), and thank you for the help, it is much appreciated!
> I hope our discussion is useful not only for me.
> 

You are very much *not* the first person to run into these issues. They
date back to the very early use of things like ADEV. The debate has been
active ever since. There are a few other sub-debates that also come up.
The proper definition of ADEV allows “drift correction” to be used. Just
how you do drift correction is up to you. As with filtering, drift
elimination impacts the results. It also needs to be defined (if used).

Bob

> Thanks!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Oleg Skydan

Hi Bob!

From: "Bob kb8tq" 

I guess it is time to ask:

Is this a commercial product you are designing?


No. I have no ability to produce it commercially and I see no market for
such a product. I will build one unit for myself; I may build several more
units for friends or if somebody likes it. I will show the HW details when
it is ready.


What do I gain by doing it?
1. I will have a new counter that suits my current needs.
2. I will study something new.

If so, that raises a whole added layer to this discussion in terms of 
“does it do what it says it does?”.


This question is also important for amateur/hobby measurement equipment. I 
do not need equipment that "does not do what it says it does" even if it is 
built for hobby use.


The topic of *DEV calculations has many important details I want to 
understand correctly. Sorry if I asked too many questions (some of them 
probably were naive), and thank you for the help; it is much appreciated! I 
hope our discussion is useful not only to me.


Thanks!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Bob kb8tq
Hi

I guess it is time to ask:

Is this a commercial product you are designing? 

If so, that raises a whole added layer to this discussion in terms of “does it 
do 
what it says it does?”.

Bob

> On May 13, 2018, at 3:07 AM, Oleg Skydan  wrote:
> 
> Hi Bob!
> 
> From: "Bob kb8tq" 
>> It’s only useful if it is accurate. Since you can “do code” that gives you 
>> results that are better than reality,
>> simply coming up with a number is not the full answer. To be useful as ADEV, 
>> it needs to be correct.
> 
> I understand that, so I am trying to investigate the problem and see what 
> can be done (if anything :).
> 
>> I’m sure it will come out to be a very cool counter. My *only* concern here 
>> is creating inaccurate results by stretching too far with what you are 
>> trying to do. Keep it to the stuff that is accurate.
> 
> I am interested in accurate results, or at least results with well-defined 
> limitations, for a few specific measurements/modes. So I will try to make 
> the results as accurate as can be done while keeping the hardware simple.
> 
> Thanks!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Magnus Danielson
Hi Oleg,

On 05/13/2018 09:31 AM, Oleg Skydan wrote:
> Hi Magnus!
> 
> From: "Magnus Danielson" 
>>> The leftmost tau values are skipped and they "stay" inside the counter.
>>> If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at
>>> 1 s), it will internally generate 1/8 s averaged measurements, but export
>>> combined data for 1 s stamps. The result will be, strictly speaking,
>>> different, but the difference should be insignificant.
>>
>> What is your motivation for doing this?
> 
> My counter can operate in the usual Pi mode, where I get 2.5 ns resolution.
> I am primarily interested in high-frequency signals (not one-shot events),
> and the HW is able to collect and process some millions of timestamps
> continuously. So in Delta or Omega mode I can in theory improve the
> resolution down to several ps (for a 1 s measurement interval). In reality
> the limit will be somewhat higher.

Fair enough.

> So I can compute the classical ADEV (using Pi mode) with a lot of
> counter noise at low tau (it will probably not be very useful due to the
> dominance of counter noise in the leftmost part of the ADEV plot), or MDEV
> (using Delta mode) with the counter noise much lower.

Yes, it helps you to suppress noise. As you extend the measures, you
need to do it properly to maintain the MDEV property.

> I would like to try to use the excess data I have to increase the counter
> resolution in such a way that an ADEV calculation with such preprocessing
> is still possible with acceptable accuracy. After Bob's explanations and
> some additional reading I was almost sure it is not possible (and that is
> so in the general case), but then I saw the presentation
> http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf (E. Rubiola,
> High resolution time and frequency counters, updated version) and saw the
> inferences on p. 54. They look reasonable, and it is just what I wanted
> to do.

OK, when you do this you really want to filter out the first, lower taus;
as you get out of the filtered part, or rather, when the dominant
part of the ADEV processing is within the filter bandwidth, the biasing
becomes smaller.

I would be inclined to just continue with the MDEV-compliant processing
instead. If you want the matching ADEV, rescale it using the
bias function, which can be derived from p. 51 of that presentation.
You just need to figure out the dominant noise type in each range of
tau, something which is much simpler in MDEV, since White PM and Flicker
PM separate more clearly there than they do in ADEV, where the separation
is weak.

Also, on that page you can see how the system bandwidth f_H affects white
PM and flicker PM for the Allan deviation, but not the modified Allan.
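
As a minimal numpy sketch of that rescaling (my illustration, not the
presentation's table): the dominant-noise identification is assumed to be
done elsewhere, and only the white-FM ratio mvar/avar = 1/2 is the textbook
asymptotic value; the other two ratios below are rough placeholders that
should be replaced by the proper bias values for your f_H and tau.

    import numpy as np

    # mvar/avar ratio per dominant noise type (white FM is the asymptotic
    # textbook value; the other entries are rough placeholders)
    MVAR_OVER_AVAR = {
        "white_fm": 0.5,
        "flicker_fm": 0.67,  # placeholder
        "rw_fm": 0.82,       # placeholder
    }

    def adev_from_mdev(mdev, noise_type):
        # sigma_y = mod_sigma_y / sqrt(mvar/avar)
        return mdev / np.sqrt(MVAR_OVER_AVAR[noise_type])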

>> The mistake is easy to make. Back in the day, it was a given that you
>> should always state the system bandwidth alongside an ADEV plot, a
>> practice that later got lost. Many people do not know what bandwidth
>> they have, or the effect it has on the plot. I've even heard a
>> distinguished and knowledgeable person in the field admit to doing it
>> incorrectly.
> 
> That makes sense.
> 
> We can view the problem in the frequency domain. We have DUT,
> reference, and instrument (counter) noise. In most cases we are
> interested in suppressing the instrument and reference noise and leaving
> the DUT noise. The reference and the DUT have more or less the same
> nature of noise, so it should not be possible to filter out the reference
> noise without affecting the DUT noise (with simple HW). The counter noise
> (in my case) will look like white noise (at least the noise associated
> with the absence of a HW interpolator). When we process timestamps with
> Omega or Delta data processing we apply a filter, so the correctness of
> the resulting data will depend on the DUT noise characteristics and the
> filter shape. The ADEV calculation at tau > tau0 will also apply some
> sort of filter during decimation; it should also be accounted for (because
> we actually decimate the high-rate timestamp stream to make the point
> data for the following postprocessing). Am I right?

As you measure a DUT, the noise of the DUT, the noise of the counter and
the systematics of the counter add up, and we cannot distinguish them in
that measurement. There are measurement setups, such as
cross-correlation, which make multiple measurements in parallel and
can start to combat the noise-separation issue.

For short taus, the systematic noise of quantization will create a 1/tau
limit in ADEV. The reality is more complex than this simple model, but
let's just assume it for the moment; it is sufficient for the time
being and is what most people assume anyway.

ADEV, however, does not really do decimation. It combines measurements
to form a longer observation time for the frequency estimate, and subtracts
these before squaring, to form the 2-point deviation, which we call
the Allan deviation.

ADEV is designed to match how a simple counter's deviation would behave.
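
A minimal numpy sketch of that 2-point processing (non-overlapping, from
phase samples x at spacing tau0; illustrative only, not any counter's
firmware):

    import numpy as np

    def adev(x, tau0, m):
        """Non-overlapping Allan deviation at tau = m * tau0 from a
        phase record x (in seconds)."""
        tau = m * tau0
        xk = x[::m]                             # phase at tau spacing
        d2 = xk[2:] - 2.0 * xk[1:-1] + xk[:-2]  # second differences
        return np.sqrt(0.5 * np.mean((d2 / tau) ** 2))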

> Here is a good illustration of how averaging affects ADEV
> 

Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Oleg Skydan

Hi Magnus!

From: "Magnus Danielson" 

The leftmost tau values are skipped and they "stay" inside the counter.
If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at
1 s), it will internally generate 1/8 s averaged measurements, but export
combined data for 1 s stamps. The result will be, strictly speaking,
different, but the difference should be insignificant.


What is your motivation for doing this?


My counter can operate in the usual Pi mode, where I get 2.5 ns resolution. 
I am primarily interested in high-frequency signals (not one-shot events), 
and the HW is able to collect and process some millions of timestamps 
continuously. So in Delta or Omega mode I can in theory improve the 
resolution down to several ps (for a 1 s measurement interval). In reality 
the limit will be somewhat higher.


So I can compute the classical ADEV (using Pi mode) with a lot of counter 
noise at low tau (it will probably not be very useful due to the dominance 
of counter noise in the leftmost part of the ADEV plot), or MDEV (using 
Delta mode) with the counter noise much lower.


I would like to try to use the excess data I have to increase the counter 
resolution in such a way that an ADEV calculation with such preprocessing is 
still possible with acceptable accuracy. After Bob's explanations and some 
additional reading I was almost sure it is not possible (and that is so in 
the general case), but then I saw the presentation 
http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf (E. Rubiola, High 
resolution time and frequency counters, updated version) and saw the 
inferences on p. 54. They look reasonable, and it is just what I wanted to do.



The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard a
distinguished and knowledgeable person in the field admit to doing it
incorrectly.


That makes sense.

We can view the problem in the frequency domain. We have DUT, reference, 
and instrument (counter) noise. In most cases we are interested in 
suppressing the instrument and reference noise and leaving the DUT noise. 
The reference and the DUT have more or less the same nature of noise, so it 
should not be possible to filter out the reference noise without affecting 
the DUT noise (with simple HW). The counter noise (in my case) will look 
like white noise (at least the noise associated with the absence of a HW 
interpolator). When we process timestamps with Omega or Delta data 
processing we apply a filter, so the correctness of the resulting data will 
depend on the DUT noise characteristics and the filter shape. The ADEV 
calculation at tau > tau0 will also apply some sort of filter during 
decimation; it should also be accounted for (because we actually decimate 
the high-rate timestamp stream to make the point data for the following 
postprocessing). Am I right?


Here is a good illustration of how averaging affects ADEV: 
http://www.leapsecond.com/pages/adev-avg/ . If we drop the leftmost part of 
the ADEV plot affected by averaging, the remaining averaging effects on the 
ADEV are minimized. They can also be minimized by an optimal averaging 
strategy. The question is the optimal averaging strategy and well-defined 
restrictions on when such preprocessing can be applied.


If it works, I would like to add such a mode for compatibility with the 
widespread post-processing SW (TimeLab is a good example). Of course I can 
do the calculations inside the counter without such limitations, but that 
will be another data-processing option (which might not always be suitable).



I'm not saying you are necessarily incorrect, but it would be
interesting to hear your motivation.


The end goal is to have a counter mode in which the counter produces data 
suitable for post-processing into ADEV and other similar statistics, with 
better resolution (or lower counter noise) than the one-shot mode (Pi 
counter). I understand that, if it is possible at all, the counter 
resolution will be degraded compared to the usual Omega or Delta mode, and 
there will also be some limitations on the DUT noise for which such 
processing can be applied.



Cross-talk exists for sure, but there is a similar effect too which is
not due to cross-talk but due to how the counter is able to interpolate
certain frequencies.


I have no HW interpolator. A similar problem in the firmware was discussed 
earlier, and it is now fixed.



In fact, you can do an Omega-style counter you can use for PDEV; you just
need to use the right approach to be able to decimate the data. Oh,
there's a draft paper on that:

https://arxiv.org/abs/1604.01004


Thanks for the document. It needs some time to study, and maybe I will
add features to the counter to calculate a correct PDEV.


It suggests a very practical method for FPGA-based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW.

Re: [time-nuts] Question about frequency counter testing

2018-05-13 Thread Oleg Skydan

Hi Bob!

From: "Bob kb8tq" 
It’s only useful if it is accurate. Since you can “do code” that gives you 
results that are better than reality,
simply coming up with a number is not the full answer. To be useful as 
ADEV, it needs to be correct.


I understand that, so I am trying to investigate the problem and see what 
can be done (if anything :).


I’m sure it will come out to be a very cool counter. My *only* concern 
here is creating inaccurate results by stretching too far with what you are 
trying to do. Keep it to the stuff that is accurate.


I am interested in accurate results, or at least results with well-defined 
limitations, for a few specific measurements/modes. So I will try to make 
the results as accurate as can be done while keeping the hardware simple.


Thanks!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Magnus Danielson


On 05/12/2018 09:41 PM, Bob kb8tq wrote:
> Hi
> 
> 
>> On May 12, 2018, at 1:20 PM, Oleg Skydan  wrote:
>>
>> Hi!
>>
>> From: "Bob kb8tq" 
>>> There is still the problem that the first point on the graph is different
>>> depending on the technique.
>>
>> The leftmost tau values are skipped and they "stay" inside the counter. If 
>> I set up the counter to generate, let's say, 1 s stamps (ADEV starts at 
>> 1 s), it will internally generate 1/8 s averaged measurements, but export 
>> combined data for 1 s stamps. The result will be, strictly speaking, 
>> different, but the difference should be insignificant.
> 
> Except there are a *lot* of papers where they demonstrate that the 
> difference may be *very* significant. I would suggest that the “is 
> significant” group is actually larger than the “is not” group. 

There is no reason to treat them light-handedly as about the same, as they
become different measures, between which there is a measurement bias.
Depending on what you do, there might be a bias function to compensate the
bias with... or not. Even when there is, most people forget to apply it.

Stay clear of that and do it properly.

Averaging prior to ADEV does nothing really useful unless it is
well-founded, and then we call it MDEV and PDEV, and then you have to be
careful about the details to do it properly. Otherwise you just waste your
time getting "improved numbers" which do not actually help you produce
proper measures.

>>
>>> The other side of all this is that ADEV is really not a very good way to 
>>> test a counter.
>>
>> Counter testing was not the main reason to dig into the statistics details 
>> these last days. Initially I used ADEV when I tried to test the idea of 
>> making a counter with very simple HW and good resolution (BTW, it appeared 
>> later that it was not ADEV in reality :). Then I saw it worked, so I 
>> decided to make a "normal" useful counter (I liked the HW/SW concept). The 
>> HW has enough power to compute various statistics onboard in real time, and 
>> while it is not a requisite feature of the project now, I think it will be 
>> good if the counter is able to do it (or at least exports data suitable for 
>> doing it in post-processing). The rest of the story you know :)
> 
> Again, ADEV is tricky and sensitive to various odd things. This whole debate 
> about it being sensitive goes 
> back to the original papers in the late 1960’s and 1970’s. At every paper I 
> attended the issue of averaging
> and bandwidth came up in the questions after the paper. The conversation has 
> been going on for a *long*
> time. 

If you go back to Dr. David Allan's Feb 1966 paper, you clearly see how
white and flicker phase modulation noise depend on the bandwidth, there
assumed to be a brick-wall filter. Your ability to reflect the
amplitude of those noises properly thus depends on the bandwidth.

Any filtering reduces the bandwidth, and hence artificially reduces the
ADEV value for the same amount of actual noise; it is then not
representing the underlying noise properly. However, if you use this to
improve your frequency measurements, it's fine, and the processed ADEV
will represent the counter's performance with that filter. Thus, the aim
governs whether you should or should not do the pre-filtering.
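
A quick way to see the effect is to compute ADEV from averaged versus
decimated data; a toy numpy simulation (assuming white-PM-dominated data,
which is where the effect is largest):

    import numpy as np

    rng = np.random.default_rng(1)
    tau0, m = 0.1, 10                # 0.1 s samples, target tau = 1 s
    x = 1e-9 * rng.standard_normal(1_000_000)  # white PM phase [s]

    def adev_at(xk, tau):
        d2 = xk[2:] - 2.0 * xk[1:-1] + xk[:-2]
        return np.sqrt(0.5 * np.mean((d2 / tau) ** 2))

    decimated = x[::m]               # keep one phase sample per tau
    averaged = x[: x.size // m * m].reshape(-1, m).mean(axis=1)

    print(adev_at(decimated, m * tau0))  # the proper ADEV at 1 s
    print(adev_at(averaged, m * tau0))   # biased low: bandwidth reduced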

>>> If you are trying specifically just to measure ADEV, then there are a lot 
>>> of ways to do that by itself.
>>
>> Yes, but if it can be done with only some additional code, why not have 
>> such an ability? Even if it has some known limitations it is still a useful 
>> addition. Of course it should be done as well as it can be within the HW 
>> limitations. Also, it was/is a good educational moment.
> 
> It’s only useful if it is accurate. Since you can “do code” that gives you 
> results that are better than reality,
> simply coming up with a number is not the full answer. To be useful as ADEV, 
> it needs to be correct. 

Exactly.

>>
>> Now it is a period of tests/experiments to see the features/limitations of 
>> the technology used (of course, where those experiments can be done with 
>> the current "ugly style" HW). I have already got a lot of useful 
>> information; it should help me in the following HW/FW development. The next 
>> steps are the analog front end and GPS frequency correction (I should get 
>> the GPS module next week). I have already tested the 6 GHz prescaler and am 
>> now waiting for some parts to finish it. I hope this project will have a 
>> "happy end" :).
> 
> I’m sure it will come out to be a very cool counter. My *only* concern here 
> is creating inaccurate results by stretching too far with what you are 
> trying to do. Keep it to the stuff that is accurate.

Bob and I are picky, and for a reason. When we want our ADEV plots, we
want them done properly, or else we could improve the specs of the
oscillators just by changing how fancy the post-processing of the counter
data is. Yes, we see this at professional conferences too.

Mumble... BAD SCIENCE!

Metrology correct 

Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Magnus Danielson
Hi,

On 05/12/2018 08:38 PM, Oleg Skydan wrote:
> Hi!
> 
> From: "Magnus Danielson" 
>> ADEV assumes brick-wall filtering up to the Nyquist frequency as result
>> of the sample-rate. When you filter the data as you do a Linear
>> Regression / Least Square estimation, the actual bandwidth will be much
>> less, so the ADEV measures will be biased for lower taus, but for higher
>> taus less of the kernel of the ADEV will be affected by the filter and
>> thus the bias will reduce.
> 
> Thanks for the clarification. Bob already pointed me to the problem, and
> after some reading the *DEV theme seems clearer.

The mistake is easy to make. Back in the day, it was a given that you
should always state the system bandwidth alongside an ADEV plot, a
practice that later got lost. Many people do not know what bandwidth
they have, or the effect it has on the plot. I've even heard a
distinguished and knowledgeable person in the field admit to doing it
incorrectly.

>>> Do the ADEV plots I got look reasonable for the "mid range" OCXOs used
>>> (see the second plot for the long run test)?
>>
>> You probably want to find the source of the wavy response seen in the
>> orange and red traces.
> 
> I have already found the problem. It is a HW problem related to poor
> isolation between the reference OCXO signal and the counter input signal
> clock line (it is also possible there are some grounding or power-supply
> decoupling problems; the HW is made in "ugly construction" style). When
> the input clock frequency is very close (0.3..0.4 Hz difference) to the
> OCXO subharmonic, this problem becomes visible (it is not the FW problem
> discussed before, because the counter reference is not a harmonic of the
> OCXO anymore).

Makes sense. Cross-talk has been the performance limit of several counters,
and care should be taken to reduce it.

> It looks like some commercial counters suffer from that
> problem too. After I connected the OCXO and input feed lines with short
> pieces of coax this effect decreased greatly, but did not disappear.

Cross-talk exists for sure, but there is a similar effect too which is
not due to cross-talk but due to how the counter is able to interpolate
certain frequencies.

> The "large N" plots were measured with the input signal 1.4Hz (0.3ppm)
> higher then 1/2 subharmonic  of the OCXO frequency, with such frequency
> difference that problem completely disappears. I will check for this
> problem again when I will move the HW to the normal PCB.

Yes.

>> In fact, you can do an Omega-style counter you can use for PDEV; you just
>> need to use the right approach to be able to decimate the data. Oh,
>> there's a draft paper on that:
>>
>> https://arxiv.org/abs/1604.01004
> 
> Thanks for the document. It needs some time to study, and maybe I will
> add features to the counter to calculate a correct PDEV.

It suggests a very practical method for FPGA-based counters, so that you
can make use of the high rate of samples that you have and reduce it in
HW before handing off to SW. As you decimate the data, you do not
want to lose the least-squares property, and this is a practical method
of achieving it.
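
The flavor of it in a short numpy sketch (my own simplification, with my
own names, not the paper's exact quantities): the per-block sums below are
all that is needed to rebuild a least-squares frequency (slope) estimate
later, and adjacent blocks can be merged without going back to the raw
timestamps.

    import numpy as np

    def block_sums(x):
        """Running sums over one block of phase samples x."""
        n = np.arange(len(x))
        return len(x), x.sum(), (n * x).sum()    # N, S0, S1

    def merge(a, b):
        """Combine two adjacent blocks; indices of the second block
        are shifted by the length of the first."""
        Na, S0a, S1a = a
        Nb, S0b, S1b = b
        return Na + Nb, S0a + S0b, S1a + S1b + Na * S0b

    def ls_freq(block, tau0):
        """Least-squares slope (fractional frequency) from the sums."""
        N, S0, S1 = block
        nbar = (N - 1) / 2.0
        return (S1 - nbar * S0) / (tau0 * N * (N * N - 1) / 12.0)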

>>> If ADEV is needed, the averaging interval can be reduced and several
>>> measurements (more than eight) can be combined into one point (creating
>>> a new weighting function which resembles the usual Pi one, as shown in
>>> [1] p. 54); it should then be possible to calculate the usual ADEV using
>>> such data. As far as I understand, the filter formed by the resulting
>>> weighting function will have a wider bandwidth, so the impact on ADEV
>>> will be smaller and it can be computed correctly. Am I missing something?
>>
>> Well, you can reduce the averaging interval to 1 and then compute the
>> ADEV, but it does not behave as the MDEV any longer.
> 
> With no averaging it will be a simple reciprocal counter with a time
> resolution of only 2.5 ns. The idea was to use trapezoidal weighting, so
> the counter becomes somewhere "between" the Pi and Delta counters. When
> the upper base of the weighting-function trapezium has zero length
> (triangular weighting) it is the usual Delta counter; if it is infinitely
> long, the result should converge to the usual Pi counter. Prof. Rubiola
> claims that if the ratio of the upper to lower base is more than 8/9,
> ADEV plots made from such data should be sufficiently close to the usual
> ADEV. Of course, the gain from the averaging will be at least 3 times
> less than from the usual Delta averaging.

You do not want to mix pre-filtering and ADEV that way. We can do things
better.

> Maybe I need to find or make a "not so good" signal source, measure its
> ADEV using the above method, and compare it with the traditional one. It
> should be an interesting experiment.

It is always good to experiment and learn from not-so-stable stuff,
stuff with significant drift, and very stable stuff.

>> What you can do is that you can calculate MDEV or PDEV, and then apply
>> the suitable bias function to convert the level to that of ADEV.

Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Bob kb8tq
Hi


> On May 12, 2018, at 1:20 PM, Oleg Skydan  wrote:
> 
> Hi!
> 
> From: "Bob kb8tq" 
>> There is still the problem that the first point on the graph is different
>> depending on the technique.
> 
> The leftmost tau values are skipped and they "stay" inside the counter. If 
> I set up the counter to generate, let's say, 1 s stamps (ADEV starts at 
> 1 s), it will internally generate 1/8 s averaged measurements, but export 
> combined data for 1 s stamps. The result will be, strictly speaking, 
> different, but the difference should be insignificant.

Except there are a *lot* of papers where they demonstrate that the difference 
may be *very* significant. I would suggest that the “is significant” group is 
actually larger than the “is not” group. 


> 
>> The other side of all this is that ADEV is really not a very good way to 
>> test a counter.
> 
> Counter testing was not the main reason to dig into the statistics details 
> these last days. Initially I used ADEV when I tried to test the idea of 
> making a counter with very simple HW and good resolution (BTW, it appeared 
> later that it was not ADEV in reality :). Then I saw it worked, so I decided 
> to make a "normal" useful counter (I liked the HW/SW concept). The HW has 
> enough power to compute various statistics onboard in real time, and while 
> it is not a requisite feature of the project now, I think it will be good if 
> the counter is able to do it (or at least exports data suitable for doing it 
> in post-processing). The rest of the story you know :)

Again, ADEV is tricky and sensitive to various odd things. This whole debate 
about it being sensitive goes 
back to the original papers in the late 1960’s and 1970’s. At every paper I 
attended the issue of averaging
and bandwidth came up in the questions after the paper. The conversation has 
been going on for a *long*
time. 

> 
>> If you are trying specifically just to measure ADEV, then there are a lot 
>> of ways to do that by itself.
> 
> Yes, but if it can be done with only some additional code, why not have 
> such an ability? Even if it has some known limitations it is still a useful 
> addition. Of course it should be done as well as it can be within the HW 
> limitations. Also, it was/is a good educational moment.

It’s only useful if it is accurate. Since you can “do code” that gives you 
results that are better than reality,
simply coming up with a number is not the full answer. To be useful as ADEV, it 
needs to be correct. 

> 
> Now it is a period of tests/experiments to see the features/limitations of 
> the technology used (of course, where those experiments can be done with 
> the current "ugly style" HW). I have already got a lot of useful 
> information; it should help me in the following HW/FW development. The next 
> steps are the analog front end and GPS frequency correction (I should get 
> the GPS module next week). I have already tested the 6 GHz prescaler and am 
> now waiting for some parts to finish it. I hope this project will have a 
> "happy end" :).

I’m sure it will come out to be a very cool counter. My *only* concern here 
is creating inaccurate results by stretching too far with what you are 
trying to do. Keep it to the stuff that is accurate.

Bob


> 
> All the best!
> Oleg 



Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Magnus Danielson
Hi Oleg,

On 05/12/2018 07:20 PM, Oleg Skydan wrote:
> Hi!
> 
> From: "Bob kb8tq" 
>> There is still the problem that the first point on the graph is
>> different depending on the technique.
> 
> The leftmost tau values are skipped and they "stay" inside the counter.
> If I set up the counter to generate, let's say, 1 s stamps (ADEV starts at
> 1 s), it will internally generate 1/8 s averaged measurements, but export
> combined data for 1 s stamps. The result will be, strictly speaking,
> different, but the difference should be insignificant.

What is your motivation for doing this?

I'm not saying you are necessarily incorrect, but it would be
interesting to hear your motivation.

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Oleg Skydan

Hi!

From: "Magnus Danielson" 

ADEV assumes brick-wall filtering up to the Nyquist frequency resulting
from the sample rate. When you filter the data, as you do in a Linear
Regression / Least Squares estimation, the actual bandwidth will be much
less, so the ADEV measures will be biased for lower taus; but for higher
taus less of the ADEV kernel will be affected by the filter, and thus
the bias will reduce.


Thanks for the clarification. Bob already pointed me to the problem, and 
after some reading the *DEV theme seems clearer.



Do the ADEV plots I got look reasonable for the "mid range" OCXOs used
(see the second plot for the long run test)?


You probably want to find the source of the wavy response seen in the
orange and red traces.


I have already found the problem. It is a HW problem related to poor 
isolation between the reference OCXO signal and the counter input signal 
clock line (it is also possible there are some grounding or power-supply 
decoupling problems; the HW is made in "ugly construction" style). When the 
input clock frequency is very close (0.3..0.4 Hz difference) to the OCXO 
subharmonic, this problem becomes visible (it is not the FW problem 
discussed before, because the counter reference is not a harmonic of the 
OCXO anymore). It looks like some commercial counters suffer from that 
problem too. After I connected the OCXO and input feed lines with short 
pieces of coax this effect decreased greatly, but did not disappear. The 
"large N" plots were measured with the input signal 1.4 Hz (0.3 ppm) higher 
than the 1/2 subharmonic of the OCXO frequency; at such a frequency 
difference the problem completely disappears. I will check for this problem 
again when I move the HW to the normal PCB.



In fact, you can do an Omega-style counter you can use for PDEV; you just
need to use the right approach to be able to decimate the data. Oh,
there's a draft paper on that:

https://arxiv.org/abs/1604.01004


Thanks for the document. It needs some time to study, and maybe I will add 
features to the counter to calculate a correct PDEV.



If ADEV is needed, the averaging interval can be reduced and several
measurements (more than eight) can be combined into one point (creating
a new weighting function which resembles the usual Pi one, as shown in
[1] p. 54); it should then be possible to calculate the usual ADEV using
such data. As far as I understand, the filter formed by the resulting
weighting function will have a wider bandwidth, so the impact on ADEV
will be smaller and it can be computed correctly. Am I missing something?


Well, you can reduce the averaging interval to 1 and then compute the
ADEV, but it does not behave as the MDEV any longer.


With no averaging it will be a simple reciprocal counter with a time 
resolution of only 2.5 ns. The idea was to use trapezoidal weighting, so the 
counter becomes somewhere "between" the Pi and Delta counters. When the 
upper base of the weighting-function trapezium has zero length (triangular 
weighting) it is the usual Delta counter; if it is infinitely long, the 
result should converge to the usual Pi counter. Prof. Rubiola claims that if 
the ratio of the upper to lower base is more than 8/9, ADEV plots made from 
such data should be sufficiently close to the usual ADEV. Of course, the 
gain from the averaging will be at least 3 times less than from the usual 
Delta averaging.
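
A small numpy sketch of that weighting family (my own construction for
illustration; r is the upper-to-lower base ratio, so r = 0 gives the
triangular Delta weighting and r close to 1 approaches the flat Pi case):

    import numpy as np

    def trapezoid_weights(n, r):
        """Unit-area trapezoidal weighting over n frequency samples:
        linear ramps on both sides, flat top of relative width r."""
        ramp = max(1, int(round(n * (1.0 - r) / 2.0)))
        w = np.ones(n)
        up = np.linspace(0.0, 1.0, ramp, endpoint=False)
        w[:ramp] = up
        w[n - ramp:] = up[::-1]
        return w / w.sum()

    w = trapezoid_weights(1000, 8.0 / 9.0)  # Rubiola's 8/9 ratio
    # weighted block estimate from frequency samples y: (w * y).sum()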


Maybe I need to find or make a "not so good" signal source, measure its 
ADEV using the above method, and compare it with the traditional one. It 
should be an interesting experiment.



What you can do is calculate MDEV or PDEV, and then apply
the suitable bias function to convert the level to that of ADEV.


That can be done if the statistics are calculated inside the counter, but it 
will not make the exported data suitable for post-processing with TimeLab or 
other software that is not aware of what is going on inside the counter.



Yes, they give relatively close values of deviation, where PDEV goes
somewhat lower, indicating that there is a slight advantage of the LR/LS
frequency estimation measure over that of the Delta counter, as given by
its MDEV.


Here is another question: how does one correctly calculate the averaging 
length in a Delta counter? I have 5e6 timestamps in one second, so the Pi 
and Omega counters process 5e6 samples in total and one measurement also has 
5e6 samples, but the Delta one processes 10e6 in total, with each of the 
averaged measurements having 5e6 samples. The Delta counter actually uses 
two times more data. What should be kept equal when comparing different 
counter types: the number of samples in one measurement (the gating time) or 
the total number of samples processed?


Thanks!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-12 Thread Oleg Skydan

Hi!

From: "Bob kb8tq" 
There is still the problem that the first point on the graph is different 
depending on the technique.


The leftmost tau values are skipped and they "stay" inside the counter. If I 
set up the counter to generate, let's say, 1 s stamps (ADEV starts at 1 s), 
it will internally generate 1/8 s averaged measurements, but export combined 
data for 1 s stamps. The result will be, strictly speaking, different, but 
the difference should be insignificant.
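
As a toy numpy sketch of that export scheme (my illustration, not the
actual firmware), with a plain mean standing in for the internal
combination:

    import numpy as np

    def export_1s_stamps(y_eighth):
        """Combine 1/8 s averaged frequency readings, eight at a time,
        into exported 1 s readings."""
        y = y_eighth[: y_eighth.size // 8 * 8]
        return y.reshape(-1, 8).mean(axis=1)

This is exactly the step under debate in the thread: the exported 1 s
series no longer has the brick-wall bandwidth of a plain Pi counter, so an
ADEV computed from it is biased.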


The other side of all this is that ADEV is really not a very good way to 
test a counter.


Counter testing was not the main reason to dig into the statistics details 
these last days. Initially I used ADEV when I tried to test the idea of 
making a counter with very simple HW and good resolution (BTW, it appeared 
later that it was not ADEV in reality :). Then I saw it worked, so I decided 
to make a "normal" useful counter (I liked the HW/SW concept). The HW has 
enough power to compute various statistics onboard in real time, and while 
it is not a requisite feature of the project now, I think it will be good if 
the counter is able to do it (or at least exports data suitable for doing it 
in post-processing). The rest of the story you know :)


If you are trying specifically just to measure ADEV, then there are a lot 
of ways to do that by itself.


Yes, but if it can be done with only some additional code, why not have 
such an ability? Even if it has some known limitations it is still a useful 
addition. Of course it should be done as well as it can be within the HW 
limitations. Also, it was/is a good educational moment.


Now it is a period of tests/experiments to see the features/limitations of 
the technology used (of course, where those experiments can be done with the 
current "ugly style" HW). I have already got a lot of useful information; it 
should help me in the following HW/FW development. The next steps are the 
analog front end and GPS frequency correction (I should get the GPS module 
next week). I have already tested the 6 GHz prescaler and am now waiting for 
some parts to finish it. I hope this project will have a "happy end" :).


All the best!
Oleg 




Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Magnus Danielson
Hi,

On 05/11/2018 05:35 PM, Bob kb8tq wrote:
> Hi
> 
> If you do the weighted average as indicated in the paper *and* compare it to 
> a “single sample” computation, the results are different for that time 
> interval. To me that’s a problem. To the authors, the fact that the rest of 
> the curve is the same is proof that it works. I certainly agree that once 
> you get to longer tau, the process has no detrimental impact. There is still 
> the problem that the first point on the graph is different depending on the 
> technique. 

Check what I did in my paper. I made sure to verify that my estimator of
phase and frequency is bias-free; that is, when exposed to a stable phase
or a stable frequency, the phase and frequency estimates come out unbiased,
and 0 for the other quantity as you switch between them, as a good
estimator should.

> The other side of all this is that ADEV is really not a very good way to 
> test a counter. It has its quirks and its issues. They are impacted by what 
> is in a counter, but that’s a side effect. If one is after a general test of 
> counter hardware, one probably should look at other approaches.

Well, you can tell a few things from the ADEV; it gives you a hint about
what you can expect from that counter when you do ADEV... and frequency
measurements. The 1/tau limit is that of the counter. It's... a complex
issue of single-shot resolution and noise, but a hint.

> If you are trying specifically just to measure ADEV, then there are a lot of 
> ways to do that by itself. It’s not clear that re-inventing the hardware is 
> required to do this. Going with an “average down” approach ultimately *will* 
> have problems for certain signals and noise profiles. 

The filtering needs to be understood and handled correctly, for sure, and
it does nothing good for the true ADEV measures at lower taus. Filtering
helps to improve the frequency reading, as the measure's deviation
shifts from ADEV to MDEV or PDEV, but let's not confuse that with
improving the ADEV; it's a completely different thing. Improving the
ADEV takes single-shot resolution, stable hardware and a stable reference
source.

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Magnus Danielson
Oleg,

On 05/11/2018 04:42 PM, Oleg Skydan wrote:
> Hi
> 
> --
> From: "Bob kb8tq" 
>> The most accurate answer is always “that depends”. The simple answer
>> is no.
> 
> I have spent yesterday evening and quite a bit of the night :)
> reading many interesting papers and several related discussions in the
> time-nuts archive (the Magnus Danielson posts in the "Modified Allan
> Deviation and counter averaging" and "Omega counters and Parabolic
> Variance (PVAR)" topics were very informative and helpful, thanks!).

You are welcome. Good that people have use for them.

> It looks like a trick to combine averaging with the possibility of correct
> ADEV calculation in post-processing exists. There is a nice presentation
> by Prof. Rubiola [1], with a suitable solution on page 54 (at least that
> is how I understood it; maybe I am wrong). I can switch to usual averaging
> (a Lambda/Delta counter) instead of the LR calculation (an Omega counter);
> the losses should be very small in my case. With such averaging the MDEV
> can be computed correctly.

In fact, you can do an Omega-style counter you can use for PDEV; you just
need to use the right approach to be able to decimate the data. Oh,
there's a draft paper on that:

https://arxiv.org/abs/1604.01004

Need to update that one.

> If ADEV is needed, the averaging interval can be reduced and several
> measurements (more than eight) can be combined into one point (creating
> a new weighting function which resembles the usual Pi one, as shown in
> [1] p. 54); it should then be possible to calculate the usual ADEV using
> such data. As far as I understand, the filter formed by the resulting
> weighting function will have a wider bandwidth, so the impact on ADEV
> will be smaller and it can be computed correctly. Am I missing something?

Well, you can reduce the averaging interval to 1 and then compute the
ADEV, but it does not behave as the MDEV any longer.

What you can do is calculate MDEV or PDEV, and then apply
the suitable bias function to convert the level to that of ADEV.

> I have made the necessary changes in the code; the firmware now computes
> the Delta averaging, and it also computes combined Delta-averaged
> measurements (resulting in a trapezoidal weighting function). Both numbers
> are computed with continuous stamping and optimal overlapping. Everything
> is done in real time. I did some tests; the results are very similar to
> the ones made with LR counting.

Yes, they give relatively close values of deviation, where PDEV goes
somewhat lower, indicating that there is a slight advantage of the LR/LS
frequency estimation measure over that of the Delta counter, as given by
its MDEV.

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Magnus Danielson
Hi Dana,

On 05/10/2018 06:17 PM, Dana Whitlow wrote:
> I'm a bit fuzzy, then, on the definition of ADEV.  I was under the
> impression that one measured a series of
> "phase samples" at the desired spacing, then took the RMS value of that
> series, not just a single sample,
> as the ADEV value.

You cannot use RMS here, as the noise does not converge as you build the
average. This was a huge issue before the Allan processing approach, which
essentially went for averaging 2-point RMS measurements, each of which
ends up being a subtraction of two frequency measures.
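
A toy numpy demonstration of that convergence problem (illustrative only,
assuming random-walk FM noise): the classical standard deviation of the
frequency record keeps growing with the record length, while the 2-sample
(Allan) statistic stays put.

    import numpy as np

    rng = np.random.default_rng(0)
    y = np.cumsum(1e-12 * rng.standard_normal(1_000_000))  # RW FM

    for n in (10_000, 100_000, 1_000_000):
        classical = y[:n].std()                              # grows with n
        allan = np.sqrt(0.5 * np.mean(np.diff(y[:n]) ** 2))  # stable
        print(n, classical, allan)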

> Can anybody say which it is?   The RMS approach seems to make better sense
> as it provides some measure
> of defense against taking a sample that happens to be an outlier, yet
> avoids the flaw of tending to average
> the reported ADEV towards zero.

Forget about RMS here.

Cheers,
Magnus

> Dana   (K8YUM)
> 
> 
> On Thu, May 10, 2018 at 9:21 AM, Bob kb8tq  wrote:
> 
>> Hi
>>
>> If you collect data over the entire second and average that down for a
>> single point, then no, your ADEV will not be correct.
>> There are a number of papers on this. What ADEV wants to see is a single
>> phase “sample” at one second spacing. This is
>> also at the root of how you get 10 second ADEV. You don’t average the ten
>> 1 second data points. You throw nine data points
>> away and use one of them ( = you decimate the data ).
>>
>> What happens if you ignore this? Your curve looks “too good”. The resultant
>> curve is *below* the real curve when plotted.
>>
>> A quick way to demonstrate this is to do ADEV with averaged vs decimated
>> data ….
>>
>> Bob
>>
>>> On May 10, 2018, at 4:46 AM, Oleg Skydan  wrote:
>>>
>>> Hi
>>>
>>> I have got a pair of not so bad OCXOs (Morion GK85). I did some
>>> measurements; the results may be interesting to others (sorry if not), so
>>> I decided to post them.
>>>
>>> I ran a set of 5-minute-long counter runs (two OCXOs were measured
>>> against each other); each point is a 1 s gate frequency measurement with
>>> a different number of timestamps used in the LR calculation (from 10 to
>>> 5e6). The counter provides continuous counting. As you can see, I reach
>>> the HW limitations at 5..6e-12 ADEV (1 s tau) with only 1e5 timestamps.
>>> The results look reasonable; the theory predicts 27 ps equivalent
>>> resolution with 1e5 timestamps, and the sqrt(N) law is clearly seen on
>>> the plots. I do not know what the limiting factor is, whether it is the
>>> OCXOs or some counter HW.
>>>
>>> I know there are HW problems; some of them were identified during this
>>> experiment. They were expected, because the HW is still just an ugly
>>> construction made from boards left in the "radio junk box" from other
>>> projects/experiments. I am going to move to a well-designed PCB with some
>>> improvements in HW (and a more or less "normal" analog front end with a
>>> good comparator, ADCMP604 or something similar, for the "low frequency"
>>> input). But first I want to finish my initial tests; it should help with
>>> the HW design.
>>>
>>> Now I have some questions. As you know, I am experimenting with a
>>> counter that uses LR calculations to improve its resolution. The LR data
>>> for each measurement is collected during the gate time only, and the
>>> measurements are continuous. Will the ADEV be calculated correctly from
>>> such measurements? I understand that any averaging over a time window
>>> larger than the single measurement time will spoil the ADEV plot. I also
>>> understand that using LR can result in an incorrect frequency estimate
>>> for a signal with large drift (this should not be a problem for the
>>> discussed measurements, at least for the numbers we are talking about).
>>>
>>> Do the ADEV plots I got look reasonable for the "mid range" OCXOs used
>>> (see the second plot for the long run test)?
>>>
>>> BTW, I see I can interface a GPS module to my counter without additional
>>> HW (except the module itself; do not worry, it will not be another DIY
>>> GPSDO, probably :-) ). I will try to do it. The initial idea is not to
>>> try to lock the reference OCXO to GPS; instead I will just measure GPS
>>> against REF and make corrections using pure math in SW. I see some
>>> advantages with such a design: no high-resolution DAC, no reference for
>>> the DAC, no loop, no additional hardware at all, only the GPS module and
>>> software :) (it is in the spirit of this project)... Of course I will not
>>> have a reference signal that can be used outside the counter; I think I
>>> can live with that. It is worth doing some experiments.
>>>
>>> Best!
>>> Oleg UR3IQO
>>> [attached: three screenshots of the ADEV plots]

Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Magnus Danielson
Oleg,

On 05/10/2018 10:46 AM, Oleg Skydan wrote:
> Hi
> 
> Now I have some questions. As you know, I am experimenting with a
> counter that uses LR calculations to improve its resolution. The LR data
> for each measurement is collected during the gate time only, and the
> measurements are continuous. Will the ADEV be calculated correctly from
> such measurements?

Many assume yes; the actual answer is no, or well, it depends.

ADEV assumes brick-wall filtering up to the Nyquist frequency resulting
from the sample rate. When you filter the data, as you do in a Linear
Regression / Least Squares estimation, the actual bandwidth will be much
less, so the ADEV measures will be biased for lower taus; but for higher
taus less of the ADEV kernel will be affected by the filter, and thus
the bias will reduce.

It was when investigating this that Prof. Enrico Rubiola and Prof.
Francois Vernotte invented the parabolic deviation, PDEV.

> I understand that any averaging over a time window
> larger than the single measurement time will spoil the ADEV plot.

Correct.

> Also I understand that using LR can result in an incorrect frequency
> estimate for a signal with large drift (should not be a problem for the
> discussed measurements, at least for the numbers we are talking about).

That is a result of not using an LR/LS method that supports drift; with
one, the drift's effect on the frequency estimate is greatly reduced.

An LR with only a phase and frequency model, i.e. only linear components,
is unable to correctly handle the quadratic nature of linear frequency
drift. Thus, by using an LR model that supports it, the quadratic term's
influence on the frequency estimate can be reduced.
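
A hedged numpy sketch of that idea (a plain polynomial fit used for
illustration, not the method from any particular paper): adding the
quadratic term to the phase model absorbs linear frequency drift, so the
linear coefficient is a cleaner frequency estimate.

    import numpy as np

    def freq_estimate(x, tau0, with_drift_term=True):
        """LS frequency from phase samples x; the quadratic term
        models linear frequency drift."""
        t = np.arange(len(x)) * tau0
        deg = 2 if with_drift_term else 1
        return np.polyfit(t, x, deg)[-2]   # coefficient of t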

> Do the ADEV plots I got look reasonable for the "mid range" OCXOs used
> (see the second plot for the long run test)?

You probably want to find the source of the wavy response seen in the
orange and red traces.

Cheers,
Magnus


Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Bob kb8tq
Hi

If you do the weighted average as indicated in the paper *and* compare it to 
a “single sample” computation, the results are different for that time 
interval. To me that’s a problem. To the authors, the fact that the rest of 
the curve is the same is proof that it works. I certainly agree that once you 
get to longer tau, the process has no detrimental impact. There is still the 
problem that the first point on the graph is different depending on the 
technique. 

The other side of all this is that ADEV is really not a very good way to test 
a counter. It has its quirks and its issues. They are impacted by what is in 
a counter, but that’s a side effect. If one is after a general test of 
counter hardware, one probably should look at other approaches.

If you are trying specifically just to measure ADEV, then there are a lot of 
ways to do that by itself. It’s not clear that re-inventing the hardware is 
required to do this. Going with an “average down” approach ultimately *will* 
have problems for certain signals and noise profiles. 

Bob

> On May 11, 2018, at 10:42 AM, Oleg Skydan  wrote:
> 
> Hi
> 
> --
> From: "Bob kb8tq" 
>> The most accurate answer is always “that depends”. The simple answer is no.
> 
> I have spent yesterday evening and quite a bit of the night :) reading 
> many interesting papers and several related discussions in the time-nuts 
> archive (the Magnus Danielson posts in the "Modified Allan Deviation and 
> counter averaging" and "Omega counters and Parabolic Variance (PVAR)" 
> topics were very informative and helpful, thanks!).
> 
> It looks like a trick to combine averaging with the possibility of correct 
> ADEV calculation in post-processing exists. There is a nice presentation 
> by Prof. Rubiola [1], with a suitable solution on page 54 (at least that is 
> how I understood it; maybe I am wrong). I can switch to usual averaging 
> (a Lambda/Delta counter) instead of the LR calculation (an Omega counter); 
> the losses should be very small in my case. With such averaging the MDEV 
> can be computed correctly. If ADEV is needed, the averaging interval can be 
> reduced and several measurements (more than eight) can be combined into one 
> point (creating a new weighting function which resembles the usual Pi one, 
> as shown in [1] p. 54); it should then be possible to calculate the usual 
> ADEV using such data. As far as I understand, the filter formed by the 
> resulting weighting function will have a wider bandwidth, so the impact on 
> ADEV will be smaller and it can be computed correctly. Am I missing 
> something?
> 
> I have made the necessary changes in the code; the firmware now computes 
> the Delta averaging, and it also computes combined Delta-averaged 
> measurements (resulting in a trapezoidal weighting function). Both numbers 
> are computed with continuous stamping and optimal overlapping. Everything 
> is done in real time. I did some tests; the results are very similar to 
> the ones made with LR counting.
> 
> [1] http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf
>   E. Rubiola, High resolution time and frequency counters, updated version.
> 
> All the best!
> Oleg UR3IQO 



Re: [time-nuts] Question about frequency counter testing

2018-05-11 Thread Oleg Skydan

Hi

--
From: "Bob kb8tq" 
The most accurate answer is always “that depends”. The simple answer is 
no.


I have spent yesterday evening and quite a bit of the night :) reading 
many interesting papers and several related discussions in the time-nuts 
archive (the Magnus Danielson posts in the "Modified Allan Deviation and 
counter averaging" and "Omega counters and Parabolic Variance (PVAR)" topics 
were very informative and helpful, thanks!).


It looks like a trick to combine averaging with the possibility of correct 
ADEV calculation in post-processing exists. There is a nice presentation 
by Prof. Rubiola [1], with a suitable solution on page 54 (at least that is 
how I understood it; maybe I am wrong). I can switch to usual averaging 
(a Lambda/Delta counter) instead of the LR calculation (an Omega counter); 
the losses should be very small in my case. With such averaging the MDEV can 
be computed correctly. If ADEV is needed, the averaging interval can be 
reduced and several measurements (more than eight) can be combined into one 
point (creating a new weighting function which resembles the usual Pi one, 
as shown in [1] p. 54); it should then be possible to calculate the usual 
ADEV using such data. As far as I understand, the filter formed by the 
resulting weighting function will have a wider bandwidth, so the impact on 
ADEV will be smaller and it can be computed correctly. Am I missing 
something?


I have made the necessary changes in the code; the firmware now computes the 
Delta averaging, and it also computes combined Delta-averaged measurements 
(resulting in a trapezoidal weighting function). Both numbers are computed 
with continuous stamping and optimal overlapping. Everything is done in real 
time. I did some tests; the results are very similar to the ones made with 
LR counting.


[1] http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf
   E. Rubiola, High resolution time and frequency counters, updated 
version.


All the best!
Oleg UR3IQO 




Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Bob kb8tq
Hi

> On May 10, 2018, at 1:44 PM, Oleg Skydan  wrote:
> 
> Bob, thanks for clarification!
> 
> From: "Bob kb8tq" 
>> If you collect data over the entire second and average that down for a 
>> single point, then no, your ADEV will not be correct.
> 
> That probably explains why I got such nice (and suspicious) plots :)
> 
>> There are a number of papers on this. What ADEV wants to see is a single 
>> phase “sample” at one second spacing.
> 
> After I read your answer I remembered some nice papers by Prof. Rubiola; my 
> bad, I was able to answer my question by myself. When we take a single 
> phase "sample" at the start and end times of each tau, it is equivalent to 
> summing all the timestamp intervals I collect during that tau; but by doing 
> LR processing I calculate a *weighted* sum, so the results will differ. So 
> it appears the "ADEV" calculation is in reality PDEV (parabolic), because 
> of the current firmware processing.
> 
> I made a test with two plots for illustration: one is the classical ADEV 
> (with 2.5 ns time resolution), the second is with LR-processed data (5e6 
> timestamps per second). Both plots are made from the same data. It is 
> obvious that the classical ADEV is limited by the counter resolution in the 
> left part of the plot. It would be interesting to know whether it is 
> possible to use the 498 extra points per second to improve the counter 
> resolution in ADEV measurements without affecting the ADEV.

The most accurate answer is always “that depends”. The simple answer is no. If 
you take a look at some of the papers from the ’90s you can find suggestions 
on filtering the first point in the series. The gotcha is that it does impact 
the first point. The claim is that if you do it right, it does not impact the 
rest of the points in the series. 

Bob


> 
> Thanks!
> Oleg UR3IQO 
> [attached: screenshot of the plots]



Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Oleg Skydan

Bob, thanks for clarification!

From: "Bob kb8tq" 
If you collect data over the entire second and average that down for a 
single point, then no, your ADEV will not be correct.


That probably explains why I got such nice (and suspicious) plots :)

There are a number of papers on this. What ADEV wants to see is a single 
phase “sample” at one second spacing.


After I read your answer I remembered some nice papers by Prof. Rubiola; my 
bad, I was able to answer my question by myself. When we take a single phase 
"sample" at the start and end times of each tau, it is equivalent to summing 
all the timestamp intervals I collect during that tau; but by doing LR 
processing I calculate a *weighted* sum, so the results will differ. So it 
appears the "ADEV" calculation is in reality PDEV (parabolic), because 
of the current firmware processing.


I made a test with two plots for illustration - one is the classical ADEV 
(with 2.5ns time resolution), the second one with LR processed data (5e6 
timestamps per second). Both plots are made from the same data. It is 
obvious that the classical ADEV is limited by the counter resolution in the 
left part of the plot. It is interesting whether it is possible to use the 
498 extra points per second to improve counter resolution in ADEV 
measurements without affecting the ADEV?


Thanks!
Oleg UR3IQO 
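
A small numpy sketch (illustrative only, not part of the original post) of
why the LR estimate is a *weighted* sum: expanding the least-squares slope
as a sum over the successive timestamp differences shows the weights rise
and fall parabolically, matching the "PDEV (parabolic)" remark above.

import numpy as np

N = 11                           # number of timestamps (events), for example
i = np.arange(N)
a = (i - i.mean()) / np.sum((i - i.mean()) ** 2)   # LR slope = sum(a_i * t_i)

# Rewrite the slope as a weighted sum over the N-1 successive differences
# d_k = t_{k+1} - t_k:  sum_i a_i*t_i = sum_k w_k*d_k with w_k = sum_{i>k} a_i
w = np.array([a[k + 1:].sum() for k in range(N - 1)])
print(w)                         # the weights form a parabola, peaked mid-gate

t = np.cumsum(np.random.rand(N))                   # any phase series
assert np.isclose(np.dot(a, t), np.dot(w, np.diff(t)))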

Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Bob kb8tq
Hi

More or less: 

ADEV takes the *difference* between phase samples and then does a standard
deviation on them. RMS of the phase samples makes a lot of sense and it was
used back in the late 50’s / early 60’s. The gotcha turns out to be that it
is an ill-behaved measure. The more data you take, the bigger the number you
get ( = it does not converge ). That problem is what led NBS to dig into a
better measure. The result was ADEV.

The point about averaging vs decimation relates to what you do to the data
*before* you ever compute the ADEV. If you have 0.1 second samples, you have
to do something to get to a tau of 1 second or 10 seconds or … The process
you use to get the data to the proper interval turns out to matter quite a
bit.

Bob
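
A small numpy sketch of this effect (illustrative only, with an assumed
white-FM noise level; not from the original posts): ADEV at ten times the
base tau is computed two ways, from decimated phase samples and from
ten-point averaged phase samples. The averaged variant reads low.

import numpy as np

def adev(x, tau):
    # Allan deviation from phase samples x (seconds) spaced tau apart
    d = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return np.sqrt(np.mean(d * d) / (2.0 * tau * tau))

rng  = np.random.default_rng(1)
tau0 = 0.1
y    = 1e-11 * rng.standard_normal(1_000_000)        # white FM at 0.1 s
x    = np.concatenate(([0.0], np.cumsum(y) * tau0))  # phase (time error)

m     = 10                                   # go from 0.1 s to 1 s
x_dec = x[::m]                               # decimate: keep every m-th sample
x_avg = x[:len(x) // m * m].reshape(-1, m).mean(axis=1)  # average m samples

print("decimated:", adev(x_dec, m * tau0))   # correct: decimation preserves ADEV
print("averaged :", adev(x_avg, m * tau0))   # reads low -- looks "too good"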

> On May 10, 2018, at 12:17 PM, Dana Whitlow  wrote:
> 
> I'm a bit fuzzy, then, on the definition of ADEV.  I was under the
> impression that one measured a series of
> "phase samples" at the desired spacing, then took the RMS value of that
> series, not just a single sample,
> as the ADEV value.
> 
> Can anybody say which it is?   The RMS approach seems to make better sense
> as it provides some measure
> of defense against taking a sample that happens to be an outlier, yet
> avoids the flaw of tending to average
> the reported ADEV towards zero.
> 
> Dana   (K8YUM)
> 
> 
> On Thu, May 10, 2018 at 9:21 AM, Bob kb8tq  wrote:
> 
>> Hi
>> 
>> If you collect data over the entire second and average that down for a
>> single point, then no, your ADEV will not be correct.
>> There are a number of papers on this. What ADEV wants to see is a single
>> phase “sample” at one second spacing. This is
>> also at the root of how you get 10 second ADEV. You don’t average the ten
>> 1 second data points. You throw nine data points
>> away and use one of them ( = you decimate the data ).
>> 
>> What happens if you ignore this? Your curve looks “too good”. The resultant
>> curve is *below* the real curve when plotted.
>> 
>> A quick way to demonstrate this is to do ADEV with averaged vs decimated
>> data ….
>> 
>> Bob
>> 
>>> On May 10, 2018, at 4:46 AM, Oleg Skydan  wrote:
>>> 
>>> Hi
>>> 
>>> I have got a pair of not so bad OCXOs (Morion GK85). I did some
>> measurements, the results may be of interest to others (sorry if not), so I
>> decided to post them.
>>> 
>>> I ran a set of 5-minute-long counter runs (two OCXOs were measured
>> against each other), each point is a 1sec gate frequency measurement with
>> a different number of timestamps used in the LR calculation (from 10 to 5e6).
>> The counter provides continuous counting. As you can see I reach the HW
>> limitations at 5..6e-12 ADEV (1s tau) with only 1e5 timestamps. The results
>> look reasonable, the theory predicts 27ps equivalent resolution with 1e5
>> timestamps, and the sqrt(N) law is clearly seen on the plots. I do not
>> know whether the limiting factor is the OCXOs or some counter HW.
>>> 
>>> I know there are HW problems, some of them were identified during this
>> experiment. They were expected, because the HW is still just an ugly
>> construction made from the boards left in the "radio junk box" from the
>> other projects/experiments. I am going to move to a well-designed PCB
>> with some improvements in HW (and a more or less "normal" analog frontend
>> with a good comparator, ADCMP604 or something similar, for the "low
>> frequency" input). But I want to finish my initial tests first, they should
>> help with the HW design.
>>> 
>>> Now I have some questions. As you know I am experimenting with a
>> counter that uses LR calculations to improve its resolution. The LR data
>> for each measurement is collected during the gate time only, and the
>> measurements are continuous. Will the ADEV be calculated correctly from
>> such measurements? I understand that any averaging over a time window
>> larger than the single measurement time will spoil the ADEV plot. I also
>> understand that using LR can result in an incorrect frequency estimate for a
>> signal with large drift (this should not be a problem for the discussed
>> measurements, at least for the numbers we are talking about).
>>> 
>>> Do the ADEV plots I got look reasonable for the "mid range"
>> OCXOs used (see the second plot for the long run test)?
>>> 
>>> BTW, I see I can interface a GPS module to my counter without additional
>> HW (except the module itself; do not worry, it will not be another DIY
>> GPSDO, probably :-) ). I will try to do it. The initial idea is not to try to
>> lock the reference OCXO to GPS; instead I will just measure GPS against REF
>> and will make corrections using pure math in SW. I see some advantages with
>> such a design - no hi resolution DAC, no reference for the DAC, no loop, no
>> additional hardware at all - only the GPS module and software :) (it is in
>> the spirit of this project)... Of course I will not have a reference signal
>> that can be used outside the counter; I think I can live with it. It's worth
>> doing some experiments.

Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Dana Whitlow
I'm a bit fuzzy, then, on the definition of ADEV.  I was under the
impression that one measured a series of
"phase samples" at the desired spacing, then took the RMS value of that
series, not just a single sample,
as the ADEV value.

Can anybody say which it is?   The RMS approach seems to make better sense
as it provides some measure
of defense against taking a sample that happens to be an outlier, yet
avoids the flaw of tending to average
the reported ADEV towards zero.

Dana   (K8YUM)


On Thu, May 10, 2018 at 9:21 AM, Bob kb8tq  wrote:

> Hi
>
> If you collect data over the entire second and average that down for a
> single point, then no, your ADEV will not be correct.
> There are a number of papers on this. What ADEV wants to see is a single
> phase “sample” at one second spacing. This is
> also at the root of how you get 10 second ADEV. You don’t average the ten
> 1 second data points. You throw nine data points
> away and use one of them ( = you decimate the data ).
>
> What happens if you ignore this? Your curve looks “too good”. The resultant
> curve is *below* the real curve when plotted.
>
> A quick way to demonstrate this is to do ADEV with averaged vs decimated
> data ….
>
> Bob
>
> > On May 10, 2018, at 4:46 AM, Oleg Skydan  wrote:
> >
> > Hi
> >
> > I have got a pair of not so bad OCXOs (Morion GK85). I did some
> measurements, the results may be of interest to others (sorry if not), so I
> decided to post them.
> >
> > I ran a set of 5-minute-long counter runs (two OCXOs were measured
> against each other), each point is a 1sec gate frequency measurement with
> a different number of timestamps used in the LR calculation (from 10 to 5e6).
> The counter provides continuous counting. As you can see I reach the HW
> limitations at 5..6e-12 ADEV (1s tau) with only 1e5 timestamps. The results
> look reasonable, the theory predicts 27ps equivalent resolution with 1e5
> timestamps, and the sqrt(N) law is clearly seen on the plots. I do not
> know whether the limiting factor is the OCXOs or some counter HW.
> >
> > I know there are HW problems, some of them were identified during this
> experiment. They were expected, because the HW is still just an ugly
> construction made from the boards left in the "radio junk box" from the
> other projects/experiments. I am going to move to a well-designed PCB
> with some improvements in HW (and a more or less "normal" analog frontend
> with a good comparator, ADCMP604 or something similar, for the "low
> frequency" input). But I want to finish my initial tests first, they should
> help with the HW design.
> >
> > Now I have some questions. As you know I am experimenting with a
> counter that uses LR calculations to improve its resolution. The LR data
> for each measurement is collected during the gate time only, and the
> measurements are continuous. Will the ADEV be calculated correctly from
> such measurements? I understand that any averaging over a time window
> larger than the single measurement time will spoil the ADEV plot. I also
> understand that using LR can result in an incorrect frequency estimate for a
> signal with large drift (this should not be a problem for the discussed
> measurements, at least for the numbers we are talking about).
> >
> > Do the ADEV plots I got look reasonable for the "mid range"
> OCXOs used (see the second plot for the long run test)?
> >
> > BTW, I see I can interface a GPS module to my counter without additional
> HW (except the module itself; do not worry, it will not be another DIY
> GPSDO, probably :-) ). I will try to do it. The initial idea is not to try to
> lock the reference OCXO to GPS; instead I will just measure GPS against REF
> and will make corrections using pure math in SW. I see some advantages with
> such a design - no hi resolution DAC, no reference for the DAC, no loop, no
> additional hardware at all - only the GPS module and software :) (it is in
> the spirit of this project)... Of course I will not have a reference signal
> that can be used outside the counter; I think I can live with it. It's worth
> doing some experiments.
> >
> > Best!
> > Oleg UR3IQO


Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Bob kb8tq
Hi

If you collect data over the entire second and average that down for a single 
point, then no, your ADEV will not be correct. 
There are a number of papers on this. What ADEV wants to see is a single phase 
“sample” at one second spacing. This is
also at the root of how you get 10 second ADEV. You don’t average the ten 1 
second data points. You throw nine data points
away and use one of them ( = you decimate the data ). 

What happens if you ignore this? Your curve looks “too good”. The resultant 
curve is *below* the real curve when plotted. 

A quick way to demonstrate this is to do ADEV with averaged vs decimated data ….

Bob

> On May 10, 2018, at 4:46 AM, Oleg Skydan  wrote:
> 
> Hi
> 
> I have got a pair of not so bad OCXOs (Morion GK85). I did some measurements, 
> the results may be of interest to others (sorry if not), so I decided to post 
> them.
> 
> I ran a set of 5-minute-long counter runs (two OCXOs were measured against 
> each other), each point is a 1sec gate frequency measurement with a different 
> number of timestamps used in the LR calculation (from 10 to 5e6). The counter 
> provides continuous counting. As you can see I reach the HW limitations at 
> 5..6e-12 ADEV (1s tau) with only 1e5 timestamps. The results look 
> reasonable, the theory predicts 27ps equivalent resolution with 1e5 
> timestamps, and the sqrt(N) law is clearly seen on the plots. I do not know 
> whether the limiting factor is the OCXOs or some counter HW.
> 
> I know there are HW problems, some of them were identified during this 
> experiment. They were expected, because the HW is still just an ugly 
> construction made from the boards left in the "radio junk box" from the 
> other projects/experiments. I am going to move to a well-designed PCB with 
> some improvements in HW (and a more or less "normal" analog frontend with a 
> good comparator, ADCMP604 or something similar, for the "low frequency" 
> input). But I want to finish my initial tests first, they should help with 
> the HW design.
> 
> Now I have some questions. As you know I am experimenting with a counter 
> that uses LR calculations to improve its resolution. The LR data for each 
> measurement is collected during the gate time only, and the measurements are 
> continuous. Will the ADEV be calculated correctly from such measurements? I 
> understand that any averaging over a time window larger than the single 
> measurement time will spoil the ADEV plot. I also understand that using LR 
> can result in an incorrect frequency estimate for a signal with large drift 
> (this should not be a problem for the discussed measurements, at least for 
> the numbers we are talking about).
> 
> Do the ADEV plots I got look reasonable for the "mid range" OCXOs used 
> (see the second plot for the long run test)?
> 
> BTW, I see I can interface a GPS module to my counter without additional HW 
> (except the module itself; do not worry, it will not be another DIY GPSDO, 
> probably :-) ). I will try to do it. The initial idea is not to try to lock 
> the reference OCXO to GPS; instead I will just measure GPS against REF and 
> will make corrections using pure math in SW. I see some advantages with such 
> a design - no hi resolution DAC, no reference for the DAC, no loop, no 
> additional hardware at all - only the GPS module and software :) (it is in 
> the spirit of this project)... Of course I will not have a reference signal 
> that can be used outside the counter; I think I can live with it. It's worth 
> doing some experiments.
> 
> Best!
> Oleg UR3IQO 


Re: [time-nuts] Question about frequency counter testing

2018-05-10 Thread Oleg Skydan

Hi

I have got a pair of not so bad OCXOs (Morion GK85). I did some 
measurements, the results may be of interest to others (sorry if not), so I 
decided to post them.


I ran a set of 5-minute-long counter runs (two OCXOs were measured against 
each other), each point is a 1sec gate frequency measurement with a 
different number of timestamps used in the LR calculation (from 10 to 5e6). 
The counter provides continuous counting. As you can see I reach the HW 
limitations at 5..6e-12 ADEV (1s tau) with only 1e5 timestamps. The results 
look reasonable, the theory predicts 27ps equivalent resolution with 1e5 
timestamps, and the sqrt(N) law is clearly seen on the plots. I do not know 
whether the limiting factor is the OCXOs or some counter HW.


I know there are HW problems, some of them were identified during this 
experiment. They were expected, because the HW is still just an ugly 
construction made from the boards left in the "radio junk box" from the 
other projects/experiments. I am going to move to a well-designed PCB with 
some improvements in HW (and a more or less "normal" analog frontend with a 
good comparator, ADCMP604 or something similar, for the "low frequency" 
input). But I want to finish my initial tests first, they should help with 
the HW design.


Now I have some questions. As you know I am experimenting with a counter 
that uses LR calculations to improve its resolution. The LR data for each 
measurement is collected during the gate time only, and the measurements are 
continuous. Will the ADEV be calculated correctly from such measurements? I 
understand that any averaging over a time window larger than the single 
measurement time will spoil the ADEV plot. I also understand that using LR 
can result in an incorrect frequency estimate for a signal with large drift 
(this should not be a problem for the discussed measurements, at least for 
the numbers we are talking about).


Do the ADEV plots I got look reasonable for the "mid range" OCXOs used 
(see the second plot for the long run test)?


BTW, I see I can interface a GPS module to my counter without additional HW 
(except the module itself; do not worry, it will not be another DIY GPSDO, 
probably :-) ). I will try to do it. The initial idea is not to try to lock 
the reference OCXO to GPS; instead I will just measure GPS against REF and 
will make corrections using pure math in SW. I see some advantages with such 
a design - no hi resolution DAC, no reference for the DAC, no loop, no 
additional hardware at all - only the GPS module and software :) (it is in 
the spirit of this project)... Of course I will not have a reference signal 
that can be used outside the counter; I think I can live with it. It's worth 
doing some experiments.


Best!
Oleg UR3IQO 

Re: [time-nuts] Question about frequency counter testing

2018-04-29 Thread Magnus Danielson
The CNT91 is really a CNT90 with some detailed improvements to reduce
time-errors, to conform with 50 ps rather than 100 ps resolution.

In the CNT90 the comparators were in the same IC, which caused
ground-bounce coupling between channels; separating them was among
the things that went in. Also, improved grounding of the front-plate as
I recall it.

The core clock is 100 MHz, giving 10 ns steps for the coarse counter. The
interpolators then have 10 bits, so while not the full range is used, they
give some 10-20 ps of actual resolution. Pendulum engineers
consider the RMS performance as they measure the beat frequency sweep
over phase states.

Cheers,
Magnus

On 04/26/2018 11:16 PM, Azelio Boriani wrote:
> If your hardware is capable of capturing up to 10 million
> timestamps per second and calculating LR "on the fly", it is not such
> simple hardware, unless you consider a 5-megagate Spartan3 simple
> hardware (maybe more is needed). Moreover: if your clock is, say, at
> most 300MHz in an FPGA, your timestamps will have a one-shot
> resolution of a few nanoseconds. Where have you found a detailed
> description of the CNT91 counting method? The only detailed
> description I have found is the CNT90 (not 91) service manual and it
> uses interpolators (page 4-13 of the service manual).
> 
> On Thu, Apr 26, 2018 at 2:45 PM, Bob kb8tq  wrote:
>> Hi
>>
>> Even with a fast counter, there are going to be questions about clock jitter 
>> and just
>> how well that last digit performs in the logic. It’s never easy to squeeze 
>> the very last
>> bit of performance out …..
>>
>> Bob
>>
>>> On Apr 26, 2018, at 3:06 AM, Azelio Boriani  
>>> wrote:
>>>
>>> Very fast time-stamping like a stable 5GHz counter? The resolution of
>>> a 200ps (one shot) interpolator can be replaced by a 5GHz
>>> time-stamping counter.
>>>
>>> On Thu, Apr 26, 2018 at 12:28 AM, Bob kb8tq  wrote:
 Hi

 Unfortunately there is no “quick and dirty” way to come up with an 
 accurate “number of digits” for a
 math intensive counter. There are a *lot* of examples of various counter 
 architectures that have specific
 weak points in what they do. One sort of signal works one way, another 
 signal works very differently.

 All that said, the data you show suggests you are in the 10 digits per 
 second range.

 Bob

> On Apr 25, 2018, at 3:01 PM, Oleg Skydan  wrote:
>
> Dear Ladies and Gentlemen,
>
> Let me tell a little story so you will be able to better understand what
> my question is and what I am doing.
>
> I needed to check frequencies in the several-GHz range from time to time. I
> do not need high absolute precision (anyway this is a reference oscillator
> problem, not a counter one), but I need a fast, high resolution instrument
> (at least 10 digits in one second). I have only a very old slow unit, so I
> constructed a frequency counter (yes, yet another frequency counter
> project :-). It is a bit unusual - I decided not to use interpolators, to
> maximally simplify the hardware, and to provide the necessary resolution by
> very fast timestamping and heavy math processing. In the current
> configuration I should get 11+ digits in one second, for input frequencies
> of more than 5MHz.
>
> But this is a theoretical number and it does not account for some factors.
> Now I have an ugly-built prototype with insanely simple hardware running
> the counter core. And I need to check how well it performs.
>
> I have already done some checks and even found and fixed some FW bugs :).
> Now it works pretty well and I enjoyed watching how one OCXO drifts
> against the other one in the mHz range. I would like to check how many
> significant digits I am getting in reality.
>
> The test setup now comprises two 5MHz OCXOs (they are very old units
> and far from perfect oscillators - the 1sec and 10sec stability is
> claimed to be 1e-10, but they are the best I have now). I measure the
> frequency of the first OCXO using the second one as the counter reference.
> The frequency counter processes data in real time and sends the
> continuous one second frequency stamps to the PC. Here are the experiment
> results - plots from Timelab. The frequency difference (the oscillators
> have been on for more than 36 hours now, but still drift against each
> other) and ADEV plots. There are three measurements and six traces - two
> for each measurement. One for the simple reciprocal frequency counting
> (with the letter R in the title) and one with the math processing (LR in
> the title). As far as I understand I am getting 10+ significant digits of
> frequency in one second and it is questionable whether I see counter noise
> or oscillator noise.
>
> I also calculated the usual standard deviation for the measurement results
> (and tried to remove the drift before the calculations); I got an STD in
> the 3e-4..4e-4Hz (or 6e-11..8e-11) range in many experiments.

Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Bob kb8tq
Hi

As you have noticed already, it is amazingly easy to get data plots with more
than the real number of digits, and with less than the real number of digits.
Only careful analysis of the underlying hardware and firmware will lead to an
accurate estimate of resolution.

This is by no means unique to what you are doing. Commercial counters very
often fall into this trap. If you hook up an SR-620 to its internal standard,
you will see a *lot* of very perfect looking digits …. they aren’t real. The
HP 5313x counters have issues with integer related inputs / reference. This
isn’t easy.

Bob

> On Apr 27, 2018, at 2:47 PM, Oleg Skydan  wrote:
> 
> Hi
> 
> From: "Bob kb8tq" 
> Sent: Friday, April 27, 2018 4:38 PM
>> Consider a case where the clocks and signals are all clean and stable:
>> 
>> Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
>> MHz and the other is 400 MHz ). The amount of information in your
>> data stream collapses. Over a 1 second period, you get a bit better than
>> 9 digits per second.  Put another way, the data set is the same regardless
>> of where you are in the 2.5 ppb “space”.
> 
> Thanks a lot for pointing me to this problem! It looks like that was the 
> reason I lost a digit. The frequency in my experiment appears to be close to 
> an exact subharmonic of the PLL-multiplied reference. It was more than 
> 2.5ppb off frequency (the difference was approx 0.3ppm), but it was still 
> close enough to degrade the resolution.
> 
> Fortunately it can be fixed in firmware using various methods and I have made 
> the necessary changes. Here are Allan deviation and frequency drift plots. 
> The first one is with the old firmware, the second one with the updated 
> firmware that accounts for the loss of information you mention.
> 
> The frequency difference plot also shows the measurement "noise" is now much 
> lower. It looks like I have got 11 significant digits now and my old OCXOs 
> are almost 10 times better than the manufacturer claims.
> 
> Thanks!
> Oleg UR3IQO


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Oleg Skydan

Hi

From: "Bob kb8tq" 
Sent: Friday, April 27, 2018 4:38 PM

Consider a case where the clocks and signals are all clean and stable:

Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
MHz and the other is 400 MHz ). The amount of information in your
data stream collapses. Over a 1 second period, you get a bit better than
9 digits per second.  Put another way, the data set is the same regardless
of where you are in the 2.5 ppb “space”.


Thanks a lot for pointing me to this problem! It looks like that was the 
reason I lost a digit. The frequency in my experiment appears to be close to 
an exact subharmonic of the PLL-multiplied reference. It was more than 
2.5ppb off frequency (the difference was approx 0.3ppm), but it was still 
close enough to degrade the resolution.


Fortunately it can be fixed in firmware using various methods and I have 
made the necessary changes. Here are Allan deviation and frequency drift 
plots. The first one is with the old firmware, the second one with the 
updated firmware that accounts for the loss of information you mention.


The frequency difference plot also shows the measurement "noise" is now much 
lower. It looks like I have got 11 significant digits now and my old OCXOs 
are almost 10 times better than the manufacturer claims.


Thanks!
Oleg UR3IQO


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Oleg Skydan

From: "Azelio Boriani" 
Sent: Friday, April 27, 2018 12:16 AM


If your hardware is capable of capturing up to 10 million
timestamps per second and calculating LR "on the fly", it is not such
simple hardware, unless you consider a 5-megagate Spartan3 simple
hardware (maybe more is needed). Moreover: if your clock is, say, at
most 300MHz in an FPGA, your timestamps will have a one-shot
resolution of a few nanoseconds.


There is no FPGA. If I used an FPGA I would probably be able to get more
resolution for one-shot measurements, because there are some simple methods
of interpolating the signal inside an FPGA (they do not require additional
hardware). They can easily increase resolution by 2..16 times. So even with a
200MHz internal FPGA clock it is possible to reach 1ns one-shot resolution or
even better.

I will show details when the project is at the finishing stage.


Where have you found a detailed
description of the CNT91 counting method? The only detailed
description I have found is the CNT90 (not 91) service manual and it
uses interpolators (page 4-13 of the service manual).


Sorry, I meant CNT-90, but I bet the CNT90/CNT91 use the same technique. You
can use an interpolator along with the math processing. This will result
in better resolution and/or you can use less memory and less processing
power. I chose not to use one, to simplify the hardware.




Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Bob kb8tq
Hi

So what’s going on here? 

With any of a number of modern (and not so modern) FPGAs you can run a clock
in the 400 MHz region. Clocking with a single edge gives you 2.5 ns
resolution. On some parts, you are not limited to a single edge. You can
clock with both the rising and falling edge of the clock. That gets you to
1.25 ns. For the brave, there is the ability to phase shift the clock and do
the trick yet one more time. That can get you to 0.625 ns. You may indeed
need to drive more than one input to get that done.

As you get more and more fancy, the chip timing gets further into your data.
A very simple analogy is the non-uniform step size you see on an ADC.
Effectively you have a number that has a +/- ?.?? sort of tolerance on it. As
before, that may not be what you expect in a frequency counter. It still does
not mean that the data is trash. You just have a source of error to contend
with.

You could also feed the data down a “wave union” style delay chain. That would 
get you into the 100ps
range with further linearity issues to contend with. There are also calibration 
issues as well as temperature
and voltage dependencies. Even the timing in the multi phase clock approach 
will have some voltage
and temperature dependency. 

Since it’s an FPGA, coming up with a lot of resources is not all that crazy
expensive. You aren’t buying gate chips and laying out a PCB. A few thousand
logic blocks is tiny by modern standards. Your counter or delay line idea
might fit in < 100 logic blocks. There’s lots of room for pipelines and I/O
this and that. The practical limit is how much you want to put into the
“pipe” that gets the data out of the FPGA.

In the end, you are still stuck with the fact that many of the various
TDC chips have higher resolution / lower cost. You also have a pretty big
gap between raw chip price and what a fully developed instrument will run.
That’s true regardless of what you base it on and how you do the design.

Bob



> On Apr 26, 2018, at 5:28 PM, Oleg Skydan  wrote:
> 
> From: "Hal Murray" 
> Sent: Thursday, April 26, 2018 10:28 PM
> 
>> Is there a term for what I think you are doing?
> 
> I saw different terms like "omega counter" or multiple time-stamp
> average counter, probably there are others too.
> 
>> If I understand (big if), you are doing the digital version of magic
>> down-conversion with an A/D.  I can't even think of the name for that.
> 
> No, it is much simpler. The hardware saves time-stamps to the memory at
> each (event) rise of the input signal (let's consider we have a digital
> logic input signal for simplicity). So after some time we have many pairs
> of {event number, time-stamp}. We can plot those pairs with event number on
> X-axis and time on Y-axis; now if we fit a line to that dataset, the
> inverse slope of the line will correspond to the estimated frequency.
> 
> The line is fitted using linear regression.
> 
> This technique improves frequency uncertainty as
> 
> 2*sqrt(3)*t_resolution / (MeasurementTime * sqrt(NumberOfEvents - 2))
> 
> So if I have 2.5ns HW time resolution, and collect 5e6 events,
> processing should result in 3.9ps resolution.
> 
> Of course this is for the ideal case. The first real-life problem is
> signal drift, for example.
> 
> Hope I was able to explain what I am doing.
> 
> BTW, I have fixed a little bug in firmware and now ADEV looks a bit better.
> Probably I should look for better OCXOs. Interesting thing - the counter
> processed 300GB of time-stamps data during that 8+hour run :).
> 
> All the best!
> Oleg 
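
A minimal numpy sketch of the scheme as described above (illustrative only;
assumed numbers: 2.5 ns resolution, a ~5 MHz input deliberately offset from
any integer relation to the clock, ideal drift-free edges). It fits the
line, inverts the slope, and evaluates the quoted uncertainty formula.

import numpy as np

t_res = 2.5e-9                 # one-shot timestamp resolution (400 MHz clock)
f_in  = 5e6 + 1.23             # assumed input, off any integer relation
T     = 1.0                    # gate time, s
N     = int(f_in * T)          # one timestamp per rising edge, ~5e6 events

n  = np.arange(N, dtype=np.float64)            # event numbers
ts = np.floor((n / f_in) / t_res) * t_res      # edge times, quantized to 2.5 ns

slope, _ = np.polyfit(n, ts, 1)                # LR: time vs event number
f_est = 1.0 / slope                            # inverse slope = frequency

sigma = 2 * np.sqrt(3) * t_res / (T * np.sqrt(N - 2))   # the quoted formula
print("fractional error:", abs(f_est / f_in - 1))  # of the order of sigma
print("predicted 1-sigma fractional resolution:", sigma)   # ~3.9e-12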


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Bob kb8tq
Hi

Since you are using averaging to get more bits (much like a CIC ) the idea that 
you
need noise to make it happen is actually pretty common. There are app notes 
coming at it from various directions on ADCs and SDR going *way* back (like to
when I was in school …. yikes ….). 

What is a bit odd is wrapping your head around it being needed in this case. 
It *is* sort of a 1 bit A/D. There is a very tenuous connection. 

Bottom line is still that you are doing signal processing. Since that is what 
is going 
on, you very much need to get into the grubby details to work out what the 
limitations
(and benefits) will be. It doesn’t *look* like a radio, but it has a lot of 
SDR-like issues.

Bob

> On Apr 27, 2018, at 11:58 AM, Tom Van Baak <t...@leapsecond.com> wrote:
> 
> Azelio, the problem with that approach is that the more stable and accurate 
> your DUT & REF sources the less likely there will be transitions, even during 
> millions of samples over one second.
> 
> A solution is to dither the clock, which is something many old hp frequency 
> counters did. In other words, you deliberately introduce well-designed noise 
> so that you cross clock edge transitions as *much* as possible. It seems 
> counter-intuitive that adding noise can vastly improve your measurement, but 
> in the case of oversampling counters like this, it does.
> 
> /tvb
> 
> - Original Message - 
> From: "Azelio Boriani" <azelio.bori...@gmail.com>
> To: "Discussion of precise time and frequency measurement" 
> <time-nuts@febo.com>
> Sent: Friday, April 27, 2018 7:39 AM
> Subject: Re: [time-nuts] Question about frequency counter testing
> 
> 
> You can measure your clocks down to the ps averaged resolution you
> want only if they are worse than your one-shot base resolution, one WRT
> the other. In a reasonable time, that is how many transitions in your
> 2.5ns sampling interval you have in 1 second to have an n-digit/second
> counter.
> 
> On Fri, Apr 27, 2018 at 4:32 PM, Azelio Boriani
> <azelio.bori...@gmail.com> wrote:
>> Yes, this is the problem when trying to enhance the resolution from a
>> low one-shot resolution. Averaging 2.5ns resolution samples can give
>> data only if clocks move one with respect to the other and "cross the
>> boundary" of the 2.5ns sampling interval. You can measure your clocks
>> down to the ps averaged resolution you want only if they are worse
>> than your one-shot base resolution, one WRT the other.
>> 
>> On Fri, Apr 27, 2018 at 3:38 PM, Bob kb8tq <kb...@n1k.org> wrote:
>>> Hi
>>> 
>>> Consider a case where the clocks and signals are all clean and stable:
>>> 
>>> Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
>>> MHz and the other is 400 MHz ). The amount of information in your
>>> data stream collapses. Over a 1 second period, you get a bit better than
>>> 9 digits per second.  Put another way, the data set is the same regardless
>>> of where you are in the 2.5 ppb “space”.
>>> 
>>> Bob
>>> 
>>> 
>>> 
>>>> On Apr 27, 2018, at 5:30 AM, Hal Murray <hmur...@megapathdsl.net> wrote:
>>>> 
>>>> 
>>>> olegsky...@gmail.com said:
>>>>> No, it is much simpler. The hardware saves time-stamps to the memory at 
>>>>> each
>>>>> (event) rise of the input signal (let's consider we have digital logic 
>>>>> input
>>>>> signal for simplicity). So after some time we have many pairs of {event
>>>>> number, time-stamp}. We can plot those pairs with event number on X-axis 
>>>>> and
>>>>> time on Y-axis; now if we fit a line to that dataset, the inverse slope
>>>>> of the line will correspond to the estimated frequency.
>>>> 
>>>> I like it.  Thanks.
>>>> 
>>>> If you flip the X-Y axis, then you don't have to invert the slope.
>>>> 
>>>> That might be an interesting way to analyze TICC data.  It would work
>>>> better/faster if you used a custom divider to trigger the TICC as fast as 
>>>> it
>>>> can print rather than using the typical PPS.
>>>> 
>>>> --
>>>> 
>>>> Another way to look at things is that you have a fast 1 bit A/D.
>>>> 
>>>> If you need results in a second, FFTing that might fit into memory.  (Or 
>>>> you
>>>> could rent a big-memory cloud server.  A quick sample found 128GB for
>>>> $1/hour.)  That's with 1 second of data.

Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Tom Van Baak
Azelio, the problem with that approach is that the more stable and accurate 
your DUT & REF sources the less likely there will be transitions, even during 
millions of samples over one second.

A solution is to dither the clock, which is something many old hp frequency 
counters did. In other words, you deliberately introduce well-designed noise so 
that you cross clock edge transitions as *much* as possible. It seems 
counter-intuitive that adding noise can vastly improve your measurement, but in 
the case of oversampling counters like this, it does.

/tvb
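
A quick numeric illustration of the dither effect (my own sketch, with
arbitrary numbers, not from the post): a constant interval quantized to
2.5 ns codes averages to the wrong value without noise, and to nearly the
true value once noise of about one code is added.

import numpy as np

rng  = np.random.default_rng(0)
q    = 2.5e-9            # one-shot quantization step, s
true = 1.4e-9            # constant interval to be measured, s
n    = 100_000           # readings averaged

quantize = lambda t: np.round(t / q) * q

print(quantize(np.full(n, true)).mean())           # 2.5e-9: stuck on one code,
                                                   # averaging gains nothing
print(quantize(true + rng.normal(0, q, n)).mean()) # ~1.4e-9: dither spreads
                                                   # readings across codes and
                                                   # the mean converges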

- Original Message - 
From: "Azelio Boriani" <azelio.bori...@gmail.com>
To: "Discussion of precise time and frequency measurement" <time-nuts@febo.com>
Sent: Friday, April 27, 2018 7:39 AM
Subject: Re: [time-nuts] Question about frequency counter testing


You can measure your clocks down to the ps averaged resolution you
want only if they are worse than your one-shot base resolution, one WRT
the other. In a reasonable time, that is how many transitions in your
2.5ns sampling interval you have in 1 second to have an n-digit/second
counter.

On Fri, Apr 27, 2018 at 4:32 PM, Azelio Boriani
<azelio.bori...@gmail.com> wrote:
> Yes, this is the problem when trying to enhance the resolution from a
> low one-shot resolution. Averaging 2.5ns resolution samples can give
> data only if clocks move one with respect to the other and "cross the
> boundary" of the 2.5ns sampling interval. You can measure your clocks
> down to the ps averaged resolution you want only if they are worse
> than your one-shot base resolution, one WRT the other.
>
> On Fri, Apr 27, 2018 at 3:38 PM, Bob kb8tq <kb...@n1k.org> wrote:
>> Hi
>>
>> Consider a case where the clocks and signals are all clean and stable:
>>
>> Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
>> MHz and the other is 400 MHz ). The amount of information in your
>> data stream collapses. Over a 1 second period, you get a bit better than
>> 9 digits per second.  Put another way, the data set is the same regardless
>> of where you are in the 2.5 ppb “space”.
>>
>> Bob
>>
>>
>>
>>> On Apr 27, 2018, at 5:30 AM, Hal Murray <hmur...@megapathdsl.net> wrote:
>>>
>>>
>>> olegsky...@gmail.com said:
>>>> No, it is much simpler. The hardware saves time-stamps to the memory at 
>>>> each
>>>> (event) rise of the input signal (let's consider we have digital logic 
>>>> input
>>>> signal for simplicity). So after some time we have many pairs of {event
>>>> number, time-stamp}. We can plot those pairs with event number on X-axis 
>>>> and
>>>> time on Y-axis; now if we fit a line to that dataset, the inverse slope of
>>>> the line will correspond to the estimated frequency.
>>>
>>> I like it.  Thanks.
>>>
>>> If you flip the X-Y axis, then you don't have to invert the slope.
>>>
>>> That might be an interesting way to analyze TICC data.  It would work
>>> better/faster if you used a custom divider to trigger the TICC as fast as it
>>> can print rather than using the typical PPS.
>>>
>>> --
>>>
>>> Another way to look at things is that you have a fast 1 bit A/D.
>>>
>>> If you need results in a second, FFTing that might fit into memory.  (Or you
>>> could rent a big-memory cloud server.  A quick sample found 128GB for
>>> $1/hour.)  That's with 1 second of data.  I don't know how long it would 
>>> take
>>> to process.
>>>
>>> What's the clock frequency?  Handwave.  At 1 GHz, 1 second of samples fits
>>> into a 4 byte integer even if all the energy ends up in one bin.  4 bytes, 
>>> *2
>>> for complex, *2 for input and output is 16 GB.
>>>
>>>
>>> --
>>> These are my opinions.  I hate spam.
>>>
>>>
>>>


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Azelio Boriani
You can measure your clocks down to the ps averaged resolution you
want only if they are worse than your one-shot base resolution, one WRT
the other. In a reasonable time, that is how many transitions in your
2.5ns sampling interval you have in 1 second to have an n-digit/second
counter.

On Fri, Apr 27, 2018 at 4:32 PM, Azelio Boriani
 wrote:
> Yes, this is the problem when trying to enhance the resolution from a
> low one-shot resolution. Averaging 2.5ns resolution samples can give
> data only if clocks move one with respect to the other and "cross the
> boundary" of the 2.5ns sampling interval. You can measure your clocks
> down to the ps averaged resolution you want only if they are worse
> than your one-shot base resolution, one WRT the other.
>
> On Fri, Apr 27, 2018 at 3:38 PM, Bob kb8tq  wrote:
>> Hi
>>
>> Consider a case where the clocks and signals are all clean and stable:
>>
>> Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
>> MHz and the other is 400 MHz ). The amount of information in your
>> data stream collapses. Over a 1 second period, you get a bit better than
>> 9 digits per second.  Put another way, the data set is the same regardless
>> of where you are in the 2.5 ppb “space”.
>>
>> Bob
>>
>>
>>
>>> On Apr 27, 2018, at 5:30 AM, Hal Murray  wrote:
>>>
>>>
>>> olegsky...@gmail.com said:
 No, it is much simpler. The hardware saves time-stamps to the memory at 
 each
 (event) rise of the input signal (let's consider we have digital logic 
 input
 signal for simplicity). So after some time we have many pairs of {event
 number, time-stamp}. We can plot those pairs with event number on X-axis 
 and
 time on Y-axis; now if we fit a line to that dataset, the inverse slope of
 the line will correspond to the estimated frequency.
>>>
>>> I like it.  Thanks.
>>>
>>> If you flip the X-Y axis, then you don't have to invert the slope.
>>>
>>> That might be an interesting way to analyze TICC data.  It would work
>>> better/faster if you used a custom divider to trigger the TICC as fast as it
>>> can print rather than using the typical PPS.
>>>
>>> --
>>>
>>> Another way to look at things is that you have a fast 1 bit A/D.
>>>
>>> If you need results in a second, FFTing that might fit into memory.  (Or you
>>> could rent a big-memory cloud server.  A quick sample found 128GB for
>>> $1/hour.)  That's with 1 second of data.  I don't know how long it would 
>>> take
>>> to process.
>>>
>>> What's the clock frequency?  Handwave.  At 1 GHz, 1 second of samples fits
>>> into a 4 byte integer even if all the energy ends up in one bin.  4 bytes, 
>>> *2
>>> for complex, *2 for input and output is 16 GB.
>>>
>>>
>>> --
>>> These are my opinions.  I hate spam.
>>>
>>>
>>>


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Azelio Boriani
Yes, this is the problem when trying to enhance the resolution from a
low one-shot resolution. Averaging 2.5ns resolution samples can give
data only if clocks move one with respect to the other and "cross the
boundary" of the 2.5ns sampling interval. You can measure your clocks
down to the ps averaged resolution you want only if they are worse
than your one-shot base resolution, one WRT the other.

On Fri, Apr 27, 2018 at 3:38 PM, Bob kb8tq  wrote:
> Hi
>
> Consider a case where the clocks and signals are all clean and stable:
>
> Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10
> MHz and the other is 400 MHz ). The amount of information in your
> data stream collapses. Over a 1 second period, you get a bit better than
> 9 digits per second.  Put another way, the data set is the same regardless
> of where you are in the 2.5 ppb “space”.
>
> Bob
>
>
>
>> On Apr 27, 2018, at 5:30 AM, Hal Murray  wrote:
>>
>>
>> olegsky...@gmail.com said:
>>> No, it is much simpler. The hardware saves time-stamps to the memory at each
>>> (event) rise of the input signal (let's consider we have digital logic input
>>> signal for simplicity). So after some time we have many pairs of {event
>>> number, time-stamp}. We can plot those pairs with event number on X-axis and
>>> time on Y-axis; now if we fit a line to that dataset, the inverse slope of
>>> the line will correspond to the estimated frequency.
>>
>> I like it.  Thanks.
>>
>> If you flip the X-Y axis, then you don't have to invert the slope.
>>
>> That might be an interesting way to analyze TICC data.  It would work
>> better/faster if you used a custom divider to trigger the TICC as fast as it
>> can print rather than using the typical PPS.
>>
>> --
>>
>> Another way to look at things is that you have a fast 1 bit A/D.
>>
>> If you need results in a second, FFTing that might fit into memory.  (Or you
>> could rent a big-memory cloud server.  A quick sample found 128GB for
>> $1/hour.)  That's with 1 second of data.  I don't know how long it would take
>> to process.
>>
>> What's the clock frequency?  Handwave.  At 1 GHz, 1 second of samples fits
>> into a 4 byte integer even if all the energy ends up in one bin.  4 bytes, *2
>> for complex, *2 for input and output is 16 GB.
>>
>>
>> --
>> These are my opinions.  I hate spam.
>>
>>
>>


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Tom Van Baak
> That might be an interesting way to analyze TICC data.  It would work 
> better/faster if you used a custom divider to trigger the TICC as fast as it 
> can print rather than using the typical PPS.

Hi Hal,

Exactly correct. For more details see this posting:
https://www.febo.com/pipermail/time-nuts/2014-December/089787.html

That's one reason for the 1/10/100/1000 Hz PIC divider chips -- to make 
measurements at 100PPS instead of 1PPS.

JohnA could have designed the TAPR/TICC to be a traditional two-input A->B Time 
Interval Counter (TIC) like any counter you see from hp. But instead, with the 
same hardware, he implemented it as a Time Stamping Counter (TSC) pair. This 
gives you A->B as time interval if you want, but it also gives you REF->A and 
REF->B as time stamps as well.

You can operate the two channels separately if you want, that is, two 
completely different DUT measurements at the same time, as if you had two TIC's 
for the price of one. Or you can run them synchronized so that you are 
effectively making three simultaneous measurements: DUTa vs. REF, DUTb vs. 
REF, and DUTa vs. DUTb.

This paper is a must read:

Modern frequency counting principles
http://www.npl.co.uk/upload/pdf/20060209_t-f_johansson_1.pdf

See also:

New frequency counting principle improves resolution
http://tycho.usno.navy.mil/ptti/2005papers/paper67.pdf

Continuous Measurements with Zero Dead-Time in CNT-91
http://www.testmart.com/webdata/mfr_promo/whitepaper_cnt91.pdf

Time & Frequency Measurements for Oscillator Manufacturers using CNT-91
http://www.testmart.com/webdata/mfr_promo/whitepaper_osc%20manu_cnt91.pdf

Some comments and links to HP's early time stamping chip:
https://www.febo.com/pipermail/time-nuts/2017-November/107528.html

/tvb
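
A toy sketch of why time stamping gives both modes for free (values
invented purely for illustration): with per-channel timestamps against a
common reference timescale, the A->B time interval is just a subtraction.

ref_a = [0.000001002, 1.000001005, 2.000001001]   # REF->A timestamps, s
ref_b = [0.000004998, 1.000005002, 2.000005003]   # REF->B timestamps, s

a_to_b = [tb - ta for ta, tb in zip(ref_a, ref_b)]
print(a_to_b)   # ~3.996e-06, 3.997e-06, 4.002e-06 -- TI mode from the same data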



Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Bob kb8tq
Hi

Consider a case where the clocks and signals are all clean and stable:

Both are within 2.5 ppb of an integer relationship. ( let’s say one is 10 
MHz and the other is 400 MHz ). The amount of information in your 
data stream collapses. Over a 1 second period, you get a bit better than 
9 digits per second.  Put another way, the data set is the same regardless 
of where you are in the 2.5 ppb “space”. 

Bob
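
A minimal numpy sketch of this collapse (my own illustration, assuming a
400 MHz / 2.5 ns timestamp clock): two input frequencies 1 ppb apart, both
near the integer relation, produce bit-identical timestamp sets over a
0.1 s capture, so no post-processing can separate them.

import numpy as np

q = 2.5e-9                         # 2.5 ns timestamp codes (400 MHz clock)
N = 1_000_000                      # edges captured in 0.1 s at 10 MHz
n = np.arange(N, dtype=np.float64)

def codes(f_in, phase0=1.0e-9):
    return np.floor((phase0 + n / f_in) / q)   # integer timestamp codes

s1 = codes(10e6)                   # exactly 10 MHz: every period = 40 codes
s2 = codes(10e6 * (1 + 1e-9))      # 1 ppb away from the integer relation

print(np.array_equal(s1, s2))      # True: within the collapse band both
                                   # frequencies yield the very same data set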



> On Apr 27, 2018, at 5:30 AM, Hal Murray  wrote:
> 
> 
> olegsky...@gmail.com said:
>> No, it is much simpler. The hardware saves time-stamps to the memory at each
>> (event) rise of the input signal (let's consider we have digital logic input
>> signal for simplicity). So after some time we have many pairs of {event
>> number, time-stamp}. We can plot those pairs with event number on X-axis and
>> time on Y-axis; now if we fit a line to that dataset, the inverse slope of
>> the line will correspond to the estimated frequency.
> 
> I like it.  Thanks.
> 
> If you flip the X-Y axis, then you don't have to invert the slope.
> 
> That might be an interesting way to analyze TICC data.  It would work 
> better/faster if you used a custom divider to trigger the TICC as fast as it 
> can print rather than using the typical PPS.
> 
> --
> 
> Another way to look at things is that you have a fast 1 bit A/D.
> 
> If you need results in a second, FFTing that might fit into memory.  (Or you 
> could rent a big-memory cloud server.  A quick sample found 128GB for 
> $1/hour.)  That's with 1 second of data.  I don't know how long it would take 
> to process.
> 
> What's the clock frequency?  Handwave.  At 1 GHz, 1 second of samples fits 
> into a 4 byte integer even if all the energy ends up in one bin.  4 bytes, *2 
> for complex, *2 for input and output is 16 GB.
> 
> 
> -- 
> These are my opinions.  I hate spam.
> 
> 
> 


Re: [time-nuts] Question about frequency counter testing

2018-04-27 Thread Hal Murray

olegsky...@gmail.com said:
> No, it is much simpler. The hardware saves time-stamps to the memory at each
> (event) rise of the input signal (let's consider we have digital logic input
> signal for simplicity). So after some time we have many pairs of {event
> number, time-stamp}. We can plot those pairs with event number on X-axis and
> time on Y-axis; now if we fit a line to that dataset, the inverse slope of
> the line will correspond to the estimated frequency.

I like it.  Thanks.

If you flip the X-Y axis, then you don't have to invert the slope.

That might be an interesting way to analyze TICC data.  It would work 
better/faster if you used a custom divider to trigger the TICC as fast as it 
can print rather than using the typical PPS.

--

Another way to look at things is that you have a fast 1 bit A/D.

If you need results in a second, FFTing that might fit into memory.  (Or you 
could rent a big-memory cloud server.  A quick sample found 128GB for 
$1/hour.)  That's with 1 second of data.  I don't know how long it would take 
to process.

What's the clock frequency?  Handwave.  At 1 GHz, 1 second of samples fits 
into a 4 byte integer even if all the energy ends up in one bin.  4 bytes, *2 
for complex, *2 for input and output is 16 GB.
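
The arithmetic checks out; as a one-liner (using the handwave numbers above):

fs        = 1e9              # assumed 1 GHz sample clock
bytes_per = 4 * 2 * 2        # 4-byte ints, x2 for complex, x2 for input+output

print(fs * 1 * bytes_per / 1e9, "GB")    # 16.0 -- matches the estimate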


-- 
These are my opinions.  I hate spam.





Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Bob kb8tq
Hi

The degree to which your samples converge to a specific value while being
averaged depends on a bunch of things. The noise processes on the clock and
the measured signal are pretty hard to avoid. It is *very* easy to
overestimate how fast things converge.

Bob

> On Apr 26, 2018, at 5:28 PM, Oleg Skydan  wrote:
> 
> From: "Hal Murray" 
> Sent: Thursday, April 26, 2018 10:28 PM
> 
>> Is there a term for what I think you are doing?
> 
> I saw different terms like "omega counter" or multiple time-stamp
> average counter, probably there are others too.
> 
>> If I understand (big if), you are doing the digital version of magic
>> down-conversion with an A/D.  I can't even think of the name for that.
> 
> No, it is much simpler. The hardware saves time-stamps to the memory at
> each (event) rise of the input signal (let's consider we have a digital
> logic input signal for simplicity). So after some time we have many pairs
> of {event number, time-stamp}. We can plot those pairs with event number on
> X-axis and time on Y-axis; now if we fit a line to that dataset, the
> inverse slope of the line will correspond to the estimated frequency.
> 
> The line is fitted using linear regression.
> 
> This technique improves frequency uncertainty as
> 
> 2*sqrt(3)*t_resolution / (MeasurementTime * sqrt(NumberOfEvents - 2))
> 
> So if I have 2.5ns HW time resolution, and collect 5e6 events,
> processing should result in 3.9ps resolution.
> 
> Of course this is for the ideal case. The first real-life problem is
> signal drift, for example.
> 
> Hope I was able to explain what I am doing.
> 
> BTW, I have fixed a little bug in firmware and now ADEV looks a bit better.
> Probably I should look for better OCXOs. Interesting thing - the counter
> processed 300GB of time-stamps data during that 8+hour run :).
> 
> All the best!
> Oleg 


Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Azelio Boriani
If your hardware is capable of capturing up to 10 million
timestamps per second and calculating LR "on the fly", it is not such
simple hardware, unless you consider a 5-megagate Spartan3 simple
hardware (maybe more is needed). Moreover: if your clock is, say, at
most 300MHz in an FPGA, your timestamps will have a one-shot
resolution of a few nanoseconds. Where have you found a detailed
description of the CNT91 counting method? The only detailed
description I have found is the CNT90 (not 91) service manual and it
uses interpolators (page 4-13 of the service manual).

On Thu, Apr 26, 2018 at 2:45 PM, Bob kb8tq  wrote:
> Hi
>
> Even with a fast counter, there are going to be questions about clock jitter 
> and just
> how well that last digit performs in the logic. It’s never easy to squeeze 
> the very last
> bit of performance out …..
>
> Bob
>
>> On Apr 26, 2018, at 3:06 AM, Azelio Boriani  wrote:
>>
>> Very fast time-stamping like a stable 5GHz counter? The resolution of
>> a 200ps (one shot) interpolator can be replaced by a 5GHz
>> time-stamping counter.
>>
>> On Thu, Apr 26, 2018 at 12:28 AM, Bob kb8tq  wrote:
>>> Hi
>>>
>>> Unfortunately there is no “quick and dirty” way to come up with an accurate 
>>> “number of digits” for a
>>> math intensive counter. There are a *lot* of examples of various counter 
>>> architectures that have specific
>>> weak points in what they do. One sort of signal works one way, another 
>>> signal works very differently.
>>>
>>> All that said, the data you show suggests you are in the 10 digits per 
>>> second range.
>>>
>>> Bob
>>>
>>>> On Apr 25, 2018, at 3:01 PM, Oleg Skydan  wrote:
>>>>
>>>> Dear Ladies and Gentlemen,
>>>>
>>>> Let me tell a little story so you will be able to better understand
>>>> what my question is and what I am doing.
>>>>
>>>> I needed to check frequencies in the several-GHz range from time to
>>>> time. I do not need high absolute precision (anyway, that is a
>>>> reference oscillator problem, not a counter problem), but I need a
>>>> fast, high-resolution instrument (at least 10 digits in one second).
>>>> I have only a very old, slow unit, so I constructed a frequency
>>>> counter (yes, yet another frequency counter project :-). It is a bit
>>>> unusual - I decided not to use interpolators, to simplify the
>>>> hardware as much as possible, and to provide the necessary resolution
>>>> by very fast timestamping and heavy math processing. In the current
>>>> configuration I should get 11+ digits in one second for input
>>>> frequencies above 5MHz.
>>>>
>>>> But this is a theoretical number and it does not account for some
>>>> factors. Now I have an ugly-built prototype with insanely simple
>>>> hardware running the counter core. And I need to check how well it
>>>> performs.
>>>>
>>>> I have already done some checks and even found and fixed some FW
>>>> bugs :). Now it works pretty well and I enjoyed watching how one OCXO
>>>> drifts against the other in the mHz range. I would like to check how
>>>> many significant digits I am getting in reality.
>>>>
>>>> The test setup now comprises two 5MHz OCXOs (they are very old units
>>>> and far from perfect oscillators - the 1sec and 10sec stability is
>>>> claimed to be 1e-10, but they are the best I have now). I measure the
>>>> frequency of the first OCXO using the second one as the counter
>>>> reference. The frequency counter processes the data in real time and
>>>> sends continuous one-second frequency stamps to the PC. Here are the
>>>> experiment results - plots from TimeLab: the frequency difference
>>>> (the oscillators have been on for more than 36 hours now, but still
>>>> drift against each other) and the ADEV plots. There are three
>>>> measurements and six traces - two for each measurement: one for
>>>> simple reciprocal frequency counting (with the letter R in the title)
>>>> and one with the math processing (LR in the title). As far as I
>>>> understand, I am getting 10+ significant digits of frequency in one
>>>> second, and it is questionable whether I see the counter noise or the
>>>> oscillators'.
>>>>
>>>> I also calculated the usual standard deviation of the measurement
>>>> results (and tried to remove the drift before the calculations); I
>>>> got an STD in the 3e-4..4e-4Hz (or 6e-11..8e-11) range in many
>>>> experiments.
>>>>
>>>> Now the questions:
>>>> 1. Are there any testing methods that would allow me to determine
>>>> whether I see oscillator noise or the counter does not perform in
>>>> accordance with the theory (11+ digits)? I know this can be done with
>>>> a better OCXO, but currently I cannot get better ones.
>>>> 2. Is my interpretation of the ADEV value at tau=1sec (that I have
>>>> 10+ significant digits) right?
>>>>
>>>> As far as I understand the situation, I need better OCXOs to check
>>>> whether the HW/SW really can do 11+ significant digits of frequency
>>>> measurement in one second.
>>>>
>>>> Your comments are greatly appreciated!

Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Oleg Skydan

From: "Hal Murray" 
Sent: Thursday, April 26, 2018 10:28 PM


> Is there a term for what I think you are doing?


I saw different terms like "omega counter" or "multiple time-stamp
average counter"; probably there are others too.


> If I understand (big if), you are doing the digital version of magic
> down-conversion with an A/D.  I can't even think of the name for that.


No, it is much simpler. The hardware saves time-stamps to the memory at
each rise (event) of the input signal (let's assume a digital logic
input signal for simplicity). So after some time we have many pairs of
{event number, time-stamp}. We can plot those pairs with the event number
on the X-axis and time on the Y-axis; if we now fit a line to that dataset,
the inverse slope of the line will correspond to the estimated frequency.

The line is fitted using linear regression.

This technique improves the frequency uncertainty as

2*sqrt(3)*t_resolution/(MeasurementTime * sqrt(NumberOfEvents-2))

So if I have 2.5ns HW time resolution and collect 5e6 events, the
processing should result in 3.9ps equivalent resolution.

Of course, this is for the ideal case. The first real-life problem is
signal drift, for example.

Hope I was able to explain what I am doing.

BTW, I have fixed a little bug in the firmware and now the ADEV looks a
bit better. Probably I should look for better OCXOs. Interesting thing -
the counter processed 300GB of time-stamp data during that 8+ hour run :).

All the best!
Oleg 

Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Hal Murray

olegsky...@gmail.com said:
> The plots I showed were made with approx. 5*10^6 timestamps per second, so
> theoretically I should get approx. 4ps equivalent resolution (or 11+
> significant digits in one second).

Is there a term for what I think you are doing?

If I understand (big if), you are doing the digital version of magic 
down-conversion with an A/D.  I can't even think of the name for that.

If I have a bunch of digital samples and count the transitions, I can
compute a frequency.  But I would get the same results if the input
frequency were X plus the sampling frequency.  Or 2X.  ...  The digital
stream is the beat between the input and the sampling frequency.

That technique depends on having a low jitter clock.  There should be some 
good math in there, but I don't see it.
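
The beat is easy to demonstrate with a small numpy sketch (a toy example
of mine, not anyone's actual hardware): hard-limit a sine into 1-bit
samples and count transitions. Inputs separated by multiples of the
sampling rate produce the very same bit stream, and an input just below
the sampling rate aliases to the same beat:

import numpy as np

fs = 100.0                     # sampling rate, arbitrary units
t = np.arange(0, 200, 1 / fs)  # sample instants

def apparent_freq(f_in):
    """Sample the input as 1-bit data and count transitions."""
    bits = np.sin(2 * np.pi * f_in * t + 0.1) > 0          # hard limiter
    transitions = np.count_nonzero(bits[1:] != bits[:-1])  # edges in stream
    return transitions / (2 * t[-1])                       # 2 edges/cycle

# 3, 3+fs, 3+2*fs and fs-3 all yield an apparent frequency of 3:
for f in (3.0, 103.0, 203.0, 97.0):
    print(f"input {f:6.1f} -> apparent {apparent_freq(f):.2f}")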

A related trick is getting the time from something that ticks slowly, like 
the RTC/CMOS clocks on PCs.   They only tick once per second, but you can get 
the time with (much) higher resolution if you poll until it ticks.
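
A toy Python sketch of that trick, using int(time.time()) as a stand-in
for a once-per-second RTC register (the busy-wait poll is acceptable only
for a demo):

import time

def fine_time_of_next_tick():
    """Spin until the 1 Hz 'register' changes, then read a fine timer."""
    last = int(time.time())      # coarse once-per-second reading
    while True:
        now = int(time.time())
        if now != last:          # the slow clock just ticked...
            return now, time.perf_counter()  # ...caught at poll-loop res

s0, t0 = fine_time_of_next_tick()
s1, t1 = fine_time_of_next_tick()
print(f"{s1 - s0} tick(s) spanned {t1 - t0:.6f} s on the fine timer")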

Don't forget about metastability.


-- 
These are my opinions.  I hate spam.





Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Bob kb8tq
Hi

Even with a fast counter, there are going to be questions about clock jitter
and just how well that last digit performs in the logic. It’s never easy to
squeeze the very last bit of performance out …..

Bob

> On Apr 26, 2018, at 3:06 AM, Azelio Boriani  wrote:
> 
> Very fast time-stamping like a stable 5GHz counter? The resolution of
> a 200ps (one shot) interpolator can be replaced by a 5GHz
> time-stamping counter.
> 
> On Thu, Apr 26, 2018 at 12:28 AM, Bob kb8tq  wrote:
>> Hi
>> 
>> Unfortunately there is no “quick and dirty” way to come up with an
>> accurate “number of digits” for a math intensive counter. There are a
>> *lot* of examples of various counter architectures that have specific
>> weak points in what they do. One sort of signal works one way, another
>> signal works very differently.
>>
>> All that said, the data you show suggests you are in the 10 digits per
>> second range.
>> 
>> Bob
>> 
>>> On Apr 25, 2018, at 3:01 PM, Oleg Skydan  wrote:
>>>
>>> Dear Ladies and Gentlemen,
>>>
>>> Let me tell a little story so you will be able to better understand
>>> what my question is and what I am doing.
>>>
>>> I needed to check frequencies in the several-GHz range from time to
>>> time. I do not need high absolute precision (anyway, that is a
>>> reference oscillator problem, not a counter problem), but I need a
>>> fast, high-resolution instrument (at least 10 digits in one second). I
>>> have only a very old, slow unit, so I constructed a frequency counter
>>> (yes, yet another frequency counter project :-). It is a bit unusual -
>>> I decided not to use interpolators, to simplify the hardware as much
>>> as possible, and to provide the necessary resolution by very fast
>>> timestamping and heavy math processing. In the current configuration I
>>> should get 11+ digits in one second for input frequencies above 5MHz.
>>>
>>> But this is a theoretical number and it does not account for some
>>> factors. Now I have an ugly-built prototype with insanely simple
>>> hardware running the counter core. And I need to check how well it
>>> performs.
>>>
>>> I have already done some checks and even found and fixed some FW bugs
>>> :). Now it works pretty well and I enjoyed watching how one OCXO
>>> drifts against the other in the mHz range. I would like to check how
>>> many significant digits I am getting in reality.
>>>
>>> The test setup now comprises two 5MHz OCXOs (they are very old units
>>> and far from perfect oscillators - the 1sec and 10sec stability is
>>> claimed to be 1e-10, but they are the best I have now). I measure the
>>> frequency of the first OCXO using the second one as the counter
>>> reference. The frequency counter processes the data in real time and
>>> sends continuous one-second frequency stamps to the PC. Here are the
>>> experiment results - plots from TimeLab: the frequency difference (the
>>> oscillators have been on for more than 36 hours now, but still drift
>>> against each other) and the ADEV plots. There are three measurements
>>> and six traces - two for each measurement: one for simple reciprocal
>>> frequency counting (with the letter R in the title) and one with the
>>> math processing (LR in the title). As far as I understand, I am
>>> getting 10+ significant digits of frequency in one second, and it is
>>> questionable whether I see the counter noise or the oscillators'.
>>>
>>> I also calculated the usual standard deviation of the measurement
>>> results (and tried to remove the drift before the calculations); I got
>>> an STD in the 3e-4..4e-4Hz (or 6e-11..8e-11) range in many
>>> experiments.
>>>
>>> Now the questions:
>>> 1. Are there any testing methods that would allow me to determine
>>> whether I see oscillator noise or the counter does not perform in
>>> accordance with the theory (11+ digits)? I know this can be done with
>>> a better OCXO, but currently I cannot get better ones.
>>> 2. Is my interpretation of the ADEV value at tau=1sec (that I have 10+
>>> significant digits) right?
>>>
>>> As far as I understand the situation, I need better OCXOs to check
>>> whether the HW/SW really can do 11+ significant digits of frequency
>>> measurement in one second.
>>>
>>> Your comments are greatly appreciated!
>>>
>>> P.S. If I feed the counter reference to its input, I get 13 absolutely
>>> stable and correct digits and can get more, but this test method is
>>> not very useful for the used counter architecture.
>>>
>>> Thanks!
>>> Oleg
>>> 73 de UR3IQO
>>> <1124.png><1127.png>

Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Oleg Skydan

From: "Azelio Boriani" 
Sent: Thursday, April 26, 2018 10:06 AM

> Very fast time-stamping like a stable 5GHz counter?

No, it is not a 5GHz counter. It does the trick I first saw in the CNT91
counters. The hardware is capable of capturing up to 10 million
timestamps per second and calculating the LR "on the fly".


The plots I showed were made with approx. 5*10^6 timestamps per second,
so theoretically I should get approx. 4ps equivalent resolution (or 11+
significant digits in one second).



> The resolution of a 200ps (one shot) interpolator can be replaced by a
> 5GHz time-stamping counter.

I am not interested in measuring the timing of a single event, and I did
not try to make a full-featured timer-counter-analyser. It is just a
high-resolution RF frequency counter with very simple, all-digital
hardware.


Oleg 


Re: [time-nuts] Question about frequency counter testing

2018-04-26 Thread Azelio Boriani
Very fast time-stamping like a stable 5GHz counter? The resolution of
a 200ps (one shot) interpolator can be replaced by a 5GHz
time-stamping counter.

On Thu, Apr 26, 2018 at 12:28 AM, Bob kb8tq  wrote:
> Hi
>
> Unfortunately there is no “quick and dirty” way to come up with an
> accurate “number of digits” for a math intensive counter. There are a
> *lot* of examples of various counter architectures that have specific
> weak points in what they do. One sort of signal works one way, another
> signal works very differently.
>
> All that said, the data you show suggests you are in the 10 digits per
> second range.
>
> Bob
>
>> On Apr 25, 2018, at 3:01 PM, Oleg Skydan  wrote:
>>
>> Dear Ladies and Gentlemen,
>>
>> Let me tell a little story so you will be able to better understand
>> what my question is and what I am doing.
>>
>> I needed to check frequencies in the several-GHz range from time to
>> time. I do not need high absolute precision (anyway, that is a
>> reference oscillator problem, not a counter problem), but I need a
>> fast, high-resolution instrument (at least 10 digits in one second). I
>> have only a very old, slow unit, so I constructed a frequency counter
>> (yes, yet another frequency counter project :-). It is a bit unusual -
>> I decided not to use interpolators, to simplify the hardware as much
>> as possible, and to provide the necessary resolution by very fast
>> timestamping and heavy math processing. In the current configuration I
>> should get 11+ digits in one second for input frequencies above 5MHz.
>>
>> But this is a theoretical number and it does not account for some
>> factors. Now I have an ugly-built prototype with insanely simple
>> hardware running the counter core. And I need to check how well it
>> performs.
>>
>> I have already done some checks and even found and fixed some FW bugs
>> :). Now it works pretty well and I enjoyed watching how one OCXO
>> drifts against the other in the mHz range. I would like to check how
>> many significant digits I am getting in reality.
>>
>> The test setup now comprises two 5MHz OCXOs (they are very old units
>> and far from perfect oscillators - the 1sec and 10sec stability is
>> claimed to be 1e-10, but they are the best I have now). I measure the
>> frequency of the first OCXO using the second one as the counter
>> reference. The frequency counter processes the data in real time and
>> sends continuous one-second frequency stamps to the PC. Here are the
>> experiment results - plots from TimeLab: the frequency difference (the
>> oscillators have been on for more than 36 hours now, but still drift
>> against each other) and the ADEV plots. There are three measurements
>> and six traces - two for each measurement: one for simple reciprocal
>> frequency counting (with the letter R in the title) and one with the
>> math processing (LR in the title). As far as I understand, I am
>> getting 10+ significant digits of frequency in one second, and it is
>> questionable whether I see the counter noise or the oscillators'.
>>
>> I also calculated the usual standard deviation of the measurement
>> results (and tried to remove the drift before the calculations); I got
>> an STD in the 3e-4..4e-4Hz (or 6e-11..8e-11) range in many
>> experiments.
>>
>> Now the questions:
>> 1. Are there any testing methods that would allow me to determine
>> whether I see oscillator noise or the counter does not perform in
>> accordance with the theory (11+ digits)? I know this can be done with
>> a better OCXO, but currently I cannot get better ones.
>> 2. Is my interpretation of the ADEV value at tau=1sec (that I have 10+
>> significant digits) right?
>>
>> As far as I understand the situation, I need better OCXOs to check
>> whether the HW/SW really can do 11+ significant digits of frequency
>> measurement in one second.
>>
>> Your comments are greatly appreciated!
>>
>> P.S. If I feed the counter reference to its input, I get 13 absolutely
>> stable and correct digits and can get more, but this test method is
>> not very useful for the used counter architecture.
>>
>> Thanks!
>> Oleg
>> 73 de UR3IQO
>> <1124.png><1127.png>


Re: [time-nuts] Question about frequency counter testing

2018-04-25 Thread Bob kb8tq
Hi

Unfortunately there is no “quick and dirty” way to come up with an
accurate “number of digits” for a math intensive counter. There are a
*lot* of examples of various counter architectures that have specific
weak points in what they do. One sort of signal works one way, another
signal works very differently.

All that said, the data you show suggests you are in the 10 digits per
second range.

Bob

> On Apr 25, 2018, at 3:01 PM, Oleg Skydan  wrote:
>
> Dear Ladies and Gentlemen,
>
> Let me tell a little story so you will be able to better understand
> what my question is and what I am doing.
>
> I needed to check frequencies in the several-GHz range from time to
> time. I do not need high absolute precision (anyway, that is a
> reference oscillator problem, not a counter problem), but I need a
> fast, high-resolution instrument (at least 10 digits in one second). I
> have only a very old, slow unit, so I constructed a frequency counter
> (yes, yet another frequency counter project :-). It is a bit unusual -
> I decided not to use interpolators, to simplify the hardware as much
> as possible, and to provide the necessary resolution by very fast
> timestamping and heavy math processing. In the current configuration I
> should get 11+ digits in one second for input frequencies above 5MHz.
>
> But this is a theoretical number and it does not account for some
> factors. Now I have an ugly-built prototype with insanely simple
> hardware running the counter core. And I need to check how well it
> performs.
>
> I have already done some checks and even found and fixed some FW bugs
> :). Now it works pretty well and I enjoyed watching how one OCXO
> drifts against the other in the mHz range. I would like to check how
> many significant digits I am getting in reality.
>
> The test setup now comprises two 5MHz OCXOs (they are very old units
> and far from perfect oscillators - the 1sec and 10sec stability is
> claimed to be 1e-10, but they are the best I have now). I measure the
> frequency of the first OCXO using the second one as the counter
> reference. The frequency counter processes the data in real time and
> sends continuous one-second frequency stamps to the PC. Here are the
> experiment results - plots from TimeLab: the frequency difference (the
> oscillators have been on for more than 36 hours now, but still drift
> against each other) and the ADEV plots. There are three measurements
> and six traces - two for each measurement: one for simple reciprocal
> frequency counting (with the letter R in the title) and one with the
> math processing (LR in the title). As far as I understand, I am
> getting 10+ significant digits of frequency in one second, and it is
> questionable whether I see the counter noise or the oscillators'.
>
> I also calculated the usual standard deviation of the measurement
> results (and tried to remove the drift before the calculations); I got
> an STD in the 3e-4..4e-4Hz (or 6e-11..8e-11) range in many
> experiments.
>
> Now the questions:
> 1. Are there any testing methods that would allow me to determine
> whether I see oscillator noise or the counter does not perform in
> accordance with the theory (11+ digits)? I know this can be done with
> a better OCXO, but currently I cannot get better ones.
> 2. Is my interpretation of the ADEV value at tau=1sec (that I have 10+
> significant digits) right?
>
> As far as I understand the situation, I need better OCXOs to check
> whether the HW/SW really can do 11+ significant digits of frequency
> measurement in one second.
>
> Your comments are greatly appreciated!
>
> P.S. If I feed the counter reference to its input, I get 13 absolutely
> stable and correct digits and can get more, but this test method is
> not very useful for the used counter architecture.
>
> Thanks!
> Oleg
> 73 de UR3IQO
> <1124.png><1127.png>