Hello Rick,
The minimum detectable signal for CW for most radios is on the order
of 1 microvolt (peak) in, say, 1 kHz of bandwidth (they are actually
better). This is about -110 dBm in a 50 ohm resistor.
Wait a moment. How do you define the minimum detectable signal? In
(old) ARRL handbook measurements, the MDS is made equal to the noise
floor, by definition.
So, -110 dBm in 1 kHz means -140 dBm/Hz. That corresponds to a noise
figure of 34 dB, since the thermal noise floor is -174 dBm/Hz. Few
radios are so insensitive; see, for instance, the reviews by Bob
Sherwood: noise floors are in the -120 to -135 dBm zone (in 500 Hz,
I presume, but that does not change the figures much).
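The noise-figure arithmetic above can be checked in a few lines of
Python (a sketch; -174 dBm/Hz is thermal noise at 290 K):

```python
import math

MDS_dBm = -110.0          # minimum detectable signal, per the ARRL definition
bw_Hz = 1000.0            # measurement bandwidth
kTB_dBm_per_Hz = -174.0   # thermal noise density at 290 K

# Normalize the MDS to a 1 Hz bandwidth, then compare with thermal noise.
noise_density_dBm = MDS_dBm - 10 * math.log10(bw_Hz)   # -140 dBm/Hz
noise_figure_dB = noise_density_dBm - kTB_dBm_per_Hz   # 34 dB

print(noise_density_dBm, noise_figure_dB)
```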
This does not mean the LSB for the A/D can be just 1 microvolt. The
A/D will need a few more bits in order to resolve this signal. Say
3 bits. At 6 dB per bit this puts the lowest range for the A/D at
about -128 dBm. With an upper input level of +10 dBm, then the
dynamic range of the A/D "process" has to be about 138 dB. Say 140
dB. That is about 24 bits of dynamic range. There are ways to get
this by oversampling with fewer bits and then decimating and
filtering. For every halving of the bandwidth you gain 1/2 bit
(3 dB) of dynamic range. So in order to reach this dynamic range, a
16 bit A/D would need to be oversampled by 2^16 (16 halvings at 1/2
bit each), then decimated and filtered, to get the extra 8 bits.
For a 1 kHz BW this means the 16 bit A/D has to run at about 65
MHz. Tough. Another
concern is the equivalent noise figure of the A/D. This can be
estimated by looking at the A/D output and observing the range of
random outputs. These have to be dealt with also, and just
decimating and filtering may not be good enough.
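The oversampling bookkeeping above can be sketched like this (numbers
taken from the paragraph; 1/2 bit per halving is the standard 3 dB
processing gain per octave of bandwidth reduction):

```python
adc_bits = 16
target_bits = 24          # ~140 dB of dynamic range at ~6 dB per bit
final_bw_Hz = 1000.0      # bandwidth after decimation and filtering

extra_bits = target_bits - adc_bits        # 8 bits still to gain
octaves = 2 * extra_bits                   # 1/2 bit per halving -> 16 halvings
oversample_factor = 2 ** octaves           # 65536
sample_rate_Hz = final_bw_Hz * oversample_factor

print(oversample_factor, sample_rate_Hz / 1e6)  # 65536, ~65.5 MHz
```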
Not sure all this is right.
Rick
w6nzk
You come to reasonable figures, but the supposition that an ADC is
not able to resolve signals lower than 1 LSB (least significant bit)
is simply wrong, and comes from a simplistic model. I will try to
explain - English is not my mother tongue, so forgive me :-) - and I
hope the explanation will be useful to others. I have met strong
resistance to this idea from some of my friends who come from a
digital-world background. Purists, please stop reading now; I will
simplify (as much as I can).
You would be right if the ADC were perfect, that is, without any
noise. Take a simple, low bandwidth 4 bit ADC: with no input, its
output data will be steady at 0.
But in our case, without any external signal, the output of a 14 or
16 bit, 100 MS/s ADC is not steady at 00000000000000 (or at
1000 0000 0000 0000, in offset binary). It continuously toggles the
lowest few bits by +/- several counts, due to intrinsic noise. It is
the presence of this noise that allows the detection of signals
lower than 1 LSB.
If you look at the output data stream after applying a signal with a
P-P value lower than 1 LSB, you will see a (small!) periodicity in
the output data. In other words, doing an FFT of the output data
stream, you will find a peak at the signal frequency; the longer the
FFT, the higher the resolution and the cleaner your signal peak. The
signal will, let me say, "modulate" the intrinsic noise of the ADC
and appear in the output data. This is what is called dithering.
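A quick numerical sketch of the effect (hypothetical numbers: a sine
of 0.4 LSB peak, with Gaussian noise of 1 LSB rms standing in for
the ADC's intrinsic noise):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 65536                 # samples in the record
k = 1000                  # signal placed exactly in FFT bin k
n = np.arange(N)
signal = 0.4 * np.sin(2 * np.pi * k * n / N)   # 0.4 LSB peak: below 1 LSB

# A noiseless ADC (round to the nearest LSB) loses the signal entirely:
clean = np.round(signal)
print(np.all(clean == 0))          # True: every sample quantizes to 0

# With intrinsic noise present, the sub-LSB signal survives quantization:
dithered = np.round(signal + rng.normal(0.0, 1.0, N))
spectrum = np.abs(np.fft.rfft(dithered)) / N
print(spectrum[k] > 10 * np.median(spectrum))  # True: clear peak at bin k
```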
Just to give an example, the Perseus clips at (approx.) 0.3 Vp at
the antenna input. The ADC has 14 bits, so the LSB is 0.3/16384 = 18
microvolts. But signals much lower than 18 microvolts are readily
received.
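The same LSB arithmetic in code, mirroring the figures quoted above:

```python
clip_Vp = 0.3             # approximate clipping level at the antenna input
bits = 14                 # ADC resolution
lsb_uV = clip_Vp / 2 ** bits * 1e6

print(round(lsb_uV, 1))   # 18.3 microvolts per LSB
```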
I find it much simpler to consider an ADC as if it were a
traditional receiver, with a definite noise figure and hence a noise
power density (dBm/Hz), and a maximum allowable signal. You can't
reduce the noise power density, except by putting a preamplifier
before the ADC. The number of bits (the parallelism of the ADC) is
not important; what counts is the relationship between the noise
density, the full scale value and the sampling rate. Then you apply
all the digital processing arts: decimation, filtering, FFT... and
in the end the sensitivity and blocking dynamic range are always the
same, defined by the front-end. The concept of processing gain is
fictitious (nothing in the process "gains"), but it serves well for
doing the calculations.
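As a sketch of that receiver view, take a hypothetical converter
with a +10 dBm full-scale input, 75 dB of SNR over its Nyquist band,
and an 80 MS/s clock (illustrative numbers, not any particular ADC's
datasheet):

```python
import math

full_scale_dBm = 10.0     # clipping level at the ADC input (assumed)
snr_dB = 75.0             # SNR over the Nyquist bandwidth (assumed)
fs_Hz = 80e6              # sample rate; Nyquist bandwidth is fs/2

# Spread the ADC's total noise over the Nyquist band to get a density,
# then compare with thermal noise, exactly as for an analog front-end.
noise_density_dBm = full_scale_dBm - snr_dB - 10 * math.log10(fs_Hz / 2)
noise_figure_dB = noise_density_dBm - (-174.0)

print(round(noise_density_dBm, 1), round(noise_figure_dB, 1))  # -141.0, 33.0
```

Decimating and filtering down to a narrow CW bandwidth does not
change this density; it only narrows the band over which the noise
is collected, which is why the "processing gain" is just bookkeeping.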
73 - Marco IK1ODO / AI4YF