Re: RTL-SDR sample bit depth
Leif, thanks much for the informative replies. Some of the pieces are beginning to connect for me, and I'm going to go back and do some further measurements, including with the noise head.

In the past I'd set up a GNU Radio flowgraph that was in effect a channel power meter. I will go back and use that to test the noise floor in various bandwidths. I'll also try using the noise head to measure NF as you described.

Thanks!
John

On 03/05/2018 05:21 AM, Leif Asbrink wrote:
> Hi again John,
>
> > Leif, one other question... how do you use a noise figure meter to
> > measure an SDR? I have an HP 8970A with noise head available, but
> > haven't figured out how to do a measurement where there's no RF
> > output to connect the meter to.
>
> You could open the unit and connect the signal that goes to the A/D
> converter to the NF meter. That is not very practical, and probably
> very difficult because of spurs.
>
> You can instead use the noise head and your SDR program to measure
> power over the widest spur-free frequency range you can find. Just use
> the SDR to measure by how many dB the noise floor changes when you
> switch the noise head on and off. That is exactly what the NF meter is
> doing, although it does several more things to compensate for the
> noise it generates itself, which is important if you measure low-gain
> amplifiers. You do not want those complications, because you want to
> measure the system NF of the entire SDR.
>
> What you have is like figure 3 here:
> https://cdn.rohde-schwarz.com/pws/dl_downloads/dl_application/application_notes/1ma178/1MA178_2e_NoiseFigure.pdf
>
> Your SDR is a spectrum analyzer, and the power difference you observe
> is the Y-factor in dB. Convert it to a linear power scale and use
> equation 12 in the document. If the NF of your SDR were very near
> 0 dB, the difference you measure would equal the excess noise ratio
> you can read on the noise head.
>
> 73,
> Leif
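As a side note for anyone repeating this measurement: a channel-power reading taken in one bandwidth can be normalized to a 1 Hz density for comparison across bandwidths. A minimal sketch (the -97 dBm example reading is hypothetical, not a value from this thread):

```python
import math

def power_density_dbm_hz(channel_power_dbm: float, bandwidth_hz: float) -> float:
    """Normalize a measured channel power to a 1 Hz noise density."""
    return channel_power_dbm - 10 * math.log10(bandwidth_hz)

# Hypothetical example: -97 dBm measured in a 500 Hz channel
density = power_density_dbm_hz(-97.0, 500.0)
print(round(density, 1))   # ~ -124.0 dBm/Hz
```

Readings taken in different bandwidths should land on the same dBm/Hz figure if the noise floor is flat.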
Re: RTL-SDR sample bit depth
Hi again John,

> Leif, one other question... how do you use a noise figure meter to
> measure an SDR? I have an HP 8970A with noise head available, but
> haven't figured out how to do a measurement where there's no RF output
> to connect the meter to.

You could open the unit and connect the signal that goes to the A/D
converter to the NF meter. That is not very practical, and probably very
difficult because of spurs.

You can instead use the noise head and your SDR program to measure power
over the widest spur-free frequency range you can find. Just use the SDR
to measure by how many dB the noise floor changes when you switch the
noise head on and off. That is exactly what the NF meter is doing,
although it does several more things to compensate for the noise it
generates itself, which is important if you measure low-gain amplifiers.
You do not want those complications, because you want to measure the
system NF of the entire SDR.

What you have is like figure 3 here:
https://cdn.rohde-schwarz.com/pws/dl_downloads/dl_application/application_notes/1ma178/1MA178_2e_NoiseFigure.pdf

Your SDR is a spectrum analyzer, and the power difference you observe is
the Y-factor in dB. Convert it to a linear power scale and use equation
12 in the document. If the NF of your SDR were very near 0 dB, the
difference you measure would equal the excess noise ratio you can read
on the noise head.

73,
Leif
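For reference, the Y-factor arithmetic Leif describes (equation 12 of the R&S note) can be sketched as follows; the 15 dB ENR and 10 dB measured difference are made-up example values, not numbers from this thread:

```python
import math

def nf_from_y_factor(enr_db: float, delta_db: float) -> float:
    """Noise figure from the on/off noise-floor difference (Y-factor method).

    enr_db:   excess noise ratio of the noise head, in dB
    delta_db: measured change in noise floor when the head is switched on
    """
    enr = 10 ** (enr_db / 10)      # linear ENR
    y = 10 ** (delta_db / 10)      # linear Y-factor
    f = enr / (y - 1)              # noise factor
    return 10 * math.log10(f)      # back to dB

print(round(nf_from_y_factor(15.0, 10.0), 2))
```

Note the limiting case Leif mentions: with a hypothetical 0 dB NF device, the measured difference approaches the ENR itself (here, nf_from_y_factor(15.0, 15.0) comes out to a small fraction of a dB).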
Re: RTL-SDR sample bit depth
Hello John,

> I'm looking at 192 kHz of spectrum in a 1024-bin FFT, so that's
> 187.5 Hz per bin. Trying to understand what you said below, is it
> correct that I should view the bin width as the equivalent of the
> receive bandwidth for MDS purposes?

This would be the case if the FFT were un-windowed, which it probably is
not. A window function makes the data points go towards zero at the ends
of the data block. Have a look here:
http://www.sm5bsz.com/slfft/slfft.htm

The filter function that each FFT bin represents can be obtained by
computing the FFT of the window function. If you do not know the window,
you might guess the equivalent bandwidth is 300 Hz or so. A better
strategy is to select, for example, SSB mode with a filter bandwidth of
1 kHz and then measure the noise level within that passband. Many SDR
programs have a true-RMS detector that allows precise measurements.

73,
Leif
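Leif's point, that a windowed FFT bin is effectively wider than the bin spacing, can be checked by computing the equivalent noise bandwidth (ENBW) of the window. This sketch assumes a Hann window purely as an example; the actual window in any given SDR program may differ:

```python
import numpy as np

N = 1024
bin_spacing_hz = 192_000 / N             # 187.5 Hz, as in the thread

w = np.hanning(N)                        # example window; the real one may differ
enbw_bins = N * np.sum(w**2) / np.sum(w)**2
print(round(enbw_bins, 3))               # ~1.5 bins for a Hann window
print(round(enbw_bins * bin_spacing_hz, 1))  # effective noise bandwidth in Hz
```

For a Hann window the ENBW is 1.5 bins, so 187.5 Hz bins behave like roughly 281 Hz noise filters, which is in the neighborhood of Leif's "300 Hz or so" guess.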
Re: RTL-SDR sample bit depth
Leif, one other question... how do you use a noise figure meter to
measure an SDR? I have an HP 8970A with noise head available, but
haven't figured out how to do a measurement where there's no RF output
to connect the meter to.

Thanks,
John

On 03/04/2018 11:56 AM, Leif Asbrink wrote:
> Hi John,
>
> > For an approximation of the minimum discernible signal (MDS) I
> > adjust the signal generator amplitude until I see a noticeable
> > signal that is consistently just above the noise. To find the
> > overload point, I increase the amplitude until I see the first spur
> > appear -- it's a very sudden transition, with a 1 dB amplitude
> > increase triggering spurs many dB above the noise.
> >
> > Based on the assumptions in my earlier message, I would expect to
> > see a dynamic range of about 59 dB (~50 dB from 8 bits at
> > 1.536 Msps, plus 9 dB processing gain by the decimate-by-8).
>
> No, you are measuring noise in a much narrower bandwidth. "Until I see
> a noticeable signal" implies that you look at a spectrum of some kind.
> If I assume your bin resolution is 19.2 Hz, you have another 40 dB of
> dynamic range. You might be interested in this:
> http://www.sm5bsz.com/linuxdsp/hware/rtlsdr/rtlsdr-03.40.htm
>
> > However, I'm seeing closer to 100 dB dynamic range -- for example,
> > with the RF gain set to 20 dB, the MDS is -124 dBm and the overload
> > level is 25 dBm. This tracks for various settings of the RF gain,
> > though there seems to be a few dB of compression with gains above
> > 30 dB.
>
> MDS in what bandwidth? In amateur radio it is usually 500 Hz; if you
> adhere to that, it means you found the noise floor at -151 dBm/Hz.
> That is 23 dB above room temperature (-174 dBm/Hz), so your noise
> figure would be 23 dB. With overload at -25 dBm (typo?) your range
> would be 126 dB in a 1 Hz bandwidth, or 99 dB in a 500 Hz bandwidth.
> That is not really true, however, because you would have to measure
> the noise floor while a strong signal is present, and reciprocal
> mixing would set the limit.
>
> On the other hand, if the strong signal is at 90 MHz it would not
> reach the ADC, so performance would be determined by the tuner chip.
>
> Regards,
> Leif
>
> > I'm trying to understand this discrepancy, which could be the result
> > of:
> >
> > 1. Some AGC operation or gain compression in the R820T2 tuner chip;
> >
> > 2. My assumptions about the internal sample rate, bit depth, or
> > decimation being wrong; or
> >
> > 3. My math being wrong (for example, is there a log10 vs. log20
> > error in my dB calculations, or is the dB scaling in the FFT showing
> > voltage rather than power?).
> >
> > Any thoughts would be appreciated.
> >
> > Thanks,
> > John
> >
> > On 03/02/2018 09:46 AM, John Ackermann N8UR wrote:
> > > Hi --
> > >
> > > I'm trying to understand the sampling and decimation structure of
> > > the RTL-SDR dongle, to work out the effective number of bits after
> > > decimation.
> > >
> > > From Google and from looking at the librtlsdr code (which is
> > > beyond my programming depth), I think I've figured out the
> > > following. I'd much appreciate
> > > verification/correction/amplification.
> > >
> > > 1. The actual ADC in the RTL-2832U is a single-bit sigma-delta
> > > running at some very high rate.
> > >
> > > 2. This is converted to 28.8 Msps at 8-bit depth.
> > >
> > > 3. The 28.8 Msps stream is downsampled to the rate requested by
> > > the user and sent over the USB bus as 8-bit unsigned I/Q pairs.
> > >
> > > Based on that, I *think*:
> > >
> > > a. Any processing gain in the downsampling from 28.8 Msps/8 bits
> > > within the chip is lost because the wire samples are limited to
> > > 8 bits. The output is 8 bits of dynamic range regardless of the
> > > sample rate set.
> > >
> > > b. THEREFORE... for best dynamic range one wants to set the
> > > RTL-2832U to the highest sample rate that avoids lost samples, and
> > > do further decimation in the host processor, where the added bits
> > > aren't lost on the wire.
> > >
> > > I'd appreciate any verification or correction of that analysis.
> > >
> > > Thanks,
> > > John
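John's item 3 is easy to check numerically: FFT bin magnitudes are voltage-like quantities, so they need 20*log10, while powers take 10*log10. A minimal sketch of the distinction:

```python
import math

v = 0.5                    # a voltage-like quantity (e.g. an FFT bin magnitude)
p = v ** 2                 # the corresponding power

db_from_power = 10 * math.log10(p)     # power in dB
db_from_voltage = 20 * math.log10(v)   # same quantity, from the magnitude
print(db_from_power, db_from_voltage)  # identical

# Using 10*log10 on a magnitude instead halves every dB value, which is
# one way an apparent dynamic range can come out wrong by a factor of two.
wrong = 10 * math.log10(v)
print(round(db_from_voltage / wrong, 1))
```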
Re: RTL-SDR sample bit depth
Hi Leif --

Thanks for the reply, and for the pointer to your page! I will look at
that very closely.

The purpose of my tests is for using dongles with the CW Skimmer
software, which takes an input bandwidth of 192 kHz. I'm looking at
192 kHz of spectrum in a 1024-bin FFT, so that's 187.5 Hz per bin.
Trying to understand what you said below, is it correct that I should
view the bin width as the equivalent of the receive bandwidth for MDS
purposes? And that therefore there would be processing gain of about
30 dB (division by 1024)? That would bring my measurements into the
range of theory.

For what it's worth, here's a link to an animation I did showing the
abrupt onset of spurs when the ADC overload point is reached, in this
case at -57 dBm with "RF gain" set to 37.2:
https://www.dropbox.com/s/a788k0cj16mi7zr/dongle_overload.gif?dl=0

Thanks!

73,
John

On 03/04/2018 11:56 AM, Leif Asbrink wrote:
> Hi John,
>
> > For an approximation of the minimum discernible signal (MDS) I
> > adjust the signal generator amplitude until I see a noticeable
> > signal that is consistently just above the noise. To find the
> > overload point, I increase the amplitude until I see the first spur
> > appear -- it's a very sudden transition, with a 1 dB amplitude
> > increase triggering spurs many dB above the noise.
> >
> > Based on the assumptions in my earlier message, I would expect to
> > see a dynamic range of about 59 dB (~50 dB from 8 bits at
> > 1.536 Msps, plus 9 dB processing gain by the decimate-by-8).
>
> No, you are measuring noise in a much narrower bandwidth. "Until I see
> a noticeable signal" implies that you look at a spectrum of some kind.
> If I assume your bin resolution is 19.2 Hz, you have another 40 dB of
> dynamic range. You might be interested in this:
> http://www.sm5bsz.com/linuxdsp/hware/rtlsdr/rtlsdr-03.40.htm
>
> > However, I'm seeing closer to 100 dB dynamic range -- for example,
> > with the RF gain set to 20 dB, the MDS is -124 dBm and the overload
> > level is 25 dBm. This tracks for various settings of the RF gain,
> > though there seems to be a few dB of compression with gains above
> > 30 dB.
>
> MDS in what bandwidth? In amateur radio it is usually 500 Hz; if you
> adhere to that, it means you found the noise floor at -151 dBm/Hz.
> That is 23 dB above room temperature (-174 dBm/Hz), so your noise
> figure would be 23 dB. With overload at -25 dBm (typo?) your range
> would be 126 dB in a 1 Hz bandwidth, or 99 dB in a 500 Hz bandwidth.
> That is not really true, however, because you would have to measure
> the noise floor while a strong signal is present, and reciprocal
> mixing would set the limit.
>
> On the other hand, if the strong signal is at 90 MHz it would not
> reach the ADC, so performance would be determined by the tuner chip.
>
> Regards,
> Leif
>
> > I'm trying to understand this discrepancy, which could be the result
> > of:
> >
> > 1. Some AGC operation or gain compression in the R820T2 tuner chip;
> >
> > 2. My assumptions about the internal sample rate, bit depth, or
> > decimation being wrong; or
> >
> > 3. My math being wrong (for example, is there a log10 vs. log20
> > error in my dB calculations, or is the dB scaling in the FFT showing
> > voltage rather than power?).
> >
> > Any thoughts would be appreciated.
> >
> > Thanks,
> > John
> >
> > On 03/02/2018 09:46 AM, John Ackermann N8UR wrote:
> > > Hi --
> > >
> > > I'm trying to understand the sampling and decimation structure of
> > > the RTL-SDR dongle, to work out the effective number of bits after
> > > decimation.
> > >
> > > From Google and from looking at the librtlsdr code (which is
> > > beyond my programming depth), I think I've figured out the
> > > following. I'd much appreciate
> > > verification/correction/amplification.
> > >
> > > 1. The actual ADC in the RTL-2832U is a single-bit sigma-delta
> > > running at some very high rate.
> > >
> > > 2. This is converted to 28.8 Msps at 8-bit depth.
> > >
> > > 3. The 28.8 Msps stream is downsampled to the rate requested by
> > > the user and sent over the USB bus as 8-bit unsigned I/Q pairs.
> > >
> > > Based on that, I *think*:
> > >
> > > a. Any processing gain in the downsampling from 28.8 Msps/8 bits
> > > within the chip is lost because the wire samples are limited to
> > > 8 bits. The output is 8 bits of dynamic range regardless of the
> > > sample rate set.
> > >
> > > b. THEREFORE... for best dynamic range one wants to set the
> > > RTL-2832U to the highest sample rate that avoids lost samples, and
> > > do further decimation in the host processor, where the added bits
> > > aren't lost on the wire.
> > >
> > > I'd appreciate any verification or correction of that analysis.
> > >
> > > Thanks,
> > > John
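The processing-gain arithmetic in the question above can be written out directly:

```python
import math

sample_rate = 192_000          # Hz, CW Skimmer input bandwidth
fft_size = 1024

bin_width = sample_rate / fft_size      # Hz per bin
gain_db = 10 * math.log10(fft_size)     # noise spread across 1024 bins

print(bin_width)               # 187.5 Hz per bin
print(round(gain_db, 1))       # ~30.1 dB relative to full-bandwidth noise
```

With a window function the effective bin is somewhat wider than 187.5 Hz, so the realized gain is a little less than the ideal 30 dB.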
Re: RTL-SDR sample bit depth
Hi John,

> For an approximation of the minimum discernible signal (MDS) I adjust
> the signal generator amplitude until I see a noticeable signal that is
> consistently just above the noise. To find the overload point, I
> increase the amplitude until I see the first spur appear -- it's a
> very sudden transition, with a 1 dB amplitude increase triggering
> spurs many dB above the noise.
>
> Based on the assumptions in my earlier message, I would expect to see
> a dynamic range of about 59 dB (~50 dB from 8 bits at 1.536 Msps, plus
> 9 dB processing gain by the decimate-by-8).

No, you are measuring noise in a much narrower bandwidth. "Until I see a
noticeable signal" implies that you look at a spectrum of some kind. If
I assume your bin resolution is 19.2 Hz, you have another 40 dB of
dynamic range. You might be interested in this:
http://www.sm5bsz.com/linuxdsp/hware/rtlsdr/rtlsdr-03.40.htm

> However, I'm seeing closer to 100 dB dynamic range -- for example,
> with the RF gain set to 20 dB, the MDS is -124 dBm and the overload
> level is 25 dBm. This tracks for various settings of the RF gain,
> though there seems to be a few dB of compression with gains above
> 30 dB.

MDS in what bandwidth? In amateur radio it is usually 500 Hz; if you
adhere to that, it means you found the noise floor at -151 dBm/Hz. That
is 23 dB above room temperature (-174 dBm/Hz), so your noise figure
would be 23 dB. With overload at -25 dBm (typo?) your range would be
126 dB in a 1 Hz bandwidth, or 99 dB in a 500 Hz bandwidth. That is not
really true, however, because you would have to measure the noise floor
while a strong signal is present, and reciprocal mixing would set the
limit.

On the other hand, if the strong signal is at 90 MHz it would not reach
the ADC, so performance would be determined by the tuner chip.

Regards,
Leif

> I'm trying to understand this discrepancy, which could be the result
> of:
>
> 1. Some AGC operation or gain compression in the R820T2 tuner chip;
>
> 2. My assumptions about the internal sample rate, bit depth, or
> decimation being wrong; or
>
> 3. My math being wrong (for example, is there a log10 vs. log20 error
> in my dB calculations, or is the dB scaling in the FFT showing voltage
> rather than power?).
>
> Any thoughts would be appreciated.
>
> Thanks,
> John
>
> On 03/02/2018 09:46 AM, John Ackermann N8UR wrote:
> > Hi --
> >
> > I'm trying to understand the sampling and decimation structure of
> > the RTL-SDR dongle, to work out the effective number of bits after
> > decimation.
> >
> > From Google and from looking at the librtlsdr code (which is beyond
> > my programming depth), I think I've figured out the following. I'd
> > much appreciate verification/correction/amplification.
> >
> > 1. The actual ADC in the RTL-2832U is a single-bit sigma-delta
> > running at some very high rate.
> >
> > 2. This is converted to 28.8 Msps at 8-bit depth.
> >
> > 3. The 28.8 Msps stream is downsampled to the rate requested by the
> > user and sent over the USB bus as 8-bit unsigned I/Q pairs.
> >
> > Based on that, I *think*:
> >
> > a. Any processing gain in the downsampling from 28.8 Msps/8 bits
> > within the chip is lost because the wire samples are limited to
> > 8 bits. The output is 8 bits of dynamic range regardless of the
> > sample rate set.
> >
> > b. THEREFORE... for best dynamic range one wants to set the
> > RTL-2832U to the highest sample rate that avoids lost samples, and
> > do further decimation in the host processor, where the added bits
> > aren't lost on the wire.
> >
> > I'd appreciate any verification or correction of that analysis.
> >
> > Thanks,
> > John
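Leif's arithmetic can be reproduced directly from the figures quoted above (the -124 dBm MDS, the assumed 500 Hz bandwidth, and the -25 dBm reading of the overload level):

```python
import math

mds_dbm = -124.0         # measured MDS, from the thread
bw_hz = 500.0            # assumed measurement bandwidth (amateur convention)
kT_dbm_hz = -174.0       # room-temperature thermal noise density

noise_floor_dbm_hz = mds_dbm - 10 * math.log10(bw_hz)
nf_db = noise_floor_dbm_hz - kT_dbm_hz

overload_dbm = -25.0     # taking Leif's "(typo?)" reading of the overload level
range_db_1hz = overload_dbm - noise_floor_dbm_hz
range_db_500hz = overload_dbm - mds_dbm

print(round(noise_floor_dbm_hz))   # -151 dBm/Hz
print(round(nf_db))                # 23 dB noise figure
print(round(range_db_500hz))       # 99 dB in 500 Hz
```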
RTL-SDR sample bit depth
Hi --

I'm trying to understand the sampling and decimation structure of the
RTL-SDR dongle, to work out the effective number of bits after
decimation.

From Google and from looking at the librtlsdr code (which is beyond my
programming depth), I think I've figured out the following. I'd much
appreciate verification/correction/amplification.

1. The actual ADC in the RTL-2832U is a single-bit sigma-delta running
at some very high rate.

2. This is converted to 28.8 Msps at 8-bit depth.

3. The 28.8 Msps stream is downsampled to the rate requested by the user
and sent over the USB bus as 8-bit unsigned I/Q pairs.

Based on that, I *think*:

a. Any processing gain in the downsampling from 28.8 Msps/8 bits within
the chip is lost because the wire samples are limited to 8 bits. The
output is 8 bits of dynamic range regardless of the sample rate set.

b. THEREFORE... for best dynamic range one wants to set the RTL-2832U to
the highest sample rate that avoids lost samples, and do further
decimation in the host processor, where the added bits aren't lost on
the wire.

I'd appreciate any verification or correction of that analysis.

Thanks,
John
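Point (b) rests on the fact that averaging R samples of white noise lowers the noise power by 10*log10(R) dB, i.e. half a bit of resolution per factor of two. A quick numerical check with simulated noise, as a sketch of why host-side decimation recovers dynamic range:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 64                                    # decimation factor
noise = rng.standard_normal(R * 10_000)   # unit-variance white noise

# Boxcar decimation: average non-overlapping blocks of R samples each.
decimated = noise.reshape(-1, R).mean(axis=1)

# Noise amplitude drops by sqrt(R), i.e. 10*log10(64) ~ 18 dB of power,
# which is ~3 extra effective bits -- but only if those bits survive the wire.
print(round(noise.std() / decimated.std(), 1))   # ~8
```

This only helps, of course, when the wanted signal fits inside the narrower post-decimation bandwidth, and only if the decimated samples are carried at more than 8 bits.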