Further to my original post, it appears that the metadata object is missing
more than just the count of dropped samples.

As an experiment, I modified the benchmark_rate.py example so that the
metadata error_code flag is printed out every time the recv() method is
called:

while num_rx_samps < target_num_samples:
    try:
        samps = rx_streamer.recv(recv_buffer, metadata)
        print(metadata.error_code)   # <-- added for this experiment
        if samps:
            ...

When I run the script, a typical output looks like this:

Orx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
Orx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none
Orx_metadata_error_code.none
rx_metadata_error_code.none
rx_metadata_error_code.none

As you can see, the Fastpath logger is printing 'O' to the console, but the
metadata object reports no errors.

Looks like a bug to me!  :)
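
In the meantime, I can at least estimate the losses from the metadata
timestamps, as Marcus suggested, roughly along these lines (just a sketch:
estimate_dropped is a hypothetical helper of my own, and the rate and
timestamps below are made-up example values, not B210 output):

```python
def estimate_dropped(prev_time, prev_nsamps, curr_time, rate):
    """Estimate samples dropped between two consecutive recv() calls.

    prev_time:   timestamp of the previous buffer, in seconds
                 (e.g. metadata.time_spec.get_real_secs())
    prev_nsamps: number of samples returned by the previous recv()
    curr_time:   timestamp of the current buffer, in seconds
    rate:        sample rate in samples/second
    """
    # When should the current buffer have started, had nothing been lost?
    expected = prev_time + prev_nsamps / rate
    # Any gap beyond that corresponds to missing samples.
    gap = curr_time - expected
    return max(0, round(gap * rate))

# Example: 1000 samples at 1 MS/s starting at t=0; the next buffer is
# stamped 1.5 ms later, i.e. 0.5 ms (500 samples) went missing.
print(estimate_dropped(0.0, 1000, 0.0015, 1e6))  # -> 500
```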

Brendan.

On Wed, Apr 14, 2021 at 1:23 PM Brendan Horsfield <
[email protected]> wrote:

> Fair enough.  To ensure that this problem is logged with the Ettus
> engineering team, is there an official mailing list or email address that I
> should report this bug to?
>
> On Wed, Apr 14, 2021 at 12:02 PM Marcus D Leech <[email protected]>
> wrote:
>
>> That just sounds like a bug. The Python API is still considered
>> experimental.
>>
>> Sent from my iPhone
>>
>> On Apr 13, 2021, at 9:22 PM, Brendan Horsfield <
>> [email protected]> wrote:
>>
>> 
>> Hi Marcus,
>>
>> I have run some comparison tests between the C++ and Python versions of
>> "benchmark_rate", using a high sampling rate in order to force some
>> overruns.
>>
>> It appears that both versions are detecting and reporting overrun events
>> correctly.  However, the Python version always returns zero for the number
>> of dropped samples.
>>
>> Do you have any idea why this is the case?  Is the resolution of the
>> timer less fine-grained in the Python implementation perhaps?
>>
>> Thanks,
>> Brendan.
>>
>>
>>
>>
>> On Tue, Apr 13, 2021 at 11:05 PM Marcus D Leech <[email protected]>
>> wrote:
>>
>>>
>>>
>>> Sent from my iPhone
>>>
>>> On Apr 13, 2021, at 3:05 AM, [email protected] wrote:
>>>
>>> 
>>>
>>> Hi All,
>>>
>>> I am using a Python script to capture a short burst of rx samples from
>>> my B210. The script is based heavily on the Ettus example
>>> “benchmark_rate.py”, with a couple of additional tweaks I took from the
>>> Ettus GitHub repo (
>>> https://github.com/EttusResearch/uhd/blob/master/host/python/uhd/usrp/multi_usrp.py
>>> ).
>>>
>>> In my script I am calling my rx sampling function repeatedly using a
>>> “for" loop. Any errors that occur during sampling are stored in a
>>> uhd.types.RXMetadata() object, just like in the original Ettus script.
>>>
>>> Here’s the strange part:
>>>
>>> While the script is running, the letter ‘O’ is printed on the screen
>>> about 50% of the time, which I believe is an overflow warning from the
>>> Fastpath logger. However, the number of errors being detected by the
>>> RXMetadata() object is almost zero. How can this be?
>>>
>>> Some questions:
>>>
>>>    - How seriously should I take the Fastpath 'O' warning? What does it
>>>      actually mean? Does it mean that this burst of samples will be
>>>      corrupted/incomplete?
>>>
>>> It absolutely means that samples were lost.
>>>
>>> The metadata should include time stamps that will allow you to compute
>>> how much was lost.
>>>
>>>
>>>    - Why is the RXMetadata object not returning an error every single
>>>      time that the Fastpath logger does?
>>>
>>> This I’m not certain of.
>>>
>>> Thanks,
>>>
>>> Brendan.
>>> _______________________________________________
>>> USRP-users mailing list -- [email protected]
>>> To unsubscribe send an email to [email protected]
>>>
>>>