Hi all,

I am trying to debug a GRC flowgraph containing an Embedded Python Block
implemented as a basic_block.
The Python block has two inputs and one output.
The block makes sure (through forecast()) that both inputs have the same
sample requirements.
Also, the block always uses consume_each(), so both input buffers are
advanced by the same number of samples.
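To make the setup concrete, here is a minimal sketch of the forecast()/consume_each() pattern I am using. A real block subclasses gnuradio.gr.basic_block; a tiny stand-in stub replaces it here so the sketch runs without GNU Radio installed, and the block name and the add operation in general_work are hypothetical placeholders.

```python
import numpy as np

class basic_block:  # stand-in stub for gnuradio.gr.basic_block
    def __init__(self, name, in_sig, out_sig):
        self.name, self.in_sig, self.out_sig = name, in_sig, out_sig
        self._consumed = 0
    def consume_each(self, n):
        # Advance the read pointer of every input by the same n samples.
        self._consumed += n

class two_in_one_out(basic_block):
    """Toy block: consumes the same number of samples from both inputs."""
    def __init__(self):
        basic_block.__init__(self, "two_in_one_out",
                             in_sig=[np.complex64, np.complex64],
                             out_sig=[np.complex64])

    def forecast(self, noutput_items, ninput_items_required):
        # Ask for the same number of input samples on every input.
        for i in range(len(ninput_items_required)):
            ninput_items_required[i] = noutput_items

    def general_work(self, input_items, output_items):
        # The scheduler may still offer different amounts on each input,
        # so process only what is available on both.
        n = min(len(input_items[0]), len(input_items[1]),
                len(output_items[0]))
        output_items[0][:n] = input_items[0][:n] + input_items[1][:n]
        self.consume_each(n)  # keep both input streams aligned
        return n
```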

I observe the following behavior:

During normal operation (high SNR) the lengths of the two input buffers are
the same.
However, after some tinkering (I decrease the SNR using a QT GUI Range and
then increase it again), the two input buffers have different lengths!
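A toy model of the situation (plain Python, hypothetical production counts) shows how the fill levels could diverge even with strictly symmetric consumption, if the upstream blocks happen to produce into the two buffers at different instantaneous rates:

```python
# Toy scheduler loop: the block always consumes the same n from both
# buffers, but the producers may deliver different amounts per iteration,
# so the leftover (buffer fill level) can differ between the two inputs.
def step(buf_a, buf_b, produced_a, produced_b):
    buf_a += produced_a
    buf_b += produced_b
    n = min(buf_a, buf_b)       # symmetric consumption
    return buf_a - n, buf_b - n

a = b = 0
for pa, pb in [(100, 100), (100, 40), (60, 100)]:  # bursty upstream
    a, b = step(a, b, pa, pb)
print(a, b)  # fill levels end up unequal
```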

Is this even possible?
Can the scheduler do that?

I first suspected that the peak detector in the flowgraph, which feeds the
first input of the Python block, was chewing up samples when it does not
find a peak, so that my two streams became unsynchronized. However, I
verified that the peak detector is a sync_block, so that should be
impossible as well...

Do you have any idea as to why this is happening?

I attach the GRC file (tested in both 3.7 and 3.8).

thanks in advance for any help,
Achilleas

Attachment: test_input_buffers.grc
