There is definitely something weird about the logic of this block.
Let me explain:

Suppose your training sequence is c(t), with energy E = sum_t |c(t)|^2.
The training sequence c(.) is exactly what the "symbols" parameter of the
correlate block holds.
Suppose your received signal is a scaled version of the training sequence,
e.g., y(t) = a c(t).
You would expect that the block correlates the incoming signal y(t) with
the given training c(t) and declares a peak when the correlation exceeds a
certain threshold A, i.e., something like

  sum_s y(s) c*(s) > A  <==>  a sum_s |c(s)|^2 > A  <==>  sum_s |c(s)|^2 > A/a
Now, since the full correlation sum_s |c(s)|^2 equals E, you would expect
the quantity A/a to scale with E (i.e., to be a fraction of E):

  A/a = fraction * E  <==>  A = a * fraction * E
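
To make this concrete, here is a minimal NumPy sketch of the argument (the
sequence c, the scalings a, and all the names below are mine, not anything
taken from the block):

import numpy as np

rng = np.random.default_rng(0)
# stand-in training sequence c (QPSK-like), energy E = sum_t |c(t)|^2
c = rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)
E = np.sum(np.abs(c) ** 2)

for a in (0.1, 1.0, 10.0):
    y = a * c                              # genie-scaled training sequence
    peak = np.abs(np.sum(y * np.conj(c)))  # full correlation at the peak
    print(a, peak / E)                     # prints a itself, i.e. peak = a * E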

This means that if one wants to build a correlator that works with an
arbitrary scaling "a" (suppose a genie handed you "a" at the Rx), the only
thing you should have to do is set the block's "threshold" parameter to the
quantity a * fraction.
This would make the whole system transparent: no matter what the scaling
is, your estimator would work exactly the same.
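
In the same NumPy sketch (again my own stand-in, not the block's code),
that transparency with a linear threshold looks like this:

import numpy as np

rng = np.random.default_rng(0)
c = rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)
E = np.sum(np.abs(c) ** 2)
fraction = 0.5                                 # the knob we want to keep fixed

for a in (0.1, 1.0, 10.0):
    corr = np.abs(np.sum(a * c * np.conj(c)))  # equals a * E
    print(a, corr > a * fraction * E)          # True for every a: transparent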

HOWEVER, as it turns out, the "correct" scaling that makes this block work
is to set threshold = a^2 * fraction.
This is really weird, and I do not see any intuition behind that type of
scaling...

Please see the attached minimal working example, and notice the squaring
of the scaling inside the block's threshold, which makes it transparent to
scaling the input signal.
However, if you change this to anything else, the system breaks down (in
the sense that its behavior changes when the input signal is scaled).
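
For those who do not want to open the .grc, here is a self-contained NumPy
stand-in for the same test. The only rule I found that reproduces the a^2
behavior numerically is a magnitude-SQUARED comparison against
threshold * E^2; whether the block actually does this internally I do not
know:

import numpy as np

rng = np.random.default_rng(0)
c = rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)
E = np.sum(np.abs(c) ** 2)
fraction = 0.5

for a in (0.1, 1.0, 10.0):
    corr_sq = np.abs(np.sum(a * c * np.conj(c))) ** 2  # equals a^2 * E^2
    # with threshold = a^2 * fraction the decision never changes with a ...
    print(a, corr_sq > a ** 2 * fraction * E ** 2)     # True for every a
    # ... while the "intuitive" threshold = a * fraction flips with a:
    print(a, corr_sq > a * fraction * E ** 2)          # False, True, True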

best
Achilleas

Attachment: test.grc