There is also non-trivial channel-specific DSP development likely needed
too, not just ML training -- the ML still relies on the receiver managing
to sync and track the signal.  Maybe an ML-only adjustment for a different
channel would be better than doing nothing, but it probably won't get
particularly close to the results you'd get from some more direct
engineering.
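
For what it's worth, the "ML only adjustment" part is cheap to prototype:
keep the model as-is and fine-tune it against a simulated version of the
new channel.  Below is a minimal numpy sketch of a crude Watterson-style
two-path fader plus AWGN that could serve as a training-time augmentation;
the sample rate, delay, Doppler spread and SNR values are placeholders, not
anything taken from the actual RADE training setup.

import numpy as np

def fading_taps(n, fs, doppler_hz, rng):
    # Complex Gaussian tap gain with roughly doppler_hz of spread, made by
    # brick-wall low-pass filtering white complex noise in the frequency domain.
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    spec = np.fft.fft(noise)
    spec[np.abs(np.fft.fftfreq(n, d=1.0 / fs)) > doppler_hz] = 0.0
    taps = np.fft.ifft(spec)
    return taps / (np.sqrt(np.mean(np.abs(taps) ** 2)) + 1e-12)

def two_path_channel(x, fs=8000, delay_ms=2.0, doppler_hz=1.0, snr_db=3.0, seed=0):
    # Pass complex baseband samples x through two independently fading paths,
    # the second delayed by delay_ms, then add white Gaussian noise at snr_db.
    rng = np.random.default_rng(seed)
    d = int(delay_ms * 1e-3 * fs)
    delayed = np.concatenate([np.zeros(d, dtype=complex), x[:-d or None]])
    y = fading_taps(len(x), fs, doppler_hz, rng) * x \
        + fading_taps(len(x), fs, doppler_hz, rng) * delayed
    p = np.mean(np.abs(y) ** 2)
    noise = rng.normal(size=len(y)) + 1j * rng.normal(size=len(y))
    return y + noise * np.sqrt(p / (10 ** (snr_db / 10)) / 2)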

Another fun potential avenue that would avoid more of the human-engineered
parts: when you know the speaker, hold the RADE encoder constant but
fine-tune the decoder for a particular party's voice.  That might improve
the intelligibility threshold by some fraction of the channel capacity
otherwise wasted communicating that information -- not entirely unlike how
the advanced decoding in WSJT-X is able to exploit the assumption that the
callsign in the message you're receiving is more likely the callsign you
just called.
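
If anyone wants to experiment with that, the recipe is roughly as follows.
This is a minimal PyTorch sketch assuming a hypothetical RADE-like model
that exposes encoder and decoder submodules, plus a small loader of the
target speaker's (features, target) pairs; the names and the MSE loss are
illustrative stand-ins, not the actual RADE training code.

import torch

def finetune_decoder(model, speaker_loader, epochs=5, lr=1e-4, device="cpu"):
    model.to(device)

    # Freeze the encoder so the transmitted representation (and hence what
    # the other station has to send over the air) stays exactly the same.
    for p in model.encoder.parameters():
        p.requires_grad = False
    model.encoder.eval()

    # Only decoder parameters get updated.
    opt = torch.optim.Adam(model.decoder.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # stand-in for the real reconstruction loss

    for _ in range(epochs):
        for feats, target in speaker_loader:
            feats, target = feats.to(device), target.to(device)
            with torch.no_grad():
                latent = model.encoder(feats)   # unchanged encoding
            recon = model.decoder(latent)       # speaker-adapted decoding
            loss = loss_fn(recon, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model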


On Sat, Aug 16, 2025 at 6:58 AM glen english LIST <glenl...@cortexrf.com.au>
wrote:

> Hi David
> Have just read all the papers.  As I read it (
> https://arxiv.org/pdf/2505.06671)
>
>  we could alternatively train the system for, say, aurora channel
> characteristics on 2m, or rainscatter characteristics on 10 GHz
>
> and could expand or use different training sets for different propagation
> modes on different bands,
> ....depending on how general they are.
>
> IE this is a working demonstration for AWGN and HF-style multipath
> channels, but not at all limited to it
>
> best regards
> -glen
>
>
> On 11/08/2025 16:44, david wrote:
>
> Hi Richard,
>
> Yes it certainly is planned, but as per the Radio Autoencoder page:
>
_______________________________________________
Freetel-codec2 mailing list
Freetel-codec2@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
