Joe, 

1. This summarizes decoding results using exp(x) (aka “jt”) symbol metrics and 
power-percentage (aka “sf”) symbol metrics. 

The procedure in each case was to self-tune the algorithm by (i) running with 
an arbitrary erasure probability matrix and then (ii) using probability of 
symbol-error and probability of correct-mr2 in fort.40 to update the erasure 
and insertion probability matrices. I iterated this procedure until it 
converged (2 or 3 runs). After tuning, the algorithm was run with the mr2 
insertion probability scaling factor set to 0.0 (no mr2sym insertions) and 
again with the scaling factor set to 0.3 (which enables some mr2sym 
insertions). In the latter case, the calc_syn parameter must be set to 1 in 
the call to decode_rs.
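The tuning loop can be sketched roughly as follows. This is a Python sketch, not the actual WSJT-X code: `run_decoder`, the array shapes, and the convergence tolerance are stand-ins of my own; `run_decoder` plays the role of one decoding run that reports the fort.40-style statistics.

```python
import numpy as np

def tune_probabilities(run_decoder, p_erase, p_ins, max_iters=50, tol=1e-3):
    """Self-tune erasure/insertion probabilities by fixed-point iteration.

    run_decoder(p_erase, p_ins) stands in for one decoding run; it returns
    the measured probability of symbol error and the probability that the
    second-most-reliable symbol (mr2) is correct, binned by reliability
    (analogous to the statistics written to fort.40).
    """
    for _ in range(max_iters):
        p_err, p_mr2 = run_decoder(p_erase, p_ins)
        # Erase where the top-ranked symbol is likely wrong; insert mr2
        # where the second-ranked symbol is likely correct.
        if (np.max(np.abs(p_err - p_erase)) < tol and
                np.max(np.abs(p_mr2 - p_ins)) < tol):
            return p_err, p_mr2
        p_erase, p_ins = p_err, p_mr2
    return p_erase, p_ins
```

In practice the iteration converged in 2 or 3 runs, as noted above.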

Decode counts for each combination (sf/jt metrics, mr2sym scale factor 0.0 or 
0.3):

                           ntrials=10000   ntrials=1000   ntrials=100
sf, no mr2syms (0.0):           835             733            502
sf, with mr2syms (0.3):         839             747            556
jt, no mr2syms (0.0):           836             733            533
jt, with mr2syms (0.3):         837             745            560

Conclusions:
- No significant differences between the sf and jt symbol metrics, provided 
that proper erasure/mr2-insertion probabilities are used. 
- mr2sym insertion does improve results significantly at very small ntrials 
values, but see item 2 below.

2. I did some execution-time runs, and I can verify your assertion that 
setting calc_syn=1 in the call to decode_rs (which is necessary when mr2sym 
insertion is done) increases execution time by a factor of approximately 1.6 - 
which is why the option to turn off the syndrome calculation was put in. Thus, 
the conclusion here is that the added overhead needed for mr2sym insertion is 
not worth it, at least for the idealized no-fading simulated signals analyzed 
in these tests. 
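One way to weigh the 1.6x overhead, using the sf-metric decode counts above: compare decodes per unit of decoder time with and without mr2sym insertion at each ntrials value. The decodes-per-time framing (and the assumption that total run time is dominated by the decoder, so mr2sym runs cost 1.6x) is mine, not a measurement.

```python
# Decode counts from the sf-metric runs above, as (no mr2sym, with mr2sym);
# 1.6 is the measured execution-time factor when calc_syn=1.
results = {10000: (835, 839), 1000: (733, 747), 100: (502, 556)}
overhead = 1.6

for ntrials, (no_mr2, with_mr2) in results.items():
    # Decodes per unit of decoder time, normalizing a no-mr2sym run to 1.0.
    print(ntrials, no_mr2, round(with_mr2 / overhead, 1))
```

Even at ntrials=100, where mr2sym insertion helps the most, 556/1.6 is about 348 decodes per unit time versus 502 without it, consistent with the conclusion above.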

It’s striking that the results of all 4 experiments are essentially the same at 
ntrials=10000. This suggests to me that we are decoding all of the cases that 
are admitted by our nhard/nsoft-based selection criteria. I suppose that more 
work could be done to try to refine these criteria. In fact, I should mention 
that I am having second thoughts about my nsoft metric. I can’t help thinking 
that it should be based on log(p1) instead of p1, so I may come back to that 
eventually. Before that, I think that I’d like to see what happens to the 
erasure probabilities if we tune on some real data.
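A toy illustration of the log(p1) idea, with made-up numbers (this is not the nsoft code, just the intuition): summing log(p1) scores the log-likelihood of the whole received word, so one very shaky symbol drags the metric down, whereas summing p1 can leave two quite different words indistinguishable.

```python
import math

# Two hypothetical 4-symbol received words, with per-symbol probabilities
# p1 that the most-reliable symbol is correct (illustrative values only).
a = [0.9, 0.9, 0.9, 0.9]   # uniformly good symbols
b = [1.0, 1.0, 1.0, 0.6]   # three certain symbols, one shaky one

def metric_linear(p):
    return sum(p)

def metric_log(p):
    # Log-likelihood that every symbol in the word is correct: sum of log(p1).
    return sum(math.log(x) for x in p)

# The linear metric cannot tell the two words apart (both sum to 3.6),
# but the log metric penalizes the single shaky symbol in b:
print(metric_linear(a), metric_linear(b))
print(metric_log(a), metric_log(b))
```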

Finally, I am able to compile and run WSJT-X r5950 on OS X. Seems to work fine, 
although I only got 448 decodes. Perhaps this is due to the mismatched metrics. 
I’ll play around with it.

Steve k9an
------------------------------------------------------------------------------
_______________________________________________
wsjt-devel mailing list
wsjt-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/wsjt-devel
