Hi Joe, 

I did some tests on WSJT-X r5954 this evening. I added both sets (JT and SF) of
erasure probabilities and ran several combinations. I didn't change any
settings other than commenting or uncommenting the necessary lines in demod64a
and in sfrsd2.

1. matched: JT symbol metrics and JT erasure probs: 607 decodes
2. mismatched: SF symbol metrics and JT erasure probs: 523 decodes
3. matched: SF symbol metrics and SF erasure probs: 594 decodes
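For reference, here is a minimal sketch of the kind of erasure selection these probabilities feed into, assuming an sfrsd-style decoder that stochastically erases low-confidence symbols on each decoding trial. The function name and the cap-handling are illustrative, not the actual WSJT-X code:

```python
import random

def pick_erasures(erasure_probs, max_erasures, seed=0):
    """Choose symbol indices to erase for one decoding trial.

    erasure_probs: per-symbol erasure probabilities (higher = less reliable),
    as looked up from the JT or SF tables. max_erasures: cap imposed by the
    RS code's error/erasure correction budget.
    """
    rng = random.Random(seed)
    # Erase each symbol independently with its tabulated probability.
    chosen = [i for i, p in enumerate(erasure_probs) if rng.random() < p]
    # If we overshoot the budget, keep only the least reliable symbols.
    if len(chosen) > max_erasures:
        chosen.sort(key=lambda i: erasure_probs[i], reverse=True)
        chosen = chosen[:max_erasures]
    return sorted(chosen)
```

The point of matching metrics to probabilities is visible here: the tables are only meaningful if the symbol metrics that index them were computed the same way the tables were trained.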

Overall, the decode counts are still lower than WSJT's, but at least the
numbers are now the right way around. And all three results are much better
than the best result (448, I think) that I got when I tried it yesterday. So
whatever knob you turned, just turn it a bit more…

If the s3 array is treated differently in WSJT and WSJT-X (say, a different
flattening algorithm or noise-level estimate), then I suppose it's possible
that re-tuning the erasure probs would improve things a bit.
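To illustrate why the flattening step matters, here is a hedged sketch of one plausible flattening scheme: divide each spectral bin of the s3 array by a baseline noise estimate taken as the median across symbols. This is only an assumption about the general shape of the operation, not the WSJT or WSJT-X algorithm; if the two programs normalize differently, the same soft metric maps to a different point on the erasure-probability tables.

```python
import numpy as np

def flatten_s3(s3):
    """Flatten symbol spectra by a per-bin baseline (illustrative only).

    s3: array of shape (nbins, nsymbols) holding symbol spectra.
    The baseline is the median power of each bin across all symbols,
    a crude noise estimate that is robust to the few signal-bearing bins.
    """
    baseline = np.median(s3, axis=1, keepdims=True)
    baseline[baseline <= 0] = 1.0   # guard against empty bins
    return s3 / baseline
```

After flattening, each bin's median power is 1, so symbol metrics derived from s3 become comparable across bins and files.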

Eventually, it might be useful to add back the code that accumulates
statistics (in the 8x8 matrix form) and also to let sfrsd2 read the
accumulated statistics without manual intervention, i.e. push-buttonize the
system's self-tuning capability. That would make it easy to tune up the
algorithm for HF-type files.
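The self-tuning loop above could be sketched as follows. This is a hypothetical accumulator, assuming the 8x8 matrix is indexed by two binned symbol-quality measures (the class name, bin meanings, and the 0.5 prior are all my assumptions, not the form used in WSJT-X):

```python
import numpy as np

class ErasureStats:
    """Accumulate 8x8 counts of hard-decision outcomes, then emit
    empirical erasure probabilities that a decoder could read back in."""

    def __init__(self, nbins=8):
        self.wrong = np.zeros((nbins, nbins))
        self.total = np.zeros((nbins, nbins))

    def add(self, bin_a, bin_b, was_wrong):
        # bin_a, bin_b: quality-measure bins for one received symbol,
        # known to be right or wrong from a successfully decoded file.
        self.total[bin_a, bin_b] += 1
        if was_wrong:
            self.wrong[bin_a, bin_b] += 1

    def probs(self):
        # Empirical P(symbol wrong | bins); fall back to a flat 0.5
        # prior in cells with no accumulated data.
        return np.where(self.total > 0,
                        self.wrong / np.maximum(self.total, 1),
                        0.5)
```

Running this over a batch of HF files and writing `probs()` out where sfrsd2 expects its table is the "push-button" tuning step.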

I quickly tried your spurious-vector fix on one of my 20m batches and it seems
to be very effective, reducing the number of spurious vectors by a large
factor. A few files still produce 8 or more false decodes against only 2 real
decodes, but overall it is much better.

Last thing: where do I go to turn off the blank lines for the failed decodes?
I'd like to disable those and then run my HF files to compare with previous
results.

Steve k9an
------------------------------------------------------------------------------
_______________________________________________
wsjt-devel mailing list
wsjt-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/wsjt-devel
