Hi Joe and all - I’m glad that you’ve incorporated the two-pass decoding into 1.6.1, Joe!
I’ve done a couple of tests. First, a summary of my own results on my batch of 333 HF files. Cases 6 and 7 are from the latest 1.6.1 with two-pass decoding:

1. v1.5 with kvasd: 3128 (2 bad)
2. v1.5 with sfrsd/sfrsd2, ntrials=10000: 3150 (2 bad)
3. v1.5 with sfrsd/sfrsd2, ntrials=2000: 3033
4. v1.5 with sfrsd/sfrsd2, ntrials=1000: 3026
5. JT65, two-pass prototype code, ntrials=2000: 4059
6. WSJT-X 1.6.1 r6021, n=6 (1000 trials): 4044
7. WSJT-X 1.6.1 r6021, n=5 (316 trials): 3994

As you predicted, the difference between 316 trials and 2000 trials is minimal - less than 2%. It looks to me like n=6 is the sweet spot. (A short note on how n appears to map onto trial counts is appended at the end of this message, after Joe's quoted table.)

I feel like we’re in pretty good shape as far as the HF multi-decoding goes. As such, I’ve gone back to playing with the -24dB data. With n=8, I’m getting a disappointing 639 decodes.

I’ve been studying the synchronization code with an eye toward coming up with ideas for improving the low-snr performance. One thing that is driving me a bit crazy is the interplay between the various hard-coded offsets and the zero-padding used at the beginning of the dd array and one of the cx arrays. I have a hacked-up version here that eliminates all empirical dt offsets and all zero-padding - but the maddening thing is that it’s *almost*, but not quite, as good as what we have now. I’ll continue to play with it after some of my frustration wears off.

Along the way, while debugging my time-offset issues, I stumbled on some behavior that I hadn’t anticipated but that, in retrospect, makes perfect sense and explains a long-standing mystery for me. I found that when you get the time offset wrong, you typically get no decodes. However, if everything is offset by exactly an integer number of symbols, then all of a sudden you’ll get a whole bunch of decodes - but they’ll all be completely garbled.

It took me a while to realize that this is a downside of cyclic codes in an application like ours. For a cyclic code, by definition, every cyclic shift of a codeword is another codeword. So if we happen to sync on a sidelobe of the ccf rather than on the main peak, the decoder is still likely to converge - but on a cyclically shifted, and therefore garbled, codeword. (A toy demonstration of this property is also appended at the end of this message.)

This probably explains a class of false decodes that I’ve struggled to understand: the ones with high snr and very small nhard+nsoft, i.e. apparently false decodes associated with codewords that are a minimal distance from the senseword. Of course, it’s possible for a senseword to be so messed up by the channel that it ends up right next to the “wrong” codeword - but those odds are incredibly small, far too small to explain the number of these cases that I’ve seen.

Steve k9an

> On Oct 29, 2015, at 1:36 PM, Joe Taylor <j...@princeton.edu> wrote:
>
> Hi all,
>
> I've added one more line to the table comparing JT65 decoding performance:
>
>                  Correct   False
>  Program         Decodes   Decodes   Decoder
>  ------------------------------------------------------------
>  JT65-HF           2329      24      BM + kvasd
>  WSJT-X r5912      2249       0      BM + kvasd
>  WSJT-X r5955      2114       0      BM + sfrsd
>  WSJT-X r5955      1816       0      BM only
>  WSJT-X r6020      2874       0      BM + sfrsd (N=1000) 2-pass
>  WSJT-X r6020      2887       0      BM + sfrsd (N=3000) 2-pass
>
> Conclusion: for HF-style decoding, a few thousand random erasure
> patterns is enough. At that level we have already reached a point of
> diminishing returns.
>
> -- 73, Joe, K1JT
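
A note on the n values above: the trial counts I quoted are consistent with the depth setting n mapping to roughly 10^(n/2) random erasure trials (n=5 -> 316, n=6 -> 1000, which would put n=8 at 10000). I'm inferring that mapping from the numbers themselves rather than from the decoder source, so treat it as an assumption. A quick check of the arithmetic, in Python:

  # Assumed mapping from the decode-depth parameter n to the number of random
  # erasure trials, inferred from the figures quoted above (n=5 -> 316 trials,
  # n=6 -> 1000); not checked against the WSJT-X source.
  def ntrials(n):
      return round(10 ** (n / 2))

  print([(n, ntrials(n)) for n in (5, 6, 8)])   # [(5, 316), (6, 1000), (8, 10000)]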
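
And here is the toy demonstration of the cyclic-shift property mentioned above. It uses the binary (7,4) cyclic Hamming code with generator g(x) = x^3 + x + 1 rather than the RS(63,12) code over GF(64) that JT65 actually uses (that code is cyclic as well); the small code is just easy to check exhaustively, and the function names are mine, for illustration only.

  # Toy illustration of the cyclic-code property described above: every cyclic
  # shift of a codeword is itself a codeword, so a received word that is a
  # whole-symbol rotation of a valid codeword has a zero syndrome and "decodes"
  # cleanly - just to the wrong (garbled) message.  Binary (7,4) cyclic Hamming
  # code with generator g(x) = x^3 + x + 1, not the code JT65 actually uses.

  G = [1, 0, 1, 1]   # coefficients of g(x) = x^3 + x + 1, highest power first

  def gf2_remainder(word, divisor):
      """Remainder of polynomial long division over GF(2), MSB-first bit lists."""
      r = list(word)
      for i in range(len(r) - len(divisor) + 1):
          if r[i]:
              for j, d in enumerate(divisor):
                  r[i + j] ^= d
      return r[len(r) - (len(divisor) - 1):]

  def encode(msg):
      """Encode a 4-bit message as m(x)*g(x), giving a 7-bit codeword."""
      cw = [0] * 7
      for i, m in enumerate(msg):
          if m:
              for j, g in enumerate(G):
                  cw[i + j] ^= g
      return cw

  def is_codeword(word):
      """A 7-bit word is a codeword iff its polynomial is divisible by g(x)."""
      return not any(gf2_remainder(word, G))

  # Every cyclic shift of every codeword is again a valid codeword - in general
  # a different one, so the "decode" is clean but the payload is garbled.
  for m in range(16):
      msg = [(m >> k) & 1 for k in range(4)]
      cw = encode(msg)
      for s in range(1, 7):
          assert is_codeword(cw[s:] + cw[:s])

  print("All cyclic shifts of all 16 codewords are valid codewords.")

The same argument carries over, symbol by symbol, to a cyclic code over GF(64), which is why a sync error of an exact integer number of symbols can still produce clean-looking but garbled decodes.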