Joe,

RR on all. 

I think that I have now arrived at where you were some hours ago - scratching 
my head. 

I’m running WSJT-X r5950 on OS X. The sfrsd2.c routine is set up for the jt 
symbol metrics. If demod64a uses jt metrics, results are very poor - maybe 
100/1000 decodes. If I change demod64a to use sf metrics, I get 448 decodes, 
which is the smaller number you mentioned.
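
A single switch shared by the two routines would make this kind of 
mismatch impossible. A minimal sketch, assuming the metric comes from 
the 64 spectral bin powers (symbol_metric and the exp() form are my 
illustration, not the actual demod64a code):

#include <math.h>
#include <stdio.h>

#define USE_SF_METRIC 1   /* 0 selects the jt-style exp() metric */

/* Toy per-symbol metric computed from the 64 bin powers. */
static double symbol_metric(const double s[64], int best)
{
    double tot = 0.0;
    for (int j = 0; j < 64; j++)
        tot += s[j];
#if USE_SF_METRIC
    return s[best] / tot;           /* power-percentage ("sf") */
#else
    return exp(s[best]) / 64.0;     /* exp(x)-style ("jt"), form assumed */
#endif
}

int main(void)
{
    double s[64];
    for (int j = 0; j < 64; j++)
        s[j] = 1.0;                 /* flat noise floor */
    s[17] = 6.0;                    /* one strong bin */
    printf("metric for bin 17: %f\n", symbol_metric(s, 17));
    return 0;
}

With both demod64a and sfrsd2 including the same definition, they could 
never be set up for different metrics.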

Did you figure out why it seems to work best with the wrong metrics??

Steve k9an

> On Sep 30, 2015, at 8:44 PM, Joe Taylor <[email protected]> wrote:
> 
> Hi Steve,
> 
> Thanks for sharing your further test results.  The iterative self-tuning 
> procedure seems to work very well, and I guess we are agreed that 
> stochastic substitution of second-best symbols isn't buying us enough to 
> make the performance hit worthwhile.  We should, however, remain 
> open-minded about a possible advantage in conditions of Rayleigh fading, 
> QRM, etc.
> 
> I also observed the smaller number of decodes when sfrsd2 is invoked 
> from within WSJT-X, compared with WSJT.  I haven't tracked down the 
> reason, yet; I still suspect that some upstream filter is rejecting some 
> signals that are actually worthy of attention.  At HF we were not so 
> hungry for the last 0.5 dB of sensitivity.
> 
> Your notion to do multi-threading by running multiple simultaneous sfrsd 
> threads may indeed be a good way to go.  As you say, the setup time in 
> sfrsd (before entering the stochastic loop) is small enough to be 
> ignored.  Moreover, that makes it easy to use OpenMP because we can do 
> that part in Fortran -- similar to how it's already done when doing 
> both JT9 and JT65 at the same time.  Nothing special needs to be done to 
> the sfrsd code; there's no need for it to be C++ rather than C.  We will 
> just need to take care to distinguish shared variables from "per thread" 
> variables, do I/O in the main thread, etc.  I managed to make this work 
> before, and Bill knows all the tricks.
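> 
> Roughly the shape of it, sketched in C with OpenMP just to show the 
> shared/per-thread split (sfrsd_one is a stand-in, not an actual 
> routine, and the real loop would be on the Fortran side; compile 
> with -fopenmp):
> 
> #include <omp.h>
> #include <stdio.h>
> 
> /* Stand-in for one full sfrsd decode attempt on one candidate. */
> static int sfrsd_one(int isig, int ntrials, unsigned seed)
> {
>     (void)ntrials; (void)seed;
>     return isig % 2 == 0;            /* pretend alternate candidates decode */
> }
> 
> int main(void)
> {
>     enum { NSIG = 8 };
>     int result[NSIG];                /* shared; each thread owns one slot */
> 
> #pragma omp parallel for
>     for (int i = 0; i < NSIG; i++) {
>         unsigned seed = 12345u + (unsigned)i;   /* per-thread RNG seed */
>         result[i] = sfrsd_one(i, 10000, seed);
>     }
> 
>     for (int i = 0; i < NSIG; i++)   /* I/O back in the main thread */
>         if (result[i])
>             printf("candidate %d decoded\n", i);
>     return 0;
> }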
> 
>       -- Joe
> 
> On 9/30/2015 8:25 PM, Steven Franke wrote:
>> Joe,
>> 
>> 1. This summarizes decoding results using exp(x) (aka “jt”) symbol metrics 
>> and power-percentage (aka “sf”) symbol metrics.
>> 
>> The procedure in each case was to self-tune the algorithm by (i) running 
>> with an arbitrary erasure probability matrix and then (ii) using probability 
>> of symbol-error and probability of correct-mr2 in fort.40 to update the 
>> erasure and insertion probability matrices. I iterated this procedure until 
>> it converged (2 or 3 runs). After tuning, the algorithm was run with the mr2 
>> insertion probability scaling factor set to 0.0 (no mr2syms) and again with 
>> the scaling factor set to 0.3 (does some mr2sym insertions). For the latter 
>> case, it is necessary to set the calc_syn parameter to 1 in the call to 
>> decode_rs.
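>> 
>> In code form the iteration is roughly the following (a sketch only; 
>> NLEV and run_simulation are placeholders for the real decoder pass 
>> and the statistics read back from fort.40):
>> 
>> #include <stdio.h>
>> 
>> enum { NLEV = 7 };   /* symbol-quality levels, illustrative */
>> 
>> /* Stand-in: run the decoder over the test set with the given
>>  * erasure/insertion probabilities and report the measured per-level
>>  * symbol-error and correct-mr2 rates.  Dummy body so the example runs. */
>> static void run_simulation(const double era[], const double ins[],
>>                            double perr[], double pmr2[])
>> {
>>     for (int k = 0; k < NLEV; k++) {
>>         perr[k] = 0.5 * (era[k] + 0.5);
>>         pmr2[k] = 0.5 * (ins[k] + 0.2);
>>     }
>> }
>> 
>> int main(void)
>> {
>>     double era[NLEV] = {0}, ins[NLEV] = {0};  /* arbitrary starting point */
>>     for (int iter = 0; iter < 3; iter++) {    /* converged in 2-3 passes */
>>         double perr[NLEV], pmr2[NLEV];
>>         run_simulation(era, ins, perr, pmr2);
>>         for (int k = 0; k < NLEV; k++) {
>>             era[k] = perr[k];   /* next erasure prob = measured p(error) */
>>             ins[k] = pmr2[k];   /* next insertion prob = measured p(mr2 ok) */
>>         }
>>     }
>>     printf("tuned era[0] = %.3f, ins[0] = %.3f\n", era[0], ins[0]);
>>     return 0;
>> }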
>> 
>> sf metrics with no mr2syms (0.0):
>> 1. 835 with ntrials=10000
>> 2. 733 with ntrials=1000
>> 3. 502 with ntrials=100
>> 
>> sf metrics with mrsyms (0.3):
>> 1. 839 with ntrials=10000
>> 2. 747 with ntrials=1000
>> 3. 556 with ntrials=100
>> 
>> jt metrics with no mr2syms (0.0):
>> 1. 836 with ntrials=10000
>> 2. 733 with ntrials=1000
>> 3. 533 with ntrials=100
>> 
>> jt metrics with mrsyms (0.3):
>> 1. 837 with ntrials=10000
>> 2. 745 with ntrials=1000
>> 3. 560 with ntrials=100
>> 
>> Conclusions:
>> - No significant differences between sf and jt symbol metrics, provided that 
>> proper erasure/mr2-insertion probabilities are used.
>> - The mr2sym insertion does improve results significantly at very small 
>> ntrials values, but see item 2 below.
>> 
>> 2. I did some execution-time runs and I can verify your assertion that 
>> setting calc_syn=1 in the call to decode_rs (which is necessary when mr2sym 
>> insertion is done) increases execution time by a factor of approximately 
>> 1.6, which is why the option to turn off the syndrome calculation was put 
>> in. Thus, the conclusion here is that the added overhead necessary to do 
>> mr2sym insertion is not worth it, at least for the idealized no-fading 
>> simulated signals analyzed in these tests.
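>> 
>> For the record, the saving is possible because the syndromes depend 
>> only on the received word, not on the erasure locations; when no 
>> mr2sym insertions are made, the received word is identical on every 
>> trial, so the syndromes can be computed once and reused. A generic 
>> sketch of the caching pattern (not the actual decode_rs internals):
>> 
>> #include <stdbool.h>
>> #include <stdio.h>
>> 
>> enum { NN = 63, NROOTS = 51 };       /* (63,12) Reed-Solomon code */
>> 
>> typedef struct { int syn[NROOTS]; bool valid; } syn_cache;
>> 
>> /* One stochastic trial.  The expensive syndrome step runs only when
>>  * the received word may have changed since the previous trial. */
>> static void decode_trial(const int rxdat[NN], syn_cache *c, bool calc_syn)
>> {
>>     if (calc_syn || !c->valid) {
>>         for (int i = 0; i < NROOTS; i++)
>>             c->syn[i] = rxdat[0];    /* placeholder for the real GF math */
>>         c->valid = true;
>>     }
>>     /* ... Berlekamp-Massey and Chien search would use c->syn ... */
>> }
>> 
>> int main(void)
>> {
>>     int rxdat[NN] = {0};
>>     syn_cache c = { .valid = false };
>>     decode_trial(rxdat, &c, false);  /* first trial computes the syndromes */
>>     decode_trial(rxdat, &c, false);  /* later trials reuse them */
>>     printf("syn[0] = %d\n", c.syn[0]);
>>     return 0;
>> }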
>> 
>> It’s striking that the results of all 4 experiments are essentially the same 
>> at ntrials=10000. This suggests to me that we are decoding all of the cases 
>> that are admitted by our nhard/nsoft-based selection criteria. I suppose 
>> that more work could be done to try to refine these criteria. In fact, I 
>> should mention that I am having second thoughts about my nsoft metric. I 
>> can’t help thinking that it should be based on log(p1) instead of p1 - so I 
>> may come back to that eventually. Before that, I think that I’d like to see 
>> what happens to the erasure probabilities if we tune on some real data.
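>> 
>> The appeal of log(p1) is that a sum of log(p1) over the symbols is 
>> the log of the joint probability that they are all correct, whereas 
>> a sum of raw p1 values has no such interpretation. A toy comparison 
>> (the numbers are made up):
>> 
>> #include <math.h>
>> #include <stdio.h>
>> 
>> int main(void)
>> {
>>     /* Made-up per-symbol probabilities that the hard decision is right. */
>>     double p1[5] = {0.9, 0.8, 0.5, 0.3, 0.9};
>>     double sum_p = 0.0, sum_logp = 0.0;
>>     for (int i = 0; i < 5; i++) {
>>         sum_p    += p1[i];
>>         sum_logp += log(p1[i]);
>>     }
>>     printf("sum p1 = %.2f, sum log(p1) = %.2f, joint p = %.3f\n",
>>            sum_p, sum_logp, exp(sum_logp));
>>     return 0;
>> }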
>> 
>> Finally, I am able to compile and run WSJT-X r5950 on OS X. Seems to work 
>> fine, although I only got 448 decodes. Perhaps this is due to the mismatched 
>> metrics. I’ll play around with it.
>> 
>> Steve k9an


------------------------------------------------------------------------------
_______________________________________________
wsjt-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/wsjt-devel
