Joe - 

I’ve just committed sfrsd3.c. This uses the probability of error derived from 
your fort.40 data to set the erasure probabilities. I modified extract2.f90 to 
print the 8x8 array and then imported it directly into sfrsd3.c. I manually 
edited the probabilities in the regions that had zeros and also changed a 
couple of the other numbers, but the important part of the array comes directly 
from fort.40. 
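To make the idea concrete, here is a minimal C sketch of such a table lookup. The values and the meaning of the two bin indices are assumptions for illustration only (the real entries in sfrsd3.c come from the fort.40 statistics, and these are not those numbers):

```c
#include <stdio.h>

#define NBINS 8

/* Placeholder table: every entry set to the same prior. In sfrsd3.c the
 * real values come from the fort.40 error statistics; these are NOT
 * those numbers. */
static float erasure_prob[NBINS][NBINS];

void init_table(float p0) {
    for (int i = 0; i < NBINS; i++)
        for (int j = 0; j < NBINS; j++)
            erasure_prob[i][j] = p0;
}

/* Clamp the two bin indices and look up the erasure probability for one
 * received symbol. What the indices measure (e.g. a symbol-metric rank
 * vs. a ratio of the top two metrics) is an assumption here. */
float get_erasure_prob(int i, int j) {
    if (i < 0) i = 0; else if (i >= NBINS) i = NBINS - 1;
    if (j < 0) j = 0; else if (j >= NBINS) j = NBINS - 1;
    return erasure_prob[i][j];
}
```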

I found that if I just multiply that array by a factor (1.1 is used in the 
sfrsd3 that I just committed), then it works well. 

The results are:

  factor  good  bad
  1.1      823    2
  1.2      832    5
  1.3      842    7

Thus, the version that I committed, with the multiplicative factor set to 1.1, 
gives 823 good decodes and 2 bad. I just did this quickly, and there are a few 
things to look at. First, the larger multiplicative factors cause some 
probabilities to exceed 1, which means that symbols in those regions will 
*always* be erased. Maybe we should cap the probability at 0.95 or something… 
Also, the number of bad decodes increases significantly as we increase the 
factor.  
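The scale-and-cap step could look something like this sketch (the 0.95 cap is the suggested value above, 1.1 the committed factor; the function name is illustrative, not from sfrsd3.c):

```c
#define NBINS 8
#define PCAP  0.95f

/* Scale every entry of the erasure-probability table by a multiplicative
 * factor, but clamp at PCAP so that no bin's symbols are *always* erased,
 * even when factor * p would exceed 1. */
void scale_and_cap(float p[NBINS][NBINS], float factor) {
    for (int i = 0; i < NBINS; i++)
        for (int j = 0; j < NBINS; j++) {
            p[i][j] *= factor;
            if (p[i][j] > PCAP) p[i][j] = PCAP;
        }
}
```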

In any case, I like this approach, as the algorithm is self-tuning in the sense 
that we could start out with all entries in the array = 0.5, say, and then 
iteratively refine the array to get to where we are now, with no guesswork.
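One way the refinement step might be sketched: tally, per bin, how often the received symbol turned out to be wrong (as was done with fort.40), then replace each cell with that observed frequency. All names here are illustrative, not from sfrsd3.c:

```c
#define NBINS 8

/* Self-tuning sketch: start the table flat (say 0.5 everywhere), then
 * replace each cell with the symbol-error frequency actually observed
 * in that bin, tallied by comparing decoded codewords against the
 * received symbols. */
void refine_table(float p[NBINS][NBINS],
                  const int errors[NBINS][NBINS],
                  const int total[NBINS][NBINS]) {
    for (int i = 0; i < NBINS; i++)
        for (int j = 0; j < NBINS; j++)
            if (total[i][j] > 0)   /* leave unobserved bins at their prior */
                p[i][j] = (float)errors[i][j] / (float)total[i][j];
}
```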

I won’t have time to play with this any more until this evening - but another 
thing that I want to try is to use the pmr2 array to set the probabilities of 
inserting an mr2…

Steve k9an


------------------------------------------------------------------------------
_______________________________________________
wsjt-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/wsjt-devel
