Hi Steve,

Congratulations -- those are really good results using the 8x8 
probability matrix.

With revision 5942 I have made minor adjustments to the criteria used 
for rejecting potentially bad decodes.  It gives 837/1000 good decodes 
and NO bad decodes with ntrials=10000.

I have put this best-yet code in sfrsd2.c.

        -- Joe

On 9/29/2015 12:26 PM, Steven Franke wrote:
> Joe,
>
> One more comment - I think I may have misinterpreted the good/bad
> columns in the rsdtest output. I interpreted the first number as total number 
> of decodes and the second number as the number of bad decodes. So I had been 
> subtracting the second from the first to get “good” decodes. I see now that 
> the first number *is* the number of good decodes — so, with the 
> multiplicative factor set to 1.3, as a function of ntrials:
>
>    ntrials   good   bad
>       1000    744     1
>       2500    785     4
>       5000    823     6
>      10000    849     7
>
> Execution time is proportional to ntrials, so we could drop back to 
> ntrials=5000 and still be on par with kvasd. Just need to figure out how to 
> reject the bad ones.
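>
> The linear scaling follows from the shape of the trial loop. Schematically
> (decode_rs() and the other names here are stand-ins for the actual sfrsd
> routines, and 63 is the JT65 block length):
>
>    #include <stdlib.h>
>
>    /* stand-in for the errors-and-erasures RS decoder: 0 on success */
>    extern int decode_rs(const int rx[63], const int era_pos[63],
>                         int nera, int out[63]);
>
>    /* Each trial draws one random erasure pattern from the per-symbol
>       erasure probabilities and attempts one decode, so run time grows
>       linearly with ntrials. */
>    int try_decode(const int rx[63], const float perasure[63],
>                   int ntrials, int out[63])
>    {
>        for (int k = 0; k < ntrials; k++) {
>            int era_pos[63], nera = 0;
>            for (int i = 0; i < 63; i++)
>                if ((float)rand() / RAND_MAX < perasure[i])
>                    era_pos[nera++] = i;
>            if (decode_rs(rx, era_pos, nera, out) == 0)
>                return k;        /* decoded on trial k */
>        }
>        return -1;               /* no codeword found within ntrials */
>    }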
>
> Steve k9an
>
>> On Sep 29, 2015, at 11:09 AM, Steven Franke <[email protected]> wrote:
>>
>> Joe -
>>
>> I’ve just committed sfrsd3.c. This uses the probability of error derived 
>> from your fort.40 data to set the erasure probabilities. I modified 
>> extract2.f90 to print the 8x8 array out and then imported it directly into 
>> sfrsd3.c. I manually edited the probabilities in the regions that had zeros 
>> and also changed a couple of the other numbers - but the important part of 
>> the array is directly from fort.40.
>>
>> I found that if I just multiply that array by a factor (1.1 is used in the 
>> sfrsd3 that I just committed), then it works well.
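>>
>> In code form the scaling step is trivial - this is just a sketch with
>> illustrative names, where p0[8][8] stands for the fort.40-derived table
>> and bin1[]/bin2[] for whatever pair of quantized quality measures
>> indexes it:
>>
>>    /* Per-symbol erasure probability: look up the measured symbol-error
>>       rate for this symbol's pair of quality bins, then scale it. */
>>    void set_erasure_probs(float perasure[63], float p0[8][8],
>>                           const int bin1[63], const int bin2[63],
>>                           float factor)
>>    {
>>        for (int n = 0; n < 63; n++)
>>            perasure[n] = factor * p0[bin1[n]][bin2[n]];
>>    }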
>>
>> The results are:
>>
>>    factor   good   bad
>>       1.1    823     2
>>       1.2    832     5
>>       1.3    842     7
>>
>> Thus, the version that I committed with the multiplicative factor set to 1.1 
>> gives 823 good decodes and 2 bad. I just did this quickly. There are a few 
>> things to look at. First, the larger multiplicative factors cause some 
>> probabilities to exceed 1, which means that symbols in those regions will 
>> *always* be erased. Maybe we should cap the probability at 0.95 or 
>> something… Also, the number of bad decodes is increasing significantly as we 
>> increase the factor.
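>>
>> The cap itself would be one line wherever the erasure probabilities are
>> filled in (using the same illustrative perasure[] as in the sketch
>> above, not necessarily what's in sfrsd3.c):
>>
>>    if (perasure[n] > 0.95f)
>>        perasure[n] = 0.95f;   /* never erase a symbol with certainty */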
>>
>> In any case, I like this approach, as the algorithm is self-tuning in the 
>> sense that we could start out with all entries in the array = 0.5, say, and 
>> then iteratively refine the array to get to where we are now, with no 
>> guesswork.
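>>
>> A refinement pass would amount to re-estimating each cell of the table
>> from error counts gathered on the training files - hypothetical
>> bookkeeping, nothing like this is committed yet:
>>
>>    /* One refinement pass over the 8x8 table: replace each cell by the
>>       observed fraction of symbols in that quality bin that were
>>       received in error.  Cells with no observations keep their prior
>>       (e.g. the initial 0.5). */
>>    void refine_table(float p0[8][8], long nerr[8][8], long ntot[8][8])
>>    {
>>        for (int i = 0; i < 8; i++)
>>            for (int j = 0; j < 8; j++)
>>                if (ntot[i][j] > 0)
>>                    p0[i][j] = (float)nerr[i][j] / (float)ntot[i][j];
>>    }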
>>
>> I won’t have time to play with this any more until this evening - but 
>> another thing that I want to try is to use the pmr2 array to set the 
>> probabilities of inserting an mr2…
>>
>> Steve k9an