Hi Steve and all,

A few more thoughts about my idea on how we might select erasure vectors 
most likely to lead to successful decodes.  This is potentially 
important because the "right" erasure vector leads to a nearly 
instantaneous decode; nearly all of the processing time in sfrsd2 is now 
spent on failed decoding attempts with randomly selected "wrong" erasure 
vectors.  If we can tilt the balance so that likely-to-be-correct erasure 
patterns are tested before unlikely ones, processing time will be reduced.

With this motivation I compiled histograms of the number of symbol 
values retained (i.e., not erased) as a function of where each symbol 
fell in the list ranked by p1, its estimated probability of being 
correct.  Two cases are shown in a graph posted here:
http://physics.princeton.edu/pulsar/K1JT/erasures.pdf
The dotted curve shows the fraction of symbols retained at each rank 
position for all erasure patterns tried (about 2 million of them) in a 
run with 1000 simulated "transmissions" and ntrials=10000 per 
transmission.  The solid curve is the corresponding quantity for the 
much smaller number of 809 erasure patterns that led to correct decodes.
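
For concreteness, here is a minimal C sketch of how such histograms can be 
accumulated.  The names (NSYM, era[], tally_erasure_vector, and so on) are 
illustrative, not taken from sfrsd2; it assumes the erasure vector is indexed 
in rank order, so era[i] refers to the symbol at rank position i.

#include <stdio.h>

#define NSYM 63               /* symbols per JT65 codeword, RS(63,12) */

static long hist_all[NSYM];   /* times the symbol at rank i was retained, all trials */
static long hist_good[NSYM];  /* same, but only for trials that decoded correctly    */
static long ntrials_all, ntrials_good;

/* Call once per candidate erasure vector.
 * era[i]  = 1 if the symbol at rank position i was erased, 0 if retained.
 * decoded = 1 if this erasure vector produced a correct decode.          */
void tally_erasure_vector(const int era[NSYM], int decoded)
{
    ntrials_all++;
    if (decoded) ntrials_good++;
    for (int i = 0; i < NSYM; i++) {
        if (!era[i]) {
            hist_all[i]++;
            if (decoded) hist_good[i]++;
        }
    }
}

/* After the run, print the two curves: fraction retained vs. rank. */
void print_histograms(void)
{
    for (int i = 0; i < NSYM; i++) {
        double f_all  = (double)hist_all[i] / ntrials_all;
        double f_good = ntrials_good ? (double)hist_good[i] / ntrials_good : 0.0;
        printf("%2d  %.4f  %.4f\n", i + 1, f_all, f_good);
    }
}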

Several features are clearly evident in the histograms.  You can see the 
effect of our present empirically-determined boundaries at rank indexes 
i = 32 and 38.  Correct decodes generally occur with fewer retained 
symbols (more erasures) in the region i > 38 than for trials that did 
not decode.  This suggests increasing the erasure probability for large i.

We should probably soften the present step-wise boundary at i=38, for 
which there is no physical motivation.
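
Just as a sketch of what "softening" might look like (all constants here are 
placeholders to be tuned against the histograms, not values from sfrsd2), the 
per-rank erasure probability could follow a logistic ramp centered near i = 38 
instead of a hard step:

#include <math.h>

#define PERA_LO 0.20   /* erasure probability for the best-ranked symbols  */
#define PERA_HI 0.90   /* erasure probability for the worst-ranked symbols */
#define I0      38.0   /* center of the transition                         */
#define W        4.0   /* width of the transition, in rank positions       */

/* Probability of erasing the symbol at rank position i (i = 1 is the
 * symbol with the highest p1).  Rises smoothly from PERA_LO to PERA_HI
 * through a logistic transition centered at I0.                        */
double erasure_probability(int i)
{
    double s = 1.0 / (1.0 + exp(-((double)i - I0) / W));
    return PERA_LO + (PERA_HI - PERA_LO) * s;
}

A symbol at rank i would then be erased in a given trial whenever a uniform 
random deviate falls below erasure_probability(i).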

We might also look at how these histograms change when the trials are 
restricted to distinct ranges of p2/p1.
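
As a sketch of the bookkeeping only (the choice of four equal-width bins is 
arbitrary, not anything currently in sfrsd2), one could add a p2/p1 bin index 
and keep a separate pair of histograms per bin:

#define NBINS 4   /* number of p2/p1 ranges; an arbitrary illustrative choice */

/* Map a symbol's p2/p1 ratio (in [0,1]) to one of NBINS equal-width bins. */
int ratio_bin(double p1, double p2)
{
    double r = (p1 > 0.0) ? p2 / p1 : 1.0;
    int k = (int)(r * NBINS);
    return (k < NBINS) ? k : NBINS - 1;
}

/* The counters then become hist_all[NBINS][NSYM] and hist_good[NBINS][NSYM],
 * incremented exactly as before but indexed by ratio_bin(p1, p2) as well.  */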

Comments?

        -- Joe
