On 13/09/2015 12:55, Michael Black wrote:

Hi Mike,
> JT9 is multithreaded.  Printf() is thread-safe but it is not atomic.  So you 
> need to use a mutex to access it safely from multiple threads.
I don't think that is relevant in this case. KVASD is only executed for 
JT65 decodes, and each decoder is single-threaded. The JT65 and JT9 
decoders do run concurrently in separate threads.

Also, KVASD is executed via the Fortran SYSTEM() intrinsic, so it runs in 
a separate process. If there were any contention for a common log file 
then some sort of interprocess synchronization would be required, but I 
doubt that is necessary in this case; maybe there is some overlap between 
process exit and flushing of buffers, but that seems unlikely too. Making 
sure file handles are closed before exit in sfrsd should sort that out 
if it is happening.
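For what it's worth, here is a minimal sketch of the kind of cleanup I 
mean, assuming sfrsd opens /tmp/sfrsd.log with fopen(); the names are 
only illustrative, not the actual sfrsd code:

    #include <stdio.h>
    #include <stdlib.h>

    static FILE *logfp = NULL;

    /* Open the log once; append mode so separate invocations do not
       clobber each other's output. */
    static void open_log(void)
    {
        logfp = fopen("/tmp/sfrsd.log", "a");
    }

    /* Flush and close before the process exits, so buffered output is
       not lost or left half-written at exit time. */
    static void close_log(void)
    {
        if (logfp) {
            fflush(logfp);
            fclose(logfp);
            logfp = NULL;
        }
    }

    int main(void)
    {
        open_log();
        atexit(close_log);           /* also covers early exit() paths */
        if (logfp)
            fprintf(logfp, "sfrsd starting\n");
        /* ... decoder work ... */
        return 0;
    }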
> Mike W9MDB
73
Bill
G4WJS.
>
> -----Original Message-----
> From: Steven Franke [mailto:[email protected]]
> Sent: Saturday, September 12, 2015 11:59 PM
> To: WSJT software development
> Subject: Re: [wsjt-devel] update on sfrsd
>
> Joe and Bill,
>
> I am running wsjt-x (wsjtx_exp branch) using my decoder program (sfrsd) 
> renamed to kvasd. It seemed to work OK at first, although the decodes that 
> wsjt-x displays are shown in a different order than they are when I use the 
> “real” kvasd.
>
> Now, in addition to the results coming out in a different order, I have 
> started to get segmentation faults and strange errors. This started after I 
> significantly increased the amount of data that I am writing to a log file 
> (/tmp/sfrsd.log) using fprintf statements from within sfrsd. I notice that 
> the contents of the log file are garbled - as if more than one instance of 
> the program was trying to write to it at the same time. Are you spawning 
> multiple instances of kvasd? Is there anything that I need to do to my 
> program to make it safe to run when called from wsjt-x?
>
> Steve k9an
>
>> On Sep 12, 2015, at 4:02 PM, Steven Franke <[email protected]> wrote:
>>
>> Joe,
>>
>> FYI, examining the results a bit more, I notice that sfrsd decoded cases 
>> containing as many as 38 bad symbols (out of 63). I haven’t done the same 
>> study for kvasd - it doesn’t return the total number of errors, 
>> unfortunately.
>>
>> More important, I also notice that the vast majority of the successful 
>> decodes take fewer than 100 trials (that’s the number of erasure locator 
>> vectors that we have to try before we get lucky and hit one that will 
>> decode) - whereas the cutoff is set to 10000 at the moment. This points the 
>> way toward achieving a very significant speedup. Note the following:
>>
>> Total number of calls to sfrsd: 1408
>> Total number of successful decodes: 104
>>
>> Of the 1304 unsuccessful decodes, a large fraction (probably 90%) consists of 
>> long strings of nearly identical rx vectors. I am using your suggestion to 
>> rename sfrsd to kvasd for the purpose of running these tests, so your birdie 
>> removal should be operative. I have seen as many as 15 un-decodable and 
>> identical rx vectors from one file. Since most of the successful decodes 
>> finish after fewer than 100 trials, whereas every single un-decodable one 
>> will run out the full 10000 trials, much less than 1% of the execution time 
>> is actually spent on decodable vectors. So there’s a lot to gain if we can 
>> figure out a way to reduce the number of unnecessary calls to the decoder.
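>> One cheap thing to try would be to remember rx vectors that have already
>> run out of trials and skip exact repeats before entering the trial loop
>> at all. A rough sketch (not what sfrsd does today; this catches exact
>> repeats only, so the "nearly identical" cases would need a small
>> Hamming-distance test instead of memcmp):
>>
>>     #include <string.h>
>>
>>     #define NN 63          /* JT65 codeword length in symbols     */
>>     #define MAXSEEN 64     /* how many failed vectors to remember */
>>
>>     static unsigned char seen[MAXSEEN][NN];
>>     static int nseen = 0;
>>
>>     /* Return 1 if this received vector has already failed, so the
>>        caller can skip the expensive 10000-trial loop entirely.    */
>>     static int already_failed(const unsigned char rx[NN])
>>     {
>>         for (int i = 0; i < nseen; i++)
>>             if (memcmp(seen[i], rx, NN) == 0)
>>                 return 1;
>>         return 0;
>>     }
>>
>>     /* Remember a vector that ran out of trials without decoding. */
>>     static void remember_failure(const unsigned char rx[NN])
>>     {
>>         if (nseen < MAXSEEN)
>>             memcpy(seen[nseen++], rx, NN);
>>     }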
>>
>> As I noted in the last message, I have a long list of things that I can do 
>> to speed up the execution time, but none of them promise the huge 
>> improvement available if we can cut out a bunch of those bad rx vectors. I 
>> can certainly look at this --- I don’t mean to throw it in your lap --- but 
>> I’m guessing that you may already have some insight into what’s going on.
>>
>> Steve k9an
>>
>>> On Sep 12, 2015, at 1:04 PM, Steven Franke <[email protected]> wrote:
>>>
>>> Hi Joe -
>>> Since you asked about where the sfrsd routine stands with respect to kvasd, 
>>> I did a quick test this morning. Before I ran the test, I found and fixed a 
>>> fundamental flaw in the sfrsd routine that significantly improved its 
>>> performance. Then I did a run with the number of trials set to 10000, which 
>>> makes it very slow (it takes a long time to time out) but I wanted to see 
>>> where we stand in terms of decoding probability, irrespective of decoding 
>>> time. The results are encouraging. Here’s a brief summary of the results 
>>> from my set of 212 .wav files:
>>>
>>> baseline (BM HDD only): 618 decodes
>>> kvasd: 104 decodes (this is above and beyond the 618 BM decodes)
>>> sfrsd: 104 decodes (including 2 bad decodes)
>>>
>>> kvasd decoded 14 cases that were not decoded by sfrsd.
>>> sfrsd decoded 12 cases (not counting 2 bad ones) that were not
>>> decoded by kvasd.
>>>
>>> This result is obtained using a very simple-minded algorithm, which can be 
>>> described in just a few lines:
>>>
>>> 1. sort the symbol probabilities
>>> 2. among the 51 smallest symbol probabilities, randomly select a subset and 
>>> set them as “erasures”
>>> 3. try to decode with BM
>>> 4. if decode succeeds, quit. Otherwise go to 2.
>>>
>>> This algorithm only makes use of the symbol probabilities to identify the 
>>> 51 symbols that are erasure candidates. Obviously, one can do a better job 
>>> by using the symbol quality to determine the probability of setting that 
>>> symbol as an erasure. That’s the next step.
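>>>
>>> For concreteness, the trial loop in steps 1-4 looks roughly like the
>>> sketch below. bm_decode() stands in for the KA9Q errors-and-erasures
>>> Berlekamp-Massey decoder, and erasing each candidate with probability
>>> about 1/2 is just an illustrative choice - none of this is the literal
>>> sfrsd source:
>>>
>>>     #include <stdlib.h>
>>>
>>>     #define NN 63            /* codeword length (symbols)          */
>>>     #define NCAND 51         /* least-reliable symbols considered  */
>>>     #define MAXTRIALS 10000
>>>
>>>     /* Stand-in for the KA9Q BM errors-and-erasures decoder: returns
>>>        the number of corrected symbols, or -1 on decoding failure. */
>>>     int bm_decode(unsigned char codeword[NN], const int eras_pos[],
>>>                   int no_eras);
>>>
>>>     /* indices[] lists symbol positions sorted from least to most
>>>        reliable (step 1). Returns 0 on success with the decoded
>>>        word left in work[].                                        */
>>>     int sfrsd_style_trials(const unsigned char rx[NN],
>>>                            const int indices[NN],
>>>                            unsigned char work[NN])
>>>     {
>>>         for (int trial = 0; trial < MAXTRIALS; trial++) {
>>>             int eras_pos[NCAND];
>>>             int no_eras = 0;
>>>
>>>             /* Step 2: randomly erase a subset of the 51 least
>>>                reliable symbols.                                 */
>>>             for (int k = 0; k < NCAND; k++)
>>>                 if (rand() & 1)
>>>                     eras_pos[no_eras++] = indices[k];
>>>
>>>             /* Step 3: attempt an errors-and-erasures BM decode. */
>>>             for (int i = 0; i < NN; i++)
>>>                 work[i] = rx[i];
>>>             if (bm_decode(work, eras_pos, no_eras) >= 0)
>>>                 return 0;       /* step 4: success, quit */
>>>         }
>>>         return -1;              /* ran out of trials     */
>>>     }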
>>>
>>> I am not too worried about the bad decodes at this stage. The algorithm 
>>> should be enhanced such that it saves a list of all of the codewords 
>>> produced by the BM algorithm. It should then select the codeword with the 
>>> best metric (e.g. some measure of distance from the received symbol 
>>> vector). It should also apply a threshold to reject codewords that are too 
>>> “far” away from the received codeword. I have not implemented any of that 
>>> yet.
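>>>
>>> Just to illustrate the kind of bookkeeping I have in mind (plain
>>> symbol-wise Hamming distance as the metric, and the threshold left as
>>> a parameter - neither choice is settled):
>>>
>>>     #define NN 63
>>>
>>>     /* Symbol-wise Hamming distance between a candidate codeword
>>>        and the received hard-decision vector.                    */
>>>     static int symbol_distance(const unsigned char cw[NN],
>>>                                const unsigned char rx[NN])
>>>     {
>>>         int d = 0;
>>>         for (int i = 0; i < NN; i++)
>>>             if (cw[i] != rx[i])
>>>                 d++;
>>>         return d;
>>>     }
>>>
>>>     /* Given ncand candidate codewords collected from the BM trials,
>>>        return the index of the closest one, or -1 if even the best
>>>        candidate is more than dmax symbols from the received vector. */
>>>     static int best_candidate(const unsigned char cand[][NN], int ncand,
>>>                               const unsigned char rx[NN], int dmax)
>>>     {
>>>         int best = -1, dbest = NN + 1;
>>>         for (int i = 0; i < ncand; i++) {
>>>             int d = symbol_distance(cand[i], rx);
>>>             if (d < dbest) {
>>>                 dbest = d;
>>>                 best = i;
>>>             }
>>>         }
>>>         return (dbest <= dmax) ? best : -1;
>>>     }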
>>> Steve k9an
>>>
>>>> On Sep 11, 2015, at 12:11 PM, Steven Franke <[email protected]> wrote:
>>>>
>>>> Hi Joe,
>>>>
>>>> Re the sfrsd program. Please regard this as merely a framework for future 
>>>> development. Nothing in the current code is optimized and I am almost 
>>>> certain that I can do better. I think that it is premature to do any 
>>>> quantitative performance comparisons with KV.  I will say that it was very 
>>>> encouraging to see how easy it is to come up with something that provides 
>>>> useful gains via very simple use of soft-symbol information.
>>>>
>>>> Regarding the prospects for improving on KV - long story short - I am 
>>>> convinced that the class of algorithms that I am playing with can be tuned 
>>>> to be competitive with KV if we are willing to wait long enough for the 
>>>> algorithm to run to completion. What I don’t know yet is if it will be 
>>>> possible to come up with something that runs fast enough to be useful.
>>>>
>>>> My next step will be to pick apart the existing KA9Q Berlekamp-Massey code 
>>>> so that we can eliminate some redundant calculations. In particular, I 
>>>> recently realized that the syndromes only have to be calculated once for 
>>>> each received symbol vector, whereas I am now re-calculating them 
>>>> thousands of times in the main loop. I plan to implement the current 
>>>> sfrsd algorithm with this improvement and then move on to improving the 
>>>> algorithm. I’ll let you know when I’ve got it to a point where I think 
>>>> that it’s worth comparing to KV.
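>>>>
>>>> To make concrete what I mean by "only once": the syndromes are just the
>>>> received polynomial evaluated at the code's roots, so they depend only
>>>> on the received vector and not on the erasure pattern, and can be
>>>> computed up front along the lines of the generic GF(64) sketch below.
>>>> This is not the KA9Q code itself; the table layout, the primitive
>>>> polynomial, and the choice of first root are only illustrative:
>>>>
>>>>     #define NN 63          /* RS(63,12) over GF(64)    */
>>>>     #define NROOTS 51      /* number of parity symbols */
>>>>
>>>>     static unsigned char alpha_to[NN + 1], index_of[NN + 1];
>>>>
>>>>     /* Build GF(64) antilog/log tables from the primitive polynomial
>>>>        x^6 + x + 1 (a common choice, used here purely for
>>>>        illustration).                                              */
>>>>     static void init_gf(void)
>>>>     {
>>>>         unsigned char x = 1;
>>>>         for (int i = 0; i < NN; i++) {
>>>>             alpha_to[i] = x;
>>>>             index_of[x] = i;
>>>>             x <<= 1;
>>>>             if (x & 0x40)
>>>>                 x ^= 0x43;
>>>>         }
>>>>         index_of[0] = 0;  /* log(0) undefined; gf_mul() guards it */
>>>>     }
>>>>
>>>>     /* GF(64) multiply via the log/antilog tables. */
>>>>     static unsigned char gf_mul(unsigned char a, unsigned char b)
>>>>     {
>>>>         if (a == 0 || b == 0)
>>>>             return 0;
>>>>         return alpha_to[(index_of[a] + index_of[b]) % NN];
>>>>     }
>>>>
>>>>     /* Compute the NROOTS syndromes once per received symbol vector
>>>>        by evaluating the received polynomial at alpha^(j+1); every
>>>>        erasure trial can then reuse them. (First consecutive root
>>>>        taken as alpha^1 here purely for illustration.)             */
>>>>     void compute_syndromes(const unsigned char rx[NN],
>>>>                            unsigned char synd[NROOTS])
>>>>     {
>>>>         for (int j = 0; j < NROOTS; j++) {
>>>>             unsigned char s = 0;
>>>>             unsigned char x = alpha_to[(j + 1) % NN]; /* alpha^(j+1) */
>>>>             unsigned char xpow = 1;                   /* x^i         */
>>>>             for (int i = 0; i < NN; i++) {
>>>>                 s ^= gf_mul(rx[i], xpow);
>>>>                 xpow = gf_mul(xpow, x);
>>>>             }
>>>>             synd[j] = s;
>>>>         }
>>>>     }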
>>>>
>>>> By the way, one thing that I’ve already noticed is that some files produce 
>>>> lots of identical un-decodable received symbol vectors. Since I have to 
>>>> try a whole bunch of virtual received signal vectors before I can decide 
>>>> that these are un-decodable, my decoder bogs down on these whereas KV 
>>>> seems to be able to blow right past them. I’m guessing that these events 
>>>> result from interference, or birdies - I can’t think of anything else that 
>>>> would produce a slew of identical vectors. It may eventually be necessary 
>>>> to figure out a way to identify these events before they are sent to the 
>>>> decoder.
>>>>
>>>> Re JTMSK - I am very interested in it, but I haven’t had time to look at 
>>>> what you are doing there. If you have carefully read the relevant sections 
>>>> of Proakis, then you certainly know more about CPFSK techniques than I do. 
>>>> I am aware of the fact that it should be possible, in principle, to do 
>>>> some form of block demodulation, perhaps using the Viterbi algorithm, as 
>>>> opposed to symbol-by-symbol demodulation. While this could produce 
>>>> significant gains, it would seem that this approach would require the 
>>>> channel to be reasonably well behaved (i.e. no significant frequency 
>>>> offset or random phase walks), no?  Off-list I’ll send you a manuscript 
>>>> that does some comparisons of different demodulation schemes. I haven’t 
>>>> yet had time to distill it down to any simple wisdom - maybe you’ll get 
>>>> something useful out of it.
>>>>
>>>> 73, Steve k9an
>>>>
>>>>
>>>>
>>>> On Sep 11, 2015, at 8:54 AM, Joe Taylor <[email protected]> wrote:
>>>>> Hi Steve,
>>>>>
>>>>> I finally found time for a quick look at "sfrsd", your drop-in
>>>>> replacement for kvasd that uses the stochastic Chase algorithm.
>>>>>
>>>>> I compiled and ran it on a few test cases this morning, and (as
>>>>> will be no surprise to you) found that it works, and it succeeds in
>>>>> some cases where the hard-decision Berlekamp-Massey algorithm
>>>>> fails.  Just like kvasd... and no patents, non-disclosure
>>>>> agreements, or non-open source code to worry about!  Very impressive!!
>>>>>
>>>>> I have not yet done any thorough tests of sensitivity relative to
>>>>> the Koetter-Vardy algorithm.  Have you?  Are there any theoretical
>>>>> reasons to expect that it should be (or not be) as good as K-V?
>>>>>
>>>>> I will do further tests, as time permits.
>>>>>
>>>>>
>>>>> On another matter, somewhat along the same lines:
>>>>>
>>>>> Have you looked at all at the way I am presently decoding the MSK
>>>>> signals in JTMSK?  Especially since about r5848, I think the
>>>>> decoder is pretty good; but since the demodulation process does not
>>>>> yet take full advantage of signal coherency or the inherent 
>>>>> symbol-to-symbol "memory"
>>>>> in MSK modulation, I think a still better decoder is possible.
>>>>>
>>>>> From reading Proakis I'm aware that (quite apart from the Viterbi
>>>>> algorithm used to decode the K=13, r=1/2 convolutional code of
>>>>> JTMSK) a Viterbi algorithm could also be used as an optimum
>>>>> receiver for these continuous-phase signals.  I also understand
>>>>> that using modulation index h=0.715 (rather than the h=0.5 of MSK)
>>>>> might have advantages while still requiring a bandwidth of no more
>>>>> than 2 kHz.
>>>>>
>>>>> Do you have any opinions here?  Any interest in looking into these
>>>>> questions further?
>>>>>
>>>>>   -- 73, Joe, K1JT


