Eric Blossom wrote:

> What's the maximum run length that you see in packets that _do_ work?
> My understanding from talking to Johnathan is that he has some
> packets that work that have 17-bit runs in them (post-whitener).
> (Not sure if they're ones or zeros.)
I've checked into the developers/jcorgan/digital branch a program,
run_length.py, that will read a binary file and output statistics for
runs of identical bits (either all zeros or all ones):

$ dd if=/dev/urandom of=rand.dat bs=1500 count=1
1+0 records in
1+0 records out
1500 bytes (1.5 kB) copied, 0.000451 seconds, 3.3 MB/s
$ ./run_length.py -f rand.dat
Using rand.dat for data.
Bytes read: 1500
Bits read: 12000
Runs of length  1 :  2994
Runs of length  2 :  1494
Runs of length  3 :   703
Runs of length  4 :   384
Runs of length  5 :   214
Runs of length  6 :   104
Runs of length  7 :    42
Runs of length  8 :    31
Runs of length  9 :     7
Runs of length 10 :     5
Runs of length 11 :     0
Runs of length 12 :     2
Sum of runs: 12000 bits
Maximum run length is 12 bits
$

So if you capture the input to the modulator as a binary file, you can
use this to probe the statistics of packets that fail and of ones that
succeed in your test cases. Of course, you need to capture the packet
after it has been whitened, *not* the input payload from the file being
sent.
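For anyone who wants to experiment without pulling the branch, below is
a rough standalone sketch of the kind of counting run_length.py does. To
be clear, this is *not* the checked-in script, just an illustrative
reimplementation, and it assumes MSB-first bit order within each byte:

#!/usr/bin/env python
# Sketch of a bit run-length counter (illustrative only, not the
# checked-in run_length.py).  Assumes MSB-first bit order within bytes.

import sys
from collections import defaultdict

def count_runs(filename):
    # bytearray iterates as integers under both Python 2 and 3
    data = bytearray(open(filename, 'rb').read())

    # Unpack each byte MSB-first into a flat list of bits.
    bits = []
    for byte in data:
        for i in range(7, -1, -1):
            bits.append((byte >> i) & 1)

    # Tally runs of consecutive identical bits.
    runs = defaultdict(int)
    prev, run_len = None, 0
    for bit in bits:
        if bit == prev:
            run_len += 1
        else:
            if prev is not None:
                runs[run_len] += 1
            prev, run_len = bit, 1
    if prev is not None:
        runs[run_len] += 1  # flush the final run

    if not runs:
        print("No bits read")
        return
    print("Bytes read: %d" % len(data))
    print("Bits read: %d" % len(bits))
    for length in range(1, max(runs) + 1):
        print("Runs of length %2d : %6d" % (length, runs.get(length, 0)))
    print("Sum of runs: %d bits" % sum(l * n for l, n in runs.items()))
    print("Maximum run length is %d bits" % max(runs))

if __name__ == '__main__':
    count_runs(sys.argv[1])

Note that the bit-order assumption matters: if the modulator consumes
bits LSB-first, runs that straddle byte boundaries will come out
differently, so make sure the unpacking matches what the modulator
actually sees.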
In the same developer branch, I've added a --from-file option to
benchmark_tx.py. This allows you to transmit a continuous stream of
packets in one direction, the contents of which are read from the given
file. This takes the entire network stack and any bidirectional traffic
issues out of the picture. You use benchmark_rx.py on the receiving end
to determine when a receive CRC error occurs. Normally you'll get some
of these due to noise, but if you run the benchmark_tx.py test over and
over again and the same packets fail on receive, then you've identified
the specific packet numbers whose contents seem to trigger the issue
people have been seeing.

I'll be adding a command-line parameter to benchmark_tx.py to log the
whitened packet data to a file suitable for use with run_length.py, so
this whole process can be automated. (Not done yet.)

In general, this issue "smells" like a pattern-specific failure in
receiver synchronization, which is usually run-length related. Yet right
now I have a failure case with a 12-bit run and a success case with a
17-bit run, so it's not clear that run length is the issue, or at least
not the whole issue.

The suggestion to use 8b/10b line coding is an interesting experiment
that could be run in parallel with other testing, but it's not clear yet
that it would be attacking the right problem. The 25% overhead such a
line code adds up front (10 transmitted bits for every 8 data bits) is a
high price to pay without conclusive evidence that it isn't just
shifting the problem elsewhere in the stream.

There is an alternative that Eric and I have conceived that would be a
temporary workaround. It would not solve the original problem, but it
would at least allow upper-level protocols that do retransmission to
recover from the failure. I'll talk about that once I've got it coded,
tested, and into my developer branch.

--
Johnathan Corgan
Corgan Enterprises LLC
http://corganenterprises.com