On Fri, Sep 7, 2012 at 1:26 AM, Kristoff Bonne <[email protected]> wrote:

> I haven't had time to look at BCH in detail but do you suggest that (64,32)BCH
> is simply 5 times (12,6)BCH or am I missing something.

I don't know much at all about BCH, but I was suggesting that a
theoretical (48,24) code is NOT the same as (and is potentially better
than) using a (12,6) code four times.  I referred to BCH only because
it apparently has variations that work on different numbers of bits at
a time (what I would call a "FEC frame"; 48 or 12 in these examples).

> Well, I will of course interleaving.
>
> Depending on option-bits in the stream-header, it will be using blocks
> of 1 (i.e. only inside the frame) or 2, 4 or 16 frames. The latter modes
> will greatly increase latency, but is designed only be used for -say- 10
> meter DXing; to deal with fading.

There is obviously a tradeoff between the block size and latency.
Given a certain block size, my argument is that the optimal FEC would
treat the entire block as a single "FEC frame" (not to be confused
with codec2 frames).

Consider, for example, that we want to add 50% additional bits for FEC
and that the block size is 16 codec2 frames of 56 bits each, for a
total of 896 codec2 bits plus 448 FEC bits.  (Here I write (n,m) for a
code that protects n data bits with m FEC bits.)  One way to handle it
would be to split the 896 bits into 28 FEC frames of 32 bits each and
use a (32,16) FEC code on each one.  Let us assume that a theoretical
(32,16) code can correct three bit errors.  With 28 such FEC frames,
it should be possible to correct up to 28*3 = 84 bit errors per block,
but only if they are perfectly distributed.  Interleaving can help
distribute the bit errors more uniformly, but as the number of errors
per block approaches 84, it becomes likely that some of the FEC frames
will have more than three errors and be uncorrectable, while others
will have fewer than three and "waste" some of their ability to
correct.
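To make that "waste" concrete, here is a quick Python sketch (not a
real BCH decoder; the three-errors-per-frame capacity is just the
assumption above) that counts how many of the 28 small FEC frames
would be overwhelmed by a given error pattern:

```python
import random

# Hypothetical model: 28 FEC frames of 32 data bits each, and each
# frame can correct at most T = 3 bit errors.
FRAMES, BITS_PER_FRAME, T = 28, 32, 3

def uncorrectable_frames(error_positions):
    """Count frames holding more than T errors; those frames fail."""
    per_frame = [0] * FRAMES
    for pos in error_positions:
        per_frame[pos // BITS_PER_FRAME] += 1
    return sum(1 for n in per_frame if n > T)

random.seed(1)
total_bits = FRAMES * BITS_PER_FRAME              # 896
scattered = random.sample(range(total_bits), 84)  # 84 errors, spread out
burst = list(range(100, 184))                     # 84 errors in one burst

print(uncorrectable_frames(scattered))  # usually a handful of frames fail
print(uncorrectable_frames(burst))      # the burst wipes out 3 frames
```

Even with the 84 errors placed at random, some frames usually end up
with more than three errors while others sit below their limit, so the
block is not fully recovered despite the total being exactly 84.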

By contrast, if the entire 896 bits were treated as a single FEC frame
and a (896,448) FEC code (the same total number of FEC bits) could
correct 84 bits (the same total number of errors per block), it
wouldn't matter how the bit errors were distributed within the block.
If there were 84 or fewer errors, the entire block would be corrected
perfectly.  Note that this suggests that interleaving would be of no
value; up to 84 bit errors per block could be corrected no matter how
they were distributed.
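The contrast between the two schemes can be sketched in a few lines of
Python (again using the assumed correction capacities, three errors
per 32-bit frame versus 84 per 896-bit block, not any particular real
code):

```python
BITS_PER_FRAME, T_FRAME = 32, 3  # small-frame scheme
T_BLOCK = 84                     # hypothetical block-wide (896,448) scheme

def small_frames_ok(error_positions):
    """Decodes only if no 32-bit frame holds more than 3 errors."""
    counts = {}
    for pos in error_positions:
        frame = pos // BITS_PER_FRAME
        counts[frame] = counts.get(frame, 0) + 1
    return all(n <= T_FRAME for n in counts.values())

def big_frame_ok(error_positions):
    """Decodes whenever the block-wide total is at most 84."""
    return len(error_positions) <= T_BLOCK

burst = list(range(84))        # 84 errors bunched at the start
print(small_frames_ok(burst))  # False: the first frames are overwhelmed
print(big_frame_ok(burst))     # True: position within the block is irrelevant
```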

I can think of a few reasons that taking the large-FEC-frame technique
to an extreme might be problematic.  It might take more processing
time or memory than multiple iterations with a smaller FEC frame
would.  And if there are more errors in the block than can be
corrected (more than 84 in the example above), it might corrupt more
codec2 frames than FEC done on smaller frames would.  I suppose the
same argument could be made against interleaving bits among multiple
codec2 voice frames; a single burst of errors too long to correct
would damage more voice frames than if no interleaving were done.  I
hadn't considered it before, but maybe interleaving for FEC is part of
the reason digital radio tends to "fall off the cliff" rather than
degrading gradually when the signal gets weak.
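For completeness, here is a minimal row/column block interleaver
sketch (the 28x32 geometry is just my example above, not the stream
layout Kristoff described).  It shows both sides of the argument: a
correctable burst gets spread thinly across many FEC frames, but by
the same token an uncorrectable burst would be smeared across many
voice frames:

```python
ROWS, COLS = 28, 32  # 28 FEC frames of 32 bits, as in the example

def interleave(bits):
    # Write row by row, read column by column.
    return [bits[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(bits):
    # Inverse permutation: write column by column, read row by row.
    out = [0] * (ROWS * COLS)
    i = 0
    for c in range(COLS):
        for r in range(ROWS):
            out[r * COLS + c] = bits[i]
            i += 1
    return out

data = list(range(ROWS * COLS))
tx = interleave(data)
for i in range(100, 128):  # channel burst: 28 consecutive bits hit
    tx[i] = -1
rx = deinterleave(tx)
hits_per_row = [sum(1 for c in range(COLS) if rx[r * COLS + c] == -1)
                for r in range(ROWS)]
print(max(hits_per_row))  # 1: each FEC frame sees at most one error
```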

Steve


-- 
Steve Strobel
Link Communications, Inc.
1035 Cerise Rd
Billings, MT 59101-7378
(406) 245-5002 ext 102
(406) 245-4889 (fax)
WWW: http://www.link-comm.com
MailTo:[email protected]

_______________________________________________
Freetel-codec2 mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
