Hi Chris,

> Jens:
>
> Isn't the current link layer protocol high-level data link
> control?  That is a widely used data link layer, and I don't
> understand why you think it should be changed, although I'd like
> to hear your thoughts.

I don't think that HDLC should be changed - I just stated that its
use in the amateur radio domain is not appropriate.
Take the Shannon formula for channel capacity, insert 15 dB S/N and
25 kHz bandwidth: this gives a theoretical maximum capacity of about
125 kbit/s. Do you know of any amateur radio equipment which achieves
this? The answer is surely no. In practice this capacity is virtually
unreachable. However, with highly efficient channel coding (e.g. turbo
codes) you can approach that limit to within about 0.5 dB (S/N).

With our current setup, if you receive
one error in a frame, the complete frame is lost. With channel coding
you can add redundancy to a packet, which, done right, increases the
effective throughput. There is a technique called "Hybrid FEC/ARQ II",
sometimes also referred to as "Memory ARQ type II", which lets you get
the most out of a link if the equipment in use meets some requirements.
Another requirement - and this is of importance here - is that the link
layer protocol is very closely intertwined with L1. That is, instead of
saying "please transmit this frame again" you say "I need so-and-so
much more redundancy in order to decode your frame X". You start by
transmitting the least redundancy possible; if the receiver can decode
it, fine, if not, the receiver requests more redundancy for that block.
The repeated information is then merged with the data received first,
which gives you twice the protection you had when receiving it the
first time. Note that with our current scheme it is quite likely that
the second try again contains a bad bit and the frame must again be
dropped. With the type II FEC/ARQ scheme it is very, very unlikely
that the frame still cannot be decoded. To make a long story short, we
can use the channel more efficiently for bulk data transfer. This is
especially important on user access channels.
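To put numbers on this, here is a small Python sketch: it first checks the 125 kbit/s capacity figure quoted above, then simulates how combining retransmissions beats judging each copy alone. The frame size, bit error rate, and the use of plain repetition with per-bit majority voting (standing in for real type II incremental redundancy) are all invented for illustration:

```python
import math
import random

# Shannon capacity for the figures quoted above: 25 kHz bandwidth, 15 dB S/N.
bandwidth_hz = 25_000
snr_db = 15
snr_linear = 10 ** (snr_db / 10)                 # 15 dB -> ~31.6
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon capacity: {capacity_bps / 1000:.1f} kbit/s")   # ~125.7 kbit/s

# Toy comparison: plain ARQ (each retransmitted copy is judged alone)
# versus type II style combining (per-bit majority vote over all copies
# received so far).  Parameters are made up, not measured.
def simulate(frame_bits=256, ber=0.004, tries=3, trials=2000, combine=False):
    rng = random.Random(42)
    ok = 0
    for _ in range(trials):
        copies = []
        decoded = False
        for _ in range(tries):
            # 1 marks a bit received in error in this copy.
            copy = [1 if rng.random() < ber else 0 for _ in range(frame_bits)]
            copies.append(copy)
            if combine:
                # A bit is wrong only if most copies got it wrong.
                merged = [sum(c[i] for c in copies) * 2 > len(copies)
                          for i in range(frame_bits)]
                decoded = not any(merged)
            else:
                decoded = not any(copy)          # this copy must be perfect
            if decoded:
                break
        ok += decoded
    return ok / trials

print(f"plain ARQ success rate:  {simulate(combine=False):.3f}")
print(f"combining success rate:  {simulate(combine=True):.3f}")
```

Even this crude majority vote pushes the decode probability close to one after a single repeat, because a bit now has to be corrupted in most copies to stay wrong.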
The other thing is, here in Germany we are quite close to being able
to do real-time voice and video conferencing over our packet radio
network. In my opinion we need real-time support in the link layer
protocol used on these user access channels. There must be some means
of allocating bandwidth on a channel. In addition, you do not have
time to repeat any frames, so real-time traffic needs to be encoded
with a low coding rate (i.e. a lot of redundancy). To configure all
this, there must be some (logical) control channel between the user
and the master node on the channel. The resulting protocol could be
some mixture of GSM and wireless ATM.
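A minimal sketch of what such a control-channel exchange might look like. The message fields, the 10-slot frame, and the per-slot bitrate are all invented here; the point is only that the master node owns the channel and hands out capacity:

```python
from dataclasses import dataclass

# Hypothetical control-channel exchange: a user station asks the master
# node for a real-time allocation; the master grants TDMA-style slots.
# The 10-slot frame and 8 kbit/s per slot are assumptions, not a spec.

@dataclass
class AllocationRequest:
    callsign: str
    kind: str            # "voice", "video" or "bulk"
    bitrate_bps: int

@dataclass
class AllocationGrant:
    callsign: str
    slots: list          # slot indices within the frame

class MasterNode:
    SLOTS_PER_FRAME = 10
    SLOT_BITRATE = 8_000          # assumed capacity of one slot, bit/s

    def __init__(self):
        self.free = list(range(self.SLOTS_PER_FRAME))

    def allocate(self, req: AllocationRequest) -> AllocationGrant:
        needed = -(-req.bitrate_bps // self.SLOT_BITRATE)   # ceiling division
        if needed > len(self.free):
            raise RuntimeError("channel full - no slots left")
        granted, self.free = self.free[:needed], self.free[needed:]
        return AllocationGrant(req.callsign, granted)

master = MasterNode()
grant = master.allocate(AllocationRequest("DL1ABC", "voice", 13_000))
print(grant)    # -> AllocationGrant(callsign='DL1ABC', slots=[0, 1])
```

A real protocol would of course also need to release slots, time them out, and signal the chosen coding rate in the grant, but the request/grant shape stays the same.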
Moreover, user stations should be auto-aligned at logon to the net.
This includes adjusting the internal timers and channel access
parameters as well as physical (electrical) parameters such as
deviation, mean frequency, modulation type in use, and power emission.
S/N could be measured to determine the standard coding rate, and so on.
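As a sketch of that last step, picking a default coding rate from a measured S/N could be as simple as a threshold table. The thresholds and rates below are invented, not tuned values:

```python
# Hypothetical logon auto-alignment step: the master measures the user's
# S/N and picks a default coding rate from a table (thresholds invented).
RATE_TABLE = [           # (minimum S/N in dB, coding rate)
    (12.0, "3/4"),       # good link: little redundancy needed
    (6.0,  "1/2"),
    (0.0,  "1/3"),
]

def pick_coding_rate(snr_db: float) -> str:
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return "1/4"         # very poor link: maximum redundancy

print(pick_coding_rate(15.0))   # -> 3/4
print(pick_coding_rate(4.0))    # -> 1/3
```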
While we are at it, we could implement some sort of authentication
sub-protocol to keep CB guys out.
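One possibility for such a sub-protocol is challenge-response over the control channel: the node sends a random challenge and the user proves knowledge of a shared secret without ever transmitting it. The sketch below uses HMAC-SHA256 as a stand-in for whatever MAC we would standardize on, and leaves key distribution open:

```python
import hashlib
import hmac
import os

SECRET = b"shared-station-secret"     # placeholder; key distribution is open

def make_challenge() -> bytes:
    # Fresh random challenge per logon prevents simple replay.
    return os.urandom(16)

def respond(secret: bytes, challenge: bytes) -> str:
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

chal = make_challenge()
print(verify(SECRET, chal, respond(SECRET, chal)))      # True
print(verify(SECRET, chal, respond(b"wrong", chal)))    # False
```

Note this only authenticates; the traffic itself stays in the clear, which matters for amateur radio regulations.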
The list of possibilities is endless, and so is the fun of implementing
it; we'd just need to do some standardization.
Please take a look at Phil Karn's homepage
(http://people.qualcomm.com/pkarn
or something). See "Towards new link layer protocols" - although this
is slightly outdated, it's still current until we get this one fixed.

Copy to the linux-ham list for discussion. Hope you don't mind.

  -- Jens
