When I first started poking around, I found a simple comparison paper at 
http://ieeexplore.ieee.org/iel2/672/6329/00247152.pdf . It appears to require an 
IEEE membership or a university internet connection (I'm VPNed into my uni right 
now). The bottom of each page does say "U. S. Government work not protected by 
U. S. copyright." I've hated not being able to access academic papers in the 
past, so I'm half tempted to just observe the copyright notice and post it here. 

The problem with that comparison, and in fact with many real world tests, is 
that the single tone modem (STM) ends up using adaptive equalization (the STM 
wouldn't work at such a high rate without it), while the parallel tone modem 
(PTM) doesn't. Adaptive equalization is the "secret sauce" of high speed STMs; 
it's what gives them their multipath handling ability. They basically model the 
transmission channel, find the inverse transformation, and de-spread the 
multipath laden signal to form a clean one. In such tests, the STM seems to 
come out on top for anything but very local (<50 km) ground paths.
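To make the "model the channel, find the inverse" idea concrete, here's a toy 
LMS adaptive equalizer in Python/numpy. Every number in it (11 taps, the step 
size, the two-echo channel) is something I made up for illustration, not 
anything from a real modem:

```python
import numpy as np

def lms_equalize(received, training, num_taps=11, mu=0.01):
    """Adapt FIR equalizer taps so that filtering `received`
    reproduces `training` (the known transmitted symbols)."""
    taps = np.zeros(num_taps)
    for n in range(num_taps - 1, len(training)):
        window = received[n - num_taps + 1:n + 1][::-1]  # newest sample first
        y = np.dot(taps, window)          # equalizer output
        err = training[n] - y             # error vs. known symbol
        taps += mu * err * window         # LMS tap update
    return taps

# Toy multipath channel: direct ray plus two echoes strong enough
# to cause decision errors on the raw signal.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=3000)   # BPSK-like symbols
channel = np.array([1.0, 0.6, 0.5])            # direct ray + echoes
received = np.convolve(symbols, channel)[:len(symbols)]

taps = lms_equalize(received, symbols)
equalized = np.convolve(received, taps)[:len(symbols)]

raw_ber = np.mean(np.sign(received[1000:]) != symbols[1000:])
eq_ber = np.mean(np.sign(equalized[1000:]) != symbols[1000:])
print("raw BER:", raw_ber, "equalized BER:", eq_ber)
```

The equalizer converges on a near-inverse of the channel, so hard decisions on 
the equalized signal beat decisions on the raw multipath-smeared one by a wide 
margin. Real STM equalizers are fancier (fractionally spaced, decision 
feedback, etc.), but the de-spreading principle is the same.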

PTMs inherently have multipath tolerance due to their MUCH lower baud rates. 
But there's nothing that says you can't strap an adaptive equalizer onto a PTM. 
You'd get natural multipath tolerance, along with whatever multipath correction 
needed to be performed. Most current PTM modems ignore multipath altogether, so 
the late signal energy is lost at best and causes intersymbol interference at 
worst. Adding an adaptive equalizer would let PTMs recover this late signal 
energy while further reducing ISI. So we'd have the secret sauce adaptive 
equalizer along with natural multipath tolerance, instead of just one or the 
other.
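The back-of-the-envelope math on why low baud rates buy multipath tolerance is 
worth spelling out. The numbers below are my own illustrative figures (a 2 ms 
delay spread is a commonly quoted ballpark for HF skywave, and the baud rates 
are just representative), not from any particular spec:

```python
# Rough illustrative numbers, not from any standard.
delay_spread_s = 2e-3   # ~2 ms echo delay, typical HF skywave ballpark

# A single tone modem at ~2400 baud has ~0.42 ms symbols, so a 2 ms
# echo smears across several symbols -> severe ISI without equalization.
stm_baud = 2400
stm_symbols_smeared = delay_spread_s * stm_baud   # symbols of smear

# A parallel tone modem at ~75 baud per tone has ~13 ms symbols, so the
# same echo lands mostly inside one symbol -> little ISI to begin with.
ptm_baud = 75
ptm_symbols_smeared = delay_spread_s * ptm_baud   # symbols of smear

print(stm_symbols_smeared)   # smear in STM symbols
print(ptm_symbols_smeared)   # smear in PTM symbols
```

The STM sees the echo spread over nearly five symbols, while the PTM sees it 
spread over a small fraction of one. That's the "natural" tolerance, and it's 
also why the equalizer bolted onto a PTM would have much less work to do.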

It appears that the ham OFDM modes DO include adaptive equalizers (at least, 
WINDRM does). I'm not that familiar with DRM, so I'm not sure how much of the 
transmitted signal is dedicated to training data (if any at all). Most STMs 
dedicate a large portion of the transmitted signal to keeping the receiver's 
equalizer up to date. Other modes that weren't designed for adaptive 
equalization simply use the FEC on the received signal to guess at what was 
actually transmitted, and that guess is then used to train the equalizer. 
Here in the US, 
Here in the US, 
the ATSC digital television standard uses extremely simple 
modulation(8VSB)...if you sent 
an analog TV transmitter an 8-level voltage signal with a little filtering, 
you'd have an 8VSB 
transmitter. The first generation receivers were horrible with multipath 
handling, but as 
time went on, adaptive equalizer chipsets got better and better and were 
"strapped on" to 
the standard. I have a little HD receiver for my laptop with a fairly recent 
chipset/equalizer. I can receive a perfect copy of the signal in a metal-walled 
narrow 
dorm room(structurally similar to a prison cell, although far more pleasant). 
Yet if 
someone walks around the room, the signal will drop completely. For all I can 
tell, the 
signal got stronger as the person moved, but the equalizer has to sit there for 
2-3 
seconds trying to retrain itself. And sure enough, 2-3 seconds later, I'm back 
to perfect 
copy again. 
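That "use the decoded output to guess at the transmitted signal" trick is 
usually called decision-directed equalization, and it's why those receivers can 
retrain with no explicit training sequence (just slowly). Here's a stripped-down 
sketch; a real receiver would feed back the FEC decoder's output rather than a 
raw slicer, and again all the parameters are invented for illustration:

```python
import numpy as np

def decision_directed(received, num_taps=11, mu=0.005):
    """Blind LMS variant: slice the equalizer's own output to a hard
    decision and treat that decision as the training symbol."""
    taps = np.zeros(num_taps)
    taps[0] = 1.0                      # start as a pass-through filter
    out = np.zeros(len(received))
    for n in range(num_taps - 1, len(received)):
        window = received[n - num_taps + 1:n + 1][::-1]
        y = np.dot(taps, window)
        decision = 1.0 if y >= 0 else -1.0    # hard BPSK decision
        taps += mu * (decision - y) * window  # train on own decision
        out[n] = y
    return out

# Mild echoes so the eye starts open -- decision-directed adaptation
# only works when most initial decisions are already correct.
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=4000)
channel = np.array([1.0, 0.3, 0.2])
received = np.convolve(symbols, channel)[:len(symbols)]

out = decision_directed(received)
mse_raw = np.mean((received[-1000:] - symbols[-1000:]) ** 2)
mse_eq = np.mean((out[-1000:] - symbols[-1000:]) ** 2)
print("raw MSE:", mse_raw, "equalized MSE:", mse_eq)
```

The catch is right there in the comment: if the channel changes faster than the 
decisions stay mostly correct, the loop loses its grip and has to grope its way 
back, which is exactly the 2-3 second dropout I see on the ATSC receiver.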

So I'm not sure what we could achieve if we willy-nilly started to strap 
adaptive equalizers onto PTMs. The PTMs may have to be modified to transmit a 
training signal, so the receivers have something definite (and quick!) to model 
with, rather than a slow adaptive equalizer that simply guesses at the received 
signal. On the other hand, PTMs tend naturally to be far more multipath 
tolerant, so it's likely we'd have to dedicate less of the transmitted signal 
energy to the training signal. What's ideal needs some simulation and real 
over-the-air testing. Heck, we should let a computer search for the right 
combination of baud, tones, and power dedicated to training data over a wide 
variety of conditions. Pick a small set of winners for varying band conditions, 
add some synchronous (but slow enough for PCs) ARQ, and bam, you've got 
yourself a data mode :).
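Since I brought up letting a computer do the searching: the skeleton of that 
search is just a brute-force sweep. Everything below (the `simulate_throughput` 
scoring function, the parameter lists, the penalty formula) is a placeholder I 
made up; the real thing would wrap an actual modem/channel simulator or on-air 
test harness:

```python
import itertools

def simulate_throughput(baud, tones, training_frac, snr_db):
    """Crude stand-in for a real channel simulation: raw rate scaled
    by training overhead, with an arbitrary penalty when the aggregate
    symbol rate outruns the SNR (training is assumed to soften it)."""
    raw = baud * tones * (1.0 - training_frac)
    penalty = max(0.0, (baud * tones) / 100.0 - snr_db) * (1.0 - training_frac)
    return raw / (1.0 + penalty)

bauds = [31.25, 62.5, 125]          # symbol rates per tone
tone_counts = [8, 16, 32]           # parallel tones
training_fracs = [0.0, 0.1, 0.2]    # fraction of signal spent on training

best = {}
for snr_db in (5, 15, 25):          # stand-ins for "band conditions"
    candidates = itertools.product(bauds, tone_counts, training_fracs)
    best[snr_db] = max(candidates,
                       key=lambda c: simulate_throughput(*c, snr_db))
print(best)                         # best (baud, tones, training) per SNR
```

Swap the placeholder scorer for a Watterson-style HF channel simulation and a 
real bit-error count, run it over a grid of conditions, and the "pick a small 
set of winners" step falls out of the table it prints.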

Mike
KF6EYU

--- In digitalradio@yahoogroups.com, Rick <[EMAIL PROTECTED]> wrote:
> The question becomes: if you had two modems, one using single tone high 
> baud rate vs. one using multi tone OFDM, which one would perform the 
> best in varying conditions.
> 
> Various documents on the internet suggest that there is not much 
> difference, but there is at least one that does show a difference with 
> computer simulations in favor of the multi tone modems. I tend to 
> discount computer simulations as not adequate and prefer the real world 
> under many different conditions that gives you a more accurate practical 
> feel for what can and can not be done. That same document, done as a PhD 
> paper, admitted that some waveforms that worked well on computer 
> simulation, actually did not work at all in an actual real world test.
