Re: DRM broadcast disrupted by leap seconds

2003-07-20 Thread Peter Bunclark
On Sat, 19 Jul 2003, Markus Kuhn wrote:
>
> All modern digital broadcast transmission systems introduce significant
> delays due to compression and coding. It is therefore common practice
> today that the studio clocks run a few seconds (say T = 10 s) early, and
> the signal is then delayed by digital buffers between the studio and the
> various transmitter chains for T minus the respective transmission and
> coding delay. This way, both analog terrestrial and digital satellite
> transmissions end up with fairly synchronous audio and video. Otherwise,
> your neighbor would already cheer in front of his analogue TV set, while
> you still hear on DRM the "live" report about the football player
> approaching the goal.
But that is exactly what does happen: analog TV is ahead of digital, often
leading to asynchronous cheering from different parts of the house.
>
> There are a couple of problems, though, with delayed "live" broadcasting:
>
>   - One is with the BBC. They insist for nostalgic reasons on transmitting
> the Big Ben chimes live, which cannot be run 10 seconds early in
> sync with the studio clock.
>
>   - Another is live telephone conversations with untrained members of the
> radio audience who run a loud receiver next to the phone. The delay
> eliminates the risk of feedback whistle, but it adds echo and
> human confusion. The former can be tackled with DSP techniques; the
> latter is more tricky.
But then there's often a deliberate delay introduced so the editor can
push the cut-off button on the first f

>
>   - The third problem is that in the present generation of digital
> radio receivers (DAB, DRM, WorldSpace, etc.), the authors of the
> spec neglected to standardize the exact buffer delay in the receiver.
Interestingly, I have noticed that Radio 5 Live is synchronous with, or even
slightly ahead of, analogue on digital terrestrial. I put that down to the
relatively instantaneous compression/decompression of audio compared with
video streams. (NICAM is near-instantaneous on 15-year-old technology.)

Pete.


Re: DRM broadcast disrupted by leap seconds

2003-07-19 Thread Markus Kuhn
Ed Davies wrote on 2003-07-19 09:15 UTC:
> > When the scheduled transmission time
> > arrives for a packet, it is handed with high timing accuracy to the
> > analog-to-digital converter,
>
> I assume you mean digital-to-analog.

Yes, sorry for the typo.

> This also raises the point that because the transmission is delayed a few
> seconds for buffering there is presumably a need for the studio to work
> "in the future" by a few seconds if time signals are to be transmitted
> correctly.

All modern digital broadcast transmission systems introduce significant
delays due to compression and coding. It is therefore common practice
today that the studio clocks run a few seconds (say T = 10 s) early, and
the signal is then delayed by digital buffers between the studio and the
various transmitter chains for T minus the respective transmission and
coding delay. This way, both analog terrestrial and digital satellite
transmissions end up with fairly synchronous audio and video. Otherwise,
your neighbor would already cheer in front of his analogue TV set, while
you still hear on DRM the "live" report about the football player
approaching the goal.
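
As a rough illustration of the delay-equalization arithmetic (not code from
any actual broadcast chain; the chain names and delay figures are made up),
each chain simply buffers for T minus its own delay:

  #include <stdio.h>

  int main(void)
  {
      const double T = 10.0;              /* studio runs 10 s early        */
      const struct { const char *name; double delay_s; } chains[] = {
          { "analog terrestrial", 0.05 }, /* nearly instantaneous          */
          { "digital satellite",  1.8  }, /* MPEG coding + uplink          */
          { "DRM short-wave",     4.2  }, /* AAC + COFDM + distribution    */
      };

      /* Each chain buffers for T minus its own coding and transmission
         delay, so all outputs leave their transmitters almost together. */
      for (unsigned i = 0; i < sizeof chains / sizeof chains[0]; i++)
          printf("%-18s: buffer for %.2f s\n",
                 chains[i].name, T - chains[i].delay_s);
      return 0;
  }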

There are a couple of problems, though, with delayed "live" broadcasting:

  - One is with the BBC. They insist for nostalgic reasons on transmitting
the Big Ben chimes live, which cannot be run 10 seconds early in
sync with the studio clock.

  - Another is live telephone conversations with untrained members of the
radio audience who run a loud receiver next to the phone. The delay
eliminates the risk of feedback whistle, but it adds echo and
human confusion. The former can be tackled with DSP techniques; the
latter is more tricky.

  - The third problem is that in the present generation of digital
radio receivers (DAB, DRM, WorldSpace, etc.), the authors of the
spec neglected to standardize the exact buffer delay in the receiver.

Mostly for the last reason, the time beeps from digital receivers still
have to be used with great caution today (or are even left out by some
stations, which prefer to send none rather than wrong ones).

> > Either having a commonly used standard time without leap seconds (TI),
> > or having TAI widely supported in clocks and APIs would have solved the
> > problem.
>
> Absolutely - and the second suggested solution doesn't need to take 20
> years to be implemented.

The engineer involved in this project with whom I talked was actually very
familiar with my API proposal at

  http://www.cl.cam.ac.uk/~mgk25/time/c/

and agreed that the problem would never have come up if it had been widely
supported by Linux, NTP drivers, and GPS receiver manufacturers. But we
are not there yet.
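
For illustration only, and not the interface proposed on that page: on a
modern Linux system the same idea can be approximated with the (much later)
CLOCK_TAI extension, which gives sensible results only once the kernel has
been told the current TAI-UTC offset, e.g. by ntpd or chrony:

  #include <stdio.h>
  #include <time.h>

  #ifndef CLOCK_TAI
  #define CLOCK_TAI 11   /* Linux clockid_t value; missing from old headers */
  #endif

  int main(void)
  {
      struct timespec utc, tai;

      clock_gettime(CLOCK_REALTIME, &utc);  /* UTC; leap seconds invisible */
      clock_gettime(CLOCK_TAI, &tai);       /* TAI; counts through leaps   */

      /* On a correctly configured kernel this prints the current TAI-UTC
         offset (37 s since 2017); timestamps taken on the TAI scale can be
         compared and scheduled right across a leap second without any
         ambiguity. */
      printf("TAI - UTC = %ld s\n", (long)(tai.tv_sec - utc.tv_sec));
      return 0;
  }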

The current discussion on removing leap seconds will no doubt also delay
efforts to make TAI more widely available, because what is the point of
improving the implementations if the spec might soon change
fundamentally?

I don't care much whether we move from UTC to TI, because both
approaches have comparable advantages and drawbacks, which we understand
today probably as well as we ever will. But it would be good to make a
decision sooner rather than later, because the uncertainty that the
discussion creates about how new systems developed today should handle
leap seconds can be far more of a hassle. It would be unfortunate if at
the end of this discussion we change nothing and all we have accomplished
is to delay setting up mechanisms to deal with leap seconds properly. I
certainly do not feel motivated to press ahead with proposals for handling
leap seconds better if there is a real chance that there might soon be no
more of them.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: DRM broadcast disrupted by leap seconds

2003-07-19 Thread Ed Davies
Markus Kuhn wrote:
> When the scheduled transmission time
> arrives for a packet, it is handed with high timing accuracy to the
> analog-to-digital converter,

I assume you mean digital-to-analog.

> ...
> [In fact, since short-wave transmitters frequently switch between
> programmes at the full hour, in discussions the hope was expressed that
> in practice nobody will notice.]
> ...

Don't they also often transmit the time signal on the hour?  Ironic, huh?

This also raises the point that because the transmission is delayed a few
seconds for buffering there is presumably a need for the studio to work
"in the future" by a few seconds if time signals are to be transmitted
correctly.

> Either having a commonly used standard time without leap seconds (TI),
> or having TAI widely supported in clocks and APIs would have solved the
> problem.

Absolutely - and the second suggested solution doesn't need to take 20
years to be implemented.

Ed.


DRM broadcast disrupted by leap seconds

2003-07-18 Thread Markus Kuhn
A recent visitor gave me a detailed account of a telecoms application
where UTC leap seconds can cause havoc, which I would like to share here
with you. In this example, the design ended up being vulnerable to UTC
leap seconds in spite of the engineers being fully aware of all the
issues involved (leap seconds, UTC vs. TAI vs. GPS time, etc.); they had
to conclude that there was no sufficiently simple way around the problem
that would have been worth taking.

Background: The Digital Radio Mondiale standard (ETSI TS 101980) defines
the new global broadcast format for long/medium/short-wave digital
radio. The modulation technique it uses is based on coded orthogonal
frequency division multiplexing (COFDM). In this technique, about ten
thousand data bits are packed together with error-correction information
and then sent through a fast Fourier transform, in order to generate a
waveform that consists of about a hundred carrier signals, each of which
represents a handful of bits via its amplitude and phase. The output
of the FFT is then broadcast. The coding is arranged carefully, such
that if echoes or Doppler shifts disrupt or cancel out a number of
carrier frequencies, the payload audio data can still be recovered fully
intact. The high robustness of COFDM against echoes makes it feasible to
operate single-frequency networks: several transmitters that are spread
over a large region broadcast essentially the same waveform at the same
time in the same frequency band. Handling overlapping signals from
multiple transmitters on the same frequency is not very different from
handling long-distance echoes. Single-frequency networks offer highly
efficient use of the radio spectrum, as the requirement of keeping large
minimum distances between transmitters on the same frequency falls away.
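
As a toy sketch of the modulation step only (a naive inverse DFT standing
in for the FFT; the transform length, carrier count and QPSK mapping are
arbitrary illustrative choices, not the DRM parameters of ETSI TS 101980,
and the error-correction coding and guard interval are omitted):

  /* Compile with:  cc ofdm_sketch.c -lm */
  #include <complex.h>
  #include <math.h>
  #include <stdio.h>

  #ifndef M_PI
  #define M_PI 3.14159265358979323846
  #endif

  #define N_FFT     256   /* transform length (illustrative)        */
  #define N_CARRIER 100   /* "about a hundred" active carriers      */

  /* Map two payload bits onto one QPSK constellation point. */
  static double complex qpsk(int b0, int b1)
  {
      return ((b0 ? -1.0 : 1.0) + (b1 ? -1.0 : 1.0) * I) / sqrt(2.0);
  }

  int main(void)
  {
      double complex X[N_FFT] = {0};  /* frequency domain: one value per carrier */
      double complex x[N_FFT];        /* time domain: (I,Q) baseband samples     */

      /* Load the active carriers with (dummy) payload bits, skipping DC. */
      for (int k = 0; k < N_CARRIER; k++)
          X[k + 1] = qpsk(k & 1, (k >> 1) & 1);

      /* Naive inverse DFT: x[n] = (1/N) * sum_k X[k] * exp(+j 2 pi k n / N).
         Real modulators use an inverse FFT, which computes the same thing. */
      for (int n = 0; n < N_FFT; n++) {
          double complex acc = 0;
          for (int k = 0; k < N_FFT; k++)
              acc += X[k] * cexp(2.0 * M_PI * I * (double)k * n / N_FFT);
          x[n] = acc / N_FFT;
      }

      printf("first baseband sample: I = %f, Q = %f\n",
             creal(x[0]), cimag(x[0]));
      return 0;
  }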

The transmitter infrastructure design supplied by a major manufacturer
of DRM broadcast equipment works like this:

At the central head end, where the broadcast signal arrives from the
studios, a Linux PC has a sound card that samples the studio signal at
48.000 kHz (or equivalently resamples an asynchronously arriving
digital studio signal in a polyphase filterbank). The head-end Linux PC
is connected to a Meinberg GPS receiver (a special GPS receiver
optimized for precision timing applications). The 10 MHz output
of the GPS receiver is used to derive the high-precision sampling clock
signal used by the soundcard. The serial port of the GPS receiver also
synchronizes the Linux clock via the usual xntpd driver software by Dave
Mills, et al.

Every 400 ms, the software on the PC takes 400 ms worth of sampled
audio and sends it through an MPEG Advanced Audio Coding (AAC)
compression algorithm. The PC knows the corresponding UTC time of each
received audio sample to within about a millisecond. It attaches to the
generated 400 ms compressed data packet a UTC timestamp that lies a few
seconds in the future; at that time, the packet must be leaving the
antennas of transmitters all over the region. The packets are then sent
via low-cost asynchronous communication links (e.g., selected parts of
the Internet) to the various transmitter stations. There, they are
modulated and queued for delivery by a DSP board that is likewise
connected to a GPS receiver. When the scheduled transmission time
arrives for a packet, it is handed with high timing accuracy to the
analog-to-digital converter, which produces the analog complex (I,Q)
baseband signal that drives the high-power transmitter.
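
For illustration (the packet layout, field names and 6-second lead time
below are invented, not the real multiplex protocol), the timestamping
step amounts to no more than this:

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  #define LEAD_SECONDS 6            /* "a few seconds into the future"   */

  struct mux_packet {               /* hypothetical packet layout        */
      struct timespec emit_at;      /* scheduled on-air instant, in UTC  */
      uint8_t aac[960];             /* one 400 ms AAC block (size is an
                                       arbitrary placeholder)            */
  };

  int main(void)
  {
      struct mux_packet p = { { 0, 0 }, { 0 } };
      struct timespec captured;

      /* UTC capture time of the block's first sample; in the real system
         this comes from the GPS-disciplined sampling clock, good to about
         a millisecond. */
      clock_gettime(CLOCK_REALTIME, &captured);

      p.emit_at = captured;
      p.emit_at.tv_sec += LEAD_SECONDS;  /* must be on the air at this time */

      printf("packet scheduled for UTC second %ld\n", (long)p.emit_at.tv_sec);
      return 0;
  }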

In order to keep the design simple, existing standard components such as
xntpd, the Linux kernel clock driver and commercially available GPS
clocks with UTC output were used, together with low-cost asynchronous
communication links. The result is a significantly more economical
transmitter infrastructure than what competing technologies could offer.

The only problem with using such off-the-shelf components is that the
entire design is unable to transmit a signal during a leap second,
because packets are scheduled based on UTC!
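
To make the failure mode concrete, here is an illustrative transmitter-side
wait, not the vendor's code; clock_nanosleep() with TIMER_ABSTIME is simply
the standard POSIX way to block until an absolute CLOCK_REALTIME instant:

  #define _POSIX_C_SOURCE 200809L   /* for clock_nanosleep() */
  #include <time.h>

  /* Block until the packet's scheduled emission instant, then hand it to
     the D/A hardware (stubbed out here).  CLOCK_REALTIME counts UTC
     seconds but has no representation for 23:59:60, so a packet that
     should go on air inside an inserted leap second simply cannot be
     scheduled -- hence the 1-second transmission gap described above. */
  int wait_and_emit(const struct timespec *emit_at_utc)
  {
      int err = clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME,
                                emit_at_utc, NULL);
      if (err != 0)
          return err;              /* e.g. interrupted by a signal */

      /* dac_write(iq_samples, n);    hardware-specific, omitted   */
      return 0;
  }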

The engineers were fully aware of the problem, and they knew that TAI and
GPS time exist. In fact, the packet protocol format supports the use of
GPS timestamps. But as there is no simple configuration option to make
the particular GPS receiver used output either of these, or to get Linux
and xntpd to run on a TAI or GPS timescale (which would have broken
other things, such as the regular correct-local-time timestamping
preferred for system administration), this route was not taken. As a
result, nobody could be bothered to add a few person-months of work
simply to ensure that there is no 1-second disruption once every year.
[In fact, since short-wave transmitters frequently switch between
programmes at the full hour, in discussions the hope was expressed that
in practice nobody will notice.]

Either having a commonly used standard time without leap seconds (TI),
or having TAI widely supported in clocks and APIs would have solved the
problem.