Interesting followup: I skimmed the ETSI spec the station engineer
referred to, and it does read as a yes: the program clock reference is
sent via IP, and yes, it does expect ridiculously low jitter.
Further, Gig E has a 250 MHz clock rate and the chipsets can get as low
as 50-60 picoseconds of peak-to-peak variance. So across a single
Ethernet link this really ought to be possible.
With several devices in a chain whose clocks are never going to be in
sync with each other, and where packets are stored and forwarded at each
hop, I'm still surprised it would work. But apparently it /has/ worked
for 7 or 8 years. Some of my colleagues say that a few years ago they
tried carrying the feed over a VPLS tunnel because one of the microwave
hops died in a lightning storm. As you might expect, introducing the
network stack on several routers into the mix did /not/ allow for
sufficiently low jitter and the customer was getting the PCR errors.
This is actually why they were calling us about it: they were wondering
if we did something like that again. Of course we didn't, but now I
understand why they were asking.
-Adam
On 12/10/2019 4:40 PM, [email protected] wrote:
They have to buffer in an elastic store to be able to do this.
Similar to pseudowire for T1.
Then he has to sync the output of his buffer with GPS or colorburst or
WWVB or some other external sync on his end.
Nobody can sync that tight over the internet; not even dedicated
Ethernet is good enough for something that tight.
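(The elastic-store idea above can be sketched in a few lines. This is a
toy illustration with made-up names and depths, not real pseudowire
code: packets queue up on arrival, and an output clock, which is the
thing that would need GPS or another external reference, drains them at
a fixed tick.)

```python
# Toy elastic store: absorb arrival jitter by queueing packets and
# releasing them on a fixed local tick. Class name and depth are made
# up for illustration; real TS/pseudowire gear is far more involved.
from collections import deque

class ElasticStore:
    def __init__(self, depth):
        self.q = deque()
        self.depth = depth       # packets to accumulate before playout
        self.playing = False

    def arrive(self, pkt):
        """Called whenever a packet shows up, however jittery."""
        self.q.append(pkt)
        if len(self.q) >= self.depth:
            self.playing = True  # enough margin banked to start playout

    def tick(self):
        """Called once per nominal packet interval by the output clock."""
        if self.playing and self.q:
            return self.q.popleft()
        return None              # still filling, or underrun: output gap

store = ElasticStore(depth=3)
for i in range(3):
    store.arrive(f"pkt{i}")
print([store.tick() for _ in range(4)])
```

The buffer depth is the trade: deeper absorbs more jitter but adds
delay, and if arrivals fall behind the output tick for long enough you
underrun anyway, which is where the external sync comes in.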
*From:* Adam Moffett
*Sent:* Tuesday, December 10, 2019 2:32 PM
*To:* [email protected]
*Subject:* [AFMUG] Testing ridiculous jitter constraints
I was discussing with a TV station engineer some sort of disturbance
he's seeing in a video feed which crosses a section of our network.
This is crossing a blend of fiber and part 101 microwave, and it's
been working fine for several years until suddenly their problem
cropped up about a month ago.
His words, emphasis mine:
"We are seeing PCR clocking intolerance in our television data streams
(~19.392685 Mbps, plus overhead; PCR is sent at a defined interval, at
least once every 40ms, for each of five embedded streams with a drift
tolerance of <10mHz and a /*jitter error of <25us per */ETSI TR 101
290), "
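To put rough numbers on that quote: PCR is a 27 MHz counter stamped
into the transport stream, and a crude version of the jitter check just
compares each PCR value against the packet's arrival time. A sketch
using fabricated samples, not his actual stream:

```python
# Rough sketch of a PCR jitter estimate. Each sample pairs a PCR value
# (27 MHz counter ticks) with the packet's arrival time in seconds.
# All numbers below are fabricated for illustration.

PCR_HZ = 27_000_000  # MPEG-2 system clock rate

def pcr_jitter_us(samples):
    """samples: list of (pcr_ticks, arrival_seconds).
    Returns peak-to-peak deviation, in microseconds, of arrival time
    relative to the PCR timeline (first sample as reference)."""
    pcr0, t0 = samples[0]
    # How far each arrival lands from where the PCR clock says it should
    offsets = [(t - t0) - (pcr - pcr0) / PCR_HZ for pcr, t in samples]
    return (max(offsets) - min(offsets)) * 1e6

# PCRs 40 ms apart; arrivals wobbling by a few tens of microseconds
samples = [
    (0,         0.000000),
    (1_080_000, 0.040015),   # 40 ms of PCR ticks, arrived 15 us late
    (2_160_000, 0.079990),   # arrived 10 us early
    (3_240_000, 0.120020),   # arrived 20 us late
]
print(f"peak-to-peak PCR jitter: {pcr_jitter_us(samples):.1f} us")
```

(The real TR 101 290 measurements, PCR_AC and PCR_OJ, are more involved
and separate drift from jitter, but the arithmetic is this flavor.)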
I know jack-all about TV broadcasting, but I told him that
packet-to-packet delay variation of less than 1 millisecond is
considered perfect in my world, and asked, "do I understand you
correctly that you really need clock signals transmitted across the
network with less than 25 /micro/seconds of jitter?" He seems to feel
that yes, that is the case. Is this guy mistaken? I can't believe
whatever converts the TV signal to Ethernet and back wouldn't have at
least some minimal jitter buffer.
Even if he's right... how do you even test that? A Wireshark capture
will have a time attached to each packet, and that _is_ displayed in
microseconds, but how precise could that be in real life? I mean,
hypothetically, by the time a frame gets copied to a mirrored switch
port, hits my ethernet card, and passes through the whole software
stack to get into Wireshark, couldn't that have introduced 25us worth
of new variance?
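For what it's worth, the arithmetic on a capture is trivial; the
timestamp quality is the real question. A back-of-the-envelope sketch
with fabricated timestamps, assuming a constant-bit-rate flow:

```python
# Back-of-the-envelope packet delay variation from capture timestamps,
# e.g. exported from Wireshark/tshark. Assumes the flow is constant bit
# rate, so every inter-arrival gap should equal the nominal packet
# spacing; any spread in the gaps is delay variation (plus whatever
# error the mirror port and the capture stack themselves introduced).

def delay_variation_us(timestamps):
    """timestamps: arrival times in seconds for consecutive packets
    of one CBR flow. Returns peak-to-peak gap spread in microseconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (max(gaps) - min(gaps)) * 1e6

# Fabricated numbers: a ~19.39 Mbps TS in 1316-byte UDP payloads works
# out to one packet roughly every 543 us.
ts = [0.000000, 0.000543, 0.001090, 0.001628]
print(f"peak-to-peak delay variation: {delay_variation_us(ts):.0f} us")
```

That answers "what's the number in the capture," but not "is the
capture timestamp itself good to a few microseconds," which is exactly
the doubt above about the mirror port and software stack.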
What about MEF OAM statistics? Would that be precise enough?
More than anything I'm shocked at the assertion about the required
precision. I feel like on a one-way transmission like TV they could
add a half-second delay to accommodate jitter or retransmissions and
nobody watching at home would ever know the difference. But I'm _also_
curious about how you would check that, assuming you had to.
-Adam
------------------------------------------------------------------------
--
AF mailing list
[email protected]
http://af.afmug.com/mailman/listinfo/af_af.afmug.com