Hi,

I'll begin by apologizing for asking what I'm quite sure many of you will 
consider a terribly stupid question.

The question is:

It has long been known that media data can have different levels of 
importance. Simply put, if packet #1 from a single source contains parts 
of a video's I-frame and packet #2 contains only parts of B-frames, and 
both end up in the same queue (controlled by an AQM, for instance), it 
would be better for the video stream if packet #2 were dropped rather 
than packet #1.

There is nothing new about this story; it's not hard to find research 
papers documenting variations on this theme and showing benefits in video 
quality.
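
To make the sender side concrete, here's a rough sketch in C of what 
per-packet marking could look like; the AF41/AF43 codepoints and the 
send_frame_chunk() helper are just placeholders for illustration, not a 
proposal:

    /* Sketch only: mark each outgoing packet according to the frame
     * type it carries.  The codepoints (AF41 = 34, AF43 = 38) are
     * arbitrary stand-ins for whatever values would actually be
     * defined. */
    #include <netinet/in.h>
    #include <stddef.h>
    #include <sys/socket.h>

    static int send_frame_chunk(int fd, const void *buf, size_t len,
                                int is_iframe)
    {
        int dscp = is_iframe ? 34 : 38;  /* AF41 vs. AF43, say */
        int tos  = dscp << 2;  /* DSCP is the top 6 bits of TOS */

        if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
            return -1;
        return (int)send(fd, buf, len, 0);
    }

On a UDP socket this takes effect per datagram, so consecutive packets 
of the same flow can carry different markings.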

My question is, why is none of this happening?

Is it because DSCP values are typically associated with sources, and 
hence marking packet #2 as "less" important would put the source at risk 
of having that packet treated as less important than not only its own 
other packets, but everybody else's too?  But there is equipment that 
does per-connection processing, and such things could probably best be 
done near the edges, where the bottleneck typically is... so if that's 
the whole issue, we could define DSCP values that signal relative 
importance *within the same five-tuple only*.  Surely this has been 
thought about, and probably proposed, before, so what happened?  Why 
isn't it done?
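
To illustrate what a queue could do with such markings, here's a rough 
sketch of the drop rule; struct pkt, its fields, and pick_victim() are 
all made up for illustration:

    #include <stddef.h>
    #include <string.h>

    struct pkt {
        unsigned char five_tuple[13];  /* addresses, ports, protocol */
        int importance;                /* decoded from the DSCP field */
    };

    /* When the AQM has decided to drop "victim", check whether the
     * same five-tuple has a less important packet queued; if so, drop
     * that one instead. */
    static struct pkt *pick_victim(struct pkt *queue, size_t n,
                                   struct pkt *victim)
    {
        struct pkt *best = victim;
        for (size_t i = 0; i < n; i++) {
            if (memcmp(queue[i].five_tuple, victim->five_tuple,
                       sizeof queue[i].five_tuple) == 0 &&
                queue[i].importance < best->importance)
                best = &queue[i];
        }
        return best;
    }

Note that this only ever redirects a drop within the same flow: no flow 
loses more packets than it would have under the AQM alone; it merely 
gets a say in *which* of its packets goes.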

Or is it because per-five-tuple functionality in the network is regarded 
as too costly, and hence discouraged and never standardized?

I'm just trying to understand the reasons for this particular 
long-standing gap between research and reality.

Cheers,
Michael
