Re: [whatwg] Quality Values for Media Source Elements

2009-12-15 Thread Hugh Guiney
On Mon, Dec 14, 2009 at 4:08 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
 I would almost consider simply using low quality and high quality
 as quality distinguishers (and maybe medium) and leave the actual
 choice of encoding to the hosting entity. Right now, many sites provide
 only two choices for Desktop: SD and HD, plus one for mobile. The
 device can be separated by the device-width as Eric described.

Except it can't—at least, not entirely. Since the displayed size of a
video image is the result of multiplying its coded width by its pixel
aspect ratio, the pixel count of the video does not necessarily match
that of the device playing it back.

For instance, on a DVD, both fullscreen and widescreen movies are
stored at the same resolutions, but with different pixel aspect ratios
(i.e., the shape of the pixels is not necessarily a 1:1 square, as on
computers).

According to the D1/DV standards, fullscreen pixels have a
width-to-height ratio of 4320/4739 (~0.9). So a 720x480 fullscreen
image on a square-pixel device would have to be displayed at ~656x480
pixels to retain its proper aspect ratio.

By the same standards, widescreen pixels are 5760/4739 (~1.2), so a
720x480 widescreen image would have to be displayed at ~875x480
pixels.

Therefore, "screen and (min-device-width: 720px)" would not work for
all 480i/p content. Either the PAR would have to be read from the file
itself—the storage of which differs from format to format—or the
author would have to specify it. Which is also problematic since not
everyone knows what PARs are, and even when they do, not everyone uses
the same pixel shape definitions: MPEG-4 says widescreen pixels are
40/33, which is *close* to the D1/DV definition but not quite: this
results in a ~873x480 square-pixel image. And due to rounding, there
is also a conventional habit of specifying a PAR of 6/5 (exactly 1.2),
resulting in 864x480. Apple on the other hand defines it as 32/27,
resulting in ~853x480. So even with the same source content, you may
be looking at as many as 4 different rendered sizes depending on the
device manufacturer. So if you specify the PAR according to one
standard, a device built according to another may not recognize it as
playable material, even though it is fully capable of playing it back.
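To make those numbers concrete, here is the arithmetic in a few lines
of JavaScript (the function name is mine; the PARs are the ones quoted
above):

```javascript
// Width a square-pixel device must use to display a coded frame at
// the correct shape: coded width multiplied by the pixel aspect ratio.
function displayWidth(codedWidth, par) {
  return Math.round(codedWidth * par);
}

// The same 720x480 frame under each standard's PAR:
displayWidth(720, 4320 / 4739); // fullscreen D1/DV  -> 656
displayWidth(720, 5760 / 4739); // widescreen D1/DV  -> 875
displayWidth(720, 40 / 33);     // MPEG-4            -> 873
displayWidth(720, 6 / 5);       // conventional 1.2  -> 864
displayWidth(720, 32 / 27);     // Apple             -> 853
```

Four distinct widescreen widths from one source frame, exactly as
described above.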

Though this COULD potentially be solved by taking aspect ratio error
into account when processing the media query. So, say a specified PAR
doesn't exactly match any PAR the device supports: if the two ratios
are equal when rounded to a certain decimal place, that would count as
a match and the device would simply render it
according to its own standards. This *somewhat* defeats the point of
specifying PARs exactly, but at least it'd be good enough as the
difference would be too insignificant for most people to notice.
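A tolerance rule like that might look like the following sketch
(entirely hypothetical; the one-decimal default is an arbitrary choice
for illustration):

```javascript
// Treat two pixel aspect ratios as equivalent if they agree when
// rounded to a given number of decimal places.
function parsMatch(parA, parB, decimals = 1) {
  const f = 10 ** decimals;
  return Math.round(parA * f) === Math.round(parB * f);
}

parsMatch(5760 / 4739, 40 / 33);    // D1/DV vs MPEG-4: true at 1 decimal
parsMatch(5760 / 4739, 40 / 33, 3); // false once the tolerance tightens
```

The looser the tolerance, the more devices match, at the cost of the
precision that is admittedly "somewhat" defeated anyway.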

 SD and
 HD - while also changing between aspect ratio - are mostly a choice
 between lower bandwidth use and higher bandwidth use, which are taken
 as equivalent to low and high quality by users. Since there will
 likely be a higher bitrate HD version joining in the future, it will
 then turn into SD, HD and HD2 - which equates to low, medium, high.
 Over time, SD will fall aside and leave medium and high. Then, if
 another higher quality comes in, they can be redefined to low and
 medium.

 Thus, keeping these fuzzy specifiers, we stay future-proof and leave
 the actual choice of what low and high means to the respective
 hosting site, which will make the format choice according to current
 standards.

 I'd prefer giving actual levels (low, medium, and high) rather
 than a number between 0 and 1, because they make it comparable between
 hosting sites. If I choose to have low on YouTube, I will likely
 want low on Dailymotion and Hulu, even if those sites decided to use
 completely different encoding parameters for their low and high
 quality versions.

I can agree with this proposal as far as quality = data rate is
concerned. As for any of the other criteria, they'd have to be
addressed differently.

On Mon, Dec 14, 2009 at 10:59 AM, Aryeh Gregor simetrical+...@gmail.com wrote:
 It depends on the application.  But in any event, HTML can never
 possibly do everything JavaScript does, so at some point the answer
 needs to be use JavaScript.

Nor should it. But if you're doing something in JavaScript, there
*should* be a functional alternative in plain HTML when it's turned
off. That means if you've got an AJAX application, even with JS turned
off a user should still be able to interact with the server
synchronously. If you had all of your content negotiation in JS,
however, there could be no alternative, as the lack of one would have
been the reason to use JS in the first place.

 I don't follow.  If authors *were* willing to use content negotiation,
 to the contrary, there would be no need for source.  You could just
write <video src="foo"></video> in your markup, and configure your
server to serve foo.mp4 or foo.ogg depending on the incoming HTTP headers.

Re: [whatwg] Quality Values for Media Source Elements

2009-12-15 Thread Aryeh Gregor
On Tue, Dec 15, 2009 at 3:17 AM, Hugh Guiney hugh.gui...@gmail.com wrote:
 Nor should it. But if you're doing something in JavaScript, there
 *should* be a functional alternative in plain HTML when it's turned
 off.

Functional, sure, except where that's impossible (e.g., a client-side
computer game) or you have good reason not to care (e.g., intranet app
where you can require JS to be on).  It doesn't have to provide all
the same features, though.  In general that's impossible, which is why
we have script to start with.

 I don't know that nobody *wants* to do that; I think most of them
 simply don't know how.

The ones who know how, or could easily find out, still overwhelmingly
don't want to.

 I don't think it's a square wheel. A square wheel wouldn't work. HTTP
 CN works. The fact that people are willing to do something in HTML,
 but are unwilling to do the very same thing in HTTP, seems to suggest
 a lack of understanding of HTTP and/or its capabilities.

A square wheel works, as long as you're willing to do a lot more work.
 HTTP content negotiation has the following problems compared to an
HTML-based solution:

* Authors already know how to edit HTML, since they need to for
everything else.  Changing HTTP headers requires them to also know how
to configure their web server, or use a scripting language (which is
harder to learn and much less performant than static resources).  This
makes it automatically harder to learn, which is bad.
* Every web server is configured differently.  There is no standard
for configuring your server to do content negotiation (that I'm aware
of).
* Many users (e.g., on some shared hosts) don't have the ability to
reconfigure their web server, or at least not easily.
* Some web servers (e.g., lighttpd last I checked) require that the
whole web server be restarted for any config change.
* A solution in HTML will continue to work if you just copy the entire
directory to a new server.  The same is not reliably true of anything
that relies on web server configuration.

People avoid it for excellent reason.

 This is a nice interim solution, but it also forces the user to
 download a resource which may not necessarily be the most appropriate
 version for them.

Only if you use autobuffer, which you don't have to.


Re: [whatwg] Quality Values for Media Source Elements

2009-12-14 Thread Silvia Pfeiffer
On Mon, Dec 14, 2009 at 6:59 PM, Hugh Guiney hugh.gui...@gmail.com wrote:

 On Mon, Dec 14, 2009 at 12:12 AM, Eric Carlson eric.carl...@apple.com wrote:
  Certainly! WebKit evaluates the query in the 'media' attribute if it 
 believes it can handle the MIME type. If the query evaluates to true, it 
 uses that source element. If it evaluates to false it skips it, even 
 though it could (in theory) open the movie. For example, one of our layout 
 tests [1] has the following:

 <video controls>
    <source src="content/error.mpeg" media="print">
    <source src="content/error2.mpeg" media="screen and (min-device-width: 8px)">
    <source src="content/test.mp4" media="screen and (min-device-width: 100px)">
 </video>

  The test fails if the video element is instantiated with anything but 
 test.mp4.

 This seems extremely useful. How many media features are implemented?

 Currently, though, the CSS3 Media Query spec doesn't cover enough
 metadata to make this as useful as it could be.

 On Mon, Dec 14, 2009 at 1:54 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
 Indeed it seems to me the solution to the quality problem should
 then be done through the media attribute. I am not sure yet how to,
 because we have no definition for what a low quality or high
 quality video is other than some form of SD vs HD and lower
 resolution vs higher resolution and lower bandwidth vs higher
 bandwidth.

 Well, we could certainly define them as they'd be defined *today*, but
 as HD becomes more and more commonplace it will effectively stop being
 high definition, even if the name sticks. And, what one person
 considers low quality, another person may consider high quality,
 and vice-versa, depending on the capabilities of their machine and the
 type of content they're used to seeing. Which is why it doesn't make
 sense to specify absolutes, and why I proposed using relative values.

 If we're to be more granular though, the biggest barrier to
 implementation is the fact that, as you said, video is
 multi-dimensional: there are MANY different factors that can affect
 quality, viewing preference, and/or playback compatibility. Here is a
 non-comprehensive list off the top of my head:

 * Aspect Ratio (or Width and Height)
 * Pixel Aspect Ratio (or Relative Pixel Width and Relative Pixel Height)
 * Display Aspect Ratio (or AR / W & H and PAR / PW & PH)
 * Content Aspect Ratio (or Content Width and Content Height)
 * Sample Rate (or Rate and Scale)
 * Bit Rate
 * Frame Type or Picture Type, i.e. spatial/temporal compression technology
 * Frame Scan Type (Progressive / Interlaced)
 * Field Order (None / Top Field First / Bottom Field First)
 * Colorimetry
 * Codec
 * Container Format
 * Telecine Pattern
 * Dimensionality (2D, 3D, Panoramic, etc.)
 * Audio Channels
 * Audio Sample Rate
 * Audio Bit Rate
 * Audio Codec

 Even if all of this metadata were converted to media features, most
 people wouldn't even know what most of it means. There is a tremendous
 amount of misinformation out there about video technology; when it's
 accurate it is almost always confusing. Even as a digital video
 professional, I often have to explain a lot of these concepts to my
 peers. So unless HTML authors are also fairly technical people with
 video backgrounds, chances are most of it wouldn't end up being used.

 Still though, implementing even just a subset of these would improve
 media selection by a lot.


I would almost consider simply using low quality and high quality
as quality distinguishers (and maybe medium) and leave the actual
choice of encoding to the hosting entity. Right now, many sites provide
only two choices for Desktop: SD and HD, plus one for mobile. The
device can be separated by the device-width as Eric described. SD and
HD - while also changing between aspect ratio - are mostly a choice
between lower bandwidth use and higher bandwidth use, which are taken
as equivalent to low and high quality by users. Since there will
likely be a higher bitrate HD version joining in the future, it will
then turn into SD, HD and HD2 - which equates to low, medium, high.
Over time, SD will fall aside and leave medium and high. Then, if
another higher quality comes in, they can be redefined to low and
medium.

Thus, keeping these fuzzy specifiers, we stay future-proof and leave
the actual choice of what low and high means to the respective
hosting site, which will make the format choice according to current
standards.

I'd prefer giving actual levels (low, medium, and high) rather
than a number between 0 and 1, because they make it comparable between
hosting sites. If I choose to have low on YouTube, I will likely
want low on Dailymotion and Hulu, even if those sites decided to use
completely different encoding parameters for their low and high
quality versions.

Cheers,
Silvia.


Re: [whatwg] Quality Values for Media Source Elements

2009-12-14 Thread Aryeh Gregor
On Mon, Dec 14, 2009 at 2:59 AM, Hugh Guiney hugh.gui...@gmail.com wrote:
 JavaScript is a crutch that far too many applications are relying on
 for major functionality lately. JavaScript should enhance a Web
 experience, not supplant it.

It depends on the application.  But in any event, HTML can never
possibly do everything JavaScript does, so at some point the answer
needs to be use JavaScript.

 If no one uses content negotiation then there is no need to have the
 source element at all.

I don't follow.  If authors *were* willing to use content negotiation,
to the contrary, there would be no need for source.  You could just
write <video src="foo"></video> in your markup, and configure your
server to serve foo.mp4 or foo.ogg depending on the incoming HTTP
headers.  But nobody wants to do that, so the configuration has to be
done in HTML instead.

 XHTML and HTML are interchangeable with any other two technologies in
 that example. PDF and Word, HTML and RSS, RSS and XHTML... the point
 isn't whether most site authors are offering those two in particular;
 the point is that on a platform that supports content negotiation, it
 makes no sense to outsource it to another technology, making authors
 reinvent the wheel simply because not enough people are using wheels.

It makes sense to reinvent the wheel if the wheel is square.  HTTP
content negotiation is a square wheel.

 It's simple for an end-user; not necessarily so simple for authors to 
 implement.

Two or three different URLs with different versions of the resource
would do it.  You don't even need JavaScript.  But if you do use
JavaScript, it should be as simple as

<a href="lower_quality.html"
onclick="document.getElementById('myvideo').src =
'lower-quality.ogg'">Lower-quality version</a>

(disclaimer: not tested).


Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Aryeh Gregor
On Sat, Dec 12, 2009 at 11:40 PM, Hugh Guiney hugh.gui...@gmail.com wrote:
 With the exception that Flash does not need separate components to be
 active to sustain that functionality. You can toggle quality in Flash
 without any server- or client-side scripts at all. You may need
 ActionScript in some cases, but that's an integral part of Flash,
 whereas JavaScript, PHP, etc. are not integral parts of HTML.

JavaScript is an integral part of HTML to all intents and purposes.
HTML itself does not and should not try to cover use-cases that are
already adequately covered by HTML+JavaScript -- there will always be
things that are better handled by a general-purpose scripting
language.  Of course, moving something into HTML might be valuable
because it makes the feature easier for authors to use, but that needs
to be weighed against the cost of browsers having to implement it
rather than some other feature.

 But that is exactly how content negotiation in HTTP already works.

Well, yes.  On the other hand, almost nobody actually uses content
negotiation, so I don't think that supports your case.

 I for one would rather not go to such trouble. Can you imagine going
 to every site you visit and specifying that you want XHTML instead of
 HTML, rather than just specifying
"application/xhtml+xml;q=1.0,text/html;q=0.0" in your request
 headers?

Well, no, because there's almost no functional difference between
XHTML and HTML except that the former is more likely to break due to
typos or minor bugs.  Plus, virtually no site actually provides both
XHTML and HTML.  Actually, virtually no site provides real XHTML at
all.  So I don't bother specifying a preference for either.  If you
do, I rather suspect that makes you one of a few hundred people at
most, out of billions of web users.  So maybe you could pick an
analogy that's more realistic?

On the other hand, every single video site already does allow you to
specify quality, and I've never had a problem with this.  It's a
simple control that's only there when you want it, and you can easily
figure out if you actually want higher or lower quality in any given
case.


Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Tab Atkins Jr.
Wasn't there talk of adding a @media attribute to video which could,
among other things, hold bitrate information which would allow the UA
to auto-determine whether it should play a file?

This would require a change to the current selection algorithm, as the
UA now has to make a 'best guess' of which file to use rather than
just choosing the first which works, but it's probably worth it.
(Plus the other benefits of @media, such as declaring that a
particular source has subtitles burned in, etc.)

~TJ


Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Silvia Pfeiffer
There are many things that we would want to add to the source
element to allow for a better choice between the different source
files that are linked, but the biggest problem is that it is currently
only used to go through from top to bottom until the first file is
found that can be played back - then the source selection stops. Even
this is already quite a difficult algorithm.

Extending source to choose between alternative source files based on
other aspects such as quality, screen size, contains captions,
contains audio descriptions, etc. isn't going to work with the current
way that the source element is set up. This is why the @media
attribute hasn't been used/implemented anywhere yet: it contradicts
the current algorithm for source selection. And in discussion with the
browser developers who have implemented the element I hear that it's
complex enough as it is and overloading the algorithm further is
impossible.

I've been wondering if there is another way.

The analogy with the source selection algorithm for mime types on a
server doesn't work well, because there is only one dimension upon
which to choose a source file: mime type. Here, we have several
dimensions, making any automated choice a challenge.
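To see why several dimensions make an automated choice hard, consider
this purely hypothetical scoring sketch: whatever weighting the UA
picks is a policy decision, which is exactly what a selection
algorithm in a spec cannot easily pin down:

```javascript
// Hypothetical multi-dimensional scoring; the axes and the equal
// weights are arbitrary choices made for illustration.
function scoreSource(source, prefs) {
  let score = 0;
  if (source.bitrate <= prefs.maxBitrate) score += 1;
  if (source.width <= prefs.deviceWidth) score += 1;
  if (source.captions === prefs.wantCaptions) score += 1;
  return score; // equal weights: one of many defensible policies
}

const candidates = [
  { src: "hd.ogv", bitrate: 5000, width: 1280, captions: false },
  { src: "sd.ogv", bitrate: 1200, width: 640,  captions: true },
];
const prefs = { maxBitrate: 2000, deviceWidth: 800, wantCaptions: true };
const best = candidates.reduce((a, b) =>
  scoreSource(b, prefs) > scoreSource(a, prefs) ? b : a);
// best.src === "sd.ogv": it wins on all three axes here.
```

With one dimension (MIME type) the first playable candidate wins; with
several, ties and trade-offs appear and some such policy is forced.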

Cheers,
Silvia.

On Mon, Dec 14, 2009 at 2:28 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Wasn't there talk of adding a @media attribute to video which could,
 among other things, hold bitrate information which would allow the UA
 to auto-determine whether it should play a file?

 This would require a change to the current selection algorithm, as the
 UA now has to make a 'best guess' of which file to use rather than
 just choosing the first which works, but it's probably worth it.
 (Plus the other benefits of @media, such as declaring that a
 particular source has subtitles burned in, etc.)

 ~TJ



Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Eric Carlson

On Dec 13, 2009, at 8:12 PM, Silvia Pfeiffer wrote:

 Oh! What are you doing with it? I mean - have the values in the media
 attribute any effect on the video element?
 
  Certainly! WebKit evaluates the query in the 'media' attribute if it believes 
it can handle the MIME type. If the query evaluates to true, it uses that 
source element. If it evaluates to false it skips it, even though it could 
(in theory) open the movie. For example, one of our layout tests [1] has the 
following:

<video controls>
<source src="content/error.mpeg" media="print">
<source src="content/error2.mpeg" media="screen and (min-device-width: 8px)">
<source src="content/test.mp4" media="screen and (min-device-width: 100px)">
</video>

  The test fails if the video element is instantiated with anything but 
test.mp4.

  I have seen 'media' used on real-world pages with something like the 
following to select different movies for the iphone and desktop:

<video controls>
<source src='desktop-video.mp4' media="@media screen and (min-device-width: 481px)">
<source src='iphone-video.mp4' media="@media screen and (min-device-width: 480px)">
</video>

  This works because the source elements are evaluated in order, so the first 
one is selected on the desktop where both queries will evaluate to true.
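The in-order behaviour described here can be modelled as a first-match
scan (a sketch of the described behaviour, not WebKit's actual
implementation):

```javascript
// First source whose MIME type is playable AND whose media query
// evaluates to true is selected; later candidates are never reached.
function selectSource(sources, canPlayType, queryMatches) {
  for (const s of sources) {
    if (!canPlayType(s.type)) continue;
    if (s.media && !queryMatches(s.media)) continue;
    return s;
  }
  return null;
}

// On a wide desktop screen both min-device-width queries are true,
// so the desktop source wins purely by document order.
const picked = selectSource(
  [
    { src: "desktop-video.mp4", type: "video/mp4", media: "screen and (min-device-width: 481px)" },
    { src: "iphone-video.mp4",  type: "video/mp4", media: "screen and (min-device-width: 480px)" },
  ],
  (type) => type === "video/mp4",
  () => true // stand-in for a real media query evaluator
);
// picked.src === "desktop-video.mp4"
```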

eric

[1] 
http://trac.webkit.org/browser/trunk/LayoutTests/media/video-source-media.html?format=txt


 Thanks,
 Silvia.
 
 On Mon, Dec 14, 2009 at 2:43 PM, Eric Carlson eric.carl...@apple.com wrote:
 
 On Dec 13, 2009, at 2:35 PM, Silvia Pfeiffer wrote:
 
 This is why the @media attribute hasn't been used/implemented anywhere yet
 
  Are you saying that nobody has implemented the media attribute on 
 source? If so, you are incorrect as WebKit has had this for almost two 
 years.
 
 eric
 
 



Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Silvia Pfeiffer
Ah that's excellent. I was under the impression that all
implementations so far are ignoring the media attribute in the
selection algorithm. But it seems I am mistaken. Do all browsers
implement this support then? And can we put the examples below into
the specification?

Indeed it seems to me the solution to the quality problem should
then be done through the media attribute. I am not sure yet how to,
because we have no definition for what a low quality or high
quality video is other than some form of SD vs HD and lower
resolution vs higher resolution and lower bandwidth vs higher
bandwidth.

Regards,
Silvia.

On Mon, Dec 14, 2009 at 4:12 PM, Eric Carlson eric.carl...@apple.com wrote:

 On Dec 13, 2009, at 8:12 PM, Silvia Pfeiffer wrote:

 Oh! What are you doing with it? I mean - have the values in the media
 attribute any effect on the video element?

  Certainly! WebKit evaluates the query in the 'media' attribute if it 
 believes it can handle the MIME type. If the query evaluates to true, it uses 
 that source element. If it evaluates to false it skips it, even though it 
 could (in theory) open the movie. For example, one of our layout tests [1] 
 has the following:

 <video controls>
    <source src="content/error.mpeg" media="print">
    <source src="content/error2.mpeg" media="screen and (min-device-width: 8px)">
    <source src="content/test.mp4" media="screen and (min-device-width: 100px)">
 </video>

  The test fails if the video element is instantiated with anything but 
 test.mp4.

  I have seen 'media' used on real-world pages with something like the 
 following to select different movies for the iphone and desktop:

 <video controls>
    <source src='desktop-video.mp4' media="@media screen and (min-device-width: 481px)">
    <source src='iphone-video.mp4' media="@media screen and (min-device-width: 480px)">
 </video>

  This works because the source elements are evaluated in order, so the 
 first one is selected on the desktop where both queries will evaluate to true.

 eric

 [1] 
 http://trac.webkit.org/browser/trunk/LayoutTests/media/video-source-media.html?format=txt


 Thanks,
 Silvia.

 On Mon, Dec 14, 2009 at 2:43 PM, Eric Carlson eric.carl...@apple.com wrote:

 On Dec 13, 2009, at 2:35 PM, Silvia Pfeiffer wrote:

 This is why the @media attribute hasn't been used/implemented anywhere yet

  Are you saying that nobody has implemented the media attribute on 
 source? If so, you are incorrect as WebKit has had this for almost two 
 years.

 eric






Re: [whatwg] Quality Values for Media Source Elements

2009-12-13 Thread Hugh Guiney
On Sun, Dec 13, 2009 at 7:26 AM, Aryeh Gregor simetrical+...@gmail.com wrote:
 JavaScript is an integral part of HTML to all intents and purposes.
 HTML itself does not and should not try to cover use-cases that are
 already adequately covered by HTML+JavaScript -- there will always be
 things that are better handled by a general-purpose scripting
 language.  Of course, moving something into HTML might be valuable
 because it makes the feature easier for authors to use, but that needs
 to be weighed against the cost of browsers having to implement it
 rather than some other feature.

JavaScript is a crutch that far too many applications are relying on
for major functionality lately. JavaScript should enhance a Web
experience, not supplant it.

 Well, yes.  On the other hand, almost nobody actually uses content
 negotiation, so I don't think that supports your case.

If no one uses content negotiation then there is no need to have the
source element at all.

 Well, no, because there's almost no functional difference between
 XHTML and HTML except that the former is more likely to break due to
 typos or minor bugs.  Plus, virtually no site actually provides both
 XHTML and HTML.  Actually, virtually no site provides real XHTML at
 all.  So I don't bother specifying a preference for either.  If you
 do, I rather suspect that makes you one of a few hundred people at
 most, out of billions of web users.  So maybe you could pick an
 analogy that's more realistic?

XHTML and HTML are interchangeable with any other two technologies in
that example. PDF and Word, HTML and RSS, RSS and XHTML... the point
isn't whether most site authors are offering those two in particular;
the point is that on a platform that supports content negotiation, it
makes no sense to outsource it to another technology, making authors
reinvent the wheel simply because not enough people are using wheels.

 On the other hand, every single video site already does allow you to
 specify quality, and I've never had a problem with this.  It's a
 simple control that's only there when you want it, and you can easily
 figure out if you actually want higher or lower quality in any given
 case.

It's simple for an end-user; not necessarily so simple for authors to implement.

On Sun, Dec 13, 2009 at 5:35 PM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
 The analogy with the source selection algorithm for mime types on a
 server doesn't work well, because there is only one dimension upon
 which to choose a source file: mime type. Here, we have several
 dimensions, making any automated choice a challenge.

I do agree with this.

On Mon, Dec 14, 2009 at 12:12 AM, Eric Carlson eric.carl...@apple.com wrote:
  Certainly! WebKit evaluates the query in the 'media' attribute if it 
 believes it can handle the MIME type. If the query evaluates to true, it uses 
 that source element. If it evaluates to false it skips it, even though it 
 could (in theory) open the movie. For example, one of our layout tests [1] 
 has the following :

 <video controls>
    <source src="content/error.mpeg" media="print">
    <source src="content/error2.mpeg" media="screen and (min-device-width: 8px)">
    <source src="content/test.mp4" media="screen and (min-device-width: 100px)">
 </video>

  The test fails if the video element is instantiated with anything but 
 test.mp4.

This seems extremely useful. How many media features are implemented?

Currently, though, the CSS3 Media Query spec doesn't cover enough
metadata to make this as useful as it could be.

On Mon, Dec 14, 2009 at 1:54 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
 Indeed it seems to me the solution to the quality problem should
 then be done through the media attribute. I am not sure yet how to,
 because we have no definition for what a low quality or high
 quality video is other than some form of SD vs HD and lower
 resolution vs higher resolution and lower bandwidth vs higher
 bandwidth.

Well, we could certainly define them as they'd be defined *today*, but
as HD becomes more and more commonplace it will effectively stop being
high definition, even if the name sticks. And, what one person
considers low quality, another person may consider high quality,
and vice-versa, depending on the capabilities of their machine and the
type of content they're used to seeing. Which is why it doesn't make
sense to specify absolutes, and why I proposed using relative values.

If we're to be more granular though, the biggest barrier to
implementation is the fact that, as you said, video is
multi-dimensional: there are MANY different factors that can affect
quality, viewing preference, and/or playback compatibility. Here is a
non-comprehensive list off the top of my head:

* Aspect Ratio (or Width and Height)
* Pixel Aspect Ratio (or Relative Pixel Width and Relative Pixel Height)
* Display Aspect Ratio (or AR / W & H and PAR / PW & PH)
* Content Aspect Ratio (or Content Width and Content Height)
* Sample Rate (or Rate and Scale)
* Bit Rate

[whatwg] Quality Values for Media Source Elements

2009-12-12 Thread Hugh Guiney
Hey all,

So, in my first foray into preparing Theora/Vorbis content, for use
with video, I realized that I wasn't sure with what settings to
encode my materials. Should I:

A.) Supply my visitors with the best possible quality at the expense
of loading/playback speed for people on slower connections

B.) Just account for the lowest common denominator and give everyone a
low quality encode

or

C.) Go halfway and present a medium quality encode acceptable for
most people?

A. is not legacy-proof, B. is not future-proof, and C. is neither.
C. may sound like the most sensible solution, but even if I were to
put up something that worked for most people *right now*, as
computers become more capable and connections become faster, more
visitors are going to want higher-quality videos, meaning I'd have to
stay on top of the relevant trends and update my pages accordingly.

Ideally, I would like to be able to simply encode a few different
quality variations of the same file and serve each version to its
corresponding audience.

There are a few ways I could do this. One of the most obvious ways
would be to present different versions of the site, e.g. one for slow
connections and one for fast connections and have the user pick via
a splash page before entering, as was popular in '90s. But this is
almost certainly a faux pas today: it puts a wall between the user and
my content, and requires me to maintain two different versions of the
site. Hardly efficient.

Another way would be to itemize each version of the file in a list,
with details next to them such as frame and file size, so the user
could pick accordingly. While this would probably be fine for
downloads, it completely defeats the point of embedded media.

Alternatively, I could devise a script that prompts users for their
connection speed and/or quality preference, which (assuming they know
it) would then go through the available resources on the server and
return the version of the file I'd have allocated to that particular
response. But that would require either branching for every file
alternative of every video on my site in the script—or specifying the
quality in some other way that can be programmatically exploited;
perhaps using microdata, but then I'd be stuffing the fallback content
with name-value pairs, which isn't particularly accessible.

Or, I could invent my own HTTP header and try to get everyone to use
it. Which is a lot to do for something like this, and isn't guaranteed
to work.

None of these options seem particularly viable to me. Right now, the
HTML5 spec allows UAs to choose between multiple versions of a media
resource based on type. In the interest of making media more
accessible to users of varying bandwidth and processing power, and
easier to maintain for authors, I propose allowing the relative
quality of each resource to be specified for multiple-source media.

You will notice that in Flash animations, there is a context menu
option to change the rendered quality between "High", "Medium", and
"Low". Each setting degrades or upgrades the picture, and requires
less or more computing power to process respectively. Additionally,
some Flash video authors elect to construct their own quality
selection UI/scripting within the video itself, allowing them to have
a finer degree of control over the presentation of the image.

Similarly, YouTube has the ability to switch between standard quality,
high quality, and high definition videos based on users' preferences.
In the "Playback Setup" section of "Account Settings", you will find
the following options:

Video Playback Quality
Choose the default setting for viewing videos
* Choose my video quality dynamically based on the current connection speed.
* I have a slow connection. Never play higher-quality video.
* I have a fast connection. Always play higher-quality video when it's
available.

If HTML video is to compete with Flash, or become implemented on as
wide a scale as YouTube <http://www.youtube.com/html5>, it makes sense
to allow for some sort of quality choice mechanism, as users will have
come to expect that functionality.

This could be done by allowing an attribute on source elements that
takes a relative value, such as (or similar to) those specified in
HTTP <http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.9>.
This attribute could be called "quality" or "qvalue" or just "q" (my
personal preference would be in that order, decreasing), and be used
as such:

<video controls>
  <source src='video-hd.ogv' quality='1.0' type='video/ogg; codecs="theora, vorbis"'>
  <source src='video-hq.ogv' quality='0.5' type='video/ogg; codecs="theora, vorbis"'>
  <source src='video-sd.ogv' type='video/ogg; codecs="theora, vorbis"'>
</video>

In this case, video-hd.ogv (a high definition encode) would be the
author's preferred version; video-hq.ogv (a high quality standard
definition encode) would be less preferred than video-hd.ogv but more
preferred than video-sd.ogv; and video-sd.ogv (a standard definition
encode), carrying no explicit quality value, would be the fallback.

Re: [whatwg] Quality Values for Media Source Elements

2009-12-12 Thread Nils Dagsson Moskopp
Hugh Guiney <hugh.gui...@gmail.com> wrote on Sat, 12 Dec 2009
21:32:30 -0500:

 Hey all,

Hey,

 Ideally, I would like to be able to simply encode a few different
 quality variations of the same file and serve each version to its
 corresponding audience.

The <source> element was invented for this.

 There are a few ways I could do this. One of the most obvious ways
 would be to present different versions of the site, e.g. one for slow
 connections and one for fast connections and have the user pick via
 a splash page before entering, as was popular in the '90s. But this is
 almost certainly a faux pas today: it puts a wall between the user and
 my content, and requires me to maintain two different versions of the
 site. Hardly efficient.

This is correct. You don't prompt users for different resolutions
either nowadays.

 If HTML video is to compete with Flash, or become implemented on as
 wide a scale as YouTube http://www.youtube.com/html5, it makes sense
 to allow for some sort of quality choice mechanism, as users will have
 come to expect that functionality.

What about a simple drop-down beside an embedded media resource?

 This could be done by allowing an attribute on source elements that
 takes a relative value, such as (or similar to) those specified in
 HTTP http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.9.
 This attribute could be called quality or qvalue or just q (my
 personal preference would be it that order decreasing), and be used as
 such:

Specifying a bit rate would be vastly more appropriate IMO.


Cheers,
-- 
Nils Dagsson Moskopp // erlehmann
http://dieweltistgarnichtso.net




Re: [whatwg] Quality Values for Media Source Elements

2009-12-12 Thread Aryeh Gregor
On Sat, Dec 12, 2009 at 9:32 PM, Hugh Guiney <hugh.gui...@gmail.com> wrote:
 So, in my first foray into preparing Theora/Vorbis content, for use
 with <video>, I realized that I wasn't sure with what settings to
 encode my materials. Should I:

 A.) Supply my visitors with the best possible quality at the expense
 of loading/playback speed for people on slower connections

 B.) Just account for the lowest common denominator and give everyone a
 low quality encode

 or

 C.) Go halfway and present a medium quality encode acceptable for
 most people?

The usual tactic taken by popular video sites today is to provide
multiple quality levels, serve one by default, and give the user an
option to choose their preferred quality level.  For Flash video, you
might use some kind of Flash script, while for HTML video, you'd use
JavaScript and/or hyperlinks, but the effect is pretty much the same.
HTML video seems to be precisely on par with Flash video in this
regard right now.
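The JavaScript-switching approach described above can be sketched as
follows (the function name and the seek-restore detail are my
assumptions; real code would wait for the 'loadedmetadata' event
before restoring the playback position):

```javascript
// Sketch of per-site quality switching for HTML video: swap the
// element's source when the user picks a quality from a drop-down,
// preserving the playback position across the swap.
function switchQuality(video, newSrc) {
  var t = video.currentTime; // remember where the user was
  video.src = newSrc;        // point the element at the other encode
  video.load();              // tell the UA to re-fetch the resource
  video.currentTime = t;     // seek back to the remembered position
  return video;
}
```

Wired to a <select>'s change handler, this gives exactly the per-site
quality selector that Flash video sites implement today.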

 <video controls>
   <source src='video-hd.ogv' quality='1.0' type='video/ogg; codecs="theora, vorbis"'>
   <source src='video-hq.ogv' quality='0.5' type='video/ogg; codecs="theora, vorbis"'>
   <source src='video-sd.ogv' type='video/ogg; codecs="theora, vorbis"'>
 </video>

 . . .

 The UA could then have a playback setup that would allow the user to
 specify how it should handle content negotiation for multiple-source
 media. This could be based solely on the quality attribute if
 provided, or if @type is also provided, also based on what
 content-type the user prefers.

I don't think the proposed syntax is useful if you use a
floating-point number with no fixed scale for quality.  Different
sites would use the same number to mean different things, so users
couldn't usefully specify a global preference.  A bitrate or something
would make more sense.
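For concreteness, a bitrate-based variant of the earlier markup might
look like this; the bitrate attribute and its kilobits-per-second unit
are purely hypothetical, not part of HTML5:

```html
<video controls>
  <!-- 'bitrate' is a hypothetical attribute, in kilobits per second -->
  <source src='video-hd.ogv' bitrate='5000' type='video/ogg; codecs="theora, vorbis"'>
  <source src='video-hq.ogv' bitrate='1200' type='video/ogg; codecs="theora, vorbis"'>
  <source src='video-sd.ogv' bitrate='400' type='video/ogg; codecs="theora, vorbis"'>
</video>
```

Unlike a free-floating q-value, the numbers here would mean the same
thing on every site, so a UA could match them against a measured or
user-stated connection speed.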

I'm not sure the benefit of permitting quality preferences to be set
across all sites would end up being worth it.  Users are probably
happy enough setting them per site, especially since different sites
might have better or worse video for a given bitrate (or any
artificial quality metric you might think up).  At best, I'd think
this falls into the "don't consider for addition to spec until
browsers have implemented what's already there" category.


Re: [whatwg] Quality Values for Media Source Elements

2009-12-12 Thread Aryeh Gregor
On Sat, Dec 12, 2009 at 9:57 PM, Nils Dagsson Moskopp
<nils-dagsson-mosk...@dieweltistgarnichtso.net> wrote:
 Hugh Guiney <hugh.gui...@gmail.com> wrote on Sat, 12 Dec 2009
 21:32:30 -0500:
 Ideally, I would like to be able to simply encode a few different
 quality variations of the same file and serve each version to its
 corresponding audience.

 The <source> element was invented for this.

No, <source> is used for providing entirely different formats, not
different files of the same format.  The browser will use the first
source it can play at all, so having multiple sources of the same
format is pointless.  You'd have to either serve different raw HTML to
begin with (e.g., different URLs for different qualities), or use JS
to switch the element's sources when the user changes the quality.
This seems to be more or less the same as what you need to do with
Flash today.