Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Ian Hickson
On Tue, 30 Jun 2009, Matthew Gregan wrote:
 
 Is there any reason why PCM in a Wave container has been removed from 
 HTML 5 as a baseline for audio?

Having removed everything else in these sections, I figured there wasn't 
that much value in requiring PCM-in-Wave support. However, I will continue 
to work with browser vendors directly and try to get a common codec at 
least for audio, even if that is just PCM-in-Wave.


 The reason for not selecting a video codec doesn't seem to have much 
 weight when considering Ogg Vorbis as a required audio codec.

Unfortunately, the reasons don't really matter at the end of the day. If 
they don't implement it, they don't implement it. :-(

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Kristof Zelechovski
Even if Apple decides to implement Ogg Theora, iPod users will still get
QuickTime served, and will get a better rendering, because the common codec is
the failsafe solution and will be specified as the last one.  This phenomenon
is to be expected on any platform, not just Apple's.  I cannot see how
this effect can be perceived as diminishing the significance of the HTML
specification, however.  I believe proprietary codecs will always be better
than public-domain codecs, until hardware development makes this question
irrelevant, because this application requires a large investment in
research.
I understand that the reason for rejecting MPEG-1 as a fallback mechanism is
that the servers will not serve it because of increased bandwidth usage,
right?
Cheers,
Chris



Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Ian Hickson
On Tue, 30 Jun 2009, Kristof Zelechovski wrote:

 I understand that the reason for rejecting MPEG-1 as a fallback mechanism is
 that the servers will not serve it because of increased bandwidth usage,
 right?

Right.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Silvia Pfeiffer
Hi Ian,

I have just posted a detailed reply on your email to public-html
(http://lists.w3.org/Archives/Public/public-html/2009Jun/0830.html),
so let me not repeat myself, but only address the things that I
haven't already addressed there.


On Tue, Jun 30, 2009 at 2:50 PM, Ian Hickson <i...@hixie.ch> wrote:
 I considered requiring Ogg Theora support in the spec, since we do have
 three implementations that are willing to implement it, but it wouldn't
 help get us true interoperabiliy, since the people who are willing to
 implement it are willing to do so regardless of the spec, and the people
 who aren't are not going to be swayed by what the spec says.

Inclusion of a required baseline codec in a standard speaks more
loudly than you may think. It provides confidence - confidence that an
informed choice has been made as to the best solution in a given
situation: confidence to Web developers, to hosting providers, and
also (but less so, since they are the gatekeepers in this situation)
to Browser Vendors.

In my opinion, including a baseline codec requirement in a W3C
specification, even one not supported by all Browser Vendors, is much
preferable to an unclear situation where people are forced to gather
their own information and make a decision on what to choose based on
potentially very self-interested and one-sided
reasons/recommendations.

In fact, it is a tradition of HTML to have specifications that are at
first only supported by a limited set of Browser Vendors and only over
time increasingly supported by all - e.g. how long did it take for all
browser vendors to accept CSS2, and many of the smaller features of
HTML4 such as fixed positioning?

I firmly believe that the decision to give up on a baseline codec
repeats a mistake that has been made before and is repeatedly cited as
one: the failure to specify a baseline format for images, which is one
of the reasons it took years for two baseline image codecs to become
available in all browsers. We could try the other route for a change
and see if standards can actually make a difference to adoption.


 Going forward, I see several (not mutually exclusive) possibilities, all
 of which will take several years:

  1. Ogg Theora encoders continue to improve. Off-the-shelf hardware Ogg
    Theora decoder chips become available. Google ships support for the
    codec for long enough without getting sued that Apple's concern
    regarding submarine patents is reduced. => Theora becomes the de facto
    codec for the Web.

This to me is a defeat of the standardisation process. Standards are
not there to wait for the market to come up with a de-facto standard.
They are there to provide confidence to the larger market about making
a choice - no certainty of course, but just that much more confidence
that it matters.


  2. The remaining H.264 baseline patents owned by companies who are not
    willing to license them royalty-free expire, leading to H.264 support
    being available without license fees. => H.264 becomes the de facto
    codec for the Web.

That could take many years.


 I would encourage proponents of particular codecs to attempt to address
 the points listed above, as eventually I expect one codec will emerge as
 the common codec, but not before it fulfills all these points:

OK, let me try to address these for Theora. The replies for Vorbis are
simply "yes" to each of these points.

  - is implementable without cost and distributable by anyone
Theora is.

  - has off-the-shelf decoder hardware chips available
"Decoder hardware" for video means that there are software libraries
available that use specific hardware in given chips to optimise
decoding. It is not a matter of hardware vendors inventing new
hardware to support Theora; it is a matter of somebody implementing
some code to take advantage of available hardware on specific
platforms. This is already starting to happen, and would increasingly
happen if Theora became the baseline codec.

  - is used widely enough to justify the extra patent exposure
This is a double requirement: firstly one has to quantify the extra
patent exposure, and independent of that is wide uptake.
We are now seeing wide uptake happening for Theora with Dailymotion,
Wikimedia, Archive.org and many small & medium size video platforms
(such as thevideobay, metavid, pad.me) taking it up.
As for the extra patent exposure - with every month that goes by, this
is shrinking. And obviously many players have already decided that the
extra patent exposure of Theora is acceptable, since already three
Browser Vendors are supporting Theora natively.

  - has a quality-per-bit high enough for large volume sites
Your main argument against Theora is a recent email stating that
YouTube could not be run using Theora. Several experiments with
current versions of the Theora encoder have demonstrated that this
statement was based on misinformation and not on fact. Until I see
facts confirming that YouTube would indeed 

Re: [whatwg] Codecs for audio and video

2009-06-30 Thread David Singer

Thank you, Ian, for the summary.

I just wanted to say that we're not happy with the situation.  We 
continue to monitor it, to take what action we can, and we continue 
to hope that we will, at some time, find a solution that reaches 
consensus.

--
David Singer
Multimedia Standards, Apple Inc.


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Mikko Rantalainen
Ian Hickson wrote:
 on the situation regarding codecs for video and audio in HTML5, I have 
 reluctantly come to the conclusion that there is no suitable codec that 
 all vendors are willing to implement and ship.
 
 I have therefore removed the two subsections in the HTML5 spec in which 
 codecs would have been required, and have instead left the matter 
 undefined, as has in the past been done with other features like img and 
 image formats, embed and plugin APIs, or Web fonts and font formats.
 
 The current situation is as follows:
 
Apple refuses to implement Ogg Theora in Quicktime by default (as used 
by Safari), citing lack of hardware support and an uncertain patent 
landscape.
 
Google has implemented H.264 and Ogg Theora in Chrome, but cannot 
provide the H.264 codec license to third-party distributors of 
Chromium, and have indicated a belief that Ogg Theora's quality-per-bit 
is not yet suitable for the volume handled by YouTube.
 
Opera refuses to implement H.264, citing the obscene cost of the 
relevant patent licenses.
 
Mozilla refuses to implement H.264, as they would not be able to obtain 
a license that covers their downstream distributors.
 
Microsoft has not commented on their intent to support video at all.

Short summary:

Theora is supported by everyone but Apple and Microsoft; H.264 can
only be supported (in theory) by Apple, Google and Microsoft because of
patent licensing.

Patent licensing issues aside, H.264 would be a better baseline codec
than Theora.

 I considered requiring Ogg Theora support in the spec, since we do have 
 three implementations that are willing to implement it, but it wouldn't 
 help get us true interoperabiliy, since the people who are willing to 
 implement it are willing to do so regardless of the spec, and the people 
 who aren't are not going to be swayed by what the spec says.

I don't know about Microsoft, but Apple has displayed willingness to
implement what specifications say (see http://acid3.acidtests.org/ for
example). By W3C standards a spec can reach REC status if it has at least
two implementations, and we already have three. The current HTML 5 spec
already has stuff not implemented by every vendor, so why should video be
different?

I'd suggest one of the two choices (I prefer the first one):

(1) Specify Theora as the baseline codec. Hopefully it will be tested by
an acid4 test (or by some other popular test) and Apple will either
implement it regardless of the assumed patent risks or find the actual
patent owners and acquire the required licenses for Theora to be
implemented by Apple. In the future, if Apple implements Theora, then
perhaps even Microsoft will do so, too.

(2) Specify {Theora or H.264} as the baseline. That way all vendors that
have displayed any interest in video could implement the spec.
Authors would be required to provide the video in both formats to be
sure that any spec-compliant user agent is able to display the content,
but at least there would be some real target set by the spec. However, I
think that this just moves the H.264 patent licensing issue from browser
vendors to content authors: if you believe that you cannot decode H.264
without a proper patent license, there's no way you could encode H.264
content without the very same license. As a result, many authors will
not be able to provide an H.264 variant -- and as a result Theora would
become the de facto standard in the future.
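[Editorial note: the dual-format approach in option (2) maps directly onto the source-element fallback mechanism that the HTML5 video element already defines: the user agent walks the source list in document order and plays the first resource whose type it can decode. A minimal sketch follows; the file names and codec parameter strings are illustrative, not taken from the thread.]

```html
<!-- The browser tries each source in order and plays the first format
     it supports; the trailing text is shown only by legacy browsers
     that do not implement the video element at all. -->
<video controls width="640" height="360">
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  Your browser does not support the video element.
</video>
```

Note that this only solves playback selection; as the post observes, it does nothing about the licensing and cost burden of encoding and storing every clip twice.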

-- 
Mikko





[whatwg] XHTML namespace and HTML elements

2009-06-30 Thread Olli Pettay

Hi,

I wonder what (and where) the reasons are to use the XHTML namespace also
with HTML elements.

The behavior causes a few issues, like
https://bugzilla.mozilla.org/show_bug.cgi?id=501312 and
http://www.w3.org/Bugs/Public/show_bug.cgi?id=6777 and
http://www.w3.org/Bugs/Public/show_bug.cgi?id=7059

And what are the problems if and when the null namespace is used with HTML
elements (as in Firefox <= 3.5)?

When script libraries need to check if some element is an (X)HTML
element, they could always use instanceof.

Perhaps this has been discussed earlier on this list, but I
couldn't find the relevant emails.

-Olli


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Jeff McAdams

Ian Hickson wrote:
I considered requiring Ogg Theora support in the spec, since we do have 
three implementations that are willing to implement it, but it wouldn't 
help get us true interoperabiliy, since the people who are willing to 
implement it are willing to do so regardless of the spec, and the people 
who aren't are not going to be swayed by what the spec says.


Ian, first off, thank you for your efforts to this point; your patience 
in the face of conflicting opinions has been awe-inspiring (and I'll 
certainly include my messages in the set of those requiring patience 
from you).


I feel I have to disagree a bit with what you say above, though.

Yes, clearly publishing the spec with a baseline codec specified isn't 
*sufficient* for definitively "get[ting] us true interoperabiliy" [sic], 
but it certainly does *help* get us true interoperability, in two ways 
that I can think of off the top of my head.


First, there is some inherent pressure to implement the spec. 
Again, some parties have indicated that this is not enough to get them to 
do so, but it does eliminate their ability to claim adherence to this 
standard while others are claiming it.  (Well, to truthfully claim it, 
anyway.  I don't think any of the parties involved here are unscrupulous 
enough to claim compliance when they don't actually comply because of the 
lack of this codec support, but other, non-engaged parties certainly 
might.)  Specifying a baseline codec takes away a marketing bullet point 
that could otherwise be used to sell their product while hurting 
interoperability.


Second, it gives us (people like me) an extra tool to go back to vendors 
and say, "Hey, please support HTML5, it's important to me, and the 
video tag, with the correct baseline codec support, is important to 
me."  Without the baseline codec being specified, we lose a lot of 
our leverage, as customers of companies that have said they won't 
support this, to push on them.  (I, personally, as a single data point, 
use a Mac, and mostly to this point use Safari, but have already made 
sure I've gotten the Firefox 3.5-mumble-not-yet-released that has the 
video tag support so that I can begin making use of it to some degree, 
and plan to do so more in the future.)  Certainly you, of all people, 
can appreciate the benefits to interoperability that we've seen through 
publication of the ACID tests.  No, they aren't full compliance tests, 
but look at the public pressure that has been brought to bear on browser 
makers by the public's awareness of them.  Look at how much better 
interoperability has gotten over the same period.  No, it's still not 
perfect, by a long shot, but at least now we're moving in the right 
direction.  Give us, the end users, the tools we need to help bring that 
pressure for interoperability to bear on the browser makers.




There is one thing that I'm not quite clear on, however.

You've said a couple of things that I perceive as contradictory here. 
You've said that you want to help bring about interoperability, but then 
you've also said that you're only documenting what it is that the 
browser makers are implementing.  There is room in the standards-bodies 
world for both goals, and both goals, at times, are valid and beneficial. 
But if your intent is to help bring about interoperability, *real* 
interoperability, then I think it's pretty clear that the way forward 
involves specifying a baseline codec.


Leaving such an important point of interoperability completely up to the 
whims of people out there seems unwise here (I look at MS's latest 
attempt at supporting ODF as a great example of how interoperability can 
actually be harmed, even by a complying implementation, when important 
parts of guidelines to interoperability are left out...there are plenty 
more examples).


I think it's nearly imperative that important points of interoperability 
contention such as this be specified, else it gives unscrupulous 
developers the ability to intentionally worsen interoperability and 
make the spec considerably less valuable by developing an 
implementation that is compliant, but not interoperable with anyone 
else ("Oh, I implemented video using animated gifs"... yes it's absurd, 
but someone could, at least in theory, claim compliance that way).  I 
would also point out that scrupulous developers could unintentionally 
worsen interoperability in the same way.  By allowing this opening, 
end-users see browsers that have the HTML5 stamp (figuratively), but 
their browsing experience suffers and they start to lose faith in the 
spec as actually meaning anything useful regarding the reliability of 
their browsing experience.


Again, thank you for your efforts, and add me to the camp of believing 
that the baseline codec is vitally important, even without all of the 
browser makers being willing (at least initially) to support it.


--
Jeff McAdams
je...@iglou.com


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread jjcogliati-whatwg



--- On Tue, 6/30/09, Mikko Rantalainen <mikko.rantalai...@peda.net> wrote:

 (2) Specify {Theora or H.264} as the baseline. That way all vendors
 that have displayed any interest in video could implement the spec.
 Authors would be required to provide the video in both formats to be
 sure that any spec-compliant user agent is able to display the
 content, but at least there would be some real target set by the
 spec. However, I think that this just moves the H.264 patent
 licensing issue from browser vendors to content authors: if you
 believe that you cannot decode H.264 without a proper patent license,
 there's no way you could encode H.264 content without the very same
 license. As a result, many authors will not be able to provide an
 H.264 variant -- and as a result Theora would become the de facto
 standard in the future.
 
 -- 
 Mikko
 
Specify {Theora or H.264} AND {Motion JPEG}. That way there is a fallback 
mechanism when you care more about compatibility than bandwidth and don't want 
to deal with the hassle of the H.264 patents.  Sometimes compatibility is more 
important than bandwidth. (HTML is a common method of putting content on 
CD-ROMs.)

Josh Cogliati



Re: [whatwg] XHTML namespace and HTML elements

2009-06-30 Thread Henri Sivonen

On Jun 30, 2009, at 15:11, Olli Pettay wrote:

I wonder what (and where) are the reasons to use XHTML namespace  
also with HTML elements.

The behavior causes few issues like
https://bugzilla.mozilla.org/show_bug.cgi?id=501312 and


A variant of this corner case already existed with attribute nodes. It  
seems to me that setting uppercase no-namespace attributes on the XML  
side, moving the node to an HTML document and getting the attributes  
on the other side has no use cases, so I think this isn't a problem in  
practice.



http://www.w3.org/Bugs/Public/show_bug.cgi?id=6777 and
http://www.w3.org/Bugs/Public/show_bug.cgi?id=7059


The patch that introduced this one unfortunate new special case to  
Gecko removed 20 instances of code dealing with the namespace duality  
and opened up the opportunity to eliminate 105 more such instances  
(all virtual calls; https://bugzilla.mozilla.org/show_bug.cgi?id=488249 ).



And what are the problems if and when null namespace is used with HTML
elements (like in =FF3.5).


I think having a tree with mixed HTML and XML-trait nodes is more  
confusing than the edge case from bug 501312. You can get such mixed  
trees in practice by having script code that uses 
createElementNS("http://www.w3.org/1999/xhtml", ...) in order to work 
with both HTML and XHTML.



When script libraries need to check if some element is an (X)HTML
element, they could always use instanceof.


There are also non-browser apps that don't run scripts at all and,  
therefore, don't need to implement the HTML-specific DOM Core deltas 
from http://www.whatwg.org/specs/web-apps/current-work/#apis-in-html-documents . 
Those apps benefit even more than browsers, since the above-parser 
differences between HTML and XHTML are abstracted away even more  
completely than in the UAs that have to support legacy existing scripts.


In general, I think maintaining differences between HTML and XHTML  
serves no useful purpose when it's not done to support existing content.


--
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/




Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Mike Shaver
On Tue, Jun 30, 2009 at 12:50 AM, Ian Hickson <i...@hixie.ch> wrote:
 Finally, what is Google/YouTube's official position on this?

 As I understand it, based on other posts to this mailing list in recent
 days: Google ships both H.264 and Theora support in Chrome; YouTube only
 supports H.264, and is unlikely to use Theora until the codec improves
 substantially from its current quality-per-bit.

It would be good to understand what the threshold for acceptability is
here; earlier reports on this mailing list have indicated that (on at
least the tested content) Theora can produce quality-per-bit that is
quite comparable to that of H.264 as employed by YouTube.  As one
organization investing, and invested, in the success of Theora,
Mozilla would be very glad to know so that we can help reach that
target.

Can one of the Google representatives here get a statement from
YouTube about the technical threshold here?  I think it could have
significant impact on the course of video on the web; perhaps more
than SHOULD language in HTML5 here.

I personally believe that putting codec requirements in the
specification could have significant market effects, because it would
take advantage of general market pressure for standards compliance.
As an example, if you put it in HTML5 then you could put it in ACID4,
and the ACID tests have historically been quite influential in driving
browser implementation choices.  Theora could get the same boost
NodeIterator has seen, I daresay to greater positive impact on the
web.

Mike


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Gregory Maxwell
On Tue, Jun 30, 2009 at 5:31 AM, Mikko Rantalainen
<mikko.rantalai...@peda.net> wrote:
[snip]
 Patent licensing issues aside, H.264 would be better baseline codec than
 Theora.

I don't know that I necessarily agree there.

H.264 achieves better efficiency (quality/bitrate) than Theora, but it
does so with greater peak computational complexity and memory
requirements on the decoder.

This isn't really a fault in H.264, it's just a natural consequence of
codec development. Compression efficiency will always be strongly
correlated to computational load.

So, I think there would be an argument today for including something
else as a *baseline* even in the absence of licensing.  (Though the
growth of computational power will probably moot this in the 15-20
years it will take for H.264 to become licensing clear)

Of course there are profiles, but they create a lot of confusion:
people routinely put out files that others have a hard time playing.
And of course, were it not for the licensing, Theora wouldn't exist,
but there would likely be many other codec alternatives with differing
CPU/bandwidth/quality tradeoffs.

I just wanted to make the point that there are other considerations
which have been ignored simply because the licensing issue is so
overwhelmingly significant, but if it weren't we'd still have many
things to discuss.

The subject does bring me to a minor nit on Ian's decent state of
affairs message:

One of the listed problems is lack of hardware support. I think Ian
may be unwittingly be falling to a common misconception.

This is a real issue, but it's being misdescribed: it would be more
accurately and clearly stated as "lack of software support on embedded
devices".  Although people keep using the word "hardware" in this
context, I believe that 999 times out of 1000 they are mistaken in
doing so.

As far as I, or anyone I've spoken to, can tell, no one is actually
doing H.264 decode directly in silicon, at least no one with a web
browser. The closest thing to that I see are small microcoded DSPs
which you buy pre-packaged with codec software and ready to go.  I'm
sure someone can correct me if I'm mistaken.

There are a number of reasons for this such as the rapid pace of codec
development vs ASIC design horizons, and the mode switching heavy
nature of modern codecs (H.264 supports many mixtures of block sizes,
for example) simply requiring a lot of chip real-estate if implemented
directly in hardware.

In some cases the DSP is proprietary and not sufficiently open for
other software. But at least in the mobile device market it appears to
be the norm to use an off the shelf general purpose DSP.

This is a very important point, because "the hardware doesn't support
it" sounds like an absolute deal breaker, while "no one has bothered
porting Theora to the TMS320c64x DSP embedded in the OMAP3 CPU used in
this handheld device" is an obviously surmountable problem.


In the future, when someone says "no hardware support" it would be
helpful to find out whether they are talking about actual hardware
support, or just something they're calling "hardware" because it's some
mysterious DSP running a vendor blob that they themselves aren't
personally responsible for programming... or whether they are just
regurgitating common wisdom.


Cheers!


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Mike Shaver
On Tue, Jun 30, 2009 at 10:43 AM, Gregory Maxwell <gmaxw...@gmail.com> wrote:
 No one has bothered
 porting Theora to the TMS320c64x DSP embedded in the OMAP3 CPU used in
 this handheld device is an obviously surmountable problem.

Unless I'm mistaken about the DSP in question, that work is in fact
underway, and should bear fruit in the next handful of months.

Mike


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Dr. Markus Walther


Ian Hickson wrote:
 On Tue, 30 Jun 2009, Matthew Gregan wrote:
 Is there any reason why PCM in a Wave container has been removed from 
 HTML 5 as a baseline for audio?
 
 Having removed everything else in these sections, I figured there wasn't 
 that much value in requiring PCM-in-Wave support. However, I will continue 
 to work with browser vendors directly and try to get a common codec at 
 least for audio, even if that is just PCM-in-Wave.

Please, please do so - I was shocked to read that PCM-in-Wave as the
minimal 'consensus' container for audio is under threat of removal, too.

Frankly, I don't understand why audio was drawn into this. Is there any
patent issue with PCM-in-Wave? If not, then IMHO the decision should be
orthogonal to video.

-- Markus


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Dr. Markus Walther
Gregory Maxwell wrote:
 PCM in wav is useless for many applications: you're not going to do
 streaming music with it, for example.

 It would work fine for sound effects...

The world in which web browsers live is quite a bit bigger than internet
and ordinary consumer use combined...

Browser-based intranet applications for companies working with
professional audio or speech are but one example. Please see my earlier
contributions to this list for more details.

 but it still is more code to
 support, a lot more code in some cases depending on how the
 application is layered even though PCM wav itself is pretty simple.
 And what exactly does PCM wav mean?  float samples? 24 bit integers?
 16bit? 8bit? ulaw? big-endian? 2 channel? 8 channel? Is a correct
 duration header mandatory?

To give one specific point in this matrix: 16-bit integer samples,
little-endian, 1 channel, correct duration header not mandatory.
This is relevant in practice in what we do. I can't speak for others.
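[Editorial note: for concreteness, the 44-byte canonical WAV header for exactly that point in the matrix (integer PCM, 16-bit samples, little-endian, one channel) can be sketched in a few lines. This is an added illustration, not code from the thread; the 44100 Hz sample rate is an assumed value, and unlike the use case above it does write a correct duration (data size) field.]

```javascript
// Build the 44-byte canonical RIFF/WAVE header for 16-bit little-endian
// mono integer PCM. numSamples is the number of 16-bit sample frames
// that will follow the header.
function wavHeader(numSamples, sampleRate = 44100) {
  const numChannels = 1;
  const bitsPerSample = 16;
  const blockAlign = numChannels * bitsPerSample / 8; // bytes per frame
  const byteRate = sampleRate * blockAlign;           // bytes per second
  const dataSize = numSamples * blockAlign;           // bytes of PCM data

  const header = Buffer.alloc(44);
  header.write('RIFF', 0, 'ascii');
  header.writeUInt32LE(36 + dataSize, 4);  // size of everything after this field
  header.write('WAVE', 8, 'ascii');
  header.write('fmt ', 12, 'ascii');
  header.writeUInt32LE(16, 16);            // fmt chunk size for plain PCM
  header.writeUInt16LE(1, 20);             // audio format 1 = integer PCM
  header.writeUInt16LE(numChannels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36, 'ascii');
  header.writeUInt32LE(dataSize, 40);      // the duration-bearing field
  return header;
}
```

The matrix Gregory asks about (float vs. integer, channel count, endianness) all lives in those few fmt-chunk fields, which is why pinning down one cell of it, as above, is cheap for an implementation.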

 It would be misleading to name a 'partial baseline'. If the document
 can't manage make a complete workable recommendation, why make one at
 all?

I disagree. Why insist on perfection here? In my view, the whole of HTML
5 as discussed here is about reasonable compromises that can be
supported now or pretty soon. As the browsers which already support PCM
wav (e.g. Safari, Firefox) show, it isn't impossible to get this right.

Regards,
-- Markus


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Kristof Zelechovski
Assuming bandwidth will increase with technological advance, it seems
unreasonable that the bandwidth issue is allowed to block fallback solutions
such as PCM within a specification that is expected to live longer than
three years from now.
IMHO,
Chris



Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Aryeh Gregor
On Tue, Jun 30, 2009 at 12:50 AM, Ian Hickson <i...@hixie.ch> wrote:
 I considered requiring Ogg Theora support in the spec, since we do have
 three implementations that are willing to implement it, but it wouldn't
 help get us true interoperabiliy, since the people who are willing to
 implement it are willing to do so regardless of the spec, and the people
 who aren't are not going to be swayed by what the spec says.

Why can't you make support for Theora and Vorbis a "should"
requirement?  That wouldn't be misleading, especially if worded right.
 It would serve as a hint to future implementers who might not be
familiar with this whole sordid format war.  It would also hopefully
help put more emphasis on Ogg and get more authors to view lack of Ogg
support as a deficiency or bug to be worked around, thus encouraging
implementers to support it.  It's only about two lines total -- what's
the downside?

Proselytism is a valid reason to add material to the spec, right?
Certainly I recall you mentioning that in the case of alt text -- you
didn't want to allow alt text to be omitted in general lest it
discourage authors from using it.  I think it's clear that of the two
contenders for video, Theora is a much closer fit to HTML 5's goal of
open standards and deserves whatever support is possible without
sacrificing other goals (like accuracy).


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Jeff McAdams

Peter Kasting wrote:
As a contributor to multiple browsers, I think it's important to note 
the distinctions between cases like Acid3 (where IIRC all tests were 
supposed to test specs that had been published with no dispute for 5 
years), much of HTML5 (where items not yet implemented generally have 
agreement-on-principle from various vendors) and this issue, where 
vendors have publicly refused to implement particular cases.  Particular 
specs in the first two cases represent vendor consensus, and when 
vendors discover problems during implementation the specs are changed.  
This is not a case where vendor consensus is currently possible (despite 
the apparently naive beliefs on the part of some who think the vendors 
are merely ignorant and need education on the benefits of codec x or y), 
and "just put it in the spec to apply pressure" is not a reasonable 
response.


I don't know that anyone has suggested putting it in the spec *only* to 
apply pressure on vendors.  Certainly that is an added "bonus" (I'll put 
that in quotes because not everyone will consider it a positive 
thing), and certainly doing so will achieve the goal of applying 
pressure.  I agree that putting it in the spec *only* to apply 
pressure on vendors is not reasonable, but considering it as an 
additional reason to put it in the spec is quite reasonable.


--
Jeff McAdams
je...@iglou.com


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Sam Kuper
2009/6/30 Peter Kasting <pkast...@google.com>
 On Jun 30, 2009 2:17 AM, Sam Kuper <sam.ku...@uclmail.net> wrote:
   2009/6/30 Silvia Pfeiffer <silviapfeiff...@gmail.com>
On Tue, Jun 30, 2009 at 2:50 PM, Ian Hickson <i...@hixie.ch> wrote:
I considered requiring Og...
 
  Right. Waiting for all vendors to support the specified codec would be like 
  waiting for them all to be Acid3 compliant. Better to specify how browsers 
  should behave (especially if it's how most of them will behave), and let 
  the stragglers pick up the slack in their own time under consumer pressure.
  Sam

 As a contributor to multiple browsers, I think it's important to note the 
 distinctions between cases like Acid3 (where IIRC all tests were supposed to 
 test specs that had been published with no dispute for 5 years), much of 
 HTML5 (where items not yet implemented generally have agreement-on-principle 
 from various vendors) and this issue, where vendors have publicly refused to 
 implement particular cases. [...]

I'd question, based on the following statements, whether your memory
of Acid3 is correct:

"Controversially, [Acid3] includes several elements from the CSS2
recommendation that were later removed in CSS2.1 but reintroduced in
W3C CSS3 working drafts that have not made it to candidate
recommendations yet."[1]

"The following standards are tested by Acid3: [...]
* SMIL 2.1 (subtests 75-76) [...]"[1]

SMIL 2.1 became a W3C Recommendation in December 2005.[2]

[1] http://en.wikipedia.org/wiki/Acid3
[2] 
http://en.wikipedia.org/wiki/Synchronized_Multimedia_Integration_Language#SMIL_2.1

So, there is some precedent for the W3C to publish specs/tests,
expecting browser vendors to catch up with them further down the line.

Sam


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Robert O'Callahan
On Wed, Jul 1, 2009 at 7:15 AM, Peter Kasting pkast...@google.com wrote:

 As a contributor to multiple browsers, I think it's important to note the
 distinctions between cases like Acid3 (where IIRC all tests were supposed to
 test specs that had been published with no dispute for 5 years), much of
 HTML5 (where items not yet implemented generally have agreement-on-principle
 from various vendors) and this issue, where vendors have publicly refused to
 implement particular cases.  Particular specs in the first two cases
 represent vendor consensus, and when vendors discover problems during
 implementation the specs are changed.


It's not true that all the specs tested in Acid3 represented vendor
consensus. For example, a lot of browser people were skeptical of the value
of SVG Animation (SMIL), but it was added to Acid3. That was a clear example
of something being implemented primarily because of pressure from
specifications and tests. It's true, though, that no-one flat-out refused to
implement it, so that situation isn't quite the same.

Personally I think it's appropriate to use specs to exert some pressure.
We've always done it. Flat-out refusal of a vendor to implement something is
a problem, but I assume there are limits to how much we allow that to affect
the process. If Microsoft suddenly announces they hate HTML5 and won't
implement any of it, would we just throw it all out?

If we are going to allow individual vendors to exert veto power, at least
lets make them accountable. Let's require them to make public statements
with justifications instead of passing secret notes to Hixie.

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Peter Kasting
* I didn't say 5 years from Rec status
* Acid3 was meant to be an illustrative example of a case where the test
itself was not intentionally introducing new behavior or attempting to force
consensus on unwilling vendors, not a perfect analogy to something

PK

On Jun 30, 2009 12:36 PM, Sam Kuper sam.ku...@uclmail.net wrote:

2009/6/30 Peter Kasting pkast...@google.com

 On Jun 30, 2009 2:17 AM, Sam Kuper sam.ku...@uclmail.net wrote:   
2009/6/30 Silvia Pfeiffe...
 As a contributor to multiple browsers, I think it's important to note the
distinctions between cases like Acid3 (where IIRC all tests were supposed to
test specs that had been published with no dispute for 5 years), much of
HTML5 (where items not yet implemented generally have agreement-on-principle
from various vendors) and this issue, where vendors have publicly refused to
implement particular cases. [...]

I'd question, based on the following statements, whether your memory
of Acid3 is correct:

"Controversially, [Acid3] includes several elements from the CSS2
recommendation that were later removed in CSS2.1 but reintroduced in
W3C CSS3 working drafts that have not made it to candidate
recommendations yet."[1]

"The following standards are tested by Acid3: [...]
   * SMIL 2.1 (subtests 75-76) [...]"[1]

SMIL 2.1 became a W3C Recommendation in December 2005.[2]

[1] http://en.wikipedia.org/wiki/Acid3
[2]
http://en.wikipedia.org/wiki/Synchronized_Multimedia_Integration_Language#SMIL_2.1

So, there is some precedent for the W3C to publish specs/tests,
expecting browser vendors to catch up with them further down the line.

Sam


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Jeff McAdams

Peter Kasting wrote:
There is no other reason to put a codec in the spec -- the primary 
reason to spec a behavior (to document vendor consensus) does not 
apply.  "Some vendors agreed, and some objected violently" is not 
consensus.


But "Most people agreed, and one or two vendors objected violently" 
probably is.  Just because one or two people are really loud doesn't 
mean that there isn't consensus.


I'm not saying that this is the case, here, but it is possible.

Also, I find the focus on vendors to the exclusion of other stakeholders 
a bit concerning.



--
Jeff McAdams
je...@iglou.com


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Sam Kuper
2009/6/30 Peter Kasting pkast...@google.com:
 * I didn't say 5 years from Rec status

No, you didn't; I was being generous. You said something much less
meaningful: "published with no dispute for 5 years". No dispute from
whom? Browser developers and web developers disputed aspects of
several of the standards under test in Acid3 during the 5 years
preceding its publication. Witness the divergence between different
browsers' implementations of ECMAScript and CSS; witness the different
approaches taken by web developers to handle them; witness disputed
elements like <q> and <cite>.

 * Acid3 was meant to be an illustrative example of a case where the test
 itself was not intentionally introducing new behavior

Well, it was intentionally testing whether a browser implemented specs
accurately. In some cases, browsers had to have new behaviour added in
order to do so.

 or attempting to force
 consensus on unwilling vendors [...]

I quote Wikipedia again: "Microsoft, developers of the Internet
Explorer browser, said that Acid3 does not map to the goal of Internet
Explorer 8 and that IE8 would improve only some of the standards being
tested by Acid3.[20] IE8 scores 20/100 and has some problems with
rendering the Acid3 test page."[1]

Similarly with Acid2 (released April 13 2005):

"In July 2005, Chris Wilson, the Internet Explorer Platform Architect,
stated that passing Acid2 was not a priority for Internet Explorer 7,
describing the test as a wish list of features rather than a true
test of standards compliance."[2]

The point of specs is to define how things *should* be. They are, by
nature, idealistic. Implementation may not be perfect or universal.
This has to be acknowledged, but it does not justify dropping an item
from the spec that several major browser vendors are willing to
support and that others would probably be willing to support once fears
of submarine patents have dissipated.

Sam

[1] http://en.wikipedia.org/wiki/Acid3
[2] In July 2005, Chris Wilson, the Internet Explorer Platform
Architect, stated that passing Acid2 was not a priority for Internet
Explorer 7, describing the test as a wish list of features rather
than a true test of standards compliance.


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Sam Kuper
2009/6/30 Sam Kuper sam.ku...@uclmail.net:
 [2] In July 2005, Chris Wilson, the Internet Explorer Platform [...]

That should have been:

[2] http://en.wikipedia.org/wiki/Acid2#Microsoft.27s_response


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Joshua Brickner

Jeff McAdams wrote:


Peter Kasting wrote:
There is no other reason to put a codec in the spec -- the primary  
reason to spec a behavior (to document vendor consensus) does not  
apply.  "Some vendors agreed, and some objected violently" is not  
consensus.


But "Most people agreed, and one or two vendors objected violently"  
probably is.  Just because one or two people are really loud,  
doesn't mean that there isn't consensus.


I'm not saying that this is the case, here, but it is possible.

Also, I find the focus on vendors to the exclusion of other  
stakeholders a bit concerning.

--
Jeff McAdams
je...@iglou.com


IMHO, the fundamental question here is whether the spec should  
be concerned solely with creating a standard that is satisfactory for  
implementers to follow, or whether it should go further and try to make  
the standard work well for everyone involved, including developers and  
consumers.


I am certain that most would like to have a standard that best serves  
the entire community.


Joshua Brickner
jos...@rocketjones.com



Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Aryeh Gregor
On Tue, Jun 30, 2009 at 3:15 PM, Peter Kasting pkast...@google.com wrote:
 This is not a case where vendor
 consensus is currently possible (despite the apparently naive beliefs on the
 part of some who think the vendors are merely ignorant and need education on
 the benefits of codec x or y), and just put it in the spec to apply
 pressure is not a reasonable response.

*Requiring* Ogg support might be a bit much, I agree, since it implies
that nobody would have a legitimate foreseeable reason for not
supporting it, and that's ungenerous at best.  But the spec could
still *encourage* Ogg support while acknowledging there may be reasons
not to have it.  Saying all implementations *should* support Ogg
Theora and Vorbis acknowledges that "there may exist valid reasons in
particular circumstances to ignore a particular item, but the full
implications must be understood and carefully weighed" before choosing
a different course.  This seems like a perfectly reasonable response
to me.


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Asbjørn Ulsberg

On Tue, 30 Jun 2009 06:50:31 +0200, Ian Hickson i...@hixie.ch wrote:


<video> itself supports multiple sources, so there's no need for
JavaScript to do this. But it does mean we end up with exactly the
situation we're in now, with different implementations supporting
different codecs and the spec not having any power over this.


Not really. Having the <video> element, even without a baseline codec,  
we're better off than today, where a horrible mix of JavaScript, <object>,  
<embed>, and/or conditional comments is the only way to get a  
cross-browser video solution working.


That is, of course, if Microsoft decides to implement <video>. If they  
don't, I assume <object> wrapped in <video> works just as well as nested  
<video> elements.
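
A hedged illustration of the markup being discussed — the codec strings
and file names here are mine, not from the thread:

```html
<!-- Sketch: a <video> element offering two encodings, with <object>
     as a fallback for browsers that don't implement <video> at all. -->
<video controls width="640" height="360">
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <object data="player.swf" type="application/x-shockwave-flash"
          width="640" height="360">
    <param name="movie" value="player.swf">
    Sorry, no supported video playback mechanism was found.
  </object>
</video>
```

The browser walks the <source> list in order and plays the first type it
can handle, which is why the absence of a baseline codec degrades, rather
than breaks, this markup.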



The next-best option is Ogg, that favors small independent content
producers.


That seems to be what Opera, Mozilla, and Chrome are implementing.


Then, isn't it better to have 3 out of 5 browsers adhering to the standard  
(requiring Ogg Theora support) than to have no requirement to adhere to at  
all? Having the standard as backing when pushing Microsoft and Apple to  
come to their senses gives more leverage than having nothing at all.


While neither Microsoft nor Apple will launch browsers that support Ogg  
Theora immediately after HTML5 reaches TR status, they might after a while  
of good old-fashioned bitching and nagging.


Seeing how just about all states on the planet are moving towards open  
standards support and implementation in national government law, I  
actually think HTML5 requiring Ogg Theora support will make a difference  
some years from now. If HTML5 requires support for Ogg Theora and  
Microsoft and Apple don't support it, it's likely that great forces like  
the EU Commission will react and force them into submission. If HTML5  
doesn't require support for any codec, there's not much the EU Commission  
or any other government can do.


As a little PS to all of this: on January 1st 2009, the Norwegian  
government made it mandatory for all video published on the web by  
government-funded projects and agencies in Norway to be in the Ogg Theora  
format. You're allowed to publish other formats as well, but Ogg Theora is  
the common baseline format in which everyone must publish.


For audio, the required format is Ogg Vorbis, and for text it's HTML, ODT  
or PDF depending on the type of document and interaction requirements.



The W3C is not only about web standards. It's also the road map. Right
now, that road map, where video is concerned, says the following: "User
agents may support any video and audio codecs and container formats." It
might as well say "Here be dragons." I think it's time, at the very
least, to say goodbye to single-company proprietary dreck. To say both
that existing international standards are OK for now, but the ideal as
currently expressed in the boxed copy under 3.12.7.1 is still not met.


Why is this the case for video but not images? We don't require a
particular image format for <img> either, but people know you can just  
use PNG and JPEG.


It is indeed the case for images as well, but the situation is, and was,  
different. None of the browser vendors had or have invested any  
considerable amount of time or money on any image format. That's not the  
case with video, where both Microsoft and Apple have invested a great deal.



MPEG-1 is nowhere near good enough at this point to be a serious
contender. There have been suggestions that even Theora isn't good enough
yet (for example, YouTube won't use Theora with the current state of
encoders), and it _far_ outperforms MPEG-1.


Indeed.

--
Asbjørn Ulsberg -=|=-  asbj...@ulsberg.no
«He's a loathsome offensive brute, yet I can't look away»


Re: [whatwg] XHTML namespace and HTML elements

2009-06-30 Thread Ian Hickson
On Tue, 30 Jun 2009, Olli Pettay wrote:
 
I wonder what (and where) the reasons are to use the XHTML namespace also
with HTML elements.

The main reason was simplification.

 * Consistency for scripts in HTML and XHTML. For example, a script can 
   now use createElementNS() in both without having to check the mode 
   first.

 * Consistency for CSS in HTML and XHTML.

 * Consistency for SVG features (e.g. scripting) across HTML 
   and XHTML now that we have SVG-in-HTML and SVG-in-XHTML.

 * Sanity of implementation. Browsers have had all kinds of weird 
   behaviour, acting one way in text/html and another in XML, while 
   wanting elements to have consistent behaviour in both.

 * A better-defined set of rules for handling mixing of XML and non-XML 
   nodes, e.g. when importing XHTML nodes from XMLHttpRequest'ed XML 
   documents into text/html documents.

...and so on.


 The behavior causes few issues like
 https://bugzilla.mozilla.org/show_bug.cgi?id=501312 and
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=6777 and
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=7059

These are really minor issues compared to the benefits.


 And what are the problems if and when the null namespace is used with 
 HTML elements (like in <=FF3.5)?

Mostly lack of consistency. Gecko actually used to do this like HTML5 
suggests, it was only changed because of a desire to match what was at the 
time thought to be the spec, if I recall correctly. HTML5 changed this 
early on precisely so that this change could be reverted.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Maciej Stachowiak


On Jun 30, 2009, at 1:59 AM, Silvia Pfeiffer wrote:




 - has off-the-shelf decoder hardware chips available

"Decoder hardware" for video means that there are software libraries
available that use specific hardware in given chips to optimise
decoding. It is not a matter of hardware vendors inventing new
hardware to support Theora, but of somebody implementing some
code to take advantage of available hardware on specific
platforms. This is already starting to happen, and will
increasingly happen if Theora becomes the baseline codec.


I looked into this question with the help of some experts on video  
decoding and embedded hardware. H.264 decoders are available in the  
form of ASICs, and many high volume devices use ASICs rather than  
general-purpose programmable DSPs. In particular this is very common  
for mobile phones and similar devices - it's not common to use the  
baseband processor for video decoding, for instance, as is implied by  
some material I have seen on this topic, or to use other fully general  
DSPs.


Some H.264 ASICs are internally implemented as completely hardcoded  
logic. Others are implemented as a relatively general purpose DSP with  
a custom instruction set and microcode set by the manufacturer. Even  
these theoretically more general chips cannot be programmed by the  
device vendor, only the manufacturer of the chip itself. ASICs often  
have a significant cost and power consumption advantage compared to  
other solutions, at least in medium to high volume applications.


A Google search for H.264 decoder ASIC shows that these are  
available from many manufacturers: http://www.google.com/search?q=H.264+ASIC 
.


As far as I know, there are currently no commercially available ASICs  
for Ogg Theora video decoding. (Searching Google for Theora ASIC finds  
some claims that technical aspects of the Theora codec would make it  
hard to implement in ASIC form and/or difficult to run on popular  
DSPs, but I do not have the technical expertise to evaluate the merit  
of these claims.)


Regards,
Maciej




Re: [whatwg] Changing postMessage() to allow sending unentangled ports

2009-06-30 Thread Ian Hickson
On Thu, 4 Jun 2009, Drew Wilson wrote:
 
 I'd like to suggest that we allow sending ports that are not entangled 
 (i.e. ports that have been closed)

Done.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] HTML5 3.7.2 - document.write

2009-06-30 Thread Ian Hickson
On Thu, 4 Jun 2009, Kartikaya Gupta wrote:

 I have a question about section 3.7.2. Under step 5, it says that it is 
 considered a reentrant invocation of parser if the document.write() 
 method was called from script executing inline. Does this include 
 document.write() calls invoked from user actions (e.g. onclick)? I 
 assume not, but I'm getting varying behavior from the major browsers for 
 this test case (click on the button to run):
 
 ---
 <HTML><HEAD>
 <script id="outter" type="text/javascript">
 function doDoc() {
   document.write('I am <scr'+'ipt type="text/javascript" id="inner" ' +
     'src="code.js"></scr'+'ipt>the <b>document</b>');
   document.close();
 }
 </script>
 </HEAD><BODY>
  <button onclick="doDoc()">runDoc</button>
 </BODY></HTML>
 ---
 
 Inside code.js:
 
 ---
 document.write('<img src="testIMG.jpg" />');
 ---

This is (as far as parsing goes) equivalent to just loading a page that 
has I am <script type="text/javascript" id="inner" src="code.js"></script>the 
<b>document</b> as the contents. When you execute the first .write(), it 
starts a new parser.

However, the spec as written was a bit confusing. I've tried to clarify 
it by referencing the script nesting level explicitly.

Please let me know if this is not equivalent!

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Gregory Maxwell
On Tue, Jun 30, 2009 at 10:41 PM, Maciej Stachowiak m...@apple.com wrote:
 I looked into this question with the help of some experts on video decoding
 and embedded hardware. H.264 decoders are available in the form of ASICs,
 and many high volume devices use ASICs rather than general-purpose
 programmable DSPs. In particular this is very common for mobile phones and
 similar devices - it's not common to use the baseband processor for video
 decoding, for instance, as is implied by some material I have seen on this
 topic, or to use other fully general DSPs.

Can you please name some specific mobile products? Surely if it's
common, doing so shouldn't be hard.  I don't mean to argue that it
isn't true or to debate you on the merits of any examples…  But
this is an area which has been subject to a lot of very vague claims
which add more confusion than insight.

The iPhone (of all vintages) and the Palm Pre have enough CPU power to do
Theora decoding at 'mobile resolutions' on the main CPU (no comment on
battery life; but the Pre is OMAP3-based and support for that DSP is in
the works, as mentioned). I can state this with confidence since the
horribly slow 400 MHz ARMv4T-based SoC in the OpenMoko FreeRunner is
able to (just barely) do it with the completely unoptimized (for ARM)
reference libraries (on x86 the assembly optimizations are worth a
30-40% performance boost).

Another example I have is the WDTV, a set-top media box.  It's often
described as using a dedicated hardware H.264 decoder, but what it
actually uses is an SMP8634, a hardware decode engine based on
general-purpose processors which appears to be format-flexible enough
to decode other formats. (Although the programming details aren't
freely available, so it's difficult to make concrete claims.)

[snip]
 As far as I know, there are currently no commercially available ASICs for
 Ogg Theora video decoding. (Searching Google for Theora ASIC finds some
 claims that technical aspects of the Theora codec would make it hard to
 implement in ASIC form and/or difficult to run on popular DSPs, but I do not
 have the technical expertise to evaluate the merit of these claims.)

There is, in fact, a synthesizable VHDL implementation of the Theora
decoder backend available at http://svn.xiph.org/trunk/theora-fpga/

I'm not able to find the claims regarding Theora on DSPs which you are
referring to, care to provide a link?

Not especially relevant, but worth mentioning for completeness: Elphel
also distributes a complete I-frame-only Theora encoder as
synthesizable Verilog under the GPL, which is used on an FPGA in their
prior-generation camera products.
(http://www3.elphel.com/xilinx/publications/xcellonline/xcell_53/xc_video53.htm)

Existence trumps speculation. But I'm still of the impression that
the hardware forms are not all that relevant.


Re: [whatwg] do not encourage use of small element for legal text

2009-06-30 Thread Ian Hickson
On Thu, 4 Jun 2009, Andrew W. Hagen wrote:
 
 I have a copy of the Constitution of the United States on my web site. 
 That is a legal text. It also qualifies as legalese, a derogatory 
 term. If I were to change it to HTML 5, the current spec encourages me 
 to place the entire Constitution in small elements.

The spec says the following:

# The small element represents small print or other side comments.
#
# Note: Small print is typically legalese describing disclaimers, caveats, 
# legal restrictions, or copyrights. Small print is also sometimes used 
# for attribution.

I don't see how this can be said to encourage putting the constitution in 
small elements. The constitution is hardly small print or a side 
comment.


 Encouraging use of small print for legalese also encourages this:
 
 <h1>
 <a href="continue.html">
 Welcome to the BigCo web site. Click to continue.
 </a>
 </h1>
 <small>By clicking above, you agree that BigCo can charge your
 credit card $10 per visit to the BigCo web site per page clicked.</small>

Right, that's the case we do want to encourage. It's better than the 
alternative, which would be:

 <style>
  .s { font-size: smaller; }
 </style>
 <h1>
 <a href="continue.html">
 Welcome to the BigCo web site. Click to continue.
 </a>
 </h1>
 <span class="s">By clicking above, you agree that BigCo can charge your
 credit card $10 per visit to the BigCo web site per page clicked.</span>

...because if they use <small>, you can configure your client to go out of 
its way to highlight small text, whereas with a styled <span> you have no 
way to know which text to highlight based on its font size or class.


 Now that might not stand if challenged in a court, but it is definitely 
 not the kind of thing that the HTML 5 spec should condone. And yet, in 
 its current form, it does. What ought to constitute outright fraud is 
 encouraged by the HTML 5 spec in its current form.

HTML5 doesn't encourage deceptive practices or fraud.


 The HTML 5 spec also encourages, in its current form, placing any legal 
 disclaimer in a small element. Therefore, we could have this result.

 <h1>BigCo Services: We guarantee our work</h1>
 <small>Except between the hours of 12:01 am and 11:59 pm.</small>
 
 That is a deceptive use of a disclaimer that the HTML 5 spec encourages. 
 This is most unfortunate.

It is significantly better than the alternative, which is people hiding 
the disclaimer with <span> and styles (rather than <small> and styles).


 There is no middle ground here. Encouraging legal text to be in a <small> 
 element "except when it is deceptive or inappropriate" would at best 
 lead to confusion.

It seems worse to encourage it to be in a <p> element, where it is 
indistinguishable from other small text and cannot be programmatically 
highlighted.


On Fri, 5 Jun 2009, Andrew W. Hagen wrote:

 My intention was to encourage the HTML 5 specification to not contain 
 any content that could be construed as legal advice.

I really don't think the text in the spec can even remotely be construed 
as legal advice.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] the cite element

2009-06-30 Thread Ian Hickson
On Fri, 5 Jun 2009, Andrew W. Hagen wrote:
 
 That was interesting about the history of the cite element.
 
 The import of my proposed change is that it would make the cite element
 much more useful than it would be than if it were limited to titles.
 
 For example, take a page listing numerous famous quotations. Below might be
 one of them:
 
 <li><q>Man is the only animal that laughs and weeps; for he is the only animal
 that is struck with the difference between what things are, and what
 they ought to be.</q><br />  -- <cite>William Hazlitt</cite></li>
 
 That works well, yet that would be technically against what the spec
 in its current form allows.
 
 A second example. Let's say a web page is to list a citation of a work.
 
 This would be the citation, marked up according to the current HTML spec.
 
 <p>Hawking, Stephen. <cite>A Brief History of Time</cite>. Bantam: New York.
 1988.</p>
 
 Most of the citation is not in the cite element.
 
 The following should be an option for web authors.
 
 <p><cite>Hawking, Stephen. <i>A Brief History of Time</i>. Bantam: New York.
 1988.</cite></p>
 
 That encases the entire citation in a cite element. The web author can
 re-style the cite element as desired.
 
 Cite should be available for untitled works. For example:
 
 Rock critics have universally praised <cite style="font-style: normal">the
 untitled fourth album</cite> by Led Zeppelin.
 
 While people aren't usually typographically marked up, they are cited.
 
 The change would allow things other than titles to be placed into the
 cite element. That would make cite much more useful.

I don't understand why it would be more useful. Having an element for the 
typographic purpose of marking up titles seems more useful than an element 
for the purpose of indicating what is a citation.

Note that HTML5 now has a more detailed way of marking up citations, using 
the Bibtex vocabulary. I think this removes the need for using the cite 
element in the manner you describe.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Annotating structured data that HTML has no semanticsfor

2009-06-30 Thread Ian Hickson
On Tue, 9 Jun 2009, Kristof Zelechovski wrote:

 * Let a COLOR element have a value DOM property that returns a color.

.value already does so.


 * Let a NUMBER element have a value DOM property that returns a number.

.valueAsNumber already does so.


 Actually, the latter use case is one I have bumped into:  
 * The DOM does not provide a numeric value,

As noted, it now does.
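
A quick sketch of the two properties being referred to — the markup is
illustrative, while the property names are the ones discussed above:

```html
<input type="color" id="c" value="#ff0000">
<input type="number" id="n" value="3.5">
<script>
  // .value on a color input yields a color string such as "#ff0000".
  var color = document.getElementById("c").value;
  // .valueAsNumber yields the number 3.5 directly, with no
  // locale-sensitive string parsing required.
  var num = document.getElementById("n").valueAsNumber;
</script>
```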


 * JavaScript support for parsing localized properties is poor; you have to
 reverse engineer the result of toLocaleString,

The localised properties aren't exposed.


 * VBScript support is better but inconsistent as it depends on the system
 locale and not on the document locale as desired.

Then don't use VBScript; it's a vendor-specific technology anyway.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] AppCache and javascript url question?

2009-06-30 Thread Ian Hickson
On Thu, 4 Jun 2009, Michael Nordman wrote:

 What appcache (if any) should the resulting iframes be associated with? I
 think per the spec, the answer is none. Is that the correct answer?
 
 <html manifest='myManifestFile'>
 <body>
 <script language="JavaScript">
   function frameContents1()
   {
     var doc = frame1.document;
     doc.open();
     doc.write('<img src="image.png">');
     doc.close();
     return;
   }
 
   function frameContents2()
   {
     return "hello";
   }
 </script>
 
 <iframe name="frame1" src="javascript:parent.frameContents1()">
 <iframe name="frame2" src="javascript:parent.frameContents2()">
 </body>
 </html>

If there's no manifest=, there's no application cache selected, as far 
as I can tell.
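
For context, a minimal sketch of what a manifest file like the
myManifestFile above could contain (the entries are invented for
illustration):

```
CACHE MANIFEST
# v1 - change this comment to force clients to refetch.

CACHE:
image.png
code.js

NETWORK:
*
```

Resources listed under CACHE: are stored for offline use, while the
NETWORK: wildcard lets everything else be fetched normally.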

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Maciej Stachowiak


On Jun 30, 2009, at 9:13 PM, Gregory Maxwell wrote:

On Tue, Jun 30, 2009 at 10:41 PM, Maciej Stachowiak m...@apple.com wrote:
I looked into this question with the help of some experts on video  
decoding and embedded hardware. H.264 decoders are available in the  
form of ASICs, and many high volume devices use ASICs rather than  
general-purpose programmable DSPs. In particular this is very common  
for mobile phones and similar devices - it's not common to use the  
baseband processor for video decoding, for instance, as is implied by  
some material I have seen on this topic, or to use other fully general  
DSPs.


Can you please name some specific mobile products? Surely if it's
common doing so shouldn't be hard.  I don't mean to argue that it
isn't true or intend to debate you on the merits of any examples…  But
this is an area which has been subject to a lot of very vague claims
which add a lot more confusion rather than insight.


For the mobile phones where I have specific knowledge regarding their  
components, I am not at liberty to disclose that information.


However, it's quite clear from even a cursory investigation that H.264  
ASICs are available from multiple vendors. This would not be the case  
if they weren't shipping in high volume products. As I'm sure you  
know, ASICs have fairly high up-front costs so they need volume to be  
cost effective.



Iphone (of all vintages), and Palm Pre have enough CPU power to do
Theora decode for 'mobile resolutions' on the main cpu (no comment on
battery life; but palm pre is OMAP3 and support for that DSP is in the
works as mentioned).


I can tell you that iPhone does not do H.264 decoding on the CPU.


I can state this with confidence since the horribly slow 400 MHz ARMv4T
based SoC in the OpenMoko FreeRunner is able to (just barely) do it with
the completely unoptimized (for ARM) reference libraries (on x86 the
assembly optimizations are worth a 30-40% performance boost).


No one doubts that software implementations are available. However,  
they are not a substitute for hardware implementations, for many  
applications. I would expect a pure software implementation of video  
decoding on any mobile device would decimate battery life.




Another example I have is the WDTV, a set-top media box.  It's often
described as using a dedicated hardware H.264 decoder, but what it
actually uses is an SMP8634, which is a hardware decode engine based on
general purpose processors and appears to be format-flexible enough to
decode other formats.  (Although the programming details aren't freely
available, so it's difficult to make concrete claims.)


I would caution against extrapolating from a single example. But even  
here, this seems to be a case of a component that may in theory be  
programmable, but in practice can't be reprogrammed by the device  
vendor.




[snip]
As far as I know, there are currently no commercially available ASICs for
Ogg Theora video decoding. (Searching Google for Theora ASIC finds some
claims that technical aspects of the Theora codec would make it hard to
implement in ASIC form and/or difficult to run on popular DSPs, but I do
not have the technical expertise to evaluate the merit of these claims.)


There is, in fact, a synthesizable VHDL implementation of the Theora
decoder backend available at http://svn.xiph.org/trunk/theora-fpga/


I did not mention FPGAs because they are not cost-competitive for  
products that ship in volume.


The points I wanted to make are simply this:

- H.264 decoders are in fact available in ASIC form; this is not a case
of general-purpose hardware that could be programmed to do any codec

- H.264 ASICs are not only available but actually used

- There are no commercially available equivalents for Theora at this time


Silvia implied that mass-market products just have general-purpose  
hardware that could easily be used to decode a variety of codecs  
rather than true hardware support for specific codecs, and to the best  
of my knowledge, that is not the case.
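Whichever codecs a given device can decode in hardware, the page-level
answer in HTML5 is capability-based fallback: a page lists several
encodings and each client plays the first one it supports, so a
least-common-denominator codec would be listed last as the failsafe. A
minimal sketch of that first-supported-wins selection (the function name
and the support table are illustrative, not any real device's
capabilities):

```python
# Hypothetical sketch of <source>-style codec fallback: walk the
# candidate list in document order and pick the first MIME type the
# client can decode. The support set below is invented for illustration.
def pick_source(candidates, supported):
    """Return the URL of the first candidate whose type is supported."""
    for url, mime_type in candidates:
        if mime_type in supported:
            return url
    return None  # nothing playable; the page's fallback content shows


candidates = [
    ("movie.mp4", 'video/mp4; codecs="avc1.42E01E"'),
    ("movie.ogv", 'video/ogg; codecs="theora, vorbis"'),
]

# A device that can decode Theora but not H.264 picks the Ogg file.
device = {'video/ogg; codecs="theora, vorbis"'}
assert pick_source(candidates, device) == "movie.ogv"
```

In real pages this decision is made by the browser's resource selection
algorithm over the video element's source children (or probed via
canPlayType()); the sketch only mirrors the ordering behaviour.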


Regards,
Maciej



Re: [whatwg] Parsing RFC3339 constructs

2009-06-30 Thread Ian Hickson
On Fri, 5 Jun 2009, Julian Reschke wrote:
 Ian Hickson wrote:
  On Fri, 5 Jun 2009, Julian Reschke wrote:
   Ian Hickson wrote:
 Michael(tm) Smith wrote:
  It seems pretty clear that there isn't anything else to refer 
  to for the date/time parsing rules -- but to me at least, 
  specifying those rules seems orthogonal to specifying the 
  date/time syntax, and I would think the syntax could just be 
  defined by making reference to the productions[1] in RFC 3339 
  (instead of completely redefining them), while stating any 
  exceptions.
  
  [1] http://tools.ietf.org/html/rfc3339#section-5.6
  
  I think the exceptions might just amount to:
  
- the literal letters T and Z must be uppercase
 Any technical reason why they have to?
Not really. We just need a separator.
   So why make it different from RFC 3339?
  
  Limiting the syntax to the simplest possible syntax was an intentional 
  design choice intended to ease the burden on implementors and authors. 
  In practice, pretty much every time we've made syntax 
  case-insensitive, we've ended up having trouble because of it.
 
 If this was a totally new syntax, I would agree.
 
 But as something based on ISO8601 (and thereby also RFC 3339) it appears 
 to be a bad idea to make it less compatible just for that reason.

We've seriously simplified the ISO-8601 syntax in many more ways than just 
this. This was a conscious design decision.


  The HTML5 spec defines exactly how to parse dates. Implementors are 
  required to implement what the spec describes, so reusing libraries is 
  implicitly not likely to be useful here. RFC3339 isn't even a 
  particularly important one in the grand scheme of things (ISO8601 
  comes to mind as a much higher-profile example).
 
 I think it's unfortunate that HTML5 doesn't allow using an off-the-shelf 
 parser. But if it doesn't, and the temptation *will* be there to use 
 them, I'd recommend stating it very clearly.

Done.
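For illustration, the case-sensitivity rule being discussed - the literal
T and Z must be uppercase, whereas RFC 3339 also accepts lowercase - can
be sketched as a strict parser. This is a simplified sketch, not the
spec's full parsing algorithm (it ignores fractional seconds and timezone
offsets):

```python
# Simplified, case-sensitive parser for a UTC date-time string of the
# shape YYYY-MM-DDTHH:MM(:SS)Z. The uppercase T and Z are deliberate:
# lowercase separators, which RFC 3339 permits, are rejected here.
import re
from datetime import datetime, timezone

_DATETIME = re.compile(
    r"^(\d{4,})-(\d{2})-(\d{2})T(\d{2}):(\d{2})(?::(\d{2}))?Z$"
)

def parse_global_datetime(s):
    """Parse a UTC date-time string, or return None if invalid."""
    m = _DATETIME.match(s)
    if not m:
        return None
    year, month, day, hour, minute, second = (
        int(g) if g is not None else 0 for g in m.groups()
    )
    try:
        return datetime(year, month, day, hour, minute, second,
                        tzinfo=timezone.utc)
    except ValueError:  # out-of-range field, e.g. month 13
        return None

# Uppercase separators parse; lowercase ones are rejected.
assert parse_global_datetime("2009-06-30T12:00:00Z") is not None
assert parse_global_datetime("2009-06-30t12:00:00z") is None
```

Note how little code the case-sensitive rule takes: a single anchored
regular expression, with no normalization pass before matching.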

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Parsing RFC3339 constructs

2009-06-30 Thread Julian Reschke

Ian Hickson wrote:

If this was a totally new syntax, I would agree.

But as something based on ISO8601 (and thereby also RFC 3339) it appears 
to be a bad idea to make it less compatible just for that reason.


We've seriously simplified the ISO-8601 syntax in many more ways than just 
this. This was a conscious design decision.


Yes, the same decision was made for RFC 3339 (and the similar W3C Note).
I was recommending staying closer to those, not to ISO 8601.



...


BR, Julian


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Gregory Maxwell
On Wed, Jul 1, 2009 at 12:35 AM, Maciej Stachowiak m...@apple.com wrote:
 For the mobile phones where I have specific knowledge regarding their
 components, I am not at liberty to disclose that information.

Unsurprising but unfortunate.

There are other people trying to feel out the implications for
themselves who are subject to different constraints than you are, for
whom "take my word for it" is less than useless, since they can only
guess that the same constraints may apply to their situation.

 I can tell you that iPhone does not do H.264 decoding on the CPU.

iPhone can decode Theora on the CPU. I don't think there's really even an
open question on that point. It's an important distinction because there
are devices which can decode Theora on the primary CPU but need
additional help for H.264 (especially for things beyond baseline profile).

 No one doubts that software implementations are available. However, they are
 not a substitute for hardware implementations, for many applications. I
 would expect a pure software implementation of video decoding on any mobile
 device would decimate battery life.

Then please don't characterize it as "it won't work" when the situation
is "it would work, but would probably have unacceptable battery life on
the hardware we are shipping".

The battery life question is a serious and important one, but it's a
categorically different one from "can it work at all?"  (In particular
because many people wouldn't consider the battery life implications of a
rarely used fallback format to be especially relevant to their own
development.)

 I would caution against extrapolating from a single example. But even here,
 this seems to be a case of a component that may in theory be programmable,
 but in practice can't be reprogrammed by the device vendor.

Yes, I provided it to balance the OMAP3 example. It's a case which is not
as well off as the widely used, user-programmable, general-purpose DSP
based devices, but still not likely to be limited by fixed-function
hardware. It's programmable, but only by the chip maker.

 There is, in fact, a synthetically VHDL implementation of the Theora
 decoder backend available at http://svn.xiph.org/trunk/theora-fpga/

 I did not mention FPGAs because they are not cost-competitive for products
 that ship in volume.

Of course not, but the existence of code in a synthesizable hardware
description language means that the non-existence of an ASIC version is
just a question of market demand, not of some fundamental technical
barrier, which you appeared to imply exists.


 Silvia implied that mass-market products just have general-purpose hardware
 that could easily be used to decode a variety of codecs rather than true
 hardware support for specific codecs, and to the best of my knowledge, that
 is not the case.

There are mass market products that do this.  Specifically, the Palm Pre
is OMAP3 and the N810 is OMAP2. These have conventional DSPs with
publicly available toolchains.

It seems like in both cases broad vague claims are misleading.


I'd still love to see some examples of a web browsing device on the
market (obviously I don't expect anyone to comment on their unreleased
products) which can decode H.264 but fundamentally can't decode Theora.
The closest I'm still aware of is the WDTV I mentioned (which does not
have enough general CPU power to decode pretty much any video format,
and which uses a video engine which can only be programmed by its maker).

I understand that you're not at liberty to discuss this point in more
detail, but perhaps someone else is.

Likewise, I'm still curious to find out what webpages are claiming
that implementation on common DSPs would be unusually difficult.


Cheers,