Re: [whatwg] Cryptographically strong random numbers

2011-02-14 Thread Mike Shaver
On Fri, Feb 11, 2011 at 1:36 PM, Adam Barth w...@adambarth.com wrote:
 Regardless, the ability does not exist in JavaScriptCore.  If you'd
 like to contribute a patch that makes it possible, I'm sure it would
 be warmly received.

That is surprising to me. Isn't it necessary in order to implement
window.forms.formname and other dynamically-reflected properties?
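For readers unfamiliar with the requirement under discussion, here is a minimal, hedged sketch of what a "dynamically-reflected property" means: the property names on the collection track the live document state on every access, so they cannot be precomputed. (Plain Node.js; Proxy is a modern stand-in for the host-object hooks engines of the day actually used, and the names are illustrative.)

```javascript
// Stand-in for the document's live list of forms.
const formsInDocument = [{ name: "login" }, { name: "search" }];

// A dynamically-reflected collection: every lookup resolves against the
// live list, not a snapshot taken at creation time.
const forms = new Proxy({}, {
  get(_target, prop) {
    return formsInDocument.find(f => f.name === prop);
  },
  has(_target, prop) {
    return formsInDocument.some(f => f.name === prop);
  },
});

console.log(forms.login.name);   // "login" -- named lookup works
formsInDocument.push({ name: "signup" });
console.log("signup" in forms);  // true -- new forms appear immediately
```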

Mike


Re: [whatwg] Canvas API: What should happen if non-finite floats are used

2010-09-08 Thread Mike Shaver
On Wed, Sep 8, 2010 at 10:33 AM, Oliver Hunt oli...@apple.com wrote:
 In a lot of cases all you want to do is ignore NaN and Infinite values, 
 otherwise you basically have to prepend every call to canvas with NaN and 
 Infinity checks if you're computing values unless you can absolutely 
 guarantee your values won't have gone nuts at any point in the computation -- 
 otherwise you're going to get reports that occasionally your content breaks 
 but with no repro case (typical users will not be seeing error messages, it 
 just doesn't work).

Does this mean that you're expecting that the ignored calls didn't
matter?  Surely having parts of the drawing be missing will often be
noticed by users as well!
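A sketch of the per-call guard Oliver describes authors having to prepend by hand, assuming plain JavaScript; `finiteOrThrow`, `guardedLineTo`, and the fake context are illustrative names, not part of any API:

```javascript
// Reject non-finite coordinates loudly instead of silently dropping the call.
function finiteOrThrow(...values) {
  for (const v of values) {
    if (!Number.isFinite(v)) throw new RangeError("non-finite value: " + v);
  }
}

// Stand-in for wrapping ctx.lineTo(x, y); a real guard would wrap each
// 2D-context method that takes float arguments.
function guardedLineTo(ctx, x, y) {
  finiteOrThrow(x, y);
  ctx.lineTo(x, y);
}

const fakeCtx = { path: [], lineTo(x, y) { this.path.push([x, y]); } };
guardedLineTo(fakeCtx, 10, 20);     // fine
try {
  guardedLineTo(fakeCtx, NaN, 20);  // caught instead of silently ignored
} catch (e) {
  console.log("rejected:", e.message);
}
console.log(fakeCtx.path.length);   // 1
```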

Mike


Re: [whatwg] Adding ECMAScript 5 array extras to HTMLCollection

2010-07-30 Thread Mike Shaver
On Fri, Jul 30, 2010 at 12:43 AM, Oliver Hunt oli...@apple.com wrote:
 The various html collections aren't fixed length, they're not assignable, so 
 they can't be used interchangeably with arrays at the best of times.

Array generics work on arrays that aren't fixed-length, perhaps
obviously, and I believe they're still valid over arrays which have
had their indexed properties configured to be unwritable as well.  As
your later example shows, they can be used interchangeably with arrays
at exactly the best of times: when wanting to apply a non-mutating
extra to one!

We've got the array statics in Firefox (Array.forEach, etc.) and it's
made the make-collections-be-arrays cries from those working on
front-end JS subside dramatically.  I think they would probably solve
the problem quite effectively, and not require us to be on the lookout
for naming collisions between future Array.prototype and
HTMLCollection fields.
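A sketch of both shapes, assuming plain JavaScript: the standard generic application via `call`, plus a shim approximating the nonstandard `Array.forEach`-style statics Firefox shipped (the `ArrayStatics` object is illustrative; an HTMLCollection is simulated with an array-like):

```javascript
// Array-like stand-in for an HTMLCollection: indexed properties + length.
const collection = { 0: "a", 1: "b", 2: "c", length: 3 };

// Standard ES5 generic application: the "extras" only need length + indices.
const upper = Array.prototype.map.call(collection, s => s.toUpperCase());
console.log(upper); // [ 'A', 'B', 'C' ]

// Approximate shape of the Firefox statics (Array.forEach(coll, fn), etc.):
const ArrayStatics = {
  forEach: (arrayLike, fn) => Array.prototype.forEach.call(arrayLike, fn),
  map: (arrayLike, fn) => Array.prototype.map.call(arrayLike, fn),
};
console.log(ArrayStatics.map(collection, s => s + "!")); // [ 'a!', 'b!', 'c!' ]
```

Note how neither form mutates the collection, which is the "non-mutating extra" case the message above calls out.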

Mike


Re: [whatwg] Please disallow javascript: URLs in browser address bars

2010-07-22 Thread Mike Shaver
On Thu, Jul 22, 2010 at 4:48 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 These days, though, all major browsers have javascript consoles which
 you can bring up and paste that into.

That doesn't typically apply to content tabs or windows, though.

I have a couple of questions:

What is the proposed change to which specification, exactly?  URL-bar
behaviour, especially input permission, seem out of scope for the
specs that the WHATWG is working on.  Would a UA that asked for the
user's permission the first time a bookmarklet is used (like some
prompt the first time a given helper app or URL scheme is used) be
compliant?

What should the URL bar say when the user clicks a javascript: link
which produces content?  <a href="javascript:5;">five!</a>

Mike


Re: [whatwg] Please disallow javascript: URLs in browser address bars

2010-07-22 Thread Mike Shaver
On Thu, Jul 22, 2010 at 5:32 PM, Luke Hutchison luke.hu...@mit.edu wrote:
 On Thu, Jul 22, 2010 at 5:03 PM, Mike Shaver mike.sha...@gmail.com wrote:
 What is the proposed change to which specification, exactly?  URL-bar
 behaviour, especially input permission, seem out of scope for the
 specs that the WHATWG is working on.

 Is there a better venue to discuss this then?  (It seems like even if
 UI issues are out of the scope of what WHATWG is working on, all the
 right people are signed up to this list...)

I'm not sure of a better venue off-hand.  I don't think that there's
anyone from Microsoft participating in this list, though, and I expect
that a lot of the users affected by the Facebook viruses are using
their browser.

 Would a UA that asked for the
 user's permission the first time a bookmarklet is used (like some
 prompt the first time a given helper app or URL scheme is used) be
 compliant?

 You mean like Windows User Account Control? ;)

No, I mean like the prompts for geolocation, popup windows, first-use
helper applications, first-use URL protocols, and similar.  But my
question is more about what you propose to disallow, and why you
choose disable as the requirement.

 It's not unreasonable to guess that the number of people
 inconvenienced by the easy exploitability of the current behavior
 numbers in the millions, given that Facebook has 500M users and these
 viruses continue to spread like wildfire.  The number inconvenienced
 by having these URLs disabled by default (and re-enableable via a
 developer option the first time they hit this limitation)

That is only helpful against the specific case of direct paste in the
URL bar, though, and bookmarklets aren't just a developer-only
feature.  They're widely used by URL-shortening services, blogging and
micro-blogging services, and Amazon's universal wish list.

 Given the success of these exploits so far, it is also reasonable to
 suggest that the sophistication of attack will only increase with
 time.

Yes, which I think is why so many of us are suggesting that making the
social engineer say "drag this link to your bookmark bar, and use it
when you Really Like something!" is not going to be much of a
mitigation.

It's not that I don't believe it's a problem, to be clear; it's that I
don't think you're proposing a meaningful solution to it!

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 9:10 AM, Nils Dagsson Moskopp
nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
 (clients try to guess based on
 incorrect information and you end up with stupid switches).

Could you be more specific about the incorrect information?  My
understanding, from this thread and elsewhere, is that video formats
are reliably sniffable, and furthermore that the appropriate MIME type
for ogg-with-VP8 vs ogg-with-theora isn't clear (or possibly even
extant).  It seems like reliance on MIME type will result in more of
the guessing-and-stupid-switches than sniffing.

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 9:43 AM, Nils Dagsson Moskopp
nils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
 Mike Shaver mike.sha...@gmail.com wrote on Wed, 21 Jul 2010
 09:15:18 -0400:
 and furthermore that the appropriate MIME type
 for ogg-with-VP8 vs ogg-with-theora isn't clear (or possibly even
 extant).

 According to RFC4281 http://www.rfc-editor.org/rfc/rfc4281.txt, there
 is an optional codecs parameter for container file formats.

Does that work in practice?  How can one configure, e.g., Apache to
send the right type for foo.ogv?

Dave Singer is on this list, and is one of the authors of that RFC, so
perhaps he can tell us: has the codecs parameter from RFC4281 worked
in practice for Apple?  Could you describe where it's used, and how
it's typically configured?

 It seems like reliance on MIME type will result in more of
 the guessing-and-stupid-switches than sniffing.

 Because they still might be wrong?

From what I can see of how they are used on the web today, MIME types
are less precise than the determination made by sniffing within the
container, yes.

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 9:46 AM, Chris Double chris.dou...@double.co.nz wrote:
 How much data are you willing to sniff to find out if the Ogg file
 contains Theora and/or Vorbis? You have to read the header packets
 contained within the Ogg file to get this.

A few kilobytes certainly seems reasonable -- I don't think we can
actually do anything with the video until we've made that
determination, so I'm actually not quite sure what's being asked.  Can you
describe a scenario where Firefox would have a reliable MIME type, but
isn't retrieving the first chunk of the video anyway?  Do browsers use
HEAD to check for video compatibility today?

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 9:51 AM, Philip Jägenstedt phil...@opera.com wrote:
 On Wed, 21 Jul 2010 15:15:18 +0200, Mike Shaver mike.sha...@gmail.com
 wrote:
 Could you be more specific about the incorrect information?  My
 understanding, from this thread and elsewhere, is that video formats
 are reliably sniffable, and furthermore that the appropriate MIME type
 for ogg-with-VP8 vs ogg-with-theora isn't clear (or possibly even
 extant).  It seems like reliance on MIME type will result in more of
 the guessing-and-stupid-switches than sniffing.

 The MIME type for both of those would be video/ogg. It wouldn't be hard or
 very error-prone to use only the MIME type, Firefox already does that. It's
 also not very hard to rely on sniffing, which all the other browsers do,
 although Opera still checks the MIME type first.

Indeed, so it seems that sniffing is always required, unless we expect
reliable use of the codecs parameter to become widespread.  (I
confess that I do not expect this, even if this group and the W3C
exhort authors to do so.)

 * Configuring the MIME type is an extra step that seemingly many authors
 don't know that they need. That it is easy to configure doesn't really help.

It may or may not be easy to configure the MIME type correctly, if we
are to include codec details.

 * Ignoring the MIME type will lead to more videos served as text/plain,
 which will render as huge amounts of garbage text in current browsers if
 opened directly (i.e. in a top-level browsing context).

Ignoring the MIME type *and* not sniffing those cases, you mean?

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 10:04 AM, Philip Jägenstedt phil...@opera.com wrote:
 Right, sniffing is currently only done in the context of <video>, at least
 in Opera. The problem could be fixed by adding more sniffing, certainly.

A warning that you're about to open a 5MB text document might be
humane anyway. :-)

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 10:07 AM, Chris Double
chris.dou...@double.co.nz wrote:
 When content sniffing are we ignoring the mime type served by the
 server and always sniffing? If so then incorrectly configured servers
 can result in more downloaded data due to having to read the data
 looking for a valid video. For example:

 <video>
   <source src='foo.ogg'>
   <source src='foo.mkv'>
 </video>

 If the web browser doesn't support Ogg but does support matroska, and
 the server sends the video/ogg content type,  the browser can stop and
 go to the next source after downloading very little data.

 If the web browser is expected to ignore the mime type and content
 sniff it must see if 'foo.ogg' is a matroska file. According to the
 matroska spec arbitary ASCII data can be placed before the EBML
 identifier. This means reading a possible large amount of data (even
 the entire file) before being able to say that it's not a matroska
 file.

Assuming that such a browser were to exist, and that such content were
to exist (an Ogg file that did not contain any characters outside of
0x20-0x7F in the first few kilobytes), and that it were to be a
problem for that browser's users, I would probably suggest that the
developers of said browser implement basic Ogg support (enough to say
this is Ogg, so we don't support it), and go back to solving more
pressing problems!
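A sketch of that "basic Ogg support" check, assuming plain JavaScript; the magic numbers are the well-known Ogg capture pattern and EBML signature, but the function and its names are illustrative:

```javascript
// An Ogg stream begins with the capture pattern "OggS" in its first four
// bytes, so recognizing (or rejecting) Ogg needs almost no data.
function sniffContainer(bytes) {
  const ascii = (offset, text) =>
    [...text].every((ch, i) => bytes[offset + i] === ch.charCodeAt(0));
  if (ascii(0, "OggS")) return "ogg";
  // EBML signature (Matroska/WebM) -- though, as noted above, Matroska
  // permits arbitrary leading data, so this check alone is not conclusive.
  if (bytes[0] === 0x1a && bytes[1] === 0x45 &&
      bytes[2] === 0xdf && bytes[3] === 0xa3) return "matroska";
  return "unknown";
}

const oggHeader = new Uint8Array([0x4f, 0x67, 0x67, 0x53, 0x00, 0x02]);
console.log(sniffContainer(oggHeader)); // "ogg"
```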

Mike


Re: [whatwg] video application/octet-stream

2010-07-21 Thread Mike Shaver
On Wed, Jul 21, 2010 at 10:24 AM, Chris Double
chris.dou...@double.co.nz wrote:
 On Thu, Jul 22, 2010 at 2:15 AM, Mike Shaver mike.sha...@gmail.com wrote:
 ...I would probably suggest that the
 developers of said browser implement basic Ogg support (enough to say
 this is Ogg, so we don't support it), and go back to solving more
 pressing problems!

 Or the developers of said browser could obey the mime type that the
 server sent,

My understanding is that this isn't working out so well for us.

Mike


Re: [whatwg] Allowing > in attribute values

2010-06-25 Thread Mike Shaver
One advantage is almost the same as your footnote: JavaScript source is
permitted in the values of many attributes, and can certainly contain the >
operator.

On Jun 25, 2010 12:34 PM, Benjamin M. Schwartz bmsch...@fas.harvard.edu
wrote:
 On 06/25/2010 11:50 AM, Boris Zbarsky wrote:
 It seems like what you want here is for browsers to parse as they do
 now, but a particular subset of browser-accepted syntax to be enshrined
 so that when defining your restrictions over content you control you can
 just say "follow the spec" instead of "follow the spec and don't put '>'
 in attribute values", right?

 That's more or less how I feel. The spec places requirements on how user
 agents, data mining tools, and conformance checkers must handle
 non-conforming input, but there are many other things in the world that
 process HTML. In other applications, it may be acceptable to have
 undefined behavior on non-conforming input, like in ISO C.

 HTML5 has a very clear specification of conformance, and a validator is
 widely available. If I build a tool that guarantees correct behavior only
 on conforming inputs, then users can easily check their documents for
 conformance before using my tool. If my tool has additional restrictions,
 then I need to write my own validator, and answer a lot of questions.

 I was inspired to suggest this restriction after using mod_layout for
 Apache, which inserts a banner at the top of a page. It works by doing a
 wildcard search for <body*>. There are a number of obvious ways to
 break this [1]; one of them is by having '>' in an attribute value. I'm
 sure there are many thousands of such programs around the world.

 It sounds like most experts here would prefer to allow '>' in attribute
 values in conforming documents, and that's fine. I don't fully understand
 the advantage, but I won't argue against consensus.

 --Ben

 [1] A javascript line like width<bodywidth && height<bodyheight would
 also break it, as would an appropriately constructed comment. It might be
 possible to construct a regexp for this that functions correctly on all
 conformant HTML5 documents. Such a regexp would be considerably simpler
 if '>' were disallowed in attribute values.
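A sketch of the failure mode the footnote describes, assuming plain JavaScript and an illustrative attribute value:

```javascript
// The '>' inside the attribute value is legal HTML5, but a naive
// "find the body tag" regexp stops at the first '>' it sees.
const html = '<body data-note="x > y" class="page">';

const naive = html.match(/<body[^>]*>/)[0];
console.log(naive); // '<body data-note="x >' -- truncated mid-attribute
```

A tool like the mod_layout example above, splicing content in at this match, would insert its banner into the middle of the tag.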



Re: [whatwg] input type=location proposals

2010-06-25 Thread Mike Shaver
On Thu, Jun 24, 2010 at 5:55 PM, Ashley Sheridan
a...@ashleysheridan.co.uk wrote:
 I think it's quite a fringe case. What about things that are more used:

 type=number - a browser could aid input with some sort of spinner
 type=price - a browser could use the locale to select a monetary format, or 
 at least display the amount in the locale format specified by the document 
 itself

I think that most users are able to input a number or price without
much difficulty.  Asking a user to input their latitude and longitude
is a great way to bounce them entirely, since none of them are going
to have any idea how to find it out.

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 9:11 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 The WHATWG has a steering council made up of browser developers.
 Officially, they can override Ian's decisions or make him step down as
 editor.  They've never had to exercise this power yet, though.

Could you elaborate on this?  That *anyone* can override Ian's
decisions or make him step down as editor is news to me, and I suspect
that the organization I represent is counted among the developers you
list.  Who is on the steering council of browser developers?  How does
one appeal a decision to them, and how do they determine what to do
(unanimity, majority vote, rotating single-person veto, five-card
draw)?

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 1:51 PM, Ian Hickson i...@hixie.ch wrote:
 I value technical merit even higher than convergence.

How is technical merit assessed?  Removing Theora from the
specification, for example, seems like it was for political rather
than technical reasons, if I understand how you use the terms.  How
can one learn of the technical motivations of decisions such as the
change to require ImageData for Canvas, or participate in their
evaluation prior to them gaining the incumbent's advantage of being
present in the specification text?

Mike


Re: [whatwg] input type=location proposals

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 2:45 PM, Ashley Sheridan
a...@ashleysheridan.co.uk wrote:

 On Fri, 2010-06-25 at 17:09 -0400, Aryeh Gregor wrote:
  type=number has been in the spec for years.

 Do you have a link to this to verify?

http://dev.w3.org/html5/markup/input.number.html is the fourth hit for
type=number in Google for me, but your search engine results may vary.

It's also under the input element, type values, number part of the
WHATWG's HTML5 draft, a document with which you may have some
familiarity.

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 3:00 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Bottom of the charter: http://www.whatwg.org/charter

 I believe the decision process is "knife fight to first blood".

"Editors should reflect the consensus opinion of the working group
when writing their specifications, but it is the document editor's
responsibility to break deadlocks when the working group cannot come
to an agreement on an issue" doesn't sound like the working group can
override anything, and only goes as far as "should".

It turns out that two of the Members are in the same building as me,
though, so I'll go see what they think the model is.  I think they may
be surprised to discover that they could have overridden Ian on
anything!

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 3:07 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 How
 can one learn of the technical motivations of decisions such as the
 change to require ImageData for Canvas,

 On the WHATWG wiki a Rationale page is being assembled by a volunteer
 (don't know their name, but they go by 'variable' in #whatwg) to
 document the reasoning behind various decisions that come up in
 questions.  Beyond that, mailing-list diving.

In the case of the ImageData change, I can't find any proposal made to
the list prior to the spec being altered, but I will dive anew.

 There can
 sometimes be a significant delay between something being proposed and
 this happening, though, so within that timespan things can be
 discussed without the incumbent advantage you talk about.

That only works if changes are proposed via the mailing list, and it
relies on meaningful delay.  If my recollection of the original
additions of SQL databases and web workers is correct, there was very
little such delay, certainly relative to the scale of content.

Are you describing how you think the WHATWG has committed to work, how
it does work, or how you think it should work?

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 3:09 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 I wasn't precise in my language - don't read too much into my exact wording.

No, certainly; I'm much more interested in the spirit here than the
wording, since it doesn't match my experience or understanding.  I'll
take on my education burden myself, though!

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 3:30 PM, Aryeh Gregor simetrical+...@gmail.com wrote:
 I'm pretty sure they won't be.  Any significant implementer has always
 had veto power over the spec.

I fear that simply refusing to implement is indeed the WHATWG's
equivalent of how Tab described FO-threats in the W3C environment: a
much more efficient way to influence the direction of the document
than sharing technical reasoning during the design of a capability.

 For example, Mozilla vetoed Web
 Databases, and Apple vetoed Theora requirements, by just saying they
 wouldn't implement them.

Web Databases was removed from the specification before we were even
certain within Mozilla that we wouldn't implement them, so I don't
think that's quite right.  It's true that we don't think it's a good
technology direction for the web, and that we didn't believe it
belonged in HTML5 proper, but I don't think that's quite the same
thing.  (To the extent that Mozilla has unanimity on such things in
the first place.)

 Ian has always made it clear that he'll spec
 whatever the implementers are happy with.

That is not my recollection of what happened with offline, for what
it's worth. Mozilla and Google had a relatively small set of
deviations between approaches (ours developed on the whatwg list and
Google's developed behind closed doors prior to the Gears
announcement) and Ian specified an entirely different model, over the
objections of both Mozilla and Google.  I welcome corrections to the
timeline and details here, but apparently the behaviour that we
*should* have exhibited was simply refusing to implement, rather than
changing late in our development cycle to the new specification that
Ian constructed, for which there was no implementation or deployment
experience.

Is that really how we want the group to operate?  It seems to reward
silent refusal with greater influence than transparent reasoning.  We
saw similarly (IMO) offensive behaviour when IBM voted against the ES5
specification at ECMA General Assembly, simply because their pet
feature hadn't been included (though there was ample technical
justification for its omission, both in closed-door membership
meetings and in the public list).  Happily, in that case it simply
made IBM look manipulative and petty, and didn't prevent the
specification from reaching ratification.

If I were to be in charge of an organization building a platform that
competed with the web, I would certainly consider it worth my time to
implement a browser and then refuse to implement pieces of the
specification that competed with my line of business.  Certainly if I
were running an organization that made a browser and had a line of
business threatened by a piece of the specification, it would be very
clear how to mitigate that threat, since no specifics need be provided
in support of a refusal veto.

Mike


Re: [whatwg] Technical Parity with W3C HTML Spec

2010-06-25 Thread Mike Shaver
On Fri, Jun 25, 2010 at 6:50 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 Who from Mozilla objected? I didn't object, because I thought Ian's approach
 (manifests) was better than ours (JAR files). And I thought ours was quite
 different from Gears' (which used manifests, IIRC).

There were two revision periods, I think, one as you describe that got
us off JAR files (thank the stars), and then another later on, which
is the one to which I am referring.  I could be grotesquely
misremembering, though, so I'll retract my comment rather than try to
find records of the conversations!

Mike


Re: [whatwg] input type=upload (not just files) proposal

2010-06-08 Thread Mike Shaver
On Tue, Jun 8, 2010 at 10:47 AM, Ashley Sheridan
a...@ashleysheridan.co.ukwrote:

  On Tue, 2010-06-08 at 10:37 -0400, Simpson, Grant Leyton wrote:

 Are you wanting the user to manually enter the filename, including the 
 file:// scheme? If not, are you envisioning the file dialog box to provide a 
 choice between selecting local files and entering an http/ftp url?

 On Jun 8, 2010, at 10:32 AM, Eitan Adler wrote:

  It would then be the server's job to fetch the file unless the user
  passed it a file:// scheme it which case the file would be provided by
  the UI.


  I can see how this might work, but in theory it would be more difficult
 than it sounds. For example, passing an FTP uri would only work if that FTP
 server allowed anonymous access, as you wouldn't want to pass your own FTP
 access credentials to an unknown server.


Or the UA could fetch the remote resource and then re-transfer it, as is
sometimes an option on desktop mail clients when attaching a URL ("attach
page" vs "attach link", or similar).  Then it's just a UA issue, since the
client can do that for any file input, and could even permit creating one
from the clipboard.

Mike


Re: [whatwg] input type=upload (not just files) proposal

2010-06-08 Thread Mike Shaver
On Tue, Jun 8, 2010 at 11:02 AM, Ashley Sheridan
a...@ashleysheridan.co.ukwrote:

  Yes, and the rest of my email said that.


Sorry, I am not familiar with KIO, and didn't see the need for OS support.


 KIO slaves on KDE work just like that. It's not something that I think a
 user agent can easily just add in, but something that needs to be supported
 at the OS level.


I'm not sure why -- if the UA can save the resource to disk, and it can
upload files from disk to the site, then it can do all the things required.
It could optimize by chaining the streams together so it didn't have to
buffer the whole resource on disk, but an uploaded file is just a bunch of
bytes, so I don't think the OS needs to provide any sendfile-like magic.
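A sketch of that chaining, assuming Node's Web Streams globals; `makeDownloadStream` and `uploadFrom` are illustrative stand-ins for the real fetch and form-submission machinery, since the point is only that the download can feed the upload without full buffering:

```javascript
// Stand-in for fetching the remote resource: a stream of byte chunks.
function makeDownloadStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue("chunk-1 ");
      controller.enqueue("chunk-2");
      controller.close();
    },
  });
}

// Stand-in for the upload side: consumes the stream incrementally, the way
// a UA could feed a multipart/form-data body, never holding the whole file.
async function uploadFrom(stream) {
  let sent = "";
  for await (const chunk of stream) sent += chunk;
  return sent;
}

uploadFrom(makeDownloadStream()).then(sent => console.log("uploaded:", sent));
```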

I'd actually be a little surprised if there wasn't a Firefox add-on that
permitted this already, though overlaying the native file dialog is sort of
tricky I guess...

Mike


Re: [whatwg] WebSockets: what to do when there are too many open connections

2010-05-27 Thread Mike Shaver
On Thu, May 27, 2010 at 11:45 AM, John Tamplin j...@google.com wrote:
 On Thu, May 27, 2010 at 10:28 AM, Simon Pieters sim...@opera.com wrote:

 From our testing it seems that Vista has a limit of 1398 open sockets.
 Apparently Ubuntu has a limit of 1024 file descriptors per process.

 On Linux, that is just the default (which may vary between distros) and can
 be configured by the administrator -- see ulimit -n, sysctl -w fs.file-max,
 and fs.file-max in /etc/sysctl.conf (location may var between Linux
 distros).

I think it is reasonable to assume that the browser must be able to
operate effectively within those default limits, though: mostly the
browser can't adjust those, or usefully instruct the user to do so.

Mike


Re: [whatwg] WebSockets: what to do when there are too many open connections

2010-05-13 Thread Mike Shaver
On Thu, May 13, 2010 at 1:19 PM, Perry Smith pedz...@gmail.com wrote:
 Hosts have limits on open file descriptors but they are usually in the tens 
 of thousands (per process) on today's OSs.

I have to admit, I'd be a little surprised (I think pleasantly, but
maybe not) if I could open ten thousand file descriptors on the latest
shipping Windows CE, or for that matter on an iPhone.

The question is whether you queue or give an error.  When hitting the
RFC-ish per-host connection limits, browsers queue additional requests
from <img> or such, rather than erroring them out.  Not sure that's
the right model here, but I worry about how much boilerplate code
there will need to be to retry the connection (asynchronously) to
handle failures, and whether people will end up writing it or just
hoping for the best.
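A sketch of the retry boilerplate in question, assuming plain JavaScript; `connectFn` stands in for a promise-wrapped `new WebSocket(url)`, and the names and delay defaults are illustrative:

```javascript
// Exponential backoff schedule, capped so retries never wait too long.
function backoffDelays(attempts, baseMs = 250, capMs = 8000) {
  return Array.from({ length: attempts },
    (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// The loop every app would otherwise hand-roll: try, wait, try again.
async function connectWithRetry(connectFn, attempts = 5) {
  for (const delayMs of backoffDelays(attempts)) {
    try {
      return await connectFn();
    } catch (_err) {
      // Connection refused or limit hit: back off, then retry.
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("gave up after " + attempts + " attempts");
}

console.log(backoffDelays(5)); // [ 250, 500, 1000, 2000, 4000 ]
```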

Mike


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-22 Thread Mike Shaver
On Mon, Mar 15, 2010 at 3:05 AM, Maciej Stachowiak m...@apple.com wrote:
 === Summary of Data ===

 1) In all browsers tested, copying to an ImageData and then back to a canvas
 (two blits) is faster than a 2x scale.
 2) In all browsers tested, twice the cost of a canvas-to-canvas blit is
 considerably less than the cost of copy to and back from ImageData.
 3) In all browsers tested, twice the cost of a canvas-to-canvas blit is still
 less than the cost of a single 0.5x scale or a rotate.

With hardware acceleration in play, things seem to change a lot there,
though it may be that it's just breaking the test somehow?  The
displayed images look correct, FWIW.

My results on a Windows 7 machine, with the browsers maximized on a
1600x1200 display.

FF 3.6:
direct copy: 8ms
indirect: 408ms
2x scale: 1344ms
0.5x scale: 85ms
rotate: 440ms

FF 3.7a (no D2D):
direct copy: 12.5ms
indirect: 101ms
2x scale: 532ms
0.5x scale: 33ms
rotate: 389ms

FF 3.7a (D2D):
direct copy: 0.5ms
indirect: 136ms
2x scale: 0.5ms
0.5x scale: 0.5ms
rotate: 0.5ms

WebKit r56194:
direct copy: 18.5ms
indirect copy: 113ms
2x scale: 670ms
0.5x scale: 112ms
rotate: 129ms

This supports the idea of specialized API, perhaps, since it will keep
authors from having to figure out which path to take in order to avoid
a massive hit when using the canvas copies (100x or more for
D2D-enabled FF, if the test's results are correct).  It also probably
indicates that everyone is going to get a lot faster in the next
while, so performance tradeoffs should perhaps not be baked too deeply
into the premises for these APIs.

Other more complex tests like blurring or desaturating or doing edge
detection, etc. may show other tradeoffs, and I think we're working on
a performance suite for tracking our own work that may illuminate some
of those.  Subject, of course, to the incredibly fluid nature of all
browser performance analysis these days!

Mike


Re: [whatwg] Storage quota introspection and modification

2010-03-11 Thread Mike Shaver
2010/3/11 Ian Fette (イアンフェッティ) ife...@google.com:
 I think apps will have to deal with hitting quota as you describe, however
 with a normal desktop app you usually have a giant disk relative to what the
 user actually needs. When we're talking about shipping something with a 5mb
 or 50mb default quota, that's a very different story than my grandfather
 having a 1tb disk that he is never going to use. Even with 50mb (which is
 about as much freebie quota as I think I am comfortable giving at the
 moment), you will blow through that quite quickly if you want to sync your
 email.

How did you come up with 50MB?  As a user, I would want the
application that is gmail to have the same capabilities as the
application that is thunderbird, I think.  Isn't that our goal?

 The thing that makes this worse is that you will blow through it at
 some random point (as there is no natural installation point from the APIs
 we have.

That's the case for desktop applications too, really -- mostly I run
out of disk not when I install uTorrent or Thunderbird, but when I'm
trying the Nth linux distro to find one that likes my video card or
someone mails me an HD-resolution powerpoint and I'm about to head to
the airport.

 I would personally be in
 favor of this approach, if only we had a good way to define what it meant to
 offline the app.

Sorry, I was working from that premise, which (I thought!) you stated
in your first message:

"I personally would not expect to browse to a site and then just
happen to be able to use it offline, nor do I expect users to have
that expectation or experience. Rather, I expect going through some
sort of flow like clicking something that says 'Yes, I want to use
Application X offline.'"

Could also be an infobar on first some-kind-of-storage use, which
users can click to say "yeah, make sure this works offline" vs "it can
use some storage, I guess, but don't let it get in the way of my
torrents!"  I am not a UI designer worth the term, but I *do* believe
that the problem is solvable.

Mike


Re: [whatwg] Storage quota introspection and modification

2010-03-11 Thread Mike Shaver
2010/3/11 Ian Fette (イアンフェッティ) ife...@google.com:
 AFAIK most browsers are setting a default quota for storage options that is
 on the order of megabytes.

Could well be, indeed.  It sounded like you'd done some thinking about
the size, and I was curious about how you came up with that number
(versus some %age of available disk, for example).

 Yes, but I think there may be uses of things like storage for non-offline
 uses (pre-fetching email attachments, saving an email that is in a draft
 state etc.)  If it's relatively harmless, like 1mb usage, I don't want to
 pop up an infobar, I just want to allow it. So, I don't really want to have
 an infobar each time a site uses one of these features for the first time,
 I'd like to allow innocuous use if possible

I think of an infobar as relatively innocuous, and a good balance of
user awareness versus flow interruption, but I repeat my lack of
interaction design credentials!

(Your attachment example is an interesting one, I think: do I get the
request if I request too-big an attachment, but not if it's a small
one?  Or if it's saving a blog post draft that has a bunch of images
in it, vs. one that's 140 chars long.)

 But at the same time, I want
 apps to be able to say up front, at a time when the user is thinking about
 it (because they just clicked something on the site, presumably) here's
 what I am going to need.

OK, I see.  What if we had the named-subquota stuff, and the way you
triggered that request was creation of a named subquota?  That would
also encourage app developers to provide a description of the quota,
and perhaps the sort of necessary for offline operation vs improves
performance vs supports additional features.  The named subquota
creation request could give an initial requested size and estimated
upper bound for the size.  An async event delivered back (or a
callback called) could tell the app what quota it was granted, if any
(or maybe just that it was granted some, but the size limit wasn't
specified).

Mike


Re: [whatwg] Storage quota introspection and modification

2010-03-10 Thread Mike Shaver
2010/3/10 Ian Fette (イアンフェッティ) ife...@google.com:
 As I talk with more application developers (both within Google and at
 large), one thing that consistently gets pointed out to me as a problem is
 the notion of the opaqueness of storage quotas in all of the new storage
 mechanisms (Local Storage, Web SQL Database, Web Indexed Database, the
 Filesystem API being worked on in DAP, etc). First, without being able to
 know how large your quota currently is and how much headroom you are using,
 it is very difficult to plan in an efficient manner. For instance, if you
 are trying to sync email, I think it is reasonable to ask "how much space do
 I have", as opposed to just getting halfway through an update and finding
 out that you hit your quota, rolling back the transaction, trying again with
 a smaller subset, realizing you still hit your quota, etc.

It generally seems that desktop mail clients behave in the
undesirable way you describe, in that I've never seen one warn me
about available disk space, and I've had several choke on a disk being
surprisingly full.  And yet, I don't think it causes a lot of problems
for users.  One reason for that is likely that most users don't
operate in the red zone of their disk capacity; a reason for THAT
might be that the OS tells them that they're getting close, and that
many of their apps start to fail when they get full, so they are more
conditioned to react appropriately when they're warned.  (Also,
today's disks are gigantic, so if you fill one up it's usually a WTF
sort of moment.)

Part of that is also helped by the fact that they're managing a single
quota, effectively, which might point to a useful simplification: when
the disk gets close to full, and there's a lot of data in the
storage cache, the UA could prompt the user to do some cleanup.  Just
as with cleaning their disk, they would look for stuff they had
forgotten was still on there ("I haven't used Google Reader in ages!")
or didn't know was taking up so much space ("Flickr is caching *how*
much image data locally?").  The browser could provide a unified
interface for setting a limit, forbidding any storage, compressing to
trade space for perf; on the desktop users need to configure those
things per-application, if such are configurable at all.  If I really
don't like an app's disk space usage on the desktop, I can uninstall
it, for which the web storage analogue would perhaps be setting a
small/zero quota, or just not going there.

One thing that could help users make better quota decisions is a way
for apps to opt in to sub-quotas: gmail might have quotas for contact
data, search indexing, message bodies, and attachments.  I could
decide that on my netbook I want message bodies and contact data, but
will be OK with slow search and missing attachments.  An app like
Remember The Milk might just use one quota for simplicity, but with
the ability to expose distinct storage types to the UA, more complex
web applications could get sophisticated storage management for
free.

So I guess my position is this: I think it's reasonable for apps to
run into their quota, and to that end they should probably synchronize
data in priority order where they can distinguish (and if they were
going to make some decision based on the result of a quota check,
presumably they can).  User agents should seek to make quota
management as straightforward as possible for users.  One reasonable
approach, IMO, is to assume that if there is space available on the
disk, then an app they've "offlined" can use it.  "If it hurts, don't
go back to that site", or put it in a quota box when you get the
"achievement unlocked: 1GB of offline data" pop-up.

Mike


Re: [whatwg] localStorage mutex - a solution?

2009-11-25 Thread Mike Shaver
On Wed, Nov 25, 2009 at 6:20 AM, Ian Hickson i...@hixie.ch wrote:
 Reading or writing a property on a native object doesn't do it, so

   window['x'].document.forms['y'].value = 'foo';

 ...doesn't release the mutex, though this (identical code) would:

   window['x'].document.forms.namedItem('y').value = 'foo';

 ...because of namedItem() call.

I don't think that we can reasonably expect web developers to do that
kind of analysis, since they are likely to be working through
libraries and other sources of indirection.

(Especially since I bet there is a lot of documentation describing
those two cases as being entirely identical in behaviour.)

Mike


Re: [whatwg] localStorage mutex - a solution?

2009-11-24 Thread Mike Shaver
On Tue, Nov 24, 2009 at 6:12 PM, Rob Ennals rob.enn...@gmail.com wrote:
 If you run your browser in super-warnings-enabled mode then you
 could have it warn you if you did anything remotely suspect between
 calls to localStorage (e.g. calling a function defined by an external
 javascript file or calling an API).

How would the browser distinguish between

storage-operation-1
inadvertent-API-call
storage-operation-2-that-should-be-atomic-with-1

and

storage-operation-1
API-calls-to-gather-data-for-another-transaction
storage-operation-2-that-is-unrelated-to-1

?  Seems like that's a necessary distinction if it's not to just warn
all over the place uselessly!

Mike


Re: [whatwg] localStorage mutex - a solution?

2009-11-04 Thread Mike Shaver
On Wed, Nov 4, 2009 at 5:13 PM, Rob Ennals rob.enn...@gmail.com wrote:
 How about this for a solution for the localStorage mutex problem:

 the user agent MAY release the storage mutex on *any* API operation except
 localStorage itself

 This guarantees that the common case of several storage operations in a row
 with nothing in-between works, but gives the implementors the freedom to
 release the storage mutex wherever else they find they need to.

How does it guarantee that?  Can't the user agent release the mutex
due to activity in another process/thread, between operations that are
sequential in a given script?

Mike


Re: [whatwg] localStorage mutex - a solution?

2009-11-04 Thread Mike Shaver
On Wed, Nov 4, 2009 at 5:51 PM, Rob Ennals rob.enn...@gmail.com wrote:
 Or to put it another way: if the thread can't call an API then it can't
 block waiting for another storage mutex, thus deadlock can't occur, thus we
 don't need to release the storage mutex.

Right, but the spec text there doesn't prevent the UA from releasing
more than in that scenario, which seems like it's not an improvement
over where we are right now: unpredictable consistency.  Existing racy
implementations like in IE would be conformant, so developers can't
count on the script-sequenced-storage-ops pattern providing
transactionality.

More likely, though, _I_'m missing something...

Mike


Re: [whatwg] Application defined locks

2009-09-11 Thread Mike Shaver
Aaron,

You're right, my recollection is quite incorrect.  My apologies for
unfairly describing the origin of the proposal.

Do you agree with Jeremy that Database is too far along in terms of
deployment to have significant changes made to it?  Given that we're
still hashing our major philosophical elements with respect to
transactionality and locking in parts of HTML5, I can imagine it being
quite desirable to make Database conform to whatever model we settle
on.  "Does the localStorage mutex plus onbeforeunload plus Database
transaction collision equal deadlock?", etc.

(I have other concerns with Database, but they are higher-level and
therefore likely less compelling to its advocates. :-) )

Mike



On 9/11/09, Aaron Boodman a...@google.com wrote:
 On Fri, Sep 11, 2009 at 6:45 AM, Mike Shaver mike.sha...@gmail.com wrote:
 I'm especially concerned to hear you say that DB is basically done,
 since as far as I can tell it just came over the wall and into
 HTML5-land as a description of what Gears had already implemented and
 shipped, not through any of the sorts of "is this the right model for
 the web?", "what problems are we trying to solve?" analysis that
 characterizes most of the conversations here.

 That isn't true at all. HTML5 database is completely different and
 incompatible with what's in Gears. It is a big improvement (imo).

 For a reminder, here is the Gears API:
 http://code.google.com/apis/gears/api_database.html

 - a


-- 
Sent from Gmail for mobile | mobile.google.com


Re: [whatwg] Global Script proposal.

2009-09-03 Thread Mike Shaver
On Thu, Sep 3, 2009 at 7:30 AM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 31 Aug 2009, Mike Shaver wrote:

 The multiple server-side processes that end up involved over the course
 of the user's interaction do need to share state with each other, and
 preserving blocking semantics for accessing such state makes the
 programs much simpler to reason about given today's programming
 languages.  Is that shared state not what the Global Script Object would
 provide?

 Aren't global script objects supposed to be client-side? I don't see how
 they would help with server-side state.

Yeah, I was reasoning by analogy; the global script object would on
the client be used as the databases or session state are on the server
side.  (Especially since LocalStorage isn't available to workers.)

Mike


Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-31 Thread Mike Shaver
On Mon, Aug 31, 2009 at 6:11 AM, Ian Hickson i...@hixie.ch wrote:
 We can't treat cookies and persistent storage differently, because
 otherwise we'll expose users to cookie resurrection attacks. Maintaining
 the user's expectations of privacy is critical.

By that reasoning we can't treat cookies differently from the HTTP
cache (ETag) or history (URIs with session IDs), I think.  I don't
know of any UAs that expire history/cookie/cache in sync to avoid
correlations -- if it's even possible to do so -- and I don't think
I've seen any bugs asking Firefox to do so.

Mike


Re: [whatwg] Global Script proposal.

2009-08-31 Thread Mike Shaver
On Sat, Aug 29, 2009 at 5:40 PM, Ian Hickson i...@hixie.ch wrote:
 Furthermore, consider performance going forward. CPUs have pretty much
 gotten as fast as they're getting -- all further progress is going to be
 in making multithreaded applications that use as many CPUs as possible. We
 should actively moving away from single-threaded designs towards
 multithreaded designs. A shared global script is firmly in the old way of
 doing things, and won't scale going forward.

Multi-threaded or multi-process, at least.  Most web developers are
quite familiar with multi-process development, where each process has
a single flow of control, since that's what the browser/server
interaction is.  The multiple server-side processes that end up
involved over the course of the user's interaction do need to share
state with each other, and preserving blocking semantics for accessing
such state makes the programs much simpler to reason about given
today's programming languages.  Is that shared state not what the
Global Script Object would provide?  If the synchronization overhead
of manipulating it becomes undesirable to an app developer for
performance reasons, they can use a worker with local state and an
event mechanism or some such; that's largely what people do on the
server side as well.

 Granted,
 programmers today don't want to use threads -- but, well, tough. All
 indications are that that's what the programming model of the next few
 decades is going to be; now is the time to move that way. We shouldn't be
 adding features that actually move us back to the single-threaded world.

I disagree that explicit use of threads is the programming model of
the next few decades.  We are seeing more and more developers eschew
shared-heap threads in favour of other techniques (f.e., task queues)
that adapt better to varying system resources and allow simpler
decomposition of the programming tasks.  Apple's Grand Central
Dispatch appears to be in this vein, though I confess I haven't
analyzed it in any significant way yet.
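The task-queue style mentioned above can be illustrated in a few lines. This is only a minimal sketch using plain promise chaining, not Grand Central Dispatch or any real worker API: work is decomposed into independent tasks submitted to a queue, with no shared-heap locking in sight.

```javascript
// Minimal sketch of the task-queue style: tasks are submitted to a queue
// and run strictly in submission order, so no locks are needed even though
// the tasks may touch common state.
class TaskQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Queue a task; it runs after every previously submitted task finishes.
  submit(task) {
    this.tail = this.tail.then(task);
    return this.tail;
  }
}

const queue = new TaskQueue();
const order = [];
queue.submit(() => order.push('parse'));
queue.submit(() => order.push('render'));
queue.submit(() => console.log(order.join(' -> '))); // prints "parse -> render"
```

Because ordering is enforced by the queue rather than by a mutex, this decomposition adapts to however many underlying threads or processes the runtime provides.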

Mike


Re: [whatwg] Storage mutex

2009-08-29 Thread Mike Shaver
On Fri, Aug 28, 2009 at 3:36 PM, Jeremy Orlow jor...@chromium.org wrote:
 Can a plugin ever call into a script while a script is running besides when
 the script is making a synchronous call to the plugin?  If so, that worries
 me since it'd be a way for the script to lose its lock at _any_ time.

Does the Chromium out-of-process plugin model take steps to prevent
it?  I had understood that it let such things race freely, but maybe
that was just at the NPAPI level and there's some other interlocking
protocol before script can be invoked.

Mike


Re: [whatwg] Text areas with pattern attributes?

2009-08-29 Thread Mike Shaver
On Sat, Aug 29, 2009 at 9:44 PM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 19 Aug 2009, Mike Shaver wrote:

 It's also pretty common to enter multiple email addresses or tracking
 numbers or URLs one-per-line for batch operations on sites, and they
 would benefit from having client-side validation of such patterns.

 This is handled by <input type=email multiple>.

For one of the 3 cases, yes.  What for the other 2?

Should we specify <input type=text multiple>, for related but distinct
text entries?

Mike


Re: [whatwg] Storage mutex

2009-08-28 Thread Mike Shaver
On Tue, Aug 25, 2009 at 10:36 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Tue, Aug 25, 2009 at 10:28 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 On Tue, Aug 25, 2009 at 11:51 AM, Jeremy Orlow jor...@chromium.org
 wrote:

 To me, getStorageUpdates seems to imply that updates have already
 happened and we're working with an old version of the data.  I think many
 developers will be quite shocked that getStorageUpdates _enables_ others to
 update storage.  In other words, 'get' seems to imply that you're consuming
 state that's happening anyway, not affecting behavior.

 fetchStorageUpdates?

 fetch has the same problem.  If we want to keep the StorageUpdates suffix,
 I'd go with something like allowStorageUpdates.  But, no matter what, it
 just doesn't seem very clear that you're actively allowing another thread to
 use the storage mutex.
 What about yieldStorageMutex?  Yield is enough different from unlock that I
 don't think it'll leave developers looking for the lock function.  Yield
 fits pretty well since this is essentially cooperative multi-tasking.
  StorageMutex is good because that's what its actually affecting.

processPendingStorageUpdates? processStorageUpdates?

FWIW I would expect getStorageUpdates to return some storage updates
to the caller, like a getter.  I would expect fetchStorageUpdates to
do the same thing, except maybe involving something over the network,
and I would be a little puzzled about why it wasn't just
getStorageUpdates.

Mike


Re: [whatwg] Web Storage: apparent contradiction in spec

2009-08-27 Thread Mike Shaver
On Wed, Aug 26, 2009 at 8:23 PM, Linus Upson li...@google.com wrote:
 The
 candidate delete list will be thousands long and hidden in that haystack
 will be a few precious needles.

While that is certainly one of the outcomes, and I agree a bad one, I
am not sure that the user experience needs to be that bleak.  Further,
I expect that most UAs will need some dialog like this to let users
manage the data in some cases anyway -- if users can grant
permissions, they'll need to be able to revoke them, etc.

I think that there is a lot of data that UAs can use to present likely
targets for the user, as infrequently used desktop icons or similar
endeavour to do.  Bookmarked sites would be advantaged, as would sites
that have had their stored data read recently (or have an active
cookie, history frequency data, etc.).

I think highly of myself and my team, to be sure, but I don't think
that we are really going to always know better than the user -- even a
technically naive user -- what they consider to be important data.

Mike


Re: [whatwg] Text areas with pattern attributes?

2009-08-19 Thread Mike Shaver
On Wed, Aug 19, 2009 at 2:38 PM, Jonas Sicking jo...@sicking.cc wrote:
 So for the pattern attribute, a use case would be on a site that
 accepts US addresses (for example a store that only ships within the
 US), the site could use a textarea together with a pattern that
 matches US addresses.

It's also pretty common to enter multiple email addresses or tracking
numbers or URLs one-per-line for batch operations on sites, and they
would benefit from having client-side validation of such patterns.
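For illustration, the per-line check such a pattern would express declaratively can be sketched in script today. The regex below is a deliberately loose, made-up email shape for the example, not a spec grammar, and `validateBatch` is an invented helper name.

```javascript
// Sketch of client-side one-per-line batch validation, the sort of check a
// pattern attribute on a multi-line control could make declarative.
// The regex is illustrative only, not a rigorous address grammar.
const emailLine = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateBatch(text) {
  return text
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0)   // ignore blank lines
    .every(line => emailLine.test(line));
}

console.log(validateBatch('a@example.com\nb@example.org')); // true
console.log(validateBatch('a@example.com\nnot-an-address')); // false
```

The same shape works for tracking numbers or URLs by swapping the per-line expression, which is exactly what a pattern attribute would let the markup declare.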

Mike


Re: [whatwg] SharedWorkers and the name parameter

2009-08-17 Thread Mike Shaver
On Sat, Aug 15, 2009 at 8:29 PM, Jim Jewett jimjjew...@gmail.com wrote:
 Currently, SharedWorkers accept both a url parameter and a name
 parameter - the purpose is to let pages run multiple SharedWorkers using the
 same script resource without having to load separate resources from the
 server.

 [ request that name be scoped to the URL, rather than the entire origin,
 because not all parts of example.com can easily co-ordinate.]

 Would there be a problem with using URL fragments to distinguish the workers?

 Instead of:
    new SharedWorker("url.js", "name");

 Use
    new SharedWorker("url.js#name");
 and if you want a duplicate, call it
    new SharedWorker("url.js#name2");

 The normal semantics of fragments should prevent the repeated server fetch.

I don't think that it's very natural for the name to be derived from
the URL that way.  Ignoring that we're not really identifying a
fragment, it seems much less self-documenting than a name parameter.
I would certainly expect, from reading that syntax, for the #part to
be calling out a sub-script (property or function or some such) rather
than changing how the SharedWorker referencing it is named!

Mike


Re: [whatwg] Dates BCE

2009-07-30 Thread Mike Shaver
Can the historical-timeline community perhaps work with a microformat
for such things, so that we can standardize on the basis of experience
using the technology in the field, rather than on speculative uses?

Mike


Re: [whatwg] Make quoted attributes a conformance criterion

2009-07-26 Thread Mike Shaver
On Sun, Jul 26, 2009 at 5:10 AM, Keryx Web webmas...@keryx.se wrote:
 Mike, I know what you are doing at Mozilla, and have a ton of respect for
 you. But I fail to see how you could misunderstand my analogy to JSLint. Or
 do you suggest that Doug Crockford should drop manual semi-colon insertion
 from that tool?

I'm suggesting that a tool which produces an error report for all use
of HTML event handler attributes is enforcing Mr Crockford's style,
and not just accepted best practices, making its requiring of a
trailing semi in such event handler attributes rather
non-authoritative.

Mike


Re: [whatwg] Make quoted attributes a conformance criterion

2009-07-26 Thread Mike Shaver
On Sun, Jul 26, 2009 at 5:15 AM, Keryx Web webmas...@keryx.se wrote:
 My analogy was simply this: Just like it makes sense for a JavaScript lint
 tool to enforce semi-colons, it makes sense for an HTML conformance checker
 to enforce quotation marks.

A lint tool is not a conformance checker.  Your proposal here is
analogous to removing ASI from ECMAScript, such that a program which
relied on it would not be conformant.

I recommend that you find an HTML guru of the same stature as
Crockford in the JS community, and convince her to write a lint tool
which forbids unquoted attribute values.  Once you have that, you can
(attempt to) popularize that style via evangelism for the lint tool,
rather than trying to foist your stylistic preferences -- which, as it
happens, I share -- onto the world via spec requirements.

Mike


Re: [whatwg] Make quoted attributes a conformance criterion

2009-07-25 Thread Mike Shaver
On Sat, Jul 25, 2009 at 5:47 AM, Keryx Web webmas...@keryx.se wrote:
 I think my suggestion is totally analogous to e.g. semi-colon insertion in
 ECMAScript. JSLint demands that those should be present, and I've yet to
 hear anyone say it's a matter of style. Omitting semi-colons is a known
 cause of trouble in ECMAScript.

And yet, tons of inline event handler attribute values on the web omit
their trailing semicolons...as a matter of style.
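For what it's worth, the usual motivation behind lint rules like that is that automatic semicolon insertion can silently change a program's meaning. A standard illustration (the function names here are mine, chosen for the example):

```javascript
// ASI inserts a semicolon after the bare `return`, so the object literal on
// the following line is parsed as an unreachable block statement (with `ok:`
// read as a label), not as the return value.
function broken() {
  return
  { ok: true }
}

function fixed() {
  return { ok: true };
}

console.log(broken());    // undefined
console.log(fixed().ok);  // true
```

Inside a one-line event handler attribute, though, no such hazard arises, which is why the trailing semicolon there really is just style.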

Mike


Re: [whatwg] Codecs for video and audio

2009-07-14 Thread Mike Shaver
On Tue, Jul 7, 2009 at 5:09 PM, Ian Hickson i...@hixie.ch wrote:
 We've narrowed codecs down to two. The spec could say that UA which
 supports video MUST implement at least one of Theora or H.264. All
 vendors can comply with that, and that's better than not specifying any
 codecs at all (e.g. doesn't allow browsers to support WMV only).

 That may be where we end up if we really can't resolve this, yes. That
 would be unfortunate, though.

I don't see how that helps, if the spec is descriptive rather than
prescriptive.  Surely if a major browser were to pop up and say "we
will only support VC1" you'd be forced to change the spec to permit
that?  If you can't forbid people from supporting H.264 only, via spec
text, I don't understand how you could forbid people from supporting
WMV only via spec text.

I also still don't understand how YouTube's objection is relevant to
the codec decision for the standard, since the 1% browser from that
company _will_ support Theora.  But that would be less important to me
if there were something more crisp than "quality-per-bit isn't good
enough", so that people could reasonably work to reach that target.
Some sort of statement about what would be good enough would certainly
make the objection more constructive, and to my eyes at least would
make it a more principled basis for influencing the spec.

Mike


Re: [whatwg] Codecs for video and audio

2009-07-14 Thread Mike Shaver
On Tue, Jul 14, 2009 at 2:19 PM, Peter Kasting pkast...@google.com wrote:
 It makes sense if you think about it -- whether YouTube sends videos encoded
 as H.264 is irrelevant to what the _baseline_ codec for video needs to be,
 it is only relevant as additional info for vendors deciding whether to
 support H.264.

Yes, I concur --  I couldn't think of any reason for that to be
relevant to the discussion of baseline codecs at first, so I tried to
make it fit (and asked questions about the details of it).

I will patiently await the details. :-)

Mike


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Mike Shaver
On Tue, Jun 30, 2009 at 12:50 AM, Ian Hickson i...@hixie.ch wrote:
 Finally, what is Google/YouTube's official position on this?

 As I understand it, based on other posts to this mailing list in recent
 days: Google ships both H.264 and Theora support in Chrome; YouTube only
 supports H.264, and is unlikely to use Theora until the codec improves
 substantially from its current quality-per-bit.

It would be good to understand what the threshold for acceptability is
here; earlier reports on this mailing list have indicated that (on at
least the tested content) Theora can produce quality-per-bit that is
quite comparable to that of H.264 as employed by YouTube.  As one
organization investing, and invested, in the success of Theora,
Mozilla would be very glad to know so that we can help reach that
target.

Can one of the Google representatives here get a statement from
YouTube about the technical threshold here?  I think it could have
significant impact on the course of video on the web; perhaps more
than SHOULD language in HTML5 here.

I personally believe that putting codec requirements in the
specification could have significant market effects, because it would
take advantage of general market pressure for standards compliance.
As an example, if you put it in HTML5 then you could put it in ACID4,
and the ACID tests have historically been quite influential in driving
browser implementation choices.  Theora could get the same boost
NodeIterator has seen, I daresay to greater positive impact on the
web.

Mike


Re: [whatwg] Codecs for audio and video

2009-06-30 Thread Mike Shaver
On Tue, Jun 30, 2009 at 10:43 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 "No one has bothered
 porting Theora to the TMS320c64x DSP embedded in the OMAP3 CPU used in
 this handheld device" is an obviously surmountable problem.

Unless I'm mistaken about the DSP in question, that work is in fact
underway, and should bear fruit in the next handful of months.

Mike


Re: [whatwg] H.264-in-video vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 8:00 AM, Chris DiBona cdib...@gmail.com wrote:
 Comparing Daily Motion to Youtube is disingenuous.

Much less so than comparing promotion of H.264-in-video via
Google's sites and client to support for legacy proprietary content
via plugin APIs, I would say.  But also, I didn't compare DailyMotion
to YouTube!  I used it as an example of converting content at scale,
to speak to the relative impact of a codec change vs. API changes in
terms of effort.

 If yt were to
 switch to theora and maintain even a semblance of the current youtube
 quality it would take up most available bandwidth across the internet.
 The most recent public number was just over 1 billion video streams a
 day, and I've seen what we've had to do to make that happen, and it is
 a staggering amount of bandwidth. Dailymotion is a fine site, but
 they're just not Youtube.

I don't think the bandwidth delta is very much with recent (and
format-compatible) improvements to the Theora encoders, if it's even
in H.264's favour any more, but I'd rather get data than share
suppositions.  Can you send me a link to raw video for the clip at
http://www.youtube.com/demo/google_main.mp4?2 so I can get it
converted with the state of the art encoder and we can compare
numbers?

 Considering this 'argument'  came out of the larger issue that we're
 actually shipping with Theora (also on android, too), and as we showed
 at Google I/O, are sampling it on some pages at Youtube,

That's great news -- I wasn't able to be at Google I/O, and I can't
find any mention of Youtube providing Theora for consumption anywhere.
Can you clue me in with a link?  (It does seem that Youtube accepts
Theora at upload, but it seems like it gets transcoded to Flash or
whatever at that point, so it's converting from unencumbered to
encumbered!  http://www.youtube.com/watch?v=roE4bmOSURk is one example
that I'm pretty sure was uploaded in Theora format.)

 I will say that the best thing that can
 happen to Theora recently was firefox's support of it, though, but
 even better would be substantive codec improvements

That's indeed a big part of what we've been funding, and the results
have been great already.  I'd like to demonstate them to you, because
I suspect that you'd be a better-armed advocate within Google for
unencumbered video if you could see what it's really capable of now.
(Separate from the Wikimedia grant we also just started funding work
to port Theora to some DSPs, so that we will be able to do off-CPU
decode/yuv2rbg/scale on some devices.)

Mike


Re: [whatwg] H.264-in-video vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 9:37 AM, Chris DiBona cdib...@gmail.com wrote:
 I tried funding dirac a while back, to some good end, and we provide
 students, but here's the challenge: Can theora move forward without
 infringing on the other video compression patents?

We certainly believe so, but I'm certainly not qualified to evaluate
the different techniques.

Would Theora inherently be any less able to than any other codec
system, though?  I hope you're not saying that it has to be H.264
forever, given the spectre of the streaming license changes at the end
of 2010.

If Youtube is held back by client compatibility, they should be glad
that we're working hard to move ~25% of the web to having Theora
support in the near future!  Google could help that cause a lot by
putting (well-encoded, ahem) Theora up there, even if it's just in the
experimental /html5 area.  It wouldn't hurt to use the reference
libraries rather than ffmpeg for the client either, since we've found
significant differences in quality of experience there.

Mike


Re: [whatwg] H.264-in-video vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 10:08 AM, Chris DiBona cdib...@gmail.com wrote:
 No, but it is what I worry about. How aggressive will mpeg.la be in
 their interpretation of the direction that theora is going? I don't
 think that is a reason to stop the current development direction (or
 the funding of it) but I thought that Dirac, with the BBC connection,
 might make a better opponent politically than Theora.

I have reason to hope that Mozilla would be a good opponent
politically as well; that was certainly one piece that we were glad to
bring to the table.  Not that I have anything against Dirac, and would
love to see support for it as well, but I think it's farther from
being web-practical due to bandwidth minimums than Theora is.

 It is client compatibility first, and global/edge bandwidth restricted
 as well. I'd prefer to ship with the reference libraries and have told
 the team as much.

I would certainly like to understand the reasons for Chrome shipping
with H.264 support, but this thread has confused me a little.

As I understand it, you have to have it because, per your analogy with
plugins, there is a lot of legacy content with H.264 that will be made
available via the video tag so you're forced to provide support or
risk market irrelevance...

...but that legacy content is virtually *all* from Google properties...

...but Google can't provide Theora video because of...

a) client compatibility limits (circular with the above, though
Firefox will provide ~25% of the web with Theora support, vastly
larger than I think even the most optimistic projections of
Chrome+Safari with H.264 when Chrome ships video, so maybe we can
out-egg that chicken)

b) bandwidth concerns (but even if Theora took _double_ the bandwidth,
and _all_ the content was converted overnight, that's still only a 25%
increase in bandwidth, plus a few percent for Chrome when it ships
video as well)

So if we can remove the bandwidth spectre -- and I really believe we
can do that, if it hasn't already been done with the state of the art
encoders -- it sounds like it unwinds to client compatibility, which
happily Mozilla and Google have some significant influence over!

Mike


Re: [whatwg] H.264-in-video vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 8:00 AM, Chris DiBona cdib...@gmail.com wrote:
 actually shipping with Theora (also on android, too)

I was looking for a reference to this, but haven't found anything yet.

http://developer.android.com/guide/appendix/media-formats.html lists
Vorbis, but not Theora, and I can't find any announcements about it
elsewhere either.  Any leads? :)

Mike


Re: [whatwg] H.264-in-<video> vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 10:39 AM, Chris DiBona cdib...@gmail.com wrote:
 Let me ask David Sparks and see where it went, I remember we had it in
 the inital drops, or thought we did.

That'd be great -- all I can find reference to is Vorbis, as used for
the ringtones and system sounds (righteous!)

Mike


Re: [whatwg] H.264-in-<video> vs plugin APIs

2009-06-13 Thread Mike Shaver
On Sat, Jun 13, 2009 at 11:25 AM, Chris DiBona cdib...@gmail.com wrote:
 It'll take a little while, I'm travelling a bit this month (brazil ,
 new york, etc..)

Yep, I'll reach out to the o3d guys directly as well, see if they have
the source video for that clip.  More than happy to do the
measurements on this side, I know what a pain travel can be...

Mike


[whatwg] H.264-in-<video> vs plugin APIs

2009-06-12 Thread Mike Shaver
Apologies for the poor threading; I wasn't subscribed when the message
here was sent.

In http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-June/020237.html
Chris DiBona wrote:
  The incredibly sucky outcome is that Chrome ships patent-encumbered
  open web features, just like Apple. That is reprehensible.

 Reprehensible? Mozilla (and all the rest) supports those same open
 web features through its plugin architecture. Why don't you make a
 stand and shut down compatibility with plugins from flash, quicktime
 and others? How long would Firefox last in the market if it were
 incompatible with those? Honestly.

I think that "reprehensible" is excessive, and not helpful, but I
think you're very much missing the point here.

It's true that plugin support is necessary for competitiveness on the
desktop web, because there is a lot of content out there that requires
those plugins, for better or for worse.  And because each plugin has a
different API (and often different markup requirements), in addition
to codec differences, migration cost is a pain.  This is definitely a
problem for people on iPhones and the Pre and various mobile Linux
devices and platforms, and I gather for Android users as well.

That's not the case for <video>, as far as I can tell.  There is still
proportionately little content on the web that uses it, and as far as
H.264-in-<video> is concerned, basically *all* the content on the web
is Google's!  What legacy-content-compatibility requirement there is
comes from services in the same company!  Anyone else moving to
<video> now from their legacy setups will have much more to worry
about with respect to API changes than a simple transcoding, and the
experiences of DailyMotion and others indicate that the transcoding
works quite well.  (We want it to work better, of course, which is why
we're investing in tools and library development.)

I do not like the situation on the web today, where to use all the
content you need to have a license to Flash, and I'm saddened that
Google is choosing to use its considerable leverage -- especially in
the web video space, where they could be a king-maker if ever there
was one -- to create a _future_ in which one needs an H.264 patent
license to view much of the video content on the web.  Firefox won't
likely have native H.264 support, since we simply can't operate under
those patent restrictions, so by your analogy I suppose we won't last
long in the market -- I very much hope you're wrong about that aspect
as well.  And I hope that those who would follow Google's footsteps in
joining the web browser market don't have to get such a license as
well; that would be an unfortunate blow to the competitiveness of the
current environment, to which Google has contributed and from which it
has benefited.

Mike