Re: [whatwg] video and acceleration

2009-03-30 Thread Philip Jägenstedt
On Sat, 28 Mar 2009 05:57:35 +0100, Benjamin M. Schwartz bmsch...@fas.harvard.edu wrote:




Dear What,

Short: video won't work on slow devices.  Help!

Long:
The video tag has great potential to be useful on low-powered computers
and computing devices, where current internet video streaming solutions
(such as Adobe's Flash) are too computationally expensive.  My personal
experience is with OLPC XO-1*, on which Flash (and Gnash) are terribly
slow for any purpose, but Theora+Vorbis playback is quite smooth at
reasonable resolutions and bitrates.

The video standard allows arbitrary manipulations of the video stream
within the HTML renderer.  To permit this, the initial implementations
(such as the one in Firefox 3.5) will perform all video decoding
operations on the CPU, including the tremendously expensive YUV-RGB
conversion and scaling.  This is viable only for moderate resolutions and
extremely fast processors.
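
(To make the cost concrete, here is a minimal TypeScript sketch of per-pixel
YUV-to-RGB conversion, with BT.601-style full-range coefficients assumed for
illustration; it is not code from any browser:)

    // One YUV sample to one RGB pixel; software decoding does work
    // like this for every pixel of every frame.
    function yuvToRgb(y: number, u: number, v: number): [number, number, number] {
      const d = u - 128, e = v - 128;
      const clamp = (x: number) => Math.max(0, Math.min(255, Math.round(x)));
      return [
        clamp(y + 1.402 * e),             // R
        clamp(y - 0.344 * d - 0.714 * e), // G
        clamp(y + 1.772 * d),             // B
      ];
    }
    // A 640x480 frame at 25 fps is roughly 7.7 million such conversions
    // per second, before scaling and compositing.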

Recognizing this, the Firefox developers expect that the decoding process
will eventually be accelerated.  However, an accelerated implementation of
the video spec inevitably requires a 3D GPU, in order to permit
transparent video, blended overlays, and arbitrary rotations.

Pure software playback of video looks like a slideshow on the XO, or any
device with similar CPU power, achieving 1 or 2 fps.  However, these
devices typically have a 2D graphics chip that provides video overlay
acceleration: 1-bit alpha, YUV-RGB, and simple scaling, all in
special-purpose hardware.**  Using the overlay (via XVideo on Linux)
allows smooth, full-speed playback.

THE QUESTION:
What is the recommended way to handle the video tag on such hardware?

There are three obvious solutions:
0. Implement the spec, and just let it be really slow.
1. Attempt to approximate the correct behavior, given the limitations of
the hardware.  Make the video appear where it's supposed to appear, and
use the 1-bit alpha (dithered?) to blend static items over it.  Ignore
transparency of the video.  Ignore rotations, etc.
2. Ignore the HTML context.  Show the video in a manner more suitable to
the user (e.g. full-screen or in an independent resizable window).

Which is preferable?  Is it worth specifying a preferred behavior?


In the typical case a simple, correctly positioned hardware overlay could
be used, but there will always be a need for a software fallback when
rotation, filters, etc. are used. As Robert O'Callahan said, a user agent
would need to detect when it is safe to use hardware acceleration and use
it only then.
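
(A rough TypeScript sketch of that kind of check, with invented names --
not from any real user agent:)

    // Use the overlay only when the composited result is something the
    // 2D overlay hardware can express; otherwise fall back to software.
    interface VideoRenderState {
      rotated: boolean;             // any transform beyond translate/scale
      filtered: boolean;            // opacity < 1, SVG filters, etc.
      needsFullAlphaBlend: boolean; // overlapping content beyond 1-bit alpha
    }

    function canUseOverlay(s: VideoRenderState): boolean {
      return !s.rotated && !s.filtered && !s.needsFullAlphaBlend;
    }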


If there is something that could be changed in the spec to make things a  
bit easier for user agents it might be an overlay attribute, just like SVG  
has:  
http://www.w3.org/TR/SVGTiny12/multimedia.html#compositingBehaviorAttribute


I'm not convinced such an attribute would help, just pointing it out  
here...


--
Philip Jägenstedt
Opera Software


Re: [whatwg] Web Addresses vs Legacy Extended IRI (again)

2009-03-30 Thread Giovanni Campagna
2009/3/29 Kristof Zelechovski giecr...@stegny.2a.pl:
 It is not clear that the server will be able to correctly support various
 representations of characters in the path component, e.g. identify accented
 characters with their decompositions using combining diacritical marks.  The
 peculiarities can depend on the underlying file system conventions.
 Therefore, if all representations are considered equally appropriate,
 various resources may suddenly become unavailable, depending on the encoding
 decisions taken by the user agent.
 Chris

It is not clear to me that the server will be able to support the
composed form of à or ø. Where is the conversion from ISO-8859-1 to UCS
specified? Nowhere.
If a server knows it cannot deal with Unicode Normalization, it should
either use an encoding form of Unicode (UTF-8, UTF-16), implement a
technology that uses IRIs directly (because Normalization is introduced
only when converting to a URI), or generate IRIs with binary path data in
opaque form (i.e., percent-encoded).
By the way, the server should be able to deal with both composed and
decomposed forms of accented characters (or use neither), because I may
type the path directly into my address bar (do you know which IME I
use?)
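
(A small TypeScript illustration of the composed/decomposed mismatch, using
the standard String.prototype.normalize:)

    const composed = "\u00E0";    // "à" as one precomposed code point
    const decomposed = "a\u0300"; // "a" plus combining grave accent
    console.log(composed === decomposed);                               // false
    console.log(composed.normalize("NFC") === decomposed.normalize("NFC")); // true
    // Percent-encoded, the two forms produce different URI paths:
    console.log(encodeURIComponent(composed));   // "%C3%A0"
    console.log(encodeURIComponent(decomposed)); // "a%CC%80"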

Giovanni


Re: [whatwg] Worker feedback

2009-03-30 Thread Drew Wilson
On Fri, Mar 27, 2009 at 6:23 PM, Ian Hickson i...@hixie.ch wrote:


 Another use case would be keeping track of what has been done so far, for
 this I guess it would make sense to have a localStorage API for shared
 workers (scoped to their name). I haven't added this yet, though.


On a related note, I totally understand the desire to protect developers
from race conditions, so I understand why we've removed localStorage access
from dedicated workers. In the past we've discussed having synchronous APIs
for structured storage that only workers can use - it's a much more
convenient API, particularly for applications porting to HTML5 structured
storage from Gears. It sounds like if we want to support these APIs in
workers, we'd need to enforce the same kind of serializability guarantees
that we have for localStorage in browser windows (i.e. add some kind of
structured storage mutex similar to the localStorage mutex).
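
(For concreteness, the lost-update race in question looks like this generic
sketch -- two contexts interleaving between the read and the write:)

    // Without serialization, another context can write "count" between
    // the getItem and the setItem, and its update is lost.
    const n = parseInt(localStorage.getItem("count") || "0", 10);
    localStorage.setItem("count", String(n + 1));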





Gears had an explicit permissions variable applications could check,
which seems valuable - do we do anything similar elsewhere in HTML5
that we could use as a model here?
  
   HTML5 so far has avoided anything that requires explicit permission
   grants, because they are generally a bad idea from a security
   perspective (users will grant any permissions the system asks them
   for).
 
  The Database spec has a strong implication that applications can request
  a larger DB quota, which will result in the user being prompted for
  permission either immediately, or at the point that the default quota is
  exceeded. So it's not without precedent, I think. Or maybe I'm just
  misreading this:
 
  User agents are expected to use the display name and the estimated
  database size to optimize the user experience. For example, a user agent
  could use the estimated size to suggest an initial quota to the user.
  This allows a site that is aware that it will try to use hundreds of
  megabytes to declare this upfront, instead of the user agent prompting
  the user for permission to increase the quota every five megabytes.

 There are many ways to expose this, e.g. asynchronously as a drop-down
 infobar, or as a pie chart showing the disk usage that the user can click
 on to increase the allocation whenever they want, etc.


Certainly. I actually think we're in agreement here - my point is not that
you need a synchronous permission grant (since starting up a worker is an
inherently asynchronous operation anyway) - just that there's precedent in
the spec for applications to request access to resources (storage space,
persistent workers) that are not necessarily granted to all sites by
default. It sounds like the specifics of how the UA chooses to expose this
access control (pie charts, async dropdowns, domain whitelists, trusted
zones with security levels) are left to the individual implementation.
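
(The quota hint in the quoted spec text is the fourth argument to
openDatabase; a sketch with an illustrative 200 MB figure -- TypeScript's
DOM typings omit this legacy API, hence the cast:)

    const db = (window as any).openDatabase(
      "mail",                // database name
      "1.0",                 // version
      "Offline Mail Store",  // display name shown to the user
      200 * 1024 * 1024      // estimated size: the upfront declaration
    );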


 Re: cookies
 I suppose that network activity should also wait for the lock. I've made
 that happen.


Seems like that would restrict parallelism between network loads and
executing JavaScript, which seems like the wrong direction to go.

It feels like we are jumping through hoops to protect running script from
having document.cookie modified out from underneath it, and now some of the
ramifications may have real performance impacts. From a pragmatic point of
view, I just want to remind people that many current browsers do not make
these types of guarantees about document.cookie, and yet the tubes have not
imploded.
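
(The pattern at risk is the usual read-modify-write on document.cookie -- a
generic sketch, not code from the thread:)

    // A parallel context, or a network response carrying Set-Cookie, can
    // change the cookie between this read and this write.
    const m = /(?:^|; )visits=(\d+)/.exec(document.cookie);
    const visits = m ? parseInt(m[1], 10) : 0;
    document.cookie = "visits=" + (visits + 1) + "; path=/";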




  Cookies have a cross-domain aspect (multiple subdomains can share cookie
  state at the top domain) - does this impact the specification of the
  storage mutex since we need to lock out multiple domains?

 There's only one lock, so that should work fine.


OK, I was assuming a single per-domain lock (à la localStorage) but it sounds
like there's a group lock, cross-domain. This makes it even more onerous if
network activity across all related domains has to serialize on a single
lock.

-atw


[whatwg] New mailing list h...@ietf.org for discussion of WebSocket (et al)

2009-03-30 Thread Ian Hickson

As Mark discusses below, the IETF has created a mailing list for 
discussion of the WebSocket protocol. I encourage anyone interested in 
this technology to subscribe to this new mailing list and participate in 
the discussions.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

-- Forwarded message --
Subject: New mailing list: hybi (HTTP long poll and related protocols)
From: Mark Nottingham m...@mnot.net
To: Apps Discuss disc...@apps.ietf.org,
HTTP Working Group ietf-http...@w3.org
Date: Tue, 31 Mar 2009 06:09:17 +1100

As discussed in the APPS area meeting last week, a new mailing list has been
created to discuss the standards implications and actions related to HTTP long
poll techniques (e.g., COMET, BOSH) as well as new protocols that serve similar
use cases (e.g., WebSockets, rHTTP).

See:
  https://www.ietf.org/mailman/listinfo/hybi

Cheers,

--
Mark Nottingham http://www.mnot.net/



Re: [whatwg] Worker feedback

2009-03-30 Thread Robert O'Callahan
On Tue, Mar 31, 2009 at 7:22 AM, Drew Wilson atwil...@google.com wrote:



 Re: cookies
 I suppose that network activity should also wait for the lock. I've made
 that happen.


 Seems like that would restrict parallelism between network loads and
 executing javascript, which seems like the wrong direction to go.

 It feels like we are jumping through hoops to protect running script from
 having document.cookie modified out from underneath it, and now some of the
 ramifications may have real performance impacts. From a pragmatic point of
 view, I just want to remind people that many current browsers do not make
 these types of guarantees about document.cookie, and yet the tubes have not
 imploded.


We have no way of knowing how much trouble this has caused so far;
non-reproducibility means you probably won't get a good bug report for any
given incident.

It's even plausible that people are getting lucky with cookie races almost
all the time, or maybe cookies are usually used in a way that makes them a
non-issue. That doesn't mean designing cookie races in is a good idea.



  Cookies have a cross-domain aspect (multiple subdomains can share cookie
  state at the top domain) - does this impact the specification of the
  storage mutex since we need to lock out multiple domains?

 There's only one lock, so that should work fine.


 OK, I was assuming a single per-domain lock (à la localStorage) but it
 sounds like there's a group lock, cross-domain. This makes it even more
 onerous if network activity across all related domains has to serialize on a
 single lock.


It doesn't have to. There are lots of ways to optimize here.

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] Worker feedback

2009-03-30 Thread Michael Nordman

 I think it makes sense to treat dedicated workers as simple subresources,
 not separate browsing contexts, and that they should thus just use the
 application cache of their parent browsing contexts. This is what WebKit
 does, according to ap.

 I've now done this in the spec.

Sounds good. I'd phrase it a little differently though. Dedicated workers
do have a browsing context that is distinct from their parent's, but the
appcache selected for a dedicated worker context is identical to the
appcache selected for the parent's context.



 For shared workers, I see these options:

  - Not allow app caches, so shared workers don't work when offline. That
  seems bad.

  - Same as suggested for dedicated workers above -- use the creator's
  cache, so at least one client will get the version they expect. Other
  clients will have no idea what version they're talking to, the creator
  would have an unusual relationship with the worker (it would be able
  to call swapCache() but nobody else would), and once the creator goes
  away, there will be a zombie relationship.

  - Pick an appcache more or less at random, like when you view an image in
  a top-level browsing context. Clients will have no idea which version
  they're talking to.

  - Allow workers to specify a manifest using some sort of comment syntax.
  Nobody knows what version they'll get, but at least it's always the
  same version, and it's always up to date.

 Using the creator's cache is the one that minimises the number of clients
 that are confused, but it also makes the debugging experience differ most
 from the case where there are two apps using the worker.

 Using an appcache selected the same way we would pick one for images has
 the minor benefit of being somewhat consistent with how window.open()
 works, and we could say that window.open() and new SharedWorker are
 somewhat similar.

 I have picked this route for now. Implementation feedback is welcome in
 determining if this is a good idea.


Sounds good for now.

Ultimately, I suspect that additionally allowing workers to specify a
manifest using some sort of syntax may be the right answer. That would put
cache selection for shared workers on par with how cache selection works
for pages (not just images) opened via window.open. As 'page' cache
selection is refined through experience with this system, those same
refinements would also apply to 'shared worker' cache selection.
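
(Purely as a hypothetical sketch of that idea -- no such syntax exists in
the spec or any implementation -- a shared worker script might declare its
manifest in a magic leading comment, shown here in TypeScript:)

    /* appcache-manifest: /app.manifest  -- invented pragma, illustration only */
    (self as any).onconnect = (e: MessageEvent) => {
      const port = e.ports[0];
      port.onmessage = (m: MessageEvent) => port.postMessage(m.data);
    };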