Re: [XHR] support for streaming data

2011-08-11 Thread Cyril Concolato

Hi Charles,

On 10/08/2011 at 23:19, Charles Pritchard wrote:

On 8/9/2011 1:00 AM, Cyril Concolato wrote:

Hi Charles,


I believe that GPAC seeks through large SVG files via offsets and small 
buffers, from what I understood at SVG F2F.
http://gpac.wp.institut-telecom.fr/
The technique is similar to what PDF has in its spec.

I don't know what you're referring to.


PDF 7.5.8.3 Cross-Reference Stream Data
PDF supports byte offsets, links and SMIL.

Thanks for the reference.



I suppose I was referring more to the MP4Box work than GPAC, though they do 
work in harmony.

MP4 has chunk offsets, and GPAC includes SVG discard support.
I believe that MP4Box stores, and GPAC reads, fragments of a large SVG file
throughout the MP4 stream, in a limited manner, similar to how a PDF reader
processes streams.

They both allow someone to seek and render portions of a large file,
without loading it all into memory.

From the article:

We have applied the proposed method to fragment SVG content into SVG streams on
long-running animated vector graphics cartoons, resulting from
the transcoding of Flash content... NHML descriptions were generated
automatically by the cartoon or subtitle transcoders.

... the smallest amount of memory [consumed] is with the 'Streaming and
Progressive Rendering' approach. The memory consumption peak is reduced by 64%.


SVG does not have byte offset hints, but GPAC expects
data to be processed by an authoring tool and otherwise works with transcoding, 
much as VLC (VideoLan) does.

The details of how we can do it are here:
http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=7129
Basically, for long running SVG animations (e.g. automatic translation from Flash to 
SVG), it is interesting to load only some SVG parts when they are needed and to 
discard them (using the SVG Tiny 1.2 discard element), when they are no 
longer needed. For that, we use an auxiliary file that indicates how to fragment the 
SVG file into a stream, giving timestamps to each SVG file fragment. That auxiliary 
file is then used to store the SVG fragments as regular access units in MP4 files; we
use MP4Box for that. The manipulation of those fragments for storage and playback is
then similar to what you would do for audio/video streams. We don't do transcoding
for SVG fragments but, for instance, individual gzip encoding is possible.
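The load-then-discard idea described above can be sketched in a few lines. This is a toy model only: the fragment records, field names, and `liveFragments` function are invented for illustration and are not the NHML format or any GPAC/MP4Box API.

```javascript
// Each SVG fragment acts like a timed "access unit": it must be loaded
// at its presentation timestamp and can be discarded (cf. the SVG Tiny
// 1.2 discard element) once it is no longer needed.
function liveFragments(fragments, now) {
  // A fragment should be resident in memory only between its
  // presentation timestamp and its discard time.
  return fragments
    .filter(f => f.timestamp <= now && now < f.discardAt)
    .map(f => f.id);
}

// Hypothetical stream of three scenes from a long-running cartoon:
const stream = [
  { id: 'scene1', timestamp: 0, discardAt: 5 },
  { id: 'scene2', timestamp: 4, discardAt: 9 },
  { id: 'scene3', timestamp: 8, discardAt: 12 },
];

console.log(liveFragments(stream, 4.5)); // scenes 1 and 2 overlap here
console.log(liveFragments(stream, 10)); // only scene 3 remains resident
```

Only the fragments whose window covers the playback clock stay in memory, which is where the reduced memory peak reported in the paper comes from.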

I think an interesting use case for XHR would be to be able to request data 
with some synchronization, i.e. with a clock reference and timestamp for each 
response data.

Some part of that could be handled via custom HTTP headers; though it's 
certainly a bit of extra-work,
much as implementing seek over http can be work.
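To make the custom-header idea concrete, here is a minimal client-side sketch. The header names `X-Clock-Ref` and `X-Chunk-Timestamp` are hypothetical, invented for this example; no spec defines them.

```javascript
// Parse a hypothetical clock reference and per-chunk timestamp from
// response headers. getHeader is any accessor returning the header
// value or null (e.g. wrapping XMLHttpRequest.getResponseHeader).
function parseTiming(getHeader) {
  const clockRef = getHeader('X-Clock-Ref');             // id of the common clock
  const ts = parseFloat(getHeader('X-Chunk-Timestamp')); // seconds on that clock
  if (clockRef === null || Number.isNaN(ts)) return null;
  return { clockRef, timestamp: ts };
}

// With a real XHR you would call parseTiming(h => xhr.getResponseHeader(h))
// from a progress event; here we fake the header accessor:
const headers = { 'X-Clock-Ref': 'avclock-1', 'X-Chunk-Timestamp': '12.040' };
console.log(parseTiming(h => headers[h] ?? null));
```

Chunks whose timing parses against the same clock reference could then be scheduled together, much as audio/video access units are.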

Custom HTTP headers or other HTTP Streaming solutions (e.g. MPEG DASH). That's 
the benefit of storing the SVG as fragments in an MP4. At the time we wrote the 
paper we were able to stream the SVG with an unmodified Darwin Streaming Server 
using the RTP protocol. I believe there would be no problem in streaming the 
SVG in an MP4 with an unmodified HTTP Server using the DASH approach. I haven't 
tried though.



I'll keep thinking about the case you brought up. I do believe timestamps are 
currently
available on events, relating to when the event was raised.

What do you mean by a clock reference?

That's a general concept when synchronizing multiple media streams: since they are
possibly not all synchronized together, you need to group them by their common clock.

Regards,

Cyril

--
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France
http://concolato.wp.institut-telecom.fr/



Re: DOM Mutation Events Replacement: When to deliver mutations

2011-08-11 Thread Olli Pettay

On 08/11/2011 03:44 AM, Rafael Weinstein wrote:

Although everyone seems to agree that mutations should be delivered
after the DOM operations which generated them complete, the question
remains:

   When, exactly, should mutations be delivered?

The four options I'm aware of are:

1) Immediately - i.e. while the operation is underway. [Note: This is
how current DOM Mutation events work].

2) Upon completion of the outer-most DOM operation. i.e. Immediately
before the lowest-on-the-stack DOM operation returns, but after it
has done all of its work.

3) At the end of the current Task. i.e. immediately before the UA is
about to fetch a new Task to run.

4) Scheduled as a future Task. i.e. fully async.

---

Discussion:

Options 1 & 4 don't seem to have any proponents that I know of, so briefly:

Option 1, Immediately:

Pro:
-It's conceptually the easiest thing to understand. The following *always* hold:
   -For calling code: When any DOM operation I make completes, all
observers will have run.
   -For notified code: If I'm being called, the operation which caused
this is below me on the stack.

Con:
-Because mutations must be delivered for some DOM operations before
the operation is complete, UAs must tolerate all ways in which script
may invalidate their assumptions before they do further work.


Option 4, Scheduled as a future Task:

Pro:
-Conceptually easy to understand
-Easy to implement.

Con:
-It's too late. Most use cases for mutation observation require that
observers run before a paint occurs. E.g. a widget library which
watches for special attributes. Script may create a <div
class="FooButton"> and an observer will react to this by decorating
the div as a FooButton. It is unacceptable (creates visual
artifacts/flickering) to have the div be painted before the widget
library has decorated it as a FooButton.

Both of these options appear to be non-starters. Option 1 has been
shown by experience to be an unreasonable implementation burden for
UAs. Option 4 clearly doesn't properly handle important use cases.

---

Options 2 & 3 have proponents. Since I'm one of them (a proponent),
I'll just summarize the main *pro* arguments for each and invite those
who wish (including myself), to weigh in with further support or
criticism in follow-on emails.


Option 2: Upon completion of the outer-most DOM operation.

Pro:
-It's conceptually close to fully synchronous. For simple uses
(specifically, setting aside the case of making DOM operations within
a mutation callback), it has the advantages of Option 1, without its
disadvantages. Because of this, it's similar to the behavior of
current Mutation Events.


Pro:
Semantics are consistent: delivery happens right before the
outermost DOM operation returns.

Easier transition from mutation events to the new API.

Not bound to tasks. Side effects, like problems related
to spinning the event loop, are per mutation callback, not
per whole task.
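Option 2's delivery point can be modeled with a toy depth counter. Everything here (`domOperation`, the queue, the record strings) is invented for illustration; it is not a real DOM or any proposed API.

```javascript
// Toy synchronous model of Option 2: mutations made by nested "DOM
// operations" are queued and delivered once, just before the
// outer-most operation returns.
let depth = 0;
let queue = [];
const observers = [];

function domOperation(name, body) {
  depth++;
  queue.push(name);  // pretend the operation mutated the DOM
  if (body) body();  // it may invoke further (nested) operations
  depth--;
  if (depth === 0 && queue.length) {  // outer-most return: deliver
    const records = queue;
    queue = [];
    observers.forEach(cb => cb(records));
  }
}

const delivered = [];
observers.push(records => delivered.push(records));

// The nested inner operation does NOT trigger delivery on its own:
domOperation('appendChild', () => domOperation('setAttribute'));
console.log(delivered); // one batch: [['appendChild', 'setAttribute']]
```

Note that if an observer callback itself performs a `domOperation`, that operation is outer-most from its own point of view, so its mutations are delivered before the callback's caller resumes, which is the "sometimes synchronous" behavior discussed later in the thread.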





Option 3: At the end of the current Task.

Pro:
-No code is at risk for having its assumptions invalidated while it is
trying to do work. All participants (main application script,
libraries which are implemented using DOM mutation observation) are
allowed to complete whatever work (DOM operations) they wish before
another participant starts doing work.




Con:
Since the approach is bound to tasks, it is not clear what should happen
if the event loop spins while handling the task. What if some other task
modifies the DOM [1]; when should the mutation callbacks fire?
Because of this issue, tasks which may spin the event loop should not
also modify the DOM, since that may cause unexpected results.

Callback handling is moved far away from the actual mutation.


Pro:
Can batch more, since the callbacks are called later than in
option 2.
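For contrast, Option 3's end-of-task delivery can be sketched the same way, with an explicit task runner standing in for the UA's event loop. All names are invented for illustration.

```javascript
// Toy model of Option 3: mutation callbacks run at the *end of the
// current task*, after all of that task's DOM work, so one batch can
// span many separate operations.
let pending = [];
const observers = [];

function mutate(record) {
  pending.push(record); // queue only; never deliver mid-task
}

function runTask(task) {
  task();               // the task may perform many mutations
  if (pending.length) { // end-of-task checkpoint
    const records = pending;
    pending = [];
    observers.forEach(cb => cb(records));
  }
}

const batches = [];
observers.push(records => batches.push(records));

runTask(() => { mutate('a'); mutate('b'); });
runTask(() => { mutate('c'); });
console.log(batches); // [['a','b'], ['c']] -- one batch per task
```

The larger batches are the "Pro" noted above; the open question in the "Con" is what `runTask` should do if the task itself spins a nested event loop.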


-Olli


[1] showModalDialog("javascript:opener.document.body.textContent = '';", "", "");




RfC: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]

2011-08-11 Thread Arthur Barstow

[ Topic changed to how to organize the group's DOM specs ... ]

Hi Adrian, Anne, Doug, Jacob, All,

The WG is chartered to do maintenance on the DOM specs so a question for 
us is how to organize the DOM specs, in particular, whether Anne's DOM 
spec should be constrained (or not) to some set of features e.g. the 
feature set in the DOM L3 Core spec.


There are advantages to the monolithic/kitchen-sink approach and, as we 
have seen with other large specification efforts, there are disadvantages 
too. In general, I prefer smaller specs with a tight{er,ish} scope and I 
think there should be compelling reasons to take the monolithic 
approach, especially if there is a single editor. Regardless of the 
approach, the minimal editor(s) requirements are: previous credible 
experience, technical competence in the area, demonstrated ability to 
seek consensus with all of the participants and willingness to comply 
with the W3C's procedures for publishing documents.


In the case of AvK's DOM spec, there has been some progressive feature 
creep. For instance, the 31-May-2011 WD included a new chapter on Events 
(with some overlap with D3 Events). The 2-Aug-2011 ED proposed for 
publication includes a new chapter on Traversal. Additionally, the ED 
still includes a stub section for mutation events which is listed as a 
separate deliverable in the group's charter (Asynchronous DOM Mutation 
Notification (ADMN)).


Before we publish a new WD of Anne's DOM spec, I would like comments on 
how the DOM specs should be organized. In particular: a) whether you 
prefer the status quo (currently that is DOM Core plus D3E) or if you 
want various additional features of DOM e.g. Traversal, Mutation Events, 
etc. to be specified in separate specs; and b) why. Additionally, if you 
prefer features be spec'ed separately, please indicate your willingness 
and availability to contribute as an editor vis-à-vis the editor 
requirements above.


-ArtB

On 8/4/11 2:24 PM, ext Adrian Bateman wrote:

On Wednesday, August 03, 2011 7:12 AM, Arthur Barstow wrote:

Anne would like to publish a new WD of DOM Core and this is a Call for
Consensus (CfC) to do so:

http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html

Agreeing with this proposal: a) indicates support for publishing a new
WD; and b) does not necessarily indicate support for the contents of the WD.

If you have any comments or concerns about this proposal, please send
them to public-weba...@w3.org by August 10 at the latest.

Positive response is preferred and encouraged and silence will be
considered as agreement with the proposal.

Microsoft has some concerns about this document:

1. We have received feedback from both customers and teams at Microsoft that
the name DOM Core is causing confusion with the previous versions of DOM Core.
We request that the specification be named DOM Level 4 Core. The original Web
DOM Core name would also be acceptable.

2. The scope of the document is unclear. Microsoft believes that the document
should focus on core DOM interfaces to improve interoperability for DOM Core
in the web platform and to incorporate errata. If there are problems with
other specifications such as Traversal, those documents should be amended.
This functionality shouldn't be pulled into DOM Core. We believe improvements
for mutation events should be kept a separate deliverable for this working
group (ADMN).

We would prefer to see these issues addressed before moving ahead with
publication.

Thanks,

Adrian.




Re: RfC: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]

2011-08-11 Thread Ms2ger

Hi Art,

(CCing some people you apparently forgot to CC, but who might have an 
opinion on this matter, and a stake in the outcome of the discussion.)


On 08/11/2011 12:28 PM, Arthur Barstow wrote:

[ Topic changed to how to organize the group's DOM specs ... ]

Hi Adrian, Anne, Doug, Jacob, All,

The WG is chartered to do maintenance on the DOM specs so a question for
us is how to organize the DOM specs, in particular, whether Anne's DOM
spec should be constrained (or not) to some set of features e.g. the
feature set in the DOM L3 Core spec.

There are advantages to the monolithic/kitchen-sink approach and, as we
have seen with other large specification efforts, there are disadvantages
too. In general, I prefer smaller specs with a tight{er,ish} scope and I
think there should be compelling reasons to take the monolithic
approach, especially if there is a single editor. Regardless of the
approach, the minimal editor(s) requirements are: previous credible
experience, technical competence in the area, demonstrated ability to
seek consensus with all of the participants and willingness to comply
with the W3C's procedures for publishing documents.


I believe you missed time and willingness to spend that time on editing 
the specification, both on the side of the editor and possibly their 
manager.



In the case of AvK's DOM spec, there has been some progressive feature
creep. For instance, the 31-May-2011 WD included a new chapter on Events
(with some overlap with D3 Events). The 2-Aug-2011 ED proposed for
publication includes a new chapter on Traversal. Additionally, the ED
still includes a stub section for mutation events which is listed as a
separate deliverable in the group's charter (Asynchronous DOM Mutation
Notification (ADMN)).

Before we publish a new WD of Anne's DOM spec, I would like comments on
how the DOM specs should be organized. In particular: a) whether you
prefer the status quo (currently that is DOM Core plus D3E) or if you
want various additional features of DOM e.g. Traversal, Mutation Events,
etc. to be specified in separate specs; and b) why. Additionally, if you
prefer features be spec'ed separately, please indicate your willingness
and availability to contribute as an editor vis-à-vis the editor
requirements above.


Firstly, I find the description of the current DOM Core specification as 
a kitchen-sink approach rather exaggerated. On Letter paper, it 
currently covers between 40 and 50 pages, of which


* 2 pages on exceptions
* 5 pages on events
* 24 pages on nodes (the core of DOM Core, if you will)
* 6 pages on traversal
* 5 pages on various collection and list interfaces needed by the above.

As you can see, DOM Core is still primarily about nodes, and the 
enlargement caused by importing events and traversal is rather limited.


Secondly, these are all technologies that still lacked a specification 
in the algorithmic style we have come to expect from modern 
specifications. One could publish some of these chapters separately, and 
make them seem somewhat more worthy of splitting, by doubling the 
boilerplate and hard-to-follow cross-specification references. Indeed, 
separate DOM Core and DOM Events specifications would be mutually 
dependent, and thus one would not be able to progress faster along the 
Recommendation track than the other.


Thirdly, these old-style specifications, by virtue of being split in 
chunks that only described one or two interfaces each (except for 
Traversal-Range, which combines two rather unrelated specifications into 
one document), tended to leave interactions between their technologies 
under-defined—perhaps each set of editors hoping the others would do 
that, and not considering themselves responsible for what are, indeed, 
some of the more difficult to author parts of the specification. This 
could be solved by either having both specifications be edited by the 
same people—which would introduce the overhead of having to decide, for 
every edit to a specification, which document the WG would like that 
edit to happen to—, or edited by different people who have to work 
together closely to ensure all feedback is addressed in one document or 
the other—this, too, causes obvious overhead, and a higher likelihood 
that feedback gets lost.


Fourthly, whatever the charter says about ADMN, I will strongly object 
to the publication of any document trying to specify any kind of DOM 
Mutation handlers outside of the specification that defines DOM 
mutations, which, I assume, will remain DOM Core for the foreseeable 
future. Not because I like them so much that I want control over them, 
but because not having them specified along with the actual mutations is 
very likely to cause the behaviour in edge cases to be under-defined (as we 
have seen before), or, at best, will need significant cooperation from 
DOM Core in order to define this clearly, and even in this case, I 
expect the set-up to be rather brittle.


Furthermore, if anyone wishes to step 

Re: CfC: publish LCWD of Server-sent Events spec; deadline August 17

2011-08-11 Thread Bryan Sullivan
Hi Art,

+1 for publication of the LCWD.

Bryan

On 8/10/11 7:24 AM, Arthur Barstow art.bars...@nokia.com wrote:

 Given Hixie's recent set of bug fixes, the Server-sent Events spec now
 has zero bugs. As such, it appears this spec is ready to proceed on the
 Recommendation track and this is a Call for Consensus to publish a new
 LCWD of this spec using the following ED as the basis:
 
 http://dev.w3.org/html5/eventsource/
 
 This CfC satisfies the group's requirement to record the group's
 decision to request advancement for this LCWD.
 
 Note the Process Document states the following regarding the
 significance/meaning of a LCWD:
 
 [[
 http://www.w3.org/2005/10/Process-20051014/tr.html#last-call
 
 Purpose: A Working Group's Last Call announcement is a signal that:
 
 * the Working Group believes that it has satisfied its relevant
 technical requirements (e.g., of the charter or requirements document)
 in the Working Draft;
 
 * the Working Group believes that it has satisfied significant
 dependencies with other groups;
 
 * other groups SHOULD review the document to confirm that these
 dependencies have been satisfied. In general, a Last Call announcement
 is also a signal that the Working Group is planning to advance the
 technical report to later maturity levels.
 ]]
 
 Positive response to this CfC is preferred and encouraged and silence
 will be assumed to mean agreement with the proposal. The deadline for
 comments is August 17. Please send all comments to:
 
 public-webapps@w3.org
 
 -ArtB
 
 





Re: RfC: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]

2011-08-11 Thread Aryeh Gregor
On Thu, Aug 11, 2011 at 6:28 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Before we publish a new WD of Anne's DOM spec, I would like comments on how
 the DOM specs should be organized. In particular: a) whether you prefer the
 status quo (currently that is DOM Core plus D3E) or if you want various
 additional features of DOM e.g. Traversal, Mutation Events, etc. to be
 specified in separate specs; and b) why. Additionally, if you prefer
 features be spec'ed separately, please indicate your willingness and
 availability to contribute as an editor vis-à-vis the editor requirements
 above.

While I think HTML/Web Applications 1.0 might be overboard when it
comes to spec length, I strongly feel that we should not be splitting
things up into lots of little specs of a few pages each.  DOM Core as
it stands is a reasonable length and covers a pretty logical grouping
of material: everything related to the DOM itself without dependence
on the host language.  I think it would be logical to add some more
things to it, even -- Anne and Ms2ger and I have discussed merging
Ms2ger's/my DOM Range spec into DOM Core (Range only, with the
HTML-specific Selection part removed).

We don't have to feel bound by the way things were divided up before.
Historically, we've had lots of little specs in some working groups
partly because we had lots of people putting in small amounts of time.
 These days we have more editors capable of handling larger specs, so
it's logical to merge things that were once separate.  As long as
there are no substantive issues people have with the contents of the
spec, I don't think it's productive at all to tell willing and capable
editors that they can't edit something or that they have to write it
in a more complicated and awkward fashion because some people have an
aesthetic preference for smaller specs or because that's the way we
used to do it.

It's true that procedurally, the more we add to a spec the harder it
will be to get it to REC.  I have not made any secret of the fact that
I view this part of the Process as a harmful anachronism at best, but
in any event, it shouldn't be prohibitive.  Given that we have to make
REC snapshots, the way it's realistically going to have to work is
we'll split off a version (say DOM 4 Core) and start stabilizing it,
while continuing new work in a new ED (say DOM 5 Core).  We can drop
features that aren't stable enough from the old draft when necessary
-- we don't have to drop them preemptively.  That's the whole idea of
at-risk features.

Also, a lot of the features we're talking about are actually very
stable.  I've written very extensive test cases for DOM Range, for
instance, and I can assure you that the large majority of requirements
in the Range portion (as opposed to Selection) have at least two
independent interoperable implementations, and often four.  I don't
think that merging Range in would have to significantly slow progress
on the REC track.  I imagine Traversal is also very stable.  Things
like a DOM mutation events replacement would obviously not be suitable
for a draft we want to get to REC anytime soon, but again, it can be
put in the next DOM Core instead of a separate small spec.

I also definitely think that DOM mutation events have to be in DOM
Core.  Things like Range and Traversal can reasonably be defined on
top of Core as separate specs, since Core has no real dependency on
them.  Mutation events, on the other hand, are intimately tied to some
of the basic features of DOM Core and it isn't reasonable to separate
them.



Re: DOM Mutation Events Replacement: When to deliver mutations

2011-08-11 Thread Rafael Weinstein
Thanks Olli. I think this is now a fairly complete summary of the
issues identified thus far.

It'd be great to get some additional views -- in particular from folks
representing UAs that haven't yet registered any observations or
opinions.

Note: I think what Olli has listed is fair, but I'm concerned that
because of terminology ("consistent" vs. "inconsistent" semantics),
others may be confused. I'm going to clarify a bit. I believe my
comments should be uncontroversial. Olli (or anyone else), please
correct me if this isn't so.

On Thu, Aug 11, 2011 at 2:02 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 08/11/2011 03:44 AM, Rafael Weinstein wrote:

 Although everyone seems to agree that mutations should be delivered
 after the DOM operations which generated them complete, the question
 remains:

   When, exactly, should mutations be delivered?

 The four options I'm aware of are:

 1) Immediately - i.e. while the operation is underway. [Note: This is
 how current DOM Mutation events work].

 2) Upon completion of the outer-most DOM operation. i.e. Immediately
 before the lowest-on-the-stack DOM operation returns, but after it
 has done all of its work.

 3) At the end of the current Task. i.e. immediately before the UA is
 about to fetch a new Task to run.

 4) Scheduled as a future Task. i.e. fully async.

 ---

 Discussion:

 Options 1 & 4 don't seem to have any proponents that I know of, so
 briefly:

 Option 1, Immediately:

 Pro:
 -It's conceptually the easiest thing to understand. The following *always*
 hold:
   -For calling code: When any DOM operation I make completes, all
 observers will have run.
   -For notified code: If I'm being called, the operation which caused
 this is below me on the stack.

 Con:
 -Because mutations must be delivered for some DOM operations before
 the operation is complete, UAs must tolerate all ways in which script
 may invalidate their assumptions before they do further work.


 Option 4, Scheduled as a future Task:

 Pro:
 -Conceptually easy to understand
 -Easy to implement.

 Con:
 -It's too late. Most use cases for mutation observation require that
 observers run before a paint occurs. E.g. a widget library which
 watches for special attributes. Script may create a <div
 class="FooButton"> and an observer will react to this by decorating
 the div as a FooButton. It is unacceptable (creates visual
 artifacts/flickering) to have the div be painted before the widget
 library has decorated it as a FooButton.

 Both of these options appear to be non-starters. Option 1 has been
 shown by experience to be an unreasonable implementation burden for
 UAs. Option 4 clearly doesn't properly handle important use cases.

 ---

 Options 2 & 3 have proponents. Since I'm one of them (a proponent),
 I'll just summarize the main *pro* arguments for each and invite those
 who wish (including myself), to weigh in with further support or
 criticism in follow-on emails.


 Option 2: Upon completion of the outer-most DOM operation.

 Pro:
 -It's conceptually close to fully synchronous. For simple uses
 (specifically, setting aside the case of making DOM operations within
 a mutation callback), it has the advantages of Option 1, without its
 disadvantages. Because of this, it's similar to the behavior of
 current Mutation Events.

 Pro:
 Semantics are consistent: delivery happens right before the
 outermost DOM operation returns.

This statement is true. When I described Option 2 (perhaps too
harshly) as having "inconsistent semantics", I was referring only to
the expectations of Callers and Observers. To be totally clear:

Parties:

Caller = any code which performs a DOM operation which triggers a mutation.
Observer = any code to whom the mutation is delivered.

Expectations for synchrony:

Caller: When any DOM operation I make completes, all observers will
have been notified.
Observer: If I'm being notified, the Caller which triggered the
mutation is below me on the stack.

Parties:     Caller         Observer
Option 1:    Always         Always
Option 2:    Sometimes(a)   Sometimes(a)
Option 3:    Never          Never
Option 4:    Never          Never

(a) True when Caller is run outside of a mutation observer callback.
False when Caller is run inside a mutation observer callback.


 Easier transition from mutation events to the new API.

 Not bound to tasks. Side effects, like problems related
 to spinning event loop are per mutation callback, not
 per whole task.




 Option 3: At the end of the current Task.

 Pro:
 -No code is at risk for having its assumptions invalidated while it is
 trying to do work. All participants (main application script,
 libraries which are implemented using DOM mutation observation) are
 allowed to complete whatever work (DOM operations) they wish before
 another participant starts doing work.



 Con:
 Since the approach is bound to tasks, it is not clear what should happen
 if event loop spins while handling the task. What 

Re: DOM Mutation Events Replacement: When to deliver mutations

2011-08-11 Thread Olli Pettay

On 08/11/2011 06:13 PM, Rafael Weinstein wrote:

Con:
Since the approach is bound to tasks, it is not clear what should happen
if the event loop spins while handling the task. What if some other task
modifies the DOM [1]; when should the mutation callbacks fire?
Because of this issue, tasks which may spin the event loop should not
also modify the DOM, since that may cause unexpected results.


I think the *pro* side of this you listed is more fair. Both Options 2
& 3 must answer this question. It's true that because Option 3 is
later, it sort of has this issue more.

And it has a lot more. Since, for example, when handling an event, all
the listeners for it are called in the same task, and if one event
listener modifies the DOM and another spins the event loop, it is hard to
see what is causing the somewhat unexpected behavior.




However, what should happen has been defined. In both cases, if
there are any mutations which are queued for delivery when an inner
event loop spins up, they are *not* delivered inside the inner event
loop. In both Options, they are always delivered in the loop which
queued them.

But what happens when the event loop spins within a task, and some
inner task causes new mutations?
We want to notify about mutations in the order they happened, right?
So if there are pending mutations to notify, the inner task must just
queue notifications to the queue of the outermost task.
This could effectively disable all the mutation callbacks, for example 
while a modal dialog (showModalDialog) is open.



Option 2 has a similar problem, but *only* while handling mutation 
callbacks, not during the whole task.




-Olli







Callback handling is moved far away from the actual mutation.


Pro:
Can batch more, since the callbacks are called later than in
option 2.


-Olli


[1] showModalDialog("javascript:opener.document.body.textContent = '';", "", "");









[Bug 13761] New: Now that event data can be discarded (if there is no newline before eof), the last id field value received should be stored in a buffer and set as the EventSource's lastEventId only

2011-08-11 Thread bugzilla
http://www.w3.org/Bugs/Public/show_bug.cgi?id=13761

   Summary: Now that event data can be discarded (if there is no
newline before eof), the last id field value received
should be stored in a buffer and set as the
EventSource's lastEventId only when the event is
actually dispatched. That way the id associated with
th
   Product: WebAppsWG
   Version: unspecified
  Platform: Other
   URL: http://www.whatwg.org/specs/web-apps/current-work/#top
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P3
 Component: Server-Sent Events (editor: Ian Hickson)
AssignedTo: i...@hixie.ch
ReportedBy: contribu...@whatwg.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


Specification: http://dev.w3.org/html5/eventsource/
Multipage: http://www.whatwg.org/C#top
Complete: http://www.whatwg.org/c#top

Comment:
Now that event data can be discarded (if there is no newline before eof), the
last id field value received should be stored in a buffer and set as the
EventSource's lastEventId only when the event is actually dispatched. That way
the id associated with the discarded event data can also be discarded.
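The buffering the bug asks for can be sketched against a simplified event-stream parser. This is not the spec's parsing algorithm (a real parser also buffers a partial trailing line rather than discarding it, and handles more field types); it only illustrates committing the buffered id at dispatch time.

```javascript
// Simplified SSE parser: the "id:" value is held in idBuffer and
// committed to lastEventId only when an event actually dispatches
// (at a blank line), so an id attached to event data that is
// discarded (no newline before EOF) is also discarded.
function parseSSE(text) {
  let lastEventId = '';
  let idBuffer = '';
  let data = [];
  const events = [];
  const lines = text.split('\n');
  // Simplification: anything after the last newline is discarded.
  const complete = text.endsWith('\n');
  for (const line of complete ? lines : lines.slice(0, -1)) {
    if (line.startsWith('id:')) idBuffer = line.slice(3).trim();
    else if (line.startsWith('data:')) data.push(line.slice(5).trim());
    else if (line === '' && data.length) { // dispatch point
      lastEventId = idBuffer;              // commit the id only here
      events.push({ id: lastEventId, data: data.join('\n') });
      data = [];
    }
  }
  return { events, lastEventId };
}

// Stream cut off before a newline: the second event is discarded,
// and with it the "id: 2" that was only buffered, never committed.
const r = parseSSE('id: 1\ndata: hello\n\nid: 2\ndata: tru');
console.log(r.lastEventId); // '1'
```

Without the buffer, `lastEventId` would already be '2' when the connection drops, and reconnection would skip the event the client never received.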

Posted from: 89.233.225.89
User agent: Mozilla/5.0 (Linux; U; Android 2.3.3; en-us; SonyEricssonMT15i
Build/3.0.1.A.0.145) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile
Safari/533.1

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: DOM Mutation Events Replacement: When to deliver mutations

2011-08-11 Thread Rafael Weinstein
On Thu, Aug 11, 2011 at 9:12 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 08/11/2011 06:13 PM, Rafael Weinstein wrote:

 Con:
 Since the approach is bound to tasks, it is not clear what should happen
 if the event loop spins while handling the task. What if some other task
 modifies the DOM [1]; when should the mutation callbacks fire?
 Because of this issue, tasks which may spin the event loop should not
 also modify the DOM, since that may cause unexpected results.

 I think the *pro* side of this you listed is more fair. Both Options 2
 & 3 must answer this question. It's true that because Option 3 is
 later, it sort of has this issue more.

 And it has a lot more. Since for example when handling an event, all
 the listeners for it are called in the same task and if one event
 listener modifies DOM and some other spins event loop, it is hard to
 see what is causing the somewhat unexpected behavior.



 However, what should happen has been defined. In both cases, if
 there are any mutations which are queued for delivery when an inner
 event loop spins up, they are *not* delivered inside the inner event
 loop. In both Options, they are always delivered in the loop which
 queued them.

 But what happens when event loop spins within a task, and some
 inner task causes new mutations?
 We want to notify about mutations in the order they have happened, right?

In general, yes. But I believe the idea is that spinning an inner
event loop is an exception. In that case delivering mutations in the
order they were generated will be broken. To be perfectly precise:
Mutations will be delivered in the order they were generated *for and
within any given event loop*.
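That rule can be modeled with a toy queue-per-event-loop structure (purely illustrative, not a proposed API):

```javascript
// Toy model of "mutations are delivered in the order they were generated
// for and within any given event loop": each nested event loop gets its
// own queue, and an inner loop (e.g. showModalDialog) delivers its queue
// before the outer loop ever gets to deliver its earlier mutations.
class MutationQueues {
  constructor() { this.stack = [[]]; }              // one queue per event loop
  record(m) { this.stack[this.stack.length - 1].push(m); }
  enterInnerLoop() { this.stack.push([]); }         // e.g. showModalDialog spins one up
  deliver() { return this.stack.pop(); }            // deliver the innermost queue
}

const q = new MutationQueues();
q.record('A');                 // mutation made before showModalDialog
q.enterInnerLoop();
q.record('B');                 // mutation made inside the dialog's loop
const inner = q.deliver();     // ['B'] -- delivered when the inner loop exits
const outer = q.deliver();     // ['A'] -- delivered later, by the outer loop
```

This is exactly the confusing case described below: B is observed before A even though A happened first.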

There's no question this is unfortunate. The case in which the bad
thing happens is you:

-Made some modifications to the main document
-Used showModalDialog
-Modified the opener document from the event loop of showModalDialog
-Got confused because mutations from within the showModalDialog were
delivered before the mutations made before calling it

I suppose this comes down to judgement. Mine is that it's acceptable
for us to not attempt to improve the outcome in this case.

 So if there are pending mutations to notify, the inner task must just
 queue notifications to the queue of the outermost task.
 This could effectively disable all the mutation callbacks for example when a
 modal dialog (showModalDialog) is open.


 Option 2 has similar problem, but *only* while handling mutation callbacks,
 not during the whole task.



 -Olli





 Callback handling is moved far away from the actual mutation.


 Pro:
 Can batch more, since the callbacks are called later than in
 option 2.


 -Olli


 [1] showModalDialog("javascript:opener.document.body.textContent = '';",
 "",
 "");








Re: HTTP, websockets, and redirects

2011-08-11 Thread Adam Barth
Generally speaking, browsers have been moving away from triggering
authentication dialogs for subresource loads because they are more
often used for phishing than for legitimate purposes.  A WebSocket
connection is much like a subresource load.

Adam
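The handshake rule at issue (quoted from the API spec later in this thread) reduces to a single status check. The sketch below is illustrative only, not the spec's actual establish-a-WebSocket-connection algorithm:

```javascript
// Illustrative only: the API spec's rule that any handshake status other
// than 101 (redirects, auth challenges, and errors alike) fails the
// WebSocket connection, leaving redirect/auth handling to the application.
function validateHandshakeStatus(status) {
  return status === 101 ? 'open' : 'fail the websocket connection';
}

validateHandshakeStatus(101); // 'open'
validateHandshakeStatus(302); // redirect: not followed, connection fails
validateHandshakeStatus(401); // auth challenge: no dialog, connection fails
```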


On Wed, Aug 10, 2011 at 9:36 PM, Brian Raymor
brian.ray...@microsoft.com wrote:

 What is the rationale for also failing the websocket connection when a 
 response for authentication is received such as:

 401 Unauthorized
 407 Proxy Authentication Required


 On 8/10/11 Art Barstow wrote:

 Hi All,

 Bugzilla now reports only 2 bugs for the Web Socket API [WSAPI] and I would
 characterize them both as editorial [Bugs]. As such, the redirect issue 
 Thomas
 originally reported in this thread (see [Head]) appears to be the only
 substantive issue blocking WSAPI Last Call.

 If anyone wants to continue discussing this redirect issue for WSAPI, I
 recommend using e-mail (additionally, it may be useful to also create a new
 bug in Bugzilla).

 As I understand it, the HyBi WG plans to freeze the Web Socket Protocol spec
 real soon now (~August 19?).

 -Art Barstow

 [WSAPI] http://dev.w3.org/html5/websockets/
 [Head]
 http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/0474.html
 [Bugs]
 http://www.w3.org/Bugs/Public/buglist.cgi?query_format=advancedshort_de
 sc_type=allwordssubstrshort_desc=product=WebAppsWGcomponent=We
 bSocket+API+%28editor%3A+Ian+Hickson%29longdesc_type=allwordssubstr
 longdesc=bug_file_loc_type=allwordssubstrbug_file_loc=status_whiteboar
 d_type=allwordssubstrstatus_whiteboard=keywords_type=allwordskeywor
 ds=bug_status=NEWbug_status=ASSIGNEDbug_status=REOPENEDemailt
 ype1=substringemail1=emailtype2=substringemail2=bug_id_type=anyex
 actbug_id=votes=chfieldfrom=chfieldto=Nowchfieldvalue=cmdtype=d
 oitorder=Reuse+same+sort+as+last+timefield0-0-0=nooptype0-0-
 0=noopvalue0-0-0=


 On 7/27/11 8:12 PM, ext Adam Barth wrote:
  On Mon, Jul 25, 2011 at 3:52 PM, Gabriel Montenegro
  gabriel.montene...@microsoft.com  wrote:
  Thanks Adam,
 
   By "discussed on some mailing list", do you mean a *W3C* mailing list?
  A quick search turned up this message:
 
  But I'm totally fine with punting on this for the future and just
  disallowing redirects on an API level for now.
 
   http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-March/031079.html
 
  I started that thread at Greg Wilkins' recommendation:
 
  This is essentially an API issue for the browser websocket object.
 
  http://www.ietf.org/mail-archive/web/hybi/current/msg06954.html
 
  Also, allowing the users to handle these explicitly implies that the API 
  does
 not mandate dropping the connection. Currently, the API does not have this
 flexibility, nor does it allow other uses of non-101 codes, like for
 authentication. I understand the potential risks with redirects in browsers, 
 and I
 thought at one moment we were going to augment the security considerations
 with your help for additional guidance. If websec has already worked on 
 similar
 language in some draft that we could reuse that would be great, or, 
 similarly, if
 we could work with you on that text.
  We can always add support for explicitly following redirects in the
  future.  If we were to automatically follow them today, we'd never be
  able to remove that behavior by default.
 
  Adam
 
 
  -Original Message-
  From: Adam Barth [mailto:w...@adambarth.com]
  Sent: Sunday, July 24, 2011 13:35
  To: Thomas Roessler
  Cc: public-ietf-...@w3.org; WebApps WG; Salvatore Loreto; Gabriel
  Montenegro; Art Barstow; François Daoust; Eric Rescorla; Harald
  Alvestrand; Tobias Gondrom
  Subject: Re: HTTP, websockets, and redirects
 
  This issue was discussed on some mailing list a while back (I forget
  which).  The consensus seemed to be that redirects are the source of
  a large number of security vulnerabilities in HTTP and we'd like
  users of the WebSocket API to handle them explicitly.
 
  I'm not sure I understand your question regarding WebRTC, but the
  general answer to that class of questions is that WebRTC relies, in
  large part, on ICE to be secure against cross-protocol and voicehammer
 attacks.
 
  Adam
 
 
  On Sun, Jul 24, 2011 at 6:52 AM, Thomas Roesslert...@w3.org  wrote:
  The hybi WG is concerned about the following clause in the
  websocket API
  spec:
  When the user agent validates the server's response during the
  establish a
  WebSocket connection algorithm, if the status code received from
  the server is not 101 (e.g. it is a redirect), the user agent must fail 
  the
 websocket connection.
  http://dev.w3.org/html5/websockets/
 
  Discussion with the WG chairs:
 
  - this looks like a conservative attempt to lock down redirects in
  the face of ill-understood cross-protocol interactions
  - critical path for addressing includes analysis of interaction
  with XHR, XHR2, CORS
  - following redirects in HTTP is optional for the client, therefore
  in principle a decision that a 

[File API] Latest Editor's Draft | Call for Review

2011-08-11 Thread Arun Ranganathan

Greetings WebApps WG,

The latest editor's draft of the File API can be found here:

http://dev.w3.org/2006/webapi/FileAPI/

Changes are based on feedback on this listserv, as well as the URI 
listserv (e.g. [1][2][3]).


Chrome team: some of the feedback is to more rigorously define the 
opaqueString production in Blob URIs.  Currently, you generate Blob URIs 
that look like this:


blob:http://localhost/c745ef73-ece9-46da-8f66-ebes574789b1 [4]

I've included language that allows use of this kind, but some review 
about what is NOT allowed would be appreciated.


-- A*

[1] http://lists.w3.org/Archives/Public/uri/2011May/0004.html
[2] http://lists.w3.org/Archives/Public/uri/2011May/0002.html
[3] http://lists.w3.org/Archives/Public/uri/2011May/0006.html
[4] 
http://www.html5rocks.com/en/tutorials/workers/basics/#toc-inlineworkers-bloburis




[Bug 13634] Document autoimplementation.html and link to it more prominently

2011-08-11 Thread bugzilla
http://www.w3.org/Bugs/Public/show_bug.cgi?id=13634

Aryeh Gregor a...@aryeh.name changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED

--- Comment #1 from Aryeh Gregor a...@aryeh.name 2011-08-11 20:08:47 UTC ---
http://aryeh.name/gitweb.cgi?p=editing;a=commitdiff;h=296e7245




Re: [whatwg] File API Streaming Blobs

2011-08-11 Thread Bjartur Thorlacius

On Mon, 8 Aug 2011 20:31, Simon Heckmann wrote:

Well, not directly an answer to your question, but the use case I had in mind 
is the following:

A large encrypted video file (e.g. an HD movie of 2 GB) is stored using the File 
API; I then want to decrypt this file and start playing it with only a minor 
delay. I do not want to decrypt the entire file before it can be viewed. As 
long as such a use case gets covered, I am fine with everything.


Does the decryption have to happen above the File API?



Re: [whatwg] File API Streaming Blobs

2011-08-11 Thread Simon Heckmann
See below!

Am 11.08.2011 um 23:24 schrieb Bjartur Thorlacius svartma...@gmail.com:

 On Mon, 8 Aug 2011 20:31, Simon Heckmann wrote:
 Well, not directly an answer to your question, but the use case I had in 
 mind is the following:
 
 A large encrypted video (e.g. HD movie with 2GB) file is stored using the 
 File API, I then want to decrypt this file and start playing with only a 
 minor delay. I do not want to decrypt the entire file before it can be 
  viewed. As long as such a use case gets covered I am fine with everything.
 
  Does the decryption have to happen above the File API?

Well, it could also be part of the File API, but it should not be below the 
File API.



Re: [whatwg] File API Streaming Blobs

2011-08-11 Thread Aaron Colwell
FYI I'm working on an experimental extension to Chromium to allow media data
to be streamed into a media element via JavaScript. Here is the draft
spec http://html5-mediasource-api.googlecode.com/svn/tags/0.2/draft-spec/mediasource-draft-spec.html
and the pending WebKit patch https://bugs.webkit.org/show_bug.cgi?id=64731 related
to this work. I have simple WebM VOD playback w/ seeking working where all
media data is fetched via XHR.

Aaron

On Mon, Aug 8, 2011 at 7:16 PM, Charles Pritchard ch...@jumis.com wrote:

 On 8/8/2011 2:51 PM, Glenn Maynard wrote:

  On Mon, Aug 8, 2011 at 4:31 PM, Simon Heckmann si...@simonheckmann.de wrote:

Well, not directly an answer to your question, but the use case I
had in mind is the following:

A large encrypted video (e.g. HD movie with 2GB) file is stored
using the File API, I then want to decrypt this file and start
playing with only a minor delay. I do not want to decrypt the
entire file before it can be viewed. As long as such as use case
gets covered I am fine with everything.


 Assuming you're thinking of DRM, are there any related use cases other
 than crypto?  Encryption for DRM, at least, isn't a very compelling use
 case; client-side Javascript encryption is a very weak level of protection
 (putting aside, for now, the question of whether the web can or should be
 attempting to handle DRM in the first place).  If it's not DRM you're
 thinking of, can you clarify?


 Jonas Sicking brought up a few cases for XHR-based streaming of
 arraybuffers: progressive rendering of Word docs and PDFs.
 WebP and WebM have had interesting packaging hacks. Packaging itself,
 whether DRM or not, is compelling.
 PDF supports embedded data, a wide range of formats. GPAC provides many
 related tools (MP4-based, I believe):
 http://gpac.wp.institut-telecom.fr/

 The audio and video tags drop frames
 It seems to me that if a listener is not registered to the stream, data
 would just be dropped.

 As an alternative, the author could register a fixed length circular
 buffer.

 For instance, I could create a 1 megabyte ArrayView, run
 URL.createBlobStream(ArrayView) and use .append(data). That kind of
 structure may support multicast (multiple audio/video elements) and
 improved XHR2 semantics. The circular buffer, itself, is easy to
 prototype: subarray works well with typed arrays.
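A minimal sketch of the fixed-length circular buffer described above, using Uint8Array and subarray. The class and method names are illustrative, not the proposed URL.createBlobStream interface:

```javascript
// Fixed-length circular (ring) buffer over a typed array: append()
// overwrites the oldest bytes once full; read() returns the stored bytes
// in arrival order, using subarray views where no wrap-around occurred.
class RingBuffer {
  constructor(size) {
    this.buf = new Uint8Array(size);
    this.head = 0;       // next write position
    this.length = 0;     // bytes currently stored (<= size)
  }
  append(data) {         // data: Uint8Array
    for (const byte of data) {
      this.buf[this.head] = byte;
      this.head = (this.head + 1) % this.buf.length;
      this.length = Math.min(this.length + 1, this.buf.length);
    }
  }
  read() {
    const start = (this.head - this.length + this.buf.length) % this.buf.length;
    if (start + this.length <= this.buf.length) {
      return this.buf.subarray(start, start + this.length);  // contiguous view
    }
    const out = new Uint8Array(this.length);                 // wrapped: stitch two views
    const tail = this.buf.subarray(start);
    out.set(tail);
    out.set(this.buf.subarray(0, this.head), tail.length);
    return out;
  }
}
```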

 Otherwise relevant, is the work on raw audio data
 that Firefox and Chromium have released as experimental extensions.
 It does work on a buffer-based system.

 -Charles










Re: [whatwg] File API Streaming Blobs

2011-08-11 Thread Aaron Colwell
Comments inline...

On Wed, Aug 10, 2011 at 2:05 PM, Charles Pritchard ch...@jumis.com wrote:

  On 8/9/2011 9:38 AM, Aaron Colwell wrote:

 FYI I'm working on an experimental extension to Chromium to allow media
 data to be streamed into a media element via JavaScript. Here is the draft
  spec http://html5-mediasource-api.googlecode.com/svn/tags/0.2/draft-spec/mediasource-draft-spec.html and
 pending WebKit patch https://bugs.webkit.org/show_bug.cgi?id=64731 related
 to this work. I have simple WebM VOD playback w/ seeking working where all
 media data is fetched via XHR.


 It's nice to see this patch.


Thanks. Hopefully I can get it landed soon so people can start playing with
it in Chrome Dev Channel builds.


 I'm hoping to see streamed array buffers in XHR, though fetching in chunks
 can work,
 given the relatively small overhead of HTTP headers vs Video content.


Eventually I'd like to see streamed array buffers in XHR. For now I'm just
using range requests and allowing the JavaScript code determine how large
the ranges should be to control overhead.


 The WHAWG specs have a Media Stream example which uses URL createObjectURL:
 navigator.getUserMedia('video user', gotStream, noStream);
 function gotStream(stream) {
 video.src = URL.createObjectURL(stream);

 http://www.whatwg.org/specs/web-apps/current-work/complete/video-conferencing-and-peer-to-peer-communication.html#dom-mediastream

 The WHATWG spec seems closer to (mediaElement.createStream()).append()
 semantics.


There was a previous discussion about this on WHATWG. There was concern
about providing compressed data to a MediaStream object since they are
basically format agnostic right now.


 Both WHATWG and the draft spec agree on src=uri;


The benefit of src=uri is that it allows you to leverage all the existing
state transition and behavior defined in the spec.


  File API has toURL semantics on objects, similar to the draft spec, for
  getting filesystem:// uris.

 My understanding: The draft spec is simpler, intended only to be used by
 HTMLMediaElement
 and only by one element at a time, without introducing a new object. In the
 long
 run, it may make sense to create a media stream object, consistent with the
 WHATWG direction.


The draft spec was intended to be as simple as possible. Attaching this
functionality to HTMLMediaElement instead of
creating a MediaStream came out of discussions on whatwg
here http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-July/032283.html
and here http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-July/032384.html.
I'm definitely open to revisiting this, but I got
the feeling that people wanted to see a more concrete implementation first.
I also like having this functionality part of
HTMLMediaElement because then I only have to deal with the HTMLMediaElement
during seeking instead of having to coordinate behavior
between the MediaStream & the HTMLMediaElement.



 On another note, Mozilla Labs has some experiments on recording video from
 canvas (as well as general webcam, etc):
 https://mozillalabs.com/rainbow/
 https://github.com/mozilla/rainbow
 https://github.com/mozilla/rainbow/blob/master/content/example_canvas.html


I'll take a look at this.

Aaron


Re: Mouse Lock

2011-08-11 Thread Klaas Heidstra
You actually can get mouse delta info in windows using raw WM_INPUT data
see: http://msdn.microsoft.com/en-us/library/ee418864(VS.85).aspx. This is
also the only way to take advantage of >400dpi mice, which is useful for
FPS games.

As for mouse locking isn't that a completely distinct feature from getting
mouse delta information? For example in full-screen mode (and only using one
screen) there is no need for mouse lock when you always can get mouse delta
(because the mouse can't leave the screen).

The only problem that remains is when in multi-screen and/or
non-full-screen-mode, the mouse cursor can go outside the game viewport.
This is something that could be solved by mouse lock. But wouldn't it be
better (as previously suggested in this thread) to implement walling of the
mouse cursor by limiting the mouse cursor to the bounds of a targeted
element. This is both useful for FPS-type games and RTS's. FPS's can hide
the mouse cursor using CSS and don't have to worry about mouse events being
fired outside the element or window. RTS-games just could use the available
mouse cursor without needing to hide it (they wouldn't even need the mouse
delta that way).

It appears to me that mouse lock is just a workaround/hack that originated
from the assumption that you can't get mouse delta info in Windows. Isn't it
always better to first make an ideal design before looking at platform
limitations that possibly could undermine that design?

So to summarize all the above, design to separate features: 1. an event for
getting mouse delta 2. an API for walling the mouse to an element.

Klaas Heidstra

 A few comments:

 Is there a need to provide mouse-locking on a per-element basis? It seems
to
 me it would be enough for mouse-locking to be per-DOM-window (or
 per-DOM-document) and deliver events to the focused element. This
simplifies
 the model a little bit by not having to define new state for the
 mouse-locked element. Or is there a need for mouse-lock motion events to
 go to one element while keyboard input goes elsewhere?

 As was suggested earlier in this thread, I think given we're not
displaying
 the normal mouse cursor and in fact warping the system mouse cursor to the
 center of the screen in some implementations, we shouldn't deliver normal
 mouse-move events. Instead, while mouselock is active, we should deliver a
 new kind of mouse motion event, which carries the delta properties. If you
 do that, then hopefully you don't need a failure or success callback. Your
 app should just be able to handle both kinds of mouse motion events.

 I'm not really sure how touch events fit into this. Unlike mouse events,
 touch events always correspond to a position on the screen, so the delta
 information isn't as useful. (Or do some platforms detect touches outside
 the screen?) Maybe the only thing you need to do for touch events is to
 capture them to the focused element.

 In many of your use cases, it's OK to automatically release the mouse-lock
 on mouse-up. If you automatically release on mouse-up, the security issues
 are far less serious. So I think it would be a good idea to allow
 applications to accept that behavior via the API.

 A lot of this would be much simpler if we could somehow get mouse delta
 information from all platforms (Windows!) without having to warp the
cursor
 :-(. Has research definitively ruled out achieving that by any combination
 of hacks?

 Rob
 --
 If we claim to be without sin, we deceive ourselves and the truth is not
in
 us. If we confess our sins, he is faithful and just and will forgive us
our
 sins and purify us from all unrighteousness. If we claim we have not
sinned,
 we make him out to be a liar and his word is not in us. [1 John 1:8-10]


Proposal to allow Transferables to be used in initMessageEvent

2011-08-11 Thread Luke Zarko
I came across this while implementing support for the new Transferable[1]
interface for Chromium. initMessageEvent is defined[2] as:

  void initMessageEvent(in DOMString typeArg, in boolean canBubbleArg, in
boolean cancelableArg, in any dataArg, in DOMString originArg, in DOMString
lastEventIdArg, in WindowProxy? sourceArg, in sequence<MessagePort>
portsArg);

However, postMessage is usually defined to take a sequence<Transferable>
[3]:

  void postMessage(in any message, in optional sequence<Transferable>
transfer);

I suggest changing initMessageEvent to permit arbitrary Transferables:

  void initMessageEvent(in DOMString typeArg, in boolean canBubbleArg, in
boolean cancelableArg, in any dataArg, in DOMString originArg, in DOMString
lastEventIdArg, in WindowProxy? sourceArg, in sequence<Transferable>
transferablesArg);

Without this change, it is not possible for a JavaScript author to directly
construct a MessageEvent with a dataArg that contains Transferable objects
(other than MessagePorts).

This does not imply that the ports property of MessageEvent should change.
It should behave just like the ports array for MessageEvents generated by
postMessage: the ports array contains all MessagePorts sent in the transfer
list in the same relative order.
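A sketch of the proposed ports semantics: the event's ports array holds only the MessagePorts from the transfer list, in their original relative order. The classes below are stand-ins so the snippet runs outside a browser:

```javascript
// Stand-in classes (not the real browser interfaces) to illustrate how a
// ports array would be derived from a mixed transfer list.
class MessagePort {}
class FakeArrayBuffer {}   // stand-in for another Transferable type

function portsFromTransferList(transfer) {
  // Keep only MessagePorts, preserving relative order.
  return transfer.filter((t) => t instanceof MessagePort);
}

const p1 = new MessagePort();
const p2 = new MessagePort();
const ports = portsFromTransferList([p1, new FakeArrayBuffer(), p2]);
// ports is [p1, p2]: non-port Transferables are transferred but not listed
```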

Please let me know what you think!

  Luke

[1]
http://www.whatwg.org/specs/web-apps/current-work/complete/common-dom-interfaces.html#transferable-objects
[2]
http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#event-definitions-1
[3]
http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#message-ports


Re: Proposal to allow Transferables to be used in initMessageEvent

2011-08-11 Thread Ian Hickson
On Tue, 9 Aug 2011, Luke Zarko wrote:

 I came across this while implementing support for the new 
 Transferable[1] interface for Chromium. initMessageEvent is defined[2] 
 as:
 
   void initMessageEvent(in DOMString typeArg, in boolean canBubbleArg, 
 in boolean cancelableArg, in any dataArg, in DOMString originArg, in 
 DOMString lastEventIdArg, in WindowProxy? sourceArg, in 
 sequenceMessagePort portsArg);
 
 However, postMessage is usually defined to take a sequenceTransferable 
 [3]:
 
   void postMessage(in any message, in optional sequenceTransferable 
 transfer);
 
 I suggest changing initMessageEvent to permit arbitrary Transferables:

While it is possible for postMessage()'s second argument to take non-port 
Transferables (in particular ArrayBuffers), it's not possible for the 
generated event to contain those objects in the event.ports array, so 
there's no reason for the constructor to support that.


 Without this change, it is not possible for a JavaScript author to 
 directly construct a MessageEvent with a dataArg that contains 
 Transferable objects (other than MessagePorts).

Why not? The dataArg has type any.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [File API] Latest Editor's Draft | Call for Review

2011-08-11 Thread Jian Li
On Thu, Aug 11, 2011 at 12:43 PM, Arun Ranganathan a...@mozilla.com wrote:

 Greetings WebApps WG,

 The latest editor's draft of the File API can be found here:

  http://dev.w3.org/2006/webapi/FileAPI/

 Changes are based on feedback on this listserv, as well as the URI listserv
 (e.g. [1][2][3]).

 Chrome team: some of the feedback is to more rigorously define the
 opaqueString production in Blob URIs.  Currently, you generate Blob URIs
 that look like this:

  blob:http://localhost/c745ef73-ece9-46da-8f66-ebes574789b1 [4]


For chromium, we're going to escape those reserved characters that could
appear in the opaqueString.
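A sketch of one possible escaping strategy, assuming encodeURIComponent-style escaping is acceptable for an opaqueString (the draft's actual production rules are precisely what is under review here):

```javascript
// Hypothetical helper: build a blob: URL from an origin and an opaque
// string, escaping reserved characters. Illustrative only -- the real
// opaqueString production is defined by the File API draft.
function blobUrlFor(origin, opaqueString) {
  return 'blob:' + origin + '/' + encodeURIComponent(opaqueString);
}

blobUrlFor('http://localhost', 'c745ef73-ece9-46da-8f66-ebes574789b1');
// 'blob:http://localhost/c745ef73-ece9-46da-8f66-ebes574789b1'
blobUrlFor('http://localhost', 'a b?c');
// 'blob:http://localhost/a%20b%3Fc' -- reserved characters escaped
```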



 I've included language that allows use of this kind, but some review about
 what is NOT allowed would be appreciated.

 -- A*

 [1] 
 http://lists.w3.org/Archives/**Public/uri/2011May/0004.htmlhttp://lists.w3.org/Archives/Public/uri/2011May/0004.html
 [2] 
 http://lists.w3.org/Archives/**Public/uri/2011May/0002.htmlhttp://lists.w3.org/Archives/Public/uri/2011May/0002.html
 [3] 
 http://lists.w3.org/Archives/**Public/uri/2011May/0006.htmlhttp://lists.w3.org/Archives/Public/uri/2011May/0006.html
 [4] http://www.html5rocks.com/en/**tutorials/workers/basics/#toc-**
 inlineworkers-bloburishttp://www.html5rocks.com/en/tutorials/workers/basics/#toc-inlineworkers-bloburis




Re: [File API] Latest Editor's Draft | Call for Review

2011-08-11 Thread Jonas Sicking
For FileReader.abort(), we should only fire "abort" and "loadend"
events if there is a load currently in progress. If no load is in
progress then no events should be fired.

Basically the invariant we want to enforce is that for each
"loadstart" event there is one and exactly one "loadend" event, as well
as one of "error", "load" or "abort". That makes it easier for people
to build state machines which react to the various events.

One way to do this would be to merge step 1 and step 3 into:

1. If readyState = EMPTY or readyState = DONE, set result to null and
terminate the overall set of steps without doing anything else.
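The invariant can be captured by a small checker (illustrative only, not part of the File API):

```javascript
// Checks the event invariant described above: every "loadstart" is
// followed by exactly one of "error" | "load" | "abort", then exactly
// one "loadend". A terminal event with no read in progress (e.g. abort()
// on an idle reader) violates the invariant.
function checkEventInvariant(events) {
  let state = 'idle';                         // idle -> reading -> done -> idle
  for (const e of events) {
    if (e === 'loadstart') {
      if (state !== 'idle') return false;     // new read before previous loadend
      state = 'reading';
    } else if (e === 'error' || e === 'load' || e === 'abort') {
      if (state !== 'reading') return false;  // exactly one terminal event per read
      state = 'done';
    } else if (e === 'loadend') {
      if (state !== 'done') return false;
      state = 'idle';
    }
  }
  return state === 'idle';
}

checkEventInvariant(['loadstart', 'abort', 'loadend']);  // true
checkEventInvariant(['abort', 'loadend']);               // false: abort() on an
                                                         // idle reader fires nothing
```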

/ Jonas

On Thu, Aug 11, 2011 at 12:43 PM, Arun Ranganathan a...@mozilla.com wrote:
 Greetings WebApps WG,

 The latest editor's draft of the File API can be found here:

 http://dev.w3.org/2006/webapi/FileAPI/

 Changes are based on feedback on this listserv, as well as the URI listserv
 (e.g. [1][2][3]).

 Chrome team: some of the feedback is to more rigorously define the
 opaqueString production in Blob URIs.  Currently, you generate Blob URIs
 that look like this:

 blob:http://localhost/c745ef73-ece9-46da-8f66-ebes574789b1 [4]

 I've included language that allows use of this kind, but some review about
 what is NOT allowed would be appreciated.

 -- A*

 [1] http://lists.w3.org/Archives/Public/uri/2011May/0004.html
 [2] http://lists.w3.org/Archives/Public/uri/2011May/0002.html
 [3] http://lists.w3.org/Archives/Public/uri/2011May/0006.html
 [4]
 http://www.html5rocks.com/en/tutorials/workers/basics/#toc-inlineworkers-bloburis





Re: Mouse Lock

2011-08-11 Thread Vincent Scheib
Re Rob:
 Is there a need to provide mouse-locking on a per-element basis? It seems
to
 me it would be enough for mouse-locking to be per-DOM-window (or
 per-DOM-document) and deliver events to the focused element. This
simplifies
 the model a little bit by not having to define new state for the
 mouse-locked element. Or is there a need for mouse-lock motion events to
 go to one element while keyboard input goes elsewhere?

I may need to clarify the specification to state that there is only a single
state of mouse lock global to the user agent. You may be suggesting that the
MouseLockable interface be added to only the window and not to all elements?
An argument was made that multiple elements may attempt a mouseLock,
especially in pages composing / aggregating content. If so, it would be
undesirable for an unlockMouse() call on one element to disrupt a lock held
by another element. I will update the spec to explain that decision. If you
were suggesting something else, I didn't follow you.

 As was suggested earlier in this thread, I think given we're not
displaying
 the normal mouse cursor and in fact warping the system mouse cursor to the
 center of the screen in some implementations, we shouldn't deliver normal
 mouse-move events. Instead, while mouselock is active, we should deliver a
 new kind of mouse motion event, which carries the delta properties. If you
 do that, then hopefully you don't need a failure or success callback. Your
 app should just be able to handle both kinds of mouse motion events.

The argument is that mouse handling code can be simplified by always
handling the same MouseEvent structures. The six modifier key and button
state members and the event target are still desired. When not under lock the
movement members are still useful - and are tedious to recreate by adding last
position and edge case handling in mouseleave/enter.
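For context, a sketch of the bookkeeping authors need today to recreate movement deltas from positions (the helper names are hypothetical):

```javascript
// Derive movement deltas from successive mouse positions, resetting on
// mouseenter/mouseleave so the first event after re-entry does not
// produce a bogus jump -- the edge case mentioned above.
function makeDeltaTracker() {
  let last = null;                       // null until a position is seen
  return {
    reset() { last = null; },            // call on mouseenter/mouseleave
    move(x, y) {                         // call on mousemove; returns {dx, dy}
      const delta = last ? { dx: x - last.x, dy: y - last.y } : { dx: 0, dy: 0 };
      last = { x, y };
      return delta;
    },
  };
}

const t = makeDeltaTracker();
t.move(10, 10);   // {dx: 0, dy: 0} -- first sample, no previous position
t.move(15, 12);   // {dx: 5, dy: 2}
t.reset();        // pointer left and re-entered the element
t.move(100, 100); // {dx: 0, dy: 0} -- no spurious jump across the gap
```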

The success or failure events are desired to be once per call to lock, for
an appropriate application response. Triggering that response logic on every
mouse event, or detecting the edge of that signal, seems more complicated.
The callbacks are also low cost: they can be inline anonymous functions or
references; they don't take special event handler registration.

What am I missing in the value of having new event types?


 I'm not really sure how touch events fit into this. Unlike mouse events,
 touch events always correspond to a position on the screen, so the delta
 information isn't as useful. (Or do some platforms detect touches outside
 the screen?) Maybe the only thing you need to do for touch events is to
 capture them to the focused element.

The motivation for touch events is to make capturing to a specific element
easy, including any other window content out of an app's control (e.g.
if embedded via iframe) and/or any user agent UI that disables its
interaction when in mouse lock to offer a better app experience.


 In many of your use cases, it's OK to automatically release the mouse-lock
 on mouse-up. If you automatically release on mouse-up, the security issues
 are far less serious. So I think it would be a good idea to allow
 applications to accept that behavior via the API.

You mean you'd prefer the API have an option specified in lockMouse(...) to
cause the mouse to be automatically unlocked when a mouse button is let up
if the lock occurred during an event for that button being pressed? Under
current API draft an application specifies that with a call to unlockMouse
at a button up event.

I'm hesitant to add it, since it seems relevant only for the lower priority
use cases. The killer use case here is first person controls or full mouse
control applications. The mouse capture API satisfies most of the use cases
you're referring to, with the only limitation being loss of movement data
when the cursor hits a screen edge. I don't think we should complicate this
API for those niche use cases when the functionality is available to them in
this API without great effort.

 A lot of this would be much simpler if we could somehow get mouse delta
 information from all platforms (Windows!) without having to warp the
cursor
 :-(. Has research definitively ruled out achieving that by any combination
 of hacks?

 Rob

Deltas alone don't satisfy the key use cases. We must prevent errant clicks.

Re: Klaas Heidstra

 You actually can get mouse delta info in windows using raw WM_INPUT data
 see: http://msdn.microsoft.com/en-us/library/ee418864(VS.85).aspx. This is
  also the only way to take advantage of >400dpi mice, which is useful for
 FPS games.


Thanks for implementation tip!



 As for mouse locking isn't that a completely distinct feature from getting
 mouse delta information? For example in full-screen mode (and only using one
 screen) there is no need for mouse lock when you always can get mouse delta
 (because the mouse can't leave the screen).


It is the combination of mouse capture, hidden cursor, deltas being provided
with no limits of screen 

Re: Mouse Lock

2011-08-11 Thread Robert O'Callahan
On Fri, Aug 12, 2011 at 2:27 PM, Vincent Scheib sch...@google.com wrote:
 Re Rob:
 Is there a need to provide mouse-locking on a per-element basis? It seems
 to
 me it would be enough for mouse-locking to be per-DOM-window (or
 per-DOM-document) and deliver events to the focused element. This
 simplifies
 the model a little bit by not having to define new state for the
 mouse-locked element. Or is there a need for mouse-lock motion events to
 go to one element while keyboard input goes elsewhere?

 I may need to clarify the specification to state that there is only a single
 state of mouse lock global to the user agent. You may be suggesting that the
 MouseLockable interface be added to only the window and not to all elements?
 An argument was made that multiple elements may attempt a mouseLock,
 especially in pages composing / aggregating content. If so, it would be
 undesirable for an unlockMouse() call on one element to disrupt a lock held
 by another element. I will update the spec to explain that decision. If you
 were suggesting something else, I didn't follow you.

I'm asking whether we can reuse the existing notion of the currently
focused element and send mouse events to that element while the mouse
is locked.

 As was suggested earlier in this thread, I think given we're not
 displaying
 the normal mouse cursor and in fact warping the system mouse cursor to the
 center of the screen in some implementations, we shouldn't deliver normal
 mouse-move events. Instead, while mouselock is active, we should deliver a
 new kind of mouse motion event, which carries the delta properties. If you
 do that, then hopefully you don't need a failure or success callback. Your
 app should just be able to handle both kinds of mouse motion events.
 The argument is that mouse handling code can be simplified by always
 handling the same MouseEvent structures. The six modifier key and button
 state members and the event target are still desired. When not under lock
 the movement members are still useful - and are tedious to recreate by
 tracking the last position and handling edge cases in mouseleave/enter.
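
The "tedious to recreate" point can be illustrated with a minimal sketch (names are hypothetical, not from the draft spec) that derives per-event deltas from absolute positions, resetting on mouseleave the way an app without movement members would have to:

```javascript
// Sketch: reconstructing movement deltas from absolute mouse
// positions, as an app must do when events carry no delta members.
// mouseleave makes the last position unknown, so the first move
// after re-entry cannot produce a delta.
function makeDeltaTracker() {
  let last = null; // last known {x, y}, or null after mouseleave
  return {
    // Call on mousemove; returns {dx, dy}, or null when no delta
    // can be computed (first event, or first event after re-entry).
    move(x, y) {
      if (last === null) {
        last = { x, y };
        return null;
      }
      const delta = { dx: x - last.x, dy: y - last.y };
      last = { x, y };
      return delta;
    },
    // Call on mouseleave: the cursor position is now unknown.
    leave() { last = null; },
  };
}
```

This is exactly the bookkeeping that built-in movement members would make unnecessary.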
 The success or failure events are desired to occur once per call to lock,
 so the application can respond appropriately. Triggering that response
 logic on every mouse event, or detecting the edge of that signal, seems
 more complicated. The callbacks are also low cost; they can be inline
 anonymous functions or references, and they don't require special event
 handler registration.
 What am I missing in the value of having new event types?
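
The per-call callback shape being argued for can be sketched as follows (the function name and grant policy here are stand-ins, not the draft API): each lock attempt carries its own success/failure callback, so the response logic runs exactly once per request, with no edge detection on a global state signal.

```javascript
// Hypothetical sketch of per-call lock callbacks. The grant policy
// is a stand-in: pretend the user agent only grants the lock to an
// element that is attached (element.connected === true).
function lockMouse(element, onSuccess, onFailure) {
  if (element && element.connected) {
    onSuccess(); // fires once, for this request only
    return true;
  }
  onFailure(); // likewise scoped to this single attempt
  return false;
}
```

Contrast this with a single global "lock state changed" event, where every caller would have to work out whether a given transition was the answer to its own request.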

If your implementation had to warp the mouse cursor on Windows to get
accurate delta information, the mouse position in the existing mouse
events would no longer be very meaningful and a new event type seemed
more logical. But assuming Klaas is right, we no longer need to worry
about this. It seems we can unconditionally add delta information to
existing mouse events. So I withdraw that comment.
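
If existing mouse events did carry delta members unconditionally, a first-person control handler reduces to accumulating them, the same code whether or not the mouse is locked. A minimal sketch (the movementX/movementY member names are hypothetical here, taken from this discussion rather than a finished spec):

```javascript
// Sketch: a first-person look controller consuming hypothetical
// delta members on ordinary mouse events. Yaw accumulates freely;
// pitch is clamped to avoid flipping past straight up/down.
function makeLookController(sensitivity) {
  const state = { yaw: 0, pitch: 0 }; // degrees
  return {
    state,
    onMouseMove(event) { // event carries movementX / movementY
      state.yaw += event.movementX * sensitivity;
      state.pitch = Math.max(-90, Math.min(90,
        state.pitch + event.movementY * sensitivity));
    },
  };
}
```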

 I'm not really sure how touch events fit into this. Unlike mouse events,
 touch events always correspond to a position on the screen, so the delta
 information isn't as useful. (Or do some platforms detect touches outside
 the screen?) Maybe the only thing you need to do for touch events is to
 capture them to the focused element.
 The motivation for touch events is to make capturing to a specific element
 easy, including capturing away from any other window content outside the
 app's control (e.g. if embedded via iframe) and/or any user agent UI that
 disables its interaction when in mouse lock, to offer a better app
 experience.

OK.

 In many of your use cases, it's OK to automatically release the mouse-lock
 on mouse-up. If you automatically release on mouse-up, the security issues
 are far less serious. So I think it would be a good idea to allow
 applications to accept that behavior via the API.

 You mean you'd prefer the API have an option specified in lockMouse(...) to
 cause the mouse to be automatically unlocked when a mouse button is released,
 if the lock occurred during an event for that button being pressed? Under
 the current API draft an application specifies that with a call to
 unlockMouse in a button-up event handler.
 I'm hesitant to add it, since it seems relevant only for the lower priority
 use cases. The killer use case here is first person controls or full mouse
 control applications. The mouse capture API satisfies most of the use cases
 you're referring to, with the only limitation being loss of movement data
 when the cursor hits a screen edge.

And if Klaas is right, we don't even lose that.

Right now I'm thinking about full-screen apps. It seems to me that if
we add delta information to all mouse events, then full-screen apps
don't need anything else from your MouseLock API. Correct?

If so, I think it might be a good idea to make the delta information
API a separate proposal, and then adjust the use-cases for MouseLock
to exclude cases that only make sense for full-screen apps, and
exclude cases that can be satisfied with delta information on existing
mouse events alone.