Re: Draft recharter proposal

2016-07-29 Thread Olli Pettay

On 07/29/2016 06:13 PM, Chaals McCathie Nevile wrote:

Hi folks,

our charter expires at the end of September. I've produced a draft version of a 
new charter, for people to comment on:
http://w3c.github.io/charter-html/group-charter.html

Feel free to raise comments as issues: 
https://github.com/w3c/charter-html/issues/new

As per the change section:

New deliverables:
Microdata

Removed as deliverables:
Streams; URL; XHR1

Marked as deliverables to be taken up if incubation suggests likely success:
Background Synchronisation; Filesystem API; FindText API; HTML Import; Input 
Methods; Packaging; Quota API



Given what has been happening with the directory upload stuff recently, the 
Filesystem stuff is a bit controversial.
(Gecko and Edge are implementing https://wicg.github.io/entries-api/, or something 
quite similar. The draft doesn't quite follow browsers.
 Entries API is a subset of what Blink has been shipping.)
But I think an API considerably better than the old Chrome-only one should be 
implemented for Filesystem in general, and at that point also better support 
for directory upload, *and* for directory download.
I'd consider the callback-based, awkward-to-use Blink API a legacy thing.
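The contrast between the legacy callback style and a more modern promise-based design can be sketched generically. This is only an illustration, not the actual Filesystem API: `legacyGetFile` is an invented stand-in for a success/error callback-pair entry point like the old Blink API's.

```javascript
// Invented stand-in for a legacy callback-pair API (not a real browser API).
function legacyGetFile(path, onSuccess, onError) {
  // Pretend lookup: succeed for one known path, fail otherwise.
  if (path === "/notes.txt") {
    onSuccess({ name: "notes.txt", isFile: true });
  } else {
    onError(new Error("NotFoundError"));
  }
}

// A thin promise wrapper turns the awkward callback pairs into ordinary
// async control flow, which is roughly the shape newer APIs aim for.
function getFile(path) {
  return new Promise((resolve, reject) => {
    legacyGetFile(path, resolve, reject);
  });
}

getFile("/notes.txt").then(entry => {
  console.log(entry.name); // -> "notes.txt"
});
```

The wrapper doesn't fix the underlying API design, but it shows why callback-pair APIs read as legacy today.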


I thought it was pretty much agreed that HTML Imports is deprecated, or at 
least something not to pursue now.




Microdata has very wide ongoing usage, and it would be helpful to have 
something clearer than the current W3C Note - which includes things that don't
work - for people to refer to. So I'm proposing to do the editing, along with 
Dan Brickley from Google, and to work roughly on the basis we use in
HTML of specifying what actually works, rather than adding in what we would 
like.



So the only browser implementation of the HTML Microdata API was removed 
recently
(https://bugzilla.mozilla.org/show_bug.cgi?id=909633) because exposing the API 
caused web pages to break.




-Olli





cheers

Chaals






Re: Apple's feedback for custom elements

2016-01-24 Thread Olli Pettay

Random comments inline (other people from Mozilla may have different opinions)

On 01/24/2016 10:01 AM, Ryosuke Niwa wrote:



Hi all,

Here's WebKit team's feedback for custom elements.


== Constructor vs createdCallback ==
We would like to use constructor instead of created callback.

no comments



== Symbol-named properties for lifecycle hooks ==
After thorough consideration, we no longer think using symbols for callback 
names is a good idea.  The problem of name conflicts with an existing
library seems theoretical at best, and library and framework authors shouldn't be using 
names such as "attributeChanged" for purposes other than the one designated by 
the custom elements API.

In addition, forcing authors to write `[Element.attributeChanged]()` instead of 
`attributeChanged()` in this one API is inconsistent with the rest of the Web 
API.


Personally I agree.



== Calling attributeChanged for all attributes on creation ==
We think invoking `attributeChanged` for each attribute during creation will 
help mitigate the difference between the upgrade case and direct creation 
inside author script.

https://github.com/w3c/webcomponents/issues/364


Fine by me. This is also close to what XTF had. (It also had a pre-callback, 
willSet/RemoveAttribute [1], but we can live without those, at least in v1.)
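A rough sketch of what calling attributeChanged for each attribute on upgrade could look like. This is a toy model with invented stand-in objects (a plain pair list instead of a real attribute map), not the spec algorithm or real DOM types:

```javascript
// Toy upgrade routine: after swapping in the definition's prototype, replay
// each existing attribute through attributeChanged so the element sees the
// same callbacks as one whose attributes were set after creation.
function upgrade(element, definition) {
  Object.setPrototypeOf(element, definition.prototype);
  for (const [name, value] of element.attributes) {
    // oldValue is null: each attribute is observed as "appearing".
    element.attributeChanged(name, null, value);
  }
}

// Stand-in "element" for demonstration outside a browser.
const el = {
  attributes: [["id", "a"], ["title", "hi"]],
  seen: [],
};
class Def {
  attributeChanged(name, oldValue, newValue) {
    this.seen.push(`${name}=${newValue}`);
  }
}
upgrade(el, Def);
console.log(el.seen); // -> ["id=a", "title=hi"]
```

The point of the pattern is that upgraded and directly-created elements observe the same callback sequence.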




== Lifecycle callback timing ==
We're fine with end-of-nano-task timing: going fully synchronous is difficult 
to implement, and an async model doesn't meet authors' expectations.


nano-task scheduling sounds good.




== Consistency problem ==
This is a problem, but we think calling the constructor before attributes and 
children are added during parsing is a good enough mitigation strategy.

It is possible that we'll need beginAddingChildren()/doneAddingChildren() for 
the parsing case later, but not for v1.
I wonder if all the engines can easily have an end-of-nanotask checkpoint right 
after the main-thread part of the parser has created the DOM element and before 
any attributes/children are set.

IIRC Gecko should be fine these days.



== Attached/detached vs. inserted/removed hooks ==
Elements that define things or get used by other elements should probably do 
their work when they’re inserted into a document.  e.g. HTMLBaseElement
needs to modify the base URL of a document when it gets inserted. To support 
this use case, we need callbacks when an element is inserted into a
document/shadow-tree and removed from a document/shadow-tree.

Once we have added such insertedIntoDocument/removedFromDocument callbacks, 
attached/detached seems rather arbitrary and unnecessary as the author can
easily check the existence of the browsing context via `document.defaultView`.

We would not like to add generic callbacks (inserted/removed) for every 
insertion and removal due to performance reasons.

https://github.com/w3c/webcomponents/issues/362

I'd prefer a more consistent parentChainChanged(oldSubtreeRoot, 
newSubtreeRoot) callback or some such. That wouldn't depend on the 
is-in-document state. Adding parentChainChanged later, if there were already 
insertedIntoDocument/removedFromDocument, would just duplicate some of the 
behavior.


Something to consider, borrowed from XTF: should a custom element 
implementation tell the browser engine which notifications it is interested in?
That way performance considerations are partially moved from the browser engine 
to the custom element author.
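The XTF-style opt-in idea could look roughly like this. All names here are invented for illustration; nothing in this sketch is a proposed API:

```javascript
// Engine-side dispatcher: delivers only the notifications the element's
// class declared interest in, skipping the rest cheaply.
class NotificationDispatcher {
  constructor(element) {
    this.element = element;
    // Opt-in list: anything not declared is never delivered.
    this.interests = new Set(element.constructor.interestedNotifications || []);
  }
  notify(kind, ...args) {
    if (!this.interests.has(kind)) return false; // engine-side fast path
    this.element[kind](...args);
    return true;
  }
}

class MyElement {
  static interestedNotifications = ["attributeChanged"];
  constructor() { this.log = []; }
  attributeChanged(name) { this.log.push(name); }
  childrenChanged() { this.log.push("children"); }
}

const element = new MyElement();
const dispatcher = new NotificationDispatcher(element);
dispatcher.notify("attributeChanged", "id"); // delivered
dispatcher.notify("childrenChanged");        // skipped: not declared
console.log(element.log); // -> ["id"]
```

The design trade-off is exactly the one described: the element author, not the engine, decides which notification paths are worth paying for.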



== Style attribute spamming ==
Since keeping the old value is inherently expensive, we think we should 
mitigate this issue by adding attribute filtering.  We think having this
callback is important because responding to attribute change was the primary 
motivation for keeping the end-of-a-nano-task timing for lifecycle callbacks.

https://github.com/w3c/webcomponents/issues/350

An attribute filter sounds good, especially if it is consistent with 
MutationObserver.
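An attribute filter in the spirit of MutationObserver's attributeFilter option might work like this minimal sketch (the notifier function is hypothetical; this later shipped in custom elements as `observedAttributes`, but nothing here assumes that exact API):

```javascript
// Only changes to attributes in the filter list reach the callback;
// everything else (e.g. noisy "style" spamming) is dropped up front.
function makeAttributeNotifier(filter, callback) {
  const watched = new Set(filter);
  return (name, oldValue, newValue) => {
    if (!watched.has(name)) return; // filtered out before any work happens
    callback(name, oldValue, newValue);
  };
}

const seen = [];
const notify = makeAttributeNotifier(["disabled", "value"],
  (name, oldV, newV) => seen.push(`${name}:${oldV}->${newV}`));

notify("style", null, "color:red"); // filtered out
notify("disabled", null, "");       // delivered
notify("value", "a", "b");          // delivered
console.log(seen); // -> ["disabled:null->", "value:a->b"]
```

Filtering up front is what makes keeping old values affordable: the expensive bookkeeping only happens for attributes someone asked about.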



== childrenChanged callback ==
Given the consistency problem, it might be a good idea to add a 
`childrenChanged` callback to encourage authors to respond to this event 
instead of relying on children being present, if we decide to go with a 
non-synchronous construction/upgrading model.

On the other hand, many built-in elements that rely on children, such as 
`textarea`, tend to respond to all children at once.  So attaching a mutation 
observer and doing the work lazily might be an acceptable workflow.

So would childrenChanged be called after the parser has added all the child 
nodes? Or at some random time, or for each child node separately?

This is a case where beginAddingChildren()/doneAddingChildren() would be 
needed, but for now childrenChanged is probably OK.
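The "attach a mutation observer and do the work lazily" workflow amounts to batching: react once per microtask checkpoint, seeing the whole group of added children at once. A toy batcher (not a DOM API; `ChildBatcher` and its names are invented for illustration):

```javascript
// Queues individual child additions and fires one callback per microtask
// checkpoint with the whole batch, so the element reacts to "all children
// at once" the way textarea effectively does.
class ChildBatcher {
  constructor(onChildrenChanged) {
    this.pending = [];
    this.scheduled = false;
    this.onChildrenChanged = onChildrenChanged;
  }
  childAdded(child) {
    this.pending.push(child);
    if (!this.scheduled) {
      this.scheduled = true;
      queueMicrotask(() => {
        this.scheduled = false;
        const batch = this.pending.splice(0);
        this.onChildrenChanged(batch); // one callback for the whole batch
      });
    }
  }
}

const batches = [];
const batcher = new ChildBatcher(batch => batches.push(batch.length));
batcher.childAdded("a");
batcher.childAdded("b");
batcher.childAdded("c");
queueMicrotask(() => console.log(batches)); // -> [3]: one callback, three children
```

This is the same coalescing MutationObserver gives for free, which is why "observe and work lazily" is an acceptable substitute for a per-child callback.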





== Upgrading order ==
We should do top-down in both parsing and upgrading, since the parser needs to 
do it top-down.

no comments






== What happens when a custom element is adopted to another document? ==
Since a given custom element may not exist in a new document, retaining the 
prototype, etc... from the original 

Re: FileReader: rename onload to onsuccess

2016-01-11 Thread Olli Pettay

On 01/08/2016 07:06 PM, Juraj Maracky wrote:

Hello,
I propose to rename FileReader's load event to success. This way it is much 
easier to remember and mentally group FileReader's events:
error - success; loadstart - loadend; progress - abort.
Also, this would clarify potential confusion with loadend.

Thanks,
Juraj



'load' has been there for years now, so it can't be removed, and adding another 
event doesn't
really make the API better.

And 'load' follows the suggested event names from 
https://xhr.spec.whatwg.org/#suggested-names-for-events-using-the-progressevent-interface
so there is some consistency with the events in XMLHttpRequest.



-Olli



Re: [UIEvents] Keydown/keyup events during composition

2016-01-09 Thread Olli Pettay

On 01/10/2016 01:16 AM, Ryosuke Niwa wrote:

Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


We've been informed that Gecko/Firefox does not fire keydown/keyup events 
during input method composition for each key stroke.  Could someone from 
Mozilla clarify why this is desirable behavior?

We think it's better to fire keydown/keyup events for consistency across 
browsers.  If anything authors can detect that a given keydown/keyup event is 
associated with input methods by listening to composition events as well.

- R. Niwa





Masayuki should clarify this, but as far as I know, this case depends on the 
IME software one is using, and nothing guarantees the browser gets any sane key 
events.


-Olli



Re: [Editing] [DOM] Adding static range API

2016-01-09 Thread Olli Pettay

Hard to judge this proposal before seeing an API using StaticRange objects.

One thing though: if apps were to create an undo stack of their own, they could easily have their own Range-like API implemented in JS. So if that is 
the only use case, it's probably not worth adding anything that makes the platform more complicated. Especially since a StaticRange API might be good 
for some script libraries, but not for others.


-Olli


On 01/10/2016 01:42 AM, Ryosuke Niwa wrote:

Hi,

This is yet another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.

For editing APIs, it's desirable to have a variant of Range that is immutable.  
For example, if apps were to create an undo stack of their own, then storing the 
selection state using Range would be problematic because those Ranges would get 
updated whenever the DOM is mutated.  Furthermore, live ranges are expensive if 
browsers have to keep updating them as the DOM is mutated.  This is analogous to how 
we're moving away from LiveNodeList/HTMLCollection to StaticNodeList in 
various new DOM APIs.

So we came up with a proposal to add StaticRange: a static, immutable variant 
of Range defined as follows:

[Constructor,
  Exposed=Window]
interface StaticRange {
   readonly attribute Node startContainer;
   readonly attribute unsigned long startOffset;
   readonly attribute Node endContainer;
   readonly attribute unsigned long endOffset;
   readonly attribute boolean collapsed;
   readonly attribute Node commonAncestorContainer;

   const unsigned short START_TO_START = 0;
   const unsigned short START_TO_END = 1;
   const unsigned short END_TO_END = 2;
   const unsigned short END_TO_START = 3;
   short compareBoundaryPoints(unsigned short how, Range sourceRange);

   [NewObject] Range cloneRange();

   boolean isPointInRange(Node node, unsigned long offset);
   short comparePoint(Node node, unsigned long offset);

   boolean intersectsNode(Node node);
};

Along with range extensions from CSS OM view also added as follows:
https://drafts.csswg.org/cssom-view/#extensions-to-the-range-interface

partial interface StaticRange {
   [NewObject] sequence<DOMRect> getClientRects();
   [NewObject] DOMRect getBoundingClientRect();
};

with one difference, which is to throw an exception (perhaps 
InvalidStateError?) when StaticRange's boundary points don't share a common 
ancestor, are not in a document, or have out-of-bounds offsets.
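The immutability at the heart of the proposal can be modeled in plain JavaScript. A sketch with stand-in node objects; `StaticRangeSketch` is an illustration of the non-live semantics, not the proposed interface itself:

```javascript
// All boundary data is captured at construction and frozen, so later tree
// mutations never update it -- unlike a live Range. Field names follow the
// proposed IDL; the "nodes" are plain objects, not real DOM nodes.
class StaticRangeSketch {
  constructor(startContainer, startOffset, endContainer, endOffset) {
    this.startContainer = startContainer;
    this.startOffset = startOffset;
    this.endContainer = endContainer;
    this.endOffset = endOffset;
    this.collapsed = startContainer === endContainer &&
                     startOffset === endOffset;
    Object.freeze(this); // no updates when the DOM is mutated
  }
}

const node = { name: "p" };
const r = new StaticRangeSketch(node, 0, node, 0);
console.log(r.collapsed); // -> true
try { r.startOffset = 5; } catch (e) { /* throws in strict mode */ }
console.log(r.startOffset); // -> 0 either way: the object is frozen
```

Because nothing has to track DOM mutations, creating and holding thousands of these in an undo stack stays cheap, which is the stated motivation.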

- R. Niwa







Re: [UIEvents] Firing composition events for dead keys

2016-01-09 Thread Olli Pettay

On 01/10/2016 01:14 AM, Ryosuke Niwa wrote:

Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


We found out that all major browsers (Chrome, Firefox, and Safari) fire 
composition events for dead keys on Mac but they don't on Windows.  I think 
this difference comes from the underlying platform's difference but we think we 
should standardize it to always fire composition events for consistent behavior 
across platforms.

Does anyone know of any implementation limitation to do this?  Or are there any 
reason we should not fire composition events for dead keys on Windows?

- R. Niwa




Does anyone know the behavior on Linux?

What is the exact case you're talking about here? Do you have a testcase?


-Olli



Re: [UIEvents] Firing composition events for dead keys

2016-01-09 Thread Olli Pettay

On 01/10/2016 05:05 AM, Ryosuke Niwa wrote:



On Jan 9, 2016, at 6:33 PM, Olli Pettay <o...@pettay.fi> wrote:

On 01/10/2016 01:14 AM, Ryosuke Niwa wrote:

Hi all,

This is another feedback from multiple browser vendors (Apple, Google, 
Microsoft) that got together in Redmond last Thursday to discuss editing API 
and related events.


We found out that all major browsers (Chrome, Firefox, and Safari) fire 
composition events for dead keys on Mac but they don't on Windows.  I think 
this difference comes from the underlying platform's difference but we think we 
should standardize it to always fire composition events for consistent behavior 
across platforms.

Does anyone know of any implementation limitation to do this?  Or are there any 
reason we should not fire composition events for dead keys on Windows?



Does anyone know the behavior on Linux?

What is the exact case you're talking about here? Do you have a test case?


Sure. On Mac, you can enable International English keyboard and type ' key and 
then u.

On Mac:
1. Pressing the ' key inserts ' (the character) and fires a `compositionstart` event.
2. Pressing the u key replaces ' with ú and fires `compositionend`.


On Windows, a dead key doesn't insert any character at all, and pressing the 
second key inserts the composed character.

Looking at MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms646267(v=vs.85).aspx#_win32_Dead_Character_Messages

a dead key should issue WM_KEYDOWN as well as WM_DEADCHAR in TranslateMessage, 
so I don't think there is an inherent platform limitation to firing composition 
events.

- R. Niwa




On Linux, pressing ` once doesn't insert any character nor dispatch composition 
events.
Then pressing u after that gives composition events in Firefox Nightly, and ù 
is inserted.
Chrome (49) doesn't seem to dispatch any composition events in this case on 
Linux, although ù is also inserted.




Re: [WebIDL] T[] migration

2015-12-18 Thread Olli Pettay

On 12/18/2015 06:20 PM, Domenic Denicola wrote:

From: Simon Pieters [mailto:sim...@opera.com]


Note that it requires liveness. Does that work for a frozen array?


Frozen array instances are frozen and cannot change. However, you can have the 
property that returns them start returning a new frozen array. The spec needs 
to track when these new instances are created.


Changing the array object wouldn't be backwards compatible.
(The attribute used to be DOMStringList)


Maybe this particular API should be a method instead that returns a 
sequence?

Also not backwards compatible.

But I'd assume the first option (changing the array) would be the less 
backwards-incompatible one, so I'd prefer that.
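The FrozenArray pattern under discussion, where the array instance itself can never change and the accessor hands out a new frozen instance after each change, can be sketched as follows (the `Holder` class and its names are illustrative, not any spec's API):

```javascript
// The getter returns the same frozen instance until the underlying data
// changes; a change swaps in a *new* frozen snapshot, which is how specs
// are expected to track when new instances are created.
class Holder {
  #items = [];
  #cached = Object.freeze([]);
  get items() { return this.#cached; } // same instance until a change
  add(item) {
    this.#items.push(item);
    this.#cached = Object.freeze([...this.#items]); // new frozen snapshot
  }
}

const h = new Holder();
const before = h.items;
h.add("a");
const after = h.items;
console.log(before === after);       // -> false: a new frozen instance
console.log(Object.isFrozen(after)); // -> true
console.log(after[0]);               // -> "a"
```

This also shows why swapping DOMStringList for a frozen array changes observable behavior: code holding the old instance no longer sees updates.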




That does seem potentially better... Either could work, I think?






Re: Callback when an event handler has been added to a custom element

2015-11-06 Thread Olli Pettay

On 11/06/2015 09:28 PM, Justin Fagnani wrote:

You can also override addEventListener/removeEventListener on your element. My 
concern with that, and possibly an event listener change callback, is
that it only works reliably for non-bubbling events.

How even with those? One could just add a capturing event listener higher up in 
the tree.
You need to override addEventListener on EventTarget, and also the relevant 
onfoo EventHandler setters on the Window, Document, and *Element prototypes,
but unfortunately even that doesn't catch onfoo content attributes. One could 
use a MutationObserver then to observe changes to the DOM.


-Olli




On Thu, Nov 5, 2015 at 4:16 PM, Travis Leithead wrote:

Interesting. Alternatively, you can add .onwhatever handlers, as well as 
define your own overload of addEventListener (which will be called
instead of the EventTarget.addEventListener method). That way you can 
capture all attempts at setting events on your element.
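The addEventListener-override workaround can be sketched against a plain EventTarget (Node 15+ provides EventTarget too, so this runs outside a browser). The `onFirstListener` hook is hypothetical, invented here to stand in for "call start() on the message port":

```javascript
// Override addEventListener to learn when the first listener is registered.
// As noted in the thread, this misses capturing listeners added higher up
// the tree and onfoo content attributes.
class NotifyingTarget extends EventTarget {
  #started = false;
  addEventListener(type, listener, options) {
    super.addEventListener(type, listener, options);
    if (!this.#started) {
      this.#started = true;
      this.onFirstListener?.(type); // hypothetical hook for illustration
    }
  }
  get started() { return this.#started; }
}

const t = new NotifyingTarget();
t.onFirstListener = type => console.log(`first listener: ${type}`);
t.addEventListener("message", () => {}); // prints "first listener: message"
console.log(t.started); // -> true
```

For the MessagePort use case the hook body would call `port.start()`, under the caveats both replies raise about listeners this technique cannot see.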

-Original Message-
From: Mitar [mailto:mmi...@gmail.com ]
Sent: Thursday, November 5, 2015 4:05 PM
To: public-webapps
Subject: Callback when an event handler has been added to a custom element

Hi!

We are using message ports to communicate with our logic and are wrapping 
the API into a custom element. The issue is that we would like to call
start on a message port only after the user has registered an event handler on 
the custom element instance. But it seems there is no way to get a
callback when an event handler is added.

So I am suggesting that there should be a callback every time an event 
listener is added to a custom element (and possibly one when removed).


Mitar

--
http://mitar.tnode.com/
https://twitter.com/mitar_m







Re: [Web Components] proposing a f2f...

2015-10-28 Thread Olli Pettay

On 10/28/2015 06:19 AM, Chaals McCathie Nevile wrote:

Hi,

it would be good to have a face to face meeting, and wrap up loose ends. At the 
TPAC meeting times suggested were December and late January.

If people want to do it soon, we should probably aim for December, which means 
finding a date and host. The assumption is that we will be meeting
around the Bay Area...

I would propose a day between 10 and 14 December as a starting point (based on 
my own availability - as one of the few people traveling internationally).

FYI, Mozilla happens to have work weeks Monday, December 7 - Friday, 
December 11, in Orlando.




If you cannot make a date in that window, or think we need a meeting later, and 
want to make a counter suggestion, please do so. If you can make it,
please let me know - I am trying to get a quick sense before calling for a host 
and agreement on a particular date…

I hope we can get people settling a rough time-frame very fast, to provide 
adequate notice for people who need to organise travel.

cheers

Chaals

close action-759






Re: The key custom elements question: custom constructors?

2015-07-16 Thread Olli Pettay

On 07/16/2015 08:30 AM, Domenic Denicola wrote:

From: Travis Leithead [mailto:travis.leith...@microsoft.com]


I've discussed this issue with some of Edge's key parser developers.


Awesome; thank you for doing that!


I believe to be the most straightforward approach that most closely matches how 
the platform itself works


Thanks, it's helpful to get this non-implementation-focused reasoning out in 
the open.

What are your responses to Olli's concerns about how this is hard to spec properly? 
I.e. no one ever managed to spec MutationEvents properly, and
running author code during cloneNode(true) is at least as hard a problem to 
solve. Are you concerned about interop? It sounds like it's technically
feasible for you, but do you think it will be technically feasible in a way 
that is interoperable? (I realize that's a hard question to answer.)


For example, in parsing, I would expect that the callout happens after initial 
instance creation, but before the target node is attached to the
DOM tree by the parser.


Can you expand on this more? In particular, I am confused about how initial 
instance creation can happen without calling the constructor.


I am sympathetic to this concern, but have my own reservations about the 
proto-swizzle technique.


I think this is not the correct positioning for this question. There are two 
independent questions: is it OK to run author code during parsing and
cloning? And separately, is there utility to be gained from proto-swizzling? 
You can imagine (at least) four solutions for this 2x2 grid of yes/no
responses. In this particular thread I really want to focus on the former 
question since it is foundational.

---

It sounds like so far we have:

- Mozilla against running author code during these times

That is too strongly put, at least if you refer to my email
(where I expressed my opinions, but as usual, others from Mozilla may have 
different opinions).
I said I'd prefer if we could avoid that [running author code during 
cloneNode(true)].

And my worry is largely at the spec level.
It would also be a bit sad to reintroduce to the platform some of the issues 
MutationEvents have, now that we're
finally getting rid of those events.




- Microsoft for running author code during these times, but sympathetic to 
concerns in the
opposite direction

Is this correct so far?

I suppose I should also note

- Google against running author code during these times, based on investigation by 
Dominic (with an i) into the complexity it would add to the
platform/event loop/etc. (I believe the exact phrase "MutationEvents all over 
again" was used.)







Re: The key custom elements question: custom constructors?

2015-07-15 Thread Olli Pettay

On 07/16/2015 03:45 AM, Domenic Denicola wrote:

Hi all,

Ahead of next week's F2F, I'm trying to pull together some clarifying and 
stage-setting materials, proposals, lists of open issues, etc. In the
end, they all get blocked on one key question:

**Is it OK to run author code during parsing/cloning/editing/printing (in 
Gecko)/etc.?**


As of now, clone-document-for-printing is a Gecko implementation detail and 
shouldn't limit anything here.
I think we'll just clone whatever data needs to be cloned without running any 
scripts, since scripts won't run in the static clone anyway.
(But from an implementation point of view I can say clone-document-for-printing 
is a rather nice feature; it simplified Gecko's printing setup quite a bit ;))


Running author code during cloneNode(true) can be very hard to spec and 
implement correctly, so I'd prefer if we could avoid that.





If we allow custom elements to have custom constructors, then those must run in 
order to create properly-allocated instances of those elements;
there is simply no other way to create those objects. You can shuffle the 
timing around a bit: e.g., while cloning a tree, you could either run the
constructors at the normal times, or try to do something like 
almost-synchronous constructors [1] where you run them after constructing a 
skeleton
of the cloned tree, but before inserting them into the tree. But the fact 
remains that if custom elements have custom constructors, those custom
constructors must run in the middle of all those operations.

We've danced around this question many times. But I think we need a clear 
answer from the various implementers involved before we can continue. In
particular, I'm not interested in whether the implementers think it's 
technically feasible. I'd like to know whether they think it's something we
should standardize.

It is also about someone writing a good spec for all this - no one ever managed 
to spec MutationEvents properly, and running author code during
cloneNode(true) is at least as hard a problem to solve.



-Olli



I'm hoping we can settle this on-list over the next day or two so that we all 
come to the meeting with a clear starting point. Thanks very much,
and looking forward to your replies,

-Domenic

[1]: https://lists.w3.org/Archives/Public/public-webapps/2014JanMar/0098.html






Re: Custom Elements: createdCallback cloning

2015-07-13 Thread Olli Pettay

On 07/13/2015 09:22 AM, Anne van Kesteren wrote:

On Sun, Jul 12, 2015 at 9:32 PM, Olli Pettay o...@pettay.fi wrote:

Well, this printing case would just clone the final flattened tree without
the original document knowing any cloning happened.
(Scripts aren't supposed to run in Gecko's static clone documents, which
print preview on Linux and Windows, and printing, use.)

If one needs a special DOM tree for printing, beforeprint event should be
used to modify the DOM.


Sure, but you'd lose some stuff, e.g. canvas, and presumably custom
elements if they require copying some state, due to the cloning.
(Unless it's doing more than just cloning.)




Clone-for-printing takes a snapshot of canvas and animated images etc.

And what state from a custom element would be needed in the static clone 
document?
If the state is there in the original document, and it somehow affects layout, 
it should be copied (well, not :focus/:active and such).

Anyhow, I see clone-for-printing very much as an implementation detail, and 
wouldn't be too worried about it here.
There is enough to worry about with plain normal element.cloneNode(true), or 
selection/range handling.



-Olli



Re: Custom Elements: createdCallback cloning

2015-07-12 Thread Olli Pettay

On 07/12/2015 08:09 PM, Anne van Kesteren wrote:

On Fri, Jul 10, 2015 at 10:11 AM, Dominic Cooney domin...@google.com wrote:

I think the most important question here, though, is not constructors or
prototype swizzling.


I guess that depends on what you want to enable. If you want to
recreate existing elements in terms of Custom Elements, you need
private state.



- Progressive Enhancement. The author can write more things in markup and
present them while loading definitions asynchronously. Unlike progressive
enhancement by finding and replacing nodes in the tree, prototype swizzling
means that the author is free to detach a subtree, do a setTimeout, and
reattach it without worrying whether the definition was registered in the
interim.


How does this not result in the same issues we see with FOUC? It seems
rather problematic for the user to be able to interact with components
that do not actually work, but I might be missing things.



- Fewer (no?) complications with parsing and cloning. Prototype swizzling
makes it possible to decouple constructing the tree, allocating the wrapper,
and running Custom Element initialization. For example, if you have a Custom
Element in Chromium that does not have a createdCallback, we don't actually
allocate its wrapper until it's touched (like any Element.) But it would not
be possible to distinguish whether a user-provided constructor is trivial
and needs this.


True true.



Could you share a list of things that use the cloning algorithm?


In the DOM specification a heavy user is ranges. In turn, selection
heavily depends upon ranges. Which brings us to editing operations
such as cut & copy. None of those algorithms anticipate the DOM
changing around under them. (Though perhaps as long as mutation events
are still supported there are some corner cases there, though the
level of support of those varies.)

In Gecko printing also clones the tree and definitely does not expect
that to have side effects.


Well, this printing case would just clone the final flattened tree without the 
original document knowing any cloning happened.
(Scripts aren't supposed to run in Gecko's static clone documents, which print 
preview on Linux and Windows, and printing, use.)


If one needs a special DOM tree for printing, beforeprint event should be used 
to modify the DOM.



Note that this would break with prototype
swizzling too. Or at least you'd get a less pretty page when
printing...



What do you mean by mode switch?


That during cloning certain DOM operations cease to function, basically.







Re: [shadow-dom] ::before/after on shadow hosts

2015-06-30 Thread Olli Pettay

On 07/01/2015 02:48 AM, Tab Atkins Jr. wrote:

I was recently pointed to this StackOverflow thread
http://stackoverflow.com/questions/31094454/does-the-shadow-dom-replace-before-and-after/
which asks what happens to ::before and ::after on shadow hosts, as
it's not clear from the specs.  I had to admit that I hadn't thought
of this corner-case, and it wasn't clear what the answer was!

In particular, there seem to be two reasonable options:

1. ::before and ::after are *basically* children of the host element,
so they get suppressed when the shadow contents are displayed

2. ::before and ::after aren't *really* children of the host element,
so they still show up before/after the shadow contents.

According to the SO thread (I haven't tested this myself), Firefox and
Chrome both settled on #2.  I'm fine to spec this in the Scoping
module, I just wanted to be sure this was the answer we wanted.

~TJ




Just after reading the first paragraph, and without knowing what the 
implementations do in this case, I thought #2 would be the most obvious 
behavior to have.


-Olli



Re: Writing spec algorithms in ES6?

2015-06-11 Thread Olli Pettay

On 06/11/2015 11:41 PM, Boris Zbarsky wrote:

I would actually prefer some sort of pseudocode that is _not_ JS-looking, just 
so people don't accidentally screw this up.


This one, please - otherwise it would be way too easy to think the algorithm 
runs in the context of the page.

But usually the algorithms in the HTML and DOM specs are quite easy to follow. 
The readability issues, IMO, come from using tons of (usually spec-internal) 
links in algorithms, but I don't know how to solve that. Perhaps some tool 
could inline the relevant text from the linked place into the algorithm 
definition - or at least offer an option to do that.

-Olli




Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-09 Thread Olli Pettay

On 06/09/2015 09:39 PM, Daniel Cheng wrote:

Currently, the Clipboard API [1] mandates support for a number of formats. 
Unfortunately, we do not believe it is possible to safely support writing a
number of formats to the clipboard:
- image/png
- image/jpg, image/jpeg
- image/gif

If these types are supported, malicious web content can trivially write a 
malformed GIF/JPG/PNG to the clipboard and trigger code execution when
pasting in a program with a vulnerable image decoder. This provides a trivial 
way to bypass the sandbox that web content is usually in.

Given this, I'd like to propose that we remove the above formats from the list 
of mandatory data types, and avoid adding support for any more complex
formats.

Daniel

[1] http://www.w3.org/TR/clipboard-apis/#mandatory-data-types-1



Why would text/html, application/xhtml+xml, image/svg+xml, application/xml, 
text/xml, or application/javascript
be any safer, if the program the data is pasted into has vulnerable 
HTML/XML/JS parsing?


-Olli



Re: Shadow DOM: state of the distribution API

2015-05-15 Thread Olli Pettay

On 05/15/2015 06:39 PM, Wilson Page wrote:

Wouldn't it likely need to be called just before layout?

Probably yes, but it is not defined when that actually happens.


 All the issues Dimitri highlighted are symptoms of layout running before 
distribution.


On Fri, May 15, 2015 at 3:46 PM, Olli Pettay o...@pettay.fi wrote:

On 05/15/2015 05:37 PM, Wilson Page wrote:

Would it be possible to leave the calling of the shadowRoot's 
distribute() function to the engine? This way the engine can be in full control 
over
*when* distribution happens.



We would need to define when the engine calls it, otherwise web pages start 
to rely on the behavior of whatever engine the developers of the
particular page mostly use.



-Olli



On Wed, May 13, 2015 at 5:46 PM, Dimitri Glazkov dglaz...@google.com wrote:

I did a quick experiment around distribution timing:
https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Distribution-Timing-Experiment.md
Hope you find it helpful.

:DG









Re: Shadow DOM: state of the distribution API

2015-05-15 Thread Olli Pettay

On 05/15/2015 05:37 PM, Wilson Page wrote:

Would it be possible to leave the calling of the shadowRoot's distribute() 
function to the engine? This way the engine can be in full control over
*when* distribution happens.



We would need to define when the engine calls it, otherwise web pages start to rely on the behavior of whatever engine the developers of the 
particular page mostly use.




-Olli



On Wed, May 13, 2015 at 5:46 PM, Dimitri Glazkov dglaz...@google.com wrote:

I did a quick experiment around distribution timing:

https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Distribution-Timing-Experiment.md.
 Hope you find it helpful.

:DG







Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Olli Pettay

On 04/27/2015 02:11 AM, Hayato Ito wrote:

I think Polymer folks will answer the use case of re-distribution.



I wasn't questioning the need for re-distribution. I was questioning the need
to distribute grandchildren etc., and even more, I was wondering what kind of
algorithm would be sane in that case.

And passing in random elements that are neither in-document nor in-shadow-DOM
to be distributed would be hard too.




So let me just show a good analogy so that everyone can understand intuitively
what re-distribution *means*.
Let me use a pseudo language and define XComponent's constructor as follows:

XComponents::XComponents(Title text, Icon icon) {
   this.text = text;
   this.button = new XButton(icon);
   ...
}

Here, |icon| is *re-distributed*.

In the HTML world, this corresponds to the following:

The usage of the x-component element:

   <x-component>
     <x-text>Hello World</x-text>
     <x-icon>My Icon</x-icon>
   </x-component>

XComponent's shadow tree is:

   <shadow-root>
     <h1><content select="x-text"></content></h1>
     <x-button><content select="x-icon"></content></x-button>
   </shadow-root>

Re-distribution enables the constructor of XComponent to pass the given
parameter to another component's constructor, XButton's constructor.
If we don't have re-distribution, XComponent can't create an XButton using the
dynamic information.

XComponents::XComponents(Title text, Icon icon) {
   this.text = text;
   // this.button = new XButton(icon);  // We can't!  We don't have
re-distribution!
   this.button = new XButton("icon.png");  // XComponent has to hard-code
this. Please allow me to pass |icon| to x-button!
   ...
}
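The two-level flow in the analogy above can be sketched in plain JavaScript. This is a toy simulation with invented names (distribute, the select predicates, and the node objects are all illustrative, not a proposed API): nodes distribute into an outer component's insertion points, and one insertion point's distribution list then becomes the pool that the inner component distributes from.

```javascript
// Give each node to the first insertion point whose predicate accepts it.
function distribute(nodes, insertionPoints) {
  for (const point of insertionPoints) point.distributed = [];
  for (const node of nodes) {
    const point = insertionPoints.find(p => p.select(node));
    if (point) point.distributed.push(node);
  }
}

// Host children of <x-component>.
const text = { tag: 'x-text' };
const icon = { tag: 'x-icon' };

// x-component's shadow tree has two insertion points:
// <content select="x-text"> and, inside <x-button>, <content select="x-icon">.
const textPoint = { select: n => n.tag === 'x-text' };
const iconPoint = { select: n => n.tag === 'x-icon' };
distribute([text, icon], [textPoint, iconPoint]);

// Re-distribution: x-button's own shadow tree distributes the nodes that were
// already distributed into iconPoint, so |icon| flows through two levels.
const buttonPoint = { select: () => true };
distribute(iconPoint.distributed, [buttonPoint]);

console.log(buttonPoint.distributed[0] === icon); // true
```

The point of the sketch is only that the inner component never sees the host's children directly; it sees whatever the outer level distributed to it.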


On Sun, Apr 26, 2015 at 12:23 PM Olli Pettay o...@pettay.fi wrote:

On 04/25/2015 01:58 PM, Ryosuke Niwa wrote:
 
  On Apr 25, 2015, at 1:17 PM, Olli Pettay o...@pettay.fi wrote:
 
  On 04/25/2015 09:28 AM, Anne van Kesteren wrote:
  On Sat, Apr 25, 2015 at 12:17 AM, Ryosuke Niwa rn...@apple.com wrote:
  In today's F2F, I've got an action item to come up with a concrete 
workable proposal for imperative API.  I had a great chat about this
  afterwards with various people who attended F2F and here's a summary.  
I'll continue to work with Dimitri & Erik to work out details in the
  coming months (our deadline is July 13th).
 
  https://gist.github.com/rniwa/2f14588926e1a11c65d3
 
  I thought we came up with something somewhat simpler that didn't 
require adding an event or adding remove() for that matter:
 
  https://gist.github.com/annevk/e9e61801fcfb251389ef
 
 
  That is pretty much exactly how I was thinking the imperative API to 
work. (well, assuming errors in the example fixed)
 
  An example explaining how this all works in case of nested shadow trees 
would be good. I assume the more nested shadow tree just may get some
  nodes, which were already distributed, in the distributionList.
 
  Right, that was the design we discussed.
 
  How does the distribute() behave? Does it end up invoking distribution 
in all the nested shadow roots or only in the callee?
 
  Yes, that's the only reason we need distribute() in the first place.  If 
we didn't have to care about redistribution, simply exposing methods to
  insert/remove distributed nodes on content element is sufficient.
 
  Should distribute callback be called automatically at the end of the 
microtask if there has been relevant[1] DOM mutations since the last manual
  call to distribute()? That would make the API a bit simpler to use, if 
one wouldn't have to use MutationObservers.
 
  That's a possibility.  It could be an option to specify as well.  But 
there might be components that are not interested in updating distributed
  nodes for the sake of performance for example.  I'm not certain forcing 
everyone to always update distributed nodes is necessarily desirable given
  the lack of experience with an imperative API for distributing nodes.
 
  [1] Assuming we want to distribute only direct children, then any child 
list change or any attribute change in the children might cause
  distribution() automatically.
 
  I think that's a big if now that we've gotten rid of select attribute 
and multiple generations of shadow DOM.

It is not clear to me at all how you would handle the case when a node has 
several ancestors with shadow trees, and each of those want to distribute
the node to some insertion point.
Also, what is the use case to distribute non-direct descendants?




   As far as I could recall, one of
  the reasons we only supported distributing direct children was so that we could 
implement select attribute and multiple generations of shadow
  DOM.   If we wanted, we could always impose such a restriction in a 
declarative syntax and inheritance mechanism we add in v2 since those v2 APIs

Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-25 Thread Olli Pettay

On 04/25/2015 09:28 AM, Anne van Kesteren wrote:

On Sat, Apr 25, 2015 at 12:17 AM, Ryosuke Niwa rn...@apple.com wrote:

In today's F2F, I've got an action item to come up with a concrete workable
proposal for imperative API.  I had a great chat about this afterwards with
various people who attended F2F and here's a summary.  I'll continue to work
with Dimitri & Erik to work out details in the coming months (our deadline
is July 13th).

https://gist.github.com/rniwa/2f14588926e1a11c65d3


I thought we came up with something somewhat simpler that didn't
require adding an event or adding remove() for that matter:

   https://gist.github.com/annevk/e9e61801fcfb251389ef



That is pretty much exactly how I was thinking the imperative API would work.
(well, assuming the errors in the example are fixed)

An example explaining how this all works in case of nested shadow trees would 
be good.
I assume the more nested shadow tree just may get some nodes, which were 
already distributed, in the distributionList.

How does the distribute() behave? Does it end up invoking distribution in all 
the nested shadow roots or only in the callee?

Should distribute callback be called automatically at the end of the microtask 
if there has been relevant[1] DOM mutations since the last
manual call to distribute()? That would make the API a bit simpler to use, if 
one wouldn't have to use MutationObservers.
(Even then one could skip distribution during page load time, for example, and
do a page-level distribution of all the content once all the data is ready,
etc., if wanted.)





-Olli

[1] Assuming we want to distribute only direct children, then any child list 
change or any attribute change in the children
might cause distribution() automatically.
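The end-of-microtask idea in footnote [1] can be sketched as a small scheduler: mutations only mark the host dirty, and at most one distribution pass runs per checkpoint. The class and method names below are invented for illustration; flushMicrotask() stands in for the engine's end-of-microtask checkpoint, which is exactly the part the spec would have to define.

```javascript
// Coalesce "relevant DOM mutations" into a single distribute() call.
class DistributionScheduler {
  constructor(distributeCallback) {
    this.distributeCallback = distributeCallback;
    this.dirty = false;
  }
  // Called on every child-list or attribute change in the host's children.
  notifyMutation() {
    this.dirty = true;
  }
  // The engine would run this at the end of the current microtask.
  flushMicrotask() {
    if (!this.dirty) return;
    this.dirty = false;
    this.distributeCallback();
  }
}

let distributeCalls = 0;
const scheduler = new DistributionScheduler(() => distributeCalls++);

// Three mutations within one task...
scheduler.notifyMutation();
scheduler.notifyMutation();
scheduler.notifyMutation();
scheduler.flushMicrotask();

console.log(distributeCalls); // 1: the mutations were coalesced
```

This is the behavior a component would otherwise have to assemble itself out of MutationObserver callbacks and a manual distribute() call.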





I added an example there that shows how you could implement content
select, it's rather trivial with the matches() API. I think you can
derive any other use case easily from that example, though I'm willing
to help guide people through others if it is unclear. I guess we might
still want positional insertion as a convenience though the above
seems to be all you need primitive-wise.
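The "trivial with the matches() API" claim can be illustrated without a DOM: computing the distribution list for a content select is just a filter over the host's direct children. matchesSelector below is a toy stand-in for Element.matches() (it compares tag names only), and all names here are invented for this sketch rather than taken from the gist.

```javascript
// Toy stand-in for Element.matches(selector): tag-name comparison only.
function matchesSelector(node, selector) {
  return node.tag === selector;
}

// Distribution list for one insertion point: host children that match the
// selector and were not already claimed by an earlier insertion point.
function distributionListFor(selector, hostChildren, claimed) {
  return hostChildren.filter(
    node => !claimed.has(node) && matchesSelector(node, selector)
  );
}

const children = [{ tag: 'x-text' }, { tag: 'x-icon' }, { tag: 'x-text' }];
const claimed = new Set();

const textList = distributionListFor('x-text', children, claimed);
textList.forEach(n => claimed.add(n));
const iconList = distributionListFor('x-icon', children, claimed);

console.log(textList.length, iconList.length); // 2 1
```

With real elements the predicate would simply be `node.matches(selector)`; the claiming logic is what gives earlier insertion points priority, mirroring how declarative content select behaved.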







Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-25 Thread Olli Pettay

On 04/25/2015 01:58 PM, Ryosuke Niwa wrote:



On Apr 25, 2015, at 1:17 PM, Olli Pettay o...@pettay.fi wrote:

On 04/25/2015 09:28 AM, Anne van Kesteren wrote:

On Sat, Apr 25, 2015 at 12:17 AM, Ryosuke Niwa rn...@apple.com wrote:

In today's F2F, I've got an action item to come up with a concrete workable 
proposal for imperative API.  I had a great chat about this
afterwards with various people who attended F2F and here's a summary.  I'll 
continue to work with Dimitri & Erik to work out details in the
coming months (our deadline is July 13th).

https://gist.github.com/rniwa/2f14588926e1a11c65d3


I thought we came up with something somewhat simpler that didn't require adding 
an event or adding remove() for that matter:

https://gist.github.com/annevk/e9e61801fcfb251389ef



That is pretty much exactly how I was thinking the imperative API would work.
(well, assuming the errors in the example are fixed)

An example explaining how this all works in case of nested shadow trees would 
be good. I assume the more nested shadow tree just may get some
nodes, which were already distributed, in the distributionList.


Right, that was the design we discussed.


How does the distribute() behave? Does it end up invoking distribution in all 
the nested shadow roots or only in the callee?


Yes, that's the only reason we need distribute() in the first place.  If we 
didn't have to care about redistribution, simply exposing methods to
insert/remove distributed nodes on content element is sufficient.


Should distribute callback be called automatically at the end of the microtask 
if there has been relevant[1] DOM mutations since the last manual
call to distribute()? That would make the API a bit simpler to use, if one 
wouldn't have to use MutationObservers.


That's a possibility.  It could be an option to specify as well.  But there 
might be components that are not interested in updating distributed
nodes for the sake of performance for example.  I'm not certain forcing 
everyone to always update distributed nodes is necessarily desirable given
the lack of experience with an imperative API for distributing nodes.


[1] Assuming we want to distribute only direct children, then any child list 
change or any attribute change in the children might cause
distribution() automatically.


I think that's a big if now that we've gotten rid of select attribute and 
multiple generations of shadow DOM.


It is not clear to me at all how you would handle the case when a node has 
several ancestors with shadow trees, and each of those want to distribute
the node to some insertion point.
Also, what is the use case to distribute non-direct descendants?





 As far as I could recall, one of
the reasons we only supported distributing direct children was so that we could implement 
select attribute and multiple generations of shadow
DOM.   If we wanted, we could always impose such a restriction in a declarative 
syntax and inheritance mechanism we add in v2 since those v2 APIs
are supposed to build on top of this imperative API.

Another big if is whether we even need to let each shadow DOM select nodes to 
redistribute.  If we don't need to support filtering distributed
nodes in insertion points for re-distribution (i.e. we either distribute 
everything under a given content element or nothing), then we don't need
all of this redistribution mechanism baked into the browser and the model where 
we just have insert/remove on content element will work.

- R. Niwa






Re: Proposal for changes to manage Shadow DOM content distribution

2015-04-22 Thread Olli Pettay

On 04/22/2015 03:54 PM, Tab Atkins Jr. wrote:

On Wed, Apr 22, 2015 at 2:53 PM, Ryosuke Niwa rn...@apple.com wrote:

On Apr 22, 2015, at 2:38 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

On Wed, Apr 22, 2015 at 2:29 PM, Ryosuke Niwa rn...@apple.com wrote:

On Apr 22, 2015, at 8:52 AM, Domenic Denicola d...@domenic.me wrote:

Between content-slot-specified slots, attribute-specified slots,
element-named slots, and everything-else-slots, we're now in a weird place
where we've reinvented a micro-language with some, but not all, of the power
of CSS selectors. Is adding a new micro-language to the web platform worth
helping implementers avoid the complexity of implementing CSS selector
matching in this context?


I don't think mapping an attribute value to a slot is achievable with a
content element with select attribute.


<content select="[my-attr='the slot value']">


No. That's not what I'm talking here.  I'm talking about putting the
attribute value into the insertion point in [1] [2] [3], not distributing an
element based on an attribute value.


Oh, interesting.  That appears to be a complete non-sequitur, tho, as
no one has asked for anything like that.  It's *certainly* irrelevant
as a response to the text you quoted.



FYI, putting an attribute into the (attribute) insertion point is something
XBL[1|2] supports.
https://developer.mozilla.org/en-US/docs/XBL/XBL_1.0_Reference/Anonymous_Content#Attribute_Forwarding
http://www-archive.mozilla.org/projects/xbl/xbl2.html#forwarding

xbl:text isn't used too often, but it is used anyhow,
http://mxr.mozilla.org/mozilla-central/search?string=xbl%3Atext
and xbl:inherits is rather common
http://mxr.mozilla.org/mozilla-central/search?string=xbl%3Ainherits&find=&findi=&filter=^[^\0]*%24&hitlimit=&tree=mozilla-central
in Firefox's UI, which after all is mostly created using various components or
bindings (it doesn't matter whether the underlying language is XUL or HTML).



-Olli



Re: [websockets] Test results available

2015-03-26 Thread Olli Pettay

On 03/26/2015 04:51 PM, Arthur Barstow wrote:

Earlier today I ran the Web Sockets tests on Chrome 41, Chrome/Canary 43, FF 
Nightly 39, IE 11, and Opera 12 and pushed the results to the
test-results repo:

* All results http://w3c.github.io/test-results/websockets/all.html

* 2 passes http://w3c.github.io/test-results/websockets/less-than-2.html

Overall these results are pretty good: 97% of the 495 tests have two or more 
passes.

If anyone is willing to help with the failure analysis, that would be very much 
appreciated.

Odin, Simon - for the purposes of evaluating these results and the Candidate 
Recommendation (exit criteria), should the Opera data be included?

-Thanks, ArtB






websockets/interfaces.html: the test itself has bugs (it uses an old idlharness.js?).

Also websockets/interfaces/WebSocket/events/013.html is buggy. Seems to rely on 
blink/presto's EventHandler behavior, which is not
what the spec says should happen.


-Olli






Re: Shadow tree style isolation primitive

2015-02-05 Thread Olli Pettay

On 02/05/2015 02:24 AM, Dimitri Glazkov wrote:



However, I would like to first understand if that is the problem that the group 
wants to solve. It is unclear from this conversation.


Yes. The marketing speech for shadow DOM has changed over time from "do
everything possible, make things awesome" to "explain the platform" to the
current "enable easier composition".
So it is not very clear to me what even the authors of the spec always want,
this said with all the kindness :)



Personally I think the composition piece by itself doesn't justify the
complexity of shadow DOM.
Though, it is not clear what composition means to different people. Is the need
for insertion points part of
composition? In my mind it might not be. It is part of the stronger
encapsulation where
one has hidden DOM between a parent node and a child node.
Is event retargeting part of composition? It might not be, if composition is to
deal with nodes which are all in the document
(and if the nodes that are part of the composition were in the document, we
wouldn't have all the is-in-document issues).
And so on.


That said, I think we should aim for something stronger than just enabling
easier composition.
The end goal could go as far as letting pages implement their own form
controls. And making that
all less error prone for the users of such components requires encapsulation.
So, start with composition but keep the requirements for proper
encapsulation in mind by not introducing
syntaxes or APIs which might make implementing encapsulation harder.


Are there cases where encapsulation and composition contradict each other? I
guess that depends on the definitions of both.





-Olli



Re: Shadow tree style isolation primitive

2015-02-04 Thread Olli Pettay

On 02/05/2015 01:20 AM, Tab Atkins Jr. wrote:

You don't need strong isolation primitives to do a lot of good.
Simple composition helpers lift an *enormous* weight off the shoulders
of web devs, and make whole classes of bugs obsolete.  Shadow DOM is
precisely that composition helper right now.  In most contexts, you
can't ever touch something inside of shadow DOM unless you're doing it
on purpose, so there's no way to friendly fire (as Brian puts it).


If we want to just help with composition, then we can find a simpler
model than shadow DOM, with its multiple shadow roots per host, event handling
oddities and what not (and all the mess with is-in-doc is still something to
be sorted out, etc.).






Stronger isolation does solve some problems, sure.  But trying to
imply that those are the only problems we need to solve,

No one has tried to imply that. I don't know where you got that.






Re: Shadow tree style isolation primitive

2015-02-04 Thread Olli Pettay

On 02/03/2015 07:24 PM, Dimitri Glazkov wrote:

Not trying to barge in, just sprinkling data...

On Tue, Feb 3, 2015 at 6:22 AM, Brian Kardell bkard...@gmail.com wrote:



On Tue, Feb 3, 2015 at 8:06 AM, Olli Pettay o...@pettay.fi wrote:

On 02/02/2015 09:22 PM, Dimitri Glazkov wrote:

Brian recently posted what looks like an excellent framing of the 
composition problem:


https://briankardell.wordpress.com/2015/01/14/friendly-fire-the-fog-of-dom/

This is the problem we solved with Shadow DOM and the problem I 
would like to see solved with the primitive being discussed on this thread.



random comments about that blog post.

[snip]
We need to be able to select mount nodes explicitly, and perhaps 
explicitly say that all such nodes should be selected.
So, maybe, deep(mountName) and deep(*)

Is there a reason you couldn't do that with normal CSS techniques, no 
additional combinator?  something like /mount/[id=foo] ?


That's ::shadow in the scoping spec: 
http://dev.w3.org/csswg/css-scoping/#shadow-pseudoelement



[snip]

It still needs to be possible from the hosting page to say “Yes, I mean all 
buttons should be blue”
I disagree with that. It can very well be possible that some component 
really must control the colors itself. Say, it uses
buttons to indicate if traffic light is red or green. Making both those 
buttons suddenly blue would break the whole concept of the
component.


This is still possible, and works in a predictable way with today's styling 
machinery. Use inline styles on the button that you want to be green/red
inside of the scope, and no /deep/ or /mount/ or >>> will be able to affect it:
http://jsbin.com/juyeziwaqo/1/edit?html,css,js,output ... unless the
war progressed to the stage where !important is used as a hammer.



Why should even !important work if the component wants to use its own colors?






:DG





Re: Shadow tree style isolation primitive

2015-02-04 Thread Olli Pettay

On 02/03/2015 04:22 PM, Brian Kardell wrote:



On Tue, Feb 3, 2015 at 8:06 AM, Olli Pettay o...@pettay.fi wrote:

On 02/02/2015 09:22 PM, Dimitri Glazkov wrote:

Brian recently posted what looks like an excellent framing of the 
composition problem:


https://briankardell.wordpress.com/2015/01/14/friendly-fire-the-fog-of-dom/

This is the problem we solved with Shadow DOM and the problem I would 
like to see solved with the primitive being discussed on this thread.



random comments about that blog post.

[snip]
We need to be able to select mount nodes explicitly, and perhaps explicitly 
say that all such nodes should be selected.
So, maybe, deep(mountName) and deep(*)

Is there a reason you couldn't do that with normal CSS techniques, no 
additional combinator?  something like /mount/[id=foo] ?


That leaves all the isolation up to the outside world.
If ShadowRoot had something like "attribute DOMString? name;" which defaults to
null, and null meant that deep(name) or deep(*) wouldn't be able
to find the mount, that would let the component itself say whether it can
deal with the outside world poking at it with CSS.





[snip]

It still needs to be possible from the hosting page to say “Yes, I mean all 
buttons should be blue”
I disagree with that. It can very well be possible that some component 
really must control the colors itself. Say, it uses
buttons to indicate if traffic light is red or green. Making both those 
buttons suddenly blue would break the whole concept of the
component.


By the previous comment though it seems you are saying it's ok to reach into 
the mounts,

If the mount explicitly wants that.


in which case you could do exactly this... Perhaps the

shortness of the sentence makes it seem like I am saying something I am not, 
basically I'm saying it should be possible to explicitly write rules
which do apply inside a mount.

I agree with "it should be possible to explicitly write rules which do apply
inside a mount", assuming the mount itself has been flagged to allow that.
Otherwise it wouldn't really be explicitness, since >>> could just as easily
select any random mount.



 CSS already gives you all sorts of tools for someone developing a bit in 
isolation to say how important it is that
this particular rule holds up - you can increase specificity with id-based nots 
or use !important or even the style attribute itself if it is that
fundamental - what you can't do is protect yourself on either end from 
accidental error.  I feel like one could easily over-engineer a solution here
and kill its actual chances of success, whereas a smaller change could not only 
have a good chance of getting done, but have very outsized impact and
provide some of the data on how to improve it further.



Why do we need shadow DOM (or something similar) at all if we expose it easily
to the outside world?
One could even now just require that elements in components in a web page have
class="component", and then
.component could be used as >>>. Sure, it would require :not(.component) usage
too.
And on the DOM APIs side one could easily implement filtering for the contents
of components using small script libraries.


[Perhaps a bit off topic to the style isolation]
In other words, I'm not very happy to add the super complicated Shadow DOM to
the platform if it doesn't really provide anything new that
couldn't be implemented easily with script libraries and somewhat stricter
coding styles and conventions.



-Olli




If this doesn't seem -hostile- to decent further improvements, finding 
something minimal but
still very useful might be good.






--
Brian Kardell :: @briankardell :: hitchjs.com http://hitchjs.com/





Re: Shadow tree style isolation primitive

2015-02-03 Thread Olli Pettay

On 02/02/2015 09:22 PM, Dimitri Glazkov wrote:

Brian recently posted what looks like an excellent framing of the composition 
problem:

https://briankardell.wordpress.com/2015/01/14/friendly-fire-the-fog-of-dom/

This is the problem we solved with Shadow DOM and the problem I would like to 
see solved with the primitive being discussed on this thread.




random comments about that blog post.

"It's intuitive then to create a combinator in CSS which allows you to select
the mount explicitly"
Yes, I agree with that, assuming the mount actually wants to be selectable.
And even if all the mounts were selectable, we don't have atm a way to select
some particular
mount explicitly. And I think we should have that explicitness. >>> or
/deep/ are like
"select all and cross your fingers you selected what you wanted, and not
anything more".
We need to be able to select mount nodes explicitly, and perhaps explicitly say
that all such nodes should be selected.
So, maybe, deep(mountName) and deep(*).


It still needs to be possible from the hosting page to say “Yes, I mean all buttons 
should be blue”
I disagree with that. It can very well be possible that some component really 
must control the colors itself. Say, it uses
buttons to indicate if traffic light is red or green. Making both those buttons 
suddenly blue would break the whole concept of the
component.


Without the explicitly-ness we're back having the initial problems we're trying 
to solve, as the
blog says
That is, preventing accidental violence against your allies is really hard – it’s simply too easy to accidentally select and operate on elements that
aren’t “yours”.

Same explicitly-ness should apply to things like Event.path etc.




(I still think shadow DOM needs proper encapsulation, even if components would 
all be 'allies'. A use case for encapsulation
would be rather similar to private: or protected: in many languages. But 
encapsulation is perhaps a bit different issue from
weaker isolation.)



-Olli



Re: [Selection] Should selection.getRangeAt return a clone or a reference?

2015-01-24 Thread Olli Pettay

On 01/24/2015 09:52 AM, Koji Ishii wrote:

On Thu, Jan 22, 2015 at 12:20 AM, Mats Palmgren m...@mozilla.com wrote:



If we really want authors to have convenience methods like
setStartBefore() on Selection, we could add them to Selection.


Selection methods wouldn't provide the same functionality though.
Selection.setStart* would presumably be equivalent to setStart*
on the first range in the Selection, but how do you modify the start
boundary point on other ranges when there are more than one?

I guess we could add them as convenience methods, making setStart*
operate on the first range and setEnd* on the last, but it's still
an incomplete API for multi-range Selections.


We could add, say, getRangeProxyAt(index) to get a selection object
that has the Range interface if this is really what authors want.

But right now, authors are not relying on the live-ness behavior
because it's not interoperable. As I understand it, not being interoperable
is a bigger issue than getRangeAt(index) not having live-ness.

Right now the liveness doesn't really cause issues, since only some UAs support 
it.
But that doesn't mean getRangeAt should return cloned ranges.
Adding another getRange*At would just pollute the API.

The more I think about this, the more I'm leaning toward the option that we
should use
live Range objects with Selection.
(But perhaps there is some different kind of API model which
could support multiple ranges, and getRangeAt could be left as a legacy method
that returns clones.
Adding something like getRangeProxyAt would not be such a new model, though.)
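The observable difference being debated here can be modeled with plain objects instead of real Selection/Range (FakeSelection and both method names are invented for this sketch). With the live variant, mutating the returned range mutates the selection itself; with the cloned variant, it does not.

```javascript
class FakeSelection {
  constructor() {
    this.range = { start: 0, end: 5 };
  }
  getRangeAtLive() {
    return this.range; // Gecko/Trident-style: the selection's own object
  }
  getRangeAtCloned() {
    return { ...this.range }; // WebKit/Blink-style: a detached copy
  }
}

const sel = new FakeSelection();

sel.getRangeAtLive().end = 9;    // updates the selection itself
console.log(sel.range.end);      // 9

sel.getRangeAtCloned().end = 42; // updates only the copy
console.log(sel.range.end);      // still 9
```

This is also why the live variant implies the extra speccing work mentioned in the thread: the engine has to define what happens to the shared object under DOM mutation, while a clone can simply go stale.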



In
selections and editing, we have so much we wish to do, I'd like us to
solve bigger issues first,

Like what?

How we end up supporting multiple ranges is a rather big thing and something
to keep in mind even if we end up having some kind of v1 spec without
support for it.


-Olli



make them available to editor developers,
then improve as needed.




Actually -- well, only if you're interested in doing this -- you could
have both methods, then see how much authors prefer the live-ness. If
it's proved that the live-ness is so much liked by editor developers,
and if we have solved other critical issues at that point, I do not
see any reasons other browsers do not follow.

/koji






Re: webview API common subset

2015-01-23 Thread Olli Pettay

On 01/23/2015 02:20 PM, Arthur Barstow wrote:


On 1/23/15 5:11 AM, Kan-Ru Chen (陳侃如) wrote:

Hi all,

I intend to implement the webview element in gecko and as part of my
research I put up a wiki page that summarize the browser-api or
webview API that has been implemented by vendors here:

   https://wiki.mozilla.org/WebAPI/BrowserAPI/Common_Subset

I wonder if other vendors are interested in creating a unified webview
standard? I hope this is the right forum for this.


Hi Kanru,

Thanks for your e-mail.

Where applicable, what do you think about including a link to the relevant 
standard? For example, it appears that back() has a standard at [1] and [2].

back() in [1] is about the whole browsing context tree. A webview would need to
limit it to the contents of the element.
So the scope of the session history needs to be split at the webview.



I say this because I presume some goals here are to help converge on an agreed 
standard if/when one is available and to help identify standardization
gaps?

-ArtB

[1] https://html.spec.whatwg.org/#dom-history-back
[2] 
http://www.w3.org/html/wg/drafts/html/master/browsers.html#dom-history-back



Kanru









Re: [Selection] Should selection.getRangeAt return a clone or a reference?

2015-01-12 Thread Olli Pettay

On 01/10/2015 06:30 PM, Aryeh Gregor wrote:

On Fri, Jan 9, 2015 at 8:29 PM, Olivier Forget teleclim...@gmail.com wrote:

On Fri Jan 09 2015 at 4:43:49 AM Aryeh Gregor a...@aryeh.name wrote:




- It may never happen, but when multiple ranges are supported, are
they bound to index?


Everyone wants to kill this feature, so it's moot.



Could you please point me to the discussion where this conclusion was
reached? I searched the mailing list but I only found a few ambivalent
threads, none indicating that everyone wants to kill this. Thanks.


I don't remember whether it was ever discussed on the mailing list in
depth.  The gist is that no one has ever implemented it except Gecko,
and I'm pretty sure no one else is interested in implementing it.  The
Selection interface was invented by Netscape to support multiple
ranges to begin with, but all the other UAs that reverse-engineered it
and/or implemented from the DOM Range specs deliberately made it
support only one range (in incompatible UA-specific ways, naturally).
Ehsan Akhgari, maintainer of the editor component for Gecko, is in
favor of removing (user-visible) support for multiple selection ranges
from Gecko, and last I heard no one objected in principle.  So the
consensus of implementers is to support only one range.  As far as I
know, the only reason Gecko still supports multiple ranges is because
no one has gotten around to removing them.  (Ehsan would know more
about that.)

I doubt we're going to remove support for multiple ranges, at least not
internally.
If the standardized/stable web API supports only one range, then perhaps we'd
expose
only whatever primary range there is, but internally we need the functionality.
(And personally I think supporting multiple ranges is a good thing, and if
there are
issues in an implementation, that shouldn't lead to a weaker spec which doesn't
support
them.)


-Olli




The reason for all this is that while it makes wonderful theoretical
sense to support multiple ranges for a selection, and is necessary for
extremely sensible features like allowing a user to select columns of
a table, multi-range selections are nonexistent in practice.  A
selection that has multiple ranges in it is guaranteed to be
mistreated by author code, because no one actually tests their code on
multi-range selections.  More than that, Gecko code -- which is much
higher-quality than typical author code and much more likely to take
multiple ranges into account -- has tons of bugs with multi-range
selections and behaves nonsensically in all sorts of cases.  So in
practice, multi-range selections break everyone's code in the rare
cases where they actually occur.  In general, an API that has a
special case that will almost never occur is guaranteed to be used in
a way that will break the special case, and that's very poor API
design.

In theory, a redesigned selection API that allows for non-contiguous
selections *without* making them a special case would be great.
Perhaps a list of selected nodes/character ranges.  But multiple
ranges is not the way to do things.






Re: [Selection] Should selection.getRangeAt return a clone or a reference?

2015-01-06 Thread Olli Pettay

On 01/07/2015 12:32 AM, Ryosuke Niwa wrote:

https://github.com/w3c/selection-api/issues/40

Trident (since IE10) and Gecko both return a live Range, which can be modified 
to update selection.  WebKit and Blink both return a clone Range so that any 
changes to the Range doesn't update the selection.

It appears that there is a moderate interest at Mozilla to change Gecko's 
behavior.  Does anyone have a strong opinion about this?



I don't have a strong opinion on this, although a live Range can be a rather nice 
thing when one wants to change the selection.
But implementing the liveness properly can be somewhat annoying - except that 
engines need to internally track DOM mutations inside the
selection anyway, so maybe it is not so bad after all.
Perhaps speccing the special cases (like when one makes a Range point to a 
detached DOM subtree) would be enough?
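The live-vs-clone distinction being discussed can be illustrated with a toy model (an assumption-laden sketch; these are plain objects, not real DOM Selection/Range instances):

```javascript
// Model a selection whose getRangeAt() either aliases the internal range
// (Trident/Gecko behavior) or hands out a copy (WebKit/Blink behavior).
function makeSelectionModel(live) {
  const range = { startOffset: 0, endOffset: 5 };
  return {
    getRangeAt() {
      return live ? range : { ...range }; // live: alias; clone: copy
    },
    internalRange: range, // what the selection itself currently covers
  };
}

const geckoLike = makeSelectionModel(true);
geckoLike.getRangeAt().endOffset = 10;
console.log(geckoLike.internalRange.endOffset); // 10 -- the selection moved

const blinkLike = makeSelectionModel(false);
blinkLike.getRangeAt().endOffset = 10;
console.log(blinkLike.internalRange.endOffset); // 5 -- selection unchanged
```

In the live model, mutating the returned Range is a way to move the selection; in the clone model it is a no-op on the selection, which is the interoperability question at issue.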

But as I said, I don't have strong feelings about this.

-Olli





- R. Niwa







Re: =[xhr]

2014-09-03 Thread Olli Pettay

On 09/03/2014 12:10 PM, Greeves, Nick wrote:

I would like to emphasise the detrimental effect that the proposed 
experimentation would have on a large number of sites across Chemistry research 
and
education that would mysteriously stop working when users (automatically) 
upgraded their browsers and JSmol ceased to function.


But you now know that sync XHR will be removed from the main thread, and you have 
plenty of time to fix JSmol to use async XHR.
I wouldn't expect any browser to even try to remove support for sync XHR before 
2016, and even then only if the usage is low enough.
(and the initial experiments to try to remove the feature would be done in 
nightly/development builds, not in release builds)
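The migration being asked of JSmol can be sketched roughly like this (the function name, URL, and `render` callback are hypothetical; only the sync-to-async XHR pattern itself is the point):

```javascript
// Before (the pattern being deprecated -- blocks the UI thread):
//   var xhr = new XMLHttpRequest();
//   xhr.open("GET", url, false); // false = synchronous
//   xhr.send();
//   render(xhr.responseText);

// After: the same fetch, without blocking the main thread.
function loadModel(url) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("GET", url, true); // true = asynchronous
    xhr.onload = () => resolve(xhr.responseText);
    xhr.onerror = () => reject(new Error("network error"));
    xhr.send();
  });
}

// Usage in a page (hypothetical):
// loadModel("molecule.pdb").then(render);
```

Callers move the work that followed the blocking call into the completion handler; nothing else about the logic needs to change.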




-Olli



JSmol is used so widely because it gets away from the historic need for a 
specific browser version and a specific plugin or Java installation, and
works across all browsers and platforms.

Examples of critical sites that would be broken/have to be rebuilt include

UK National Chemical Database Service http://cds.rsc.org notably CSD, ICSD, 
ChemSpider, CrystalWorks

I should also declare a vested interest as my own Open Educational Resource 
ChemTube3D depends on JSmol, which supports the teaching of Chemistry in
Liverpool and across the world. There were more than 590,000 visitors (up 48% 
on the previous year) from 209 countries in the last year of operation.

--
Nick Greeves (via OS X Mail)
Director of Teaching and Learning
Department of Chemistry
University of Liverpool
Donnan and Robert Robinson Laboratories
Crown Street, LIVERPOOL L69 7ZD U.K.
Email address: ngree...@liverpool.ac.uk mailto:ngree...@liverpool.ac.uk
WWW Pages: http://www.chemtube3d.com
Tel:+44 (0)151-794-3506 (3500 secretary)
Dept Fax:   +44 (0)151-794-3588





Re: =[xhr]

2014-07-25 Thread Olli Pettay

On 07/24/2014 02:49 AM, Paul bellamy wrote:

Hi

In the specification for XMLHttpRequest you posted a “warning” about using 
async=false which indicates that it is the intention to eventually remove
this feature due to “detrimental effects to the user experience” when in a 
document environment.

I understand that synchronous events retrieving data can, if not managed 
properly in the code, cause delays to the flow of the parsing and display of
the document.


Sync XHR always causes jank, and one can't really control how bad it is, since that 
depends on the quality of the network.
See for example 
http://blogs.msdn.com/b/wer/archive/2011/08/03/why-you-should-use-xmlhttprequest-asynchronously.aspx




This may, if the programming practices are poor, be extrapolated to be 
"detrimental to the user's experience"; however, there are times
when there is a need to have data retrieved and passed synchronously when 
dealing with applications.

In *business* application development there will always be the situation of the 
client needing to manipulate the display based on actions that
retrieve data or on previously retrieved data. In these cases it is necessary 
for the data retrieval to be synchronous.

Why do you need synchronous XHR in this case? Async XHR can be used just as well 
to retrieve data which is then used for manipulating the UI.
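The pattern being suggested can be sketched as follows (hypothetical names throughout; the browser-API usage is shown only in comments, with a simplified data source standing in for the network):

```javascript
// Retrieve data asynchronously and do the UI manipulation in the
// completion callback, rather than blocking until the data arrives.
function updateWhenReady(getData, applyToUI) {
  getData(data => applyToUI(data)); // UI work happens when the data arrives
}

// In a page, getData would wrap an async XHR, e.g.:
//   updateWhenReady(cb => {
//     const xhr = new XMLHttpRequest();
//     xhr.open("GET", "/api/state", true);
//     xhr.onload = () => cb(JSON.parse(xhr.responseText));
//     xhr.send();
//   }, state => { document.title = state.title; });

// Demonstration with an immediate (synchronous) data source:
const ui = [];
updateWhenReady(cb => cb("ready"), data => ui.push(data));
console.log(ui); // ["ready"]
```

The business-application case described in the mail fits this shape: the decision about what action to take simply happens inside the callback instead of after a blocking call.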



If the document/form has to be resubmitted in full each time a client-side 
action is taken or the client needs to retrieve data to decide what action
to take, then the user experience is definitely affected detrimentally as the 
entire document needs to be uploaded, downloaded, parsed and displayed
again. Further there is the unnecessary need to retain instances of variables 
describing the client-side environment on the server-side.  Variables
which are not necessary for processing and should be handled by the client.

For this reason I wonder why it would be necessary to remove such a useful tool.

I don’t claim to be the expert on programming or technical specification, 
perhaps I’ve missed something in the specification and I am more than happy
to be shown better ways to manage the development of our business applications. 
 It just seems to me that deciding the user’s experience is
detrimentally affected by the possibility of some developers having poor 
programming practices

Using synchronous calls in the UI thread is a bad practice.


-Olli



seems to be a fairly blinkered approach to the
development process of such fantastic tools as XMLHttpRequest .

I would welcome discussion or advice on this topic .

Thanks for your time in reading this

Paul Bellamy

(Director)

*/Pacific West Data Pty Ltd/*

/Ph:/ +61-412-754-052

www.pacificwestdata.com http://www.pacificwestdata.com/







Re: [April2014Meeting] Plans and expectations for specs in CR; deadline April 9

2014-04-09 Thread Olli Pettay

On 04/09/2014 09:10 PM, Arthur Barstow wrote:

On 4/9/14 10:19 AM, ext Zhang, Zhiqiang wrote:

From: Arthur Barstow [mailto:art.bars...@nokia.com]


* IDB: Zhiqiang; test suite status; plan to create an Implementation Report

I've reviewed the open pull requests and moved all tests into 
https://github.com/w3c/web-platform-tests/tree/master/IndexedDB, and ran the 
tests on
Chrome and Firefox in http://w3c-test.org/tools/runner/index.html (with /IndexedDB/ 
as the path) to get an initial implementation report:
http://zqzhang.github.io/IndexedDB-test/

Though we now have a test suite for IDB, it is not yet complete; at least we 
can do the following, as I observed:
- improve support.js as per the comment 
https://github.com/w3c/web-platform-tests/pull/516#issuecomment-39178659
- add more tests; for example, some of the exception checkpoints in 
https://github.com/w3c/web-platform-tests/pull/292 are valuable
- revisit the failed tests and provide fixes as needed
- add test results for IE11; can someone help with this?


Thanks for this information Zhiqiang and for creating a Draft Implementation 
Report for IDB!

(If no one volunteers to provide IE data before this week's meeting, we can 
ask Adrian and Kris during the meeting.)



* Server-sent Events: Zhiqiang; is the implementation report current
http://www.w3.org/wiki/Webapps/Interop/ServerSentEvents; when do
you
expect the CR exit criteria to be met?

Yes, with one update: request-credentials.htm passes on Chrome and Firefox 
with the fix https://github.com/w3c/web-platform-tests/pull/836
applied; however, the fix is not perfect.


Simon - would you (or someone else) please review 
http://w3c-test.org/eventsource/request-credentials.htm vis-à-vis PR836 
(without this patch, this
test case produces Timeout on Chrome, FFNightly and Opera)?


Also, I'd like to ask Mozilla experts to double-check whether shared and 
dedicated workers are supported in Firefox per the tests below, or whether there are
bugs in the tests themselves; can someone help with this?

- http://w3c-test.org/eventsource/dedicated-worker/
- http://w3c-test.org/eventsource/shared-worker/


My recollection is that during the TPAC2013 meeting, Jonas said that 
functionality was implemented so yes, it would be good if someone would please
clarify the situation and plan.


Gecko doesn't have EventSource in DedicatedWorkers or SharedWorkers.

-Olli





-Thanks, AB








Re: Extending Mutation Observers to address use cases of

2014-02-12 Thread Olli Pettay

On 02/12/2014 04:27 AM, Ryosuke Niwa wrote:



On Feb 11, 2014, at 6:06 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

* Olli Pettay wrote:

We could add some scheduling thing to mutation observers. By default we'd use 
microtask, since that tends to be good for various performance
reasons, but normal tasks or nanotasks could be possible too.


Right, we need some sort of a switch.  I'm not certain if we want to add it as 
a per-observation option or a global switch when we create an
observer. My gut feeling is that we want the latter.  It would be weird for 
some mutation records to be delivered earlier than others to the same
observer.



Yeah, I was thinking per observer.
Something like

var m = new MutationObserver(callback, { interval: task} );
m.observe(document, { childList: true, subtree: true});


Some devtools devs have asked for an 'interval: nanotask' option.
I was thinking of adding such a thing only for addons and the like in Gecko, because it 
brings
back some of the performance problems Mutation Events have.
But if the web components stuff would be less special with such an option, perhaps it 
should be enabled for all.
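The three delivery timings under discussion can be modeled with a toy scheduler (this is a sketch of the proposal's semantics, not the real MutationObserver; `queueRecord` and the `interval` values are invented for illustration):

```javascript
function makeObserver(callback, interval) {
  const records = [];
  let scheduled = false;
  function deliver() { scheduled = false; callback(records.splice(0)); }
  return {
    queueRecord(r) {
      records.push(r);
      if (interval === "nanotask") { deliver(); return; } // before the call returns
      if (scheduled) return;                              // coalesce pending records
      scheduled = true;
      if (interval === "task") setTimeout(deliver, 0);    // ordinary task
      else queueMicrotask(deliver);                       // today's default
    },
  };
}

// Microtask mode coalesces: two records, one delivery.
const batches = [];
const micro = makeObserver(batch => batches.push(batch.length), "microtask");
micro.queueRecord("a");
micro.queueRecord("b");
queueMicrotask(() => console.log(batches)); // [2]

// Nanotask mode delivers synchronously, like the old Mutation Events --
// which is exactly why it reintroduces their performance problems.
const sync = [];
const nano = makeObserver(batch => sync.push(...batch), "nanotask");
nano.queueRecord("x");
console.log(sync); // ["x"] -- delivered before queueRecord returned
```

The coalescing in the microtask/task modes is what makes them cheap; the nanotask mode fires once per mutation, which is the Mutation Events cost model.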




-Olli



I'd like to know the exact semantic requirements before jumping into details, 
though.


This sounds like adding a switch that would dynamically invalidate assumptions 
mutation observers might make, which sounds like a bad idea. Could
you elaborate?


I don't really follow what the problem is. Could you elaborate on what you see 
as a problem?

- R. Niwa






Re: Extending Mutation Observers to address use cases of

2014-02-11 Thread Olli Pettay

On 02/12/2014 03:41 AM, Ryosuke Niwa wrote:

Hi,

I’m bringing this up out of:

[Custom]: enteredView and leftView callbacks are still confusing
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24314

Could someone clarify exactly why mutation observers can’t satisfy use cases 
for custom elements?

I strongly believe that we should extend mutation observers (e.g. add some flag 
to fire more eagerly) so that we could *explain* these callbacks in terms of 
mutation observers.

- R. Niwa





Doesn't the web component stuff want to be notified way before mutation 
observer callbacks are called (at the end of the microtask)?
This new stuff would be called right before some DOM method call returns (I 
think someone mentioned 'nanotask').


We could add some scheduling thing to mutation observers. By default we'd use 
microtask, since that tends to be good
for various performance reasons, but normal tasks or nanotasks could be 
possible too.



-Olli






Officially deprecating main-thread synchronous XHR?

2014-02-07 Thread Olli Pettay

Hi all,


I wonder what people would think if we started to rather aggressively deprecate 
that horrible API,
main-thread sync XHR?
Currently its usage is still way too high (up to 2% based on telemetry data), 
but
if all the browsers warned about use of deprecated feature, we might be able to 
get its usage down.
And at least we'd improve responsiveness of those websites which stop using 
sync XHR because of the warning.




-Olli



Re: Officially deprecating main-thread synchronous XHR?

2014-02-07 Thread Olli Pettay

On 02/07/2014 07:32 PM, Scott González wrote:

What about developers who are sending requests as the page is unloading? My 
understanding is that sync requests are required. Is this not the case?


We need sendBeacon ASAP, and browsers could start by warning when sync XHR is 
used outside unload event listeners.
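The replacement being asked for looks roughly like this (the endpoint and payload are hypothetical; `navigator.sendBeacon` is the real Beacon API, feature-detected here since it was not yet widely shipped at the time):

```javascript
function reportUnload(data) {
  // Old pattern: sync XHR in an unload handler, blocking navigation.
  // New pattern: a fire-and-forget beacon the browser delivers after unload.
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    return navigator.sendBeacon("/analytics", JSON.stringify(data));
  }
  return false; // no beacon support: accept the loss rather than block
}

// In a page (hypothetical usage):
// window.addEventListener("unload", () => reportUnload({ t: Date.now() }));
```

Because the browser takes ownership of delivering the beacon, the page can navigate away immediately, which removes the one semi-legitimate use of main-thread sync XHR.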






On Friday, February 7, 2014, Anne van Kesteren ann...@annevk.nl 
mailto:ann...@annevk.nl wrote:

On Fri, Feb 7, 2014 at 6:18 PM, Jonas Sicking jo...@sicking.cc wrote:
  Agreed. I think for this to be effective we need to get multiple browser
  vendors being willing to add such a warning. We would also need to add 
text
  to the various versions of the spec (whatwg and w3c).

For what it's worth, this was done when Olli brought this up in #whatwg:
http://xhr.spec.whatwg.org/#sync-warning


--
http://annevankesteren.nl/








Re: Officially deprecating main-thread synchronous XHR?

2014-02-07 Thread Olli Pettay

On 02/08/2014 03:19 AM, James Greene wrote:

There are certain situations where sync XHRs are, in fact, required... unless 
we make other accommodations. For example, in the Clipboard API,
developers are allowed to inject into the clipboard as a semi-trusted event 
during the event handling phase of certain user-initiated events (e.g.
`click`).[1]  This has not been implemented in any browsers yet.

However, if browser vendors choose to treat this scenario as it is treated for 
Flash clipboard injection, then the semi-trusted state ends after the
default action for that event would occur.[2]

For Flash clipboard injection, this means that any required on-demand XHRs 
must be resolved synchronously. For the DOM Clipboard API, it would be
nice to either still be able to use sync XHRs or else we would need to 
specially authorize async XHRs that are started during the semi-trusted state
to have their completion handlers also still resolve/execute in a semi-trusted 
state.



Doesn't sound like a case where we should allow the horrible sync XHR to run.
If really needed, we can add something to the clipboard API to let it deal with 
asynchronous loading.






cc: Hallvord R. M. Steen

[1] http://dev.w3.org/2006/webapi/clipops/clipops.html#semi-trusted-event

[2] 
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/desktop/Clipboard.html#setData()

Sincerely,
 James Greene
 Sent from my [smart?]phone

On Feb 7, 2014 2:55 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com wrote:


On Feb 7, 2014, at 9:18 AM, Jonas Sicking jo...@sicking.cc 
mailto:jo...@sicking.cc wrote:


On Feb 7, 2014 8:57 AM, Domenic Denicola dome...@domenicdenicola.com 
mailto:dome...@domenicdenicola.com wrote:

 From: Olli Pettay olli.pet...@helsinki.fi 
mailto:olli.pet...@helsinki.fi

  And at least we'd improve responsiveness of those websites which stop 
using sync XHR because of the warning.

 I think this is a great point that makes such an effort worthwhile even 
if it ends up not leading to euthanizing sync XHR.

Agreed. I think for this to be effective we need to get multiple browser 
vendors being willing to add such a warning. We would also need to add
text to the various versions of the spec (whatwg and w3c).

Which browsers are game? (I think mozilla is). Which spec editors are?


I usually hate deprecation warnings because I think they are ineffective 
and time-wasting. But this case may be worthy of an exception. In
addition to console warnings in browsers and the alert in the spec, it 
might be useful to have a concerted documentation and outreach effort (e.g.
blog posts on the topic) as an additional push to get Web developers to 
stop using sync XHR.

Regards,
Maciej








Re: Request for feedback: Streams API

2013-12-16 Thread Olli Pettay

On 12/04/2013 06:27 PM, Feras Moussa wrote:

The editors of the Streams API have reached a milestone where we feel many of 
the major issues that have been identified thus far are now resolved and
incorporated in the editors draft.

The editors draft [1] has been heavily updated and reviewed the past few weeks 
to address all concerns raised, including:
1. Separation into two distinct types -ReadableByteStream and WritableByteStream
2. Explicit support for back pressure management
3. Improvements to help with pipe( ) and flow-control management
4. Updated spec text and diagrams for further clarifications

There are still a set of bugs being tracked in bugzilla. We would like others 
to please review the updated proposal, and provide any feedback they may
have (or file bugs).

Thanks.
-Feras


[1] https://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm



So per https://www.w3.org/Bugs/Public/show_bug.cgi?id=24054
it is not clear to me why the API is heavily Promise-based.
Event listeners tend to work better with stream-like APIs.


(The fact that Promises are hip at the moment is not a reason to use them for 
everything ;) )
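The design tension can be sketched with two toy readers (neither is the Streams API draft's actual interface; the shapes are invented to contrast the styles):

```javascript
// Event style: one registration, many notifications -- a natural fit
// for a source that pushes an unbounded number of chunks.
function makeEventStream(chunks) {
  const listeners = [];
  return {
    on(fn) { listeners.push(fn); },
    start() { chunks.forEach(c => listeners.forEach(fn => fn(c))); },
  };
}

const got = [];
const s = makeEventStream(["a", "b", "c"]);
s.on(chunk => got.push(chunk));
s.start();
console.log(got); // ["a", "b", "c"]

// Promise style: each read() settles exactly once, so consumers must
// loop, requesting a fresh promise per chunk (null signals end-of-stream).
function makePromiseStream(chunks) {
  let i = 0;
  return { read: () => Promise.resolve(i < chunks.length ? chunks[i++] : null) };
}
```

A promise is a one-shot value, so promise-based stream consumption is inherently a read-loop, whereas an event listener maps one registration to the whole sequence; that is the mismatch the comment points at.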

-Olli





Re: CfC: publish FPWD of UI Events; deadline May 4

2013-04-30 Thread Olli Pettay

+1


On 04/27/2013 05:30 PM, Arthur Barstow wrote:

As discussed during WebApps' April 25 meeting, this is a Call for Consensus to 
publish a First Public Working Draft of the UI Events spec using the
following ED as the basis:

   https://dvcs.w3.org/hg/d4e/raw-file/tip/source_respec.htm

This CfC satisfies the group's requirement to record the group's decision to 
request advancement.

By publishing this FPWD, the group sends a signal to the community to begin 
reviewing the document. The FPWD reflects where the group is on this spec
at the time of publication; it does _not_ necessarily mean there is consensus 
on the spec's contents.

If you have any comments or concerns about this CfC, please reply to this 
e-mail by May 4 at the latest. Positive response is preferred and
encouraged, and silence will be considered as agreement with the proposal.

-Thanks, AB

 Original Message 
Subject: ACTION-682: Start a CfC for FPWD of UI Events (and make sure it 
has a Bugzilla component) (Web Applications Working Group)
Date: Thu, 25 Apr 2013 17:29:27 +
From: ext Web Applications Working Group Issue Tracker 
sysbot+trac...@w3.org
Reply-To: Web Applications Working Group public-webapps@w3.org
To: art.bars...@nokia.com



ACTION-682: Start a CfC for FPWD of UI Events (and make sure it has a Bugzilla 
component) (Web Applications Working Group)

http://www.w3.org/2008/webapps/track/actions/682

On: Arthur Barstow
Due: 2013-05-02

If you do not want to be notified on new action items for this group, please 
update your settings at:
http://www.w3.org/2008/webapps/track/users/7672#settings









Re: [webcomponents] Making the shadow root an Element

2013-02-19 Thread Olli Pettay

On 02/19/2013 10:24 PM, Rafael Weinstein wrote:

On Mon, Feb 18, 2013 at 12:06 PM, Jonas Sicking jo...@sicking.cc 
mailto:jo...@sicking.cc wrote:

On Mon, Feb 18, 2013 at 1:48 AM, Anne van Kesteren ann...@annevk.nl 
mailto:ann...@annevk.nl wrote:
  On Sat, Feb 16, 2013 at 5:23 PM, Dimitri Glazkov dglaz...@google.com 
mailto:dglaz...@google.com wrote:
  We were thinking of adding innerHTML to DocumentFragments anyway... 
right, Anne?
 
  Well I thought so, but that plan didn't work out at the end of the day.
 
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=14694#c7
 
  So given that consensus still putting it on ShadowRoot strikes me like
  a bad idea (as I think I've said somewhere in a bug). The same goes
  for various other members of ShadowRoot.

I don't think there's a consensus really. JS authors were very vocal
about needing this ability. Does anyone have a link to the strong
case against adding explicit API for DF.innerHTML from Hixie that
that comment refers to?


Unfortunately that comment referred to an IRC discussion that took place last 
June on #whatwg.


We do have logs for #whatwg. See the topic of that channel.



IIRC, Hixie's position was that adding more explicit API for innerHTML is a 
moral hazard because it encourages an anti-pattern. (Also IIRC), Anne and
Henri both sided with Hixie at the time and the DF.innerHTML got left in a 
ditch.

It's also worth pointing out that if it was decided to have innerHTML on DF and 
on ShadowRoot, they would likely have subtly different semantics:

-DF.innerHTML would parse exactly the way template.innerHTML does (using the 
'implied context' parsing).
-SR.innerHTML would use its host as the context element, and the output would be as if 
the input *had been* applied to host.innerHTML, then lifted
out and attached to the SR.

(I believe the latter is currently the case for ShadowRoot).
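The context-dependence being described can be demonstrated in a page (this is a hedged sketch using the real `innerHTML`/`template` APIs; the function name is invented, and the expected results in comments reflect how current engines behave):

```javascript
function compareParsing(doc) {
  // Element-context parsing: in a <div> context, a stray <td> start tag
  // is dropped by the HTML parser and only its text survives.
  const div = doc.createElement("div");
  div.innerHTML = "<td>cell</td>";

  // <template> uses "implied context" parsing, so the <td> is kept.
  const template = doc.createElement("template");
  template.innerHTML = "<td>cell</td>";

  return {
    divResult: div.innerHTML,                            // "cell" in current engines
    templateChild: template.content.firstChild.nodeName, // "TD"
  };
}

// In a page: compareParsing(document)
```

The SR.innerHTML semantics described above would behave like the `div` case (host-context parsing), while DF.innerHTML would behave like the `template` case, which is why the two would subtly diverge.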


/ Jonas






Re: [XHR] Need to define the behavior when the Window the XHR is created from does not have an associated document

2012-12-14 Thread Olli Pettay

On 12/14/2012 09:46 PM, Boris Zbarsky wrote:

On 12/14/12 2:29 PM, Anne van Kesteren wrote:

Per Hixie the Document is associated with both the old and the new
Window. Meaning that XMLHttpRequest will function normally even though
XMLHttpRequest != window.XMLHttpRequest.


I'm not sure it actually will; Olli had some concerns about event dispatch in 
the responseXML if it's tied to the old window, not the new one,

That concern is a Gecko implementation detail.


 for

example.  Not sure to what extent those are Gecko-implementation-specific.

-Boris

Note that in Gecko the old window is in fact unhooked from the document during 
open(), to prevent memory leaks; we _may_ be able to change that, but
it's not clear.







Re: [Clipboard API] The before* events

2012-11-02 Thread Olli Pettay

On 11/02/2012 12:56 AM, Glenn Maynard wrote:

On Thu, Nov 1, 2012 at 5:14 PM, Hallvord Reiar Michaelsen Steen hallv...@opera.com 
mailto:hallv...@opera.com wrote:

The most IMHO elegant solution is what we implemented in Opera: we simply 
keep relevant menu entries enabled if there are event listeners
registered for the corresponding event. This sort of goes against the 
"registering event listeners should not have side effects" rule, but it's a
UI effect the page can't detect, so I guess it's ok.


This doesn't really work when pages put their event listeners further up the tree, 
e.g. capturing listeners on the document and other event
delegation tricks, right?


It should work just fine if you check the whole event target chain (from the 
target to the window object).


-Olli





--
Glenn Maynard






Re: Scheduling multiple types of end-of-(micro)task work

2012-10-18 Thread Olli Pettay

On 10/19/2012 01:19 AM, Alan Stearns wrote:

On 10/18/12 2:51 PM, Olli Pettay olli.pet...@helsinki.fi wrote:


On 10/19/2012 12:08 AM, Rafael Weinstein wrote:

CSS Regions regionLayoutUpdate brings up an issue I think we need to
get ahead of:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=16391

For context:

Mutation Observers are currently spec'd in DOM4

  http://dom.spec.whatwg.org/#mutation-observers

and delivery timing is defined in HTML


http://www.whatwg.org/specs/web-apps/current-work/#perform-a-microtask-ch
eckpoint

The timing here is described as a microtask checkpoint and is
conceptually "deliver all pending mutation records immediately after
any script invocation exits".

TC-39 has recently approved Object.observe

  http://wiki.ecmascript.org/doku.php?id=harmony:observe


(Not sure how that will work with native objects.)




for inclusion in ECMAScript. It is conceptually modeled on Mutation
Observers, and delivers all pending change records immediately
*before* the last script stack frame exits.

Additionally, although I've seen various discussion of dispatching DOM
Events with the microtask timing, CSS regionLayoutUpdate is the first
I'm aware of to attempt it

  http://dev.w3.org/csswg/css3-regions/#region-flow-layout-events



Could you explain why microtasks are good for this case?
I would have expected something bound to animation frame callback
handling,
or perhaps just tasks (but before the next layout flush or something).


In the spec bug discussion, it was suggested that we use end-of-task or
end-of-microtask timing. When I looked at these options, it seemed to me
that the regionLayoutUpdate event was somewhat close in intent to
MutationObservers. So between those two options, I picked microtask. If
there's a better place to trigger the event, I'm happy to make a change to
the spec.

The current wording may be wrong for separate reasons anyway. The event is
looking for layout changes. For instance, if the geometry of a region in
the region chain is modified, and this causes either (a) overflow in the
last region in the chain or (b) the last region in the chain to become
empty, then we want the event to trigger so that a script can add or
remove regions in the chain to make the content fit correctly. If a task
in the event queue caused the change, then the microtask point after that
task is probably too soon to evaluate whether the event needs to fire. And
if that was the last task in the queue, then there may not be another
microtask happening after layout has occurred.

So what I need is an appropriate timing step for responding to layout
changes. Any suggestions?



Is there something wrong with animation frame callbacks or similar?

(I'm not a layout hacker ;) )
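The alternative being suggested can be sketched like this (the function, region element, and overflow check are all hypothetical stand-ins for whatever a region-balancing script would actually do; only the `requestAnimationFrame` timing is the point):

```javascript
// Respond to layout-driven conditions from an animation frame callback,
// which runs once per displayed frame, after the previous frame's layout.
function watchRegionOverflow(region, onOverflow) {
  function check() {
    // Layout-dependent read: at worst this forces one fresh layout flush
    // per frame, instead of one per mutation as a microtask event would.
    if (region.scrollHeight > region.clientHeight) onOverflow(region);
    requestAnimationFrame(check); // re-check on the next frame
  }
  requestAnimationFrame(check);
}

// In a page (hypothetical usage):
// watchRegionOverflow(document.querySelector(".region-chain"), addRegion);
```

This sidesteps the problem described above: there is no need for a "microtask after layout", because the check is naturally scheduled once per rendered frame.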










[I think this is wrong, and I'm hoping this email can help nail down
what will work better].

---

Strawman:

I'd like to propose a mental model for how these types of work get
scheduled. Note that my guiding principles are consistent with the
original design of the end-of-(micro)task timing:

-Observers should be delivered async, but soon

-Best efforts should be made to prevent future events from running in
a world where pending observer work has not yet been completed.


Delivery cycles:

1) Script (Object.observe) delivery. This is conceptually identical to
Mutation Observers.


http://wiki.ecmascript.org/doku.php?id=harmony:observe#deliverallchangere
cords

2) DOM (Mutation Observers) delivery.

http://dom.spec.whatwg.org/#mutation-observers

3) End-of-task queue.

This would be a new construct. Conceptually it would be a task queue
like other task queues, except that its purpose is to schedule
end-of-task work. Running it causes events to be dispatched in order
until the queue is empty.


Scheduling:

A) Immediately before any script invocation returns to the browser
(after the last stack frame exits), run (1). This can be purely a
concern of the script engine and spec'd independent of HTML & DOM4.

B) Immediately after any script invocation returns to the browser
(microtask checkpoint), run (2). Note that delivering to each observer
creates a new script invocation, at the end of which, (1) will run
again because of (A).

C) Immediately before the UA completes the current task, run (2). This
is necessary in case DOM changes have occurred outside of a script
context (e.g. an input event triggered a change), and is already
implemented as part of DOM Mutation Observers.

D) Run (3). Note that each script invocation terminates in running (1)
because of (A), then (2) because of (B).











Re: [pointerlock] Is Pointer Lock feature complete i.e. LC ready? [Was: Re: [admin] Publishing specs before TPAC: CfC start deadline is Oct 15]

2012-10-02 Thread Olli Pettay

On 10/02/2012 11:55 PM, Florian Bösch wrote:

I'd like to point out that vendors are using additional failure criteria to 
determine if pointerlock succeeds that are not outlined in the
specification. Firefox uses the fullscreen change event to determine failure 
and Chrome requires the pointer lock request to fail if not resulting
from a user interaction target. I think that Firefox's interpretation is less 
useful than Chrome's,

But safer



 and that Chrome's interpretation should be amended

to the spec since it seems like a fairly good idea.


I'm not yet convinced that it is safe enough.
Also, it is not properly defined anywhere.



On Tue, Oct 2, 2012 at 10:37 PM, Chris Pearce cpea...@mozilla.com 
mailto:cpea...@mozilla.com wrote:

On 27/09/12 08:37, Vincent Scheib wrote:

On Wed, Sep 26, 2012 at 9:17 AM, Arthur Barstow art.bars...@nokia.com 
mailto:art.bars...@nokia.com wrote:

On 9/26/12 11:46 AM, ext Vincent Scheib wrote:

On Wed, Sep 26, 2012 at 7:27 AM, Arthur Barstow art.bars...@nokia.com 
mailto:art.bars...@nokia.com
wrote:

* Pointer Lock - Vincent - what's the status of the spec 
and its
implementation?

Firefox 14 and Chrome 22 shipped Pointer Lock implementations to
stable channel users recently. (Check out this Mozilla demo
https://developer.mozilla.org/__en-US/demos/detail/bananabread 
https://developer.mozilla.org/en-US/demos/detail/bananabread__, using
either.)

Pointer Lock specification did have minor adjustments 
(inter-document
and iframe sandbox security issues, pending state and mouse 
movement
clarifications). diffs:
http://dvcs.w3.org/hg/__pointerlock/log/default/index.__html 
http://dvcs.w3.org/hg/pointerlock/log/default/index.html

So, I'm happy to prepare an updated working draft.


Thanks for the update Vincent!

Do you and/or the implementers consider the spec feature complete, 
which is
a major factor to determine if the spec is Last Call ready (other
considerations are documented at [1])?

There are no known issues, and no known additional features. We
haven't seen many applications developed yet, but there have been a
few functionally complete demos.  Reading over [1] I believe it is
Last Call Ready.


I agree. No one involved on our side of things is aware of any remaining 
issues with the pointer lock spec.


Chris Pearce
(Mozilla's pointer lock implementation maintainer)









Re: [pointerlock] Is Pointer Lock feature complete i.e. LC ready? [Was: Re: [admin] Publishing specs before TPAC: CfC start deadline is Oct 15]

2012-10-02 Thread Olli Pettay

On 10/03/2012 12:59 AM, Florian Bösch wrote:

On Tue, Oct 2, 2012 at 11:52 PM, Olli Pettay olli.pet...@helsinki.fi 
mailto:olli.pet...@helsinki.fi wrote:

On 10/02/2012 11:55 PM, Florian Bösch wrote:

I'd like to point out that vendors are using additional failure 
criteria to determine if pointerlock succeeds that are not outlined in the
specification. Firefox uses the fullscreen change event to determine 
failure and chrome requires the pointer lock request to fail if not resulting
from a user interaction target. I think that Firefox's interpretation 
is less useful than Chrome's,

But safer

Also not in conformance to the specification (hence a bug). Additionally, it 
will make it really difficult to follow the specification since
non-fullscreen mouse capture is specifically intended by the specification by 
not adding that failure mode *to* the specification (there's a fairly
long discussion on this on the Chrome ticket for pointerlock, resulting in what 
Chrome does now).

  and that Chrome's interpretation should be amended

to the spec since it seems like a fairly good idea.

I'm not yet convinced that it is safe enough.
Also, it is not properly defined anywhere.

So either Chrome is also implementing in conformance to the specification, or 
the specification is changed.

Chrome is not following the spec, because per the spec one should be able to call 
requestPointerLock() whenever
the window/browser is focused, the element is in a document (the spec doesn't, btw, 
say which DOM tree),
and there is no sandboxed pointer lock flag.


Ipso facto, the specification is not
complete

Yup.


since I don't think Chrome will drop this failure mode, and it seems like 
Firefox is intending to follow Chrome's lead, because otherwise it
wouldn't be possible to implement non-fullscreen pointer lock.

Chrome has implemented the feature using one permission model. It is possible 
that Firefox will use the same, but the model
is such that it certainly needs a proper security review.


-Olli



Re: [XHR] Event processing during synchronous request

2012-09-09 Thread Olli Pettay

On 09/09/2012 06:33 PM, Mike Wilson wrote:

Is it defined how the browser should behave wrt calling
unrelated event handlers in user code during synchronous
XHR requests? (with unrelated I refer to events that are
not related to the ongoing synchronous request itself)

I didn't find statements directly addressing this in
http://www.w3.org/TR/XMLHttpRequest/
or
http://www.whatwg.org/specs/web-apps/current-work/multipage/fetching-resourc
es.html
but maybe there are indirect relationships between
specification sections that I am missing?
Or maybe it's deliberately undefined?

I ask because Firefox behaves differently to the other
popular browsers, in that it triggers event handlers for
other asynchronous XHR requests while blocking for a
synchronous XHR request.

That is a well-known bug in Gecko (other engines have or have had different 
kinds of bugs
related to sync XHR, like locks etc.).
But since synchronous XHR on the UI thread is by all means
effectively deprecated, and very bad for the UX, I wouldn't
expect the bug in Gecko to be fixed any time soon (at least not by me :) ).


-Olli




Thanks
Mike Wilson







Re: Sync API for workers

2012-09-06 Thread Olli Pettay

On 09/06/2012 09:12 AM, Jonas Sicking wrote:

On Wed, Sep 5, 2012 at 11:02 PM, b...@pettay.fi b...@pettay.fi wrote:

On 09/06/2012 08:31 AM, Jonas Sicking wrote:


On Wed, Sep 5, 2012 at 8:07 PM, Glenn Maynard gl...@zewt.org wrote:


On Wed, Sep 5, 2012 at 2:49 AM, Jonas Sicking jo...@sicking.cc wrote:



The problem with "only allow blocking on children, except that
window can't block on its children" is that you can never block on a
computation which is implemented in the main thread. I think that cuts
out some major use cases, since today's browsers have many APIs which
are only implemented in the main thread.



You can't have both -- you have to choose one of 1: allow blocking upwards,
2: allow blocking downwards, or 3: allow deadlocks. (I believe #1 is more
useful than #2, but each proposal can go both ways. I'm ignoring more
complex deadlock detection algorithms that can allow both #1 and #2, of
course, since that's a lot harder.)



Indeed. But I believe #2 is more useful than #1. I wasn't proposing
having both, I was proposing only doing #2.

It's actually technically possible to allow both #1 and #2 without
deadlock detection algorithms, but to keep things sane I'll leave that
as out of scope for this thread.
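The trade-off being discussed can be made concrete with a short sketch (purely illustrative; not part of any proposal): draw "X blocks on Y" as a directed edge, and a deadlock is exactly a cycle in that graph. Restricting blocking to a single direction of the parent/child tree keeps the graph acyclic.

```javascript
// Toy model: threads are nodes; "A blocks on B" is a directed edge A -> B.
// A deadlock is a cycle in this graph. If blocking is allowed in only one
// direction of the parent/child tree, no cycle can ever form.
function hasCycle(edges) {
  const adj = new Map();
  for (const [a, b] of edges) {
    if (!adj.has(a)) adj.set(a, []);
    adj.get(a).push(b);
  }
  const done = new Set();
  const inProgress = new Set();
  function visit(node) {
    if (inProgress.has(node)) return true;   // back edge: cycle found
    if (done.has(node)) return false;
    inProgress.add(node);
    for (const next of adj.get(node) || []) {
      if (visit(next)) return true;
    }
    inProgress.delete(node);
    done.add(node);
    return false;
  }
  return [...adj.keys()].some(visit);
}

// Blocking only "downwards" (parent on child): no cycle, no deadlock.
console.log(hasCycle([['main', 'w1'], ['w1', 'w2']]));   // false
// Allowing both directions: main waits on w1 while w1 waits on main.
console.log(hasCycle([['main', 'w1'], ['w1', 'main']])); // true
```

This is the "more complex deadlock detection" alluded to above: an engine allowing both directions would have to run something like this check at block time.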

[snip]


I think that's by far the most
interesting category of use cases raised for this feature so far, the
ability to implement sync APIs from async APIs (or several async APIs).



That is certainly an interesting use case. I think another interesting
use case is being able to write synchronous APIs in workers whose
implementation uses APIs that are only available on the main thread.

That's why I'm not interested in only blocking on children, but rather
only blocking on parents.


The fact that all the examples that people have used while we have
been discussing synchronous messaging have spun event loops in
attempts to deal with messages that couldn't be handled by the
synchronous poller makes me very much think that so will web
developers.



getMessage doesn't spin the event loop.  Spinning the event loop means
that tasks are run from task queues (such as asynchronous callbacks)
which might not be expecting to run, and that tasks might be run
recursively; none of that happens here. All this does is block until a
message is available on a specified port (or ports), and then returns
it -- it's just a blocking call, like sync XHR or FileReaderSync.



The example from Olli's proposal 3 does what effectively amounts to
spinning an event loop. It pulls a bunch of events out of the
normal event loop and then manually dispatches them in a while loop.
The behavior is exactly the same as spinning the event loop (except
that non-message tasks don't get dispatched).


It is just dispatching events.
The problems we (Gecko) have had with event loop spinning on the main thread
relate mainly to cases where unexpected events are dispatched while
running the loop, for example user input events or events coming from
the network.
getMessage/waitForMessage does not have that problem.


I'm not sure what you mean by just dispatching events. That's
exactly what event loop spinning is.


No. The waitForMessage example I wrote down just dispatches DOM events in a loop.
That is a synchronous operation, and you know exactly which events you're
about to dispatch.
If you run the generic event loop, you also end up running
timers and getting input from the network and the user, etc., and you can't
control those.



Why are the Gecko events any more unexpected than the message events
that the example dispatches?

We don't want to block certain events in Gecko (like user input to chrome).
Blocking events in worker code is ok.


The network events or user input events
that we had are all events created by Gecko code. The messages that
might get dispatched by the worker code can easily also be network
events or user events which are sent to the worker for processing.

Web pages can have just as much inconsistent state while deep in call
stacks as we do.

That is true...


If they at that point call into a library which
starts pulling messages off of the task queue and dispatches them,
they'll run into the same problems as we've had.


...but then it is up to the library to handle the case properly and
dispatch events async.


.Olli




/ Jonas






Re: Sync API for workers

2012-09-06 Thread Olli Pettay

On 09/06/2012 09:30 AM, Olli Pettay wrote:

On 09/06/2012 09:12 AM, Jonas Sicking wrote:

On Wed, Sep 5, 2012 at 11:02 PM, b...@pettay.fi b...@pettay.fi wrote:

On 09/06/2012 08:31 AM, Jonas Sicking wrote:


On Wed, Sep 5, 2012 at 8:07 PM, Glenn Maynard gl...@zewt.org wrote:


On Wed, Sep 5, 2012 at 2:49 AM, Jonas Sicking jo...@sicking.cc wrote:



The problem with "only allow blocking on children, except that
window can't block on its children" is that you can never block on a
computation which is implemented in the main thread. I think that cuts
out some major use cases, since today's browsers have many APIs which
are only implemented in the main thread.



You can't have both -- you have to choose one of 1: allow blocking upwards,
2: allow blocking downwards, or 3: allow deadlocks. (I believe #1 is more
useful than #2, but each proposal can go both ways. I'm ignoring more
complex deadlock detection algorithms that can allow both #1 and #2, of
course, since that's a lot harder.)



Indeed. But I believe #2 is more useful than #1. I wasn't proposing
having both, I was proposing only doing #2.

It's actually technically possible to allow both #1 and #2 without
deadlock detection algorithms, but to keep things sane I'll leave that
as out of scope for this thread.

[snip]


I think that's by far the most
interesting category of use cases raised for this feature so far, the
ability to implement sync APIs from async APIs (or several async APIs).



That is certainly an interesting use case. I think another interesting
use case is being able to write synchronous APIs in workers whose
implementation uses APIs that are only available on the main thread.

That's why I'm not interested in only blocking on children, but rather
only blocking on parents.


The fact that all the examples that people have used while we have
been discussing synchronous messaging have spun event loops in
attempts to deal with messages that couldn't be handled by the
synchronous poller makes me very much think that so will web
developers.



getMessage doesn't spin the event loop.  Spinning the event loop means
that tasks are run from task queues (such as asynchronous callbacks)
which might not be expecting to run, and that tasks might be run
recursively; none of that happens here. All this does is block until a
message is available on a specified port (or ports), and then returns
it -- it's just a blocking call, like sync XHR or FileReaderSync.



The example from Olli's proposal 3 does what effectively amounts to
spinning an event loop. It pulls a bunch of events out of the
normal event loop and then manually dispatches them in a while loop.
The behavior is exactly the same as spinning the event loop (except
that non-message tasks don't get dispatched).


It is just dispatching events.
The problems we (Gecko) have had with event loop spinning on the main thread
relate mainly to cases where unexpected events are dispatched while
running the loop, for example user input events or events coming from
the network.
getMessage/waitForMessage does not have that problem.


I'm not sure what you mean by just dispatching events. That's
exactly what event loop spinning is.


No. The waitForMessage example I wrote down just dispatches DOM events in a loop.
That is a synchronous operation, and you know exactly which events you're
about to dispatch.
If you run the generic event loop, you also end up running
timers and getting input from the network and the user, etc., and you can't
control those.



Why are the Gecko events any more unexpected than the message events
that the example dispatches?

We don't want to block certain events in Gecko (like user input to chrome).
Blocking events in worker code is ok.


The network events or user input events
that we had are all events created by Gecko code. The messages that
might get dispatched by the worker code can easily also be network
events or user events which are sent to the worker for processing.

Web pages can have just as much inconsistent state while deep in call
stacks as we do.

That is true...


If they at that point call into a library which
starts pulling messages off of the task queue and dispatches them,
they'll run into the same problems as we've had.


...but then it is up to the library to handle the case properly and
dispatch events async.



Though, dispatching events asynchronously so that newer message events don't
get handled before them would require some new API.



.Olli




/ Jonas









Re: Sync API for workers

2012-09-01 Thread Olli Pettay

On 09/01/2012 11:19 PM, Rick Waldron wrote:


David,

Thanks for preparing this summary -- I just wanted to note that I still stand 
behind my original, reality-based arguments.

One comment inline..

On Saturday, September 1, 2012 at 12:49 PM, David Bruant wrote:


Hi,

A Sync API for workers is being implemented in Firefox [1].
I'd like to come back to the discussions mentioned in comment 4 of the bug.

The original post actually describes an async API -- putting the word "sync" in the middle 
of a method or event name doesn't make it sync.

As the proposed API developed, it still retains the event handler-esque 
design (https://bugzilla.mozilla.org/show_bug.cgi?id=783190#c12). All of the
terminology being used is async:
- event
- callback
- onfoo

Even Olli's proposal example is async. 
https://bugzilla.mozilla.org/show_bug.cgi?id=783190#c9 (setTimeout)

If the argument is callback hell, save it -- because if that's the problem with 
your program, then you're doing it wrong (see: node.js ecosystem).


If this API introduces any renderer process blocking, the result will be 
catastrophic in the hands of inexperienced web developers.



I haven't seen any proposal which would block the rendering/main/DOM thread.


We've been considering the following approaches:

Proposal 1
Parent Thread:
var w = new Worker('foo.js');
w.onsyncmessage = function(event) {
  event.reply('bar');
}
Worker:
var r = postSyncMessage('foobar', null, 1000 /* timeout */);
if (r == 'bar') ..
PRO:
- It's already implemented :)
CON:
- Multiple event listeners - Multiple reply() calls. How to deal with it?
- Multiple event listeners - is this your message?
- Wrong order of the messages in worker if parent sends async message just 
before receiving sync message
- The message must be read in order to reply


Proposal 1.1
Parent Thread:
var w = new Worker('foo.js');
w.onsyncmessage = function(event) {
  var r = new Reply(event);
  r.reply('bar');  // Can be called after event dispatch.
}
Worker:
var replies = postSyncMessage('foobar', null, 1000 /* timeout */);
for (var i = 0; i < replies.length; i++) {
  handleEachReply(replies[i]);
}
PRO:
- Can handle multiple replies.
- No awkward limitations on main thread because of reply handling
CON:
- A bit ugly.
- Reply on the worker thread becomes an array - unintuitive
- Wrong order of the messages in worker if parent sends async message just 
before receiving sync message
- The Reply object must be created during event dispatch.


Proposal 2
Parent Thread:
var w = new Worker('foo.js');
w.setSyncHandler('typeFoobar', function(message) {
  return 'bar';
});
Worker:
var r = postSyncMessage('typeFoobar', 'foobar', null, 1000 /* timeout */);
if (r == 'bar') ..
PRO:
- No multiple replies are possible
- Types for sync messages
CON:
- Just a single listener
- It's not based on events; it's something different compared with any other 
worker/parent communication.
- Wrong order of the messages in worker if parent sends async message just 
before receiving sync message


Proposal 3
Worker:
postMessage('I want a reply to this');
var events = [];
var m;
while ((m = waitForMessage())) {
  if (!isReply(m.data)) {  // isReply: however the code recognizes the reply
    events.push(m);
  } else {
    // do something with the reply message
    break;
  }
}
while (events.length) {
   dispatchEvent(events.shift());
}
PRO:
- Flexible
- the order of the events is changed by the developer
- since there isn't any special sync messaging, multiple event listeners don't
  cause problems.
CON:
- Complex for web developers(?)
- The message must be read in order to reply
- Means that you can't use libraries that use sync messages. Only frameworks
  are possible, as all message handling needs to be aware of the new sync
  messages.





At the moment, I personally prefer proposal 3.


-Olli





Rick


A summary of points I find important and my comments, questions and concerns

# Discussion 1
## Glenn Maynard [2] Use case exposed:
Ability to cancel long-running synchronous worker task
Terminating the whole worker thread is the blunt way to do it; that's
no good since it requires starting a new thread for every keystroke, and
there may be significant startup costs (eg. loading search data).
= It's a legitimate use case that has no good solution today other than
cutting the task into smaller tasks between which a cancellation message
can be interleaved.


## Tab Atkins [3]
If we were to fix this, it needs to be done at the language level,
because there are language-level issues to be solved that can't be
hacked around by a specialized solution.
= I agree a lot with that point. This is a discussion that should be
had on es-discuss since JavaScript is the underlying language.
ECMAScript per se doesn't define a concurrency model and it's not even
on the table for ES.next, but might be in ES.next.next (7?). See [concurr]

## Jonas Sicking [4]
Ideas of providing control (read-only) over pending messages in workers.
(not part of the current Sync API, but interesting nonetheless)



# Discussion 2
## Joshua Bell [5]
This can be done today using bidirectional 

Re: Sync API for workers

2012-09-01 Thread Olli Pettay

On 09/01/2012 11:38 PM, Rick Waldron wrote:


So far, they all look async. Just calling them sync doesn't make them sync.


Sure they are sync. They are sync inside the worker. We all know that we must 
not introduce new sync APIs on the main thread.





Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-23 Thread Olli Pettay

On 08/22/2012 10:44 PM, Maciej Stachowiak wrote:


On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org 
mailto:o...@chromium.org wrote:


On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org 
mailto:rn...@webkit.org wrote:

On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org 
mailto:gl...@zewt.org wrote:

On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com wrote:

Ryosuke also raised the possibility of multiple text fields having 
separate UndoManagers. On Mac, most apps wipe their undo queue when
you change text field focus. WebKit preserves a single undo queue 
across text fields, so that tabbing out does not kill your ability to
undo. I don't know of any app where you get separate switchable 
persistent undo queues. Things are similar on iOS.


Think of the use-case of a threaded email client where you can reply to any 
message in the thread. If it shows your composing mails inline (e.g. as
gmail does), the most common user expectation IMO is that each email gets its 
own undo stack. If you undo the whole stack in one email you wouldn't
expect the next undo to start undoing stuff in another composing mail. In either 
case, since there's a simple workaround (seamless iframes), I don't
think we need the added complexity of the attribute.


Depends on the user and their platform of choice. On the Mac I think it's 
pretty much never the case that changing focus within a window changes your
undo stack, it either has a shared one or wipes undo history on focus switch. 
So if GMail forced that, users would probably be surprised. I can
imagine a use case for having an API that allows multiple undo stacks on 
platforms where they are appropriate, but merges to a single undo stack on
platforms where they are not. However, I suspect an API that could handle this 
automatically would be pretty hairy. So maybe we should handle the
basic single-undo-stack use case first and then think about complexifying it.



I think the undo-stack per editing context (like input) is pretty basic, and 
certainly something I wouldn't remove from Gecko.
(Largely because using the same undo for separate input elements is just very 
weird, and forcing web apps to use iframes to achieve
 Gecko's current behavior would be horribly complicated.)


-Olli







Firefox in Windows has a separate undo list for each input.  I would 
find a single undo list strange.


Internet Explorer and WebKit don't.

While we're probably all biased to think that what we're used to is the 
best behavior, it's important to design our API so that implementors
need not violate platform conventions. In this case, it might mean that 
whether a text field has its own undo manager by default depends on the
platform convention.


Also, another option is that we could allow shadow DOMs to have their own undo 
stack. So, you can make a control that has its own undo stack if you
want.


Again, I think it's not right to leave this purely up to the web page. That 
will lead to web apps that match their developer's platform of choice but
which don't seem quite right elsewhere.


BTW, I don't think the API should impose any requirements on how browsers 
handle undo for their built-in form controls. I have not read the spec close
enough to know if that is the case.


Regards,
Maciej






Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-23 Thread Olli Pettay

On 08/22/2012 11:16 PM, Maciej Stachowiak wrote:


On Aug 22, 2012, at 11:08 PM, Olli Pettay olli.pet...@helsinki.fi wrote:


On 08/22/2012 10:44 PM, Maciej Stachowiak wrote:


On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org 
mailto:o...@chromium.org wrote:


On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org 
mailto:rn...@webkit.org wrote:

On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org 
mailto:gl...@zewt.org wrote:

On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com wrote:

Ryosuke also raised the possibility of multiple text fields having separate 
UndoManagers. On Mac, most apps wipe their undo queue when you
change text field focus. WebKit preserves a single undo queue across text 
fields, so that tabbing out does not kill your ability to undo. I
don't know of any app where you get separate switchable persistent undo queues. 
Things are similar on iOS.


Think of the use-case of a threaded email client where you can reply to any 
message in the thread. If it shows your composing mails inline
(e.g. as gmail does), the most common user expectation IMO is that each email 
gets its own undo stack. If you undo the whole stack in one
email you wouldn't expect the next undo to start undoing stuff in another 
composing mail. In either case, since there's a simple workaround
(seamless iframes), I don't think we need the added complexity of the attribute.


Depends on the user and their platform of choice. On the Mac I think it's 
pretty much never the case that changing focus within a window
changes your undo stack, it either has a shared one or wipes undo history on 
focus switch. So if GMail forced that, users would probably be
surprised. I can imagine a use case for having an API that allows multiple undo 
stacks on platforms where they are appropriate, but merges to a
single undo stack on platforms where they are not. However, I suspect an API 
that could handle this automatically would be pretty hairy. So
maybe we should handle the basic single-undo-stack use case first and then 
think about complexifying it.



I think the undo-stack per editing context (like input) is pretty basic, and 
certainly something I wouldn't remove from Gecko. (Largely
because using the same undo for separate input elements is just very weird, 
and forcing web apps to use iframes to achieve Gecko's current
behavior would be horribly complicated.)


It might be ok to let Web pages conditionally get Gecko-like separate undo 
stack behavior inside Firefox, at least on Windows.




(Firefox even seems
to do per-field undo on Mac, so I'm starting to think that it's more of a Gecko 
quirk than a Windows platform thing.)


It is not. Also some other browser engines behave the same way.





But, again, letting webpages force that behavior in Safari seems wrong to me. I 
don't think we should allow violating the platform conventions for
undo so freely. You seem to feel strongly that webpages should be able to align 
with the Gecko behavior, but wouldn't it be even worse to let them
forcibly violate the WebKit behavior?


It is not worse either way. Equally bad both ways. But, we're designing a new 
API here, so we should make the API as good as possible from the start.
And I think that means allowing multiple undo stacks must be in. The default 
handling could be somehow platform specific.




So if there is an API for separate undo stacks, it has to handle the case where 
there's really a single undo stack. And that would potentially be
hard to program with.

On the other hand, there are certainly use cases where a single global undo 
stack is right (such as a page with a single rich text editor). And
it's easy to handle those cases without adding a lot of complexity. And if we 
get that right, we could try to add on something for conditional
multiple undo stacks.

Regards, Maciej









Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-23 Thread Olli Pettay

(And this time to the mailing list too. Sorry for spamming)


On 08/22/2012 11:16 PM, Maciej Stachowiak wrote:


On Aug 22, 2012, at 11:08 PM, Olli Pettay olli.pet...@helsinki.fi wrote:


On 08/22/2012 10:44 PM, Maciej Stachowiak wrote:


On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org 
mailto:o...@chromium.org wrote:


On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org 
mailto:rn...@webkit.org wrote:

On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org 
mailto:gl...@zewt.org wrote:

On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com wrote:

Ryosuke also raised the possibility of multiple text fields having separate 
UndoManagers. On Mac, most apps wipe their undo queue when you
change text field focus. WebKit preserves a single undo queue across text 
fields, so that tabbing out does not kill your ability to undo. I
don't know of any app where you get separate switchable persistent undo queues. 
Things are similar on iOS.


Think of the use-case of a threaded email client where you can reply to any 
message in the thread. If it shows your composing mails inline
(e.g. as gmail does), the most common user expectation IMO is that each email 
gets its own undo stack. If you undo the whole stack in one
email you wouldn't expect the next undo to start undoing stuff in another 
composing mail. In either case, since there's a simple workaround
(seamless iframes), I don't think we need the added complexity of the attribute.


Depends on the user and their platform of choice. On the Mac I think it's 
pretty much never the case that changing focus within a window
changes your undo stack, it either has a shared one or wipes undo history on 
focus switch. So if GMail forced that, users would probably be
surprised. I can imagine a use case for having an API that allows multiple undo 
stacks on platforms where they are appropriate, but merges to a
single undo stack on platforms where they are not. However, I suspect an API 
that could handle this automatically would be pretty hairy. So
maybe we should handle the basic single-undo-stack use case first and then 
think about complexifying it.



I think the undo-stack per editing context (like input) is pretty basic, and 
certainly something I wouldn't remove from Gecko. (Largely
because using the same undo for separate input elements is just very weird, 
and forcing web apps to use iframes to achieve Gecko's current
behavior would be horribly complicated.)


It might be ok to let Web pages conditionally get Gecko-like separate undo 
stack behavior inside Firefox, at least on Windows.




(Firefox even seems
to do per-field undo on Mac, so I'm starting to think that it's more of a Gecko 
quirk than a Windows platform thing.)


It is not. Also some other browser engines behave the same way.





But, again, letting webpages force that behavior in Safari seems wrong to me. I 
don't think we should allow violating the platform conventions for
undo so freely. You seem to feel strongly that webpages should be able to align 
with the Gecko behavior, but wouldn't it be even worse to let them
forcibly violate the WebKit behavior?


It is not worse either way. Equally bad both ways. But, we're designing a new 
API here, so we should make the API as good as possible from the start.
And I think that means allowing multiple undo stacks must be in. The default 
handling could be somehow platform specific.




So if there is an API for separate undo stacks, it has to handle the case where 
there's really a single undo stack. And that would potentially be
hard to program with.

On the other hand, there are certainly use cases where a single global undo 
stack is right (such as a page with a single rich text editor). And
it's easy to handle those cases without adding a lot of complexity. And if we 
get that right, we could try to add on something for conditional
multiple undo stacks.

Regards, Maciej









Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-23 Thread Olli Pettay

On 08/22/2012 11:28 PM, Ryosuke Niwa wrote:

On Wed, Aug 22, 2012 at 11:16 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com wrote:


On Aug 22, 2012, at 11:08 PM, Olli Pettay olli.pet...@helsinki.fi 
mailto:olli.pet...@helsinki.fi wrote:

  On 08/22/2012 10:44 PM, Maciej Stachowiak wrote:
 
  On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org mailto:o...@chromium.org 
mailto:o...@chromium.org mailto:o...@chromium.org
wrote:
 
  On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org 
mailto:rn...@webkit.org mailto:rn...@webkit.org
mailto:rn...@webkit.org wrote:
 
 On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org 
mailto:gl...@zewt.org mailto:gl...@zewt.org mailto:gl...@zewt.org wrote:
 
 On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com 
mailto:m...@apple.com mailto:m...@apple.com
mailto:m...@apple.com wrote:
 
 Ryosuke also raised the possibility of multiple text fields 
having separate UndoManagers. On Mac, most apps wipe their undo queue when
 you change text field focus. WebKit preserves a single undo 
queue across text fields, so that tabbing out does not kill your
ability to
 undo. I don't know of any app where you get separate 
switchable persistent undo queues. Things are similar on iOS.
 
 
  Think of the use-case of a threaded email client where you can reply 
to any message in the thread. If it shows your composing mails inline
(e.g. as
  gmail does), the most common user expectation IMO is that each email 
gets its own undo stack. If you undo the whole stack in one email you
wouldn't
  expect the next undo to start undoing stuff in another composing mail. In 
either case, since there's a simple workaround (seamless iframes), I don't
  think we need the added complexity of the attribute.
 
  Depends on the user and their platform of choice. On the Mac I think 
it's pretty much never the case that changing focus within a window
changes your
  undo stack, it either has a shared one or wipes undo history on focus 
switch. So if GMail forced that, users would probably be surprised. I can
  imagine a use case for having an API that allows multiple undo stacks 
on platforms where they are appropriate, but merges to a single undo
stack on
  platforms where they are not. However, I suspect an API that could 
handle this automatically would be pretty hairy. So maybe we should handle the
  basic single-undo-stack use case first and then think about 
complexifying it.
 
 
  I think the undo-stack per editing context (like input) is pretty 
basic, and certainly something I wouldn't remove from Gecko.
  (Largely because using the same undo for separate input elements is 
just very weird, and forcing web apps to use iframes to achieve
  Gecko's current behavior would be horribly complicated.)

It might be ok to let Web pages conditionally get Gecko-like separate undo 
stack behavior inside Firefox, at least on Windows. (Firefox even seems
to do per-field undo on Mac, so I'm starting to think that it's more of a 
Gecko quirk than a Windows platform thing.)

...

So if there is an API for separate undo stacks, it has to handle the case 
where there's really a single undo stack. And that would potentially be
hard to program with.

On the other hand, there are certainly use cases where a single global undo 
stack is right (such as a page with a single rich text editor). And
it's easy to handle those cases without adding a lot of complexity. And if 
we get that right, we could try to add on something for conditional
multiple undo stacks.


Maybe the solution is as simple as making the undoscope content attribute an 
optional feature.
Browsers/platforms that can have multiple undo managers
within a single document will support the undoscope content attribute, and 
those that can't won't. Authors will then feature-detect the undoscope
content attribute and support both cases.

What do you guys think?
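Such feature detection could look like the sketch below (the undoScope property name is a guess based on the proposed undoscope content attribute, and the plain objects stand in for DOM elements, since the API never shipped):

```javascript
// Hypothetical feature test for a reflected undoScope IDL attribute.
function supportsUndoScope(element) {
  return 'undoScope' in element;
}

// Plain objects simulating elements in supporting / non-supporting UAs:
const supportingElement = { undoScope: false };
const nonSupportingElement = {};
console.log(supportsUndoScope(supportingElement));    // true
console.log(supportsUndoScope(nonSupportingElement)); // false
```

In a page, the test would be run against a real element (e.g. `document.createElement('div')`) rather than these stand-ins.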


There should be no optional features in this kind of API.


-Olli






- Ryosuke






Re: GamepadObserver (ie. MutationObserver + Gamepad)

2012-08-07 Thread Olli Pettay

On 08/07/2012 03:29 AM, Glenn Maynard wrote:

On Sat, Aug 4, 2012 at 4:24 AM, Olli Pettay olli.pet...@helsinki.fi 
mailto:olli.pet...@helsinki.fi wrote:

5ms is quite low when the aim is 60Hz updates... but with 
incremental/generational GCs 5ms sounds very much possible.


5ms is an *eternity* when you're aiming for 60 FPS, where you only have 16.6ms 
per frame to play with.  That's 30% of your CPU budget just for memory
management.  It doesn't matter if it's 5ms every 100 frames, since it's the 
worst case you have to optimize for.  (I've spent a lot of time optimizing
non-web games to stay at 60 FPS, and it's a battle of microseconds, optimizing away .1ms 
here and .2ms there, so calling 5ms quite low is a bit
troubling.)
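The budget arithmetic behind the 30% figure is straightforward (a worked version of the numbers quoted above):

```javascript
// At 60 FPS each frame gets 1000/60 ms; a 5 ms GC pause eats the
// corresponding share of that per-frame budget.
const frameBudgetMs = 1000 / 60;         // ≈ 16.67 ms
const gcPauseMs = 5;
const share = gcPauseMs / frameBudgetMs; // 0.3
console.log(frameBudgetMs.toFixed(2) + ' ms');       // "16.67 ms"
console.log(Math.round(share * 100) + '% of frame'); // "30% of frame"
```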


It is quite different if you need to assume that a GC pause takes 180-250 ms 
or only 5 ms.
But sure, getting anything major done in 16.6 ms, and doing it so that things 
also work on slower machines, can be tricky.






Re: GamepadObserver (ie. MutationObserver + Gamepad)

2012-08-04 Thread Olli Pettay

On 08/04/2012 12:16 PM, Florian Bösch wrote:



On Sat, Aug 4, 2012 at 11:07 AM, b...@pettay.fi mailto:b...@pettay.fi b...@pettay.fi 
mailto:b...@pettay.fi wrote:

The update rate depends on the device. Tablet updates reach well beyond 
120 Hz, and even my 3D mouse clocks in at about 500 events/s. And a major
obstacle for a realtime input device is when the realtime app trying to 
use it stutters/jitters every quarter second because of 180-250 ms
GC pauses.


Such long GC pauses are bugs in the implementations. File bugs on all 
the browser engines where you see them (after testing the latest nightly builds).

It doesn't matter if they're bugs (I often see them in conjunction to array 
buffer allocation).

Of course it matters. APIs shouldn't be designed based on implementation bugs.



Even at the best of times the GC-pauses are no less
than 90ms, and even if you got it down to 60ms, that's still more than 5 frames 
@80hz. Until you get GC-pauses down to ~5ms or so, any GC use will
introduce unpleasant stuttering/jittering. And even then it's a close call to 
not miss a frame.

5ms is quite low when the aim is 60Hz updates... but with 
incremental/generational GCs 5ms sounds very much possible.




Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-12 Thread Olli Pettay

On 07/12/2012 12:07 PM, Yuval Sadan wrote:

I think we need to realize that a lot of the APIs that have been
designed in the past aren't terribly good APIs.

The IndexedDB API is rather new, and the manner in which it consistently uses 
event handlers on returned objects is rather innovative. The
DOMTransaction object is more similar to that.

In other words, I think it's more important to focus on what makes a
good API, than what is consistent with other DOM APIs.

Consistency has its value. Even if some is lacking, fixing it in some places 
and not in others might cause a jumble. Which is actually my feeling
about the IndexedDB API. Adding more syntactical variations can lead to hectic 
code.
However, I agree that it's not the primary concern.

Something that I really liked about the old API was the fact that
using it created very intuitive code. Basically you just write a class
the way you normally would write a class, and then pass in your
object:

x = {
   someState: 0,
   apply: function() { this.someState++; this.modifyDOM(); },
   unapply: function() { this.someState--; this.modifyDOMOtherWay(); },
   ...
};
undoManager.transact(x);


You can even do things like

undoManager.transact(createParagraphTransaction(params));

How's that different from:
function createParagraphTransaction(params) {
   var x = new DOMTransaction('Create paragraph');
   x.apply = function() { ... use params... };
   x.onundo = function() { ... use params ... };
   return x;
}

Also, in your example, I think that in the JS-object proposal you won't be able 
to reference the original object's properties -- it will be lost, and
'this' is window.


'this' would be the object, not window when callback object is used.
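The `this` binding Olli describes can be sketched outside the browser; the `dispatch` helper below is a hypothetical stand-in for how the platform invokes a callback object (it is not any real DOM API):

```javascript
// When the platform calls a method on a listener/transaction object,
// `this` inside that method is the object itself, not window. The
// dispatch() helper here is purely illustrative.
function dispatch(listener, eventName) {
  if (typeof listener === 'function') {
    listener(eventName);              // plain function callback
  } else {
    listener.handleEvent(eventName);  // callback object: `this` === listener
  }
}

const observed = [];
const listenerObj = {
  count: 0,
  handleEvent(name) {
    this.count++;                     // `this` is listenerObj; state persists
    observed.push(name + ':' + this.count);
  },
};

dispatch(listenerObj, 'undo');
dispatch(listenerObj, 'undo');
console.log(observed.join(','));      // -> undo:1,undo:2
```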




The fact that we have to choose between creating APIs that feel like
DOM APIs or JS APIs I think is an indication that DOM APIs are
doing things wrong. There should be no difference between DOM APIs
and JS APIs.

It is a problem. But WebIDL and JS aren't quite the same thing.





Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

On 07/05/2012 08:00 AM, Adam Barth wrote:

On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:


On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi
mailto:olli.pet...@helsinki.fi wrote:

 On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

 Hi all,

 Sukolsak has been implementing the Undo Manager API in WebKit but
the fact undoManager.transact() takes a pure JS object with callback
 functions is
 making it very challenging.  The problem is that this object needs
to be kept alive by either JS reference or DOM but doesn't have a backing
C++
 object.  Also, as far as we've looked, there are no other
specification that uses the same mechanism.


 I don't understand what is difficult.
 How is that any different to
 target.addEventListener('foo', { handleEvent: function() {} })


It will be very similar to that except this object is going to have 3
callbacks instead of one.

The problem is that the event listener is a very special object in WebKit
for which we have a lot of custom binding code. We don't want to implement a
similar behavior for the DOM transaction because it's very error prone.



So, it is very much implementation detail.
(And I still don't understand how a callback can be so hard in this case.
There are plenty of different kinds of callback objects.
  new MutationObserver(some_callback_function_object) )


I haven't tested, but my reading of the MutationObserver implementation
in WebKit is that it leaks.  Specifically:

MutationObserver --retains-- MutationCallback --retains--
some_callback_function_object --retains-- MutationObserver

I don't see any code that breaks this cycle.



Ok. In Gecko cycle collector breaks the cycle. But very much an implementation 
detail.




DOM events

Probably EventListeners, not Events.


have a bunch of delicate code to break these
reference cycles and avoid leaks.  We can re-invent that wheel here,

Or use some generic approach to fix such leaks.


but it's going to be buggy and leaky.

In certain kinds of implementations.



I appreciate that these jQuery-style APIs are fashionable at the
moment, but API fashions come and go.  If we use this approach, we'll
need to maintain this buggy, leaky code forever.

Implementation detail. Very much so :)

Do JS callbacks cause implementation problems in Presto or Trident?




-Olli




Instead, we can save
ourselves a lot of pain by just using events, like the rest of the web
platform.

Adam



 Since I want to make the API consistent with the rest of the
platform and the implementation maintainable in WebKit, I propose the
following
 changes:

* Re-introduce DOMTransaction interface so that scripts can
instantiate new DOMTransaction().
* Introduce AutomaticDOMTransaction that inherits from
DOMTransaction and has a constructor that takes two arguments: a function
and an
 optional label


 After this change, authors can write:
 scope.undoManager.transact(new AutomaticDOMTransaction(function () {
   scope.appendChild(foo);
 }, 'append foo'));


 Looks somewhat odd. DOMTransaction would be just a container for a
callback?


Right. If we wanted, we can make DOMTransaction an event target and
implement execute, undo, and redo as event listeners to further simplify the
matter.



That could make the code more consistent with rest of the platform, but the
API would become harder to use.



- Ryosuke








Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

But anyhow, event based API is ok to me.
In general I prefer events/event listeners over other callbacks.


On 07/05/2012 11:37 AM, Olli Pettay wrote:

On 07/05/2012 08:00 AM, Adam Barth wrote:

On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:


On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi
mailto:olli.pet...@helsinki.fi wrote:

 On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

 Hi all,

 Sukolsak has been implementing the Undo Manager API in WebKit but
the fact undoManager.transact() takes a pure JS object with callback
 functions is
 making it very challenging.  The problem is that this object needs
to be kept alive by either JS reference or DOM but doesn't have a backing
C++
 object.  Also, as far as we've looked, there are no other
specification that uses the same mechanism.


 I don't understand what is difficult.
 How is that any different to
 target.addEventListener('foo', { handleEvent: function() {} })


It will be very similar to that except this object is going to have 3
callbacks instead of one.

The problem is that the event listener is a very special object in WebKit
for which we have a lot of custom binding code. We don't want to implement a
similar behavior for the DOM transaction because it's very error prone.



So, it is very much implementation detail.
(And I still don't understand how a callback can be so hard in this case.
There are plenty of different kinds of callback objects.
  new MutationObserver(some_callback_function_object) )


I haven't tested, but my reading of the MutationObserver implementation
in WebKit is that it leaks.  Specifically:

MutationObserver --retains-- MutationCallback --retains--
some_callback_function_object --retains-- MutationObserver

I don't see any code that breaks this cycle.



Ok. In Gecko cycle collector breaks the cycle. But very much an implementation 
detail.




DOM events

Probably EventListeners, not Events.


have a bunch of delicate code to break these
reference cycles and avoid leaks.  We can re-invent that wheel here,

Or use some generic approach to fix such leaks.


but it's going to be buggy and leaky.

In certain kinds of implementations.



I appreciate that these jQuery-style APIs are fashionable at the
moment, but API fashions come and go.  If we use this approach, we'll
need to maintain this buggy, leaky code forever.

Implementation detail. Very much so :)

Do JS callbacks cause implementation problems in Presto or Trident?




-Olli




Instead, we can save
ourselves a lot of pain by just using events, like the rest of the web
platform.

Adam



 Since I want to make the API consistent with the rest of the
platform and the implementation maintainable in WebKit, I propose the
following
 changes:

* Re-introduce DOMTransaction interface so that scripts can
instantiate new DOMTransaction().
* Introduce AutomaticDOMTransaction that inherits from
DOMTransaction and has a constructor that takes two arguments: a function
and an
 optional label


 After this change, authors can write:
 scope.undoManager.transact(new AutomaticDOMTransaction(function () {
   scope.appendChild(foo);
 }, 'append foo'));


 Looks somewhat odd. DOMTransaction would be just a container for a
callback?


Right. If we wanted, we can make DOMTransaction an event target and
implement execute, undo, and redo as event listeners to further simplify the
matter.



That could make the code more consistent with rest of the platform, but the
API would become harder to use.



- Ryosuke











Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

On 07/05/2012 05:15 PM, Adam Barth wrote:

On Thu, Jul 5, 2012 at 1:37 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 07/05/2012 08:00 AM, Adam Barth wrote:

On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi
wrote:

On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:
So, it is very much implementation detail.

(And I still don't understand how a callback can be so hard in this case.
There are plenty of different kinds of callback objects.
   new MutationObserver(some_callback_function_object) )


I haven't tested, but my reading of the MutationObserver implementation
in WebKit is that it leaks.  Specifically:

MutationObserver --retains-- MutationCallback --retains--
some_callback_function_object --retains-- MutationObserver

I don't see any code that breaks this cycle.


Ok. In Gecko cycle collector breaks the cycle. But very much an
implementation detail.


DOM events


Probably EventListeners, not Events.


have a bunch of delicate code to break these
reference cycles and avoid leaks.  We can re-invent that wheel here,


Or use some generic approach to fix such leaks.


but it's going to be buggy and leaky.


In certain kinds of implementations.


I appreciate that these jQuery-style APIs are fashionable at the
moment, but API fashions come and go.  If we use this approach, we'll
need to maintain this buggy, leaky code forever.


Implementation detail. Very much so :)


Right, my point is that this style of API is difficult to implement
correctly, which means authors will end up suffering low-quality
implementations for a long time.


My point is that it is not too difficult to implement such an API correctly;
it just happens to be difficult currently in one(?) implementation.




On Thu, Jul 5, 2012 at 2:22 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

But anyhow, event based API is ok to me.
In general I prefer events/event listeners over other callbacks.


Great.  I'd recommend going with that approach because it will let us
provide authors with high-quality implementations of the spec much
sooner.

Adam






Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

On 07/05/2012 08:01 PM, Ojan Vafai wrote:

On Thu, Jul 5, 2012 at 7:15 AM, Adam Barth w...@adambarth.com wrote:

On Thu, Jul 5, 2012 at 1:37 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
  On 07/05/2012 08:00 AM, Adam Barth wrote:
  On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi
  wrote:
  On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:
  So, it is very much implementation detail.
 
  (And I still don't understand how a callback can be so hard in this 
case.
  There are plenty of different kinds of callback objects.
new MutationObserver(some_callback_function_object) )
 
  I haven't tested, but my reading of the MutationObserver implementation
  in WebKit is that it leaks.  Specifically:
 
  MutationObserver --retains-- MutationCallback --retains--
  some_callback_function_object --retains-- MutationObserver
 
  I don't see any code that breaks this cycle.
 
  Ok. In Gecko cycle collector breaks the cycle. But very much an
  implementation detail.
 
  DOM events
 
  Probably EventListeners, not Events.
 
  have a bunch of delicate code to break these
  reference cycles and avoid leaks.  We can re-invent that wheel here,
 
  Or use some generic approach to fix such leaks.
 
  but it's going to be buggy and leaky.
 
  In certain kinds of implementations.
 
  I appreciate that these jQuery-style APIs are fashionable at the
  moment, but API fashions come and go.  If we use this approach, we'll
  need to maintain this buggy, leaky code forever.
 
  Implementation detail. Very much so :)

Right, my point is that this style of API is difficult to implement
correctly, which means authors will end up suffering low-quality
implementations for a long time.

On Thu, Jul 5, 2012 at 2:22 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
  But anyhow, event based API is ok to me.
  In general I prefer events/event listeners over other callbacks.

Great.  I'd recommend going with that approach because it will let us
provide authors with high-quality implementations of the spec much
sooner.


The downside of events is that they have a higher overhead than we originally 
thought was acceptable for mutation events (e.g. just computing the
ancestor chain is too expensive). Now that we fire less frequently, the 
overhead might be OK, but it's still not great IMO.

Only having a high-overhead option for any new APIs we add is problematic. I 
appreciate the implementation complexity concern, but I think we just
need to make callbacks work.

We could fire the event on the MutationObserver itself. That would be 
lightweight. That doesn't help though, right?

new MutationObserver().addEventListener('onMutation', function() {}) vs. new 
MutationObserver().observe(function() {})


MutationObserver ctor takes the callback, not observe().

We're certainly not going to change MutationObserver callback handling.
MutationObserver is already unprefixed and all.
And in the MutationObserver case, having just one callback object per observer 
keeps things easier.
In many other cases the possibility of many callbacks (event listeners) is 
preferred.


But about UndoManager, I'd like to see a proposal for how to handle 
transactions using events.



-Olli







Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

Btw, is there something unique with UndoManager which causes implementation 
problems in WebKit?
There are plenty of other APIs not using eventlisteners which take JS 
callbacks: setTimeout, requestAnimationFrame,
Google's File System API, PeerConnection ... Why aren't those causing problems?

We shouldn't change the UndoManager API because of implementation issues, but 
only if an event-based API ends up being better.



On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

Hi all,

Sukolsak has been implementing the Undo Manager API in WebKit but the fact 
undoManager.transact() takes a pure JS object with callback functions is
making it very challenging.  The problem is that this object needs to be kept 
alive by either JS reference or DOM but doesn't have a backing C++
object.  Also, as far as we've looked, there are no other specification that 
uses the same mechanism.

Since I want to make the API consistent with the rest of the platform and the 
implementation maintainable in WebKit, I propose the following changes:

  * Re-introduce DOMTransaction interface so that scripts can instantiate new 
DOMTransaction().
  * Introduce AutomaticDOMTransaction that inherits from DOMTransaction and has 
a constructor that takes two arguments: a function and an optional label

After this change, authors can write:
scope.undoManager.transact(new AutomaticDOMTransaction(function () {
 scope.appendChild(foo);
}, 'append foo'));

instead of:

scope.undoManager.transact({executeAutomatic: function () {
 scope.appendChild(foo);
}, label: 'append foo'});

And

document.undoManager.transact(new DOMTransaction(function () {
 // Draw a line on canvas
 }, function () {
 // Undraw a line
 }, function () { this.execute(); },
 'Draw a line'
));

instead of:

document.undoManager.transact({ execute: function () {
 // Draw a line on canvas
 }, undo: function () {
 // Undraw a line
 }, redo: function () { this.execute(); },
 label: 'Draw a line'
});
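The dictionary shape in the two `transact()` calls above is easy to emulate in plain JS. The toy manager below is purely illustrative (not the spec's algorithm); it only shows why the callback-object style reads naturally:

```javascript
// Toy undo manager honoring the { execute, undo, redo, label } dictionary
// shape proposed above. Purely illustrative; not the spec's algorithm.
function makeUndoManager() {
  const done = [];
  const undone = [];
  return {
    transact(tx) { tx.execute(); done.push(tx); undone.length = 0; },
    undo() { const tx = done.pop(); if (tx) { tx.undo(); undone.push(tx); } },
    redo() { const tx = undone.pop(); if (tx) { tx.redo(); done.push(tx); } },
  };
}

const lines = [];
const um = makeUndoManager();
um.transact({
  execute: function () { lines.push('line'); },
  undo: function () { lines.pop(); },
  redo: function () { this.execute(); },  // `this` is the transaction dict
  label: 'Draw a line',
});
um.undo();
um.redo();
console.log(lines.length);  // -> 1
```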

Best,
Ryosuke Niwa
Software Engineer
Google Inc.






Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Olli Pettay

On 07/05/2012 10:05 PM, Ryosuke Niwa wrote:

Also, I think consistency matters a lot here. I'm not aware of any other 
Web-facing API that takes a pure object with callback functions.

Except of course event listeners. Well, addEventListener can take an object 
with _a_ callback function.


I don't think it's reasonable to agree on an unimplementable design.

How is the current UndoManager unimplementable? It is just a bit hard in one 
implementation.
(I must say I'm _very_ surprised to learn that JS callback objects can be so 
hard to implement in WebKit.)


In theory, mutation events can be implemented correctly but we couldn't, so 
we're
moving on and getting rid of it.

Mutation Events is a bad API, and "implemented correctly" isn't well defined 
since there isn't a proper
spec for Mutation Events. Bad APIs shouldn't be implemented. UndoManager is a 
different beast. I don't see anything bad in how it
currently handles callbacks. (But I also don't object to changing it to use 
events if a good API is designed.)



-Olli



Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-04 Thread Olli Pettay

On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

Hi all,

Sukolsak has been implementing the Undo Manager API in WebKit but the fact 
undoManager.transact() takes a pure JS object with callback functions is
making it very challenging.  The problem is that this object needs to be kept 
alive by either JS reference or DOM but doesn't have a backing C++
object.  Also, as far as we've looked, there are no other specification that 
uses the same mechanism.


I don't understand what is difficult.
How is that any different to
target.addEventListener('foo', { handleEvent: function() {} })




Since I want to make the API consistent with the rest of the platform and the 
implementation maintainable in WebKit, I propose the following changes:

  * Re-introduce DOMTransaction interface so that scripts can instantiate new 
DOMTransaction().
  * Introduce AutomaticDOMTransaction that inherits from DOMTransaction and has 
a constructor that takes two arguments: a function and an optional label

After this change, authors can write:
scope.undoManager.transact(new AutomaticDOMTransaction(function () {
 scope.appendChild(foo);
}, 'append foo'));


Looks somewhat odd. DOMTransaction would be just a container for a callback?



instead of:

scope.undoManager.transact({executeAutomatic: function () {
 scope.appendChild(foo);
}, label: 'append foo'});

And

document.undoManager.transact(new DOMTransaction(function () {
 // Draw a line on canvas
 }, function () {
 // Undraw a line
 }, function () { this.execute(); },
 'Draw a line'
));

instead of:

document.undoManager.transact({ execute: function () {
 // Draw a line on canvas
 }, undo: function () {
 // Undraw a line
 }, redo: function () { this.execute(); },
 label: 'Draw a line'
});

Best,
Ryosuke Niwa
Software Engineer
Google Inc.






Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-04 Thread Olli Pettay

On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:

On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

Hi all,

Sukolsak has been implementing the Undo Manager API in WebKit but the 
fact undoManager.transact() takes a pure JS object with callback
functions is
making it very challenging.  The problem is that this object needs to 
be kept alive by either JS reference or DOM but doesn't have a backing C++
object.  Also, as far as we've looked, there are no other specification 
that uses the same mechanism.


I don't understand what is difficult.
How is that any different to
target.addEventListener('foo', { handleEvent: function() {} })


It will be very similar to that except this object is going to have 3 callbacks 
instead of one.

The problem is that the event listener is a very special object in WebKit for 
which we have a lot of custom binding code. We don't want to implement a
similar behavior for the DOM transaction because it's very error prone.


So, it is very much implementation detail.
(And I still don't understand how a callback can be so hard in this case. There 
are plenty of different kinds of callback objects.
 new MutationObserver(some_callback_function_object) )




Since I want to make the API consistent with the rest of the platform 
and the implementation maintainable in WebKit, I propose the following
changes:

   * Re-introduce DOMTransaction interface so that scripts can 
instantiate new DOMTransaction().
   * Introduce AutomaticDOMTransaction that inherits from 
DOMTransaction and has a constructor that takes two arguments: a function and an
optional label


After this change, authors can write:
scope.undoManager.transact(new AutomaticDOMTransaction(function () {
  scope.appendChild(foo);
}, 'append foo'));


Looks somewhat odd. DOMTransaction would be just a container for a callback?


Right. If we wanted, we can make DOMTransaction an event target and implement 
execute, undo, and redo as event listeners to further simplify the matter.


That could make the code more consistent with rest of the platform, but the API 
would become harder to use.



- Ryosuke






Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-04 Thread Olli Pettay

On 07/05/2012 03:25 AM, Olli Pettay wrote:

On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:

On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

Hi all,

Sukolsak has been implementing the Undo Manager API in WebKit but the 
fact undoManager.transact() takes a pure JS object with callback
functions is
making it very challenging.  The problem is that this object needs to 
be kept alive by either JS reference or DOM but doesn't have a backing
C++
object.  Also, as far as we've looked, there are no other specification 
that uses the same mechanism.


I don't understand what is difficult.
How is that any different to
target.addEventListener('foo', { handleEvent: function() {} })


It will be very similar to that except this object is going to have 3 callbacks 
instead of one.

The problem is that the event listener is a very special object in WebKit for 
which we have a lot of custom binding code. We don't want to implement a
similar behavior for the DOM transaction because it's very error prone.


So, it is very much implementation detail.
(And I still don't understand how a callback can be so hard in this case. There 
are plenty of different kinds of callback objects.
  new MutationObserver(some_callback_function_object) )




Since I want to make the API consistent with the rest of the platform 
and the implementation maintainable in WebKit, I propose the following
changes:

   * Re-introduce DOMTransaction interface so that scripts can 
instantiate new DOMTransaction().
   * Introduce AutomaticDOMTransaction that inherits from 
DOMTransaction and has a constructor that takes two arguments: a function and an
optional label


After this change, authors can write:
scope.undoManager.transact(new AutomaticDOMTransaction(function () {
  scope.appendChild(foo);
}, 'append foo'));


Looks somewhat odd. DOMTransaction would be just a container for a callback?


Right. If we wanted, we can make DOMTransaction an event target and implement 
execute, undo, and redo as event listeners to further simplify the matter.


That could make the code more consistent with rest of the platform, but the API 
would become harder to use.




Perhaps the API could be something like
undomanager.transact(foo); That would return a Transaction object which 
implements EventTarget.
Then in the common case:
undomanager.transact(foo).onundo = function(evt) { /* do something. */ }
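That sketch can be fleshed out into runnable form. All names below (Transaction, UndoManager, dispatchUndo) are hypothetical illustrations of the event-target idea, not part of the actual UndoManager draft:

```javascript
// Hypothetical sketch of the event-target idea above: transact() runs the
// "do" function and returns a transaction object; the caller attaches an
// undo listener afterwards, via addEventListener or an onundo property.
class Transaction {
  constructor() { this.listeners = []; this.onundo = null; }
  addEventListener(type, fn) {
    if (type === 'undo') this.listeners.push(fn);
  }
  dispatchUndo() {
    if (this.onundo) this.onundo({ type: 'undo' });
    for (const fn of this.listeners) fn({ type: 'undo' });
  }
}
class UndoManager {
  constructor() { this.stack = []; }
  transact(doFn) {
    doFn();                        // apply the transaction immediately
    const t = new Transaction();
    this.stack.push(t);
    return t;                      // caller hooks up undo handling
  }
  undo() {
    const t = this.stack.pop();
    if (t) t.dispatchUndo();
  }
}

const log = [];
const um = new UndoManager();
um.transact(() => log.push('do')).onundo = () => log.push('undo');
um.undo();
console.log(log.join(','));        // -> do,undo
```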




- Ryosuke









Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-27 Thread Olli Pettay

On 06/26/2012 11:58 PM, Adam Klein wrote:

On Wed, Jun 20, 2012 at 12:29 AM, Anne van Kesteren ann...@annevk.nl wrote:

On Tue, Jun 19, 2012 at 10:52 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
  end-of-microtask or end-of-task everywhere. And yes, some parsing /
  networking details may unfortunately be exposed,
  but in a way which should be quite random. Web devs just can't really rely
  on network packets being delivered to the parser in some exact way.

I think the original solution we had to not expose parser mutations
was better. Exposing this can lead to all kinds of subtle bugs that
are hard to detect for developers.


I take it from your reply that you and I had the same view of what's specced in 
DOM4.


DOM4 doesn't say anything about this. And because it doesn't special-case 
parser-initiated mutations,
those mutations should be there.


That is, that MutationObservers are not specified to be notified
of actions taken by the parser.

That would still be very odd. The randomness that there is, is visible to 
scripts already now.
You can't know the size of network packets, so you can't know exactly how 
documents are parsed, etc.

It would also be odd for scripts which use a mutation observer to suddenly 
start working when the
document enters some state (readyState == complete or some such).
During page load, user-initiated events could cause all sorts of mutations, 
but there wasn't a way to get notified
about those.


Given that fact, it seems that either the spec should be changed (and by spec 
here I think the required changes are
in HTML, not DOM), or Firefox's implementation ought to be changed.

Anne, Ian, Olli, Jonas, your thoughts?

- Adam





Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-21 Thread Olli Pettay

On 06/20/2012 10:36 AM, Ryosuke Niwa wrote:

On Tue, Jun 19, 2012 at 1:52 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

  Ojan points out

that simply using end-of-task could expose low-level implementation 
detail of the parser to script (such as how much parsing is done in a
single task
before the parser yields).

Does Firefox do anything special here? Or does it simply use the same 
end-of-task delivery as everywhere else?


end-of-microtask or end-of-task everywhere. And yes, some parsing / 
networking details may unfortunately be exposed, but in a way which should be
quite random. Web devs just can't really rely on network packets being 
delivered to the parser in some exact way.


That randomness seems undesirable. Can we delay the delivery until 
DOMContentLoaded is fired so that we can have more consistent behavior here?


Well, the randomness is about the same randomness which is exposed to web pages 
already.
  <img src="http://www.example.org/nonexisting.png" onerror="console.log('img')">
  <script>console.log('script')</script>

the order of 'img' and 'script' in the console is random.


-Olli






- Ryosuke






Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-20 Thread Olli Pettay

On 06/20/2012 10:36 AM, Ryosuke Niwa wrote:

On Tue, Jun 19, 2012 at 1:52 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

  Ojan points out

that simply using end-of-task could expose low-level implementation 
detail of the parser to script (such as how much parsing is done in a
single task
before the parser yields).

Does Firefox do anything special here? Or does it simply use the same 
end-of-task delivery as everywhere else?


end-of-microtask or end-of-task everywhere. And yes, some parsing / 
networking details may unfortunately be exposed, but in a way which should be
quite random. Web devs just can't really rely on network packets being 
delivered to the parser in some exact way.


That randomness seems undesirable. Can we delay the delivery until 
DOMContentLoaded is fired so that we can have more consistent behavior here?


That prevents using MutationObserver for certain cases. Like when you stream 
data using an iframe.

Also, there are already many cases where networking/parsing handling is exposed 
to web pages.
Just put any <img onload=".." onerror=".."> in the page. When the handlers run, 
the stuff after the img may or may not be in the document.






- Ryosuke






Re: Web Notifications

2012-06-20 Thread Olli Pettay

On 06/20/2012 11:58 AM, Anne van Kesteren wrote:

Hi,

The Web Notifications WG is planning to move Web Notifications to W3C
Last Call meaning we don't intend to change it. But we might have
missed something and would therefore appreciate your review of
http://dvcs.w3.org/hg/notifications/raw-file/tip/Overview.html and any
comments you might have at public-web-notificat...@w3.org.

Cheers,





Seems like tags are global. I think they should be per origin.


-Olli



Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-19 Thread Olli Pettay

On 06/19/2012 11:37 PM, Adam Klein wrote:

On Sun, Jun 17, 2012 at 12:17 PM, Ryosuke Niwa rn...@webkit.org wrote:

On Sun, Jun 17, 2012 at 5:03 AM, Jonas Sicking jo...@sicking.cc wrote:

On Sat, Jun 16, 2012 at 7:04 AM, Rafael Weinstein rafa...@google.com wrote:
  I too thought we had intentionally spec'd them to not fire during 
load.
 
  The HTML spec is clear about this WRT Mutation Events:
 
  http://www.whatwg.org/specs/web-apps/current-work/#tree-construction:
 
  DOM mutation events must not fire for changes caused by the UA
  parsing the document. (Conceptually, the parser is not mutating the
  DOM, it is constructing it.) This includes the parsing of any content
  inserted using document.write() and document.writeln() calls.
 
  It seems like this should also apply to Mutation Observers, unless we
  have compelling reasons to do otherwise.

This was something that we got people complaining about with mutation
events over the years. Our answer used to be that mutation events
generally suck and you can't depend on them anyway. Obviously not an
argument we'd want to use for MutationObservers.

I can't think of any cases where you would *not* want these to fire
for parser mutations.


Agreed. I'm in favor of observers being notified for parser-initiated DOM 
mutations. The primary reason we don't fire mutation events for parser
insertion  removal is because they're synchronous and introduces all sorts 
of problems including security vulnerabilities but that isn't the case
with mutation observers.

One question. Should we also notify mutation observers immediately before 
executing synchronous scripts (i.e. script elements without defer or
async content attributes) to address Mihai's use case?


This is one part of a more general question (raised by Ojan on 
http://webkit.org/b/89351): what should the timing be for delivery
of these parser-initiated mutations? Mihai's use case is one example where we 
might want something other than end-of-task delivery.

Mihai's use case is related to 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17529 (where I think Gecko's 
behavior is the good one).
If we start to call mutation callbacks right before script execution, we would 
need to handle cases where the script element is moved in the DOM right before 
script execution, etc. Ugly stuff.


Ojan points out that simply using end-of-task could expose low-level 
implementation details of the parser to script (such as how much parsing is 
done in a single task before the parser yields).

Does Firefox do anything special here? Or does it simply use the same 
end-of-task delivery as everywhere else?


end-of-microtask or end-of-task everywhere. And yes, some parsing/networking 
details may unfortunately be exposed, but in a way which should be quite 
random. Web devs just can't rely on network packets being delivered to the 
parser in some exact way.



- Adam





Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-17 Thread Olli Pettay

On 06/17/2012 03:03 PM, Jonas Sicking wrote:

On Sat, Jun 16, 2012 at 7:04 AM, Rafael Weinstein rafa...@google.com wrote:

I too thought we had intentionally spec'd them to not fire during load.

The HTML spec is clear about this WRT Mutation Events:

http://www.whatwg.org/specs/web-apps/current-work/#tree-construction:

DOM mutation events must not fire for changes caused by the UA
parsing the document. (Conceptually, the parser is not mutating the
DOM, it is constructing it.) This includes the parsing of any content
inserted using document.write() and document.writeln() calls.

It seems like this should also apply to Mutation Observers, unless we
have compelling reasons to do otherwise.


This was something that we got people complaining about with mutation
events over the years. Our answer used to be that mutation events
generally suck and you can't depend on them anyway. Obviously not an
argument we'd want to use for MutationObservers.

I can't think of any cases where you would *not* want these to fire
for parser mutations.


I agree. Better to try to keep the API consistent and create mutation records 
for
all the mutations.







For example, if you are building an XBL-like widget library which uses
the DOM under a node to affect behavior or rendering of some other
object. If you attach the widget before the node is fully parsed, you
still need to know about modifications that happen due to parsing.

If you are tracking all nodes which have a particular class name or
element name (for example to attach behavior to them, or as a
performance improvement in order to keep a live list of nodes matching
a selector), then you need to know about mutations that the parser
performed.

Also, if changing code from using document.write to using
.insertAdjacentHTML or other DOM features, why should that change
whether observers are notified?
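The "track all nodes with a particular class" case above can be sketched with a MutationObserver whose callback logic is a plain function (the helper and hook names here are illustrative, not from any library); if parser mutations are observable, the same code handles nodes inserted during page load:

```javascript
// Collect element nodes carrying a given class from a batch of
// MutationRecords, regardless of whether script or the parser added them.
function collectAddedWithClass(records, className) {
  const found = [];
  for (const record of records) {
    for (const node of record.addedNodes) {
      if (node.nodeType === 1 && node.classList.contains(className)) {
        found.push(node);
      }
    }
  }
  return found;
}

// In a browser, attach the observer early (e.g. from a script in <head>):
// new MutationObserver((records) => {
//   for (const el of collectAddedWithClass(records, 'widget')) {
//     attachBehavior(el); // hypothetical widget-library hook
//   }
// }).observe(document.documentElement, { childList: true, subtree: true });
```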

Are there use cases for *not* wanting to know about parser mutations,
but still knowing about script-initiated mutations? What about situations
where a node is moved around, such that the parser ends up inserting nodes
not at the end of the document? I.e. if a node A is done parsing,
but then an ancestor B of the current parser insertion point is moved
to become a child of A, which causes the parser to again mutate A's
descendants.

/ Jonas






Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-06-17 Thread Olli Pettay

On 06/17/2012 10:17 PM, Ryosuke Niwa wrote:

On Sun, Jun 17, 2012 at 5:03 AM, Jonas Sicking jo...@sicking.cc 
mailto:jo...@sicking.cc wrote:

On Sat, Jun 16, 2012 at 7:04 AM, Rafael Weinstein rafa...@google.com 
mailto:rafa...@google.com wrote:
  I too thought we had intentionally spec'd them to not fire during load.
 
  The HTML spec is clear about this WRT Mutation Events:
 
  http://www.whatwg.org/specs/web-apps/current-work/#tree-construction:
 
  DOM mutation events must not fire for changes caused by the UA
  parsing the document. (Conceptually, the parser is not mutating the
  DOM, it is constructing it.) This includes the parsing of any content
  inserted using document.write() and document.writeln() calls.
 
  It seems like this should also apply to Mutation Observers, unless we
  have compelling reasons to do otherwise.

This was something that we got people complaining about with mutation
events over the years. Our answer used to be that mutation events
generally suck and you can't depend on them anyway. Obviously not an
argument we'd want to use for MutationObservers.

I can't think of any cases where you would *not* want these to fire
for parser mutations.


Agreed. I'm in favor of observers being notified for parser-initiated DOM 
mutations. The primary reason we don't fire mutation events for parser
insertion & removal is that they're synchronous and introduce all sorts of 
problems, including security vulnerabilities, but that isn't the case
with mutation observers.

One question. Should we also notify mutation observers immediately before 
executing synchronous scripts (i.e. script elements without defer or async
content attributes) to address Mihai's use case?


That would be rather odd. If someone needs to process mutation records before 
the normal delivery time, there is always takeRecords().
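For reference, the takeRecords() pattern mentioned above looks like this (a minimal sketch; flushPendingMutations is a hypothetical helper, not a platform API):

```javascript
// Synchronously drain an observer's queued records and hand them to the
// normal callback, instead of waiting for end-of-(micro)task delivery.
function flushPendingMutations(observer, callback) {
  const pending = observer.takeRecords(); // empties the observer's queue
  if (pending.length > 0) {
    callback(pending);
  }
  return pending.length;
}

// A script that must see up-to-date state could call this first:
// flushPendingMutations(myObserver, myCallback);
```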




- Ryosuke






Re: [whatwg] Fullscreen events dispatched to elements

2012-06-05 Thread Olli Pettay

On 06/05/2012 09:31 AM, Jer Noble wrote:


On Jun 4, 2012, at 11:23 PM, Robert O'Callahan rob...@ocallahan.org wrote:


If you implemented that proposal as-is then authors would usually need a 
listener on the document as well as the element, and as Chris pointed
out, it's simpler to just always listen on the document.

Is that true for the Webkit implementation or did you implement something 
slightly different?


Sorry, you're right; we did implement something slightly different. We always 
dispatch a message to the element, and additionally one to the document
if the element has been removed from the document. So authors only have to add 
event listeners to one or the other.


That is rather unusual behavior. I don't recall any other case where such an 
additional event is dispatched if the node is removed from the document.



-Olli





-Jer






Re: CfC: publish a FPWD of Web Components Explainer; deadline May 9

2012-05-02 Thread Olli Pettay

I don't understand this.
The explainer doesn't look like something which should become a 
recommendation.

It just, well, explains how the various proposed APIs work.
So, why do we need the explainer as a FPWD?


-Olli

On 05/02/2012 11:22 PM, Arthur Barstow wrote:

As discussed during WebApps' May 1 f2f meeting [1], the Web Components
Explainer document is ready for a First Public Working Draft (FPWD)
publication and this a Call for Consensus (CfC) to do so:

http://dvcs.w3.org/hg/webcomponents/raw-file/tip/explainer/index.html

This CfC satisfies the group's requirement to record the group's
decision to request advancement.

By publishing this FPWD, the group sends a signal to the community to
begin reviewing the document. The FPWD reflects where the group is on
this spec at the time of publication; it does not necessarily mean there
is consensus on the spec's contents.

Positive response to this CfC is preferred and encouraged and silence
will be considered as agreement with the proposal. The deadline for
comments is May 9. Please send all comments to:

public-webapps@w3.org

-Art Barstow

[1] http://www.w3.org/2012/05/01-webapps-minutes.html#item03

 Original Message 
Subject: ACTION-659: Start a CfC to publish a FPWD of Web Components
Explainer (when an ED with TR template is available) (Web Applications
Working Group)
Date: Tue, 1 May 2012 19:16:17 +
From: ext Web Applications Working Group Issue Tracker
sysbot+trac...@w3.org
Reply-To: Web Applications Working Group public-webapps@w3.org
To: art.bars...@nokia.com



ACTION-659: Start a CfC to publish a FPWD of Web Components Explainer
(when an ED with TR template is available) (Web Applications Working Group)

http://www.w3.org/2008/webapps/track/actions/659

On: Arthur Barstow
Due: 2012-05-08

If you do not want to be notified on new action items for this group,
please update your settings at:
http://www.w3.org/2008/webapps/track/users/7672#settings









Re: GamepadObserver (ie. MutationObserver + Gamepad)

2012-05-02 Thread Olli Pettay

On 05/03/2012 12:48 AM, Rick Waldron wrote:

Instead of traditional DOM events being used for Other Events[1], and
considering the high frequency of Gamepad state changes, it might make
sense to provide an API similar to MutationObserver, where a
MutationRecord is created that has snapshots of current and previous
states of axes or buttons...


This is entirely hypothetical:

(new GamepadObserver(function(mutations) {

   console.log( mutations );
   /*
   {
 previousState: {
   readonly attribute string   id;
   readonly attribute long index;
   readonly attribute DOMTimeStamp timestamp;

// Either or both of the following, based on the options list

   readonly attribute float[]  axes;
   readonly attribute float[]  buttons;
 }

 currentState: {
   readonly attribute string   id;
   readonly attribute long index;
   readonly attribute DOMTimeStamp timestamp;

// Either or both of the following, based on the options list

   readonly attribute float[]  axes;
   readonly attribute float[]  buttons;
 }
   }
   */
})).observe(navigator.gamepads[0], { axesList: true });

//  axesList, buttonsList

[1] http://dvcs.w3.org/hg/gamepad/raw-file/tip/gamepad.html#other-events


Rick



No need for this kind of thing. Gamepad data is external, so dispatching 
events is better. The event can of course carry a list of changes since the 
previous event dispatch.
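The "list of changes since the previous event dispatch" could be computed by diffing two snapshots, e.g. (a hypothetical sketch, not part of any spec):

```javascript
// Diff two gamepad state snapshots ({ axes: [...], buttons: [...] });
// the resulting change list could ride along on a gamepad change event.
function diffGamepadState(prev, curr) {
  const changes = [];
  curr.axes.forEach((value, i) => {
    if (value !== prev.axes[i]) {
      changes.push({ kind: 'axis', index: i, from: prev.axes[i], to: value });
    }
  });
  curr.buttons.forEach((value, i) => {
    if (value !== prev.buttons[i]) {
      changes.push({ kind: 'button', index: i, from: prev.buttons[i], to: value });
    }
  });
  return changes;
}
```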


-Olli




Re: [DOM3 Events/DOM4] re-dispatching trusted events with initEvent

2012-04-24 Thread Olli Pettay

On 04/24/2012 09:43 PM, Travis Leithead wrote:

Based on my reading of DOM4, initEvent makes it possible to transform
a trusted event into a non-trusted event and dispatch it. Is that
intentional?

AFAIK, yes



It is only currently supported in Firefox and Opera. In
IE, Chrome and Safari, the initEvent call is ignored in this
scenario. After the initEvent call is ignored, Chrome will allow you
to dispatch the event (unchanged), IE will not (per the prose
currently in DOM3 Events). Note, Chrome doesn't report the
isTrusted property, so I can't tell if initEvent would have set
that flag to false (hope so)!

I'm trying to rationalize the behavior between DOM3 and DOM4.

DOM3 Events was pretty clear that you can't dispatch an event that
wasn't created with createEvent.

Sounds like a bug. That wasn't the intention when isTrusted was added.


Pretty simple. That's contrary to
DOM4 at the moment (which allows it as long as it's been
initialized); I wonder if there needs to be another check to prevent
re-dispatching a trusted event? Is there a specific reason for the
current behavior?

DOM3 Events is not very clear about initEvent at the moment. Should
it be allowed to convert a trusted event to a non-trusted event?

Yes. It should be possible to re-dispatch events. But if a script
running on a web page dispatches an event, the event must become
untrusted.


-Olli



Seems like trouble. Given that IE9 and Chrome/Safari don't allow it,
it won't be a compatibility issue to disallow it.

Let's come to an agreement on this so that the two specs can be
harmonious on this point.

-Travis








Re: [DOM3 Events/DOM4] re-dispatching trusted events with initEvent

2012-04-24 Thread Olli Pettay

On 04/25/2012 12:16 AM, Anne van Kesteren wrote:

On Tue, 24 Apr 2012 23:02:22 +0200, Boris Zbarsky bzbar...@mit.edu wrote:

(DOM3's language
about default actions confuses this; I suggest reading DOM4's event
section to get a good picture of how this actually works.)


Or rather how the DOM4 editor is choosing to conceptualize it, which
may not have much bearing on how it actually works in actual browsers.


Last time I discussed this with Jonas Sicking he agreed that Gecko could
change some things here and he also agreed with the model put forward.


It is not only about Gecko but about all the browser engines, at least the 
last time I tested them.




If the model is wrong we should fix it of course.

It does indeed not apply universally and as far as I know HTML does
cater for those exceptions in various ways. It would be interesting to
know where it does not.

I'm not sure how extensions are relevant here. If you allow them to do
complex things then of course they will be complex to implement, but
there is not much we can do about that.








Re: Recent Sync XHR changes and impact on automatically translated JavaScript code

2012-03-20 Thread Olli Pettay


On 03/20/2012 06:09 PM, Gordon Williams wrote:

Hi,

I recently posted on
https://bugs.webkit.org/show_bug.cgi?id=72154
https://bugzilla.mozilla.org/show_bug.cgi?id=716765
about the change to XHR which now appears to be working its way into
Mainstream users' browsers.

As requested, I'll pursue on this list - apologies for the earlier bug
spam.

My issue is that I have WebGL JavaScript that is machine-generated from
a binary file - which is itself synchronous. It was working fine:

http://www.morphyre.com/scenedesign/go

It now fails on Firefox (and shortly on Chrome I imagine) because it
can't get an ArrayBuffer from a synchronous request. It may be possible
to split the execution and make it asynchronous, however this is a very
large undertaking as you may get an idea of from looking at the source.

My understanding is that JavaScript Workers won't be able to access
WebGL, so I am unable to just run the code as a worker.

What options do I have here?

* Extensive rewrite to try and automatically translate the code to be
asynchronous
* Use normal Synchronous XHR and send the data I require in text form,
then turn it back into an ArrayBuffer with JavaScript

Are there any other options?

Right now, #2 is looking like the only sensible option - which is a
shame as it will drastically decrease the UX.


#1 sounds like the only reasonable option.
You have now code like:

var x = new XMLHttpRequest();
x.open(...);
x.send();
// ...do something with x.response


Why couldn't async work?

var x = new XMLHttpRequest();
x.open(...);
x.onload = function () { /* ...do something with x.response */ };
x.send();

If you need to prevent user events while XHR is active, put some 
transparent overlay over the page.

If timers shouldn't run, wrap setTimeout with your own stuff which can
suspend timers when XHR is active.
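The setTimeout-wrapping idea can be sketched like this (a hypothetical helper; the page would route its timers through it, suspending while the async XHR is in flight and resuming in onload):

```javascript
// Wrap a setTimeout-like function so callbacks can be held back while
// "suspended" and released, in order, when resumed.
function makeSuspendableTimers(realSetTimeout) {
  let suspended = false;
  const queue = [];
  return {
    setTimeout(fn, delay) {
      realSetTimeout(() => {
        if (suspended) {
          queue.push(fn); // hold the callback until resume()
        } else {
          fn();
        }
      }, delay);
    },
    suspend() { suspended = true; },
    resume() {
      suspended = false;
      while (queue.length > 0) queue.shift()();
    },
  };
}

// Usage sketch: timers.suspend() before xhr.send(); timers.resume() in xhr.onload.
```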



-Olli







- Gordon








Re: Recent Sync XHR changes and impact on automatically translated JavaScript code

2012-03-20 Thread Olli Pettay

I think we should try to get rid of sync XHR in window context.
It takes time, and can be painful, but sync APIs in window
context are just not acceptable.

-Olli


On 03/20/2012 08:03 PM, Gordon Williams wrote:

Thanks for the suggestions...

Just so I'm certain: The #3 option is to run in a Worker, and then to
put all WebGL calls in an array, and then use the UI thread to check
that array and execute calls based on its contents?

There is a small amount of state querying of WebGL going on, so that's
probably going to stall quite badly in my case, but it's definitely a
solution. I suppose if this is going to be the accepted way forwards,
somebody could write a library that transparently passed WebGL calls
from a worker into the UI thread, and the transition might be relatively
painless.

I totally understand about the need to deter people from using the UI
thread, however it seems that while synchronous XHR exists at all,
deliberately removing features in some cases just makes developers lives
more difficult - and may force them into the synchronous JSON option -
which can't be good for anybody.

- Gordon

On 20/03/12 17:07, Jarred Nicholls wrote:

On Tue, Mar 20, 2012 at 12:09 PM, Gordon Williams g...@pur3.co.uk
mailto:g...@pur3.co.uk wrote:

Hi,

I recently posted on
https://bugs.webkit.org/show_bug.cgi?id=72154
https://bugzilla.mozilla.org/show_bug.cgi?id=716765
about the change to XHR which now appears to be working its way
into Mainstream users' browsers.

As requested, I'll pursue on this list - apologies for the earlier
bug spam.

My issue is that I have WebGL JavaScript that is machine-generated
from a binary file - which is itself synchronous. It was working fine:

http://www.morphyre.com/scenedesign/go

It now fails on Firefox (and shortly on Chrome I imagine) because
it can't get an ArrayBuffer from a synchronous request. It may be
possible to split the execution and make it asynchronous, however
this is a very large undertaking as you may get an idea of from
looking at the source.

My understanding is that JavaScript Workers won't be able to
access WebGL, so I am unable to just run the code as a worker.

What options do I have here?

* Extensive rewrite to try and automatically translate the code to
be asynchronous
* Use normal Synchronous XHR and send the data I require in text
form, then turn it back into an ArrayBuffer with JavaScript

Are there any other options?

Right now, #2 is looking like the only sensible option - which is
a shame as it will drastically decrease the UX.

- Gordon



#1 is the best option long term.  All web platform APIs in the window
context - going forward - are asynchronous and this isn't going to be
the last time someone runs into this issue.

#2 is a reasonable stop gap; and assuming things like large textures
are being downloaded, the text-to-preallocated-TypedArray copy will be
overshadowed by the wait for large I/O to complete from a remote source.

I believe there is a #3, which is a hybrid of sync APIs, Workers, and
message posting. You can use a worker to perform these sync
operations and post data back to the main UI thread, where an event
loop/handler runs and has access to the WebGL context. Firefox 6+ and
Chrome 13+ have support for structured cloning; there's overhead
involved, but it works and might be an easier translation than creating
async JS. Chrome 17+ has transferable objects, so data passing is
wicked fast.
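Option #3 can be sketched as a command recorder: the worker records GL calls by name, posts the batch, and the UI thread replays them on the real context (illustrative helper names, not a real library; as noted above, state *queries* would still need a round trip):

```javascript
// Worker side: record calls instead of executing them.
function makeCommandRecorder() {
  const commands = [];
  return {
    call(name, ...args) { commands.push([name, args]); },
    flush() { return commands.splice(0, commands.length); }, // take & clear
  };
}

// Main-thread side: replay a batch on the real WebGL context.
function replayCommands(gl, batch) {
  for (const [name, args] of batch) {
    gl[name](...args);
  }
}

// Worker:       rec.call('clearColor', 0, 0, 0, 1); postMessage(rec.flush());
// Main thread:  onmessage = (e) => replayCommands(glContext, e.data);
```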

Jarred








Re: Disallowing mutation events in shadow DOM

2012-02-23 Thread Olli Pettay

On 02/24/2012 01:38 AM, Ryosuke Niwa wrote:

Can we disallow mutation events inside shadow DOM?

Sounds good to me.
Whatever shadow dom spec will be implemented, mutation events shouldn't
fire there. Mutation observers should work.


-Olli





There is no legacy content that depends on the mutation events API inside
shadow DOM, and we have a nice spec & implementation of the new mutation
observer API already.

FYI, https://bugs.webkit.org/show_bug.cgi?id=79278

Best,
Ryosuke Niwa
Software Engineer
Google Inc.







Re: Disallowing mutation events in shadow DOM

2012-02-23 Thread Olli Pettay

On 02/24/2012 02:10 AM, Brian Kardell wrote:

Just to be clear on this:  what is the status of mutation observers?


They are in DOM 4. The API may still change a bit, but
there is already one implementation, and another one close to
ready.



Is there any chance shadow DOM beats mutation observers to
standardization?

AFAIK, shadow DOM is quite far from being stable.



 I don't think so, but just checking... If that turned
out to be the case, it could cripple shadow DOM until such a time..

Brian

On Feb 23, 2012 6:46 PM, Dimitri Glazkov dglaz...@chromium.org
mailto:dglaz...@chromium.org wrote:

Sounds good. Filed a bug here:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16096

:DG

On Thu, Feb 23, 2012 at 3:38 PM, Ryosuke Niwa rn...@webkit.org
mailto:rn...@webkit.org wrote:
  Can we disallow mutation events inside shadow DOM?
 
  There is no legacy content that depends on the mutation events API
inside shadow
  DOM, and we have a nice spec & implementation of the new mutation
observer API
  already.
 
  FYI, https://bugs.webkit.org/show_bug.cgi?id=79278
 
  Best,
  Ryosuke Niwa
  Software Engineer
  Google Inc.
 
 






Re: CG for Speech JavaScript API

2012-02-14 Thread Olli Pettay

So, if I haven't made it clear before,
doing the initial standardization work in a CG sounds ok to me.
I do expect that there will be a WG eventually, but perhaps
a CG is a faster and more lightweight way to start - well, to continue from
what the XG did.

-Olli


On 01/31/2012 06:01 PM, Glen Shires wrote:

We at Google propose the formation of a new Community Group to pursue a
JavaScript Speech API. Specifically, we are proposing this Javascript
API [1], which enables web developers to incorporate speech recognition
and synthesis into their web pages, and supports the majority of
use-cases in the Speech Incubator Group's Final Report [2]. This API
enables developers to use scripting to generate text-to-speech output
and to use speech recognition as an input for forms, continuous
dictation and control. For this first specification, we believe
this simplified subset API will accelerate implementation,
interoperability testing, standardization and ultimately developer
adoption. However, in the spirit of consensus, we are willing to broaden
this subset API to include additional Javascript API features in the
Speech Incubator Final Report.

We believe that forming a Community Group has the following advantages:

- It’s quick, efficient and minimizes unnecessary process overhead.

- We believe it will allow us, as a group, to reach consensus in an
efficient manner.

- We hope it will expedite interoperable implementations in multiple
browsers. (A good example is the Web Media Text Tracks CG, where
multiple implementations are happening quickly.)

- We propose the CG will use the public-webapps@w3.org
mailto:public-webapps@w3.org as its mailing list to provide visibility
to a wider audience, with a balanced web-centric view for new JavaScript
APIs.  This arrangement has worked well for the HTML Editing API CG [3].
Contributions to the specification produced by the Speech API CG will be
governed by the Community Group CLA and the CG is responsible for
ensuring that all Contributions come from participants that have agreed
to the CG CLA.  We believe the response to the CfC [4] has shown
substantial interest and support by WebApps members.

- A CG provides an IPR environment that simplifies future transition to
standards track.

Google plans to supply an implementation and a test suite for this
specification, and will commit to serve as editor.  We hope that others
will support this CG as they had stated support for the similar WebApps
CfC. [4]

Bjorn Bringert
Satish Sampath
Glen Shires

[1]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
[2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
[3] http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
[4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html





Re: Speech Recognition and Text-to-Speech Javascript API - seeking feedback for eventual standardization

2012-01-09 Thread Olli Pettay

On 01/09/2012 04:59 PM, Arthur Barstow wrote:

Hi All,

As I indicated in [1], WebApps already has a relatively large number of
specs in progress and the group has agreed to add some new specs. As
such, to review any new charter addition proposals, I think we need at
least the following:

1. Relatively clear scope of the feature(s). (This information should be
detailed enough for WG members with relevant IP to be able to make an IP
assessment.)

2. Editor commitment(s)

3. Implementation commitments from at least two WG members

Is this really a requirement nowadays?
Is there for example commitment to implement the
File System API?
http://dev.w3.org/2009/dap/file-system/file-dir-sys.html

But anyway, I'm interested to implement the speech API,
and as far as I know, also other people involved with Mozilla
have shown interest.




4. Testing commitment(s)

Re the APIs in this thread - I think Glen's API proposal [2] adequately
addresses #1 above and his previous responses imply support for #2, but
it would be good for Glen, et al. to confirm. Re #3, other than Google,
I don't believe any other implementor has voiced their support for
WebApps adding these APIs. As such, I think we need additional input
on implementation support (e.g. Apple, Microsoft, Mozilla, Opera, etc.).


It doesn't matter too much to me in which group the API will be 
developed (except that I'm against doing it in the HTML WG).

WebApps is a reasonably good place (if there won't be any IP issues).




-Olli




Re the markup question - WebApps does have some precedent for defining
markup (e.g. XBL2, Widget XML config). I don't have a strong opinion on
whether or not WebApps should include the type of markup in the XG
Report. I think the next step here is for WG members to submit comments
on this question. In particular, proponents of including markup in
WebApps' charter should respond to #1-4 above.

-AB

[1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1474.html
[2]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html



On 1/5/12 6:49 AM, ext Satish S wrote:


2) How does the draft incorporate the existing input speech
API [1]? It seems to me as if it'd be best to define both the attribute
and the DOM APIs in a single specification, also because they share
several events (yet don't seem to be interchangeable) and the
attribute already has an implementation.


The input speech API proposal was implemented as input
x-webkit-speech in Chromium a while ago. A lot of the developer
feedback we received was about finer grained control including a
javascript API and letting the web application decide how to present
the user interface rather than tying it to the input element.

The HTML Speech Incubator Group's final report [1] includes a reco
element which addresses both these concerns and provides automatic
binding of speech recognition results to existing HTML elements. We
are not sure if the WebApps WG is a good place to work on
standardising such markup elements, hence did not include them in the
simplified Javascript API [2]. If there is sufficient interest and
scope in the WebApps WG charter for the Javascript API and markup, we
are happy to combine them both in the proposal.

[1] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
[2]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html



Thanks,
Peter

[1]
http://lists.w3.org/Archives/Public/public-xg-htmlspeech/2011Feb/att-0020/api-draft.html


On Thu, Jan 5, 2012 at 07:15, Glen Shires gshi...@google.com
mailto:gshi...@google.com wrote:
 As Dan Burnett wrote below: The HTML Speech Incubator Group [1]
has recently
 wrapped up its work on use cases, requirements, and proposals
for adding
 automatic speech recognition (ASR) and text-to-speech (TTS)
capabilities to
 HTML. The work of the group is documented in the group's Final
Report. [2]
 The members of the group intend this work to be input to one or more
 working groups, in W3C and/or other standards development
organizations such
 as the IETF, as an aid to developing full standards in this space.

 Because that work was so broad, Art Barstow asked (below) for a
relatively
 specific proposal. We at Google are proposing that a subset of it be
 accepted as a work item by the Web Applications WG.
Specifically, we are
 proposing this Javascript API [3], which enables web developers to
 incorporate speech recognition and synthesis into their web pages.
 This simplified subset enables developers to use scripting to
generate
 text-to-speech output and to use speech recognition as an input
for forms,
 continuous dictation and control, and it supports the majority
of use-cases
 in the Incubator Group's Final Report.

 We welcome your feedback and ask that the Web Applications WG
 consider accepting this Javascript API [3] as a work item.

 [1] charter: http://www.w3.org/2005/Incubator/htmlspeech/charter
 [2] report:

Re: Speech Recognition and Text-to-Speech Javascript API - seeking feedback for eventual standardization

2012-01-09 Thread Olli Pettay

On 01/09/2012 06:17 PM, Young, Milan wrote:

To clarify, are you interested in developing the entirety of the JS API
we developed in the HTML Speech XG, or just the subset proposed by
Google?


Not sure if you sent the reply to me only on purpose.
CCing the WG and XG lists.

Since, from a practical point of view,
the API+protocol the XG defined is a huge thing to implement at once, it
makes sense to implement it in pieces. Something like:
(1) Initial API implementation. Some subset of what the XG defined.
Not necessarily exactly what Google proposed, but something close to
it. Support for remote speech services could be in the initial API,
but if the UA doesn't implement the protocol, it would just fail when
trying to connect to remote services.
(2) Simultaneously or later - depending on the protocol standardization
in the IETF or elsewhere - support remote speech services.
(3) Implement some more of the API the XG defined (if needed by web
developers or web services).
(4) Implement reco? I'm not at all convinced we need a reco element,
since automatic value binding makes it just a bit strange and
inconsistent.


This is the way web APIs tend to evolve. Implement first something quite 
small, and then add new features if/when needed.




-Olli





Thanks


-Original Message-
From: Olli Pettay [mailto:olli.pet...@helsinki.fi]
Sent: Monday, January 09, 2012 8:13 AM
To: Arthur Barstow
Cc: ext Satish S; Peter Beverloo; Glen Shires; public-webapps@w3.org;
public-xg-htmlspe...@w3.org; Dan Burnett
Subject: Re: Speech Recognition and Text-to-Speech Javascript API -
seeking feedback for eventual standardization

On 01/09/2012 04:59 PM, Arthur Barstow wrote:

Hi All,

As I indicated in [1], WebApps already has a relatively large number
of specs in progress and the group has agreed to add some new specs.
As such, to review any new charter addition proposals, I think we need
at least the following:

1. Relatively clear scope of the feature(s). (This information should
be detailed enough for WG members with relevant IP to be able to make
an IP
assessment.)

2. Editor commitment(s)

3. Implementation commitments from at least two WG members

Is this really a requirement nowadays?
Is there for example commitment to implement the File System API?
http://dev.w3.org/2009/dap/file-system/file-dir-sys.html

But anyway, I'm interested to implement the speech API, and as far as I
know, also other people involved with Mozilla have shown interest.




4. Testing commitment(s)

Re the APIs in this thread - I think Glen's API proposal [2]
adequately addresses #1 above and his previous responses imply support
for #2, but it would be good for Glen, et al. to confirm. Re #3, other
than Google, I don't believe any other implementor has voiced their
support for WebApps adding these APIs. As such, I think we need
additional input on implementation support (e.g. Apple, Microsoft,
Mozilla, Opera, etc.).

It doesn't matter too much to me in which group the API will be
developed (except that I'm against doing it in the HTML WG).
WebApps is a reasonably good place (if there won't be any IP issues).




-Olli




Re the markup question - WebApps does have some precedent for defining
markup (e.g. XBL2, Widget XML config). I don't have a strong opinion on
whether or not WebApps should include the type of markup in the XG
Report. I think the next step here is for WG members to submit comments
on this question. In particular, proponents of including markup in
WebApps' charter should respond to #1-4 above.

-AB

[1]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1474.html

[2]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html




On 1/5/12 6:49 AM, ext Satish S wrote:


2) How does the draft incorporate the existing input speech
API [1]? It seems to me as if it'd be best to define both the attribute
and the DOM APIs in a single specification, also because they share
several events (yet don't seem to be interchangeable) and the
attribute already has an implementation.


The <input speech> API proposal was implemented as <input
x-webkit-speech> in Chromium a while ago. A lot of the developer
feedback we received was about finer-grained control, including a
JavaScript API and letting the web application decide how to present
the user interface rather than tying it to the <input> element.

The HTML Speech Incubator Group's final report [1] includes a <reco>
element which addresses both these concerns and provides automatic
binding of speech recognition results to existing HTML elements. We
are not sure if the WebApps WG is a good place to work on
standardising such markup elements, hence did not include it in the
simplified JavaScript API [2]. If there is sufficient interest and
scope in the WebApps WG charter for the JavaScript API and markup, we
are happy to combine them both in the proposal.

[1] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
[2]


http://lists.w3.org

Re: [XHR2] timeout

2011-12-21 Thread Olli Pettay

On 12/21/2011 05:59 PM, Jarred Nicholls wrote:

On Wed, Dec 21, 2011 at 10:47 AM, Anne van Kesteren ann...@opera.com wrote:

On Wed, 21 Dec 2011 16:25:33 +0100, Jarred Nicholls jar...@webkit.org wrote:

1. The spec says the timeout should fire "after the specified number of
milliseconds has elapsed since the start of the request". I presume
this means literally that, with no bearing on whether or not data is
coming over the wire?


Right.


2. Given we have progress events, we can determine that data is
coming

over the wire and react accordingly (though in an ugly fashion,
semantically).  E.g., the author can disable the timeout or
increase the timeout.  Is that use case possible?  In other
words, should setting the timeout value during an active request
reset the timer?  Or should the
timer always be basing its elapsed time on the start time of the
request + the specified timeout value (an absolute point in the
future)?  I
understand the language in the spec is saying the latter, but
perhaps could use emphasis that the timeout value can be changed
mid-request.


http://dvcs.w3.org/hg/xhr/rev/2ffc908d998f


Brilliant, no doubts about it now ;)




Furthermore, if the timeout value is set to a value > 0 but less
than the original value, and the elapsed time is past the
(start_time + timeout), do we fire the timeout or do we
effectively disable it?


The specification says "has passed", which seems reasonably clear to
me. I.e. you fire it.


Cool, agreed.



3. Since network stacks typically operate w/ timeouts based on data

coming over the wire, what about a different timeout attribute
that fires a timeout event when data has stalled, e.g.,
dataTimeout?  I think this type of timeout would be more
desirable by authors to have control over for
async requests, since today it's kludgey to try and simulate
that with
timers/progress events + abort().  Whereas with the overall request
timeout, library authors already simulate that easily with
timers + abort() in the async context.  For sync requests in
worker contexts, I can see a dataTimeout as being heavily
desired over a simple request timeout.


So if you receive no octet for dataTimeout milliseconds you get the
timeout event and the request terminates? Sounds reasonable.


Correct.  Same timeout exception/event shared with the request timeout
attribute, and similar setter/getter steps; just having that separate
criteria for triggering it.



Is there really a need for dataTimeout? You could easily use progress
events and .timeout to achieve similar functionality.
This was the reason why I originally asked that .timeout can be set
also while the XHR is active.


xhr.onprogress = function() {
  this.timeout += 250;
}
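Generalising the snippet above into a sketch (the helper name `slidingTimeout` and its parameter are illustrative, not from this thread or any spec), the "sliding window" behaviour a dataTimeout would provide can be emulated on top of the spec's absolute .timeout by pushing the deadline forward on each progress event:

```javascript
// Sketch only: emulate a per-chunk "dataTimeout" using the spec's
// .timeout attribute, which always counts from the start of the request.
function slidingTimeout(xhr, windowMs) {
  var start = Date.now();
  xhr.timeout = windowMs;          // first window, measured from request start
  xhr.onprogress = function () {
    // New absolute deadline: "now + windowMs", expressed relative to start.
    xhr.timeout = (Date.now() - start) + windowMs;
  };
}
```

Because .timeout is relative to the start of the request, each progress event has to re-derive the deadline from the request's start time; this is exactly the bookkeeping a native dataTimeout would hide.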


(timeout is being implemented in Gecko)


-Olli








--
Anne van Kesteren
http://annevankesteren.nl/


Thanks,
Jarred





Re: [XHR2] timeout

2011-12-21 Thread Olli Pettay

On 12/21/2011 08:59 PM, Jarred Nicholls wrote:

On Wed, Dec 21, 2011 at 1:34 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 12/21/2011 05:59 PM, Jarred Nicholls wrote:

On Wed, Dec 21, 2011 at 10:47 AM, Anne van Kesteren ann...@opera.com wrote:

On Wed, 21 Dec 2011 16:25:33 +0100, Jarred Nicholls jar...@webkit.org wrote:

1. The spec says the timeout should fire after the specified
number of

milliseconds has elapsed since the start of the request.  I
presume this means literally that, with no bearing on
whether or
not data is coming over the wire?


Right.


2. Given we have progress events, we can determine that
data is
coming

over the wire and react accordingly (though in an ugly
fashion,
semantically).  E.g., the author can disable the timeout or
increase the timeout.  Is that use case possible?  In other
words, should setting the timeout value during an active
request
reset the timer?  Or should the
timer always be basing its elapsed time on the start
time of the
request + the specified timeout value (an absolute point
in the
future)?  I
understand the language in the spec is saying the
latter, but
perhaps could use emphasis that the timeout value can be
changed
mid-request.


http://dvcs.w3.org/hg/xhr/rev/2ffc908d998f


Brilliant, no doubts about it now ;)




Furthermore, if the timeout value is set to a value > 0 but less
than the original value, and the elapsed time is past the
(start_time + timeout), do we fire the timeout or do we
effectively disable it?


The specification says "has passed" which seems reasonably clear to
me. I.e. you fire it.


Cool, agreed.



3. Since network stacks typically operate w/ timeouts
based on data

coming over the wire, what about a different timeout
attribute
that fires a timeout event when data has stalled, e.g.,
dataTimeout?  I think this type of timeout would be more
desirable by authors to have control over for
async requests, since today it's kludgey to try and simulate
that with
timers/progress events + abort().  Whereas with the
overall request
timeout, library authors already simulate that easily with
timers + abort() in the async context.  For sync requests in
worker contexts, I can see a dataTimeout as being heavily
desired over a simple request timeout.


So if you receive no octet for dataTimeout milliseconds you
get the
timeout event and the request terminates? Sounds reasonable.


Correct.  Same timeout exception/event shared with the request
timeout
attribute, and similar setter/getter steps; just having that
separate
criteria for triggering it.



Is there really a need for dataTimeout? You could easily use progress
events and .timeout to achieve similar functionality.
This was the reason why I originally asked that .timeout can be set
also when XHR is active.

xhr.onprogress = function() {
  this.timeout += 250;
}


Then why have timeout at all?  Your workaround for a native dataTimeout
is analogous to using a setTimeout + xhr.abort() to simulate the request
timeout.

I can tell you why I believe we should have dataTimeout in addition to
timeout:

 1. Clean code, which is better for authors and the web platform.  To
achieve the same results as a native dataTimeout, your snippet would
need to be amended to maintain the time of the start of the request
and calculate the difference between that and the time the progress
event fired + your timeout value:

xhr.timeout = ((new Date()).getTime() - requestStart) + myTimeout;

A dataTimeout is a buffered timer that's reset on each octet of data
that's received; a sliding window of elapsed time before timing out.
  Every time the above snippet is calculated, it becomes more and
more erroneous; the margin of error increases because of time delays
of JS events being dispatched, etc.
 2

Re: XBL2, Component Model and WebApps' Rechartering [Was: Re: Consolidating charter changes]

2011-12-17 Thread Olli Pettay

On 12/17/2011 04:30 PM, Anne van Kesteren wrote:

On Thu, 24 Nov 2011 14:08:55 +0100, Arthur Barstow
art.bars...@nokia.com wrote:

All - What are the opinions on what, if anything, to do with XBL2
vis-a-vis the charter update? Leave it on the REC track, stop work and
publish it as a WG Note, something else?


I would leave it as is, but add a note we might abandon it at some point in
favor of Components. No need to make an early call on that.


That sounds good to me.


-Olli







[1] http://www.w3.org/2008/webapps/wiki/CharterChanges#Additions_Agreed








[Pointer Lock] Few comments

2011-12-15 Thread Olli Pettay

Hi all,

few comments about the API

(1)
currently 
http://dvcs.w3.org/hg/webevents/raw-file/default/mouse-lock.html uses 
VoidCallback which isn't defined anywhere.


I guess there should be something like

void lock (in Element target,
   optional in LockSuccessCallback successCallback,
   optional in LockErrorCallback failureCallback);


[Callback,NoInterfaceObject]
interface LockSuccessCallback {
  void pointerLockSuccess();
};

[Callback,NoInterfaceObject]
interface LockErrorCallback {
  void pointerLockFailure();
};

Or if the new proposed callback syntax is used:
callback LockSuccessCallback = void pointerLockSuccess();
callback LockErrorCallback = void pointerLockFailure();
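For concreteness, here is a hypothetical call site for the lock() signature sketched above. The `pointer` object and the function name `requestPointerLock` are assumptions, not part of the draft; the wiring is factored into a plain function that can take any lock-capable object:

```javascript
// Sketch: route the proposed success/failure callbacks to a single
// completion handler. `pointer` is any object exposing the lock()
// signature discussed above.
function requestPointerLock(pointer, target, onDone) {
  pointer.lock(
    target,
    function pointerLockSuccess() { onDone(true); },   // lock granted
    function pointerLockFailure() { onDone(false); }   // lock refused
  );
}
```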


(2)
"If another element is locked a user agent must transfer the mouse lock
to the new target and call the pointerlocklost callback for the previous
target."

There is no such thing as 'pointerlocklost callback'

(3)
"Mouse lock must succeed only if the window is in focus and the
user-agent is the active application of the operating system"

What window? window object as in web page? Or OS level window?
What if lock is called in some iframe?

(4)
"If the target is removed from the DOM tree after mouse lock is entered
then mouse lock will be lost."

Should 'pointerlocklost' event be dispatched?




-Olli




Re: [Pointer Lock] Few comments

2011-12-15 Thread Olli Pettay

On 12/15/2011 11:27 PM, Vincent Scheib wrote:



On Thu, Dec 15, 2011 at 6:16 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

Hi all,

few comments about the API

(1)
currently
http://dvcs.w3.org/hg/webevents/raw-file/default/mouse-lock.html
uses VoidCallback which isn't defined anywhere.

I guess there should be something like

void lock (in Element target,
   optional in LockSuccessCallback successCallback,
   optional in LockErrorCallback failureCallback);


[Callback,NoInterfaceObject]
interface LockSuccessCallback {
  void pointerLockSuccess();
};

[Callback,NoInterfaceObject]
interface LockErrorCallback {
  void pointerLockFailure();
};

Or if the new proposed callback syntax is used:
callback LockSuccessCallback = void pointerLockSuccess();
callback LockErrorCallback = void pointerLockFailure();


I used the concept of VoidCallback from other implemented specs. Are
there any issues with it other than that the spec should define
VoidCallback? e.g.
http://www.w3.org/TR/file-system-api/#the-voidcallback-interface
http://www.w3.org/TR/2009/WD-DataCache-20091029/#VoidCallback


Well, in those specs VoidCallback is [Callback=FunctionOnly], which it
probably shouldn't be.

But there is ongoing discussion about removing [Callback=FunctionOnly]
from WebIDL.




(2)
If another element is locked a user agent must transfer the mouse
lock to the new target and call the pointerlocklost callback for the
previous target.
There is no such thing as 'pointerlocklost callback'


Spec typo, it should read "pointerlocklost event".


dispatch pointerlocklost event...





(3)
Mouse lock must succeed only if the window is in focus and the
user-agent is the active application of the operating system
What window? window object as in web page? Or OS level window?
What if lock is called in some iframe?


The intent is the user-agent window and tab (if tabbed UA). Other than
UA security considerations, I propose there be no difference between
lock calls from a top level document or an iframe. Suggestions welcome
for a way to make this more clear than rewriting to be, "... succeed
only if the user-agent window (and tab, if a tabbed browser) is in focus
...".


I'm just worried about iframes being able to lock mouse.
But perhaps that should be allowed only if the iframe is in
the same domain as the top level document.





(4)
If the target is removed from the DOM tree after mouse lock is
entered then mouse lock will be lost.
Should 'pointerlocklost' event be dispatched?


I'm not yet certain about the implementation practicalities, and need to
research more, but it seems we have these options:
a- don't send the event

A bit strange



b- send to the element after it has been detached

I would assume this. At least it would be consistent.



c- send to the nearest ancestor of the element that remains in the tree

Or perhaps send to the document. Should pointerlocklost always be
dispatched to the document? If really needed, the event could have a
property .unlockedElement or some such.



d- send to the element before it is detached

this is not possible. Well, possible, but would bring in all the
problems there are with mutation events


-Olli




Re: [Pointer Lock] Few comments

2011-12-15 Thread Olli Pettay

On 12/16/2011 01:04 AM, Darin Fisher wrote:



On Thu, Dec 15, 2011 at 1:39 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 12/15/2011 11:27 PM, Vincent Scheib wrote:



On Thu, Dec 15, 2011 at 6:16 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

Hi all,

few comments about the API

(1)
currently
http://dvcs.w3.org/hg/webevents/raw-file/default/mouse-lock.html
uses VoidCallback which isn't defined anywhere.

I guess there should be something like

void lock (in Element target,
   optional in LockSuccessCallback successCallback,
   optional in LockErrorCallback failureCallback);


[Callback,NoInterfaceObject]
interface LockSuccessCallback {
  void pointerLockSuccess();
};

[Callback,NoInterfaceObject]
interface LockErrorCallback {
  void pointerLockFailure();
};

Or if the new proposed callback syntax is used:
callback LockSuccessCallback = void pointerLockSuccess();
callback LockErrorCallback = void pointerLockFailure();


I used the concept of VoidCallback from other implemented specs. Are
there any issues with it other than that the spec should define
VoidCallback? e.g.
http://www.w3.org/TR/file-system-api/#the-voidcallback-interface
http://www.w3.org/TR/2009/WD-DataCache-20091029/#VoidCallback


Well, in those specs VoidCallback is [Callback=FunctionOnly], which it
probably shouldn't be.
But there is ongoing discussion about removing [Callback=FunctionOnly]
from WebIDL.





(2)
If another element is locked a user agent must transfer the mouse
lock to the new target and call the pointerlocklost callback
for the
previous target.
There is no such thing as 'pointerlocklost callback'


Spec typo, it should read "pointerlocklost event".


dispatch pointerlocklost event...





(3)
Mouse lock must succeed only if the window is in focus and the
user-agent is the active application of the operating system
What window? window object as in web page? Or OS level window?
What if lock is called in some iframe?


The intent is the user-agent window and tab (if tabbed UA).
Other than
UA security considerations, I propose there be no difference between
lock calls from a top level document or an iframe. Suggestions
welcome
for a way to make this more clear than rewriting to be, ... succeed
only if the user-agent window (and tab, if a tabbed browser) is
in focus
...


I'm just worried about iframes being able to lock mouse.
But perhaps that should be allowed only if the iframe is in
the same domain as the top level document.


The fullscreen API requires that the IFRAME tag have an
allowfullscreen attribute:
http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html#security-and-privacy-considerations


The spec is quite vague, but yes, something similar could work.

I wonder if we're going to have more of these kinds of features. If so,
perhaps the attribute should be changed.

Something like
allow="fullscreen pointerlock"





Perhaps a similar approach would work for pointer lock?

-Darin





(4)
If the target is removed from the DOM tree after mouse lock is
entered then mouse lock will be lost.
Should 'pointerlocklost' event be dispatched?


I'm not yet certain about the implementation practicalities, and
need to
research more, but it seems we have these options:
a- don't send the event

A bit strange



b- send to the element after it has been detached

I would assume this. At least it would be consistent.



c- send to the nearest ancestor of the element that remains in
the tree

Or perhaps send to the document. Should pointerlocklost always be
dispatched to the document? If really needed, the event could have
property .unlockedElement or some such.



d- send to the element before it is detached

this is not possible. Well, possible, but would bring in all the
problems there are with mutation events


-Olli








Re: [Pointer Lock] Few comments

2011-12-15 Thread Olli Pettay

Filed https://bugzilla.mozilla.org/show_bug.cgi?id=711276
and https://bugs.webkit.org/show_bug.cgi?id=74660









Re: Revisiting Command Elements and Toolbars

2011-12-14 Thread Olli Pettay

On 11/29/2011 07:58 AM, Ryosuke Niwa wrote:

Hi all,

I've been looking into the command element
http://dev.w3.org/html5/spec/Overview.html#the-command-element and how
a toolbar might be built
http://dev.w3.org/html5/spec/Overview.html#building-menus-and-toolbars by
them in the last several months.  In general, I'm thrilled to see this
feature being a part of HTML5.  I've chatted with several Web developers
internally at Google and also with some UA vendors before and during
TPAC 2011 about various ideas to utilize command elements.

Pros I found in the current spec:

  * Commands can be defined where UI is provided
  * Commands can be implicitly defined by accessKey content attribute
  * Defining in terms of anchor element, etc... will allow nice fallback

Cons (or at least ones I didn't think the current spec adequately address):

 1. Authors often want to fine-grained control over the appearance of
toolbars; UAs automatically rendering them in canonical form will
make it harder.

I thought the idea was that the toolbar *could* be kind of part of the
browser chrome, in which case we don't want to let authors style
everything. Icon and text should hopefully be enough.



 2. In many web apps, commands are involved and associated with multiple
UI components, toolbars, side panel, context menu, etc... commands
being a part of UI components doesn't represent this model well.

There has been discussion about having abstract commands.
So, you could have <menuitem command="foo"> and <command id="foo"
onclick="dosomething(event)">. (Shouldn't probably be onclick but
oncommand or such.)
We should probably copy some more XUL functionality to command handling 
here, so that event listeners for command can access also the original 
event dispatched to menuitem.

(In XUL command events have .sourceEvent)



 3. Many commands make sense only in the context of some widget in a
page. E.g. on a CMS dashboard, bold command only makes sense
inside a WYSIWYG editor. There ought to be a mechanism to scope commands.

If focus is not in such a special context, you could always disable the
command. I think it would be hard to make any automatic scoping, but
handling it in script should be easy.




 4. Mixing UI-specific information such as "hidden" and "checked" with
more semantic information such as "disabled" or "checked" isn't clean.

Not sure what you mean here.


 5. Some commands may need to have non-boolean values. E.g. consider
BackColor (as in execCommand); this command can't just be checked.
It'll have values such as "white" and "#fff".

Perhaps command elements should have a .value.





Furthermore, it seems unfortunate that we already have a concept of
command in the editing API
http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html and methods on
document such as execCommand, queryCommandState, etc... yet commands
defined by command elements and accessKey content attribute don't
interact with them at all. It'll be really nice if we could use
execCommand to run an arbitrary command defined on a page, or ask what
the value of command is by queryCommandValue.

What are your thoughts on this topic?


I wouldn't mix execCommand and the command API.
It should be trivial enough to call execCommand etc. in command
event listeners. queryCommandState shouldn't be hard either if one just
updates things in focus/blur event listeners.


Perhaps command API should be renamed.
Maybe ActionTarget API or some such.



-Olli





Best,
Ryosuke Niwa
Software Engineer
Google Inc.






Re: [XHR] responseType json

2011-12-12 Thread Olli Pettay

On 12/12/2011 03:12 PM, Jarred Nicholls wrote:

On Mon, Dec 12, 2011 at 5:37 AM, Anne van Kesteren ann...@opera.com wrote:

On Sun, 11 Dec 2011 15:44:58 +0100, Jarred Nicholls jar...@sencha.com wrote:

I understand that's how you spec'ed it, but it's not how it's
implemented
in IE nor WebKit for legacy purposes - which is what I meant in
the above
statement.


What do you mean by "legacy purposes"? responseType is a new feature. And
we added it in this way in part because of feedback from the WebKit
community that did not want to keep the raw data around.


I wasn't talking about responseType, I was referring to the pair of
responseText and responseXML being accessible together since the dawn of
time.


In case responseType is not set. If responseType is set, implementations
can optimize certain things.


 I don't know why WebKit and IE didn't take the opportunity to use
responseType

responseType is a new thing. Gecko hasn't changed behavior in case
responseType is not set.


and kill that behavior; don't ask me, I wasn't responsible
for it ;)


In the thread where we discussed adding it the person working on it
for WebKit did seem to plan on implementing it per the specification:


http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/thread.html#msg799


Clearly not - shame, because now I'm trying to clean up the mess.




In WebKit and IE <= 9, a responseType of "", "text",
or "document" means access to both responseXML and responseText. I don't
know what IE10's behavior is yet.


IE8 could not have supported this feature and for IE9 I could not
find any documentation. Are you sure they implemented it?


I'm not positive if they did to be honest - I haven't found it
documented anywhere.



Given that Gecko does the right thing and Opera will too (next major
release I believe) I do not really see any reason to change the
specification.


I started an initiative to bring XHR in WebKit up-to-spec (see
https://bugs.webkit.org/show_bug.cgi?id=54162) and got a lot of push
back.  All I'm asking is that if I run into push back again, that I can
send them your way ;)




--
Anne van Kesteren
http://annevankesteren.nl/




--


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls






Re: [XHR2] Disable new response types for sync XHR in Window context

2011-11-15 Thread Olli Pettay

On 11/15/2011 09:33 PM, Jonas Sicking wrote:

On Tue, Nov 15, 2011 at 4:22 AM, Anne van Kesteren ann...@opera.com wrote:

On Mon, 14 Nov 2011 17:55:25 +0100, Jonas Sicking jo...@sicking.cc wrote:


Yes, I think cross-origin should not work with sync. That is currently the
only synchronous communication mechanism cross origin. Without it a UA
could put up UI if it wants to explicitly allow users to control such
communication.


Eww. But you agree with my suggestion about exceptions? I can put that in
the specification and push to get it implemented in Opera, but it would help
if you said you agreed with the specifics to avoid surprises down the road.


So if I understand the proposal correctly:

After .open has been called with async=false:
* setting .responseType to anything other than "" throws InvalidAccessError
* setting .withCredentials to true throws InvalidAccessError

Additionally, when calling .open with async=false, throw
InvalidAccessError if .responseType is set to anything other than ""
or .withCredentials is true.

If that's the proposal, then this sounds good to me.



Sounds good to me too.
Also, if the XHR is sync, accessing .response or .responseType could throw.
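Pulled together as a sketch (the function name and exact error text are illustrative, not spec language), the restriction being agreed on amounts to a guard like this:

```javascript
// Sketch of the proposed sync-XHR restriction in a Window context:
// the new response types and withCredentials are only allowed async.
function checkSyncRestrictions(async, responseType, withCredentials) {
  if (!async && (responseType !== "" || withCredentials)) {
    throw new Error("InvalidAccessError");
  }
}
```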





/ Jonas







[XHR2] Disable new response types for sync XHR in Window context

2011-11-11 Thread Olli Pettay

Hi all,

I think we should strongly encourage web devs to move away from
sync XHR (in Window context, not in Workers). It is bad for UI
responsiveness.

Unfortunately sync XHR has been used quite often with the old
text/xml types. But maybe we could disable sync XHR for the new
types, and also make .response throw if it is used with
sync XHR.

Comments?



-Olli


http://www.w3.org/Bugs/Public/show_bug.cgi?id=14773
https://bugzilla.mozilla.org/show_bug.cgi?id=701787
https://bugs.webkit.org/show_bug.cgi?id=72154


