Re: [XHR2] Blobs, names and FormData

2011-06-30 Thread Julian Reschke

On 2011-06-29 18:34, Alfonso Martínez de Lizarrondo wrote:

...
No.

All I want is a way for the web page to state that when a Blob is used
in a FormData object it should be sent to the server with a proposed
filename. No sniffing. No guessing. It's up to the script to suggest a
correct filename (if it wants), or use whatever is the default filename
used by the browser (Blob23g22g3024j23g209gj3g and the like, extensionless)
...


In which case there should also be a way to send the actual media type, no?




Re: [XHR2] Blobs, names and FormData

2011-06-30 Thread Alfonso Martínez de Lizarrondo
I thought that the browser could retrieve that info from the OS based on the
proposed extension.
I just requested the part that I needed, if there's something else missing
then I guess that it should be possible to add it at the same time.
 On 30/06/2011 09:28, Julian Reschke julian.resc...@gmx.de wrote:


Re: [XHR2] Blobs, names and FormData

2011-06-30 Thread Julian Reschke

On 2011-06-30 09:54, Alfonso Martínez de Lizarrondo wrote:

I thought that the browser could retrieve that info from the OS based on
the proposed extension.


1) the OS may not know

2) it also needs to be sent over the wire some way...


I just requested the part that I needed, if there's something else
missing then I guess that it should be possible to add it at the same time.

On 30/06/2011 09:28, Julian Reschke julian.resc...@gmx.de wrote:
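
Julian's second point can be made concrete. The sketch below uses a hypothetical helper (not part of any FormData API) to show the multipart/form-data part headers a browser would have to emit for a Blob: both the suggested filename and the media type have to travel over the wire in them.

```javascript
// Hypothetical helper -- NOT a real FormData API. It only illustrates the
// multipart/form-data part headers that would carry a Blob's suggested
// filename and media type to the server.
function partHeaders(fieldName, suggestedName, mediaType) {
  return [
    `Content-Disposition: form-data; name="${fieldName}"; filename="${suggestedName}"`,
    `Content-Type: ${mediaType}`,
  ].join('\r\n');
}

// A script-suggested filename plus an explicit media type:
console.log(partHeaders('upload', 'chart.png', 'image/png'));
// Content-Disposition: form-data; name="upload"; filename="chart.png"
// Content-Type: image/png
```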





Re: [XHR2] Blobs, names and FormData

2011-06-30 Thread Anne van Kesteren

On Wed, 29 Jun 2011 20:17:52 +0200, Jonas Sicking jo...@sicking.cc wrote:

Just a small nit: We would also use blob for File objects with an
empty .name property, right?


I guess we can do that. getFile() should also specify a media type by the  
way.



--
Anne van Kesteren
http://annevankesteren.nl/



Re: Mutation events replacement

2011-06-30 Thread Simon Pieters
On Wed, 29 Jun 2011 23:54:47 +0200, Rafael Weinstein rafa...@google.com  
wrote:


On Wed, Jun 29, 2011 at 7:13 AM, Aryeh Gregor simetrical+...@gmail.com  
wrote:

On Tue, Jun 28, 2011 at 5:24 PM, Jonas Sicking jo...@sicking.cc wrote:

This new proposal solves both these by making all the modifications
first, then firing all the events. Hence the implementation can
separate implementing the mutating function from the code that sends
out notifications.

Conceptually, you simply queue all notifications in a queue as you're
making modifications to the DOM, then right before returning from the
function you insert a call like flushAllPendingNotifications(). This
way you don't have to care at all about what happens when those
notifications fire.


So when exactly are these notifications going to be fired?  In
particular, I hope non-DOM Core specifications are going to have
precise control over when they're fired.  For instance, execCommand()
will ideally want to do all its mutations at once and only then fire
the notifications (which I'm told is how WebKit currently works).  How
will this work spec-wise?  Will we have hooks to say things like
"remove a node but don't fire the notifications yet", and then have to
add an extra line someplace saying to fire all the notifications?
This could be awkward in some cases.  At least personally, I often say
things like "call insertNode(foo) on the range" in the middle of a
long algorithm, and I don't want magic happening at that point just
because DOM Range fires notifications before returning from
insertNode.

Also, even if specs have precise control, I take it the idea is
authors won't, right?  If a library wants to implement some fancy
feature and be compatible with users of the library firing these
notifications, they'd really want to be able to control when
notifications are fired, just like specs want to.  In practice, the
only reason this isn't an issue with DOM mutation events is because
they can say "don't use them", and in fact people rarely do use them,
but that doesn't seem ideal -- it's just saying library authors
shouldn't bother to be robust.


In working on Model Driven Views (http://code.google.com/p/mdv), we've
run into exactly this problem, and have developed an approach we think
is promising.

The idea is to more or less take Jonas's proposal, but instead of
firing callbacks immediately before the outer-most mutation returns,
mutations are recorded for a given observer and handed to it as an
in-order sequence at the end of the event.

var observer = window.createMutationObserver(callback);
document.body.addSubtreeChangedObserver(observer);
document.body.addSubtreeAttributeChangedObserver(observer);
...
var div = document.createElement('div');
document.body.appendChild(div);
div.setAttribute('data-foo', 'bar');
div.innerHTML = '<b>something</b> <i>something else</i>';
div.removeChild(div.childNodes[1]);
...

// mutationList is an array, all the entries added to
// |observer| during the preceding script event
function callback(mutationList) {
// mutationList === [
//  { type: 'ChildlistChanged', target: document.body, inserted: [div] },
//  { type: 'AttributeChanged', target: div, attrName: 'data-foo' },
//  { type: 'ChildlistChanged', target: div, inserted: [b, i] },
//  { type: 'ChildlistChanged', target: div, removed: [i] }
// ];
}



Maybe this is a stupid question, since I'm not familiar at all with
the use-cases involved, but why can't we delay firing the
notifications until the event loop spins?  If we're already delaying
them such that there are no guarantees about what the DOM will look
like by the time they fire, it seems like delaying them further
shouldn't hurt the use-cases too much more.  And then we don't have to
put further effort into saying exactly when they fire for each method.


Agreed.

For context, after considering this issue, we've tentatively concluded
a few things that don't seem to be widely agreed upon:

1) In terms of when to notify observers: Sync is too soon. Async (add
a Task) is too late.

- The same reasoning for why firing sync callbacks in the middle of
DOM operations is problematic for C++ also applies to application
script. Calling mutation observers synchronously can invalidate the
assumptions of the code which is making the modifications. It's better
to allow one bit of code to finish doing what it needs to and let
mutation observers operate later over the changes.

- Many uses of mutation events would actually *prefer* to not run sync
because the originating code may be making multiple changes which
more or less comprise a transaction. For consistency and
performance, the abstraction which is watching changes would like to
operate on the final state.

- However, typical uses of mutation events do want to operate more or
less in the same event because they are trying to create a new
consistent state. They'd like to run after the application code is
finished, but before paint occurs or the next scheduled event runs.


If I 

Re: [Widgets] Mozilla open apps

2011-06-30 Thread Scott Wilson

On 29 Jun 2011, at 12:34, Marcos Caceres wrote:

 On Wed, Jun 29, 2011 at 12:08 PM, Charles McCathieNevile
 cha...@opera.com wrote:
 On Tue, 28 Jun 2011 20:17:38 +0200, Scott Wilson
 scott.bradley.wil...@gmail.com wrote:
 
 I think Bruce Lawson was dropping a big hint the other day to look again
 at the questions Mike posed a long while ago! I know there was discussion at
 the time, but I think both initiatives have moved on somewhat so it's worth
 returning to.
 
 I agree that it is worth returning to.
 
 The TPAC meeting in Santa Clara might be a good chance to sit down in the
 same place and talk about it as well as email, which is generally a better
 way to clarify what the issues are but not always the most effective way to
 solve the hard ones.
 
 Are people likely to be in the Bay Area in the first week of November, and
 prepared to spend a bit of time discussing this?
 
 I think it is a great idea. However, there is a lot we can do in the
 *6 months* in between! :)
 
 These specs should be at REC by November. As the Last Call period for
 PC, Dig Sig, and API finished yesterday, Artb will send out a mail
 today to begin the PR preparation process for most of the Widget
 specs: WARP and View Modes have met their CR exit criteria, and are
 also ready to be moved to PR and REC. This means that by September,
 these specs will be in PR. And REC in November.
 
 If we are going to do anything about widgets, it needs to happen
 sooner rather than later.

Mike may want to correct me here, but I couldn't see anything in MOWA that 
would require changes to the Widgets specifications that are currently on 
track. 

I think we can get there via a Note on applying Widget specs in the case of Web 
Apps (packaged and non-packaged), or further spec work (perhaps under the FSW 
banner) to cover things like web app store interoperability and trust. 

 
 -- 
 Marcos Caceres
 http://datadriven.com.au




Re: Mutation events replacement

2011-06-30 Thread Alex Russell
On Thu, Jun 30, 2011 at 2:11 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wednesday, June 29, 2011, Aryeh Gregor simetrical+...@gmail.com
 wrote:
  On Tue, Jun 28, 2011 at 5:24 PM, Jonas Sicking jo...@sicking.cc wrote:
  This new proposal solves both these by making all the modifications
  first, then firing all the events. Hence the implementation can
  separate implementing the mutating function from the code that sends
  out notifications.
 
  Conceptually, you simply queue all notifications in a queue as you're
  making modifications to the DOM, then right before returning from the
  function you insert a call like flushAllPendingNotifications(). This
  way you don't have to care at all about what happens when those
  notifications fire.
 
  So when exactly are these notifications going to be fired?  In
  particular, I hope non-DOM Core specifications are going to have
  precise control over when they're fired.  For instance, execCommand()
  will ideally want to do all its mutations at once and only then fire
  the notifications (which I'm told is how WebKit currently works).  How
  will this work spec-wise?  Will we have hooks to say things like
  "remove a node but don't fire the notifications yet", and then have to
  add an extra line someplace saying to fire all the notifications?
  This could be awkward in some cases.  At least personally, I often say
  things like "call insertNode(foo) on the range" in the middle of a
  long algorithm, and I don't want magic happening at that point just
  because DOM Range fires notifications before returning from
  insertNode.

 Heh. It's like spec people have to deal with the same complexities as
 implementors have had for years. Revenge at last!!

 Jokes aside. I think the way to do this is that the spec should
 introduce the concept of a compound mutating function. Functions
 like insertBefore, removeChild and the innerHTML setter should claim
 to be such functions. Any other function can also be defined to be
 such a function, such as your execCommand function.

 Whenever a mutation happens, the notifications for it are put on a
 list. Once the outermost compound mutation function exits, all
 notifications are fired.
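
Jonas's "compound mutating function" idea can be sketched without any DOM at all. The toy names below are illustrative only, not proposed API: every mutation queues a notification, and the queue is flushed only when the outermost compound function exits, so nested compound calls never fire early.

```javascript
// Toy sketch of the "compound mutating function" concept (no real DOM;
// the names are illustrative, not a proposed API).
const queue = [];
const listeners = [];
let depth = 0;

// Wraps a block of mutations; queued notifications fire only when the
// outermost compound call unwinds.
function compound(fn) {
  depth++;
  try {
    fn();
  } finally {
    depth--;
    if (depth === 0) {
      const batch = queue.splice(0);
      for (const listener of listeners) listener(batch);
    }
  }
}

// Stand-in for an actual DOM mutation: just queue its notification.
function mutate(description) {
  queue.push(description);
}

const seen = [];
listeners.push(batch => seen.push(...batch));

compound(() => {
  mutate('removeChild');
  compound(() => mutate('insertBefore')); // nested: nothing fires yet
});
// seen is now ['removeChild', 'insertBefore'] -- one flush, in order,
// at the exit of the outermost compound function
```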

  Also, even if specs have precise control, I take it the idea is
  authors won't, right?  If a library wants to implement some fancy
  feature and be compatible with users of the library firing these
  notifications, they'd really want to be able to control when
  notifications are fired, just like specs want to.  In practice, the
  only reason this isn't an issue with DOM mutation events is because
  they can say "don't use them", and in fact people rarely do use them,
  but that doesn't seem ideal -- it's just saying library authors
  shouldn't bother to be robust.

 The problem is that there is no good way to do this. The only API that
 we could expose to JS is something like a beginBatch/endBatch pair of
 functions. But what do we do if the author never calls endBatch?

 This is made especially bad by the fact that JavaScript uses
 exceptions which makes it very easy to miss calling endBatch if an
 exception is thrown unless the developer uses finally, which most
 don't.
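
The hazard Jonas describes is easy to reproduce. The sketch below uses the hypothetical beginBatch/endBatch names from his message (not a real API): an exception silently skips endBatch unless the author remembers finally.

```javascript
// Hypothetical beginBatch/endBatch pair from the discussion -- not a real
// API. `open` counts batches that were started but never ended.
let open = 0;
function beginBatch() { open++; }
function endBatch() { open--; }

function risky() { throw new Error('boom'); }

// The fragile pattern most authors would write:
try {
  beginBatch();
  risky();
  endBatch(); // never reached -- this batch leaks and stays open
} catch (e) {
  // error handled, but the batch is still open
}

// The robust pattern, which requires remembering `finally`:
beginBatch();
try {
  risky();
} catch (e) {
  // handle the error
} finally {
  endBatch(); // always runs, so this batch is balanced
}

// `open` is now 1: exactly one leaked batch from the fragile pattern.
```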


Since the execution turn is a DOM/host concept, we can add something like an
event handler to the scope which fires before exit. Something like:

   window.addEventListener("turnEnd", ...);

Listeners could be handed the mutation lists as members of the event object
they're provided. I know Rafael has more concrete ideas here about the
queues to be produced/consumed, but generally speaking, having the ability
to continue to add turnEnd listeners while still in a turn gives you the
power to operate on consistent state without forcing the start/end pair or
specific exception handling logic. Think of it as a script element's
finally block.


  Maybe this is a stupid question, since I'm not familiar at all with
  the use-cases involved, but why can't we delay firing the
  notifications until the event loop spins?  If we're already delaying
  them such that there are no guarantees about what the DOM will look
  like by the time they fire, it seems like delaying them further
  shouldn't hurt the use-cases too much more.  And then we don't have to
  put further effort into saying exactly when they fire for each method.
   But this is pretty obvious, so I assume there's some good reason not
  to do it.

 To enable things like widget libraries which want to keep state
 up-to-date with a DOM.

 / Jonas




CfC: publish Last Call Working Draft of Web IDL; deadline July 7

2011-06-30 Thread Arthur Barstow
As Cameron indicated in [1], all non-enhancements bugs for Web IDL are 
now resolved and as such, this is a Call for Consensus to publish a Last 
Call Working Draft of Web IDL:


  http://dev.w3.org/2006/webapi/WebIDL/

This CfC satisfies the group's requirement to record the group's 
decision to request advancement for this LCWD.


Note the Process Document states the following regarding the 
significance/meaning of a LCWD:


[[
http://www.w3.org/2005/10/Process-20051014/tr.html#last-call

Purpose: A Working Group's Last Call announcement is a signal that:

* the Working Group believes that it has satisfied its relevant 
technical requirements (e.g., of the charter or requirements document) 
in the Working Draft;


* the Working Group believes that it has satisfied significant 
dependencies with other groups;


* other groups SHOULD review the document to confirm that these 
dependencies have been satisfied. In general, a Last Call announcement 
is also a signal that the Working Group is planning to advance the 
technical report to later maturity levels.

]]

Positive response to this CfC is preferred and encouraged and silence 
will be considered as agreement with the proposal. The deadline for 
comments is July 7. Please send all comments to:


public-script-co...@w3.org

-Art Barstow





Re: Mutation events replacement

2011-06-30 Thread Olli Pettay

On 06/30/2011 12:54 AM, Rafael Weinstein wrote:

On Wed, Jun 29, 2011 at 7:13 AM, Aryeh Gregorsimetrical+...@gmail.com  wrote:

On Tue, Jun 28, 2011 at 5:24 PM, Jonas Sickingjo...@sicking.cc  wrote:

This new proposal solves both these by making all the modifications
first, then firing all the events. Hence the implementation can
separate implementing the mutating function from the code that sends
out notifications.

Conceptually, you simply queue all notifications in a queue as you're
making modifications to the DOM, then right before returning from the
function you insert a call like flushAllPendingNotifications(). This
way you don't have to care at all about what happens when those
notifications fire.


So when exactly are these notifications going to be fired?  In
particular, I hope non-DOM Core specifications are going to have
precise control over when they're fired.  For instance, execCommand()
will ideally want to do all its mutations at once and only then fire
the notifications (which I'm told is how WebKit currently works).  How
will this work spec-wise?  Will we have hooks to say things like
"remove a node but don't fire the notifications yet", and then have to
add an extra line someplace saying to fire all the notifications?
This could be awkward in some cases.  At least personally, I often say
things like "call insertNode(foo) on the range" in the middle of a
long algorithm, and I don't want magic happening at that point just
because DOM Range fires notifications before returning from
insertNode.

Also, even if specs have precise control, I take it the idea is
authors won't, right?  If a library wants to implement some fancy
feature and be compatible with users of the library firing these
notifications, they'd really want to be able to control when
notifications are fired, just like specs want to.  In practice, the
only reason this isn't an issue with DOM mutation events is because
they can say "don't use them", and in fact people rarely do use them,
but that doesn't seem ideal -- it's just saying library authors
shouldn't bother to be robust.


In working on Model Driven Views (http://code.google.com/p/mdv), we've
run into exactly this problem, and have developed an approach we think
is promising.

The idea is to more or less take Jonas's proposal, but instead of
firing callbacks immediately before the outer-most mutation returns,
mutations are recorded for a given observer and handed to it as an
in-order sequence at the end of the event.


What is the advantage compared to Jonas' proposal?
I think one could implement your proposal on top
of Jonas' proposal - especially since both keep the order
of the mutations.
What is at the 'end' of the event? You're not talking about a
DOM event here, but something else.
How is that different compared to "immediately before the outer-most
mutation"?




var observer = window.createMutationObserver(callback);

Why is createMutationObserver needed?



document.body.addSubtreeChangedObserver(observer);
document.body.addSubtreeAttributeChangedObserver(observer);
...
var div = document.createElement('div');
document.body.appendChild(div);
div.setAttribute('data-foo', 'bar');
div.innerHTML = '<b>something</b> <i>something else</i>';
div.removeChild(div.childNodes[1]);
...

// mutationList is an array, all the entries added to
// |observer| during the preceding script event
function callback(mutationList) {
// mutationList === [
//  { type: 'ChildlistChanged', target: document.body, inserted: [div] },
//  { type: 'AttributeChanged', target: div, attrName: 'data-foo' },
//  { type: 'ChildlistChanged', target: div, inserted: [b, i] },
//  { type: 'ChildlistChanged', target: div, removed: [i] }
// ];
}



Maybe this is a stupid question, since I'm not familiar at all with
the use-cases involved, but why can't we delay firing the
notifications until the event loop spins?  If we're already delaying
them such that there are no guarantees about what the DOM will look
like by the time they fire, it seems like delaying them further
shouldn't hurt the use-cases too much more.  And then we don't have to
put further effort into saying exactly when they fire for each method.


Agreed.

For context, after considering this issue, we've tentatively concluded
a few things that don't seem to be widely agreed upon:

1) In terms of when to notify observers: Sync is too soon. Async (add
a Task) is too late.

- The same reasoning for why firing sync callbacks in the middle of
DOM operations is problematic for C++ also applies to application
script. Calling mutation observers synchronously can invalidate the
assumptions of the code which is making the modifications. It's better
to allow one bit of code to finish doing what it needs to and let
mutation observers operate later over the changes.

- Many uses of mutation events would actually *prefer* to not run sync
because the originating code may be making multiple changes which
more or less comprise a transaction. For consistency and
performance, 

Re: [widgets] What is the status and plan for Widget URI spec?

2011-06-30 Thread Robin Berjon
On Jun 29, 2011, at 16:33 , Arthur Barstow wrote:
 Robin - what is the status and plan for the Widget URI spec?
 
  http://dev.w3.org/2006/waf/widgets-uri/

The status is that it's hanging on URI scheme registration. I'm afraid that I 
simply don't have the bandwidth to handle that at this point. My preferred 
approach would be to skip URI registration and register it after it's 
successfully deployed as a de facto scheme but that's neither nice nor proper :)

I get the impression that Marcos wanted to change a few things about it, but 
otherwise it's workable.

 All - if you have any implementation data for this spec, please let us know.

Widgeon supports it.

-- 
Robin Berjon
  Robineko (http://robineko.com/)
  Twitter: @robinberjon






Re: [Widgets] Mozilla open apps

2011-06-30 Thread Robin Berjon
On Jun 28, 2011, at 20:17 , Scott Wilson wrote:
 I think Bruce Lawson was dropping a big hint the other day to look again at 
 the questions Mike posed a long while ago! I know there was discussion at the 
 time, but I think both initiatives have moved on somewhat so it's worth 
 returning to.

Thanks for picking this up, I think it is indeed a good idea.

 On 20 Oct 2010, at 19:40, Mike Hanson wrote:
 In-Browser/live content usage
 Our goal is to encompass in-browser application usage, where some subset 
 of normal web browsing activity is identified as an app.  
 
 This means that we need to identify some subset of the URL value space that 
 belongs to an app.  Our current approach (which is close to the one 
 proposed by Google [1]) is to identify a URL set (Google allows regexes; we 
 propose domain-matching with a path prefix).  Google proposes allowing a 
 carve-out of browsable URLs, which can be visited without leaving the app, 
 presumably for federated login.
 
 Specifically, this means that the content element would need to be replaced 
 or augmented with some sort of app_urls or somesuch.  It also seems like the 
 HTML5 App Cache proposal is addressing the same problem space as content; is 
 there some way to harmonize all of this?  If we get this right we can 
 perhaps get a smooth continuum from live web site to dedicated browser 
 instance to widget.

I certainly think that content could be augmented to point to a URI (and the 
configuration would then be sent on its own). I think that this matches some 
ideas discussed previously in which a widget could be endowed with an HTTP 
origin in order to function as if it had been acquired from that source. This 
could be achieved for instance by actually acquiring it from said source (and 
keeping track of that fact) or by using a signature properly tying the two 
together.

With that approach, the configuration document becomes just a way of describing 
an app, irrespective of whether it's on the web or on a USB key. AppCache is 
just a way of controlling the caching of content on the Web, so that the two 
are complementary. Then there's packaging: it would be really nice if it could 
be used for more than widgets, notably to package a set of resources (CSS, 
images, scripts, etc.) that can then be acquired with a single request, what's 
more compressed. This would not just help performance, it would also help with 
dependency management when you're fanning your content out to multiple servers 
and as the content is being deployed you hit stupid issues with v2447 of the 
CSS being loaded alongside v2448 of the script, which then causes hard to track 
user bugs (and gets even more amusing with caching involved).

This then leaves widgets as nothing more than a convention to have the 
configuration document at a specific location inside a package (oh, and some 
silly rules about content localisation).

 Per-application metadata repository and access API
 We propose that the application repository maintain some metadata that is 
 in addition to, and along side, the manifest.  Specifically, an 
 authorization URL, signature, installation location and date, and perhaps an 
 update URL.

Have you looked at how this could integrate with PaySwarm for instance?

 You could try to use the Widget API for this, but the trust model isn't 
 exactly right.  Our intent is that the user has a trust relationship with a 
 store or directory, and has a less trusted relationship with the app; the 
 app does not discover the authorization URL, for example.  In our thinking 
 this implies that there is a app repository object that has a couple 
 methods; AFAIK there isn't an equivalent object that has the list of all 
 installed widgets in the spec.  Am I missing something?

You're correct, this isn't part of the widgets family of specs. But it's 
something that could be considered. There currently only is access to 
information about the widget itself, not about the runtime in which it is 
executing.

 XML vs. JSON
 Cultural nit: many web developers have trouble with complex XML encodings.  
 It's frustrating, but true.  Would the specification of a JSON dialect be 
 amenable, or is it that a non-starter?

The widget configuration hardly rates as complex. That being said, it should 
be rather straightforward to produce a JSON alternative (and both could be 
accepted, with one taking precedence if both exist). My primary concern would 
be localisation. While I'm not particularly fond of the content localisation 
mechanism available in widgets, the configuration localisation is actually 
quite usable (and, I think, useful). I can think of ways of representing that 
in JSON, but they aren't particularly nice. At all.

 Localization Model
 The xml:lang based approach is structurally analogous (though somewhat tedious 
 to handle in JSON, but that's not really important).  In the absence of a 
 content element, the folder-based localization strategy could hit some 
 bumps.  

Re: Mutation events replacement

2011-06-30 Thread Robin Berjon
On Jun 4, 2009, at 12:07 , Jonas Sicking wrote:
 Here's an API that might work:
 
 The following methods are added to the Document, Element and
 DocumentFragment interfaces:
 
  addAttributeChangedListener(NodeDataCallback);
  addSubtreeAttributeChangedListener(NodeDataCallback);
  addChildlistChangedListener(NodeDataCallback);
  addSubtreeChangedListener(NodeDataCallback);
  addTextDataChangedListener(NodeDataCallback);
  removeAttributeChangedListener(NodeDataCallback);
  removeSubtreeAttributeChangedListener(NodeDataCallback);
  removeChildlistChangedListener(NodeDataCallback);
  removeSubtreeChangedListener(NodeDataCallback);
  removeTextDataChangedListener(NodeDataCallback);

Just a thought: I wonder if it might be interesting to also support:

  addClassListChangedListener(NodeDataCallback)
  addSubtreeClassListChangedListener(NodeDataCallback)
  removeClassListChangedListener(NodeDataCallback)
  removeSubtreeClassListChangedListener(NodeDataCallback)
?

I assume that you could get the same information with the Attribute variants, 
but with more noise (and if using SVG, with a lot more noise). I'll admit that 
I haven't thought this through properly, but the thought popped up because 
class changes is overwhelmingly what I find myself wanting to be notified of 
most often.
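
For comparison, here is roughly what the noisier route looks like: a toy filter over mutation records shaped like the example earlier in this thread (not a real API), reducing attribute notifications to class changes only.

```javascript
// Toy filter over mutation records shaped like the example earlier in the
// thread (illustrative only, not a real API): keep only class changes.
function classChanges(mutationList) {
  return mutationList.filter(
    m => m.type === 'AttributeChanged' && m.attrName === 'class'
  );
}

const mutations = [
  { type: 'AttributeChanged', target: 'div#a', attrName: 'class' },
  { type: 'AttributeChanged', target: 'div#a', attrName: 'data-foo' },
  { type: 'ChildlistChanged', target: 'div#a' },
];

console.log(classChanges(mutations).length); // 1
```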

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: [widgets] Plan to get Widget Updates LC ready?

2011-06-30 Thread Rich Tibbett

Arthur Barstow wrote:

Richard, Marcos - what is the plan to get Widget Updates spec LC ready
(see [1] for LC requirements)?

http://dev.w3.org/2006/waf/widgets-updates/


I think Marcos wanted to have a pass over the spec. We didn't receive 
much feedback on the previous Working Draft and we've since implemented 
this for Opera Extensions.


I'm happy to push this to LC pending Marcos' review.

- Rich



[webstorage] Plan to move the spec to Last Call Working Draft

2011-06-30 Thread Arthur Barstow
Given the lack of support for stopping work on Web Storage [1], I'd like 
to get consensus on the plan to move it to Last Call Working Draft.


Currently there are two open bugs:

1. Bug 12111: spec for Storage object getItem(key) method does not 
match implementation behavior. PLH created a version of the spec that 
addresses this bug [12111-fix].


2. Bug 13020: No user agent will implement the storage mutex so this 
passage does not reflect reality.


There are different opinions on the priority of Web Storage ...

* Web Storage is a low priority and the Editor will get to it when he 
gets to it


* Web Storage is a high priority because the lack of a LCWD will block 
at least the Widget Interface spec from progressing on the Rec track


There are various options on what to do next, including:

1. Fix 12111 and 13020 and move Web Storage to LCWD

2. Leave Web Storage as is and eventually implementations will match the 
spec


3. Do #1 in one version of the spec and keep #2 as a separate version of 
the spec (e.g. L1 and L2).


Comments on these options are welcome.

If you prefer #1, please indicate if you are willing to create a 
fix/patch for bug 13020.


-AB

[1] http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1110.html
[12111] http://www.w3.org/Bugs/Public/show_bug.cgi?id=12111
[12111-fix] http://www.w3.org/2011/06/Web%20Storage.html
[13020] http://www.w3.org/Bugs/Public/show_bug.cgi?id=13020




Publishing From-Origin Proposal as FPWD

2011-06-30 Thread Anne van Kesteren

Hi hi,

Is there anyone who has objections against publishing  
http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html as a FPWD?  
The idea is mainly to gather more feedback to see if there is any interest  
in taking this forward.


(Added public-web-security because of the potential for doing this in CSP  
instead. Though that would require a slight change of scope for CSP, which  
I'm not sure is actually desirable.)


Cheers,


--
Anne van Kesteren
http://annevankesteren.nl/



Re: [widgets] Plan to get Widget Updates LC ready?

2011-06-30 Thread Marcos Caceres
On Thu, Jun 30, 2011 at 3:42 PM, Rich Tibbett ri...@opera.com wrote:
 Arthur Barstow wrote:

 Richard, Marcos - what is the plan to get Widget Updates spec LC ready
 (see [1] for LC requirements)?

 http://dev.w3.org/2006/waf/widgets-updates/

 I think Marcos wanted to have a pass over the spec. We didn't receive much
 feedback on the previous Working Draft and we've since implemented this for
 Opera Extensions.

 I'm happy to push this to LC pending Marcos' review.

I'll try to get onto it in the next few weeks.



-- 
Marcos Caceres
http://datadriven.com.au



Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Tue, Jun 28, 2011 at 2:24 PM, Jonas Sicking jo...@sicking.cc wrote:

 1. DOMNodeRemoved is fired *before* a mutation takes place. This one's
 tricky since you have to figure out all the removals you're going to
 do, then fire events for them, and then hope that the mutations
 actually still makes sense.


In WebKit, at least, we don't do this during editing actions and
execCommand.  In particular, we've delayed DOMNodeRemoved events to fire
after all mutations are done (violating the spec).  We've got a few bug
reports saying that the event is not fired (i.e. invisible) to event
listeners on ancestor nodes because when the event is fired, the node has
already been removed.

Even before we made that change, WebKit always fired DOMNodeRemoved as we
remove nodes instead of figuring out all the removals because that's too
hard.

- Ryosuke


Re: Publishing From-Origin Proposal as FPWD

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 7:22 AM, Anne van Kesteren wrote:

 Hi hi,
 
 Is there anyone who has objections against publishing 
 http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html as a FPWD? The 
 idea is mainly to gather more feedback to see if there is any interest in 
 taking this forward.
 
 (Added public-web-security because of the potential for doing this in CSP 
 instead. Though that would require a slight change of scope for CSP, which 
 I'm not sure is actually desirable.)

I approve of publishing this as FPWD.

I also don't think it makes sense to tie this to CSP.

Regards,
Maciej




CfC: publish Proposed Recommendation for Widget Packaging and XML Configuration; deadline July 7

2011-06-30 Thread Arthur Barstow
The comment period for the 7-June-2011 LCWD of the Widget Packaging and 
XML Configuration spec ended with no comments and as documented in the 
spec's Implementation Report [ImplRept], there are 4 implementations 
that pass 100% of the test suite. As such, this is Call for Consensus to 
publish a Proposed Recommendation (PR) as indicated on May 26 [PR-plan]:


 http://dev.w3.org/2006/waf/widgets/

Note the Process Document includes the following regarding the entrance 
criteria for a PR and the WG's requirements:


[[
http://www.w3.org/2005/10/Process-20051014/tr.html#cfr

* Shown that each feature of the technical report has been implemented. 
Preferably, the Working Group SHOULD be able to demonstrate two 
interoperable implementations of each feature.

]]

If you have any comments about this proposal, please send them to 
public-webapps@w3.org by July 7 at the latest.


-Art Barstow

[ImplRept] http://dev.w3.org/2006/waf/widgets/imp-report/
[PR-Plan] 
http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0712.html






CfC: publish Proposed Recommendation for Widget Digital Signature; deadline July 7

2011-06-30 Thread Arthur Barstow
The comment period for the 7-June-2011 LCWD of the Widget Digital 
Signature spec ended with no comments and as documented in the spec's 
Implementation Report [ImplRept], there are 2 implementations that pass 
100% of the test suite's Mandatory feature tests. As such, this is Call 
for Consensus to publish a Proposed Recommendation (PR) as indicated on 
May 26 [PR-plan]:


 http://dev.w3.org/2006/waf/widgets-digsig/

Note the Process Document includes the following regarding the entrance 
criteria for a PR and the WG's requirements:


[[
http://www.w3.org/2005/10/Process-20051014/tr.html#cfr

* Shown that each feature of the technical report has been implemented. 
Preferably, the Working Group SHOULD be able to demonstrate two 
interoperable implementations of each feature.

]]

If you have any comments about this proposal, please send them to 
public-webapps@w3.org by July 7 at the latest.


-Art Barstow

[ImplRept] http://dev.w3.org/2006/waf/widgets-digsig/imp-report/
[PR-Plan] 
http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0712.html






[Bug 13104] New: 1) ping(msg); //allow client to send server ping as per websocket spec 2) onpong(); //allow client to receive response of ping

2011-06-30 Thread bugzilla
http://www.w3.org/Bugs/Public/show_bug.cgi?id=13104

   Summary: 1) ping(msg); //allow client to send server ping as
per websocket spec 2) onpong(); //allow client to
receive response of ping
   Product: WebAppsWG
   Version: unspecified
  Platform: Other
   URL: http://www.whatwg.org/specs/web-apps/current-work/#top
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P3
 Component: WebSocket API (editor: Ian Hickson)
AssignedTo: i...@hixie.ch
ReportedBy: contribu...@whatwg.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


Specification: http://dev.w3.org/html5/websockets/
Multipage: http://www.whatwg.org/C#top
Complete: http://www.whatwg.org/c#top

Comment:
1) ping(msg); //allow client to send server ping as per websocket spec
2) onpong(); //allow client to receive response of ping


Posted from: 65.5.190.254
User agent: Mozilla/5.0 (X11; Linux i686; rv:7.0a1) Gecko/20110630
Firefox/7.0a1

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
Hi Maciej!

First off, I really appreciate your willingness to get into the mix of
things. It's a hard problem and I welcome any help we can get to solve
it.

I also very much liked your outline of encapsulation and I would like
to start using the terminology you introduced.

I am even flattered to see the proposal you outlined, because it's
similar to the one we originally considered as part of the first
iteration of the API
(https://raw.github.com/dglazkov/component-model/cbb28714ada37ddbaf49b3b2b24569b5b5e4ccb9/dom.html)
or even earlier versions
(https://github.com/dglazkov/component-model/blob/ed6011596a0213fc1eb9f4a12544bb7ddd4f4894/api-idl.txt)

We did remove them however, and opted for the simplest possible API,
which effectively only exposes the shadow DOM part of the component
model (see my breakdown here
http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1345.html).

One of the things to keep in mind is that the proposal outlined in
http://dglazkov.github.com/component-model/dom.html is by no means a
complete component model API. It's just the smallest subset that can
already be useful in addressing some of the use cases listed in
http://wiki.whatwg.org/wiki/Component_Model_Use_Cases.

It seems obvious that it is better to have a few small, closely related
useful bits that could be combined into a bigger picture rather than
one large monolithic feature that can't be teased apart.

As for addressing encapsulation concerns, one of the simplest things
we could do is to introduce a flag on the ShadowRoot (we can discuss the
default value), which if set, prohibits access to it with the
element.shadow property.
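
For instance, the flag might be expressed in IDL roughly like this (a
sketch only; the attribute name is invented):

```
partial interface ShadowRoot {
  // When true, reading element.shadow from outside code throws,
  // hiding the shadow tree (type 2 encapsulation in Maciej's terms).
  attribute boolean encapsulated;
};
```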

:DG

On Wed, Jun 29, 2011 at 9:03 PM, Maciej Stachowiak m...@apple.com wrote:


 I am not a fan of this API because I don't think it provides sufficient 
 encapsulation. The words encapsulation and isolation have been used in 
 different ways in this discussion, so I will start with an outline of 
 different possible senses of encapsulation that could apply here.

 == Different kinds of encapsulation ==

 1) Encapsulation against accidental exposure - DOM Nodes from the shadow tree 
 are not leaked via pre-existing generic APIs - for example, events flowing 
 out of a shadow tree don't expose shadow nodes as the event target,

 2) Encapsulation against deliberate access - no API is provided which lets 
 code outside the component poke at the shadow DOM. Only internals that the 
 component chooses to expose are exposed.

 3) Inverse encapsulation - no API is provided which lets code inside the 
 component see content from the page embedding it (this would have the effect 
 of something like sandboxed iframes or Caja).

 4) Isolation for security purposes - it is strongly guaranteed that there is 
 no way for code outside the component to violate its confidentiality or 
 integrity.

 5) Inverse isolation for security purposes - it is strongly guaranteed that 
 there is no way for code inside the component to violate the confidentiality 
 or integrity of the embedding page.


 I believe the proposed API has property 1, but not properties 2, 3 or 4. The 
 webkitShadow IDL attribute violates property #2, I assume it is obvious why 
 the others do not hold.

 I am not greatly interested in 3 or 4, but I believe #2 is important for a 
 component model.


 == Why is encapsulation (type 2) important for components? ==

 I believe type 2 encapsulation is important, because it allows components to 
 be more maintainable, reusable and robust. Type 1 encapsulation keeps 
 components from breaking the containing page accidentally, and can keep the 
 containing page from breaking the component. If the shadow DOM is exposed, 
 then you have the following risks:

 (1) A page using the component starts poking at the shadow DOM because it can 
 - perhaps in a rarely used code path.
 (2) The component is updated, unaware that the page is poking at its guts.
 (3) Page adopts new version of component.
 (4) Page breaks.
 (5) Page author blames component author or rolls back to old version.

 This is not good. Information hiding and hiding of implementation details are 
 key aspects of encapsulation, and are good software engineering practice. 
 Dmitri has argued that pages today do a version of components with no 
 encapsulation whatsoever, because many are written by monolithic teams that 
 control the whole stack. This does not strike me as a good argument. 
 Encapsulation can help teams maintain internal interfaces as they grow, and 
 can improve reusability of components to the point where maybe sites aren't 
 quite so monolithically developed.

 Furthermore, consider what has happened with JavaScript. While the DOM has no 
 good mechanism for encapsulation, JavaScript offers a choice. Object 
 properties are not very encapsulated at all, by default anyone can read or 
 write. But local variables in a closure are fully encapsulated. It's more and 
 more considered a good practice in JavaScript to build 
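
The contrast between exposed object properties and closure-encapsulated
variables can be sketched in plain JavaScript (the names here are
illustrative):

```javascript
// Object properties: by default anyone can read or write them.
const exposed = { count: 0 };
exposed.count = 42; // outside code can poke at internals freely

// Closure variables: only the functions created inside can see them.
function makeCounter() {
  let count = 0; // fully encapsulated
  return {
    increment() { return ++count; },
    value() { return count; }
  };
}

const counter = makeCounter();
counter.increment();
// There is no way to reach `count` directly from outside:
console.log(counter.count);   // undefined
console.log(counter.value()); // 1
```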

Re: Component Model: Landing Experimental Shadow DOM API in WebKit

2011-06-30 Thread Maciej Stachowiak

On Jun 29, 2011, at 9:08 AM, Dimitri Glazkov wrote:

 Hi Folks!
 
 With use cases (http://wiki.whatwg.org/wiki/Component_Model_Use_Cases)

So I looked at this list of use cases. It seems to me almost none of these are 
met by the proposal at http://dglazkov.github.com/component-model/dom.html.

Can you give a list of use cases that are actually intended to be addressed by 
this proposal?

(I would be glad to explain in more detail why the requirements aren't 
satisfied in case it isn't obvious; the main issues being that the proposal on 
the table can't handle multiple bindings, can't handle form controls, and lacks proper 
(type 2) encapsulation).

These use cases are also, honestly, rather vague. In light of this, it's very 
hard to evaluate the proposal, since it has no obvious relationship to its 
supporting use cases.

Could we please get the following to be able to evaluate this component 
proposal:

- List of use cases, ideally backed up with very concrete examples, not just 
vague high-level statements
- Identification of which use cases the proposal even intends to address, and 
which will possibly be addressed later
- Explanation of how the proposal satisfies the use cases it is intended to 
address
- For bonus points, explanation of how XBL2 fails to meet the stated use cases

Regards,
Maciej




Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 10:57 AM, Dimitri Glazkov wrote:

 Hi Maciej!
 
 First off, I really appreciate your willingness to get into the mix of
 things. It's a hard problem and I welcome any help we can get to solve
 it.
 
 I also very much liked your outline of encapsulation and I would like
 to start using the terminology you introduced.
 
 I am even flattered to see the proposal you outlined, because it's
 similar to the one we originally considered as part of the first
 iteration of the API
 (https://raw.github.com/dglazkov/component-model/cbb28714ada37ddbaf49b3b2b24569b5b5e4ccb9/dom.html)
 or even earlier versions
 (https://github.com/dglazkov/component-model/blob/ed6011596a0213fc1eb9f4a12544bb7ddd4f4894/api-idl.txt)
 
 We did remove them however, and opted for the simplest possible API,
 which effectively only exposes the shadow DOM part of the component
 model (see my breakdown here
 http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1345.html).
 
 One of the things to keep in mind is that the proposal outlined in
 http://dglazkov.github.com/component-model/dom.html is by no means a
 complete component model API. It's just the smallest subset that can
 already be useful in addressing some of the use cases listed in
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases.
 
 It seems obvious that it is better to have a few small, closely related
 useful bits that could be combined into a bigger picture rather than
 one large monolithic feature that can't be teased apart.

The problem is that some pervasive properties (encapsulation, security, etc) 
can't be added after the fact to a system that doesn't have them designed in.

 
 As for addressing encapsulation concerns, one of the simplest things
 we could do is to introduce a flag on the ShadowRoot (we can discuss the
 default value), which if set, prohibits access to it with the
 element.shadow property.

Why is that better than my proposal? I believe all the benefits I listed for my 
proposal over yours still apply to this new proposal. Can you either rebut 
those stated benefits, or tell me what benefits this version has over mine?



Regards,
Maciej




Re: Mutation events replacement

2011-06-30 Thread David Flanagan

[Callback, NoInterfaceObject]
interface MutationCallback
{
// aNode is the node to which the listener was added.
// aChangeTarget is the node in which the mutation was made.
void handleMutation(in Node aNode, in Node aChangeTarget);
};


Won't the callback be invoked as if it were a method of the node to 
which the listener was added?  That is, inside the callback function 
won't the value of 'this' be the same as the value of the aNode argument?


David Flanagan



Re: Mutation events replacement

2011-06-30 Thread David Flanagan


Aryeh Gregor wrote:

Maybe this is a stupid question, since I'm not familiar at all with
the use-cases involved, but why can't we delay firing the
notifications until the event loop spins?  If we're already delaying
them such that there are no guarantees about what the DOM will look
like by the time they fire, it seems like delaying them further
shouldn't hurt the use-cases too much more.  And then we don't have to
put further effort into saying exactly when they fire for each method.
  But this is pretty obvious, so I assume there's some good reason not
to do it.


I'll add my own possibly stupid question... Can we go in the opposite 
direction and fire mutation events immediately without queuing, but 
forbid any DOM modifications from the event callbacks?  Libraries that 
simply want to keep their internal state in sync with the DOM can do 
that.  Code that really needs to modify the DOM would have to manually 
queue a task with setTimeout() to make the change later.


DOM Level 2 has the notion of readonly nodes--any attempt to modify them 
throws NO_MODIFICATION_ALLOWED_ERR. I've never understood how nodes 
became readonly, and the concept seems to have been removed from 
DOMCore, but I suppose it could be re-introduced if it allowed simpler 
mutation events.
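
As a purely illustrative sketch (the locking mechanism and names below
are invented, not the DOM's actual machinery), enforcing a
readonly-during-callbacks rule might look like:

```javascript
// Toy tree that rejects mutations while a callback is running.
// NO_MODIFICATION_ALLOWED_ERR is the DOM Level 2 exception name;
// everything else here is made up for illustration.
function makeTree() {
  var locked = false;
  var children = [];
  return {
    // Run fn with the tree locked, as a mutation callback would be.
    lock: function (fn) {
      locked = true;
      try { fn(); } finally { locked = false; }
    },
    appendChild: function (node) {
      if (locked) {
        throw new Error("NO_MODIFICATION_ALLOWED_ERR");
      }
      children.push(node);
      return node;
    },
    childCount: function () { return children.length; }
  };
}

var tree = makeTree();
tree.appendChild("a");

var error = null;
tree.lock(function () {
  try { tree.appendChild("b"); } catch (e) { error = e; }
});

console.log(error !== null);     // true: mutation rejected while locked
console.log(tree.childCount());  // 1
```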


David




Re: Mutation events replacement

2011-06-30 Thread Olli Pettay

On 06/30/2011 09:36 PM, David Flanagan wrote:

[Callback, NoInterfaceObject]
interface MutationCallback
{
// aNode is the node to which the listener was added.
// aChangeTarget is the node in which the mutation was made.
void handleMutation(in Node aNode, in Node aChangeTarget);
};


Won't the callback be invoked as if it were a method of the node to
which the listener was added? That is, inside the callback function
won't the value of 'this' be the same as the value of the aNode argument?


'this' won't be the same as aNode if
{ handleMutation: function(aNode,aChangeTarget) { ... } }
syntax is used.
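
The distinction can be simulated in plain JavaScript; the dispatcher
below is a toy stand-in for the DOM implementation, not the real
mechanism:

```javascript
// Toy dispatcher mimicking how mutation callbacks might be invoked.
function dispatchMutation(listener, aNode, aChangeTarget) {
  if (typeof listener === "function") {
    // Function listeners are called with `this` set to the node,
    // like EventListener functions.
    return listener.call(aNode, aNode, aChangeTarget);
  }
  // Object listeners are invoked as a method of the listener object,
  // so `this` is the listener, not aNode.
  return listener.handleMutation(aNode, aChangeTarget);
}

const node = { name: "node" };

let thisFromFunction;
dispatchMutation(function (aNode) { thisFromFunction = this; }, node, node);

let thisFromObject;
const objListener = {
  handleMutation(aNode) { thisFromObject = this; }
};
dispatchMutation(objListener, node, node);

console.log(thisFromFunction === node);      // true
console.log(thisFromObject === objListener); // true
console.log(thisFromObject === node);        // false
```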


-Olli






Re: [webstorage] Plan to move the spec to Last Call Working Draft

2011-06-30 Thread Scott Wilson
On 30 Jun 2011, at 14:55, Arthur Barstow wrote:

 Given the lack of support for stopping work on Web Storage [1], I'd let to 
 get consensus on the plan to move it to Last Call Working Draft.
 
 Currently there are two open bugs:
 
 1. Bug 12111: spec for Storage object getItem(key) method does not match 
 implementation behavior. PLH created a version of the spec that addresses 
 this bug [12111-fix].

 
 2. Bug 13020: No user agent will implement the storage mutex so this passage 
 does not reflect reality.
 
 There are different opinions on the priority of Web Storage ...
 
 * Web Storage is a low priority and the Editor will get to it when he gets to 
 it
 
 * Web Storage is a high priority because the lack of a LCWD will block at 
 least the Widget Interface spec from progressing on the Rec track
 
 There are various options on what to do next, including:
 
 1. Fix 12111 and 13020 and move Web Storage to LCWD

+1

 
 2. Leave Web Storage as is and eventually implementations will match the spec
 
 3. Do #1 in one version of the spec and keep #2 as a separate version of the 
 spec (e.g. L1 and L2).
 
 Comments on these options are welcome.
 
 If you prefer #1, please indicate if you are willing to create a fix/patch 
 for bug 13020.

Yes.

 
 -AB
 
 [1] http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1110.html
 [12111] http://www.w3.org/Bugs/Public/show_bug.cgi?id=12111
 [12111-fix] http://www.w3.org/2011/06/Web%20Storage.html
 [13020] http://www.w3.org/Bugs/Public/show_bug.cgi?id=13020
 
 




Re: Mutation events replacement

2011-06-30 Thread Boris Zbarsky

On 6/30/11 2:56 PM, David Flanagan wrote:

I'll add my own possibly stupid question... Can we go in the opposite
direction and fire mutation events immediately without queuing, but
forbid any DOM modifications from the event callbacks?


Forbid DOM modifications to all DOMs?  Or just one DOM?  Is 
window.close() forbidden?  Is spinning the event loop (e.g. sync XHR) 
forbidden?


This is actually a pretty hard problem to solve, and still wouldn't 
really solve the performance issues for DOM events.


-Boris



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
Maciej, as promised on #whatwg, here's a more thorough review of your
proposal. I am in agreement in the first parts of your email, so I am
going to skip those.

 == Are there other limitations created by the lack of encapsulation? ==

 My understanding is yes, there are some serious limitations:

 (1) It won't be possible (according to Dmitri) to attach a binding to an 
 object that has a native shadow DOM in the implementation (e.g. form 
 controls). That's because there can only be one shadow root, and form 
 controls have already used it internally and made it private. This seems like 
 a huge limitation. The ability to attach bindings/components to form elements 
 is potentially a huge win - authors can use the correct semantic element 
 instead of div soup, but still have the total control over look and feel from 
 a custom script-based implementation.

 (2) Attaching more than one binding with this approach is a huge hazard. 
 You'll either inadvertently blow away the previous, or won't be able to 
 attach more than one, or if your coding is sloppy, may end up mangling both 
 of them.

 I think these two limitations are intrinsic to the approach, not incidental.

I would like to frame this problem as multiple-vs-single shadow tree
per element.

Encapsulation is achievable with single shadow tree per element by
removing access via webkitShadow. You can discover whether a tree
exists (by the fact that an exception is thrown when you attempt to
set webkitShadow), but that's hardly breaking encapsulation.

The issues you've described above are indeed real -- if you view
adding new behavior to elements as a process of binding, that is,
something added to existing elements, possibly more than once. If we
decide that this is the correct way to view attaching behavior, we
definitely need to fix this.

I attempted to articulate a different view here
http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0941.html.
Here, adding new behavior to elements means creating a sub-class of an
element. This should be a very familiar programming concept, probably
more understood than the decorator or mixin-like binding approach.

For the key use case of UI widgets, sub-classing is very natural. I
take a div, and sub-class it into a hovercard
(http://blog.twitter.com/2010/02/flying-around-with-hovercards.html).
I rarely bind a hovercard behavior to some random element -- not just
because I typically don't need to, but also because I expect a certain
behavior from the base element from which to build on. Binding a
hovercard to an element that doesn't display its children (like img or
input) is useless, since I want to append child nodes to display that
user info.

I could then make superhovercard by extending the hovercard. The
single shadow DOM tree works perfectly in this case, because you
either:
1) inherit the tree of the subclass and add behavior;
2) override it.

In cases where you truly need a decorator, use composition. Once we
have the basics going, we may contemplate concepts like inherited
(http://dev.w3.org/2006/xbl2/#the-inherited-element) to make
sub-classing more convenient.

Sub-classing as a programming model is well-understood, and easy to grasp.
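
As a rough sketch of that model in plain JavaScript (all names
hypothetical, and the objects standing in for real DOM elements):

```javascript
// Base "component": a div-like widget with a render method.
function Hovercard(userName) {
  this.userName = userName;
}
Hovercard.prototype.render = function () {
  return "hovercard for " + this.userName;
};

// Sub-class: inherits the base behavior and overrides part of it,
// mirroring the two options above (inherit the tree, or override it).
function SuperHovercard(userName) {
  Hovercard.call(this, userName);
}
SuperHovercard.prototype = Object.create(Hovercard.prototype);
SuperHovercard.prototype.constructor = SuperHovercard;
SuperHovercard.prototype.render = function () {
  return "super " + Hovercard.prototype.render.call(this);
};

var card = new SuperHovercard("alice");
console.log(card.render());             // "super hovercard for alice"
console.log(card instanceof Hovercard); // true
```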

On the other hand, the decorators are less known and certainly carry
hidden pains. How do you resolve API conflicts (two bindings have two
properties/functions by the same name)? As a developer, how do you
ensure a stable order of bindings (bindings competing for the z-index
and depending on the order of they are initialized, for example)?



 == OK Mr. Fancypants, do you have a better proposal? ==

 I haven't thought deeply about this, but here's a sketch of a component model 
 that is just as trivial to use and implement as what is proposed, yet 
 provides true encapsulation. It sticks with the idea that the only way to 
 create a component DOM is programmatically. But it immediately provides some 
 advantages that I'll expand on after throwing out the IDL:

 interface Component {
    // deliberately left blank
 }

 interface TreeScope : Node
  {
    readonly attribute TreeScope parentTreeScope;
    Element getElementById (in DOMString elementId);
 }

  interface BindingRoot : TreeScope {
   attribute bool applyAuthorSheets;
   readonly attribute Element bindingHost;
  };

 [Callback]
 interface ComponentInitializer {
  void initialize(in BindingRoot binding);
 };

  partial interface Document {
     Component createComponent(in Node template, in ComponentInitializer 
 initializer);
  };

 partial interface Element {
    void bindComponent(in Component component);
    void unbindComponent(in Component component);
 }

 The way this works is as follows:

 (1) The component provider creates a DOM tree that provides templates for 
 bindings that instantiate that component.
 (2) The component provider also makes an init function (represented above as 
 a callback interface) which is called whenever an instance of the component 
 is bound (see below).
 (3) The component provider 

Re: Mutation events replacement

2011-06-30 Thread David Flanagan

On 6/30/11 12:26 PM, Boris Zbarsky wrote:

On 6/30/11 2:56 PM, David Flanagan wrote:

I'll add my own possibly stupid question... Can we go in the opposite
direction and fire mutation events immediately without queuing, but
forbid any DOM modifications from the event callbacks?


Forbid DOM modifications to all DOMs?  Or just one DOM?

Is it clearer if I say forbid any modifications to the document tree?  
I suspect that DOM Level 2 Core covers the modifications I have in mind 
when it throws NO_MODIFICATION_ALLOWED_ERR.


It would be nice to only lock the document tree in which the mutation 
occurred.  That seems doable to me, but maybe I'm missing something.  In 
Jonas's original proposal, is the notifyingCallbacks flag per-document 
or global? (I'm assuming per-document, but the intent is not actually 
clear to me from the text).
Is window.close() forbidden?  Is spinning the event loop (e.g. sync 
XHR) forbidden?
I wasn't intending that those be forbidden.  Won't those cases be 
problematic whatever mutation event solution is adopted?


The point of my proposal was to guarantee that mutation events are 
delivered when the tree is in its freshly-mutated state and avoid the 
need to maintain a list of pending callbacks.  The fact that the DOM 
already has the NO_MODIFICATION_ALLOWED_ERR infrastructure (and that it 
goes mostly unused) seems helpful.  The current proposal relies on a 
per-document notifyingCallbacks flag.  Adding a per-document 
theTreeIsLocked flag seems like it might be of comparable specification 
complexity.  (I'll defer to Boris and other about the implementation 
complexity.)


From a web developer's perspective, what should a mutation event mean?

a) The document tree just changed. The current state of the tree 
reflects the change and no other changes have occurred in the meantime. 
You can look, but you can't touch the tree.


b) The document has changed, but the current state of the tree may 
include other, subsequent changes that I'm not going to tell you about 
yet.  Feel free to change the tree and mess things up even more for the 
next event handler in the queue.  :-)


I think (a) is more useful, easier for web developers to understand, and 
less surprising.


This is actually a pretty hard problem to solve, and still wouldn't 
really solve the performance issues for DOM events.

Still better than the current DOM mutation events, though, right?  Are you 
saying that synchronous callbacks on a readonly tree would have worse 
performance than Jonas's and Olli's proposal?



-Boris


David



RE: [indexeddb] openCursor optional parameters issue

2011-06-30 Thread Israel Hilerio
On Tuesday, June 28, 2011 7:31 PM, Jonas Sicking wrote:
 On Tue, Jun 28, 2011 at 4:59 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Tuesday, June 28, 2011 12:49 PM, Jonas Sicking wrote:
  On Tue, Jun 28, 2011 at 10:53 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Monday, June 27, 2011 8:21 PM, Jonas Sicking wrote:
   On Mon, Jun 27, 2011 at 11:42 AM, Israel Hilerio
   isra...@microsoft.com
   wrote:
The IDBObjectStore.openCursor method is defined to have two
optional
   parameters:
* IDBRequest openCursor (in optional any range, in optional
unsigned short direction) raises (IDBDatabaseException);
   
Based on the examples in the spec, it seems we're envisioning
the method
   to be used in the following ways:
* objStore.openCursor();
* objStore.openCursor(keyRange);
* objStore.openCursor(keyRange, IDBCursor.PREV);
* objStore.openCursor(IDBCursor.PREV);
  
   No, that's not how optional parameters work in WebIDL. In order to
   specify an optional parameter, you always have to specify all
   preceding optional parameters. So only the following syntaxes are
   valid:
  
   * objStore.openCursor();
   * objStore.openCursor(keyRange);
   * objStore.openCursor(keyRange, IDBCursor.PREV);
  
Having any for the keyRange type makes it difficult to detect
the correct
   overloaded parameter for openCursor.
  
   The reason the first parameter is of type 'any' is so that you can
   pass either a IDBKeyRange or a value. So for example:
  
   req = objStore.openCursor(hello); req = index.openCursor(4);
  
   are valid. When called with a simple value on an object store the
   cursor will obviously always return 0 or 1 rows. For indexes it
   could return any number of rows though.
  
   This is actually already specified if you look at the steps for
   opening a
  cursor.
   The same holds true for many other functions, such as .get and .delete.
  
   However it's a very subtle feature that's easy to miss. If you
   have suggestions for how to make this more clear in the spec I'd
   love to hear them. I've been thinking that we should add
   non-normative, easy-to-understand text to explain each function,
   similar to what the
   HTML5 spec does when defining APIs.
  
   / Jonas
  
   What you're saying makes a lot of sense.  That was what I
   originally thought
  but what confused me was some of the examples in the current spec
  which suggest we want to do the following (Section 3.3.5):
   * objStore.openCursor(IDBCursor.PREV);
 
  I don't think we should allow this. The benefit of saving the author
  from writing objStore.openCursor(null, IDBCursor.PREV) isn't worth
  the complexity that is introduced, IMHO. We should just fix the example
 instead.
 
   Independent of how up to date the examples are, the issue with the
   way it is
  currently spec'ed is that there is an implied dependency between
  keyRange and Cursor direction.  In other words, you can't open a
  cursor without any keyRange and just a direction.  One possible way
  to resolve this is to allow the keyRange to be nullable.  This will
  allow us to define a cursor without a keyRange and with a direction:
   * objStore.openCursor(null, IDBCursor.PREV);
  
   Without something like this, it is not easy to get a list of all
   the records on
  the store going in the opposite direction from IDBCursor.NEXT.
 
  Indeed, it was the intent that this should be allowed. I suspect we
  simply haven't kept up to date with WebIDL changing under us. But I
  do think that the text in the algorithm does say to do the right
  thing when no keyrange (or key
  value) is supplied.
 
  / Jonas
 
  My concern is not having a clean mechanism to retrieve a regular cursor
 with an inverted order without knowing any records (first or last) in the
 list.  This seems like a common operation that is not supported today.
 
  These are some of the alternatives that I believe we have:
  * Support a null value for IDBKeyRange:
         -IDBRequest objStore.openCursor(null, IDBCursor.PREV);
  * Introduce a new specialized method to handle this scenario:
         -IDBRequest objStore.openDirectionalCursor(IDBCursor.PREV);
          * This will default internally to an IDBKeyRange with the properties
 defined below.
          * One advantage of this approach is that we don't have to expose a
            new IDBKeyRange constructor.
  * Define a new static keyRange constructor that is a catch all:
         -static IDBKeyRange.all();
          * The values for the new constructor would be:
                 IDBKeyRange.lower = undefined
                 IDBKeyRange.upper = undefined
                 IDBKeyRange.lowerOpen = false
                 IDBKeyRange.upperOpen = false
          * I believe these satisfy the conditions for a key is in a key
            range section [1].
 
          * We could pass this new keyRange to the existing openCursor method:
             -objStore.openCursor(IDBKeyRange.all(), IDBCursor.PREV);
 
  Let 
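
For reference, the catch-all constructor alternative could be expressed
in IDL roughly as follows (hypothetical; not part of the current spec):

```
partial interface IDBKeyRange {
  // A range matching every key:
  // lower = upper = undefined, lowerOpen = upperOpen = false.
  static IDBKeyRange all();
};
```

A cursor over all records in reverse order would then be opened as
objStore.openCursor(IDBKeyRange.all(), IDBCursor.PREV);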

Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:

 Maciej, as promised on #whatwg, here's a more thorough review of your
 proposal. I am in agreement in the first parts of your email, so I am
 going to skip those.
 
 == Are there other limitations created by the lack of encapsulation? ==
 
 My understanding is yes, there are some serious limitations:
 
 (1) It won't be possible (according to Dmitri) to attach a binding to an 
 object that has a native shadow DOM in the implementation (e.g. form 
 controls). That's because there can only be one shadow root, and form 
 controls have already used it internally and made it private. This seems 
 like a huge limitation. The ability to attach bindings/components to form 
 elements is potentially a huge win - authors can use the correct semantic 
 element instead of div soup, but still have the total control over look and 
 feel from a custom script-based implementation.
 
 (2) Attaching more than one binding with this approach is a huge hazard. 
 You'll either inadvertently blow away the previous, or won't be able to 
 attach more than one, or if your coding is sloppy, may end up mangling both 
 of them.
 
 I think these two limitations are intrinsic to the approach, not incidental.
 
 I would like to frame this problem as multiple-vs-single shadow tree
 per element.
 
 Encapsulation is achievable with single shadow tree per element by
 removing access via webkitShadow. You can discover whether a tree
 exists (by the fact that an exception is thrown when you attempt to
 set webkitShadow), but that's hardly breaking encapsulation.
 
 The issues you've described above are indeed real -- if you view
 adding new behavior to elements a process of binding, that is
 something added to existing elements, possibly more than once. If we
 decide that this is the correct way to view attaching behavior, we
 definitely need to fix this.
 
 I attempted to articulate a different view here
 http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0941.html.
 Here, adding new behavior to elements means creating a sub-class of an
 element. This should be a very familiar programming concept, probably
 more understood than the decorator or mixin-like binding approach.

How would your subclass idea resolve the two problems above?

 
 For the key use case of UI widgets, sub-classing is very natural. I
 take a div, and sub-class it into a hovercard
 (http://blog.twitter.com/2010/02/flying-around-with-hovercards.html).
 I rarely bind a hovercard behavior to some random element -- not just
 because I typically don't need to, but also because I expect a certain
 behavior from the base element from which to build on. Binding a
 hovercard to an element that doesn't display its children (like img or
 input) is useless, since I want to append child nodes to display that
 user info.
 
 I could then make superhovercard by extending the hovercard. The
 single shadow DOM tree works perfectly in this case, because you
 either:
 1) inherit the tree of the subclass and add behavior;
 2) override it.
 
 In cases where you truly need a decorator, use composition. Once we
 have the basics going, we may contemplate concepts like inherited
 (http://dev.w3.org/2006/xbl2/#the-inherited-element) to make
 sub-classing more convenient.
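[The inherit-or-override choice described above reads naturally as ordinary subclassing. A plain-JS sketch of the idea — class and method names like buildShadow() are invented for illustration, not a proposed API:]

```javascript
// Hypothetical sketch of "one shadow tree per element, specialized by
// subclassing". buildShadow() stands in for whatever hook produces the
// component's shadow tree; the names are invented, not a real API.
class Hovercard {
  buildShadow() {
    return ["avatar", "name", "bio"];
  }
}

class SuperHovercard extends Hovercard {
  // Option 1: inherit the base tree and add structure/behavior.
  buildShadow() {
    return [...super.buildShadow(), "follow-button"];
  }
}

class MinimalHovercard extends Hovercard {
  // Option 2: override the base tree entirely.
  buildShadow() {
    return ["name"];
  }
}
```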
 
 Sub-classing as a programming model is well-understood, and easy to grasp.
 
 On the other hand, the decorators are less known and certainly carry
 hidden pains. How do you resolve API conflicts (two bindings have two
 properties/functions by the same name)? As a developer, how do you
 ensure a stable order of bindings (bindings competing for the z-index
 and depending on the order of they are initialized, for example)?

I think decorators have valid use cases. For example, let's say I want to make 
a component that extracts microformat or microdata marked up content from an 
element and present hover UI to allow handy access to it. For example, it could 
extract addresses and offer map links. I would want this to work on any 
element, even if the element already has an active behavior implemented by a 
component. I should not have to subclass every type of element I may want to 
apply this to. It's especially problematic if you have to subclass even 
different kinds of built in elements. Do I need separate subclasses for div, 
span, address, section p, and whatever other kind of element I imagine this 
applying to? That doesn't seem so great.

You are correct that figuring out how multiple bindings work is tricky. But 
even if we choose not to do it, making components truly encapsulated does not 
make it any harder to have a one-binding-only model with no inheritance.


 Notice that this scheme is not significantly more complex to use, spec or 
 implement than the shadow/shadowHost proposal. And it provides a number of 
 advantages:
 
 A) True encapsulation is possible, indeed it is the easy default path. The 
 component provider has to go out of its way to 

Re: Mutation events replacement

2011-06-30 Thread Boris Zbarsky

On 6/30/11 4:15 PM, David Flanagan wrote:

Forbid DOM modifications to all DOMs? Or just one DOM?

Is it clearer if I say "forbid any modifications to the document tree"?


There are multiple document trees around is the point.


It would be nice to only lock the document tree in which the mutation
occurred. That seems doable to me, but maybe I'm missing something.


I think you are.  What happens if the document tree containing the 
iframe whose document you're mutating is modified?



Is window.close() forbidden? Is spinning the event loop (e.g. sync
XHR) forbidden?

I wasn't intending that those be forbidden. Won't those cases be
problematic whatever mutation event solution is adopted?


Not problematic from the point of view of the DOM implementor, since 
they will run when the system is in a consistent state.



The point of my proposal was to guarantee that mutation events are
delivered when the tree is in its freshly-mutated state and avoid the
need to maintain a list of pending callbacks.


That would be nice; the problem is that there are compound mutation 
operations that can have bad intermediate states after part of the 
mutation has happened but before it's complete.  That's what the concern 
is about.



 From a web developer's perspective what should a mutation event mean?

a) The document tree just changed. The current state of the tree
reflects the change and no other changes have occurred in the meantime.
You can look, but you can't touch the tree.


What happens when the web page asks for layout information at this 
point?  Is it OK to force layout updates partway through a mutation?



This is actually a pretty hard problem to solve, and still wouldn't
really solve the performance issues for DOM events

Still better than current DOM Mutation event, though right? Are you
saying that synchronous callbacks on a readonly tree would have worse
performance than Jonas's and Olli's proposal?


In Gecko's case, yes: we would need to sync various other state more to 
be ready for whatever insanity the callee script chooses to perpetrate 
other than DOM mutations (which I will posit we can just throw on if we 
want, per your proposal)...


-Boris



Re: Mutation events replacement

2011-06-30 Thread James Robinson
On Thu, Jun 30, 2011 at 1:15 PM, David Flanagan dflana...@mozilla.com wrote:


 This is actually a pretty hard problem to solve, and still wouldn't really
 solve the performance issues for DOM events

 Still better than current DOM Mutation event, though right?  Are you saying
 that synchronous callbacks on a readonly tree would have worse performance
 than Jonas's and Olli's proposal?


I suspect, although I have not measured, that entering/leaving the JS VM
every time an attribute was modified or a node was created would have
significantly higher overhead than batching up the calls to happen later.
 Consider generating a large amount of DOM by setting innerHTML.

- James
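[James's batching point can be sketched in plain JavaScript. The names and delivery semantics below are illustrative assumptions, not the actual proposal: real proposals deliver at a stable point such as the end of the outermost mutation, modelled here by an explicit endOfOperation() call.]

```javascript
// Sketch only: mutations are recorded cheaply as plain records while
// they happen; registered callbacks cross into script once per batch
// rather than once per mutation.
const observers = [];
let pending = [];

function observe(callback) {
  observers.push(callback);
}

function recordMutation(record) {
  pending.push(record); // no script runs here, just bookkeeping
}

function endOfOperation() {
  if (pending.length === 0) return;
  const batch = pending;
  pending = [];
  for (const cb of observers) cb(batch); // one VM entry per batch
}
```

Under this model, setting innerHTML that creates a thousand nodes records a thousand entries but enters script only once.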





Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
On Thu, Jun 30, 2011 at 1:32 PM, Maciej Stachowiak m...@apple.com wrote:

 On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:

 Maciej, as promised on #whatwg, here's a more thorough review of your
 proposal. I am in agreement in the first parts of your email, so I am
 going to skip those.

 == Are there other limitations created by the lack of encapsulation? ==

 My understanding is yes, there are some serious limitations:

 (1) It won't be possible (according to Dmitri) to attach a binding to an object that has a native shadow DOM in the implementation (e.g. form controls). That's because there can only be one shadow root, and form controls have already used it internally and made it private. This seems like a huge limitation. The ability to attach bindings/components to form elements is potentially a huge win - authors can use the correct semantic element instead of div soup, but still have the total control over look and feel from a custom script-based implementation.

 (2) Attaching more than one binding with this approach is a huge hazard. You'll either inadvertently blow away the previous, or won't be able to attach more than one, or if your coding is sloppy, may end up mangling both of them.

 I think these two limitations are intrinsic to the approach, not incidental.

 I would like to frame this problem as multiple-vs-single shadow tree
 per element.

 Encapsulation is achievable with single shadow tree per element by
 removing access via webkitShadow. You can discover whether a tree
 exists (by the fact that an exception is thrown when you attempt to
 set webkitShadow), but that's hardly breaking encapsulation.

 The issues you've described above are indeed real -- if you view
 adding new behavior to elements a process of binding, that is
 something added to existing elements, possibly more than once. If we
 decide that this is the correct way to view attaching behavior, we
 definitely need to fix this.

 I attempted to articulate a different view here
 http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0941.html.
 Here, adding new behavior to elements means creating a sub-class of an
 element. This should be a very familiar programming concept, probably
 more understood than the decorator or mixin-like binding approach.

 How would your subclass idea resolve the two problems above?

In the case of extending elements with native shadow DOM, you have to
use composition or have something like inherited, where you nest
native shadow tree in your own.

In the case of attaching multiple bindings -- you just can't.
That's the difference between inheritance and mixins :)



 For the key use case of UI widgets, sub-classing is very natural. I
 take a div, and sub-class it into a hovercard
 (http://blog.twitter.com/2010/02/flying-around-with-hovercards.html).
 I rarely bind a hovercard behavior to some random element -- not just
 because I typically don't need to, but also because I expect a certain
 behavior from the base element from which to build on. Binding a
 hovercard to an element that doesn't display its children (like img or
 input) is useless, since I want to append child nodes to display that
 user info.

 I could then make superhovercard by extending the hovercard. The
 single shadow DOM tree works perfectly in this case, because you
 either:
 1) inherit the tree of the subclass and add behavior;
 2) override it.

 In cases where you truly need a decorator, use composition. Once we
 have the basics going, we may contemplate concepts like inherited
 (http://dev.w3.org/2006/xbl2/#the-inherited-element) to make
 sub-classing more convenient.

 Sub-classing as a programming model is well-understood, and easy to grasp.

 On the other hand, the decorators are less known and certainly carry
 hidden pains. How do you resolve API conflicts (two bindings have two
 properties/functions by the same name)? As a developer, how do you
 ensure a stable order of bindings (bindings competing for the z-index
 and depending on the order of they are initialized, for example)?

 I think decorators have valid use cases. For example, let's say I want to make a component that extracts microformat or microdata marked up content from an element and present hover UI to allow handy access to it. For example, it could extract addresses and offer map links. I would want this to work on any element, even if the element already has an active behavior implemented by a component. I should not have to subclass every type of element I may want to apply this to. It's especially problematic if you have to subclass even different kinds of built in elements. Do I need separate subclasses for div, span, address, section p, and whatever other kind of element I imagine this applying to? That doesn't seem so great.

 You are correct that figuring out how multiple bindings work is tricky. But even if we choose not to do it, making components truly encapsulated does

Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 2:07 PM, Dimitri Glazkov wrote:

 On Thu, Jun 30, 2011 at 1:32 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:
 
 
 In the case of extending elements with native shadow DOM, you have to
 use composition or have something like inherited, where you nest
 native shadow tree in your own.

Why should a Web developer need to know or care which HTML elements have a 
native shadow DOM to be able to attach components to them? Is this actually 
something we want to specify? Would we specify exactly what the native shadow 
DOM is for each element to make it possible to inherit them? This seems like it 
would lock in a lot of implementation details of form controls and so strikes 
me as a bad direction.

 
 In the case of attaching multiple bindings -- you just can't.
 That's the difference between inheritance and mixins :)

OK, so your proposal would be unable to address my microformat decorator sample 
use case at all, no matter how it was modified. It would also not be able to 
handle both a Web page and a browser extension attaching behavior to the same 
element via components at the same time. Those seem like major limitations.

 
 To make further progress, I would like to concentrate on resolving
 these two issues:
 
 1) should we use object inheritance (one shadow subtree) or mixins
 (multiple shadow subtrees)?
 
 I think it's possible to partially table this issue. If mixins are required, 
 then raw access to the shadow tree is not viable. But using inheritance / 
 single binding is possible with either proposal.
 
 I think that changes a lot of nomenclature though, right? You don't
 have bindings with inheritance. It's just you or your sub-class.
 Also, element.bindComponent doesn't make much sense if you can only
 inherit the relationship.

You can call it attachComponent if you want. Or setComponent. I think we can 
make the way of attaching to a native element different from the way you 
inherit from another component. I don't really see how element.shadow = 
whatever is a better fit for inheritance than 
element.bindComponent(whatever).

Still, I think this is diving too far into the details where we are not even 
clear on the use cases.

 
 
 2) do we need webkitShadow or similar accessor to shadow subtree(s)?
 
 This question is a helpful one. I haven't seen any reason articulated for 
 why such an accessor is required. The fact that it's not present in other 
 similar technologies seems like proof that it is not required.
 
 Yes, I will work on use cases. Though this concept is certainly
 present in other technologies. Just take a look at Silverlight and its
 LogicalTreeHelper
 (http://msdn.microsoft.com/en-us/library/ms753391.aspx).

Is there anything that Silverlight can do that Mozilla's XBL, sXBL, and HTC 
can't, as a result of this choice?

 
 
 
 
 I think these are all resolved by supplying use cases and rationale. Right?
 
 If so, I think we need a real list of use cases to be addressed. The one 
 provided seems to bear no relationship to your original proposal (though I 
 believe my rough sketch satisfies more of them as-is and is more obviously 
 extensible to satisfying more of them).
 
 Did you mean the hovercard? I bet I can write a pretty simple bit of
 code that would usefully consume the API from my proposal.

I meant the wiki list of use cases.

For concrete use cases, the most valuable kind would be examples from real Web 
sites, including the URL of the original, a description of how it works, and 
the code it uses to make that happen. Made-up examples can be illustrative but 
won't help us sort out questions of "what are Web authors really doing and what 
do they need?" which seem to come up a lot in this discussion.

Regards,
Maciej








Re: [indexeddb] openCursor optional parameters issue

2011-06-30 Thread Jonas Sicking
On Thu, Jun 30, 2011 at 1:19 PM, Israel Hilerio isra...@microsoft.com wrote:
 On Tuesday, June 28, 2011 7:31 PM, Jonas Sicking wrote:
 On Tue, Jun 28, 2011 at 4:59 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Tuesday, June 28, 2011 12:49 PM, Jonas Sicking wrote:
  On Tue, Jun 28, 2011 at 10:53 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Monday, June 27, 2011 8:21 PM, Jonas Sicking wrote:
   On Mon, Jun 27, 2011 at 11:42 AM, Israel Hilerio
   isra...@microsoft.com
   wrote:
The IDBObjectStore.openCursor method is defined to have two
optional
   parameters:
* IDBRequest openCursor (in optional any range, in optional
unsigned short direction) raises (IDBDatabaseException);
   
Based on the examples in the spec, it seems we're envisioning
the method
   to be used in the following ways:
* objStore.openCursor();
* objStore.openCursor(keyRange);
* objStore.openCursor(keyRange, IDBCursor.PREV);
* objStore.openCursor(IDBCursor.PREV);
  
   No, that's not how optional parameters work in WebIDL. In order to
   specify an optional parameter, you always have to specify all
   preceding optional parameters. So only the following syntaxes are
   valid:
  
   * objStore.openCursor();
   * objStore.openCursor(keyRange);
   * objStore.openCursor(keyRange, IDBCursor.PREV);
  
Having any for the keyRange type makes it difficult to detect
the correct
   overloaded parameter for openCursor.
  
   The reason the first parameter is of type 'any' is so that you can
   pass either a IDBKeyRange or a value. So for example:
  
    req = objStore.openCursor("hello"); req = index.openCursor(4);
  
   are valid. When called with a simple value on an object store the
   cursor will obviously always return 0 or 1 rows. For indexes it
   could return any number of rows though.
  
   This is actually already specified if you look at the steps for
   opening a
  cursor.
   The same holds true for many other functions, such as .get and .delete.
  
   However it's a very subtle feature that's easy to miss. If you
   have suggestions for how to make this more clear in the spec I'd
   love to hear them. I've been thinking that we should add
   non-normative, easy-to-understand text to explain each function,
   similar to what the
   HTML5 spec does when defining APIs.
  
   / Jonas
  
   What you're saying makes a lot of sense.  That was what I
   originally thought
  but what confused me was some of the examples in the current spec
  which suggest we want to do the following (Section 3.3.5):
   * objStore.openCursor(IDBCursor.PREV);
 
  I don't think we should allow this. The benefit of saving the author
  from writing objStore.openCursor(null, IDBCursor.PREV) isn't worth
  the complexity that is introduced. IMHO. We should just fix the example
 instead.
 
   Independent of how up to date the examples are, the issue with the
   way it is
  currently spec'ed is that there is an implied dependency between
  keyRange and Cursor direction.  In other words, you can't open a
  cursor without any keyRange and just a direction.  One possible way
  to resolve this is to allow the keyRange to be nullable.  This will
  allow us to define a cursor without a keyRange and with a direction:
   * objStore.openCursor(null, IDBCursor.PREV);
  
   Without something like this, it is not easy to get a list of all
   the records on
  the store going in the opposite direction from IDBCursor.NEXT.
 
  Indeed, it was the intent that this should be allowed. I suspect we
  simply haven't kept up to date with WebIDL changing under us. But I
  do think that the text in the algorithm does say to do the right
  thing when no keyrange (or key
  value) is supplied.
 
  / Jonas
 
  My concern is not having a clean mechanism to retrieve a regular cursor
 with an inverted order without knowing any records (first or last) in the
 list.  This seems like a common operation that is not supported today.
 
  These are some of the alternatives that I believe we have:
  * Support a null value for IDBKeyRange:
         -IDBRequest objStore.openCursor(null, IDBCursor.PREV);
  * Introduce a new specialized method to handle this scenario:
         -IDBRequest objStore.openDirectionalCursor(IDBCursor.PREV);
          * This will default internally to an IDBKeyRange with the 
  properties
 defined below.
          * One advantage of this approach is that we don't have to expose a 
  new
 IDBKeyRange constructor.
  * Define a new static keyRange constructor that is a catch all:
         -static IDBKeyRange.all();
          * The values for the new constructor would be:
                 IDBKeyRange.lower = undefined
                 IDBKeyRange.upper = undefined
                 IDBKeyRange.lowerOpen = false
                 IDBKeyRange.upperOpen = false
          * I believe these satisfy the conditions for a key is in a key 
  range section
 [1].
 
          * We could pass this new keyRange to the existing openCursor 
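[Why the proposed catch-all range would work: the spec's "key is in a key range" test [1] reduces to bound checks that all pass when both bounds are undefined. A sketch of my reading of that check — not normative text, and plain `<` stands in for the spec's full key comparison algorithm:]

```javascript
// Sketch of the "key is in a key range" check: a key is in the range
// unless it falls outside a defined lower or upper bound (the
// lowerOpen/upperOpen flags exclude the bound value itself).
function keyInRange(key, range) {
  if (range.lower !== undefined) {
    if (range.lowerOpen ? key <= range.lower : key < range.lower) return false;
  }
  if (range.upper !== undefined) {
    if (range.upperOpen ? key >= range.upper : key > range.upper) return false;
  }
  return true;
}

// The proposed IDBKeyRange.all(): both bounds undefined, both flags false,
// so every key passes the check above.
const all = { lower: undefined, upper: undefined, lowerOpen: false, upperOpen: false };
```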
  

Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Thu, Jun 30, 2011 at 1:15 PM, David Flanagan dflana...@mozilla.com wrote:

 avoid the need to maintain a list of pending callbacks.


Yeah, this is one reason I like Rafael's proposal of having a list of
mutations.  In many editing apps, you want to get a list of mutation events
for each editing action or execCommand call instead of receiving a mutation
event for each DOM mutation and having to pile them up.

a) The document tree just changed. The current state of the tree reflects
 the change and no other changes have occurred in the meantime. You can look,
 but you can't touch the tree.

 b) The document has changed, but the current state of the tree may include
 other, subsequent changes that I'm not going to tell you about yet.  Feel
 free to change the tree and mess things up even more for the next event
 handler in the queue.  :-)

 I think a is more useful, easier for web developers to understand, and less
 surprising.


Are there use cases where having just b or b + list of mutations is not
sufficient (i.e. being synchronous is necessary) ?

- Ryosuke


Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
On Thu, Jun 30, 2011 at 2:21 PM, Maciej Stachowiak m...@apple.com wrote:

 On Jun 30, 2011, at 2:07 PM, Dimitri Glazkov wrote:

 On Thu, Jun 30, 2011 at 1:32 PM, Maciej Stachowiak m...@apple.com wrote:

 On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:


 In the case of extending elements with native shadow DOM, you have to
 use composition or have something like inherited, where you nest
 native shadow tree in your own.

 Why should a Web developer need to know or care which HTML elements have a
 native shadow DOM to be able to attach components to them? Is this
 actually something we want to specify? Would we specify exactly what the
 native shadow DOM is for each element to make it possible to inherit them?
 This seems like it would lock in a lot of implementation details of form
 controls and so strikes me as a bad direction.

There's a very interesting distinction here. You don't attach
components to DOM elements. DOM elements _are_ components. The only
way to make a component is by sub-classing it from an existing
element. In this case, there is no distinction between native and
non-native implementations. If I sub-class from HTMLTextareaElement, I
can either reuse or override its shadow DOM. If I subclass from
ProfileInformationWidget (which in itself is a subclass of
HTMLDivElement), I can do exactly the same two things.


 In the case of attaching multiple bindings -- you just can't.
 That's the difference between inheritance and mixins :)

 OK, so your proposal would be unable to address my microformat decorator
 sample use case at all, no matter how it was modified. It would also not be
 able to handle both a Web page and a browser extension attaching behavior to
 the same element via components at the same time. Those seem like major
 limitations.

Yes, they are. I think it's worth considering whether component model
should be aiming to address these cases, or a simple
mutation-event-style spec is enough to address them. Just like we want
the component model to be a useful functionality, covering a range of
use cases, we don't want it to become the and-the-kitchen-sink spec
that's XBL2.


 To make further progress, I would like to concentrate on resolving

 these two issues:

 1) should we use object inheritance (one shadow subtree) or mixins

 (multiple shadow subtrees)?

 I think it's possible to partially table this issue. If mixins are required,
 then raw access to the shadow tree is not viable. But using inheritance /
 single binding is possible with either proposal.

 I think that changes a lot of nomenclature though, right? You don't
 have bindings with inheritance. It's just you or your sub-class.
 Also, element.bindComponent doesn't make much sense if you can only
 inherit the relationship.

 You can call it attachComponent if you want. Or setComponent. I think we can
 make the way of attaching to a native element different from the way you
 inherit from another component. I don't really see how element.shadow =
 whatever is a better fit for inheritance than
 element.bindComponent(whatever).
 Still, I think this is diving too far into the details where we are not even
 clear on the use cases.


 2) do we need webkitShadow or similar accessor to shadow subtree(s)?

 This question is a helpful one. I haven't seen any reason articulated for
 why such an accessor is required. The fact that it's not present in other
 similar technologies seems like proof that it is not required.

 Yes, I will work on use cases. Though this concept is certainly
 present in other technologies. Just take a look at Silverlight and its
 LogicalTreeHelper
 (http://msdn.microsoft.com/en-us/library/ms753391.aspx).

 Is there anything that Silverlight can do that Mozilla's XBL, sXBL, and HTC
 can't, as a result of this choice?




 I think these are all resolved by supplying use cases and rationale. Right?

 If so, I think we need a real list of use cases to be addressed. The one
 provided seems to bear no relationship to your original proposal (though I
 believe my rough sketch satisfies more of them as-is and is more obviously
 extensible to satisfying more of them).

 Did you mean the hovercard? I bet I can write a pretty simple bit of
 code that would usefully consume the API from my proposal.

 I meant the wiki list of use cases.
 For concrete use cases, the most valuable kind would be examples from real
 Web sites, including the URL of the original, a description of how it works,
 and the code it uses to make that happen. Made-up examples can be
 illustrative but won't help us sort out questions of "what are Web authors
 really doing and what do they need?" which seem to come up a lot in this
 discussion.

Yep. That's a volume of work, so please be patient with me :)

 Regards,
 Maciej









Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Boris Zbarsky

On 6/30/11 5:45 PM, Dimitri Glazkov wrote:

There's a very interesting distinction here. You don't attach
components to DOM elements. DOM elements _are_ components. The only
way to make a component is by sub-classing it from an existing
element. In this case, there is no distinction between native and
non-native implementations. If I sub-class from HTMLTextareaElement, I
can either reuse or override its shadow DOM.


Back up.

In this particular case, there may well be behavior attached to the 
textarea that makes assumptions about the shadow DOM's structure.  This 
seems like a general statement about components.


So if you override a shadow DOM, you better override the behavior too, 
right?


If you reuse the shadow DOM, you either don't get access to it from your 
component, or the old behavior still needs to be unhooked (since you can 
now violate its invariants).


Does that match your mental model?  Or are we talking about totally 
different things somehow?


-Boris



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
On Thu, Jun 30, 2011 at 2:50 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 6/30/11 5:45 PM, Dimitri Glazkov wrote:

 There's a very interesting distinction here. You don't attach
 components to DOM elements. DOM elements _are_ components. The only
 way to make a component is by sub-classing it from an existing
 element. In this case, there is no distinction between native and
 non-native implementations. If I sub-class from HTMLTextareaElement, I
 can either reuse or override its shadow DOM.

 Back up.

 In this particular case, there may well be behavior attached to the textarea
 that makes assumptions about the shadow DOM's structure.  This seems like a
 general statement about components.

 So if you override a shadow DOM, you better override the behavior too,
 right?

Ouch. This one is tricky. I now see it. We can't really expect the
author to design to this level of decoupling.


 If you reuse the shadow DOM, you either don't get access to it from your
 component, or the old behavior still needs to be unhooked (since you can now
 violate its invariants).


 Does that match your mental model?  Or are we talking about totally
 different things somehow?

No, you've highlighted a real flaw in my reply there.


 -Boris







Re: Component Model: Landing Experimental Shadow DOM API in WebKit

2011-06-30 Thread Garrett Smith
On 6/29/11, Dimitri Glazkov dglaz...@chromium.org wrote:
 Hi Folks!

 With use cases (http://wiki.whatwg.org/wiki/Component_Model_Use_Cases)
 firmed up, and isolation
 (http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0900.html),
 inheritance
 (http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0941.html)
 out of the way, a component model for the Web can be viewed as a
 three-piece puzzle:

 1) Shadow DOM (http://glazkov.com/2011/01/14/what-the-heck-is-shadow-dom/)

| var slider = document.getElementById('foo');
| console.log(slider.firstChild); // returns null

In which browser?

| // Create an element with a shadow DOM subtree.
| var input = document.body.appendChild(document.createElement('input'));
| // Add a child to it.
| var test = input.appendChild(document.createElement('p'));

What should that do, other than throw an error?

 with its encapsulation properties in regard to events, styles, and DOM
 scoping;
 2) Associating a chunk of shadow DOM and Javascript behavior with a
 DOM element -- that's the actual Component part;

I've always wanted a method to clone events and js properties, so you
can have say:

form.cloneObject(true);

And that form's controls will retain their `value`, `checked`, et al.
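[A plain-object sketch of what such a state-preserving clone would do. cloneNode(true) copies attributes but not dynamic properties like a user-edited value; everything below is illustrative, modelling elements as plain objects rather than using the real DOM:]

```javascript
// Hypothetical cloneObject(true)-style deep copy that, unlike a purely
// structural clone, also carries over dynamic control state.
// Elements are modelled as plain objects for illustration.
function cloneWithState(node) {
  const copy = { tag: node.tag, children: (node.children || []).map(cloneWithState) };
  if ("value" in node) copy.value = node.value;       // user-edited text
  if ("checked" in node) copy.checked = node.checked; // checkbox/radio state
  return copy;
}
```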

 3) Declarative (markup) way of expressing the above.

 Since this is still a largish puzzle, difficult to solve by
 theoretical examination, we would like to start by landing the first
 piece (the shadow DOM bits) as an experimental API in WebKit.

I've got another idea but I'm not going to say what it is.
-- 
Garrett



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Dimitri Glazkov
Perhaps the right solution is to require "inherited" and disallow
access to the shadow DOM tree if the sub-class is not overriding the
subtree?

:DG



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Boris Zbarsky

On 6/30/11 6:04 PM, Dimitri Glazkov wrote:

Perhaps the right solution is to require "inherited" and disallow
access to the shadow DOM tree if the sub-class is not overriding the
subtree?


I don't know.  First, I'm not sure what problem we're solving.  Second, 
I'm not sure what "inherited" does.  Third, who is being disallowed 
access?


-Boris




Re: Mutation events replacement

2011-06-30 Thread David Flanagan

On 6/30/11 1:45 PM, James Robinson wrote:
On Thu, Jun 30, 2011 at 1:15 PM, David Flanagan dflana...@mozilla.com wrote:



This is actually a pretty hard problem to solve, and still
wouldn't really solve the performance issues for DOM events

Still better than the current DOM mutation events, though, right?  Are
you saying that synchronous callbacks on a readonly tree would
have worse performance than Jonas's and Olli's proposal?


I suspect, although I have not measured, that entering/leaving the JS 
VM every time an attribute was modified or a node was created would 
have significantly higher overhead than batching up the calls to 
happen later.  Consider generating a large amount of DOM by setting 
innerHTML.


- James

So what if the calls were batched up and invoked synchronously before 
the operation returns, as in Olli's proposal, but in addition, the 
document was made read-only while the callbacks were running?  I don't 
want to argue strongly for it, but it does seem like a huge 
simplification if it wouldn't break important use cases.
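A tiny, self-contained sketch of that idea (batch the notifications, deliver them synchronously before the mutating operation returns, and lock the document while callbacks run). All names here (`Doc`, `mutate`, `flush`, `locked`) are invented for illustration, not from any proposal:

```javascript
// Sketch: synchronous batched callbacks over a temporarily read-only document.
class Doc {
  constructor() {
    this.locked = false;    // true while mutation callbacks are running
    this.pending = [];      // queued notifications
    this.listeners = [];    // mutation callbacks
  }
  mutate(change) {
    if (this.locked) {
      // "You can look, but you can't touch the tree."
      throw new Error('document is read-only during mutation callbacks');
    }
    this.pending.push(change);  // apply the change and queue a notification
    this.flush();               // deliver before mutate() returns
  }
  flush() {
    const batch = this.pending;
    this.pending = [];
    this.locked = true;
    try {
      for (const listener of this.listeners) listener(batch);
    } finally {
      this.locked = false;
    }
  }
}

const doc = new Doc();
let reentryBlocked = false;
doc.listeners.push(() => {
  // Attempting a nested mutation from inside a callback fails.
  try { doc.mutate('nested'); } catch (e) { reentryBlocked = true; }
});
doc.mutate('setAttribute');
// reentryBlocked === true: callbacks saw the fresh tree but could not mutate it
```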


David


Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Thu, Jun 30, 2011 at 1:35 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  The point of my proposal was to guarantee that mutation events are
 delivered when the tree is in its freshly-mutated state and avoid the
 need to maintain a list of pending callbacks.


 That would be nice; the problem is that there are compound mutation
 operations that can have bad intermediate states after part of the
 mutation has happened but before it's complete.  That's what the concern is
 about.




   From a web developer's perspective what should a mutation event mean?

 a) The document tree just changed. The current state of the tree
 reflects the change and no other changes have occurred in the meantime.
 You can look, but you can't touch the tree.


 What happens when the web page asks for layout information at this point?
  Is it OK to force layout updates partway through a mutation?


I think most developers are concerned with paint to avoid flickering and not
so much about layout.

- Ryosuke


Re: Mutation events replacement

2011-06-30 Thread Boris Zbarsky

On 6/30/11 6:33 PM, Ryosuke Niwa wrote:

I think most developers are concerned with paint to avoid flickering and
not so much about layout.


I meant from the implementation's point of view.  E.g. if a node is 
partially inserted into the DOM, is it OK to trigger layout?  The answer 
may depend on the invariants the layout engine assumes about the DOM...


-Boris



Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Thu, Jun 30, 2011 at 3:55 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/30/11 6:33 PM, Ryosuke Niwa wrote:

 I think most developers are concerned with paint to avoid flickering and
 not so much about layout.


 I meant from the implementation's point of view.  E.g. if a node is
 partially inserted into the DOM, is it OK to trigger layout?  The answer may
 depend on the invariants the layout engine assumes about the DOM...


What do you mean by it being "partially inserted"?  Like document
relationship, etc... aren't consistent?

- Ryosuke


Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Wed, Jun 29, 2011 at 6:11 PM, Jonas Sicking jo...@sicking.cc wrote:

   Maybe this is a stupid question, since I'm not familiar at all with
  the use-cases involved, but why can't we delay firing the
  notifications until the event loop spins?  If we're already delaying
  them such that there are no guarantees about what the DOM will look
  like by the time they fire, it seems like delaying them further
  shouldn't hurt the use-cases too much more.  And then we don't have to
  put further effort into saying exactly when they fire for each method.
   But this is pretty obvious, so I assume there's some good reason not
  to do it.

 To enable things like widget libraries which want to keep state
 up-to-date with a DOM.


I agree that in many cases, libraries will be able to update state before
the application gets back control.  However, callbacks being called
synchronously sometimes and asynchronously at other times might cause
problems in some situations.  Consider the following situation.

I write a function that mutates the DOM and updates some internal state based
on a value automatically provided by some library (say jQuery, etc.) in a DOM
mutation callback.

Later, my colleague adds a new feature and decides to call my function in
the middle of a DOM mutation callback.  And oops!  My function doesn't work
because it relied on the assumption that DOM mutation callbacks are called
synchronously when I mutate the DOM.

- Ryosuke


Re: Mutation events replacement

2011-06-30 Thread Rafael Weinstein
On Thu, Jun 30, 2011 at 4:05 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 06/30/2011 12:54 AM, Rafael Weinstein wrote:

 On Wed, Jun 29, 2011 at 7:13 AM, Aryeh Gregorsimetrical+...@gmail.com
  wrote:

 On Tue, Jun 28, 2011 at 5:24 PM, Jonas Sickingjo...@sicking.cc  wrote:

 This new proposal solves both these by making all the modifications
 first, then firing all the events. Hence the implementation can
 separate implementing the mutating function from the code that sends
 out notifications.

 Conceptually, you simply queue all notifications in a queue as you're
 making modifications to the DOM, then right before returning from the
 function you insert a call like flushAllPendingNotifications(). This
 way you don't have to care at all about what happens when those
 notifications fire.

 So when exactly are these notifications going to be fired?  In
 particular, I hope non-DOM Core specifications are going to have
 precise control over when they're fired.  For instance, execCommand()
 will ideally want to do all its mutations at once and only then fire
 the notifications (which I'm told is how WebKit currently works).  How
 will this work spec-wise?  Will we have hooks to say things like
 remove a node but don't fire the notifications yet, and then have to
 add an extra line someplace saying to fire all the notifications?
 This could be awkward in some cases.  At least personally, I often say
 things like "call insertNode(foo) on the range" in the middle of a
 long algorithm, and I don't want magic happening at that point just
 because DOM Range fires notifications before returning from
 insertNode.

 Also, even if specs have precise control, I take it the idea is
 authors won't, right?  If a library wants to implement some fancy
 feature and be compatible with users of the library firing these
 notifications, they'd really want to be able to control when
 notifications are fired, just like specs want to.  In practice, the
 only reason this isn't an issue with DOM mutation events is because
 they can say "don't use them", and in fact people rarely do use them,
 but that doesn't seem ideal -- it's just saying library authors
 shouldn't bother to be robust.

 In working on Model Driven Views (http://code.google.com/p/mdv), we've
 run into exactly this problem, and have developed an approach we think
 is promising.

 The idea is to more or less take Jonas's proposal, but instead of
 firing callbacks immediately before the outer-most mutation returns,
 mutations are recorded for a given observer and handed to it as an
 in-order sequence at the end of the event.

 What is the advantage compared to Jonas' proposal?

You guys did the conceptual heavy lifting WRT this problem. Jonas's
proposal solves the main problems with current mutation events: (1)
they fire too often, (2) they are expensive because of event
propagation, (3) they are crashy WRT some DOM operations.

If Jonas's proposal is the ultimate solution, I think it's a good
outcome and a big improvement over existing spec or tearing out
mutation events. I'm asking the group to consider a few changes which
I'm hoping are improvements.

I'll be happy if I fail =-).

---

My concern with Jonas's proposal is that its semantics depend on
context (inside vs. outside of a mutation notification). I feel like
this is at least a conceptual problem. That, and I kind of shudder
imagining trying to explain to a webdev why and when mutation
notifications are sync vs async.

The place it seems likely to fall down is when someone designs an
abstraction using mutation events and depends on them firing
synchronously -- then they or someone else attempt to use it inside
another abstraction which uses mutation events. How likely is that? I
don't even have a guess, but I'm pretty surprised at the crazy things
people did with current mutation events.

Our proposal's semantics aren't dependent on context.

Additionally, our proposal makes it clear that handling a mutation
notification is an exercise in dealing with an arbitrary number of
ways the DOM could have changed since you were last called. I.e.
providing the list of changes.
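A minimal sketch of that delivery model (record mutations per observer as they happen, then hand each observer one in-order sequence at the end of the task). All names here (`MutationLog`, `observe`, `record`, `deliver`) are made up for illustration and are not from MDV or any spec:

```javascript
// Sketch: per-observer mutation queues, delivered as a batch at end of task.
class MutationLog {
  constructor() {
    this.observers = [];
  }
  observe(callback) {
    this.observers.push({ callback, records: [] });
  }
  // Called by the DOM implementation for each mutation as it happens.
  record(type, target) {
    for (const obs of this.observers) {
      obs.records.push({ type, target });
    }
  }
  // Called once at the end of the event/task: each observer receives the
  // arbitrary number of ways the DOM changed since it was last called.
  deliver() {
    for (const obs of this.observers) {
      if (obs.records.length === 0) continue;
      const batch = obs.records;
      obs.records = [];   // reset before the callback runs
      obs.callback(batch);
    }
  }
}

const log = new MutationLog();
const seen = [];
log.observe(records => seen.push(records.map(r => r.type)));
log.record('childInserted', 'div#a');
log.record('attributeChanged', 'div#b');
log.deliver();
// seen[0] is ['childInserted', 'attributeChanged'] -- one in-order batch
```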

In short, I feel like our proposal is just a small tweak on Jonas's.
It is more direct in its form and API about the actual difficulty
of being a mutation observer.

Also, I'll just note a difference in view: We view it as fundamentally
a bad thing to have more than one actor operating at a time (where
actor == event handler, or abstraction which observes mutations). It
seems as though you guys view this as a good thing (i.e. "All other
problems aside, mutation events *should* be synchronous").

The example I keep using internally is this: an app which uses

a) A constraint library which manages interactions between form values
(observes data mutations, makes data mutations)
b) A templating library (like MDV) which maps data to DOM (observes
both DOM and data mutations, makes both DOM and data mutations)
c) A widget library (like jQuery) which 

Re: Mutation events replacement

2011-06-30 Thread Ryosuke Niwa
On Thu, Jun 30, 2011 at 5:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/30/11 7:01 PM, Ryosuke Niwa wrote:

 What do you mean by it being "partially inserted"?  Like document
 relationship, etc... aren't consistent?


 That would be one example, yes.  Firing mutation events as you go involves
 making those relationships consistent at every step of a multipart mutation,
 whereas in some situations it may be simpler and faster to have intermediate
 inconsistent states.


Yeah, I hear you.  It's particularly painful in the editing world where we
have to make multiple DOM mutations.

I'd really like to know if having a list of mutations will address David's
use case.

- Ryosuke


Re: [FileAPI] Updates to FileAPI Editor's Draft

2011-06-30 Thread Gregg Tavares (wrk)
On Tue, Jun 21, 2011 at 10:17 AM, Arun Ranganathan a...@mozilla.com wrote:


 Sorry if these have all been discussed before. I just read the File API for
 the first time and 2 random questions popped in my head.

  1) If I'm using readAsText with a particular encoding and the data in the
 file is not actually in that encoding such that code points in the file can
 not be mapped to valid code points what happens? Is that implementation
 specific or is it specified? I can imagine at least 3 different behaviors.


 This should be specified better and isn't.  I'm inclined to then return the
 file in the encoding it is in rather than force an encoding (in other words,
 ignore the encoding parameter if it is determined that code points can't be
 mapped to valid code points in the encoding... also note that we say to
 "Replace bytes or sequences of bytes that are not valid according to the
 charset with a single U+FFFD character" [Unicode:
 http://dev.w3.org/2006/webapi/FileAPI/#Unicode]).  Right now, the spec isn't
 specific to this scenario ("... if the user agent cannot decode blob using
 encoding, then let charset be null" before the algorithmic steps, which
 essentially forces UTF-8).

 Can we list your three behaviors here, just so we get them on record?
  Which behavior do you think is ideal?  More importantly, is substituting
 U+FFFD and defaulting to UTF-8 good enough for your scenario above?


The 3 off the top of my head were

1) Throw an exception. (content not valid for encoding)
2) Remap bad codes to some other value (sounds like that's the one above)
3) Remove the bad character

I see you've listed a 4th: "ignore the encoding on error, assume UTF-8".
That one seems problematic because of partial reads. If you are decoding as
shift-jis, have returned a partial read, and then later hit a bad code
point, the stuff you've seen previously will all need to change by switching
to no encoding.

I'd choose #2, which it sounds like is already the case according to the spec.

Regardless of what solution is chosen is there a way for me to know
something was lost?





  2) If I'm reading using readAsText a multibyte encoding (utf-8,
 shift-jis, etc..) is it implementation dependent whether or not it can
 return partial characters when returning partial results during reading? In
 other words, let's say the next character in a file is a 3-byte code point
 but the reader has only read 2 of those 3 bytes so far. Is it implementation
 dependent whether the result includes those 2 bytes before reading the 3rd byte
 or not?


 Yes, partial results are currently implementation dependent; the spec only
 says they SHOULD be returned.  There was reluctance to have a MUST condition
 on partial file reads.  I'm open to revisiting this decision if the
 justification is a really good one.


I'm assuming by "MUST condition" you mean a UA doesn't have to support
partial reads at all, not that how partial reads work shouldn't be
specified.

Here's an example.

Assume we stick with "unknown characters get mapped to U+FFFD".
Assume my stream is UTF-8 and in hex the bytes are:

E3 83 91 E3 83 91

That's 2 code points of 0x30D1. Now assume the reader has only read the
first 5 bytes.

Should the partial results be

(a) filereader.result.length == 1 where the content is 0x30D1

 or should the partial result be

(b) filereader.result.length == 2 where the content is 0x30D1, 0xFFFD
 because at that point the E3 83 at the end of the partial result is not a
valid codepoint

I think the spec should specify that if the UA supports partial reads, it
should follow example (a).





 -- A



Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Roland Steiner
On Fri, Jul 1, 2011 at 7:01 AM, Dimitri Glazkov dglaz...@google.com wrote:

 On Thu, Jun 30, 2011 at 2:50 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 6/30/11 5:45 PM, Dimitri Glazkov wrote:
 
  There's a very interesting distinction here. You don't attach
  components to DOM elements. DOM elements _are_ components. The only
  way to make a component is by sub-classing it from an existing
  element. In this case, there is no distinction between native and
  non-native implementations. If I sub-class from HTMLTextareaElement, I
  can either reuse or override its shadow DOM.
 
  Back up.
 
  In this particular case, there may well be behavior attached to the
 textarea
  that makes assumptions about the shadow DOM's structure.  This seems like
 a
  general statement about components.
 
  So if you override a shadow DOM, you better override the behavior too,
  right?

 Ouch. This one is tricky. I now see it. We can't really expect the
 author to design to this level of decoupling.


I don't think that's insurmountable. Since we don't do aspect-oriented
components, a component's tree is always the same - either at the root, or
attached to some "inherited" element of a sub-class component. So the
behavior can work on that tree without having to know whether it's used
vanilla, or within a sub-class.

Now, if the sub-class doesn't in fact use "inherited", that means its
component tree in effect overrides the original component's tree. The
original tree and its attached behavior are just ignored and go hide in a
corner.

However, (later on) we may then need to also allow sub-classing the
behavior, i.e., handing off the interface of the original component to
its sub-class. That in turn may have security implications - you probably
don't want a component to be able to sub-class a file-upload control and
hijack events, etc.


Cheers,

- Roland


Re: Publishing From-Origin Proposal as FPWD

2011-06-30 Thread Daniel Veditz
On 6/30/11 9:31 AM, Maciej Stachowiak wrote:
 
 On Jun 30, 2011, at 7:22 AM, Anne van Kesteren wrote:
 (Added public-web-security because of the potential for doing
 this in CSP instead. Though that would require a slight change
 of scope for CSP, which I'm not sure is actually desirable.)
 
 I approve of publishing this as FWPD.
 
 I also don't think it makes sense to tie this to CSP.

Conceptually it's similar to the CSP frame-ancestors
directive--which we've decided doesn't fit in CSP either. Most of
CSP is "can load" while frame-ancestors was "can be loaded by".
We've proposed that the frame-ancestors functionality be moved into
an expanded/standardized X-Frame-Options mechanism, but a
standardized From-Origin would work just as well (better?).

It may still make sense to put From-Origin in the WebSecurity
(not-quite) WG along with CORS rather than free floating in WebApps.
But I don't have strong feelings about that. Mozilla would be
interested in implementing this feature regardless.

-Dan Veditz