Re: Art steps down - thank you for everything

2016-01-29 Thread Alex Russell
Sorry to hear you're leaving us, Art. Your skills and humor will be missed.

On Fri, Jan 29, 2016 at 7:51 AM, Philippe Le Hegaret  wrote:

> Thank you Art.
>
> You carried this group and community for so many years.
>
> Your first email to the AC was entitled "Just say NO?" as a response to a
> proposal from W3C. It will take a while for me to get used to the idea that
> you won't be standing up and coming to the microphone to challenge us as you
> did for so many years.
>
> Philippe
>
>
> On 01/28/2016 10:45 AM, Chaals McCathie Nevile wrote:
>
>> Hi folks,
>>
>> as you may have noticed, Art has resigned as a co-chair of the Web
>> Platform group. He began chairing the Web Application Formats group
>> about a decade ago, became the leading co-chair when it merged with Web
>> APIs to become the Web Apps working group, and was instrumental in
>> making the transition from Web Apps to the Web Platform Group. (He also
>> chaired various other W3C groups in that time).
>>
>> I've been very privileged to work with Art on the webapps group for so
>> many years - as many of you know, without him it would have been a much
>> poorer group, and run much less smoothly. He did a great deal of work
>> for the group throughout his time as co-chair, efficiently, reliably,
>> and quietly.
>>
>> Now that we are three co-chairs, we will work between us to fill Art's shoes.
>> It won't be easy.
>>
>> Thanks Art for everything you've done for the group for so long.
>>
>> Good luck, and I hope to see you around.
>>
>> Chaals
>>
>>
>


Re: Informal Service Worker working session

2015-07-17 Thread Alex Russell
Thanks everyone! Started a draft agenda page here; please pile in!

https://github.com/slightlyoff/ServiceWorker/wiki/july_20_2015_meeting_agenda

On Wed, Jul 15, 2015 at 10:38 PM, Benjamin Kelly bke...@mozilla.com wrote:

 On Sat, Jul 4, 2015 at 7:26 AM, Alex Russell slightly...@google.com
 wrote:

 As many SW participants are going to be in town for the WebApps F2F on
 the 21st, Google San Francisco is hosting a working day, 9am-5pm PST on
 July 20th to work through open issues and discuss future work.

 If you're attending, or would like to, simply RSVP here:
 http://doodle.com/hqm3ga8pfepidy7r


 Alex,

 Thanks for hosting!

 In preparation for the meeting we've come up with a rough list of things
 we'd like to discuss next week:

  - Clarify behavior in places where the fetch spec has not been integrated
 into other specs yet.  For example, intercepting something that is
 currently same-origin with a synthetic or CORS response, how interception
 works with CSP, etc.  Clearly Chrome has done something for these cases and
 we'd like to be compatible where possible.
  - Consider adding a foreign fetch feature to communicate with a SW on a
 different origin.  Straw man of the concept can be found at
 https://wiki.whatwg.org/wiki/Foreign_Fetch .
  - Discuss navigator.connect().  In particular, can the use cases
 motivating navigator.connect() be satisfied with a simpler solution like
 the foreign fetch concept?
  - Discuss how to make it easier to use multiple service workers for the
 same site.  For example, currently it's difficult to update two service
 workers coherently.  One will always be a newer version than the other.
  - Discuss how to handle heavy-weight processing for things like
 background sync without introducing fetch event latency.  This could be
 using multiple service workers (with the issues above addressed) or possibly
 supporting SharedWorker, etc.
  - Consider using the service worker script URL to identify the service
 worker instead of its scope.  This would move us closer to not requiring a
 scope for service workers that aren't handling fetch events.
  - Consider allowing specific features, like fetch and push, to be
 specified at registration time.  Again, the goal is to get away from the
 current situation where registering a service worker immediately implies
 fetch event handling (see the registration sketch after this list).
  - Consider providing an API for creating a service worker without going
 through the installation life cycle.
  - Share information about how we plan to avoid abuse of push and
 background sync events.
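 As a point of reference for those last items, registration today binds a
 worker to a scope and implicitly opts it into intercepting fetches for
 everything under that scope. The first call below is the real API as of
 mid-2015; the "events" option in the second is purely hypothetical:

 navigator.serviceWorker.register('/sw.js', { scope: '/app/' })
   .then(function(reg) { console.log('registered for scope', reg.scope); });

 // Hypothetical shape for declaring features up front (not specced):
 navigator.serviceWorker.register('/sw.js', {
   scope: '/app/',
   events: ['push']  // illustrative: no fetch interception requested
 });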

 Anyway, we just wanted to give people a chance to think about some of this
 before we meet.  Obviously we may not have time to cover all of this in a
 day, but it would be nice to cover any contentious bits.

 Thanks again and see you all next week.

 Ben



Informal Service Worker working session

2015-07-04 Thread Alex Russell
Hey all,

Apologies for the late notice.

As many SW participants are going to be in town for the WebApps F2F on the
21st, Google San Francisco is hosting a working day, 9am-5pm PST on July
20th to work through open issues and discuss future work.

If you're attending, or would like to, simply RSVP here:
http://doodle.com/hqm3ga8pfepidy7r

Regards


Re: WebApp installation via the browser

2014-06-02 Thread Alex Russell
On Mon, Jun 2, 2014 at 2:06 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, May 30, 2014 at 5:40 PM, Jeffrey Walton noloa...@gmail.com
 wrote:
  Are there any platforms providing the feature? Has the feature gained
  any traction among the platform vendors?

 The webapps platform that we use in FirefoxOS and Firefox Desktop
 allows any website to be an app store. I *think*, though I'm not 100%
 sure, that this works in Firefox for Android as well.

 I'm not sure what you mean by side loaded, but we're definitely
 trying to allow normal websites to provide the same experience as the
 firefox marketplace. The user doesn't have to turn on any developer
 mode or otherwise do anything otherwise special to use such a
 marketplace. The user simply needs to browse to the website/webstore
 and start using it.

 The manifest spec that is being developed in this WG is the first step
 towards standardizing the same capability set. It doesn't yet have the
 concept of an app store, instead any website can self-host itself as
 an app.


The Chrome team is excited about this direction and is collaborating on the
manifest format in order to help make aspects of this real. In particular
we're excited to see a Service Worker entry added to the format in a future
version as well as controls for window decorations and exit extents.


 It's not clear to me if there's interest from other browser vendors
 for allowing websites to act as app stores, for now we're focusing the
 standard on simpler use cases.


I can only speak for the Chrome team, but the idea of a page as an
app-store seems less important than the concept of the page *as* an app.


Re: WebKit interest in ServiceWorkers (was Re: [manifest] Utility of bookmarking to home screen, was V1 ready for wider review)

2014-02-18 Thread Alex Russell
On Tue, Feb 18, 2014 at 4:59 AM, Arthur Barstow art.bars...@nokia.com wrote:

 On 2/17/14 9:17 AM, ext Jungkee Song wrote:

  On Mon, Feb 17, 2014 at 9:38 PM, Arthur Barstow art.bars...@nokia.com wrote:

 The only process requirement for a FPWD is that the group record
 consensus to publish it. However, it's usually helpful if the FPWD
 is feature complete from a breadth perspective but there is no
 expectation the FPWD is complete from a depth perspective. As
 such, if there are missing features, it would be good to mention
 that in the ED and/or file related bugs.

 I believe things are mostly addressed from a breadth perspective, although
 quite a few issues are still being discussed and sorted out. We are
 currently drafting the ED and thought the F2F would be the right time to
 reach consensus for FPWD, but it would be even better if we can make it
 before that, to get a wider review as soon as possible.


 Given the broad interest in this spec, I think it would be helpful to move
 toward FPWD as soon as possible. Would you please give a rough
 guesstimate of when you think the spec can be ready for a CfC to publish a FPWD?


I've been waiting until we have all the algorithms filled in. It's a
nonsensical document until then.


Re: [webcomponents] Imperative API for Insertion Points

2014-02-16 Thread Alex Russell
On Sun, Feb 16, 2014 at 12:52 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 16, 2014, at 12:42 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 15, 2014, at 11:30 PM, Alex Russell slightly...@google.com wrote:

 On Sat, Feb 15, 2014 at 4:57 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi all,

 I’d like to propose one solution for

 [Shadow]: Specify imperative API for node distribution
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429

 because select content attribute doesn’t satisfy the needs of
 framework/library authors to support conditionals in their templates,
 and doesn’t satisfy my random image element use case below.


 *== Use Case ==*
 Random image element is a custom element that shows one of its child img
 elements, chosen uniformly at random.

 e.g. the markup of a document that uses random-image-element may look
 like this:
 <random-image-element>
   <img src="kitten.jpg">
   <img src="cat.jpg">
   <img src="webkitten.jpg">
 </random-image-element>

 random-image-element displays one out of the three img child elements
 when a user clicks on it.

 As an author of this element, I could modify the DOM and add style
 content attribute directly on those elements
 but I would rather use shadow DOM to encapsulate the implementation.


 *== API Proposal ==*

 Add two methods, void add(Element) and void remove(Element), to the content
 element.
 (We can give them more descriptive names. I matched select element for
 now).

 Each content element has an ordered list of *explicitly inserted nodes*.

 add(Element element) must act according to the following algorithm:

   1. If the content element's shadow host's node tree doesn't contain
      *element*, throw HierarchyRequestError.
   2. If *element* is already in some other content element's *explicitly
      inserted nodes*, then call remove with *element* on that content element.
   3. Append *element* to the end of *explicitly inserted nodes*.


 remove(Element element) must act according to the following algorithm:

   1. If the content element's *explicitly inserted nodes* does not
      contain *element*, throw NotFoundError.


 Throwing exceptions is hostile to usability.


 If people are so inclined, we don’t have to throw an exception and
 silently fail.


   1. Remove *element* from *explicitly inserted nodes*.


 The idea here is that the *explicitly inserted nodes* of an insertion
 point A would be the list of distributed nodes of A, but
 I haven't figured out exactly how *explicitly inserted nodes* should
 interact with the select content attribute.

 I think the simplest model would be *explicitly inserted nodes* simply
 overriding whatever the select content attribute was
 trying to do, but I don't have a strong opinion about how they should
 interact yet.

 I don't think it makes sense to support redistributions, etc... at least
 in the initial API.


 This proposal has an advantage over the existing proposal on
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429:

   1. It doesn't require the UA calling back to JS constantly to match
      elements.
   2. Point 1 implies we don't expose when distribution happens for the
      select content attribute.

 This doesn't seem like progress. I'd hope an imperative API would,
 instead, be used to explain how the existing system works and then propose
 layering that both accommodates the existing system and opens new areas for
 programmatic use.

 We can imagine such a system for programmatic Shadow DOM with some sort of
 distribute(Element) callback that can be overridden and use add/remove
 methods to do final distribution.


 The problem here is that such a callback must be called on every node upon
 any state change because UAs have no way of knowing what causes
 redistribution for a given component.  As a matter of fact, some use
 cases may involve changing the node distributions based on some JS object's
 state.  And having authors codify such conditions for UAs is much more
 cumbersome than letting them re-distribute nodes at will.


 To give you a more concrete example, in the case of my random image element,
 how can the UA notice that a user clicking on the element should trigger
 reconstruction of the composed tree?


Isn't the stated design of the custom element that it re-constructs the
composed tree with a random image every time it's clicked? It's not
actually clear what you wanted here because there isn't any example code to
go on.


  Should the script call some method like redistribute() on the host upon
 click?  But then, since the element needs to pick a child uniformly at random,
 it probably needs to keep track of the number of children to be distributed
 and return true exactly when that node was passed into the callback.
  That’s an extremely cumbersome API at least for my use case.


I have the sense that if you produced example code you'd be able to make a
better guess about what's onerous and what isn't. As it is, we're debating
hypotheticals.

Here's a version of your component
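(sketched against the proposal above; the registration boilerplate uses the
v0-era APIs and every name is illustrative, not a working implementation):

var proto = Object.create(HTMLElement.prototype);

proto.createdCallback = function() {
  var root = this.createShadowRoot();
  this._content = document.createElement('content');
  root.appendChild(this._content);
  this._pickRandomImage();
  this.addEventListener('click', this._pickRandomImage.bind(this));
};

proto._pickRandomImage = function() {
  var imgs = this.querySelectorAll('img');
  if (!imgs.length) { return; }
  var pick = imgs[Math.floor(Math.random() * imgs.length)];
  // Proposed API: only explicitly inserted nodes get distributed.
  if (this._current) { this._content.remove(this._current); }
  this._content.add(pick);
  this._current = pick;
};

document.registerElement('random-image-element', { prototype: proto });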

Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-15 Thread Alex Russell
On Sat, Feb 15, 2014 at 1:57 AM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 14, 2014, at 9:00 AM, Erik Arvidsson a...@chromium.org wrote:




 On Thu, Feb 13, 2014 at 9:00 PM, Maciej Stachowiak m...@apple.com wrote:


  On Feb 13, 2014, at 4:01 PM, Alex Russell slightly...@google.com
 wrote:

 A closure is an iron-clad isolation mechanism for object ownership with
 regards to the closing-over function object. There's absolutely no
 iteration of the closed-over state of a function object; any such
 enumeration would be a security hole (as with the old Mozilla
 object-as-param-to-eval bug). You can't get the value of foo in this
 example except with the consent of the returned function:


 var maybeVendFoo = function() {
   var foo = 1;
   return function(willMaybeCall) {
     if (/* some test */) { willMaybeCall(foo); }
   };
 };

 Leakage via other methods can be locked down by the first code to run in
 an environment (caja does this, and nothing prevents it from doing this for
 SD as it can pre-process/filter scripts that might try to access internals).


 Caja is effective for protecting a page from code it embeds, since the
 page can have a guarantee that its code is the first to run. But it cannot
 be used to protect embedded code from a page, so for example a JS library
 cannot guarantee that objects it holds only in closure variables will not
 leak to the surrounding page...


 That is incorrect. It is definitely possible to write code that does not
 leak to the environment. It is painful to do because, as Ryosuke wrote, you
 cannot use any of the built-in functions or objects. You can only use
 primitives and literals. But with a compile-to-JS language this can be made
 less painful, and in the days of LLVM-to-JS compilers this seems like a
 trivial problem.


 Let's assume for the sake of argument that there was actually a practical
 way to do this for nontrivial code[*]. Even if that were the case, it would
 not be relevant to the way in which today's JS programs use closures. They
 find closures useful even without a transpiler to a primitives-only subset
 of the language. So one cannot claim the value of closures for
 encapsulation follows from this theoretical possibility.


This misses a good chunk of cultural JS practice (which, I should note,
still isn't a substitute for the previously requested use-cases and example
code): in many frameworks, closure-captured state is considered to be
hostile as it prevents monkey-patches and makes extension difficult.
Many, instead, lean on convention (some form of _-based prefix/suffix) to
denote semi-private state.
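
For instance (illustrative):

// State captured in a closure is unreachable, so nothing downstream can
// patch or extend behavior that depends on it:
function makeCounter() {
  var count = 0;
  return { increment: function() { return ++count; } };
}

// Convention-based "privacy" stays open to monkey-patching:
function Counter() { this._count = 0; }
Counter.prototype.increment = function() { return ++this._count; };

var original = Counter.prototype.increment;
Counter.prototype.increment = function() {
  console.log('about to increment');  // framework-injected behavior
  return original.call(this);
};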

Closures most frequently find use in areas where JS's lexical binding
leaves much to be desired, (ab)using the function context as a way to
create new lexical bindings.

Arguing about them as a containment structure without this cultural context
isn't particularly enlightening.

Looking forward to hearing more about possible uses for the type of
encapsulation you're proposing.

Regards


 Regards,
 Maciej


 * - And really, it is not that practical. Many algorithms require a
 variable-sized data structure, or one that is indexable, or one that is
 associative - and you can't do any of that with only primitives and a fixed
 set of variable slots. And you can't do anything to a primitive that would
 potentially invoke a method call explicitly or implicitly, which includes
 such things as number-to-string conversion, using functions on the Math
 object on numbers, and the majority of things you would do to a string
 (such as converting to/from char codes). What remains is pretty narrow.
 This is setting aside that you would not be able to do anything useful to
 the outside world besides return a value. For reference, the LLVM-to-JS
 compilers that exist make major use of non-primitives, in particular
 Emscripten uses a giant typed array to represent memory.




Re: [webcomponents] Imperative API for Insertion Points

2014-02-15 Thread Alex Russell
On Sat, Feb 15, 2014 at 4:57 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi all,

 I’d like to propose one solution for

 [Shadow]: Specify imperative API for node distribution
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429

 because select content attribute doesn’t satisfy the needs of
 framework/library authors to support conditionals in their templates,
 and doesn’t satisfy my random image element use case below.


 *== Use Case ==*
 Random image element is a custom element that shows one of its child img
 elements, chosen uniformly at random.

 e.g. the markup of a document that uses random-image-element may look like
 this:
 <random-image-element>
   <img src="kitten.jpg">
   <img src="cat.jpg">
   <img src="webkitten.jpg">
 </random-image-element>

 random-image-element displays one out of the three img child elements when
 a user clicks on it.

 As an author of this element, I could modify the DOM and add style content
 attribute directly on those elements
 but I would rather use shadow DOM to encapsulate the implementation.


 *== API Proposal ==*

 Add two methods, void add(Element) and void remove(Element), to the content
 element.
 (We can give them more descriptive names. I matched select element for
 now).

 Each content element has an ordered list of *explicitly inserted nodes*.

 add(Element element) must act according to the following algorithm:

   1. If the content element's shadow host's node tree doesn't contain
      *element*, throw HierarchyRequestError.
   2. If *element* is already in some other content element's *explicitly
      inserted nodes*, then call remove with *element* on that content element.
   3. Append *element* to the end of *explicitly inserted nodes*.


 remove(Element element) must act according to the following algorithm:

   1. If the content element's *explicitly inserted nodes* does not
      contain *element*, throw NotFoundError.


Throwing exceptions is hostile to usability.



   1. Remove *element* from *explicitly inserted nodes*.


 The idea here is that the *explicitly inserted nodes* of an insertion
 point A would be the list of distributed nodes of A, but
 I haven't figured out exactly how *explicitly inserted nodes* should
 interact with the select content attribute.

 I think the simplest model would be *explicitly inserted nodes* simply
 overriding whatever the select content attribute was
 trying to do, but I don't have a strong opinion about how they should
 interact yet.

 I don't think it makes sense to support redistributions, etc... at least
 in the initial API.


 This proposal has an advantage over the existing proposal on
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429:

   1. It doesn't require the UA calling back to JS constantly to match
      elements.
   2. Point 1 implies we don't expose when distribution happens for the
      select content attribute.

 This doesn't seem like progress. I'd hope an imperative API would,
instead, be used to explain how the existing system works and then propose
layering that both accommodates the existing system and opens new areas for
programmatic use.

We can imagine such a system for programmatic Shadow DOM with some sort of
distribute(Element) callback that can be overridden and use add/remove
methods to do final distribution.
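
Roughly (every name here is hypothetical; nothing below is specced):

// The engine would invoke an overridable distribute() callback whenever a
// host child needs (re)distribution; the override routes nodes using the
// proposed add()/remove() methods.
shadowRoot.distribute = function(element) {
  var insertionPoint = this.querySelector('content');
  if (element.tagName === 'IMG') {
    insertionPoint.add(element);  // explicit, imperative distribution
  }
  // otherwise: leave the node undistributed
};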

I'm deeply skeptical of appeals to defeat/elide layering on the basis of
performance arguments. Real-world systems often have fast-paths for common
operations and we should note that a self-hosted DOM would feel no
particular pain about calling back to JS. If your mental model is that
the world is C++ and JS is bolt-on, you're bound to get this continuously
wrong.

Regards


Re: [manifest] Utility of bookmarking to home screen, was V1 ready for wider review

2014-02-15 Thread Alex Russell
On Sat, Feb 15, 2014 at 5:56 AM, Marcos Caceres w...@marcosc.com wrote:

 tl;dr: I strongly agree (and data below shows) that installable web apps
 without offline capabilities are essentially useless.

 Things currently specified in the manifest are supposed to help make these
 apps less useless (as I said in the original email, they by no means give
 us the dream of installable web apps, just one little step closer) - even
 if we had SW tomorrow, we would still need orientation, display mode, start
 URL, etc.

 So yes, SW and manifest will converge... the questions for us to decide are:
 when? And can appcache see us through this transitional period to having
 SW support in browsers? I believe we can initially standardize a limited
 set of functionality, while we continue to wait for SW to come to
 fruition, which could take another year or two.


SW will be coming to Chrome ASAP. We're actively implementing. Jonas or
Nikhil can probably provide more Mozilla context.

My personal view is that it's not a good user experience to offer the
affordance if the resulting system can't be trusted. That is to say, if we
plow on with V1 without a (required) offline story, I'm not sure what we've
really won. Happy for this to go to LC, but wouldn't recommend that Chrome
For Android implement.


 On Saturday, February 15, 2014 at 1:37 AM, Alex Russell wrote:

  I further think that the marginal utility in bookmarking something to
 the homescreen (sorry, yes, I'm focusing on mobile first) is low if it
 doesn't have a Service Worker / Appcache associated.

 Although I've not published this research yet, this is strongly backed by
 evidence. Nearly all applications in the top 78,000 websites that opt. into
 being standalone applications via apple-mobile-web-app-capable do not, in
 fact, work as standalone applications. If anyone is interested to try this
 for themselves, here is the raw dataset listing all the sites [1] - you
 will need an iPhone to test them. The data set is from Oct. 2013, but
 should still be relevant. Just pick some at random and add to homescreen;
 it makes for depressing viewing.

 There are a few exceptions (listed below) - but those are the exceptions,
 not the rule.
  It's strictly second-class-citizen territory to have web bookmarks
 that routinely don't do anything meaningful when offline.

 Yes, but there are a number of factors that contribute to this: not just
 offline (e.g., flexbox support is still fairly limited, dev tools still
 suck, cross-browser is a nightmare, even how navigation works differs
 across UAs!, limited orientation-locking support, etc.).

 However, to your point the data we have shows that about 50 sites in the
 top 78K declare an appcache [2], while there are 1163 sites that declare
 apple-mobile-web-app-capable. So yeah, appcache, as we all know, is a bit
 of a failure. Some of the sites that declare it actually have it commented
 out... like they tried it and just gave up.

 Interestingly, only 10 sites in the dataset are both capable of running
 standalone AND declare offline:

 1. forecast.io
 2. timer-tab.com
 3. capitalone.com
 4. rachaelrayshow.com
 5. delicious.com
 6. forbesmiddleeast.com
 7. shopfato.com.br
 8. ptable.com
 9. authenticjobs.com
 10. swedenabroad.com

 So, yeah... 10 / 1163 = 0.0085... or, :_(.

 Anyway... do you think it's ok for us to just standardize the limited
 things in the manifest? We could have those at LC like in 2 weeks and then
 spin up V2 to have convergence with SW. Better still, the SW spec can just
 specify how it wants to work with manifests.

 [1] https://gist.github.com/marcoscaceres/7419589
 [2] https://gist.github.com/marcoscaceres/9018819
 --
 Marcos Caceres






Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Alex Russell
On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Feb 14, 2014, at 2:50 PM, Elliott Sprehn espr...@chromium.org wrote:

 On Fri, Feb 14, 2014 at 2:39 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/14/14 5:31 PM, Jonas Sicking wrote:

 Also, I think that the Type 2 encapsulation has the same
 characteristics. If the component author does things perfectly and
 doesn't depend on any outside code


 And never invokes any DOM methods on the nodes in the component's
 anonymous content.  Which is a pretty strong restriction; I'm having a bit
 of trouble thinking of a useful component with this property.


 I think my biggest issue with Type-2 is that, unlike the languages cited
 for providing the "private" it's trying to mimic, it provides no backdoor for
 tools and frameworks to get at private state, and at the same time it
 doesn't add any security benefits.


 Except that JavaScript doesn’t have “private”.


Right, it only has the stronger form (closures) and the weaker form (_
prefixing properties and marking them non-enumerable using defineProperty).
SD as currently defined is the second.
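
E.g. (illustrative):

// The weaker form: conventionally private, still reachable.
var widget = {};
Object.defineProperty(widget, '_state', {
  value: 42,
  enumerable: false  // hidden from enumeration, not from access
});
Object.keys(widget);  // [] -- doesn't show up...
widget._state;        // 42 -- ...but anyone with the name can read it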

  Ruby, Python, Java, C# and almost all other modern languages that
 provide a private facility for interfaces (as advocated by the Type-2
 design) provide a backdoor through reflection to get at the variables and
 methods anyway. This allowed innovation like AOP, dependency injection,
 convention based frameworks and more.

 So if we provide Type-2 I'd argue we _must_ provide some kind of escape
 hatch to still get into the ShadowRoot from script. I'm fine providing some
 kind of "don't let CSS styles enter me" feature, but hiding the shadowRoot
 property from the Element makes no sense.


 I don’t see how the above two sentences lead to the conclusion that we must
 provide an escape hatch to get shadow root from script given that such an
 escape hatch already exists if the component authors end up using builtin
 DOM functions.


It's the difference between using legit methods and hacking around the
platform. If it's desirable to allow continued access in these situations,
why isn't .shadowRoot an acceptable speed bump? If it's not desirable,
isn't the ability to get around the restriction *at all* a bug to be fixed
(arguing, implicitly, that we should be investigating stronger primitives
that Maciej and I were discussing to enable Type 4)?

 We all agree it's not a security boundary and you can go to great
 lengths to get into the ShadowRoot if you really wanted to; all we've done by
 not exposing it is make sure that users include some crazy
 jquery-make-shadows-visible.js library so they can build tools like Google
 Feedback or use a new framework or polyfill.


 I don’t think Google Feedback is a compelling use case since all
 components on Google properties could simply expose “shadow” property
 themselves.


So you've written off the massive coordination costs of adding a uniform
property to all code across all of Google and, on that basis, have suggested
there isn't really a problem? ISTM that it would be a multi-month (year?)
project to go patch every project in google3 and then wait for them all to
deploy new code.

Perhaps you can imagine a simpler/faster way to do it that doesn't include
getting owners-LGTMs from nearly every part of google3 and submitting tests
in nearly every part of the tree??


 Since you have previously claimed that instantiating a template element
 may not be a common pattern for custom elements / web components, I have a
 hard time accepting the claim that you’re certain accessing the shadow root is
 a common coding pattern.


Surely as the person asking for the more restricted form, the onus falls to
*you* to make the argument that the added restrictions show their value.

  So given that we should have ShadowRoot.getPrivateType2Root(element) to
 provide a sensible modern api like other languages, is providing the
 shadowRoot property on the Element any different?


 We’re disagreeing on the premise that we should have
 ShadowRoot.getPrivateType2Root.

 I think we need to steer this conversation back to CSS's ability to style
 the ShadowRoot. There's no reason we can't provide a "no styles can enter
 me" flag while still having the shadowRoot property and the node
 distribution APIs.


 That might be an interesting topic to discuss but www-style discussion
 appears to indicate that we need to settle the encapsulation discussion in
 public-webapps regardless.

 - R. Niwa




Re: [manifest] V1 ready for wider review

2014-02-14 Thread Alex Russell
On Wed, Feb 12, 2014 at 5:21 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Feb 12, 2014 at 12:06 PM, Marcos Caceres mar...@marcosc.com
 wrote:
  The editors of the [manifest] spec have now closed all substantive
 issues for  v1.
 
  The spec defines the following:
 
  * A link relationship for manifests (so they can be used with
 <link rel="manifest">).
 
  * A standard file name for a manifest resource
 (/.well-known/manifest.json). Works the same as /favicon.ico for when
 <link rel="manifest"> is missing.
 
  * The ability to point to a start-url.
 
  * Basic screen orientation hinting for when launching a web app.
 
  * Launch the app in different display modes: fullscreen, minimal-ui,
 open in browser, etc.
 
  * A way for scripts to check if the application was launched from a
 bookmark (i.e., similar to Safari's navigator.standalone).
 
  * requestBookmark(), which is a way for a top-level document to request
 it be bookmarked by the user. To not piss off users, it requires explicit user
 action to actually work. Expect <button>install my app</button> everywhere
 on the Web now :)
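
 Taken together, a v1 manifest is a small JSON file along these lines (an
 illustrative sketch; see the spec for the exact field names):

 {
   "start_url": "/index.html",
   "display": "standalone",
   "orientation": "portrait"
 }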
 
  If you are wondering where some missing feature is, it's probably slated
 for [v2]. The reason v1 is so small is that it's all we could get agreement
 on amongst implementers (it's a small set, but it's a good set to kick
 things off and get us moving... and it's a small spec, so easy to quickly
 read over).
 
  We would appreciate your feedback on this set of features - please file
 [bugs] on GitHub. We know it doesn't fully realize *the dream* of
 installable web apps - but it gets us a few steps closer.
 
  If we don't get any significant objections, we will request to
 transition to LC in a week or so.

 I still think that leaving out name and icons from a manifest about
 bookmarks is a big mistake. I just made my case here

 http://lists.w3.org/Archives/Public/www-tag/2014Feb/0039.html

 Basically I think we need to make the manifest more self sufficient. I
 think that we're getting Ruby's postulate the wrong way around by
 making the file that describes the bookmark not contain all the data
 about the bookmark. Instead the two most important pieces about the
 bookmark, name and icons, will live in a completely separate HTML
 file, often with no way to find yourself from the manifest to that
 separate HTML file.


I agree. I further think that the marginal utility in bookmarking something
to the homescreen (sorry, yes, I'm focusing on mobile first) is low if it
doesn't have a Service Worker / Appcache associated. It's strictly
second-class-citizen territory to have web bookmarks that routinely don't
do anything meaningful when offline.


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-13 Thread Alex Russell
On Thu, Feb 13, 2014 at 2:35 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Feb 13, 2014 at 12:04 AM, Alex Russell slightly...@google.com
 wrote:
  Until we can agree on this, Type 2 feels like an attractive nuisance
  and, on reflection, one that I think we should punt to compilers like caja
  in the interim. If toolkits need it, I'd like to understand those use-cases
  from experience.

 I think Maciej explains fairly well in
 http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1364.html
 why it's good to have. Also, Type 2 can be used for built-in elements,
 which I thought was one of the things we are trying to solve here.


I encourage you to go through the exercise that arv has.

What does it mean, in practice, to *really* defend against deliberate
access (Maciej's Type 2)? If you were to try to implement a built-in using
what, in your mind, is Type 2, would it work? Would you really be able to
hang privileged user access off that implementation?

Any time I consider the question, it leads me to want to lock down all
routes to access outside some (unspecified, and I fear unspecifiable until
we get *much* stronger primitives) relationship between a script execution
context and some subset of the DOM. This is painful because DOM makes
transport across worlds so trivial. Iframes, built-in-controls and caja
have all done this, but they do it by going for Type 4.

There is no spoon. Type 2 is a mirage.


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-13 Thread Alex Russell
On Thu, Feb 13, 2014 at 1:25 PM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 12, 2014, at 4:04 PM, Alex Russell slightly...@google.com wrote:



 In discussion with Elliott and Erik, there appears to be an additional
 complication: any of the DOM manipulation methods that aren't locked down
 (marked non-configurable and filtered, ala caja) create avenues to get
 elements from the Shadow DOM and then inject styles. E.g., even with Arv's
 lockdown sketch:

   https://gist.github.com/arv/8857167

 You still have most of your work ahead of you. DocumentFragment provides
 tons of ins, as will all incidentally composed APIs.


 I'm not totally clear on what you're saying. I believe you're pointing out
 that injecting hooks into the scripting environment a component runs in
 (such as by replacing methods on global prototypes) can allow the shadow
 root to be accessed even if no explicit access is given. I agree. This is
 not addressed with simple forms of Type 2 encapsulation. It is a non-goal
 for Type 2.


I'd like to understand what differentiates simple forms of Type 2
encapsulation from other potential forms that still meet the Type 2
criteria. Can you walk me through an example and show how they would be
used in a framework?


  This is fraught.


 Calling something fraught is not an argument.


Good news! I provided an argument in the following sentence to help
contextualize my conclusion and, I had hoped, lead you to understand why
I'd said that.


 To get real ocap-style denial of access to the shadow DOM, we likely need
 to intercept and check all DOM accesses. Is the system still usable at this
 point? It's difficult to know. In either case, a system like caja *can*
 exist without explicit support, which raises the question: what's the
 goal? Is Type 2 defined by real denial of access? Or is the request for a
 fig-leaf (perception of security)?


 Type 2 is not meant to be a security mechanism.


I'd like to see an example of Type 2 isolation before I agree to that.


 It is meant to be an encapsulation mechanism. Let me give a comparison.
 Many JavaScript programmers choose to use closures as a way to store
 private data for objects. That is an encapsulation mechanism. It is not, in
 itself, a hard security mechanism. If the caller can hook your global
 environment, and for example modify commonly used Object methods, then they
 may force a leak of your data.


A closure is an iron-clad isolation mechanism for object ownership with
regards to the closing-over function object. There's absolutely no
iteration of the closed-over state of a function object; any such
enumeration would be a security hole (as with the old Mozilla
object-as-param-to-eval bug). You can't get the value of foo in this
example except with the consent of the returned function:

var maybeVendFoo = function() {
  var foo = 1;
  return function(willMaybeCall) {
    if (/* some test */) { willMaybeCall(foo); }
  };
};

Leakage via other methods can be locked down by the first code to run in an
environment (caja does this, and nothing prevents it from doing this for SD
as it can pre-process/filter scripts that might try to access internals).
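
A sketch of that first-run lockdown (in the spirit of what caja does; not
exhaustive):

(function lockdown() {
  // Freeze common escape routes before any untrusted script runs.
  Object.freeze(Object.prototype);
  Object.freeze(Function.prototype);
  Object.freeze(Array.prototype);
  // A real caja/SES-style environment also wraps or removes eval,
  // Function, and other ambient authority.
})();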

Getting to closure-strength encapsulation means neutering all potential
DOM/CSS access. Maybe I'm dense, but that seems stronger than the simple
form of Type 2.

If you're making the case that it might be helpful to folks trying to
implement Type 4 if the platform gave them a way to neuter access without
so much pre-processing/runtime-filtering, I could take that as an analog
with marking things non-configurable in ES. But it seems you think there's
an interim point that developers will use directly. I don't understand that
and would like to.

 But that does not mean using closures for data hiding is a fig-leaf or
 attractive nuisance.


Agreed, but only because they're stronger than you imply by analogy. What
I'm arguing is that if closures are the right analogy for some variant of
Shadow DOM then they'd need to get MUCH stronger  (Type 4) to meet that
charge.


 It's simply taking access to internals out of the toolset of common and
 convenient things, thereby reducing the chance of a caller inadvertently
 coming to depend on implementation details. ES6 private symbols serve a
 similar role.


Sadly, private symbols don't look likely to make an appearance in ES6.


 The proposal is merely to provide the same level of protection for the
 shadow DOM.


 This is the struggle I have with Type 2. I can get my mind around Type 4
 and want it very badly. So badly, in fact, that I bore most people I talk
 to with the idea of creating primitives that explain x-origin iframes (some
 sort of renderer worker, how are the graphics contexts allocated and
 protected? what does it mean to navigate? what sorts of constraints can we
 put on data-propagation approaches for flowed layouts that can keep us out
 of security hell?).


 (1) The form of Type 1 encapsulation offered by current Shadow DOM specs

Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-12 Thread Alex Russell
On Tue, Feb 11, 2014 at 5:16 PM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 11, 2014, at 4:04 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 On Tue, Feb 11, 2014 at 3:50 PM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 11, 2014, at 3:29 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:



 Dimitri, Maciej, Ryosuke - is there a mutually agreeable solution here?


 I am not exactly sure what problem this thread hopes to raise and whether
 there is a need for anything other than what is already planned.


 In the email Ryosuke cited, Tab said something that sounded like a claim
 that the WG had decided to do public mode only:

 http://lists.w3.org/Archives/Public/www-style/2014Feb/0221.html
 Quoting Tab:

 The decision to do the JS side of Shadow DOM this way was made over a
 year ago.  Here's the relevant thread for the decision:
 
 http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/thread.html#msg312
 
 (it's rather long) and a bug tracking it
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562.


 I can't speak for Ryosuke but when I saw this claim, I was honestly
 unsure whether there had been a formal WG decision on the matter that I'd
 missed. I appreciate your clarification that you do not see it that way.


 Quoting Dimitri again:

 The plan is, per thread I mentioned above, is to add a flag to
 createShadowRoot that hides it from DOM traversal APIs and relevant CSS
 selectors: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144.


 That would be great. Can you please prioritize resolving this bug[1]? It
 has been waiting for a year, and at the time the private/public change was
 made, it sounded like this would be part of the package.


 Can you help me understand why you feel this needs to be prioritized? I
 mean, I don't mind, but it would be great to have an idea of the driving
 force behind the urgency.


 (1) It blocks the two dependent issues I mentioned.
 (2) As a commenter on a W3C spec and member of the relevant WG, I think I
 am entitled to a reasonably prompt level of response from a spec editor.
 This bug has been open since November 2012. I think I have waited long
 enough, and it is fair to ask for some priority now. If it continues to go
 on, then an outside observer might get the impression that failing to
 address this bug is deliberate stalling. Personally, I prefer to assume
 good faith, and I think you have just been super busy. But it would show
 good faith in return to address the bug soon.

 Note: as far as I know there is no technical issue or required feedback
 blocking bug 20144. However, if there is any technical input you need, or
 if you would find it helpful to have a spec diff provided to use as you see
 fit, I would be happy to provide such. Please let me know!


 It seems like there are a few controversies that are gated on having the
 other mode defined:
 - Which of the two modes should be the default (if any)?


 This is re-opening the year-old discussion, settled in
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg800,
 right?


 I'm not sure what you mean by settled. You had a private meeting and the
 people there agreed on what the default should be. That is fine. Even using
 that to make a provisional editing decision seems fine. However, I do not
 believe that makes it settled for purposes of the WG as a whole. In
 particular, I have chosen not to further debate which mode should be the
 default until both modes exist, something that I've been waiting on for a
 while. I don't think that means I lose my right to comment and to have my
 feedback addressed.

 In fact, my understanding of the process is this: the WG is required to
 address any and all feedback that comes in at any point in the process. And
 an issue is not even settled to the point of requiring explicit reopening
 unless there is a formal WG decision (as opposed to just an editor's
 decision based on their own read of input from the WG.)




 - Should shadow DOM styling primitives be designed so that they can work
 for private/closed components too?


 Sure. The beauty of a hidden/closed mode is that it's a special case of
 the open mode, so we can simply say that if a shadow root is closed, the
 selectors don't match anything in that tree. I left the comment to that
 effect on the bug.


 Right, but that leaves you with no styling mechanism that offers more
 fine-grained control, suitable for use with closed mode. Advocates of the
 current styling approach have said we need not consider closed mode at all,
 because the Web Apps WG has decided on open mode. If what we actually
 decided is to have both (and that is my understanding of the consensus),
 then I'd like the specs to reflect that, so the discussion in www-style can
 be based on facts.

 As a more basic point, mention of closed mode to exclude it from /shadow
 most likely has to exist in the shadow styling spec, not just the Shadow
 DOM spec. So there is a cross-spec 

Re: RE : RE : Sync IO APIs in Shared Workers

2013-12-06 Thread Alex Russell
On Thu, Dec 5, 2013 at 2:14 AM, Ke-Fong Lin ke-fong@4d.com wrote:

  1) Sync APIs are inherently easier to use than async ones, and they are
 much
  less error prone. JS developers are not C++ developers. Whenever possible,
  it's just better to make things simpler and more convenient.
 
 This argument is not particularly helpful. Apart from that, many JS APIs
 use callbacks, and all developers are, or have to be, aware of them.

 Yes, JS web developers are well used to that.
 Yet, sync APIs are simpler and much less error prone.


It's unclear that they're less error prone. It's clear that they're easier.

For instance, the cases I bring up in the post which Jonas cited are
instances where, when presented with small amounts of data or work,
synchronous APIs may indeed be correct and easy. But when the data scales
up to some amount (a fuzzy line at best), they become attractive nuisances;
things which you must then teach developers not to use lest they screw up
the end-user experience -- which, on the client, is really the only thing
that matters.


 If something easy can be done easily, do it the easy way.


And if that something can't be correct or easily proven to be so, we
should stop offering error-ish-prone ways of doing that thing.


  3) It does no harm.
 
 It's not particularly fun re-writing async methods from the webpage to
 be sync for workers, or otherwise using shims to avoid redundancy. The
 extra semantic load on the namespaces (docs and otherwise) isn't all
 that pleasing either. There is a cost.

 You may well use the usual async version of the API in a worker.
 In which case, there is no need for re-writes.


...assuming you're only ever porting code from a document to a worker and
not the other way around. I think this is hopelessly naive.


Re: RE : Sync IO APIs in Shared Workers

2013-12-06 Thread Alex Russell
On Wed, Dec 4, 2013 at 4:38 PM, Charles Pritchard ch...@jumis.com wrote:


 On 12/4/13, 2:43 AM, Ke-Fong Lin wrote:

 IMHO, we should make sync APIs available in both dedicated and shared
 workers.
 In order of importance:

 1) Sync APIs are inherently easier to use than async ones, and they are
 much
 less error prone. JS developers are not C++ developers. Whenever possible,
 it's just better to make things simpler and more convenient.


 This argument is not particularly helpful. Apart from that, many JS APIs
 use callbacks, and all developers are, or have to be, aware of them.


  2) Sync APIs do the job. We are talking about web-apps, not heavy load
 servers.
 High performance applications will use async APIs anyway. I'd rather
 think there
 are a lot of use cases where the dedicated or shared worker would do a
 lot of small
 and short duration work, suitable for sync coding. Why force the
 complication of async on developers? If easy things can be done easily,
 then let it be.


 Promises seem to have solved quite a bit of the syntactic cruft/issues.


They help, but there's more that JS can do here. Generators/yield
will help in many cases, and an async or await keyword can be used to
hide promises entirely in a future version of the language with annotated
functions. We can do a lot to alleviate the burden even more, and I'm
excited about that future.
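
For instance, a generator can already make async steps read synchronously
(sketch; run() and delay() are illustrative helpers, not platform APIs):

function run(gen) {
  var it = gen();
  (function step(value) {
    var r = it.next(value);
    if (!r.done) { Promise.resolve(r.value).then(step); }
  })();
}

function delay(ms) {
  return new Promise(function(resolve) { setTimeout(resolve, ms); });
}

run(function*() {
  yield delay(100);  // reads like blocking code, never blocks the thread
  console.log('done');
});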


 Devs are already in an async world when doing JS.


+1.

I can't speak for the Blink team, but I (sort of obviously) think that sync
APIs in workers are, at best, features destined to get little use and even
less love. Code that uses them won't be portable outside of (specific kinds
(!!!) of) workers, and code that wants to be library-ish will need to add
the async indirection no matter what.
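
E.g., a library function that must run both on the main thread and in
workers ends up async at its boundary no matter what (sketch; assumes a
Promise implementation is available):

function readAsText(blob) {
  if (typeof FileReaderSync !== 'undefined') {
    // Worker-only sync API: sync inside, still async at the boundary.
    return Promise.resolve(new FileReaderSync().readAsText(blob));
  }
  return new Promise(function(resolve, reject) {
    var reader = new FileReader();
    reader.onload = function() { resolve(reader.result); };
    reader.onerror = function() { reject(reader.error); };
    reader.readAsText(blob);
  });
}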

Lastly, while I have sympathy for Jonas' argument about event-loop
concurrency creating thread-like issues as you pile more actors in, the
beauty of Workers is that they largely allow a coordinating document to
decouple actors and give them their own workspaces (workers). That the
problem arises in theory says nothing about how often it will in practice.


  3) It does no harm.


 It's not particularly fun re-writing async methods from the webpage to be
 sync for workers, or otherwise using shims to avoid redundancy. The extra
 semantic load on the namespaces (docs and otherwise) isn't all that
 pleasing either. There is a cost.






Re: [HTML Imports]: Sync, async, -ish?

2013-11-27 Thread Alex Russell
On Wed, Nov 27, 2013 at 9:46 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Stepping back a bit, I think we're struggling to ignore the elephant in
 the room. This elephant is the fact that there's no specification (or API)
 that defines (or provides facilities to control) when rendering happens.
 And for that matter, what rendering means.

 The original reason why script blocks execution until imports are loaded
 was not even related to rendering. It was a simple solution to an ordering
 problem -- if I am inside a script block, I am assured that any script
 before it had also run (whether it came from imports or not). It's the same
 reason why ES modules need a new HTML element (or script type at the very
 least).

 Blocking rendering was a side effect, since we simply took the plumbing
 from stylesheets.

 Then, events took a bewildering turn. Suddenly, this side effect turned
 into a feature/bug and now we're knee-deep in the sync-vs-async argument.
  And that's why all solutions look bad.

 With elements attribute, we're letting the user of the import pick the
 poison they prefer (would you like your page to be slow or would you rather
 it flash spastically?)

 With sync or async attribute, we're faced with an enormous
 responsibility of predicting the right default for a new feature. Might
 as well flip a coin there.

 I say we call out the elephant.


Agree entirely. Most any time we get into a situation where the UA can't
do the right thing it's because we're trying to have a debate without all
the information. There's a big role for us to play in setting defaults one
way or the other, particularly when they have knock-on optimization
effects, but that's something we know how to do.


 We need an API to control when things appear on screen. Especially, when
 things _first_ appear on screen.


+1000!!!

I'll take a stab at it. To prevent running afoul of existing heuristics in
runtimes regarding paint, I suggest this be declarative. That keeps us from
blocking anything based on a script element. To get the engine into the
right mode as early as possible, I also suggest it be an attribute on an
early element (html, link, or meta). Using <meta http-equiv="...">
gives us a hook into possibly exposing the switch as an HTTP header,
although it makes any API less natural as we don't then have a place in the
DOM to hang it from.

In terms of API capabilities, we can cut this a couple of ways (not
entirely exclusive):


   1. Explicit paint control, all the time, every time. This is very unlike
      the current model and, on pages that opt into it, would make them
      entirely dependent on JS for getting things on screens.
      1. This opens up a question of scoping: should all paints be blocked?
         Only for some elements? Should layouts be delayed until paints are
         requested? Since layouts are difficult to scope, what does paint
         scoping mean for them?
      2. An alternative might be a flag that's a one-time edge trigger:
         something that delays the *first* paint and, via an API, perhaps
         other upcoming paints, but which does not block the course of
         regular painting/layout.
      3. We would want to ensure that any API doesn't lock us into a
         contract of running code when a page doesn't actually need to be
         repainted (based on layout invalidation, etc.) or is hidden.
   2. Some sort of a paint threshold value (in ms) that defines how
      quickly the system should try to call back into script to kick off a
      paint, and a timeout value for how long it should wait before painting
      anyway. Could be combined with #1.

A first cut that takes some (but not all) of this into account might look
like:

<html paintpolicy="explicit"> <!-- defaults to "implicit" -->
  ...

  <script>
    // Explicit first paint, which switches
    // mode to implicit painting thereafter:
    window.requestAnimationFrame(function(timestamp) {
      document.documentElement.paint();
      document.documentElement.paintPolicy = "implicit";
    });
  </script>


This leaves questions of what individual elements can do to paint
themselves unresolved; that's something we should also investigate.

Thoughts?


 :DG




Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-14 Thread Alex Russell
On Wednesday, March 6, 2013, Tobie Langel wrote:

 On Wednesday, March 6, 2013 at 5:51 PM, Jarred Nicholls wrote:
  This is an entirely different conversation though. I don't know the
 answer to why sync interfaces are there and expected, except that some
 would argue that it makes the code easier to read/write for some devs.
 Since this is mirrored throughout other platform APIs, I wouldn't count
 this as a fault in IDB specifically.

 Sync APIs are useful to do I/O inside of a Worker.


I don't understand why that's true. Workers have a message-oriented API
that's inherently async. They can get back to their caller whenevs.
What's the motivator for needing this?


 They're also critical for data consistency in some scenarios, e.g.
 updating the database after a successful xhr request when a worker is about
 to be terminated.


Unload-catching is a known bug in much of the web platform. Why would we
enable it here?


Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-14 Thread Alex Russell
On Thursday, March 14, 2013, Tab Atkins Jr. wrote:

 On Thu, Mar 14, 2013 at 6:36 PM, Glenn Maynard gl...@zewt.org wrote:
  On Thu, Mar 14, 2013 at 1:54 PM, Alex Russell slightly...@google.com
  wrote:
  I don't understand why that's true. Workers have a message-oriented API
  that's inherently async. They can get back to their caller whenevs.
 What's
  the motivator for needing this?
 
  Being able to write synchronous code is one of the basic uses for
 Workers in
  the first place.  Synchronously creating streams is useful in the same
 way
  that other synchronous APIs are useful, such as FileReaderSync.
 
  That doesn't necessarily mean having a synchronous API for a complex
  interface like this is the ideal approach (there are other ways to do
 it),
  but that's the end goal.

 Yes, this seems to be missing the point of Workers entirely.  If all
 you have are async apis, you don't need Workers in the first place, as
 you can just use them in the main thread without jank.  Workers exist
 explicitly to allow you to do expensive synchronous stuff without
 janking the main thread.  (Often, the expensive synchronous stuff will
 just be a bunch of calculations, so you don't have to explicitly break
 it up into setTimeout-able chunks.)

 The entire reason for most async (all?) APIs is thus irrelevant in a
 Worker, and it may be a good idea to provide sync versions, or do
 something else that negates the annoyance of dealing with async code.


My *first* approach to this annoyance would be to start adding some async
primitives to the platform that don't suck so hard; e.g., Futures/Promises.
Saying that you should do something does not imply that doubling up on API
surface area for a corner-case is the right solution.
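
Concretely, a Future/Promise veneer over an IDB read takes a few lines
(sketch; assumes a Promise implementation is available):

function get(db, storeName, key) {
  return new Promise(function(resolve, reject) {
    var request = db.transaction(storeName)
                    .objectStore(storeName)
                    .get(key);
    request.onsuccess = function() { resolve(request.result); };
    request.onerror = function() { reject(request.error); };
  });
}

// Usage: get(db, 'albums', 42).then(render, reportError);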


  (FYI, the messaging in Workers isn't inherently async; it just happens to
  only have an async interface.  There's been discussion about adding a
  synchronous interface to messaging.)

 Specifically, this was for workers to be able to synchronously wait
 for messages from their sub-workers.  Again, the whole point for async
 worker messaging is to prevent the main thread from janking, which is
 irrelevant inside of a worker.

 ~TJ



Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-14 Thread Alex Russell
On Thursday, March 14, 2013, Glenn Maynard wrote:

 On Thu, Mar 14, 2013 at 8:58 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 The entire reason for most async (all?) APIs is thus irrelevant in a

 Worker, and it may be a good idea to provide sync versions, or do
 something else that negates the annoyance of dealing with async code.


 I agree, except that async APIs are also useful and relevant in workers.
  Sometimes you want synchronous code and sometimes you want asynchronous
 code, depending on what you're doing.


 On Thu, Mar 14, 2013 at 9:19 PM, Alex Russell slightly...@google.com wrote:

 My *first* approach to this annoyance would be to start adding some async
 primitives to the platform that don't suck so hard; e.g., Futures/Promises.
 Saying that you should do something does not imply that doubling up on API
 surface area for a corner-case is the right solution.


 Futures are nothing but a different async API.  They're in no way
 comparable to synchronous code.


I didn't imply they were. But addressing the pain point of asynchronous
code that's hard to use doesn't imply that the only answer is a synchronous
version. This is not a particularly hard or subtle point.


 But, as I said, it's true that a second synchronous interface isn't
 necessarily the best solution for complex APIs like IndexedDB.  At least in
 this particular case, if we have a synchronous messaging API I might call
 the synchronous IDB interface unnecessary.

 --
 Glenn Maynard




Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-07 Thread Alex Russell
On Wednesday, March 6, 2013, Ian Fette (イアンフェッティ) wrote:

 I seem to recall we contemplated people writing libraries on top of IDB
 from the beginning. I'm not sure why this is a bad thing.


It's not bad as an assumption, but it can quickly turn into an excuse for
API design malpractice because it often leads to the (mistaken) assumption
that user-provided code is as cheap as browser-provided API. Given that
users pull their libraries from the network more often than from disk (and
must parse/compile, etc.), the incentives of these two API-providers could
not be more different. That's why it's critical that API designers try to
forestall the need for libraries for as long as possible when it comes to
web features.


 We originally shipped web sql / sqlite, which was a familiar interface
 for many and relatively easy to use, but had a sufficiently large API
 surface area that no one felt they wanted to document the whole thing such
 that we could have an inter-operable standard. (Yes, I'm simplifying a bit.)


Yeah, I recall that the SQLite semantics were the big obstacle.


  As a result, we came up with an approach of "What are the fundamental
  primitives that we need?", spec'd that out, and shipped it. We had
 discussions at the time that we expected library authors to produce
 abstraction layers that made IDB easier to use, as the fundamental
 primitives approach was not necessarily intended to produce an API that
 was as straightforward and easy to use as what we were trying to replace.
 If that's now what is happening, that seems like a good thing, not a
 failure.


It's fine in the short run to provide just the low-level stuff and work up
to the high-level things -- but only when you can't predict what the
high-level needs will be. Assuming that's what the WG's view was, you're
right; feature not bug, although there's now more work to do.

Anyhow, IDB is incredibly high-level in many places and primitive in
others. ISTM that it's not easy to get a handle on its intended level of
abstraction.


 On Wed, Mar 6, 2013 at 10:14 AM, Alec Flett alecfl...@chromium.org wrote:

 My primary takeaway from both working on IDB and working with IDB for some
 demo apps is that IDB has just the right amount of complexity for really
 large, robust database use.. but for a "welcome to noSQL in the browser" it
 is way too complicated.

 Specifically:

1. *versioning* - The reason this exists in IDB is to guarantee a
schema (read: a fixed set of objectStores + indexes) for a given set of
operations.  Versioning should be optional. And if versioning is optional,
so should *opening* - the only reason you need to open a database is
so that you have a handle to a versioned database. You can *almost* 
 implement
versioning in JS if you really care about it...(either keep an explicit
key, or auto-detect the state of the schema) its one of those cases where
80% of versioning is dirt simple  and the complicated stuff is really about
maintaining version changes across multiply-opened windows. (i.e. one
window opens an idb, the next window opens it and changes the schema, the
first window *may* need to know that and be able to adapt without
breaking any in-flight transactions) -
2. *transactions* - Also should be optional. Vital to complex apps,
but totally not necessary for many.. there should be a default transaction,
like db.objectStore("foo").get("bar") (see the sketch below)
3. *transaction scoping* - even when you do want transactions, the api
is just too verbose and repetitive for "get one key from one object store"
- db.transaction("foo").objectStore("foo").get("bar") - there should be
implicit (lightweight) transactions like db.objectStore("foo").get("bar")
4. *forced versioning* - when versioning is optional, it should be
then possible to change the schema during a regular transaction. Yes, this
is a lot of rope but this is actually for much more complex apps, rather
than simple ones. In particular, it's not uncommon for more complex
database systems to dynamically create indexes based on observed behavior
of the API, or observed data (i.e. when data with a particular key becomes
prevalent, generate an index for it) and then dynamically use them if
present. At the moment you have to do a manual close/open/version change to
dynamically bump up the version - effectively rendering fixed-value
versions moot (i.e. the schema for version 23 in my browser may look
totally different than the schema for version 23 in your browser) and
drastically complicating all your code (Because if you try to close/open
while transactions are in flight, they will be aborted - so you have to
temporarily pause all new transactions, wait for all in-flight transactions
to finish, do a close/open, then start running all pending/paused
transactions.) This last case MIGHT be as simple as adding
db.reopen(newVersion) to the existing spec.
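
For what it's worth, the implicit-transaction sugar in points 2 and 3 is easy
to sketch as a wrapper over today's API (all names made up; the callback shape
is purely illustrative):

  function store(idb, name) {
    return {
      get: function(key, cb) {
        // Each call opens its own lightweight readonly transaction.
        var req = idb.transaction(name).objectStore(name).get(key);
        req.onsuccess = function() { cb(null, req.result); };
        req.onerror = function() { cb(req.error); };
      }
    };
  }

  // db.objectStore("foo").get("bar") becomes:
  store(idb, "foo").get("bar", function(err, value) { /* ... */ });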

IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-06 Thread Alex Russell
Comments inline. Adding some folks from the IDB team at Google to the
thread as well as public-webapps.

On Sunday, February 17, 2013, Miko Nieminen wrote:



 2013/2/15 Shwetank Dixit shweta...@opera.com

  Why did you feel it was necessary to write a layer on top of IndexedDB?


 I think this is the main issue here.

 As it stands, IDB is great in terms of features and power it offers, but
 the feedback I recieved from other devs was that writing raw IndexedDB
 requires an uncomfortable amount of verbosity even for some simple tasks
 (This can be disputed, but those are the views I got from some of the
 developers I interacted with). Adding that much code (once again,
 im talking of raw IndexedDB) makes it less readable and understandable. For
 beginners, this all seemed very intimidating, and for some people more
 experienced, it was a bit frustrating.


 After my experiments with IDB, I don't feel that it is particularly
 verbose. I have to admit that often I prefer slightly verbose syntax over
 shorter one when it makes reading the code easier. In IDB's case, I think
 this is the case.



  For the latter bit, I reckon it would be a good practice for groups
 working on low-level APIs to more or less systematically produce a library
 that operates at a higher level. This would not only help developers in
 that they could pick that up instead of the lower-level stuff, but more
 importantly (at least in terms of goals) it would serve to validate that
 the lower-level design is indeed appropriate for librarification.


 I think that would be a good idea. Also, people making those low level
 APIs should still keep in mind that the resulting code should not be too
 verbose or complex. Librarification should be an advantage, but not a de
 facto requirement for developers when it comes to such APIs. It should
 still be feasible for them to write code in the raw low level API without
 writing uncomfortably verbose or complex code for simple tasks. Spec
 designers of low level APIs should not take this as a license to make
 things so complex that only they and a few others understand it, and then
 hope that some others will go ahead and make it simple for the 'common
 folk' through an abstraction library.


 I don't quite see how to simplify IDB syntax much more.


I've avoided weighing in on this thread until I had more IDB experience.
I've been wrestling with it on two fronts of late:


   - A re-interpretation of the API based on Futures:

   https://github.com/slightlyoff/DOMFuture/tree/master/reworked_APIs/IndexedDB
   - A new async LocalStorage design + p(r)olyfill that's bootstrapped on
   IDB:
   https://github.com/slightlyoff/async-local-storage

While you might be right that it's unlikely that the API can be
simplified, I think it's trivial to extend it in ways that make it easier
to reason about and use.

This thread started out with a discussion of what might be done to keep
IDB's perceived mistakes from reoccurring. Here's a quick stab at both an
outline of the mistakes and what can be done to avoid them:


   - *Abuse of events*
   The current IDB design models one-time operations using events. This *can
   * make sense insofar as events can occur zero or more times in the
   future, but it's not a natural fit. What does it mean for oncomplete to
   happen more than once? Is that an error? Are onsuccess and onerror
   exclusive? Can they both be dispatched for an operation? The API isn't
   clear. Events don't lead to good design here as they don't encapsulate
   these concerns. Similarly, event handlers don't chain. This is natural, as
   they could be invoked multiple times (conceptually), but it's not a good
   fit for data access. It's great that IDB is async, and events are the
   existing DOM model for this, but IDB's IDBRequest object is calling out for
   a different kind of abstraction. I'll submit Futures for the job, but
   others might work (explicit callback, whatever) so long as they maintain
   chainability + async.

   - *Implicitness*
   IDB is implicit in a number of places that cause confusion for folks
   not intimately familiar with the contract(s) that IDB expects you to enter
   into. First, the use of events for delivery of notifications means that
   sequential-looking code that you might expect to have timing issues
   doesn't. Why not? Because IDB operates in some vaguely async way; you can't
   reason at all about events that have occurred in the past (they're not
   values, they're points in time). I can't find anywhere in the spec that the
   explicit guarantees about delivery timing are noted (
   http://www.w3.org/TR/IndexedDB/#async-api), so one could read IDB code
   that registers two callbacks as having a temporal dead-zone: a space in
   code where something might have happened but which your code might not have
   a chance to hear about. I realize that in practice this isn't the case;
   event delivery for these is asynchronous, but the soonest timing 

Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-06 Thread Alex Russell
On Wednesday, March 6, 2013, Glenn Maynard wrote:

 On Wed, Mar 6, 2013 at 8:01 AM, Alex Russell 
 slightly...@google.comjavascript:_e({}, 'cvml', 'slightly...@google.com');
  wrote:

 Comments inline. Adding some folks from the IDB team at Google to the
 thread as well as public-webapps.


 (I don't want to cold CC so many people, and anybody working on an IDB
 implementation should be on -webapps already, so I've trimmed the CC to
 that.  I'm not subscribed to -tag, so a mail there would probably bounce
 anyway.)


- *Abuse of events*
The current IDB design models one-time operations using events. This *
can* make sense insofar as events can occur zero or more times in the
future, but it's not a natural fit. What does it mean for oncomplete to
happen more than once? Is that an error? Are onsuccess and onerror
exclusive? Can they both be dispatched for an operation? The API isn't
clear. Events don't lead to good design here as they don't encapsulate
these concerns. Similarly, event handlers don't chain. This is natural, as
they could be invoked multiple times (conceptually), but it's not a good
fit for data access. It's great that IDB is async, and events are the
existing DOM model for this, but IDB's IDBRequest object is calling out 
 for
a different kind of abstraction. I'll submit Futures for the job, but
others might work (explicit callback, whatever) so long as they maintain
chainability + async.


 I disagree.  DOM events are used this way across the entire platform.


So which part do you disagree with? That events are a bad model for a
one-time action? Or that it's not clear what the expected contract is?
Going by what you've written below, I have to assume the latter, so I'll
just say this: try sitting a non-webdev down with IDB or any other DOM API
that works this way and try to get them to figure it out from code samples.
Yes, yes, being a webdev means knowing the platform idioms, but if we can
agree they're confusing and difficult, we can start to do something about
it. And in any case, you haven't refuted the former; events are simply a
bad model here.


 Everybody understands it, it works well, and coming up with something
 different can only add more complexity and inconsistency to the platform by
 having additional ways to model the same job.  I disagree both that we need
 a new way of handling this, and that IDB made a mistake in using the
 standard mechanism in an ordinary, well-practiced way.


- *Doubled API surface for sync version*
I assume I just don't understand why this choice was made, but the
explosion of API surface area combined with the conditional availability 
 of
this version of the API make it an odd beast (to be charitable).

 There's currently no other way to allow an API to be synchronous in
 workers but only async in the UI thread.


Of course not...but what does that have to do with the price of fish? The
core question is what's motivating a sync API here in the first place.

I won't be responding to the rest of your message.


 There was some discussion about a generalized way to allow workers to
 block on a message from another thread, which would make it possible to
 implement a synchronous shim for any async API in JavaScript.  In theory
 this could make it unnecessary for each API to have its own synchronous
 interface.  It wouldn't be as convenient, and probably wouldn't be suitable
 for every API, but for big, complex interfaces like IDB it might make
 sense.  There might also be other ways to express synchronous APIs based on
 their async interfaces without having a whole second interface (eg. maybe
 something like a method to block until an event is received).


- *The idea that this is all going to be wrapped up by libraries
anyway*

 I don't have an opinion about IDB specifically yet, but I agree that this
 is wrong.

 People have become so used to using wrappers around APIs that they've come
 to think of them as normal, and that we should design APIs assuming people
 will keep doing that.

 People wrap libraries when they're hard to use, and if they're hard to use
 then they're badly designed.  Just because people wrap bad APIs isn't an
 excuse for designing more bad APIs.  Wrappers for basic usage are always a
 bad thing: you always end up with lots of them, which means everyone is
 using different APIs.  When everyone uses the provided APIs directly, we
 can all read each others' code and all of our code interoperates much more
 naturally.

 (As you said, this is only referring to wrappers at the same level of
 abstraction, of course, not libraries providing higher-level abstractions.)

 --
 Glenn Maynard




Re: Monkeypatching document.createElement() is wrong

2013-02-12 Thread Alex Russell
+others who have been involved in the design phase of the Google proposal

So there are several viable points in the design space here. I'll try to
outline them quickly:


   1. An internal lifecycle driver for element + shadow creation.
   In this strategy, an element's constructor either calls
   createShadow()/finalizeInitialization() methods directly, or calls the
   superclass constructor to ensure that they are invoked.
   2. External lifecycle driver.
   In this design, it's up to whoever new's up an Element to ensure that
   it's fully formed before injecting it into the DOM.

The current design codifies the second.

Regarding Audio() and Image(), it's possible to model them as having
internal "already called" flags on their shadow creation methods that
prevent double initialization by createElement(). But I agree that it's
messier and muddies the de-sugaring story.
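
Rough shape of that guard, using the createShadow() name from above (the flag
name is hypothetical):

  HTMLImageElement.prototype.createShadow = function() {
    if (this._shadowAlreadyCreated) return;  // the "already called" flag
    this._shadowAlreadyCreated = true;
    // ... actually build the element's shadow tree here ...
  };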

Dimitri? Dominic?

On Tuesday, February 12, 2013, Anne van Kesteren wrote:

 If the goal of custom elements is to expose the guts of what happens,

 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#monkeypatch-create-element
 is the wrong solution. Currently new Image() and createElement("img")
 are equivalent and no additional magic is required. Same for new
 Audio() and createElement("audio"). What we want is that

 var x = document.createElement(name)

 maps to / is identical to

 var x = new name's-corresponding-object

 and nothing else.


 --
 http://annevankesteren.nl/




Re: Please add constructors for File and FileList

2013-02-06 Thread Alex Russell
Greetings Victor!



 On Dec 10, 2012, at 12:02 PM, Victor Costan wrote:

  Dear Web Applications WG,
 
  1) Please add a File constructor.


 This has cropped up a few times :)  I've logged a spec bug for this
 feature: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20887

 Could you flesh out your original use case a bit more?  As currently
 expressed, it sums up to I could write better unit tests which doesn't
 constitute what I'm looking for in a use case.


My view on this sort of thing is always different: there should always be
exposed constructors for interfaces for which there are observable
instances in the scripting environment. The debate, then, is what they
should do. For that we need use-cases.
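
To make the debate concrete, the shape I have in mind is something like this
(entirely hypothetical; no such constructor is specced today):

  // Bind a name and type to some blob parts at construction time.
  var file = new File(["hello, world"], "hello.txt", { type: "text/plain" });
  var fd = new FormData();
  fd.append("upload", file);  // the server sees a named file part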


 It strikes me that the chief ask is to bind a Blob to a name.  This would
 make life simpler with FormData and with existing server applications.
  It's been pointed out that the barrier between Blob and File is pretty
 thin, and I'm open to jettisoning the thought pattern that we should think
 of File objects as strictly filesystem backed (on disk).  So, what if we
 allowed the Blob constructor to take a name also?  This might allow Blobs
 to fall into the 80% of the 80-20 rule :)

 Could you update the bug (or this listserv) with better use cases?

 I'm a bit less upbeat about:


  2) Please add a FileList constructor.
 
  What I really want is some way to add files to an <input
  type="file">, as listed in
 
 http://wiki.whatwg.org/wiki/New_Features_Awaiting_Implementation_Interest
 
  I think that one reasonable way to get there is to have a FileList
  constructor that takes an array of File instances, and have the
  ability to assign that FileList to the files attribute of the input.
  This avoids making FileList mutable.
 
  This would also help me write better tests for my code. Currently,
  <input type="file"> is the only form field whose value can't be set
  inside JavaScript, so I can't write automated tests for <input
  type="file">-related code.
 
  Asides from improving testing, this would allow me to implement the
  following _easily_:
 
  * filters for uploaded content (e.g. resize a big image before
 uploading)
  * saving the file a user selected in an IndexedDB, and loading it
  back into the <input type="file"> if the page is accidentally
  refreshed
 
  These features can be implemented without FileList support, but they
  impact the application's design. For example, filters can be
  implemented using FormData from XMLHttpRequest level 2 to upload
  Blobs, but switching from plain form submission to XHR-based
  submission can impact other parts of the application code, whereas
  changing the files behind the input would be a localized change.


 As it stands currently, a FileList object typically stems from user
 action.  For instance, interacting with the file picker or with drag and
 drop.  In general, this model is desirable for security reasons -- the
 legacy HTML form and file picker ensures that web applications aren't doing
 things they shouldn't be doing with the underlying file system without user
 participation. I don't actually think it is desirable to modify <input
 type="file">… from within JavaScript.

 I'd like a better understanding of the broader implications of a FileList
 constructor, just out of abundant caution.  I have similar concerns about
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=17125 which is also filed
 on the FileList object, but these seem easier to manage.

 Hope to hear back,

 -- A*







Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-12-03 Thread Alex Russell
Sorry for the late response.

Adding more create* methods feels like a bug. I understand that there are
a couple of concerns/arguments here:

   - Current implementations that aren't self-hosting are going to have
   trouble with the idea of unattached (floating) ShadowRoot instances
   - As a result, the mental model implementers seem to have is that new
   ShadowRoot(element) has side-effects *on the element*, and that pretty
   clearly feels wrong. A future in which a ShadowRoot can be re-attached to a
   different element solves this (root.attach(element)?), but that's not planned for now.
   - "new" may lead to errors when a ShadowRoot instance is allocated out
   of one window and an element to attach to is from another. The general DOM
   solution of "allocate out of the element's ownerDocument window" feels
   right here, but isn't elegant in some corner cases.

So while I still favor something like new ShadowRoot().attach(element) or
new ShadowRoot(element), I think I can live with the create*() version
for now.

I would like for us to support one of the forward-looking versions,
however, if only in a known-limited form.
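
For reference, the shapes in play, side by side (a sketch; I haven't
double-checked the exact factory name in the new draft):

  var a = element.createShadowRoot();   // the factory form just specced
  var b = new ShadowRoot(element);      // constructor form
  var c = new ShadowRoot();             // forward-looking, re-attachable:
  c.attach(element);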


On Tue, Nov 20, 2012 at 12:08 AM, Dimitri Glazkov dglaz...@google.com wrote:

 I made the change to the editor's draft:
 http://dvcs.w3.org/hg/webcomponents/rev/e0dfe2ac8104

 You can read the shiny new parts of the spec here:

 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#extensions-to-element

 Please let me know if I goofed up something, preferably by filing bugs :)

 :DG




Re: CSP 1.1 DOM design

2012-11-05 Thread Alex Russell
Inline.


On Mon, Nov 5, 2012 at 9:27 AM, Mike West mk...@google.com wrote:

 On Sun, Nov 4, 2012 at 9:58 PM, Alex Russell slightly...@google.com wrote:

 Looking at Section 3.4 of the CSP 1.1 draft [1], I'm noticing that the
 IDL specified feels very, very strange to use from the JS perspective.


 Thanks for taking a look! This is great feedback.


  For instance, the name document.SecurityPolicy would indicate to a
 mere JS hacker like me that the SecurityPolicy is a class from which
 instances will be created. Instead, it's an instance of the SecurityPolicy
 interface. A more idiomatic name might be document.policy,
 document.csp, or document.securityPolicy as leading-caps tend to be
 reserved for classes, not instances.


 Adam, do you remember why we ran with 'SecurityPolicy' rather than
 'securityPolicy'? I know we discussed it, but I can only find the comment
 resulting from that discussion (
 https://bugs.webkit.org/show_bug.cgi?id=91707#c5).


 Similarly, it's not possible (AFAICT) to new-up an instance of
 SecurityPolicy and no API provided for parsing a policy to understand how
 it would react.


 That's an interesting suggestion. What's the use-case you see for
 providing a mechanism for parsing/examining a policy?


I'm hitting this in real time as I'm trying to write an extension that
merges/displays policies. Long story short, I've got a default user
policy that I'd like to apply to each page until/unless that page gives me
a more locked-down policy. I'd also like to know what has been blocked in
the page. The current API is balls for all of this. Luckily CSP parsing is
easy. On the downside, matching and getting the rule application right
isn't.

There's already a parser/matcher in the implementation. Not exposing it is,
on a more philosophical basis, simply bad design.


 The only thing I can come up with off the top of my head is the tool we
 briefly chatted about that would help developers understand the impact of a
 policy. :)


That was going to be my other use-case: I want to build a tool that shows
people how to construct policies well. But it's a minor point; the larger
issue is that it's bad layering to say "the browser does this but no, you
can't have access to that parser/matching logic." The default here has to
be for API designers to show why they MUST NOT expose something like this
when they're providing an API, not on developers to show that they MUST
have the feature. Don't worry, though, most of the DOM gets this wrong. But
we don't have to here! Hooray!




 Lastly, there's no serialization method provided. A toString()
 implementation might work well.


 What would the string representation of the object look like? Just the
 original policy?


Yep.


 One complication is that the page's active policy might be created by the
 union of several policies (one sent per HTTP, one in a meta tag, etc).
 Would we want to retain that representation in a string version?


Yes. Isn't that the applied policy?

Also, speaking as someone writing the union logic himself in JS at the
moment, I'd love for union/intersection methods to be made available. They
should take SecurityPolicy instances/strings (var-args style) as arguments
and return a new SecurityPolicy instance (not locked down).
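
In call-shape terms, something like this (made-up statics; nothing like them
exists in the draft, and userDefaultPolicy/pagePolicy are hypothetical):

  var merged = SecurityPolicy.union(
      document.securityPolicy,
      "script-src 'self'; img-src https:",
      userDefaultPolicy);
  var effective = SecurityPolicy.intersection(merged, pagePolicy);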




 readonly attribute DOMString[] reportURIs;


 We decided at TPAC to remove the reportURIs getter unless someone has a
 really good use-case for it.


First, that's just totally backwards. If it's in the serialization, it
needs to be in your API unless there's a compelling reason to remove it. As
I've been working with this stuff, I also think the per-policy
domain/setting flags should be exposed on SecurityPolicy instances as well.
reportUR[I|L]s should just be part of that list.

Next, not only should reportURLs be there, but there should be an event you
can catch for violations with the JSON payload you'd send to the server
delivered to the DOM as well.
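
Something like this, say (entirely hypothetical event name and payload shape;
nothing of the sort is in the draft):

  document.addEventListener("securityviolation", function(e) {
    // e.detail: the same JSON payload POSTed to report-uri
    console.log(e.detail["violated-directive"], e.detail["blocked-uri"]);
  });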




 One open issue: I'm not sure If allowsEval, allowsInlineScript, and
 allowsInlineStyle should just be boolean getters or if they should stay
 methods.


 I like the idea of converting these `allowEval()`-style calls to read-only
 booleans. Perhaps 'isActive' as well.


 Also, it's unclear if the current document's policy should simply be a
 locked-down instance of a SecurityPolicy class that has accessors for each
 of the policy items (script-src, object-src, style-src, img-src,
 media-src, frame-src, font-src, connect-src).


 I think that's more or less what the current interface does. (e.g.
 `document.SecurityPolicy.allowsFontFrom('xxx')` is an accessor for the
 effective permissions granted via the 'font-src' directive). Would you
 prefer more direct access to the policy? We'd shied away from that on the
 assumption that this interface required less knowledge of CSP in order to
 usefully include on a page. Should we revisit that question?


Yes  = )

I think it's good to have the test methods. I also think it's good to have
a full

Re: CSP 1.1 DOM design

2012-11-05 Thread Alex Russell
On Mon, Nov 5, 2012 at 10:56 AM, David Bruant bruan...@gmail.com wrote:

  On 05/11/2012 11:32, Alex Russell wrote:

 On Mon, Nov 5, 2012 at 1:08 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/4/12 3:58 PM, Alex Russell wrote:

  DOMString toString();


 This should probably be:

   stringifier;

 instead (which in ES will produce a toString on the prototype, but is
 more clear about the point, and might do different things in other binding
 languages).


  Other binding languages don't matter, but OK.

 I heard Google is working on this Dart thing. Unless Google redefines
 APIs for "every single web browser capability", it will probably need to
 define WebIDL bindings for Dart. But what do I know...


You know, we should go talk to the guys who designed the Dart DOM...oh,
wait, that's me (and arv and jacobr and jmesserly)!

We did *exactly* that:

http://www.dartlang.org/articles/improving-the-dom/

The W3C/WebIDL DOM sucks and every self-respecting language will re-define
the API, preserving the invariants and basic functionality, but aligning
the surface area, types, and calling conventions with the idioms
and intrinsics of the host language:

Python: http://docs.python.org/2/library/xml.etree.elementtree.html
Ruby: http://rubydoc.info/gems/hpricot/0.8.6/frames
Java: http://www.jdom.org/news/index.html
Dart: http://www.dartlang.org/articles/improving-the-dom/
 http://api.dartlang.org/docs/bleeding_edge/dart_html.html


 Yes, it's a lot of work, but if you're not taking care to create a great
API for one of your most frequently-used libraries, you're screwing your
language and your users. I posit that every language with enough users to
matter will do this exercise (JS has done so many times over in the form of
the ubiquitous library tax that slows so many sites).

FWIW, we can still use WebIDL as a stepping stone to fix the b0rken JS
bindings. But we need to collectively stop pretending that:


   1. We should be designing JS APIs through the lens of what WebIDL
   can/can't do
   2. That there are other language consumers of WebIDL-defined DOM APIs
   that both matter and will not go their own way when presented with
   un-idiomatic/kludgy designs.

Perhaps there's some future world in which we decide that having an IDL to
describe API invariants is a good idea (although it doesn't do that today
to any reasonable degree), but nobody I know is clamoring for that.


   Another thing to think about is whether reportURIs should really be an
 IDL array (which does NOT produce a JS array on the JS side, so it really
 depends on the expected use cases).


  I'll advocate for a JS array wherever we surface an array-like
 collection. It's long past time that we stopped shitting on users with
 ad-hoc collection types.

 Arguably, ES6 symbols may give a re-birth to ad-hoc collection types by
 allowing safe (uncollidable) extension of built-ins. I think an IDL array
 is fine (as far as I can tell, the difference with a regular array is just
 a different prototype).


That's enough to make it toxic.


Re: CSP 1.1 DOM design

2012-11-05 Thread Alex Russell
On Mon, Nov 5, 2012 at 12:14 PM, David Bruant bruan...@gmail.com wrote:

  On 05/11/2012 12:50, Alex Russell wrote:

  On Mon, Nov 5, 2012 at 10:56 AM, David Bruant bruan...@gmail.com wrote:

  On 05/11/2012 11:32, Alex Russell wrote:

 On Mon, Nov 5, 2012 at 1:08 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/4/12 3:58 PM, Alex Russell wrote:

  DOMString toString();


 This should probably be:

   stringifier;

 instead (which in ES will produce a toString on the prototype, but is
 more clear about the point, and might do different things in other binding
 languages).


  Other binding languages don't matter, but OK.

  I heard Google is working on this Dart thing. Unless Google redefine
 APIs for every single web browser capability, it will probably need to
 define WebIDL bindings for Dart. But what do I know...


  You know, we should go talk to the guys who designed the Dart DOM...oh,
 wait, that's me (and arv and jacobr and jmesserly)!

 I wrote "every single web browser capability", not DOM.


WebIDL is about DOM. Does it even have other users? If you don't expose an
API from a C++ impl to a more dynamic host language, it doesn't make sense.
To the extent that DOM is C++ objects vended to dynamic languages, yes,
I maintain that every API should be re-cast in terms of language-native
idioms. WebIDL gets you part of the way there for JS (and formerly for
Java). But not the whole way. When it prevents improvements (as it so often
does), it's a bug and not a feature.


 I agree with you about the DOM. The Dart effort (and DOM4 to some extent)
 is excellent for what it did to the DOM API (great job guys!).




  Yes, it's a lot of work, but if you're not taking care to create a
 great API for one of your most frequently-used libraries, you're screwing
 your language and your users. I posit that every language with enough users
 to matter will do this exercise (JS has done so many times over in the form
 of the ubiquitous library tax that slows so many sites).

  FWIW, we can still use WebIDL as a stepping stone to fix the b0rken JS
 bindings. But we need to collectively stop pretending that:


    1. We should be designing JS APIs through the lens of what WebIDL
can/can't do

Is anyone really doing this?


Part of my job is reviewing this sort of proposal and let me assure you
that *everyone* does this. IDL *is handy. *More to the point, it's the
language of the specs we have now, and the default mode for writing new
ones is copy/paste some IDL from another spec that looks close to what I
need and then hack away until it's close. This M.O. is exacerbated by the
reality that most of the folks writing these specs are C++ hackers, not JS
developers. For many, WebIDL becomes a safety blanket that keeps them from
having to ever think about the operational JS semantics or be confronted
with the mismatches.


 WebIDL didn't exist a couple of years ago and people were designing APIs
 anyways.


...in IDL. WebIDL is descended from MIDL/XPIDL which is descended from
CORBA IDL. WebIDL is merely a compatible subset + JS-leaning superset.


 Also, WebIDL is still in flux; if WebIDL is limiting, just ask for a
 change in WebIDL. I've seen a bunch of posts from Boris Zbarsky in that
 direction.


Heh. Who do you think advocated for DOM prototypes in the right locations?
Or has been continuously advocating that DOM designs not use create* but
instead lean on "new"? ;-)


1. That there are other language consumers of WebIDL-defined DOM APIs
that both matter and will not go their own way when presented with
un-idiomatic/kludgy designs.

If you've read WebIDL recently, you've realized that only an ECMAScript
 binding is defined.


Also my doing from TPAC 2011.


  I'm not sure anyone pretends there is another language consuming WebIDL.


You tried to.


 I feel you have some misconceptions regarding WebIDL.


Perhaps there's some future world in which we decide that having an
 IDL to describe API invariants is a good idea (although it doesn't do that
 today to any reasonable degree), but nobody I know is clamoring for that.


Another thing to think about is whether reportURIs should really be
 an IDL array (which does NOT produce a JS array on the JS side, so it
 really depends on the expected use cases).


  I'll advocate for a JS array wherever we surface an array-like
 collection. It's long past time that we stopped shitting on users with
 ad-hoc collection types.

  Arguably, ES6 symbols may give a re-birth to ad-hoc collection types by
 allowing safe (uncollidable) extension of built-ins. I think an IDL array
 is fine (as far as I can tell, the difference with a regular array is just
 a different prototype).


  That's enough to make it toxic.

 I don't understand this point. I'm not 100% up-to-date on ES6 classes, but
 it seems that WebIDL arrays are the equivalent of doing class MadeUpName
 extends Array{}. If that's the case, do you think extending Array using
 ES6

CSP 1.1 DOM design

2012-11-04 Thread Alex Russell
Hi all,

Looking at Section 3.4 of the CSP 1.1 draft [1], I'm noticing that the IDL
specified feels very, very strange to use from the JS perspective.

For instance, the name document.SecurityPolicy would indicate to a mere
JS hacker like me that the SecurityPolicy is a class from which instances
will be created. Instead, it's an instance of the SecurityPolicy interface.
A more idiomatic name might be document.policy, document.csp, or
document.securityPolicy as leading-caps tend to be reserved for classes,
not instances.

Similarly, it's not possible (AFAICT) to new-up an instance of
SecurityPolicy and no API provided for parsing a policy to understand how
it would react.

Lastly, there's no serialization method provided. A toString()
implementation might work well. Here's some IDL and sample code that shows
how it might be repaired:

[NamedConstructor=SecurityPolicy,
 NamedConstructor=SecurityPolicy(DOMString policy),
 NamedConstructor=SecurityPolicy(DOMString policy, DOMString origin)]
interface SecurityPolicy {
readonly attribute DOMString[] reportURIs;
bool allowsEval();
bool allowsInlineScript();
bool allowsInlineStyle();
bool allowsConnectionTo(DOMString url);
bool allowsFontFrom(DOMString url);
bool allowsFormAction(DOMString url);
bool allowsFrameFrom(DOMString url);
bool allowsImageFrom(DOMString url);
bool allowsMediaFrom(DOMString url);
bool allowsObjectFrom(DOMString url);
bool allowsPluginType(DOMString type);
bool allowsScriptFrom(DOMString url);
bool allowsStyleFrom(DOMString url);
bool isActive();
DOMString toString();
};

// Examples from the draft:
var isCSPSupported = !!document.securityPolicy;
// or:
var isCSPSupported = (typeof SecurityPolicy != "undefined");

var isCSPActive = document.securityPolicy.isActive();

// Parse an ssl-only policy as though it were applied to example.com and
then test it:
var policy = new SecurityPolicy(
    "default-src https:; script-src https: 'unsafe-inline'; style-src https: 'unsafe-inline'",
    "https://example.com");
// Can I load a font over HTTP?
policy.allowsFontFrom("http://example.com/"); // false

One open issue: I'm not sure If allowsEval, allowsInlineScript, and
allowsInlineStyle should just be boolean getters or if they should stay
methods. Also, it's unclear if the current document's policy should simply
be a locked-down instance of a SecurityPolicy class that has accessors for
each of the policy items (script-src, object-src, style-src, img-src,
media-src, frame-src, font-src, connect-src). I'm inclined to say yes.
Thoughts?

[1]:
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html#script-interfaces--experimental


Re: [webcomponents] HTML Parsing and the template element

2012-05-02 Thread Alex Russell
What Tab said.

On Tue, Apr 24, 2012 at 5:45 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Apr 24, 2012 at 9:12 AM, Clint Hill clint.h...@gmail.com wrote:
 Hmm. I have to say that I disagree that your example below shows a
 template within a template. That is IMO 1 template wherein there is
 iteration syntax.

 The iteration syntax is basically an element - the example that Arv
 gave even used element-like syntax, with open and close tags.  That
 iteration element is inside of a template.

 If iteration uses a different tagname than normal templating (say,
 <iterate>), thus avoiding the "nesting template in template"
 problem, you still have the problem of nesting iteration, which is
 *also* a common ability for template systems.

 Any way you slice it, common templating scenarios will create problems
 if you don't hook it up to a proper parser at some point.  Might as
 well do that early so you can immediately delve into it with DOM
 methods and whatnot, rather than delaying it and keeping it as flat
 text until the point of use.

 ~TJ




Re: [webcomponents] Custom Elements Spec

2012-05-02 Thread Alex Russell
On Wed, May 2, 2012 at 12:42 AM, Dimitri Glazkov dglaz...@chromium.org wrote:
 Based on the hallway conversations at the F2F, here are some notes for
 the upcoming Custom Elements spec.

 Custom tags vs. "is" attribute
 - "is" attribute is awkward, overly verbose
 - custom tags introduce local semantics
 - generally viewed as a rabbit-hole discussion in WebApps scope
 - Tantek (tantek) suggested we work this out in HTML WG
 - perhaps start with something as simple as reserving x- prefix on
 HTML tags for local semantics.

 Instantiation and running script
 - both Microsoft and Mozilla folks wish to avoid running script when
 instantiating elements, which is a valid concern (mutation events
 redux)

I'm having trouble with this. Components are defined as script-driven
lifecycles. Script *is* the runtime.

 - instantiation of the element must set up the prototype chain
 properly, since ES5 does not allow prototype swizzling

ES6 will allow it in an appendix. We can (ab)use __proto__ for now.
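
Concretely, the stopgap looks something like this (sketch only; XFooPrototype
is a made-up name):

  var el = document.createElement("x-foo");
  el.__proto__ = XFooPrototype;  // swizzle once the parser hands us the node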

 - Tony (tross) is worried that even if handled asynchronously, the
 performance characteristics of running script when parsing HTML should
 be carefully considered
 - Jonas (sicking) reiterated that it is _critical_ that the custom
 element's behavior is strongly bound to the lifetime of its element

Can't agree more. This is another strong reason to simply have custom
elements simply be JS objects created the standard way. That way the
identity is unambiguous and unchangeable. That some implementations
may have C++ gunk hanging out in the background is neither here nor
there. It's an implementation detail.

 Random ideas from various people:
 - Minimal custom elements: spec building the prototype chain in
 parser, spec template tag. Given these, the rest of the spec can be
 implemented in JS.
 - Start writing the spec with element instantiation, evaluate
 performance issues and tweak until awesome.
 - Ship Shadow DOM, reserve x- prefix, and let the Web devs start
 using new stuff. Study what comes back and see what else needs to be
 done.

Outstanding!




Re: QSA, the problem with :scope, and naming

2011-10-31 Thread Alex Russell
On Fri, Oct 21, 2011 at 12:41 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Oct 20, 2011 at 2:33 PM, Lachlan Hunt lachlan.h...@lachy.id.au
 wrote:
 Not necessarily.  It depends what exactly it means for a selector to
 contain
 :scope for determining whether or not to enable the implied :scope
 behaviour.  Consider:

   foo.find(":not(:scope)");

 Ooh, this is an interesting case too. So here's the full list of cases which
 we need defined behavior for (again looking at Alex and Yehuda here).

 In the following DOM

 <body id=3>
  <div id=0></div>
  <div id=context foo=bar>
   <div id=1></div>
   <div class=class id=2></div>
   <div class=withChildren id=3><div class=child id=4></div></div>
  </div>
  <div id=5></div>
  <div id=6></div>
 </body>

 What would each of the following .findAll calls return? I've included my
 guesses based on the discussions so far:

 var e = document.getElementById('context');

 e.findAll("div")  // returns ids 1,2,3,4
 e.findAll("")      // returns an empty list
 e.findAll("#3")  // returns id 3, but not the body node
 e.findAll("> div") // returns ids 1,2,3
 e.findAll("[foo=bar]") // returns nothing
 e.findAll("[id=1]") // returns id 1
 e.findAll(":first-child") // returns id 1
 e.findAll("+ div") // returns id 5
 e.findAll("~ div") // returns id 5, 6
 e.findAll(":scope")

Returns the context.

 e.findAll("div:scope")

Returns the context.

 e.findAll("[foo=bar]:scope")

Returns the context.

 e.findAll(":scope div")

1, 2, 3, 4

 e.findAll("div:scope div")

1, 2, 3, 4

 e.findAll("div:scope #3")

3

 e.findAll("body > :scope > div")

1, 2, 3, 4

 e.findAll("div, :scope")

context, 1, 2, 3, 4

 e.findAll("body > :scope > div, :scope")

context, 1, 2, 3, 4

 e.findAll(":not(:scope)")

empty set



Re: QSA, the problem with :scope, and naming

2011-10-31 Thread Alex Russell
What Tab said  = )

On Sun, Oct 30, 2011 at 9:23 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Sat, Oct 29, 2011 at 8:28 PM, Cameron McCormack c...@mcc.id.au wrote:
 On 20/10/11 3:50 AM, Alex Russell wrote:

 I strongly agree that it should be an Array *type*, but I think just
 returning a plain Array is the wrong resolution to our NodeList
 problem. WebIDL should specify that DOM List types *are* Array types.
 It's insane that we even have a NodeList type which isn't a real array
 at all. Adding a parallel system when we could just fix the one we
 have (and preserve the value of a separate prototype for extension) is
 wonky to me.

 That said, I'd *also* support the ability to have some sort of
 decorator mechanism before return on .find() or a way to re-route the
 prototype of the returned Array.

 +heycam to debate this point.

 Late replying here again, apologies, but I agree with others who say that an
 actual Array object should be returned from this new API given that it is
 not meant to be live.  What benefit is there from returning a NodeList?

 If it's a NodeList (or something else that *subclasses* Array) we can
 do fun things like add .find to it, which returns the sorted union of
 calling .find on all the elements within it.  Returning a plain Array
 doesn't let us do that.

 ~TJ




Re: QSA, the problem with :scope, and naming

2011-10-31 Thread Alex Russell
On Sun, Oct 30, 2011 at 1:23 PM, Rick Waldron waldron.r...@gmail.com wrote:


 On Sat, Oct 29, 2011 at 11:28 PM, Cameron McCormack c...@mcc.id.au wrote:

 On 20/10/11 3:50 AM, Alex Russell wrote:

 I strongly agree that it should be an Array *type*, but I think just
 returning a plain Array is the wrong resolution to our NodeList
 problem. WebIDL should specify that DOM List types *are* Array types.
 It's insane that we even have a NodeList type which isn't a real array
 at all. Adding a parallel system when we could just fix the one we
 have (and preserve the value of a separate prototype for extension) is
 wonky to me.

 That said, I'd *also* support the ability to have some sort of
 decorator mechanism before return on .find() or a way to re-route the
 prototype of the returned Array.

 +heycam to debate this point.

 Late replying here again, apologies, but I agree with others who say that
 an actual Array object should be returned from this new API given that it is
 not meant to be live.  What benefit is there from returning a NodeList?

 As much as I hate saying this: introducing a third return type would be
 counter-productive, as you'd now have live NodeList, static NodeList and
 subclassed Array. Congratulations, the cluster-f*ck continues in true form.

Live NodeList instances don't need to be considered here. They're the
result of an API which generates them, and that API can be described
in terms of Proxies. No need to complicate NodeList or imply that we
need a different type.

Making NodeList instances real arrays unifies them all. We can get that done too.



Re: QSA, the problem with :scope, and naming

2011-10-31 Thread Alex Russell
On Mon, Oct 31, 2011 at 2:03 PM, Cameron McCormack c...@mcc.id.au wrote:
 On 31/10/11 1:56 PM, Alex Russell wrote:

 Live NodeList instances don't need to be considered here. They're the
 result of an API which generates them, and that API can be described
 in terms of Proxies. No need to complicate NodeList or imply that we
 need a different type.

  Making NodeList instances real arrays unifies them all. We can get that
 done too.

 Don't live and static NodeLists use the same prototype?

Yes, I envision they would. The restrictions on live lists are
probably going to be created by a proxy that wraps them and manages
their semantics.

 If they are
 separate, I don't see any problem with making them real arrays, but I am
 not sure what the implications of that are.  Array.isArray == true, I guess?

For JS, it just means having a working .length property (in all the
ways that Arrays allow it to be used). Arrays are identical to other
objects in all other respects.

  Do we have that ability within the bounds of ECMAScript yet? Note that we
 can already make NodeList.prototype === Array.prototype if we want, using
 appropriate Web IDL annotations.

We'll need ES 6 proxies to get the working .length thing today. Not ideal.
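
Roughly this, for the live case (a sketch assuming the eventual ES6 Proxy API;
liveList and runQuery are made-up names, and this is not how engines
implement NodeList):

  function liveList(runQuery) {
    return new Proxy({}, {
      get: function(target, prop) {
        var nodes = runQuery();  // re-run the query on each access
        if (prop === "length") return nodes.length;
        return nodes[prop];
      }
    });
  }

  var divs = liveList(function() {
    return document.querySelectorAll("div");
  });
  divs.length;  // always reflects the document as of right now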



Re: QSA, the problem with :scope, and naming

2011-10-31 Thread Alex Russell
On Mon, Oct 31, 2011 at 2:18 PM, Charles Pritchard ch...@jumis.com wrote:
 On 10/31/11 2:03 PM, Cameron McCormack wrote:

 On 31/10/11 1:56 PM, Alex Russell wrote:

 Live NodeList instances don't need to be considered here. They're the
 result of an API which generates them, and that API can be described
 in terms of Proxies. No need to complicate NodeList or imply that we
 need a different type.

 Making NodeList instances real array unifies them all. We can get that
 done too.

 Don't live and static NodeLists use the same prototype?  If they are
 separate, I don't see any problem with making them real arrays, but I am
 not sure what the implications of that are.  Array.isArray == true, I guess?
  Do we have that ability within the bounds of ECMAScript yet? Note that we
 can already make NodeList.prototype === Array.prototype if we want, using
 appropriate Web IDL annotations.

 Array seems to work fine in WebKit:
 document.getElementsByTagName('div').__proto__.__proto__ = Array.prototype;

 dojo just reimplements NodeList as an array:
 http://dojotoolkit.org/reference-guide/dojo/NodeList.html

The reason we did it that way is because there's no other way to
create an intermediate type with the magic .length property.

 I don't understand what real array means, other than the prototype
 equivalence.

 If NodeList were an array, what's the behavior of running push on NodeList?
 The list may end up with non-node objects if push is not supplemented.

 -Charles






Re: Is BlobBuilder needed?

2011-10-25 Thread Alex Russell
+1!

On Mon, Oct 24, 2011 at 3:52 PM, Jonas Sicking jo...@sicking.cc wrote:
 Hi everyone,

 It was pointed out to me on twitter that BlobBuilder can be replaced
 with simply making Blob constructable. I.e. the following code:

 var bb = new BlobBuilder();
 bb.append(blob1);
 bb.append(blob2);
 bb.append("some string");
 bb.append(myArrayBuffer);
 var b = bb.getBlob();

 would become

 b = new Blob([blob1, blob2, "some string", myArrayBuffer]);

 or look at it another way:

 var x = new BlobBuilder();
 becomes
 var x = [];

 x.append(y);
 becomes
 x.push(y);

 var b = x.getBlob();
 becomes
 var b = new Blob(x);

 So at worst there is a one-to-one mapping in code required to simply
 have |new Blob|. At best it requires much fewer lines if the page has
 several parts available at once.

 And we'd save a whole class since Blobs already exist.

 / Jonas





Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Alex Russell
On Wed, Oct 19, 2011 at 7:01 PM, Lachlan Hunt lachlan.h...@lachy.id.au wrote:
 On 2011-10-19 16:08, Alex Russell wrote:

 On Wed, Oct 19, 2011 at 1:54 PM, Lachlan Hunt lachlan.h...@lachy.id.au
  wrote:

 I have attempted to address this problem before and the algorithm for
 parsing a *scoped selector string* (basically what you're calling a
 rootedSelector) existed in an old draft [1].

 That draft also allowed the flexibility of including an explicit :scope
 pseudo-class in the selector, which allows for conditional expressions to
 be
 built into the selector itself that can be used to check the state of the
 scope element or any of its ancestors.

 We could accommodate that by looking at the passed selector and trying
 to determine if it includes a :scope term. If so, avoid prefixing.

 Yes, that's exactly what the draft specified.

Great! So if we specify this behavior for .find() too, I think we're
in good shape.

 That'd allow this sort of flexibility for folks who want to write
 things out long-hand or target the scope root in the selector,
 possibly returning itself.

 I don't see a use case for wanting the proposed method to be able to return
 the element itself.  The case where it's useful for elements matching :scope
 to be the subject of a selector is where you're trying to filter a list of
 elements.

 e.g.
  document.querySelectorAll(".foo:scope", list);
  // Returns all elements from list that match.

 But this wouldn't make sense

  el.find(".foo:scope") // Return itself if it matches.

Ok, I'm fine with not allowing that.

 That result seems effectively like a less efficient boolean check that is
 already handled by el.matchesSelector(".foo").

matchesSelector...really? We've gotta get a better name for that = )

 I'd also support a resolution for this sort of power-tool that
 forces people to use document.qsa(...,scopeEl) to get at that sort
 of thing.

 If there was no special handling to check for an explicit :scope, that would
 mean that any selector that does include :scope explicitly would not match
 anything at all.

 e.g. el.findAll(":scope>p");

yeah, that occurred to me after sending the last mail.

 That would be equivalent to:

  document.querySelectorAll(":scope :scope>p", el);

 Which won't match anything.

 That might keep things simpler from an implementation perspective and
 doesn't sacrifice any functionality being requested.

Eh, I'm not sure it's sane though. Putting in checking for :scope in
the selector and not prefixing if it occurs seems the only reasonable
thing. There's a corner case I haven't formed an opinion on though:

   el.find("div span :scope .whatevs");

...does what? I think it's an error. :scope will need to occur in
the first term or not at all for .find().
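
In other words, roughly this decision procedure (a sketch; it assumes an
engine where querySelectorAll resolves :scope against the element, and the
substring check stands in for real parsing):

  function scopedFindAll(el, selector) {
    // An explicit :scope anywhere means "don't prefix".
    if (selector.indexOf(":scope") === -1) {
      selector = ":scope " + selector;
    }
    return el.querySelectorAll(selector);
  }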

 --
 Lachlan Hunt - Opera Software
 http://lachy.id.au/
 http://www.opera.com/




Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Alex Russell
On Thu, Oct 20, 2011 at 3:07 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Oct 18, 2011 at 9:42 AM, Alex Russell slightly...@google.com wrote:
 Lachlan and I have been having an...um...*spirited* twitter discussion
 regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
 and :scope. He asked me to continue here, so I'll try to keep it
 short:

 The rooted forms of querySelector and querySelectorAll are mis-designed.

 Discussions about a Scoped variant or :scope pseudo tacitly
 acknowledge this, and the JS libraries are proof in their own right:
 no major JS library exposes the QSA semantic, instead choosing to
 implement a rooted search.

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names. APIs gain
 use both through naming and through use. On today's internet -- the
 one where 50% of all websites include jQuery -- you could even go with
 element.$(selector) and everyone would know what you mean: it's
 clearly a search rooted at the element on the left-hand side of the
 dot.

 Ceteris paribus, shorter is better. When there's a tie that needs to
 be broken, the more frequently used the API, the shorter the name it
 deserves -- i.e., the larger the component of its meaning it will gain
 through use and repetition and not naming and documentation.

 I know some on this list might disagree, but all of the above is
 incredibly non-controversial today. Even if there may have been
 debates about scoping or naming when QSA was originally designed,
 history has settled them. And QSA lost on both counts.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from. I'd like to instead propose that we
 shorten all of this up and kill both stones by introducing a new API
 pair, find and findAll, that are rooted as JS devs expect. The
 above becomes:

   element.findAll("> div > .thinger");

 Out come the knives! You can't start a selector with a combinator!

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

 HTMLDocument.prototype.find =
 HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 Of course, :scope in this case is just a special case of the ID
 rooting hack, but if we're going to have it, we can kill both birds
 with it.

 Obvious follow up questions:

 Q.) Why do we need this at all? Don't the toolkits already just do
 this internally?
 A.) Are you saying everyone, everywhere, all the time should need to
 use a toolkit to get sane behavior from the DOM? If so, what are we
 doing here, exactly?

 Q.) Shorter names? Those are for weaklings!
 A.) And humans. Who still constitute most of our developers. Won't
 someone please think of the humans?

 Q.) You're just duplicating things!
 A.) If you ignore all of the things that are different, then that's
 true. If not, well, then no. This is a change. And a good one for the
 reasons listed above.

 Thoughts?

 I like the general idea here. And since we're changing behavior, I
 think it's a good opportunity to come up with shorter names. Naming is
 really hard. The shorter the names we use, the more likely it is that
 we're going to break webpages which are messing around with the
 prototype chain, and it increases the risk that we'll regret it later
 when we come up with even better functions which should use those
 names.

So long as the slots are still writable, no loss. Their patches into
the prototype chain still exist. Being afraid of this when we're on
top seems really, *REALLY* strange to me.

 Say that we come up with an even better query language than
 selectors, at that point .find will simply not be available to us.

Premature optimization. And $ is still available ;-)

 However, it does seem like selectors are here to stay. And as much as
 they have shortcomings, people seem to really like them for querying.

 So with that out of the way, I agree that the CSS working group
 shouldn't be what is holding us back. However we do need a precise
 definition of what the new function does. Is prepending

Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Alex Russell
On Thu, Oct 20, 2011 at 6:55 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Oct 18, 2011 at 9:42 AM, Alex Russell slightly...@google.com wrote:
 Lachlan and I have been having an...um...*spirited* twitter discussion
 regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
 and :scope. He asked me to continue here, so I'll try to keep it
 short:

 The rooted forms of querySelector and querySelectorAll are mis-designed.

 Discussions about a Scoped variant or :scope pseudo tacitly
 acknowledge this, and the JS libraries are proof in their own right:
 no major JS library exposes the QSA semantic, instead choosing to
 implement a rooted search.

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names. APIs gain
 use both through naming and through use. On today's internet -- the
 one where 50% of all websites include jQuery -- you could even go with
 element.$(selector) and everyone would know what you mean: it's
 clearly a search rooted at the element on the left-hand side of the
 dot.

 Ceteris paribus, shorter is better. When there's a tie that needs to
 be broken, the more frequently used the API, the shorter the name it
 deserves -- i.e., the larger the component of its meaning it will gain
 through use and repetition and not naming and documentation.

 I know some on this list might disagree, but all of the above is
 incredibly non-controversial today. Even if there may have been
 debates about scoping or naming when QSA was originally designed,
 history has settled them. And QSA lost on both counts.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from. I'd like to instead propose that we
 shorten all of this up and kill both birds by introducing a new API
 pair, find and findAll, that are rooted as JS devs expect. The
 above becomes:

   element.findAll("> div > .thinger");

 Out come the knives! You can't start a selector with a combinator!

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 Of course, :scope in this case is just a special case of the ID
 rooting hack, but if we're going to have it, we can kill both birds
 with it.

 Obvious follow up questions:

 Q.) Why do we need this at all? Don't the toolkits already just do
 this internally?
 A.) Are you saying everyone, everywhere, all the time should need to
 use a toolkit to get sane behavior from the DOM? If so, what are we
 doing here, exactly?

 Q.) Shorter names? Those are for weaklings!
 A.) And humans. Who still constitute most of our developers. Won't
 someone please think of the humans?

 Q.) You're just duplicating things!
 A.) If you ignore all of the things that are different, then that's
 true. If not, well, then no. This is a change. And a good one for the
 reasons listed above.

 Thoughts?

 Oh, and as a separate issue. I think .findAll should return a plain
 old JS Array. Not a NodeList or any other type of host object.

I strongly agree that it should be an Array *type*, but I think just
returning a plain Array is the wrong resolution to our NodeList
problem. WebIDL should specify that DOM List types *are* Array types.
It's insane that we even have a NodeList type which isn't a real array
at all. Adding a parallel system when we could just fix the one we
have (and preserve the value of a separate prototype for extension) is
wonky to me.

That said, I'd *also* support the ability to have some sort of
decorator mechanism before return on .find() or a way to re-route the
prototype of the returned Array.

+heycam to debate this point.

 One of
 the use cases is being able to mutate the returned value. This is
 useful if you're for example doing multiple .findAll calls (possibly
 with different context nodes) and want to merge the resulting lists
 into a single list.

Agreed. An end to the Array.slice() hacks would be great.
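
To make the contrast concrete, here's a minimal sketch of the merge use
case (hypothetical element names; assumes a findAll that returns a real
Array, per the proposal above):

  // Today: NodeLists must be converted before they can be merged.
  var a = Array.prototype.slice.call(el1.querySelectorAll(".thinger"));
  var b = Array.prototype.slice.call(el2.querySelectorAll(".thinger"));
  var merged = a.concat(b);

  // With a findAll that returns a real Array, the hack disappears:
  // var merged = el1.findAll(".thinger").concat(el2.findAll(".thinger"));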



Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Alex Russell
On Thu, Oct 20, 2011 at 12:05 PM, Lachlan Hunt lachlan.h...@lachy.id.au wrote:
 On 2011-10-20 12:50, Alex Russell wrote:

 On Thu, Oct 20, 2011 at 6:55 AM, Jonas Sickingjo...@sicking.cc  wrote:

 Oh, and as a separate issue. I think .findAll should return a plain
 old JS Array. Not a NodeList or any other type of host object.

 I strongly agree that it should be an Array *type*, but I think just
 returning a plain Array is the wrong resolution to our NodeList
 problem. WebIDL should specify that DOM List types *are* Array types.

 We need NodeList separate from Array where they are live lists.

No we don't. The fact that there's someone else who has a handle to
the list and can mutate it underneath you is a documentation issue,
not a question of type...unless the argument is that the slots should
be non-configurable, non-writable except by the browser that's also
holding a ref to it.

  I forget
 the reason we originally opted for a static NodeList rather than Array when
 this issue was originally discussed a few years ago.



Re: QSA, the problem with :scope, and naming

2011-10-20 Thread Alex Russell
On Thu, Oct 20, 2011 at 3:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/20/11 7:18 AM, Alex Russell wrote:

 No we don't. The fact that there's someone else who has a handle to
 the list and can mutate it underneath you

 There is no sane way to mutate the list on the part of the browser if
 someone else is also messing with it, because the someone else can violate
 basic invariants the browser's behavior needs to maintain.

Right. So you need to vend an apparently-immutable Array, one which
can only be changed by the browser. I think that could be accomplished
in terms of Proxies. But it's still an Array type.

 unless the argument is that the slots should
 be non-configurable, non-writable except by the browser that's also
 holding a ref to it.

 Yes.

 Though I don't know what slots you're talking about; the only sane JS
 implementation of live nodelists is as a proxy.  There's no way to get the
 behaviors that browsers have for them otherwise.

But it can be a Proxy to an *Array*, not to some weird non-Array type.
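
As a rough sketch of what vending an apparently-immutable Array could
mean (assumes ES Proxy semantics; the update path here is invented
purely for illustration, not any browser's internals):

  function makeLiveArray() {
    var backing = [];                    // only the "browser" side touches this
    var view = new Proxy(backing, {
      set: function() { return false; },            // page writes are rejected
      defineProperty: function() { return false; }, // ditto
      deleteProperty: function() { return false; }  // ditto
    });
    // Array.isArray(view) === true -- it's still an Array type.
    return {
      view: view,                 // hand this to the page
      update: function(items) {   // browser-only mutation path
        backing.length = 0;
        backing.push.apply(backing, items);
      }
    };
  }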



Re: QSA, the problem with :scope, and naming

2011-10-19 Thread Alex Russell
On Wed, Oct 19, 2011 at 2:26 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/18/11 8:08 PM, Alex Russell wrote:

 The other excuse is that adding special cases (which is what you're
 asking
 for) slows down all the non-special-case codepaths.  That may be fine for
 _your_ usage of querySelectorAll, where you use it with a particular
 limited
 set of selectors, but it's not obvious that this is always a win.

 Most browsers try to optimize what is common.

 Yes, but what is common for Yehuda may well not be globally common.

Yehuda is representing jQuery. I'll take his opinion as the global
view unless he chooses to say he's representing a personal opinion.

 There's also the question of premature optimization.  Again, I'd love to see
 a non-synthetic situation where any of this matters.  That would be a much
 more useful point to reason from than some sort of hypothetical faith-based
 optimization.

The jQuery team did look at which selectors are hottest against
their engine at some point and explicitly optimized short selectors as
a result. The simple forms seem to be the most common.
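
For illustration, a simplified sketch of that kind of fast-path
dispatch (hypothetical, not Sizzle's actual code; the id path ignores
whether the match lives inside root):

  var SIMPLE = /^(?:#([\w-]+)|\.([\w-]+)|(\w+))$/;
  function fastQuery(selector, root) {
    root = root || document;
    var m = SIMPLE.exec(selector);
    if (m) {
      if (m[1]) {   // "#id" -- cheapest lookup available
        var el = document.getElementById(m[1]);
        return el ? [el] : [];
      }
      if (m[2]) {   // ".class"
        return Array.prototype.slice.call(root.getElementsByClassName(m[2]));
      }
      // bare tag name
      return Array.prototype.slice.call(root.getElementsByTagName(m[3]));
    }
    // Everything else falls back to the full selector engine.
    return Array.prototype.slice.call(root.querySelectorAll(selector));
  }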

Regards



Re: QSA, the problem with :scope, and naming

2011-10-19 Thread Alex Russell
On Wed, Oct 19, 2011 at 4:39 AM, Ojan Vafai o...@chromium.org wrote:
 Overall, I wholeheartedly support the proposal.
 I don't really see the benefit of allowing starting with a combinator. I
 think it's a rare case that you actually care about the scope element and in
 those cases, using :scope is fine. Instead of element.findAll("> div >
 .thinger"), you use element.findAll(":scope > div > .thinger"). That said, I
 don't object to considering the :scope implied if the selector starts with a
 combinator.

Right, I think the argument for allowing a combinator start is two-fold:

1.) the libraries allow it, so should DOM
2.) we know the thing on the left, it's the implicit scope. Shorter is
better, so allowing the implicitness here is a win on that basis

I have a mild preference for argument #2. Shorter, without loss of
clarity, for common stuff should nearly always win.

 On Tue, Oct 18, 2011 at 6:15 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 10/18/11 7:38 PM, Alex Russell wrote:

  The resolution I think is most natural is to split on ","

 That fails with :any, with the expanded :not syntax, on attr selectors,
 etc.

 You can split on ',' while observing proper paren and quote nesting, but
 that can get pretty complicated.

 Can we define it as a sequence of selectors and be done with it? That way it
 can be defined as using the same parsing as CSS.


 A minor point is how the
 items in the returned flattened list are ordered (document order? the
 natural result of concat()?).

 Document order.

 Definitely.

 -Boris







Re: QSA, the problem with :scope, and naming

2011-10-19 Thread Alex Russell
On Wed, Oct 19, 2011 at 9:29 AM, Anne van Kesteren ann...@opera.com wrote:
 On Wed, 19 Oct 2011 17:22:46 +0900, Alex Russell slightly...@google.com
 wrote:

 Yehuda is representing jQuery. I'll take his opinion as the global
 view unless he chooses to say he's representing a personal opinion.

 You misunderstand. Boris is contrasting with CSS. Selectors are used in more
 than just querySelectorAll() and their usage differs wildly.

Sure, of course, but suggesting that the optimizations for both need
to be the same is also a strange place to start the discussion from.
The QSA or find() implementation *should* differ to the extent that it
provides developer value and is a real-world bottleneck.



Re: QSA, the problem with :scope, and naming

2011-10-19 Thread Alex Russell
On Wed, Oct 19, 2011 at 1:54 PM, Lachlan Hunt lachlan.h...@lachy.id.au wrote:
 On 2011-10-18 18:42, Alex Russell wrote:

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names.

 I know the names suck.  The names we ended up with certainly weren't the
 first choice of names we were going for, but sadly ended up with after a
 long drawn out naming debate and a misguided consensus poll to override what
 should have been an editorial decision.  So, if we do introduce new methods,
 personally I'd be happy to use sensible names for any of them, if the rest of
 the group will allow it this time.

It should *still* be an editorial decision. Shorter is better. This is
well-trod ground. We have plenty of evidence for what JS devs really
want. Lets get on with it.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

    element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from.

 The current design is capable of handling many more use cases than the
 single use case that you are trying to optimise for here.

That's OK. I'm not stoning the current design. See below. I'm
suggesting we build on it and provide the API people are making heavy
use of today. This cow path deserves not just paving, but
streetlights, wide shoulders, and a bike lane.

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

   HTMLDocument.prototype.find =
   HTMLElement.prototype.find = function(rootedSelector) {
     return this.querySelector(":scope " + rootedSelector);
   }

   HTMLDocument.prototype.findAll =
   HTMLElement.prototype.findAll = function(rootedSelector) {
     return this.querySelectorAll(":scope " + rootedSelector);
   }

 This is an incomplete way of dealing with the problem, as it doesn't
 correctly handle comma separated lists of selectors, so the parsing problem
 cannot be as trivial as prepending ":scope ".  It would also give a strange
 result if the author passed an empty string

  findAll("");

  ":scope " + "" = ":scope ", meaning it would return the scope element itself.

Yes, yes. It's pseudo-code. I deliberately kept the code I posted from
handling obvious corner cases to avoid posting eye-watering walls of
code. Happy to draft a longer/more-complete straw-man, but nobody's
*actually* going to implement it this way in any case. As an aside,
it's shocking how nit-picky and anti-collaborative this group is.
*sigh*

 In another email, you wrote:

 The resolution I think is most natural is to split on "," and assume
 that all selectors in the list are :scope prefixed.

 Simple string processing to split on "," is also ineffective as it doesn't
 correctly deal with commas within functional notation pseudo-classes,
 attribute selectors, etc.

See, again, subsequent follow-ups.

 I have attempted to address this problem before and the algorithm for
 parsing a *scoped selector string* (basically what you're calling a
 rootedSelector) existed in an old draft [1].

 That draft also allowed the flexibility of including an explicit :scope
 pseudo-class in the selector, which allows for conditional expressions to be
 built into the selector itself that can be used to check the state of the
 scope element or any of its ancestors.

We could accommodate that by looking at the passed selector and trying
to determine if it includes a :scope term. If so, avoid prefixing.
That'd allow this sort of flexibility for folks who want to write
things out long-hand or target the scope root in the selector,
possibly returning itself. I'd also support a resolution for this
sort of power-tool that forces people to use document.qsa(...,
scopeEl) to get at that sort of thing.
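
As a sketch, a naive shim for that behavior (the \b test is
deliberately simple-minded: it would also match ":scope" inside quoted
attribute values, so a real implementation would parse the selector
instead):

  HTMLElement.prototype.findAll = function(rootedSelector) {
    // Explicit :scope term? Use the selector as-is. Otherwise, root it.
    var selector = /:scope\b/.test(rootedSelector) ?
        rootedSelector :
        ":scope " + rootedSelector;
    return this.querySelectorAll(selector);
  };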

 (But that draft isn't perfect.  It has a few known bugs in the definition,
 including one that would also make it return the context node itself under
 certain circumstances where an explicit :scope selector is used.)

 [1]
 http://dev.w3.org/cvsweb/~checkout~/2006/webapi/selectors-api2/Overview.html?rev=1.29;content-type=text%2Fhtml#processing-selectors



QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
Lachlan and I have been having an...um...*spirited* twitter discussion
regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
and :scope. He asked me to continue here, so I'll try to keep it
short:

The rooted forms of querySelector and querySelectorAll are mis-designed.

Discussions about a Scoped variant or :scope pseudo tacitly
acknowledge this, and the JS libraries are proof in their own right:
no major JS library exposes the QSA semantic, instead choosing to
implement a rooted search.

Related and equally important, that querySelector and querySelectorAll
are often referred to by the abbreviation QSA suggests that its name
is bloated and improved versions should have shorter names. APIs gain
use both through naming and through use. On today's internet -- the
one where 50% of all websites include jQuery -- you could even go with
element.$(selector) and everyone would know what you mean: it's
clearly a search rooted at the element on the left-hand side of the
dot.

Ceteris paribus, shorter is better. When there's a tie that needs to
be broken, the more frequently used the API, the shorter the name it
deserves -- i.e., the larger the component of its meaning it will gain
through use and repetition and not naming and documentation.

I know some on this list might disagree, but all of the above is
incredibly non-controversial today. Even if there may have been
debates about scoping or naming when QSA was originally designed,
history has settled them. And QSA lost on both counts.

I therefore believe that this group's current design for scoped
selection could be improved significantly. If I understand the latest
draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

Both the name and the need to specify :scope are punitive to
readers and writers of this code. The selector is *obviously*
happening in relationship to element somehow. The only sane
relationship (from a modern JS hacker's perspective) is that it's
where our selector starts from. I'd like to instead propose that we
shorten all of this up and kill both birds by introducing a new API
pair, find and findAll, that are rooted as JS devs expect. The
above becomes:

   element.findAll("> div > .thinger");

Out come the knives! You can't start a selector with a combinator!

Ah, but we don't need to care what CSS thinks of our DOM-only API. We
can live and let live by building on :scope and specifying find* as
syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

Of course, :scope in this case is just a special case of the ID
rooting hack, but if we're going to have it, we can kill both birds
with it.

Obvious follow up questions:

Q.) Why do we need this at all? Don't the toolkits already just do
this internally?
A.) Are you saying everyone, everywhere, all the time should need to
use a toolkit to get sane behavior from the DOM? If so, what are we
doing here, exactly?

Q.) Shorter names? Those are for weaklings!
A.) And humans. Who still constitute most of our developers. Won't
someone please think of the humans?

Q.) You're just duplicating things!
A.) If you ignore all of the things that are different, then that's
true. If not, well, then no. This is a change. And a good one for the
reasons listed above.

Thoughts?



Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Tue, Oct 18, 2011 at 5:42 PM, Alex Russell slightly...@google.com wrote:
 Lachlan and I have been having an...um...*spirited* twitter discussion
 regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
 and :scope. He asked me to continue here, so I'll try to keep it
 short:

 The rooted forms of querySelector and querySelectorAll are mis-designed.

 Discussions about a Scoped variant or :scope pseudo tacitly
 acknowledge this, and the JS libraries are proof in their own right:
 no major JS library exposes the QSA semantic, instead choosing to
 implement a rooted search.

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names. APIs gain
 use both through naming and through use.

Sorry, this should say "meaning". APIs gain *meaning* through both use
and naming.

 On today's internet -- the
 one where 50% of all websites include jQuery -- you could even go with
 element.$(selector) and everyone would know what you mean: it's
 clearly a search rooted at the element on the left-hand side of the
 dot.

 Ceteris paribus, shorter is better. When there's a tie that needs to
 be broken, the more frequently used the API, the shorter the name it
 deserves -- i.e., the larger the component of its meaning it will gain
 through use and repetition and not naming and documentation.

 I know some on this list might disagree, but all of the above is
 incredibly non-controversial today. Even if there may have been
 debates about scoping or naming when QSA was originally designed,
 history has settled them. And QSA lost on both counts.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from. I'd like to instead propose that we
 shorten all of this up and kill both birds by introducing a new API
 pair, find and findAll, that are rooted as JS devs expect. The
 above becomes:

   element.findAll("> div > .thinger");

 Out come the knives! You can't start a selector with a combinator!

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 Of course, :scope in this case is just a special case of the ID
 rooting hack, but if we're going to have it, we can kill both birds
 with it.

 Obvious follow up questions:

 Q.) Why do we need this at all? Don't the toolkits already just do
 this internally?
 A.) Are you saying everyone, everywhere, all the time should need to
 use a toolkit to get sane behavior from the DOM? If so, what are we
 doing here, exactly?

 Q.) Shorter names? Those are for weaklings!
 A.) And humans. Who still constitute most of our developers. Won't
 someone please think of the humans?

 Q.) You're just duplicating things!
 A.) If you ignore all of the things that are different, then that's
 true. If not, well, then no. This is a change. And a good one for the
 reasons listed above.

 Thoughts?




Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Tue, Oct 18, 2011 at 6:00 PM, Erik Arvidsson a...@chromium.org wrote:
 On Tue, Oct 18, 2011 at 09:42, Alex Russell slightly...@google.com wrote:
 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 I like the way you think. Can I subscribe to your mailing list?

Heh. Yes ;-)

 One thing to point out with the desugar is that it has a bug, and most
 JS libs have the same bug: querySelectorAll allows multiple selectors,
 separated by a comma, and to do this correctly you need to parse the
 selector, which of course requires tons of code, so no one does this.
 Let's fix that by building this into the platform.

I agree. I should have mentioned it. The resolution I think is
most natural is to split on "," and assume that all selectors in the
list are :scope prefixed. A minor point is how the items in the
returned flattened list are ordered (document order? the natural
result of concat()?).



Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
Hi Matt,

On Tue, Oct 18, 2011 at 6:25 PM, Matt Shulman mat...@google.com wrote:
 I think the query selector functionality is important enough that one
 could easily justify adding additional APIs to make this work
 better/faster, even if they overlap with existing APIs.  But, it would
 be unfortunate if more APIs were added to the DOM and libraries still
 weren't able to use them because the semantics didn't end up being
 quite right.
 It seems like the right approach would be to take jquery and rewrite
 it to use this new API and then see empirically whether it gives the
 same selection behavior as before and see how much of a performance or
 simplicity gain there is after doing this.

No need to wait. We had something nearly identical for this in Dojo
using an ID prefix hack. It looked something like this:

(function(){
    var ctr = 0;
    query = function(query, root){
        root = root || document;
        var rootIsDoc = (root.nodeType == 9);
        var doc = rootIsDoc ? root :
                (root.ownerDocument || document);

        if(!rootIsDoc || (">~+".indexOf(query.charAt(0)) >= 0)){
            // Generate an ID prefix for the selector
            root.id = root.id || ("qUnique" + (ctr++));
            query = "#" + root.id + " " + query;
        }

        return Array.prototype.slice.call(
            doc.querySelectorAll(query)
        );
    };
})();

This is exactly the same dance that :scope does.
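
Usage, for comparison with the proposed findAll (assumes the snippet
above has run; the element id is invented):

  var root = document.getElementById("content");
  var thingers = query("> div > .thinger", root);  // rooted search, real Array back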

 (I think it's a good thing to allow selectors to start with
 combinators.  That seems very useful.)

 On Tue, Oct 18, 2011 at 9:47 AM, Alex Russell slightly...@google.com wrote:
 On Tue, Oct 18, 2011 at 5:42 PM, Alex Russell slightly...@google.com wrote:
 Lachlan and I have been having an...um...*spirited* twitter discussion
 regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
 and :scope. He asked me to continue here, so I'll try to keep it
 short:

 The rooted forms of querySelector and querySelectorAll are mis-designed.

 Discussions about a Scoped variant or :scope pseudo tacitly
 acknowledge this, and the JS libraries are proof in their own right:
 no major JS library exposes the QSA semantic, instead choosing to
 implement a rooted search.

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names. APIs gain
 use both through naming and through use.

 Sorry, this should say "meaning". APIs gain *meaning* through both use
 and naming.

 On today's internet -- the
 one where 50% of all websites include jQuery -- you could even go with
 element.$(selector) and everyone would know what you mean: it's
 clearly a search rooted at the element on the left-hand side of the
 dot.

 Ceteris paribus, shorter is better. When there's a tie that needs to
 be broken, the more frequently used the API, the shorter the name it
 deserves -- i.e., the larger the component of its meaning it will gain
 through use and repetition and not naming and documentation.

 I know some on this list might disagree, but all of the above is
 incredibly non-controversial today. Even if there may have been
 debates about scoping or naming when QSA was originally designed,
 history has settled them. And QSA lost on both counts.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from. I'd like to instead propose that we
 shorten all of this up and kill both birds by introducing a new API
 pair, find and findAll, that are rooted as JS devs expect. The
 above becomes:

   element.findAll("> div > .thinger");

 Out come the knives! You can't start a selector with a combinator!

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 Of course, :scope in this case is just a special case

Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Tue, Oct 18, 2011 at 8:59 PM, Brian Kardell bkard...@gmail.com wrote:
 I know that there were discussions that crossed over into CSS about a
 @global or a :context which could sort of include things outside the
 scope as part of the query but not be the subject.  Does any of that
 relate here?

I suppose it does, but only as an implementation detail. Nothing more
than the ID prefix hack or :scope is really necessary to get the
API we need.

 PS
 Out come the knives! You can't start a selector with a combinator!
 Even on CSS lists this has been proposed inside of pseudos... Numerous
 times and in numerous contexts.   It seems to me that everyone (even
 the people who disagree with the proposal) knows what it means
 immediately - but you are right... That's always the response.  So at
 the risk of being stabbed by an angry mob:  Can someone explain _why_
 you can't - under absolutely any circumstances - begin a selector with
 a combinator - even if there appears to be wide agreement that it
 makes sense in a finite set of circumstances?



 On Tue, Oct 18, 2011 at 12:42 PM, Alex Russell slightly...@google.com wrote:
 Lachlan and I have been having an...um...*spirited* twitter discussion
 regarding querySelectorAll, the (deceased?) queryScopedSelectorAll,
 and :scope. He asked me to continue here, so I'll try to keep it
 short:

 The rooted forms of querySelector and querySelectorAll are mis-designed.

 Discussions about a Scoped variant or :scope pseudo tacitly
 acknowledge this, and the JS libraries are proof in their own right:
 no major JS library exposes the QSA semantic, instead choosing to
 implement a rooted search.

 Related and equally important, that querySelector and querySelectorAll
 are often referred to by the abbreviation QSA suggests that its name
 is bloated and improved versions should have shorter names. APIs gain
 use both through naming and through use. On today's internet -- the
 one where 50% of all websites include jQuery -- you could even go with
 element.$(selector) and everyone would know what you mean: it's
 clearly a search rooted at the element on the left-hand side of the
 dot.

 Ceteris paribus, shorter is better. When there's a tie that needs to
 be broken, the more frequently used the API, the shorter the name it
 deserves -- i.e., the larger the component of its meaning it will gain
 through use and repetition and not naming and documentation.

 I know some on this list might disagree, but all of the above is
 incredibly non-controversial today. Even if there may have been
 debates about scoping or naming when QSA was originally designed,
 history has settled them. And QSA lost on both counts.

 I therefore believe that this group's current design for scoped
 selection could be improved significantly. If I understand the latest
 draft (http://www.w3.org/TR/selectors-api2/#the-scope-pseudo-class)
 correctly, a scoped search for multiple elements would be written as:

   element.querySelectorAll(":scope > div > .thinger");

 Both the name and the need to specify :scope are punitive to
 readers and writers of this code. The selector is *obviously*
 happening in relationship to element somehow. The only sane
 relationship (from a modern JS hacker's perspective) is that it's
 where our selector starts from. I'd like to instead propose that we
 shorten all of this up and kill both birds by introducing a new API
 pair, find and findAll, that are rooted as JS devs expect. The
 above becomes:

   element.findAll("> div > .thinger");

 Out come the knives! You can't start a selector with a combinator!

 Ah, but we don't need to care what CSS thinks of our DOM-only API. We
 can live and let live by building on :scope and specifying find* as
 syntactic sugar, defined as:

  HTMLDocument.prototype.find =
  HTMLElement.prototype.find = function(rootedSelector) {
    return this.querySelector(":scope " + rootedSelector);
  }

  HTMLDocument.prototype.findAll =
  HTMLElement.prototype.findAll = function(rootedSelector) {
    return this.querySelectorAll(":scope " + rootedSelector);
  }

 Of course, :scope in this case is just a special case of the ID
 rooting hack, but if we're going to have it, we can kill both birds
 with it.

 Obvious follow up questions:

 Q.) Why do we need this at all? Don't the toolkits already just do
 this internally?
 A.) Are you saying everyone, everywhere, all the time should need to
 use a toolkit to get sane behavior from the DOM? If so, what are we
 doing here, exactly?

 Q.) Shorter names? Those are for weaklings!
 A.) And humans. Who still constitute most of our developers. Won't
 someone please think of the humans?

 Q.) You're just duplicating things!
 A.) If you ignore all of the things that are different, then that's
 true. If not, well, then no. This is a change. And a good one for the
 reasons listed above.

 Thoughts?






Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Tue, Oct 18, 2011 at 9:40 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/18/11 4:20 PM, Yehuda Katz wrote:

  * Speeding up certain operations like `#foo` and `body`. There is *no
    excuse* for it being possible to implement userland hacks that
    improve on the performance of querySelectorAll.

 Sure there is.  One such excuse, for example, is that the userland hacks
 have different behavior from querySelectorAll in many cases.  Now the author
 happens to know that the difference doesn't matter in their case, but the
 _browser_ has no way to know that.

 The other excuse is that adding special cases (which is what you're asking
 for) slows down all the non-special-case codepaths.  That may be fine for
 _your_ usage of querySelectorAll, where you use it with a particular limited
 set of selectors, but it's not obvious that this is always a win.

Most browsers try to optimize what is common. Or has that fallen out
of favor while I wasn't looking?

 This may be the result of browsers failing to cache the result of parsing
 selectors

 Yep.  Browsers don't cache it.  There's generally no reason to.  I have yet
 to see any real-life testcase bottlenecked on this part of querySelectorAll
 performance.

    or something else, but the fact remains that qSA can be noticeably
    slower than the old DOM methods, even when Sizzle needs to parse the
    selector to look for fast-paths.

 I'd love to see testcases showing this.

 jQuery also handles certain custom pseudoselectors, and it might be nice
 if it was possible to register JavaScript functions that qSA would use
 if it found an unknown pseudo

  This is _very_ hard to do reasonably unless the browser can trust those
 functions to not do anything weird.  Which of course it can't.  So your
 options are either much slower selector matching or not having this. Your
 pick.

 -Boris





Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Wed, Oct 19, 2011 at 12:45 AM, Brian Kardell bkard...@gmail.com wrote:
 Some pseudos can contain selector groups, so it would be more than just
 split on comma.

Yes, yes, of course. I've written one of these parsers. Just saying
that the impl would split selector groups and prefix them all with
":scope ".
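
A minimal sketch of that (hypothetical helper: it splits on top-level
commas only, tracking paren/bracket nesting and quotes, though it
ignores backslash escapes):

  function scopeSelectorList(selectorList) {
    var parts = [], depth = 0, quote = null, start = 0;
    for (var i = 0; i < selectorList.length; i++) {
      var c = selectorList.charAt(i);
      if (quote) {
        if (c === quote) quote = null;             // closing quote
      } else if (c === '"' || c === "'") {
        quote = c;                                 // opening quote
      } else if (c === "(" || c === "[") {
        depth++;
      } else if (c === ")" || c === "]") {
        depth--;
      } else if (c === "," && depth === 0) {
        parts.push(selectorList.slice(start, i));  // top-level comma: split here
        start = i + 1;
      }
    }
    parts.push(selectorList.slice(start));
    return parts.map(function(s) { return ":scope " + s.trim(); }).join(", ");
  }

  // scopeSelectorList("> div, span:not(.x, .y)")
  //   => ":scope > div, :scope span:not(.x, .y)"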

 On Oct 18, 2011 7:40 PM, Alex Russell slightly...@google.com wrote:

 On Tue, Oct 18, 2011 at 6:00 PM, Erik Arvidsson a...@chromium.org wrote:
  On Tue, Oct 18, 2011 at 09:42, Alex Russell slightly...@google.com
  wrote:
  Ah, but we don't need to care what CSS thinks of our DOM-only API. We
  can live and let live by building on :scope and specifying find* as
  syntactic sugar, defined as:
 
   HTMLDocument.prototype.find =
   HTMLElement.prototype.find = function(rootedSelector) {
     return this.querySelector(":scope " + rootedSelector);
   }

   HTMLDocument.prototype.findAll =
   HTMLElement.prototype.findAll = function(rootedSelector) {
     return this.querySelectorAll(":scope " + rootedSelector);
   }
 
  I like the way you think. Can I subscribe to your mailing list?

 Heh. Yes ;-)

  One thing to point out with the desugar is that it has a bug, and most
  JS libs have the same bug: querySelectorAll allows multiple selectors,
  separated by a comma, and to do this correctly you need to parse the
  selector, which of course requires tons of code, so no one does this.
  Let's fix that by building this into the platform.

 I agree. I should have mentioned it. The resolution I think is
 most natural is to split on "," and assume that all selectors in the
 list are :scope prefixed. A minor point is how the items in the
 returned flattened list are ordered (document order? the natural
 result of concat()?).





Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Alex Russell
On Wed, Oct 19, 2011 at 12:46 AM, Sean Hogan shogu...@westnet.com.au wrote:
 On 19/10/11 7:20 AM, Yehuda Katz wrote:

 I agree entirely.

 I have asked a number of practitioner friends about this scenario:

 <div id="parent">
   <p id="child"><span id="inlineContent"></span></p>
 </div>

  document.getElementById("child").querySelectorAll("div span"); // returns
 #inlineContent

 In 100% of cases, people consider this behavior *broken*. Not just
 "interesting, I wouldn't have expected that", but "who came up with that!?".
 In all cases involving JavaScript practitioners, people expect
 querySelectorAll to operate on the element as though the element was the
 root of a new document, and where combinators are relative to the element.


 It matches the definition of CSS selectors, so I don't think it can be
 called broken. For this case, node.querySelectorAll("div span") finds all
 spans (in document order) which are contained within the invoking node and
 checks that they match the selector expression, in this case simply checking
 they are a descendant of a div.

 The new definition being promoted is:
 - start at the containing node
 - find all descendant divs
 - for every div, find all descendant spans
 - with the list of spans, remove duplicates and place in document order

 Once you understand the proper definition it is hard to see this new
 definition as more logical.
 To me, the problem here is some (not all) Javascript practitioners not
 learning the proper definition of CSS selectors.

I'm just going to assume you're trolling and not respond to anything
else you post here.

 We already knew this was true since all JavaScript libraries that
 implement selectors implemented them in this way.


 To me, this indicates that there's no problem here. If you want to use an
 alternative definition of selectors then you use a JS lib that supports
 them. If you want to use the DOM API then you learn how CSS selectors work.

 I don't see JS libs ever calling the browser's querySelectorAll (or even a
 new findAll) without parsing the selector string first because:
 - JS libs support selectors that haven't been implemented on all browsers
 - JS libs support selectors that are never going to be part of the standard

 Since JS libs will always parse selector strings and call qSA, etc as
 appropriate, I can't see much benefit in creating DOM methods that accept
 non-standard selector strings.

 Sean





Re: XBL2 is dead.

2011-10-06 Thread Alex Russell
On Mon, Sep 26, 2011 at 8:28 AM, Anne van Kesteren ann...@opera.com wrote:
 On Thu, 22 Sep 2011 20:30:24 +0200, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 Further, instead of packaging Web Components into one omnibus
 offering, we will likely end up with several free-standing specs or
 spec addendums:

 1) Shadow DOM, the largest bag of XBL2's donated organs --
 probably its own spec;
 2) Constructible and extensible DOM objects, which should probably
 just be part of DOM Core and HTML;
 3) Declarative syntax for gluing the first 2 parts together -- HTML
 spec seems like a good fit; and
 4) Confinement primitives, which is platformization of the lessons
 learned from Caja (http://code.google.com/p/google-caja/), integrated
 with element registration.

 It's still not very clear to me what any of this means and how it will fit
 together.

While Dimitri works on the wiki version (pending his vacation), let me
lay them out in a slightly different order (2, 3, 1, 4):

 - Today's DOM is actively hostile to idiomatic use in JavaScript. The
current WebIDL draft fixes some of this (yay for real prototypes!) but
not all. What we're suggesting is that, at least for HTML, we should
close the circuit on this as a matter of hygiene if nothing else.
Practically speaking, that means: giving HTML element types *real*
constructors (e.g. today's new Image(), not just create* factories),
allowing them to be subclassed in the same idiomatic way everything
else in JS can, and giving them meaningful prototypes (handled by
WebIDL). Combined, these give us a way to think about building new
elements but without any connection to markup. They're just new JS
types that just happen to be DOM nodes. The fact that we think of them
differently today is *A BUG*, and one that we can fix. Best of all,
this is exactly the sort of thing that UI libraries like JQuery UI,
Dojo, Closure, YUI, etc, etc. do all day long but without
infrastructure to *really* participate in DOM.

 - Declarative syntax is sugar that makes all this programmatic stuff
amenable to tooling and web developers who are more comfortable with
HTML than JS

 - Once we've got custom element types, it sure would be handy to be
able to hide away your UI implementation. Shadow DOM, a concept
cribbed from XBL and friends, can provide this. Once you can have a
scriptable shadow which hides its elements away from regular
traversal, your element's API becomes more useful since your guts
aren't spilling out for the world to view. We've already refactored
many WebKit internal element implementations to use Shadow DOM to
great effect, so the value is clear. Exposing it to content authors is
the next obvious step.

  - Describing all of what happens above in terms of the fewest number
of primitive APIs keeps us honest. Small, orthogonal APIs instead of
one monolithic thing help us drive consistency through the platform.
The less that's described as spec magic, the more we have to lean on
the generative composition of things web developers already know. For
instance, being able to subclass plain old JS types from DOM makes
it possible to define all of this stuff as though you'd just written
out something like:

   function MyElementType(attrs) {
     // superclass ctor call, needed for mixin properties
     HTMLElement.call(this);
     // not magic, just new-ing up this element's shadow root
     this.shadow = new ShadowRoot(this);
     // custom ctor behavior here
   }
   // delegate to the plain-old prototype chain.
   MyElementType.prototype = Object.create(HTMLElement.prototype, { ... });

This might not look right to a spec author's eyes, but trust me, this
is how idiomatic JS subclassing of DOM *should* look. A version of
this component model built out of small primitives allows us to make
DOM work with its environment, not against it, a principle that I
think should be a primary goal in all of our designs.
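
To make that concrete, a hypothetical usage sketch in the world this
proposal describes (Element.register is the registration API floated
elsewhere in this thread; none of this runs in today's browsers):

  Element.register("x-my-element", MyElementType);
  var el = new MyElementType();       // a real constructor...
  document.body.appendChild(el);      // ...yielding a real DOM node
  alert(el instanceof HTMLElement);   // true: plain JS subclassing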

 Having either a specification or examples to shoot at would be
 helpful. Once it is more clear what each of these parts is going to look
 like, it might be easier for me to comment on how you suggest we split them.


 Why split it like this? Several reasons:

 a) they are independently moving parts. For example, just shadow DOM,
 all by itself, is already a useful tool in the hands of Web
 developers. It's our job as spec developers to ensure that these bits
 comprise a coherent whole, but from implementation perspective, they
 don't need to block one another.

 How do you construct a shadow DOM declaratively, though, without a component?


 b) each belongs in the right place. For example, making DOM objects
 extensible is a concern inside of the DOM Core spec. Declarative
 syntax really needs to live in HTML. Also...

 c) some parts are too small to be their own spec.
 Constructible/extensible DOM objects bit does not even have an API
 surface.

 d) And finally, every bit has potential of solving problems that are
 more general than just about components. We shouldn't 

Re: HTML element content models vs. components

2011-10-03 Thread Alex Russell
+1

What Charles said = )

On Wed, Sep 28, 2011 at 5:22 PM, Charles Pritchard ch...@jumis.com wrote:

 On 9/27/2011 11:39 PM, Roland Steiner wrote:

 Expanding on the general web component discussion, one area that hasn't
 been touched on AFAIK is how components fit within the content model of HTML
 elements.
 Take for example a list (http://www.whatwg.org/specs/web-apps/current-work/multipage/grouping-content.html#the-ul-element):


 <ol> and <ul> have "Zero or more <li> elements" as content model, while
 <li> is specified to only be usable within <ol>, <ul> and <menu>.

 Now it is not inconceivable that someone would like to create a component
 <x-li> that acts as a list item, but expands on it. In order to allow this,
 the content model for <ol>, <ul>, <menu> would need to be changed to
 accommodate this. I can see this happening in a few ways:


 A.) allow elements derived from a certain element to always take their
 place within element content models.

 In this case, only components whose host element is derived from <li>
 would be allowed within <ol>, <ul>, <menu>, whether or not it is rendered
 (q.v. the "Should the shadow host element be rendered?" thread on this ML).


 B.) allow all components within all elements.

 While quite broad, this may be necessary in case the host element isn't
 rendered and perhaps derivation isn't used. Presumably the shadow DOM in
 this case contains one - or even several - <li> elements as topmost elements
 in the tree.


 C.) Just don't allow components to be used in places that have a special
 content model.


 Thoughts?


 Consider the CSS content model: we can easily override the model of various
 tags.
 Then consider ARIA role types, where we can easily override the semantics
 of various tags.

 I'm a big fan of using appropriate tag names, but I'm not convinced that
 HTML should restrict CSS or ARIA.
 The HTML5 editor has repeatedly tried to enforce option C, restricting
 components in the DOM tree in relation to ARIA and HTML Canvas.

 Why bother over-specifying? Why remove that flexibility?

 HTML tag names are fantastic, I'm not saying lets just toss HTML, but I
 don't think HTML is the top of the hierarchy.
 We have ARIA for semantics, CSS for display and DOM for serialization.


 -Charles





Re: RfC: Last Call Working Draft of Web IDL; deadline October 18

2011-09-28 Thread Alex Russell
I would, again, like to bring up the issue of non-constructable
constructors as the default in WebIDL. It is onerous to down-stream
authors to leave such a foot-gun in the spec if they're *expected* to
provide constructors for most classes (and this is JS we're talking
about, so they are) and it is hostile to web developers to implicitly
encourage this sort of brokenness with regards to the target language.

None of the arguments presented for non-constructable-ctors as the
default have substantively addressed WebIDL's responsibility to either
JS or to other spec authors, instead fobbing the requirements back
onto them.

Regards

On Tue, Sep 27, 2011 at 12:56 PM, Arthur Barstow art.bars...@nokia.com wrote:
 On September 27 a Last Call Working Draft of Web IDL was published:

  http://www.w3.org/TR/2011/WD-WebIDL-20110927/

 The deadline for comments is October 18 and all comments should be sent to:

 public-script-co...@w3.org

 The comment tracking doc for the previous LC is:

  http://dev.w3.org/2006/webapi/WebIDL/lc1.txt

 Cameron, Philippe - if you think it is necessary, please fwd this e-mail to
 ECMA TC39.

 -AB





Re: [DOM4] Remove Node.isSameNode

2011-09-16 Thread Alex Russell
On Fri, Sep 9, 2011 at 6:38 PM, Sean Hogan shogu...@westnet.com.au wrote:
 On 10/09/11 11:00 AM, Jonas Sicking wrote:

 On Fri, Sep 9, 2011 at 2:27 PM, Sean Hoganshogu...@westnet.com.au
  wrote:

 On 10/09/11 3:21 AM, Jonas Sicking wrote:

 It's a completely useless function. It just implements the equality
 operator. I believe most languages have a equality operator already.
 Except Brainfuck [1]. But the DOM isn't implementable in Brainfuck
 anyway as it doesn't have objects, so I'm ok with that.

 [1] http://en.wikipedia.org/wiki/Brainfuck

 If a DOM implementation returns node-wrappers instead of exposing the
 actual nodes then you could end up with different node-refs for the same
 node. I'm not sure whether that violates other requirements of the spec.

 I would expect that to violate the DOM spec. I.e. I would say that if
 an implementation returned true for

 someNode.firstChild != someNode.firstChild

 then I would say that that shouldn't be allowed by the DOM.

 / Jonas

 The other scenario I can think of is casting. What if I want an object that
 only implements the Element interface of an element, even if it is a
 HTMLInputElement? The two objects will not be equal, but will represent the
 same node. I imagine that was the motivation for initially including the
 method.

JS doesn't have casting. At a minimum it should be removed from JS bindings.

 Having said that, if no-one is using it then it is completely useless.



Re: HTMLElement.register--giving components tag names

2011-09-06 Thread Alex Russell
On Sat, Sep 3, 2011 at 8:20 PM, Ian Hickson i...@hixie.ch wrote:
 On Sat, 3 Sep 2011, Dominic Cooney wrote:
 
  I think the XBL approach is far superior here -- have authors use
  existing elements, and use XBL to augment them. For example, if you
  want the user to select a country from a map, you can use a select
  with a list of countries in option elements in the markup, but then
  use CSS/XBL to bind that select to a component that instead makes
  the select look like a map, with all the interactivity that implies.

 That sounds appealing, but it looks really hard to implement from where
 we are right now.

 I don't think "it's hard" is a good reason to adopt an inferior solution,

Likewise, intimating that something is better because it's hard is a
distraction.

 especially given that this is something that will dramatically impact the
 Web for decades to come.

The more complex the thing, the more we're saddled with. XBL(2) is
more complex than the proposed model. It likewise needs to be
justified all the more.

 XBL already has multiple implementations in various forms. I certainly
 agree that we should adjust XBL2 to take into account lessons we have
 learnt over the past five years, such as dropping namespaces and merging
 it into HTML instead of forcing an XML language on authors, but taking a
 significantly less capable solution simply because XBL is difficult seems
 like a very poor trade-off.

It *may* be capable of handling the use-cases in question, but that
case hasn't been made, and from where I sit, it's not easy or trivial
to do by inspection.

Regards



Re: Custom tags over wire, was Re: HTMLElement.register--giving components tag names

2011-09-02 Thread Alex Russell
Since Dimitri has already said everything I would, and better, I just
want to very quickly second his point about where we are today vs.
where we fear we might be: non-trivial apps have *already* given up on
HTML. Suggesting that there's an un-semantic future that will be
*caused* by the component model is to fight a battle that's already
lost.

The only question we need to answer now is: how do we repair the situation?

In spending the last 9 months thinking and working through the issues
Dimitri presents below, our strongest theory now is that there *is* a
market for semantics, that it *does* organize around winners (e.g.,
Microformats), and that we're missing a mechanism for more directly
allowing authors to express their intent at app construction time in
ways that don't either pull them fully out of markup/html (see:
Closure, GWT, etc.).

Instead of imagining total anarchy, imagine a world where something
like jquery-for-WebComponents comes along: a winner toolkit that a
statistically significant fraction of the web uses. Once that intent
is in markup and not code, it helps us set the agenda for the next
round of HTML's evolution.

Think of Web Components and custom elements not as (marginally) a way
to defeat HTML's semantics, but as a way for developers to get back in
touch with markup and a path for HTML's evolution that paves a
sustainable path.

The toolkits and frameworks of today are the TODO list for the current
round of HTML's evolution, and Web Components give us a better, *more*
semantic, lower-friction way to evolve in the future.

On Fri, Sep 2, 2011 at 11:47 AM, Dimitri Glazkov dglaz...@chromium.org wrote:
 On Fri, Sep 2, 2011 at 2:30 AM, Anne van Kesteren ann...@opera.com wrote:
 On Wed, 31 Aug 2011 19:29:28 +0200, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 To put it differently, you want to start with a well-known element in
 markup, and, through the magic of computing, this element _becomes_
 your component in the DOM tree. In other words, the markup:

 <button becomes="x-awesome-button">Weee!!</button>

 Becomes:

 <x-awesome-button>Weee!!</x-awesome-button>

 This does not work for assistive technology. That is, you would still have
 to completely implement the button element from scratch, including all its
 semantics such as keyboard accessibility, etc.

 Ah, thanks Anne! I do indeed need to enumerate...

 Fear 6: Accessibility. Accessibility! Accessibility!?

 I contend that the Component Model does not make accessibility any
 worse. And likely the opposite.

 By allowing ATs to traverse into shadow subtrees, and ensuring that
 the shadow subtrees are well-behaving accessibility citizens, you
 allow authors of components to encapsulate good practices and aid in
 killing the "re-created poorly" anti-pattern. That's what Sencha,
 SproutCore, Dijit all try to do -- and the Component Model will enable
 them do this right. In fact, things like access keys or even z-index
 are quite hard (impossible) to get right, unless you have something
 like a well-functioning shadow DOM.

 This leaves us with the argument of replacing semantics. Since we're
 in business of sub-typing HTML elements, we don't necessarily need to
 forego their semantics:

 // ...
 var AwesomeButton = HTMLButtonElement.extend(awesomeButtonInitializerBag);
 Element.register('x-awesome-button', AwesomeButton);
 // ...

 should give you a thing that behaves like a button, with the awesome
 behavior added.
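 For instance (usage sketch; Element.register() is the API proposed in
 this thread, and the sub-typing result is an assumption about how it
 would surface):

 var b = document.createElement('x-awesome-button');
 // Presumably b instanceof HTMLButtonElement === true, so keyboard
 // handling, focus, and AT exposure come along for free.
 document.body.appendChild(b);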

 In the situations where existing semantics are representative, but
 deficient, you are much better off replacing them anyway:

 <button becomes="x-plus-one-button">+1</button>


 What we need is not a becomes="" attribute (that renames an element and
 therefore forgoes its semantics) but rather a way to get complete control
 over a semantic element and tweak aspects of it. Otherwise creating such
 controls is prohibitively expensive and only useful if you have vast
 resources.

 I would argue that replacing is exactly the right thing to do. You are
 changing an element from having some basic meaning to a more specific
 meaning. Replacement seems natural and matches what authors do today.


 Examples of elements that should not be replaced but could be changed by a
 binding: Having a "sortable" binding for <table>; Exposing cite="" on
 <blockquote>; Turning a <select> listing countries into a map.

 Great! Let's go through them:

 * Sortable binding for a table is really just a <table> subclass with
 some event listeners registered (see the sketch at the end of this message).

 * Exposing cite="" on <blockquote> sounds like something CSS should do.
 There's no extra behavior, and you're not really creating a new type
 of element. It's just extra boxes.

 * Turning a <select> listing countries into a map -- composition to the rescue!:

 <x-country-map>
   <select>
      <option>Lilliput
      <option>Blefuscu
   </select>
 </x-country-map>

 From the author's perspective, you don't actually need the <select>
 element. If you intend to show a map on cool browsers and <select> on
 the less cool ones, you are 
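 A sketch of the sortable-table idea from the first bullet above
 (Element.register() is the API proposed in this thread; the shape of
 the initializer bag is a guess):

 var SortableTable = HTMLTableElement.extend({
   init: function() {
     var table = this;
     // Clicking a header cell re-sorts the rows -- plain subclass
     // behavior layered on intact <table> semantics.
     table.addEventListener('click', function(e) {
       if (e.target.tagName === 'TH') {
         /* sort rows by e.target's column */
       }
     });
   }
 });
 Element.register('x-sortable-table', SortableTable);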

Re: Custom tags over wire, was Re: HTMLElement.register--giving components tag names

2011-09-02 Thread Alex Russell
On Fri, Sep 2, 2011 at 3:58 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/2/11 3:00 PM, Alex Russell wrote:

 On Fri, Sep 2, 2011 at 1:40 PM, Charles Pritchardch...@jumis.com  wrote:


 On 9/2/11 12:10 PM, Alex Russell wrote:


 Since Dimitri has already said everything I would, and better, I just
 want to very quickly second his point about where we are today vs.
 where we fear we might be: non-trivial apps have *already* given up on
 HTML. Suggesting that there's an un-semantic future that will be
 *caused* by the component model is to fight a battle that's already
 lost.

 The only question we need to answer now is: how do we repair the
 situation?

 In spending the last 9 months thinking and working through the issues
 Dimitri presents below, our strongest theory now is that there *is* a
 market for semantics, that it *does* organize around winners (e.g.,
 Microformats), and that we're missing a mechanism for more directly
 allowing authors to express their intent at app construction time in
 ways that don't either pull them fully out of markup/html (see:
 Closure, GWT, etc.).

 Instead of imagining total anarchy, imagine a world where something
 like jquery-for-WebComponents comes along: a winner toolkit that a
 statistically significant fraction of the web uses. Once that intent
 is in markup and not code, it helps us set the agenda for the next
 round of HTML's evolution.



 Alex, Dimitri:

 1.
 I've found ARIA to be an appropriate microformat for new components.
 That is what it was designed for, after all.


 ARIA is how we envision components developed in this world will
 communicate with assistive technology. Nothing in Web Components is
 designed to supplant or replace it.


 I suggest looking at ARIA as more than a method for communicating with
 assistive technology.
 It's a means for communicating UI component states.

And Web Components is designed explicitly to work with it.

 Similarly, WCAG is a series of principles for designing usable, high quality
 applications.

 ARIA presents a set of semantic roles that don't exist in HTML, and
 for those, alignment with custom element implementations is
 outstanding. Components that express those semantics can use ARIA to
 help communicate what they mean, taking the burden off of the users
 of the components to understand and manage the role/state groups.


 When working with accessibility, it's a super-set of HTML:
  img { role: 'img'; }
  img[alt=""] { role: 'presentation'; }


 Yes, I'd like to see Components express aligned semantics, such as
 button:aria-pressed,
 in the shadow DOM. It's the same method we use with the Canvas subtree.

Pseudo-classes as a state mechanism need a lot more examination,
AFAICT. It's a non-extensible vocabulary, it doesn't appear to be
scriptable, and as a result, we haven't been able to use them to
handle things like states for animations (which should be transitions
between states in a node-graph).

+shans

 How should ATs be notified/detect that there is a shadow DOM?

Focus works as it always has. It can move inside the shadow.

 I'd imagine the accessibility tree simply contains the appropriate data,
 but for ATs using simple DOM, and getAttribute, should they now check for
 a .shadow attribute on elements in addition to their usual heuristics?

That'll work for components with a public shadow, which I think will
be most of them. Components can also note state by setting role/state
directly on the outer component.
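For example (a sketch; the element name is made up), a component can
reflect internal state onto its host with plain ARIA attributes that
any AT can read:

var toggle = document.querySelector('x-toggle-button');
toggle.setAttribute('role', 'button');
toggle.setAttribute('aria-pressed', 'false');
toggle.addEventListener('click', function() {
  var pressed = toggle.getAttribute('aria-pressed') === 'true';
  toggle.setAttribute('aria-pressed', String(!pressed));
});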

 data-* works for arbitrary metadata, and aria-* for UI semantics.


 Yes! And for low-complexity tasks, they're perfect. For larger,
 reusable components that need bundled behavior (Closure, JQuery UI,
 Dijit), the task of setting up and managing all of this can be made
 easier for users and developers of components by giving them a place
 to hang behavior, shadow DOM, and data model concerns from.


 We're certainly experimenting, with the Canvas tag and subtree.

The component approach turns this around, by letting you construct a
logical tree that might have canvas elements in the shadow, meaning
you don't need a hidden tree per se.

 It seems like event.preventDefault() is an important hook to keep in mind.
 Is Web Components, in some manner, calling for a registerDefault method?


 2.
 ARIA 1.0 may not be sufficient, but I do think it's been designed to be
 forward compatible, and meta compatible with HTML5.
 I can, for instance, use: role="spreadsheet grid" even though
 "spreadsheet"
 is not an ARIA 1.0 role; thus forward compatibility, and semantic
 lenience.


 Nothing we're doing reduces the utility or need for ARIA. It works
 *great* with these component types, and to the extent that we can
 align them, I'm excited by how much easier it's going to be for
 component authors, allowing them to focus on concerns like a11y
 instead of "how do I get this thing to fly in the first place?"


 I agree, I think it'll work great, and be easier.

 I'm hopeful there will be lessons to apply

Re: Mutation events replacement

2011-06-30 Thread Alex Russell
On Thu, Jun 30, 2011 at 2:11 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wednesday, June 29, 2011, Aryeh Gregor simetrical+...@gmail.com
 wrote:
  On Tue, Jun 28, 2011 at 5:24 PM, Jonas Sicking jo...@sicking.cc wrote:
  This new proposal solves both these by making all the modifications
  first, then firing all the events. Hence the implementation can
  separate implementing the mutating function from the code that sends
  out notifications.
 
  Conceptually, you simply queue all notifications in a queue as you're
  making modifications to the DOM, then right before returning from the
  function you insert a call like flushAllPendingNotifications(). This
  way you don't have to care at all about what happens when those
  notifications fire.
 
  So when exactly are these notifications going to be fired?  In
  particular, I hope non-DOM Core specifications are going to have
  precise control over when they're fired.  For instance, execCommand()
  will ideally want to do all its mutations at once and only then fire
  the notifications (which I'm told is how WebKit currently works).  How
  will this work spec-wise?  Will we have hooks to say things like
  "remove a node but don't fire the notifications yet", and then have to
  add an extra line someplace saying to fire all the notifications?
  This could be awkward in some cases.  At least personally, I often say
  things like "call insertNode(foo) on the range" in the middle of a
  long algorithm, and I don't want magic happening at that point just
  because DOM Range fires notifications before returning from
  insertNode.

 Heh. It's like spec people have to deal with the same complexities as
 implementors have had for years. Revenge at last!!

 Jokes aside. I think the way to do this is that the spec should
 introduce the concept of a compound mutating function. Functions
 like insertBefore, removeChild and the innerHTML setter should claim
 to be such functions. Any other function can also be defined to be
 such a function, such as your execCommand function.

 Whenever a mutation happens, the notifications for it are put on a
 list. Once the outermost compound mutation function exits, all
 notifications are fired.
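 (One way to picture that bookkeeping -- a sketch only, all names
 invented:)

 var depth = 0, pending = [];
 function enterCompoundMutation() { depth++; }
 function queueNotification(fire) { pending.push(fire); }
 function exitCompoundMutation() {
   if (--depth === 0) {
     var batch = pending;
     pending = [];
     batch.forEach(function(fire) { fire(); });  // flush at outermost exit
   }
 }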

  Also, even if specs have precise control, I take it the idea is
  authors won't, right?  If a library wants to implement some fancy
  feature and be compatible with users of the library firing these
  notifications, they'd really want to be able to control when
  notifications are fired, just like specs want to.  In practice, the
  only reason this isn't an issue with DOM mutation events is because
  they can say don't use them, and in fact people rarely do use them,
  but that doesn't seem ideal -- it's just saying library authors
  shouldn't bother to be robust.

 The problem is that there is no good way to do this. The only API that
 we could expose to JS is something like a beginBatch/endBatch pair of
 functions. But what do we do if the author never calls endBatch?

 This is made especially bad by the fact that JavaScript uses
 exceptions which makes it very easy to miss calling endBatch if an
 exception is thrown unless the developer uses finally, which most
 don't.


Since the execution turn is a DOM/host concept, we can add something like an
event handler to the scope which fires before exit. Something like:

   window.addEventListener("turnEnd", ...);

Listeners could be handed the mutation lists as members of the event object
they're provided. I know Rafael has more concrete ideas here about the
queues to be produced/consumed, but generally speaking, having the ability
to continue to add turnEnd listeners while still in a turn gives you the
power to operate on consistent state without forcing the start/end pair or
specific exception handling logic. Think of it as a script element's
"finally" block.
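Concretely, the shape might be something like this (all names
hypothetical, including the assumed e.mutations list):

window.addEventListener("turnEnd", function(e) {
  // e.mutations: the mutation records queued during this turn
  e.mutations.forEach(function(record) {
    console.log(record.type, record.target);
  });
});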


  Maybe this is a stupid question, since I'm not familiar at all with
  the use-cases involved, but why can't we delay firing the
  notifications until the event loop spins?  If we're already delaying
  them such that there are no guarantees about what the DOM will look
  like by the time they fire, it seems like delaying them further
  shouldn't hurt the use-cases too much more.  And then we don't have to
  put further effort into saying exactly when they fire for each method.
   But this is pretty obvious, so I assume there's some good reason not
  to do it.

 To enable things like widget libraries which want to keep state
 up-to-date with a DOM.

 / Jonas




Re: Model-driven Views

2011-04-28 Thread Alex Russell
On Tue, Apr 26, 2011 at 7:32 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Apr 25, 2011 at 9:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/22/11 8:35 PM, Rafael Weinstein wrote:
 Myself and a few other chromium folks have been working on a design
 for a formalized separation between View and Model in the browser,
 with needs of web applications being the primary motivator.

 Our ideas are implemented as an experimental Javascript library:
 https://code.google.com/p/mdv/ and the basic design is described here:
 http://mdv.googlecode.com/svn/trunk/docs/design_intro.html.

 The interesting thing to me is that the DOM is what's meant to be the model
 originally, as far as I can tell, with the CSS presentation being the
 view

 I guess we ended up with too much view leakage through the model so we're
 adding another layer of model, eh?

 There's always multiple layers of model in any non-trivial system.  ^_^

 In this case, the original "DOM as model" is valid in the sense of the
 page as a more-or-less static document, where it's the canonical
 source of information.  With an app, though, the data canonically
 lives in Javascript, with the DOM being relegated to being used to
 display the data and allow user interaction.  MDV is one possibility
 for making this relationship cleaner and simpler.

Right. DOM-as-model works here in the sense that if you consider
existing DOM elements to be participants in a hidden (un-exposed,
non-extensible) model, then this is simply a way of using the DOM
hierarchy to make that other model axis available, extensible, and
pluggable. MDV still needs a little help in the areas where existing
HTML is strongest as a model (relationship to forms, etc.), but it's
already close enough that the value properties of form elements make
sense when bound.
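For flavor, a binding in the MDV prototype looks roughly like this
(syntax approximate; see the linked design docs for the real thing):

<template instantiate>
  <input value="{{ name }}">
  <p>Hello, {{ name }}!</p>
</template>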

Teasing out how conjoined things have become in HTML/DOM has been
difficult so far, not least of all because DOM fails to make clear how
attributes, properties, and serialization are meant to behave WRT the
model of some chunk of markup/DOM.

Regards



Re: Model-driven Views

2011-04-28 Thread Alex Russell
On Thu, Apr 28, 2011 at 12:09 PM, Maciej Stachowiak m...@apple.com wrote:

 On Apr 28, 2011, at 2:33 AM, Jonas Sicking wrote:

 On Thu, Apr 28, 2011 at 2:02 AM, Maciej Stachowiak m...@apple.com wrote:

 On Apr 27, 2011, at 6:46 PM, Rafael Weinstein wrote:




 What do you think?


 - Is this something you'd like to be implemented in the browsers,

 Yes.

  and if yes, why? What would be the reasons to not just use script
  libraries (like your prototype).

 FAQ item also coming for this.

 Having heard Rafael's spiel for this previously, I believe there are some 
 things that templating engines want to do, which are hard to do efficiently 
 and conveniently using the existing Web platform.

 However, I think it would be better to add primitives to the Web platform 
 that could be used by the many templating libraries that already exist, at 
 least as a first step:

 - There is a lot of code built using many of the existing templating 
 solutions. If we provide primitives that let those libraries become more 
 efficient, that is a greater immediate payoff than creating a new 
 templating system, where Web apps would have to be rewritten to take 
 advantage.

 - It seems somewhat hubristic to assume that a newly invented templating 
 library is so superior to all the already existing solutions that we should 
 encode its particular design choices into the Web platform immediately.

 - This new templating library doesn't have enough real apps built on it yet 
 to know if it is a good solution to author problems.

 - Creating APIs is best done incrementally. API is forever, on the Web.

 - Looking at the history of querySelector(), I come to the following 
 conclusion: when there are already a lot of library-based solutions to a 
 problem, the best approach is to provide technology that can be used inside 
 those libraries to improve them; this is more valuable than creating an API 
 with a primary goal of direct use. querySelector gets used a lot more via 
 popular JavaScript libraries than directly, and should have paid more 
 attention to that use case in the first place.

 Perhaps there are novel arguments that will dissuade me from this line of 
 thinking, but these are my tentative thoughts.

 I agree with much of this. However it's hard to judge without a bit
 more meat on it. Do you have any ideas for what such primitives would
 look like?

 That's best discussed in the context of Rafael explaining what limitations 
 prevent his proposal from working as well as it could purely as a JS library.

The goal for this work is explicitly *not* to leave things to
libraries -- I'd like for that not to creep into the discussion as an
assumption or a pre-req. Libraries are expensive, slow, and lead to a
tower-of-babel problem. On the other hand, good layering and the
ability to explain current behavior in terms of fewer, smaller
primitives is desirable, if only to allow libraries to play whatever
role they need to when the high-level MDV system doesn't meet some
particular need.

 The one specific thing I recall from a previous discussion of this proposal 
 is that a way is needed to have a section of the DOM that is inactive - 
 doesn't execute scripts, load anything, play media, etc - so that your 
 template pattern can form a DOM but does not have side effects until the 
 template is instantiated.

Right. The contents of the template element are in that inactive state.

 This specific concept has already been discussed on the list, and it seems 
 like it would be very much reusable for other DOM-based templating systems, 
 if it wasn't tied to a specific model of template instantiation and updates.

Having it be a separately addressable primitive sounds like a good
thing...perhaps as some new Element type?
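For instance, a sketch of such an element (the tag name and contents
are illustrative only):

<template id="commentRow">
  <!-- Inert while inside the template: the script must not run and
       the image must not load until the content is instantiated. -->
  <img src="avatar.png">
  <script>trackRender();</script>
</template>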



Re: Rename XBL2 to something without X, B, or L?

2010-12-21 Thread Alex Russell
How 'bouts a shorter version of Tab's suggestion: "Web Components"?

On Thu, Dec 16, 2010 at 5:59 AM, Anne van Kesteren ann...@opera.com wrote:
 On Thu, 16 Dec 2010 14:51:39 +0100, Robin Berjon ro...@berjon.com wrote:

 On Dec 14, 2010, at 22:24 , Dimitri Glazkov wrote:

 Looking at the use cases and the problems the current XBL2 spec is
 trying address, I think it might be a good idea to rename it into
 something that is less legacy-bound?

 I strongly object. We have a long and proud tradition of perfectly
 horrible and meaningless names such as XMLHttpRequest. I don't see why we'd
 ever have to change.

 Shadow HTML Anonymous DOm for the Web!

 Cause I know you are being serious I will be serious as well and point out
 that XMLHttpRequest's name is legacy bound as that is what implementations
 call it and applications are using. XBL2 has none of that.


 --
 Anne van Kesteren
 http://annevankesteren.nl/





Re: [XHR2] FormData for form

2010-09-14 Thread Alex Russell
I have a preference for the second syntax. These sorts of classes
should always be new-able.
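For what it's worth, the constructor form also composes naturally with
XHR (usage sketch; the URL is made up):

var form = document.querySelector("form");
var xhr = new XMLHttpRequest();
xhr.open("POST", "/submit");
// Visibly allocates a fresh FormData on every call:
xhr.send(new FormData(form));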

On Tue, Sep 14, 2010 at 10:46 AM, Jonas Sicking jo...@sicking.cc wrote:
 Hi All,

 There was some discussions regarding the syntax for generating a
 FormData object based on the data in an existing form. I had
 proposed the following syntax

 myformelement.getFormData();

 however it was pointed out that the downside with this API is that
 it's not clear that a new FormData object is created every time.
 Instead the following syntax was proposed:

 new FormData(myformelement);

 however I don't see this syntax in the new XHR L2 drafts. Is this
 merely an oversight or was the omission intentional?

 I'm fine with either syntax, but since we're getting close to shipping
 Firefox 4, and I'd like to include this functionality (in fact, it's
 been shipping for a long time in betas), I'd like to see how much
 consensus the various proposals carried.

 / Jonas





addEventListener naming

2009-04-24 Thread Alex Russell
From this thread on whatwg:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-April/019379.html

and per Hixie's request that I re-direct this particular discussion here:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-April/019381.html

The DOM function addEventListener is probably too long. It should,
instead, be named something much shorter owing to the amount of
exercise it receives. Further, it should default the last parameter to
be false (non-capture-phase). This call:

node.addEventListener("click", function(e) { /* ... */ }, false);

Should be able to be written as (e.g.):

node.listen("click", function(e) { /* ... */ });

Similarly, removeEventListener should be aliased as unlisten. As a
further help, the common-case operation of listening-for-a-single-call
is currently written as:

var h = function(e) {
/* ... */
node.removeEventListener("click", h);
};
node.addEventListener("click", h);

And given how common this operation is, it should probably have an alias:

node.listenOnce("click", function(e) { /* ... */ });
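A rough shim of the proposed aliases, to make the shapes concrete
(names from this message, implementation mine):

Node.prototype.listen = function(type, fn) {
  this.addEventListener(type, fn, false);  // non-capture by default
  return this;
};
Node.prototype.unlisten = function(type, fn) {
  this.removeEventListener(type, fn, false);
  return this;
};
Node.prototype.listenOnce = function(type, fn) {
  var self = this;
  this.addEventListener(type, function h(e) {
    self.removeEventListener(type, h, false);  // detach after first call
    fn.call(self, e);
  }, false);
  return this;
};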

Regards



Re: [selectors-api] SVG WG Review of Selectors API

2009-01-26 Thread Alex Russell


On Jan 26, 2009, at 1:49 PM, Lachlan Hunt wrote:



Alex Russell wrote:
Can this be represented in a :not() clause somehow? Foisting more  
work onto script is the wrong answer.


No.


How about "not yet"?

Needing to do this filtering in script is clearly a spec bug. QSA is
already littered with them, but an inability to filter an intrinsic
property of a tag in the query language that's native to the platform
is tractable. We just need to invent a pseudo-property for elements
which can be matched by a :not([someProperty="your_ns_here"]).
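For contrast, the script fallback being objected to looks roughly like
this today (sketch):

// Keep only the elements outside the SVG namespace, in script:
var all = document.querySelectorAll("video");
var htmlOnly = Array.prototype.filter.call(all, function(el) {
  return el.namespaceURI !== "http://www.w3.org/2000/svg";
});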


 The SVG WG explicitly requested an example illustrating how to  
filter elements based on the namespace URI that works in the general  
case, given that there is no longer a namespace prefix resolution  
mechanism supported in this version of the API.


So they obliquely pointed out a spec bug. Don't get me wrong, I'm no  
lover of namespaces. Frankly I think they're a bug. But SVG is stuck  
w/ 'em until it can find a way to evolve out of the XML ooze. Until  
that time, we must surely be able to do better by folks who want to  
try to make the platform feel unified, no?


Regards

I'm well aware that with the specific markup example given in the  
spec, the selector "svg video" would have the same result in this
case, but that doesn't work for the general case, which is what the  
SVG WG wanted to see.





Re: [access-control] Rename spec?

2009-01-14 Thread Alex Russell


I do agree the title is important and support either of the
proposed new titles (preference goes with "Resource"). One question
I have here is whether "Domain" would be more accurate than "Origin".


"Domain" does not capture significance of the scheme and port, while
"Origin" does. I'm updating the draft to use terminology a bit more
consistently now so it should become less confusing. (E.g. I'm
removing "cross-site" in favor of "cross-origin" as the latter has a
clearly defined meaning and the former is just used on blogs.)


This seems both condescending and useless. Nearly everyone knows what
"cross domain" and "same domain policy" mean, whereas "cross origin"
is just what language lawyers say to make regular web developers feel
bad (AFAICT).


Please end the madness.

Regards



Re: [access-control] Rename spec?

2009-01-14 Thread Alex Russell


Feels like URL vs. URI to me, which for the 80% case is simply
bike-shedding. I appreciate that there is a question of specificity and
that your clarification is more correct...but is that a good enough  
reason to do it?


Regards

On Jan 14, 2009, at 11:14 AM, Anne van Kesteren wrote:

On Wed, 14 Jan 2009 17:52:50 +0100, Alex Russell  
a...@dojotoolkit.org wrote:
I do agree the title is important and support either of the
proposed new titles (preference goes with "Resource"). One
question I have here is whether "Domain" would be more accurate
than "Origin".


"Domain" does not capture significance of the scheme and port, while
"Origin" does. I'm updating the draft to use terminology a bit more
consistently now so it should become less confusing. (E.g. I'm
removing "cross-site" in favor of "cross-origin" as the latter has a
clearly defined meaning and the former is just used on blogs.)


This seems both condescending and useless. Nearly everyone knows
what "cross domain" and "same domain policy" mean, whereas "cross
origin" is just what language lawyers say to make regular web
developers feel bad (AFAICT).


Please end the madness.


Well, both are important (and different, origin is a superset), no?  
E.g. document.domain clearly represents a domain, whereas the
MessageEvent interface has an origin attribute that gives back an  
origin. This very draft defines two headers with the name origin in  
them. It seems to me that developers will quickly pick up the  
difference.
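To make the distinction concrete (illustration, not from the thread):
http://example.com, https://example.com, and http://example.com:8080
all share the domain example.com, but they are three distinct origins,
because scheme and port both count. document.domain reports the same
value for all three.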