Re: [custom-elements] Prefix x- for custom elements like data- attributes

2016-04-25 Thread Brian Kardell
On Mon, Apr 25, 2016 at 1:06 PM, Bang Seongbeom <bangseongb...@hotmail.com>
wrote:

> It would be good to restrict custom element's name to start with like
> 'x-' for the future standards. User-defined custom attributes; data
> attributes are also restricted its name to start with 'data-' so we can
> define easily new standard attribute names ('aria-*' or everything
> except for 'data-*'.)
>

You can't really reasonably further restrict future HTML, though.  Relaxing
it is easier than restricting it.  I can't really understand why you'd want
to in this case, as they are dasherized - HTML doesn't need dasherized
native elements.

In practice attributes aren't really restricted either - there is a
veritable ocean of custom attributes out there that are not data-*
prefixed.  data-* attributes do give you a potentially nicer API for
dealing with attribute-oriented properties.  Things like Angular used ng-*
and, realistically, those are probably safe - it's not likely that HTML
needs those in the future.  They've also done some interesting things
looking at what is actually functionally valid - like attribute names that
are surrounded by braces or parens.  In any case, since people are well
aware that they -can- use any old attributes, it kind of doesn't matter
what the spec says when it comes to new standards.  If it would break the
web, it would break the web.
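
The "potentially nicer API" that data-* buys you is the dataset reflection;
a rough sketch of the name mapping involved (simplified from the actual spec
conversion - `dataAttrToCamel` is a hypothetical helper, not a real API):

```javascript
// Sketch of how a data-* attribute name becomes a dataset property name:
// drop the "data-" prefix, then camel-case letters that follow hyphens.
function dataAttrToCamel(name) {
  return name
    .slice('data-'.length)
    .replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(dataAttrToCamel('data-user-id')); // "userId"
```

So `el.dataset.userId` reads the `data-user-id` attribute, while an arbitrary
custom attribute like ng-model only gets the plain getAttribute treatment.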

Same with custom tags, really: HTML has always permitted them because that's
how it stays forward parsable... but they haven't had a way to be useful.
Custom elements make them useful, but put them in a compelling box that
allows us to add anything that isn't dasherized.  That was a long, long way
in the making; I can't honestly see it being undone in an even stricter
fashion.
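
The "dasherized box" can be sketched as a name check (a simplification of
the real custom element name grammar - `looksLikeCustomElementName` is an
invented helper for illustration):

```javascript
// Custom element names must contain a hyphen, which is exactly what keeps
// the hyphen-free namespace open for future native HTML elements.
// (Simplified: the real grammar allows more characters and excludes a few
// reserved hyphenated names like annotation-xml.)
function looksLikeCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

console.log(looksLikeCustomElementName('x-foo')); // true
console.log(looksLikeCustomElementName('div'));   // false
```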



-- 
Brian Kardell :: @briankardell


Re: [Custom Elements] Extension of arbitrary elements at runtime.

2016-04-11 Thread Brian Kardell
On Sun, Apr 10, 2016 at 11:11 PM, /#!/JoePea <trus...@gmail.com> wrote:

> The is="" attribute lets one specify that some element is actually an
> extended version of that element.
>
> But, in order for this to work, the Custom Element definition has to
> deliberately extend that same basic element type or else it won't
> work.
>
> It'd be nice if a Custom Element definition could be arbitrarily
> applied to any type of element, with the is="" attribute for example, and
> that the element would then be upgraded to the extending type at
> runtime. The custom element could be told what class it is extending
> at runtime in order to perhaps act differently using conditional
> statements.
>
> So, defining the element could be like this:
>
> ```js
> let isDynamic = true
> document.registerElement('some-element', {
>   createdCallback: function() {
>     if (this.typeExtended == 'DIV')
>       // ...
>     if (this.typeExtended == 'BUTTON')
>       // ...
>   },
> }, isDynamic)
> ```
>
> then using the element could be like this:
>
> ```html
> <div is="some-element"></div>
> <button is="some-element"></button>
> ```
>
> What are your thoughts on such a way to extend any type of element at
> runtime? Could it be a way for augmenting, for example, an existing
> app without necessarily having to modify its markup, just simply
> adding is="" attributes as needed? Would this make things too
> complicated?
>
> The real reason I thought of this idea is because:
> https://github.com/infamous/infamous/issues/5
>
> There might be a better way, but thought I'd mention it just in case
> it sparks any ideas.
>
> Cheers!
> - Joe
>
> /#!/JoePea
>
>

Is there a reason that you cannot wrap with fallback?  For example, in your
GitHub issue you are given an existing app with markup like:

<div>
  Hello
</div>

and the issue wanted to change it to

<motor-scene>
  Hello
</motor-scene>

Is there a reason it could not just be

<motor-scene>
  <div>
    Hello
  </div>
</motor-scene>
There isn't really a significant difference between div and motor-scene to
non-supporting browsers.


-- 
Brian Kardell :: @briankardell


Re: Telecon / meeting on first week of April for Web Components

2016-03-21 Thread Brian Kardell
On Mar 21, 2016 3:17 PM, "Ryosuke Niwa"  wrote:
>
> For people participating from Tokyo and Europe, would you prefer having
it in early morning or late evening?
>
> Because Bay Area, Tokyo, and Europe are almost uniformly distributed
across the timezone, our time slots are limited:
>
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160405=900=248=268
>
> Do people from Tokyo can participate in the meeting around midnight?
>
> If so, we can schedule it at UTC 3PM, which is 8AM in bay area, midnight
in Tokyo, and 5AM in Europe.
>
> Another option is at UTC 7AM, which is 11PM in bay area, 3PM in Tokyo,
and 8AM in Europe.
>
> - R. Niwa
>

I can afford to attend remotely! :)


Re: Art steps down - thank you for everything

2016-01-28 Thread Brian Kardell
On Jan 28, 2016 10:49 AM, "Chaals McCathie Nevile" 
wrote:
>
> Hi folks,
>
> as you may have noticed, Art has resigned as a co-chair of the Web
Platform group. He began chairing the Web Application Formats group about a
decade ago, became the leading co-chair when it merged with Web APIs to
become the Web Apps working group, and was instrumental in making the
transition from Web Apps to the Web Platform Group. (He also chaired
various other W3C groups in that time).
>
> I've been very privileged to work with Art on the webapps group for so
many years - as many of you know, without him it would have been a much
poorer group, and run much less smoothly. He did a great deal of work for
the group throughout his time as co-chair, efficiently, reliably, and
quietly.
>
> Now we are three co-chairs, we will work between us to fill Art's shoes.
It won't be easy.
>
> Thanks Art for everything you've done for the group for so long.
>
> Good luck, and I hope to see you around.
>
> Chaals
>
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
>  cha...@yandex-team.ru - - - Find more at http://yandex.com
>

Thanks for all your efforts and work Art!  Also for coming to find me and
giving me a ride to TPAC when I got lost in Santa Clara.  Not all chairs
would do that :)


Re: Custom elements contentious bits

2015-12-10 Thread Brian Kardell
On Thu, Dec 10, 2015 at 3:23 PM, Anne van Kesteren <ann...@annevk.nl> wrote:

> On Wed, Nov 25, 2015 at 3:16 PM, Domenic Denicola <d...@domenic.me> wrote:
> > A bit ago Jan put together an initial draft of the "contentious bits"
> for custom elements, in preparation for our January F2F. Today I went
> through and expanded on the issues he put together, with the result at
> https://github.com/w3c/webcomponents/wiki/Custom-Elements:-Contentious-Bits.
> It morphed into a kind of agenda for the meeting, containing "Previously
> contentious bits", "Contentious bits", "Other things to work out", and
> "Other issues worth mentioning".
> >
> > It would be lovely if other vendors could take a look, and fill in
> anything they think is missing, or correct any inaccuracies.
>
> So my impression is that Apple is still in favor of synchronous
> construction. Talking to developers from Ember.js they care about that
> too (to the extent they even think this problem is worthwhile
> solving). The "upgrade" problem is a more general problem we also have
> with service workers and such. There's some kind of boostrapping thing
> that might warrant a more general solution.
>
> Would be great to have some cards on the table.
>
> And with respect to that, Mozilla is interested in shipping Shadow
> DOM. We continue to have concerns with regards to lack of integration
> with the HTML Standard, but hope those will get resolved. Custom
> elements is less of a priority for us at this point, so we're not sure
> what to make of this meeting if things are still up in the air.
>
>
> --
> https://annevankesteren.nl/
>
>

I'd really like to understand where things really are with
async/sync/almost-sync - does anyone have more notes, or would they be
willing to provide more explanation?  I've read the linked contentious bits
and I'm still not sure that I understand.  I can say, for whatever it is
worth, that given some significant time now (we're a few years in) with web
component polyfills, at this point I do see more clearly the desire for
sync.  It's unintuitive at some level in a world where we (including me)
tend to really want to make things async, but if I am being completely
honest, I've found an increasing number of times where this is actually a
little nightmarish to deal with, and I feel almost like perhaps this might
be something of a "least worst" choice.  Perhaps there are some really good
ideas that I just haven't thought of/stumbled across yet, but I can tell you
that for sure a whole lot of frameworks and even apps have their own
lifecycles in which they reason about things, and the async nature makes
this very hard in those cases.
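
A contrived, DOM-free sketch of that coordination problem (all names here
are hypothetical; a plain queue stands in for a real upgrade mechanism):

```javascript
// Simulate asynchronous upgrade: the element's API only exists once the
// upgrade queue is flushed, so framework code running synchronously after
// "creation" can't see it yet.
const pendingUpgrades = [];

function defineAsync(el, api) {
  pendingUpgrades.push(() => Object.assign(el, api));
}

const el = {};
defineAsync(el, { focusFirstField() { return 'focused'; } });

// A framework's own lifecycle hook running "too early":
console.log(typeof el.focusFirstField); // "undefined"

// Later, the upgrade finally happens:
pendingUpgrades.forEach((fn) => fn());
console.log(el.focusFirstField()); // "focused"
```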

Shadow DOM will definitely help address a whole lot of my cases because
it'll hide one end of things, but I can definitely see cases where even
that doesn't help if I need to actually coordinate.  I don't know if it is
really something to panic about but I feel like it's worth bringing up
while there are discussions going on.  The declarative nature and the
seeming agreement to adopt web-component _looking_ tags, even in situations
where they are not exactly web components, make it easy enough to have
mutually agreeable "enough" implementations of things.  For example, I
currently have a few custom elements for which I have both a "native"
definition and an angular directive so that designers I know who write HTML
and CSS can learn a slightly improved vocabulary, say what they mean and
quickly get a page setup while app engineers can then simply make sure they
wire up the right implementation for the final product.  This wasn't my
first choice:  I tried going purely native but problems like the one
described above created way too much contention, more code, pitfalls and
performance issues.  In the end it was much simpler to have two for now and
reap a significant portion of the benefit if not the whole thing.

Anywho... I'm really curious to understand where this stands atm or where
various companies disagree if they do.


-- 
Brian Kardell :: @briankardell


Re: App-to-App interaction APIs - one more time, with feeling

2015-10-21 Thread Brian Kardell
or prone. Maybe I am
> misunderstanding you?
>
>
>
> - Daniel
>
>
>
> From: Paul Libbrecht [mailto:p...@hoplahup.net]
> Sent: Sunday, October 18, 2015 9:38 AM
> To: Daniel Buchner <dabuc...@microsoft.com>
> Cc: public-webapps@w3.org
> Subject: Re: App-to-App interaction APIs - one more time, with feeling
>
>
>
> Daniel,
>
> as far as I can read the post, copy-and-paste-interoperability would be a
> "sub-task" of this.
> It's not a very small task though.
> In my world, e.g., there was a person who invented a "math" protocol
> handler. For him it meant that formulæ be read out loud (because his mission
> is making the web accessible to people with disabilities including eyes) but
> clearly there was no way to bring a different target.
>
> Somehow, I can't really be convinced by such a post except asking the user
> what is the sense of a given flavour or even protocol handler which, as we
> know, is kind of error-prone. Agree?
>
> paul
>
> PS: I'm still struggling for the geo URL scheme to be properly handled but
> it works for me in a very very tiny spectrum of apps (GMaps >
> Hand-edited-HTML-in-Mails-through-Postbox > Blackberry Hub > Osmand). This
> is certainly a good example of difficult sequence of choices.
>
>
>
>
>
>
> Daniel Buchner
> 14 octobre 2015 18:33
>
> Hey WebAppers,
>
>
>
> Just ran into this dragon for the 1,326th time, so thought I would do a
> write-up to rekindle discussion on this important area of developer need the
> platform currently fails to address:
> http://www.backalleycoder.com/2015/10/13/app-to-app-interaction-apis/. We
> have existing APIs/specs that get relatively close, and my first instinct
> would be to leverage those and extend their capabilities to cover the
> broader family of use-cases highlighted in the post.
>
>
>
> I welcome your ideas, feedback, and commentary,
>
>
>
> - Daniel
>
>



-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Is polyfilling future web APIs a good idea?

2015-08-10 Thread Brian Kardell
.




  On Aug 7, 2015, at 7:07 AM, Brian Kardell bkard...@gmail.com wrote:
 
  On Thu, Aug 6, 2015 at 6:50 PM, Glen Huang curvedm...@gmail.com wrote:
  @William @Matthew
 
  Ah, thanks. Now I think prollyfill is prolly a good name. :)
 
  @Brian
 
  Actually, I had this pattern in mind:
 
  When no browsers ship the API:
 
  ```
  if (HTMLElement.prototype.foo) {
    HTMLElement.prototype._foo = HTMLElement.prototype.foo;
  } else {
    HTMLElement.prototype._foo = polyfill;
  }
  ```
 
  This assumes you'll match, which - again depending on how far you are
  might be a big bet... Personally, I wouldn't use that myself if
  writing something -- Seems a lot like  when people simply provided N
  versions of the same prefixed properties instead of just one, it has
  potential to go awry... No one can actually vary because they've done
  the equivalent of shipping the unprefixed thing inadvertently
  intending it to be an experiment, but it wasnt.
 
 
  When at least two browsers ship this API:
 
  ```
  if (!HTMLElement.prototype.foo) {
    HTMLElement.prototype.foo = polyfill;
  }
  HTMLElement.prototype._foo = function() {
    console.warn('deprecated');
    return this.foo();
  };
  ```
 
  But it's not deprecated in browsers that don't support it, it's a
  polyfill at that point and aside from the console.warn (which again,
  in this case seems incorrect in the message at least) it should be
  generally be identical to the oneliner I gave before - the prototype
  for _foo is the polyfill version.
 
 
 
  --
  Brian Kardell :: @briankardell :: hitchjs.com



Re: Is polyfilling future web APIs a good idea?

2015-08-06 Thread Brian Kardell
On Thu, Aug 6, 2015 at 6:50 PM, Glen Huang curvedm...@gmail.com wrote:
 @William @Matthew

 Ah, thanks. Now I think prollyfill is prolly a good name. :)

 @Brian

 Actually, I had this pattern in mind:

 When no browsers ship the API:

 ```
 if (HTMLElement.prototype.foo) {
   HTMLElement.prototype._foo = HTMLElement.prototype.foo;
 } else {
   HTMLElement.prototype._foo = polyfill;
 }
 ```

This assumes you'll match, which - again, depending on how far along you
are - might be a big bet... Personally, I wouldn't use that myself if
writing something.  It seems a lot like when people simply provided N
versions of the same prefixed properties instead of just one; it has
potential to go awry... No one can actually vary, because they've done the
equivalent of shipping the unprefixed thing - intending it to be an
experiment, but inadvertently it wasn't.


 When at least two browsers ship this API:

 ```
 if (!HTMLElement.prototype.foo) {
   HTMLElement.prototype.foo = polyfill;
 }
 HTMLElement.prototype._foo = function() {
   console.warn('deprecated');
   return this.foo();
 };
 ```

But it's not deprecated in browsers that don't support it - it's a
polyfill at that point - and aside from the console.warn (which, again,
seems incorrect in this case, in the message at least), it should
generally be identical to the one-liner I gave before: the prototype
for _foo is the polyfill version.
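
That promotion step can be sketched as a single helper (`promote` is
hypothetical, not a proposed API): the prefixed name simply ends up
aliasing whatever the real implementation is, with no deprecation warning
in browsers that never shipped the native API.

```javascript
// Point the prefixed name at the native implementation where it shipped,
// and at the polyfill everywhere else - no deprecation shim needed.
function promote(proto, name, polyfill) {
  const impl = proto[name] || polyfill; // prefer native when present
  proto['_' + name] = impl;             // prefixed name keeps working
  if (!proto[name]) proto[name] = impl; // fill the gap elsewhere
}

// Works against any prototype; a plain object stands in here:
const proto = {};
promote(proto, 'foo', function () { return 'polyfilled'; });
console.log(proto.foo());              // "polyfilled"
console.log(proto._foo === proto.foo); // true
```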



-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Is polyfilling future web APIs a good idea?

2015-08-04 Thread Brian Kardell
On Tue, Aug 4, 2015 at 8:22 PM, Glen Huang curvedm...@gmail.com wrote:

There's actually a lot of questions in here, so let me take them one
at a time...

 On second thought, what's the difference between prollyfills and libraries
A major difference is that it's hard to translate libraries into
standards regardless of the approach they use.  We just don't do it.
We have libraries like jQuery that are as successful as we can ever
reasonably expect anything to get - it's inarguable that jQuery is
used more than any single browser, for example - and yet we didn't
just standardize jQuery.  What's more, we wouldn't, for lots of
technical and political reasons.  jQuery wasn't made with becoming a
standard in mind, and it didn't propose things in the same standards sense
beforehand or early - a lot of the approach/style matters too (see
below).  Aspects of it could have been - jQuery has individuals
representing it on standards committees (me, for example), and prollyfills
give us a way to do this - ECMA, for example, produces a lot of
prollyfills as they go, and they actually get use and feedback before it's
way too late.

 exposed web APIs in a functional style (e.g., node1._replaceWith(node2) vs
 replaceWith(node2, node1))? Or in a wrapper style like jQuery does? Prefixing
 APIs doesn't seem to be that different from using custom APIs?

It could be, but the further you get from the actual way it will be
used, the more we will debate on what will happen if you change its
surface.  A prollyfill is as close as we can approximate to the real
proposal without shooting ourselves in the foot.  It lets developers
and standards people work together, answer questions about uptake and
confusion, identify use cases and edgecases, etc.

 You might say the prefixing approach resembles native APIs more closely, but
 when changing your code to use native APIs, modifying one character or several
 doesn't really make much difference (they are the same if you find & replace),
 as long as you have to modify the code.

Definitely not as simple if you change the whole pattern - asking
someone to grep an entire codebase is a bigger ask than a nice simple
pattern that lets you just say something like:

// Hey, our prollyfill matches native, now it's a polyfill!
HTMLElement.prototype.foo = HTMLElement.prototype._foo;



-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Is polyfilling future web APIs a good idea?

2015-08-03 Thread Brian Kardell
On Mon, Aug 3, 2015 at 9:07 PM, Glen Huang curvedm...@gmail.com wrote:
 Brian,

 prollyfills seems pragmatic. But what about when the logic of an API changes, 
 but not the name? The node.replaceWith() API for example is about to be 
 revamped to cover some edge cases. If the prollyfills exposed 
 node._replaceWith(), what should it do when the new node.replaceWith() comes? 
 Expose node._replaceWith2()? This doesn't seem to scale.


Why would it need to?  Just like any library, you import a version and
deal with incompatibilities when you upgrade?


 But I do see the benefit of prefixing in prollyfills. node.replaceWith() used 
 to be node.replace(). If we exposed _replace() earlier, we can swap the 
 underlying function with node.replaceWith() when we release a new version, 
 and old code immediately benefit from the new API. But over time, prollyfills 
 are going to accumulate a lot obsolete APIs. Do you think we should use 
 semver to introduce breaking changes? Or these obsolete APIs should always be 
 there?


Yes, I think authors will opt in to an API, and that API may contain
breaking changes or backcompat changes - I think that's up to the people
implementing and maintaining it to experiment with.  It's too early to say
what will be more successful, but I don't foresee things growing
forever - at some point people remove polyfills too, in practice... In
theory you could use something like FT Labs' polyfill-as-a-service to
make any browser 'normalized', but that gets really heavy if it isn't
targeted and goes back too far in practice.  No one is even writing
polyfills for IE6 anymore - most don't even go back to IE8.


 And if we are going this route, I think we need blessing from the WG. They 
 have to promise they will never design an API that starts with the prefix we 
 used.

We have that in web components already (no native element will have a
dasherized name - in most practical terms, attributes too), for all
things CSS (-vendor-foo just loses the vendor and becomes --foo), and when
you're talking about DOM - yeah, we don't have one, but no DOM API will
contain a leading underscore; I can just about promise that without
any agreements - though I agree it'd be great if we just had one.



-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Is polyfilling future web APIs a good idea?

2015-08-02 Thread Brian Kardell
On Sun, Aug 2, 2015 at 9:39 PM, Glen Huang curvedm...@gmail.com wrote:
 I'm pretty obsessed with all kinds of web specs, and invest heavily in tools 
 based on future specs. I was discussing with Tab the other day about whether 
 he thinks using a css preprocessor that desugars future css is a good idea. 
 His answer was surprisingly (at least to me) negative, and recommended sass. 
 His arguments were that

 1. the grammar is in flux and can change
 2. css might never offer some constructs used in sass, at least with very low 
 priority.

 I think these are good points, and it reduced my enthusiasm for future spec 
 based css preprocessors. But this got me thinking about polyfills for future 
 web APIs. Are they equally not recommended? Likewise, the APIs might change, 
 and for DOM operations we should rely on React because the native DOM might 
 never offer such declarative APIs, at least with very low priority. Do 
 polyfills like WebReflection's DOM4 look promising? For new projects, should 
 I stick with polyfills that only offers compatibilities for older browser, 
 and for future spec features, only use libraries that offer similar features 
 but invent their own APIs, or should I track future specs and use these 
 unstable polyfills?

 I'm torn on this subject. Would like to be enlightened.
[snip]

TL;DR: Yes, I think they are good - really good actually, with some
best practices.

CSS is a slightly different beast at the moment because it is not
(yet) extensible, but let's pretend for a moment that it is, so that a
uniform answer works ok...

This was why I and others advocated defining the idea of/using the
term "prollyfill" as opposed to "polyfill".  With a polyfill you are
filling in gaps and cracks in browser support for an established
standard; with a prollyfill you might be charting some new waters.  In
a sense, you're taking a guess.  If history is any indicator, then the
chances that it will ultimately ship that way without change are very
small until it really ships in two interoperable browsers that way.
There's more to it than slight semantics too, I think: "polyfill" was
originally defined as above, and now for many developers the
expectation is that this is what it's doing.  In other words, it's
just providing a fill for something which will ultimately be native and
therefore won't change.  Except, as we are discussing, this might not
be so.  Personally, I think this matters in a big way because so much
depends on people understanding things: if users had understood and
respected vendor-prefixed CSS for use as intended, for example, it
wouldn't have been much of a problem - but they didn't.  Users didn't
understand that, things shipped natively, and vendors had to adjust -
things got messy.

Debates about this took up a lot of email space in early extensible
web cg lists - my own take remains unchanged, mileage may vary:

It is my opinion that, when possible, we should 'prefix' prollyfilled
APIs - this could be something as simple as an underscore in DOM APIs
or a --property in CSS, etc.  Hopefully this makes it obvious that
it is not native and is subject to change, but that isn't the reason
to do it.  The reason to do it is the one above: it *may* actually
change, so you shouldn't mislead people to think otherwise - it
potentially affects a lot.  For example, if something gets very
popular masquerading as native, but no one will actually implement
it natively without changes, we are stuck having to deal with
shitty compromises in standards to keep the web from breaking.  Also,
what happens when devs sell a standard with the promise that it's
going to be native and then we rip that rug out from underneath them?

For me then, following a nice pattern where authors opt in and decide
whether or not to prefix is ideal.  Since authors opt in, just like
they do with a library, it can work in all browsers, it can
version, and it's way better than vendor prefixes on native.  Yes,
your code won't automatically run faster if it is implemented
natively - but depending on how far along the track you are, it might
be very long odds that it will ship just like that.  If you get very
lucky, your last version of "prollyfill" becomes a polyfill, and if a
site wants to use the native one, they can tweak a single arg and it's off
to the races.
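
That opt-in pattern might look something like this (all names here are
hypothetical illustrations, not a proposed API, and the behavior is a
naive stand-in):

```javascript
// The prollyfill always installs under a prefixed name; authors who accept
// the risk can opt in to the unprefixed name, which defers to native code
// whenever the browser has actually shipped it.
function installReplaceWith(proto, { unprefixed = false } = {}) {
  const impl = function replaceWith(node) {
    // naive stand-in for the proposed behavior, for illustration only
    if (this.parentNode) this.parentNode.replaceChild(node, this);
  };
  proto._replaceWith = impl; // clearly non-native, free to change
  if (unprefixed && !proto.replaceWith) {
    proto.replaceWith = impl; // opt-in, and never clobbers native
  }
}
```

In a page you'd call it against HTMLElement.prototype; flipping the single
`unprefixed` arg is the "tweak" that promotes a matching prollyfill.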

Realistically, I think that prollyfills are probably the only way to
strike the right balance of incentives and disincentives that allow
the standards community to do good things, create a good feedback loop
that developers can actually be involved in and measure something
experimental before we ship it.

-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Inheritance Model for Shadow DOM Revisited

2015-04-30 Thread Brian Kardell
On Thu, Apr 30, 2015 at 2:00 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 30, 2015, at 4:43 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Apr 28, 2015 at 7:09 PM, Ryosuke Niwa rn...@apple.com wrote:
 The problem with shadow as function is that the superclass implicitly 
 selects nodes based on a CSS selector so unless the nodes a subclass wants 
 to insert matches exactly what the author of superclass considered, the 
 subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select=input.foo, then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped in 
 a span.

 So what if we flipped this as well and came up with an imperative API
 for shadow as a function. I.e. shadow as an actual function?
 Would that give us agreement?

 We object on the basis that shadow as a function is fundamentally 
 backwards way of doing the inheritance.  If you have a MyMapView and define a 
 subclass MyScrollableMapView to make it scrollable, then MyScrollableMapView 
 must be a MyMapView.  It doesn't make any sense for MyScrollableMapView, for 
 example, to be a ScrollView that then contains a MyMapView.  That's a has-a 
 relationship, which is appropriate for composition.

 - R. Niwa



Is there really a hard need for inheritance over composition? Won't
composition ability + an imperative API that allows you to properly
delegate to the stuff you contain be just fine for a v1?



-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: Proposal for changes to manage Shadow DOM content distribution

2015-04-22 Thread Brian Kardell
On Apr 21, 2015 10:29 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Apr 21, 2015, at 10:17 PM, Brian Kardell bkard...@gmail.com wrote:

 On Apr 21, 2015 8:22 PM, Ryosuke Niwa rn...@apple.com wrote:
 
  Hi all,
 
  Following WebApps discussion last year [1] and earlier this year [2]
about template transclusions and inheritance in shadow DOM, Jan Miksovsky
at Component Kitchen, Ted O'Connor and I (Ryosuke Niwa) at Apple had
a meeting where we came up with changes to the way shadow DOM distributes
nodes to better fit real world use cases.
 
  After studying various real world use of web component APIs as well as
existing GUI frameworks, we noticed that selector based node distribution as
currently spec'ed doesn't address common use cases and the extra
flexibility CSS selectors offers isn't needed in practice.  Instead, we
propose named insertion slots that could be filled with the contents in
the original DOM as well as contents in subclasses.  Because the proposal
uses the same slot filling mechanism for content distributions
and inheritance transclusions, it eliminates the need for multiple shadow
roots.
 
  Please take a look at our proposal at
https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
 
  [1]
https://lists.w3.org/Archives/Public/public-webapps/2014AprJun/0151.html
  [2]
https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0611.html
 

 I just wanted to note that a month or two ago I tried to assume nothing and
come up with a bare-essentials concept which involved named slots.  Is
there a proposed way to project from an attribute value into content or
from attribute to attribute?

 In other words, if I had <x-foo blah="hello">.  Can I map blah into a
slot or identify an attribute value in my template *as* a slot?

 Not at the moment but I could imagine that such a feature could be easily
added. e.g.

 <x-foo blah="hello">

 <!-- implementation -->
 <template element="x-foo">
   <content attrslot="blah">
 </template>
 - R. Niwa


For the record, I'd love to see that discussed as part of a real proposal
because I think it's pretty useful - you can see lots of things in the wild
essentially trying to do custom elements with a similar need, and it
honestly seems easier than element-based slots, technically speaking, so it
would be a shame to lack it.
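
Very roughly, the attribute-to-slot projection could be sketched over plain
strings (the {attrslot:NAME} placeholder notation is invented here purely
for illustration; nothing like it is specified):

```javascript
// Project a host element's attribute values into named slots of a template.
function fillAttrSlots(template, attrs) {
  return template.replace(/\{attrslot:([\w-]+)\}/g, (match, name) =>
    name in attrs ? attrs[name] : ''
  );
}

console.log(fillAttrSlots('<span>{attrslot:blah}</span>', { blah: 'hello' }));
// "<span>hello</span>"
```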


Re: Proposal for changes to manage Shadow DOM content distribution

2015-04-21 Thread Brian Kardell
On Apr 21, 2015 8:22 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi all,

 Following WebApps discussion last year [1] and earlier this year [2]
about template transclusions and inheritance in shadow DOM, Jan Miksovsky
at Component Kitchen, Ted O'Connor and I (Ryosuke Niwa) at Apple had
a meeting where we came up with changes to the way shadow DOM distributes
nodes to better fit real world use cases.

 After studying various real world use of web component APIs as well as
existing GUI frameworks, we noticed that selector based node distribution as
currently spec'ed doesn't address common use cases and the extra
flexibility CSS selectors offers isn't needed in practice.  Instead, we
propose named insertion slots that could be filled with the contents in
the original DOM as well as contents in subclasses.  Because the proposal
uses the same slot filling mechanism for content distributions
and inheritance transclusions, it eliminates the need for multiple shadow
roots.

 Please take a look at our proposal at
https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution

 [1]
https://lists.w3.org/Archives/Public/public-webapps/2014AprJun/0151.html
 [2]
https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0611.html


I just wanted to note that a month or two ago I tried to assume nothing and
come up with a bare-essentials concept which involved named slots.  Is
there a proposed way to project from an attribute value into content or
from attribute to attribute?

In other words, if I had <x-foo blah="hello">.  Can I map blah into a slot
or identify an attribute value in my template *as* a slot?


Re: Minimum viable custom elements

2015-02-04 Thread Brian Kardell
On Wed, Feb 4, 2015 at 1:54 PM, Alice Boxhall aboxh...@google.com wrote:

 On Wed, Feb 4, 2015 at 10:36 AM, Ryosuke Niwa rn...@apple.com wrote:


 On Feb 4, 2015, at 10:12 AM, Brian Kardell bkard...@gmail.com wrote:

 On Wed, Feb 4, 2015 at 12:41 PM, Chris Bateman chrisb...@gmail.com
 wrote:

 Yeah, I had noted in that post that wrapping a native element with a
 custom element was an option - only drawback is that the markup isn't as
 terse (which is generally advertised as one of the selling points of Custom
 Elements). But that doesn't seem like a deal breaker to me, if subclassing
 needs to be postponed.

 Chris


 As I pointed out earlier:

 <input is="x-foo">

 <x-foo><input></x-foo>

 seems like barely a terseness savings worth discussing.


 Indeed.  Also, authors are used to the idea of including a fallback
 content inside an element after canvas and object elements and this fits
 well with their mental model.


 I'm just trying to get my head around this pattern. In this example, does
 the web page author or the custom element developer embed the input? And
 who is responsible for syncing the relevant attributes across? In reality,
 isn't this going to look more like

 <x-checkbox checked="true">
 <input type="checkbox" checked="true">
 </x-checkbox>

 or as a slightly contrived example,

 <x-slider min="-100" max="100" value="0" step="5">
 <input type="range" min="-100" max="100" value="0" step="5">
 </x-slider>

 Or does the custom element get its state from the embedded element?


the custom element uses its contents as input and, in the simplest sense,
just moves it or maps it during creation... In a more complicated world
with something more like shadow dom (a separate topic) it might be
'projected'
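A minimal sketch of that "maps it during creation" idea, modeled as a pure function rather than a real createdCallback. The helper name and allow-list shape are assumptions for illustration, not any element's actual API:

```javascript
// Illustrative only: a custom element like <x-slider> would, during
// creation, copy a known set of attributes from itself onto the inner
// <input type="range"> it wraps or generates.
function forwardAttributes(hostAttrs, allowList) {
  const inner = { type: 'range' };
  for (const name of allowList) {
    if (name in hostAttrs) inner[name] = hostAttrs[name];
  }
  return inner;
}

const innerAttrs = forwardAttributes(
  { min: '-100', max: '100', value: '0', step: '5' },
  ['min', 'max', 'value', 'step']
);
// innerAttrs.type === 'range', innerAttrs.min === '-100'
```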

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Minimum viable custom elements

2015-02-04 Thread Brian Kardell
On Wed, Feb 4, 2015 at 12:41 PM, Chris Bateman chrisb...@gmail.com wrote:

 Yeah, I had noted in that post that wrapping a native element with a
 custom element was an option - only drawback is that the markup isn't as
 terse (which is generally advertised as one of the selling points of Custom
 Elements). But that doesn't seem like a deal breaker to me, if subclassing
 needs to be postponed.

 Chris


As I pointed out earlier:

<input is="x-foo">

<x-foo><input></x-foo>

seems like barely a terseness savings worth discussing.



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-02-04 Thread Brian Kardell
On Wed, Feb 4, 2015 at 7:56 AM, Olli Pettay o...@pettay.fi wrote:

 On 02/03/2015 04:22 PM, Brian Kardell wrote:



 On Tue, Feb 3, 2015 at 8:06 AM, Olli Pettay o...@pettay.fi mailto:
 o...@pettay.fi wrote:

 On 02/02/2015 09:22 PM, Dimitri Glazkov wrote:

 Brian recently posted what looks like an excellent framing of the
 composition problem:

 https://briankardell.wordpress.com/2015/01/14/friendly-fire-the-fog-of-dom/

 This is the problem we solved with Shadow DOM and the problem I
 would like to see solved with the primitive being discussed on this thread.


[snip]

 If ShadowRoot had something like attribute DOMString name?; which defaults
 to null and null means deep(name) or deep(*) wouldn't be able

 to find the mount, that would let the component itself to say whether it
 can deal with outside world poking it CSS.


That actually doesn't sound crazy to me.  I mean, it actually fits
pretty nicely into the conceptual model I think and it would add a whole
additional layer of possible protection which is explainable in sort of
todays terms with minimal new 'stuff'... the combinator is new anyway and
you're dealing with mount in what seems like a good way there.  I think I
like it.


[snip]

 [Perhaps a bit off topic to the style isolation]
 In other words, I'm not very happy to add super complicated Shadow DOM to
 the platform if it doesn't really provide anything new which
 couldn't be implemented easily with script libraries and a bit stricter
 coding styles and conventions.


I'd suggest that you're radically over-stating - you really can't easily
solve this problem, even with much stricter coding style, as I explained
in that post.  This is a problem and even without named mount protection
above, this would be a giant leap forward because the -default- thing is to
not match.  Doing 'right' by default is a giant win.  That said, as I say
above, I kinda like the named mount idea...


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-02-03 Thread Brian Kardell
On Tue, Feb 3, 2015 at 8:06 AM, Olli Pettay o...@pettay.fi wrote:

 On 02/02/2015 09:22 PM, Dimitri Glazkov wrote:

 Brian recently posted what looks like an excellent framing of the
 composition problem:

 https://briankardell.wordpress.com/2015/01/14/
 friendly-fire-the-fog-of-dom/

 This is the problem we solved with Shadow DOM and the problem I would
 like to see solved with the primitive being discussed on this thread.



 random comments about that blog post.



 [snip]
 We need to be able to select mount nodes explicitly, and perhaps
 explicitly say that all such nodes should be selected.
 So, maybe, deep(mountName) and deep(*)

 Is there a reason you couldn't do that with normal CSS techniques, no
additional combinator?  something like /mount/[id=foo] ?


[snip]

 It still needs to be possible from the hosting page to say “Yes, I mean
 all buttons should be blue”
 I disagree with that. It can very well be possible that some component
 really must control the colors itself. Say, it uses
 buttons to indicate if traffic light is red or green. Making both those
 buttons suddenly blue would break the whole concept of the
 component.


By the previous comment though it seems you are saying it's ok to reach
into the mounts, in which case you could do exactly this... Perhaps the
shortness of the sentence makes it seem like I am saying something I am
not, basically I'm saying it should be possible to explicitly write rules
which do apply inside a mount.  CSS already gives you all sorts of tools
for someone developing a bit in isolation to say how important it is that
this particular rule holds up - you can increase specificity with id-based
nots or use !important or even the style attribute itself if it is that
fundamental - what you can't do is protect yourself on either end from
accidental error.  I feel like one could easily over-engineer a solution
here and kill its actual chances of success, whereas a smaller change could
not only have a good chance of getting done, but have very outsized impact
and provide some of the data on how to improve it further.  If this doesn't
seem -hostile- to decent further improvements, finding something minimal
but still very useful might be good.






 -Olli




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Minimum viable custom elements

2015-01-29 Thread Brian Kardell
On Thu, Jan 29, 2015 at 1:50 PM, Elliott Sprehn espr...@chromium.org
wrote:



 On Fri, Jan 30, 2015 at 3:52 AM, Brian Kardell bkard...@gmail.com wrote:



 On Thu, Jan 29, 2015 at 10:33 AM, Bruce Lawson bru...@opera.com wrote:

 On 29 January 2015 at 14:54, Steve Faulkner faulkner.st...@gmail.com
 wrote:
  I think being able to extend existing elements has potential value to
  developers far beyond accessibility (it just so happens that
 accessibility
  is helped a lot by re-use of existing HTML features.)

 I agree with everything Steve has said about accessibility. Extending
 existing elements also gives us progressive enhancement potential.

 Try https://rawgit.com/alice/web-components-demos/master/index.html in
 Safari or IE. The second column isn't functional because it's using
 brand new custom elements. The first column loses the web componenty
 sparkles but remains functional because it extends existing HTML
 elements.

 There's a similar story with Opera Mini, which is used by at least
 250m people (and another potential 100m transitioning on Microsoft
 feature phones) because of its proxy architecture.

 Like Steve, I've no particularly affection (or enmity) towards the
 input type=radio is=luscious-radio syntax. But I'd like to know,
 if it's dropped, how progressive enhancement can be achieved so we
 don't lock out users of browsers that don't have web components
 capabilities, JavaScript disabled or proxy browsers. If there is a
 concrete plan, please point me to it. If there isn't, it's
 irresponsible to drop a method that we can see working in the example
 above with nothing else to replace it.

 I also have a niggling worry that this may affect the uptake of web
 components. When I led a dev team for a large UK legal site, there's
 absolutely no way we could have used a technology that was
 non-functional in older/proxy browsers.

 bruce


 Humor me for a moment while I recap some historical arguments/play
 devil's advocate here.

 One conceptual problem I've always had with the is= form is that it
 adds some amount of ambiguity for authors and makes it plausible to author
 non-sense.  It's similar to the problem of aria being bolt on with mix
 and match attributes.  With the imperative form of extending you wind up
 with a tag name that definitely is defined as subclassing something
 super-button 'inherits' from HTMLButtonElement and I'll explain how it's
 different.  With the declarative attribute form you basically have to
 manage 3 things: ANY tag, the base class and the final definition.  This
 means it's possible to do things like iframe is=button which likely
 won't work.  Further, you can then proceed to define something which is
 clearly none-of-the-above.


 The is@ only works on the element you defined it to apply to, so iframe
 is=button does nothing unless the element button was registered as a
 type extension to iframe. I don't see that as any more error prone than
 writing paper-buton instead of paper-button.

 In other words, if there are 350 elements in HTML - in 349 you could say
is=button and it would do nothing.  This isn't possible with the pure tag
form, it either is or isn't the tag.  This is all I described - ambiguity
for authors and ability to author nonsense.  Maybe it is 'benign' nonsense
but it's nonsense and potentially frustrating in a way that misspelling a
tag isn't IMO.
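To make that concrete, here is a toy model of a type-extension registry — not the real custom elements API, just an illustration of why is= silently no-ops on 349 of 350 elements while a misspelled tag simply isn't that tag:

```javascript
// Toy registry keyed by (localName, is-value) pairs; names are
// illustrative, not the actual registration API.
const registry = new Map([['input/x-foo', 'XFooDefinition']]);

function lookupTypeExtension(localName, isValue) {
  // Only the element the extension was registered against upgrades;
  // every other (localName, is) combination resolves to nothing.
  return registry.get(localName + '/' + isValue) || null;
}

const hit = lookupTypeExtension('input', 'x-foo');   // 'XFooDefinition'
const miss = lookupTypeExtension('iframe', 'x-foo'); // null — does nothing
```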


 Also fwiw most share buttons on the web are actually iframes, so iframe
 is=facebook-button makes total sense.


you're somewhat locked into thinking that because it's how we've dealt with
things, don't you think?  I mean button is=iframe might conceptually
work too, but we know that they're iframes for SOP/isolation reasons.  That
said, what exactly would you add to your custom element facebook-button
that adds value then?  ... Like... what could you legitimately do with that
that you couldn't do with iframe class=facebook-button?  Would it
actually submit a form in *your* page, would your focus act the same, etc?
I'm asking for real because I think the use-cases are on the small end of
limited

I'm not saying it's better or worse, I'm actually trying to take the devils
advocate position here because there might be something beneath it worth
thinking about...  It does seem that composition actually seems to let you
express something equally good without ambiguity more easily except insofar
as giving you a really first-class fallback option if you don't support JS,
but... I'm having a really hard time imagining more than 3-4 cases  where
that's really a useful thing.



 - E




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Minimum viable custom elements

2015-01-29 Thread Brian Kardell
On Thu, Jan 29, 2015 at 2:43 PM, Bruce Lawson bru...@opera.com wrote:
[snip]


 a really first-class fallback option if you don't support JS is
 vital for the quarter of a billion people who use Opera Mini and the
 100 million people who use the Nokia proxy browser. Fallback rather
 than non-functional pages is vital for the people who don't use latest
 greatest Chromium or Gecko browsers.

 b


But in the context of custom elements (not shadow dom) these should be able
to do 'createdCallback' etc on the server... I can't really see any reason
why they couldn't/wouldn't.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Minimum viable custom elements

2015-01-15 Thread Brian Kardell
Not to sidetrack the discussion, but Steve Faulkner made what I think was a
valid observation and I haven't seen a response... Did I miss it?


Re: Minimum viable custom elements

2015-01-15 Thread Brian Kardell
On Thu, Jan 15, 2015 at 6:43 PM, Domenic Denicola d...@domenic.me wrote:

 Steve's concerns are best illustrated with a more complicated element like
 button. He did a great pull request to the custom elements spec that
 contrasts all the work you have to do with taco-button vs. button
 is=tequila-button:

 https://w3c.github.io/webcomponents/spec/custom/#custom-tag-example vs.
 https://w3c.github.io/webcomponents/spec/custom/#type-extension-example

 The summary is that you *can* duplicate *some* of the semantics and
 accessibility properties of a built-in element when doing custom tags, but
 it's quite arduous. (And, it has minor undesirable side effects, such as
 new DOM attributes which can be overwritten, whereas native role semantics
 are baked in.)

 Additionally, in some cases you *can't* duplicate the semantics and
 accessibility:


 https://github.com/domenic/html-as-custom-elements/blob/master/docs/accessibility.md#incomplete-mitigation-strategies

 An easy example is that you can never get a screen reader to announce
 custom-p as a paragraph, while it will happily do so for p
 is=custom-p. This is because there is no ARIA role for paragraphs that
 you could set in the createdCallback of your CustomP.

 However, this second point is IMO just a gap in the capabilities of ARIA
 that should be addressed. If we could assume it will be addressed on the
 same timeline as custom elements being implemented (seems ... not
 impossible), that still leaves the concern about having to duplicate all
 the functionality of a button, e.g. keyboard support, focus support,
 reaction to the presence/absence of the disabled attribute, etc.

 -Original Message-
 From: Edward O'Connor [mailto:eocon...@apple.com]
 Sent: Thursday, January 15, 2015 18:33
 To: WebApps WG
 Subject: Re: Minimum viable custom elements

 Hi all,

 Steve wrote:

  [I]t also does not address subclassing normal elements. Again, while
  that seems desirable
 
  Given that subclassing normal elements is the easiest and most robust
  method (for developers) of implementing semantics[1] and interaction
  support necessary for accessibility I would suggest it is undesirable
  to punt on it.

 Apologies in advance, Steve, if I'm missing something obvious. I probably
 am.

 I've been writing an article about turtles and I've gotten to the point
 that six levels of headings aren't enough. I want to use a seventh-level
 heading element in this article, but HTML only has h1–6. Currently, without
 custom elements, I can do this:

 <div role="heading" aria-level="7">Cuora amboinensis, the southeast Asian
 box turtle</div>

 Suppose instead that TedHaitchSeven is a subclass of HTMLElement and I've
 registered it as ted-h7. In its constructor or createdCallback or
 whatever, I add appropriate role and aria-level attributes. Now I can write
 this:

 <ted-h7>Cuora amboinensis, the southeast Asian box turtle</ted-h7>

 This is just as accessible as the div was, but is considerably more
 straightforward to use. So yay custom elements!

 If I wanted to use is= to do this, I guess I could write:

 <h1 is="ted-h7">Cuora amboinensis, the southeast Asian box turtle</h1>

 How is this easier? How is this more robust?

 I think maybe you could say this is more robust (if not easier) because,
 in a browser with JavaScript disabled, AT would see an h1. h1 is at
 least a heading, if not one of the right level. But in such a browser the
 div example above is even better, because AT would see both that the
 element is a heading and it would also see the correct level.

 OK, so let's work around the wrong-heading-level-when-JS-is-disabled
 problem by explicitly overriding h1's implicit heading level:

 <h1 is="ted-h7" aria-level="7">Cuora amboinensis, the southeast Asian box
 turtle</h1>

 I guess this is OK, but seeing aria-level=7 on an h1 rubs me the wrong
 way even if it's not technically wrong, and I don't see how this is easier
 or more robust than the other options.


 Thanks,
 Ted



I think you really need look no further than HTML as Custom Elements work
to see how difficult it would be to get accessibility right even if we had
good APIs, which, as Domenic pointed out, we really don't.

Anyway, it seems like one of the biggest criticisms we have seen of custom
elements anyone has made has to do with accessibility... It definitely
doesn't seem desirable to make it *harder* to get that right if we can
avoid it, because this could definitely play into the success or failure
story writ large.
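As a concrete illustration of the gap Domenic describes: a custom heading can bake in its semantics because ARIA has a heading role with aria-level, while a custom paragraph has no equivalent role at all. A sketch only — a real element would set these attributes on itself in createdCallback, and the helper below is hypothetical:

```javascript
// What a <ted-h7>-style element's createdCallback would need to compute:
// the role/aria-level pair that makes AT announce it as a heading.
function headingAttributes(level) {
  if (!Number.isInteger(level) || level < 1) {
    throw new RangeError('heading level must be a positive integer');
  }
  // ARIA has no "paragraph" role, but it does have heading + aria-level,
  // which is why a custom heading is mitigable and <custom-p> is not.
  return { role: 'heading', 'aria-level': String(level) };
}

const attrs = headingAttributes(7);
// attrs.role === 'heading', attrs['aria-level'] === '7'
```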



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-13 Thread Brian Kardell
On Tue, Jan 13, 2015 at 8:09 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Jan 13, 2015, at 3:46 PM, Brian Kardell bkard...@gmail.com wrote:



 On Tue, Jan 13, 2015 at 2:07 PM, Ryosuke Niwa rn...@apple.com wrote:

 To separate presentational information (CSS) from the semantics (HTML).
 Defining both style isolation boundaries and the associated CSS rules in an
 external CSS file will allow authors to change both of them without having
 to modify every HTML documents that includes the CSS file.  Of course, this
 is a non-starter for Web apps that require a lot of scripting, but style
 isolation is a very useful feature for a lot of static pages as well.

 - R. Niwa


 Ryosuke,

 Should you also be able to do this from JavaScript/DOM in your opinion?
 Like, forget shadow dom as it is today in chrome or proposed -- should you
 be able to do something like

 ```
 element.isolateTree = true;
 ```

 and achieve a similar effect?  If not, why specifically?


 Or element.setAttribute('isolatetree', true);  I can't think of a reason
 not to do this.

 - R. Niwa


So if that is a given, why can we not start there and explain how it would
work and use it to fashion increasingly high abstractions - hopefully with
the ability to do some experimentation outside of native implementations?


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-13 Thread Brian Kardell
On Tue, Jan 13, 2015 at 2:07 PM, Ryosuke Niwa rn...@apple.com wrote:

To separate presentational information (CSS) from the semantics (HTML).
 Defining both style isolation boundaries and the associated CSS rules in an
 external CSS file will allow authors to change both of them without having
 to modify every HTML documents that includes the CSS file.  Of course, this
 is a non-starter for Web apps that require a lot of scripting, but style
 isolation is a very useful feature for a lot of static pages as well.

 - R. Niwa


Ryosuke,

Should you also be able to do this from JavaScript/DOM in your opinion?
Like, forget shadow dom as it is today in chrome or proposed -- should you
be able to do something like

```
element.isolateTree = true;
```

and achieve a similar effect?  If not, why specifically?




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell
On Mon, Jan 12, 2015 at 4:57 PM, Ryosuke Niwa rn...@apple.com wrote:


  On Jan 12, 2015, at 4:13 AM, cha...@yandex-team.ru wrote:
 
  09.01.2015, 16:42, Anne van Kesteren ann...@annevk.nl:
  I'm wondering if it's feasible to provide developers with the
  primitive that the combination of Shadow DOM and CSS Scoping provides.
  Namely a way to isolate a subtree from selector matching (of document
  stylesheets, not necessarily user and user agent stylesheets) and
  requiring a special selector, such as >>>, to pierce through the
  boundary.
 
  Sounds like a reasonable, and perhaps feasible thing to do, but the
 obvious question is why?
 
  The use cases I can think of are to provide the sort of thing we do with
 BEM today. Is the effort worth it, or are there other things I didn't think
 of (quite likely, given I spent multiple seconds on the question)?

 The benefit of this approach is that all the styling information will be
 in one place.  CSS cascading rules is already complicated, and having to
 consult the markup to know where the selector boundary is will be yet
 another cognitive stress.

 - R. Niwa


If it is necessary to reflect similar behavior at the imperative end of
things with qsa/find/closest (at minimum) - and I think it is the least
surprising thing to do - then you've merely moved where the cognitive
stress is, and in a really new way... Suddenly your CSS is affecting your
understanding of the actual tree!  That seems bad.
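A toy model of what it would mean for qsa-style traversal to respect the same boundary CSS matching does. The tree shape and the isolated flag are assumptions for illustration, not any spec's API:

```javascript
// querySelectorAll-like walk over a plain-object tree that refuses to
// pierce an isolation boundary by default, so script traversal and CSS
// matching agree on where the boundary sits.
function collect(node, predicate, results = []) {
  for (const child of node.children || []) {
    if (child.isolated) continue; // skip isolated subtrees entirely
    if (predicate(child)) results.push(child);
    collect(child, predicate, results);
  }
  return results;
}

const tree = {
  children: [
    { tag: 'div', children: [] },
    { tag: 'section', isolated: true,
      children: [{ tag: 'div', children: [] }] },
  ],
};

const divs = collect(tree, (n) => n.tag === 'div');
// divs.length === 1 — the div inside the isolated section is not matched
```

An explicit piercing combinator would then be the one deliberate way to opt back in, rather than matching across the boundary by accident.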



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell


 Sure, here are some use cases I can think off the top of my head:

1. Styling a navigation bar which is implemented as a list of
hyperlinks
2. Styling an article in a blog
3. Styling the comment section in a blog article
4. Styling a code snippet in a blog article

 None of these scenarios require authors to write scripts.

 - R. Niwa


I'm sorry, this might be dense, but as use cases go those seem
incomplete... I believe you intend to illustrate something here, but I'm
not getting it... Is the idea that the nav bar wants to deliver "this is
how I am styled" without interference from the page, potentially through
some assembly on the server or preprocess or something?   Or is it just,
like, "this is actually really hard to manage with CSS and here's
potentially a way to make it 'scope' easier"?


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell
On Mon, Jan 12, 2015 at 7:45 PM, Ryosuke Niwa rn...@apple.com wrote:


 I understand your use case but please also understand that some authors
 don't want to write a few dozen lines of JavaScript to create a shadow DOM,
 and hundreds of lines of code or load a framework to decoratively isolate
 CSS rules in their pages.

 As far as I can tell, I am not talking about shadow dom or specifically
what authors would have to write... If I did, I didn't intend to do so.  I
am talking about how it is explained and where.  It could literally be as
simple as a single line of JavaScript for purposes of what I am discussing
- As anne mentioned earlier, it could just be a property of an element
potentially... Maybe you could even do with markup attribute.  I thought
that what we were discussing was how to isolate the simpler/potentially
less controversial bits of this.  Conceptually then, my point then is about
when you isolate to prevent accidental style leakage, it seems you nearly
always want to prevent qsa and traversal kinds of leakage too, and that it
wouldn't hurt you in some very rare case where you didn't explicitly *want*
it, as long as you can explicitly traverse the boundary with a combinator.



 Quick note based on some of your other responses - I actually didn't see
 any proposal in the suggestions about TreeScope or a non-parent/child link
 connector or something that talked about insertion points... I think that
 is a secondary question, as is event retargeting?  My comments are
 literally limited to the bare minimum stuff above without discussion of
 those.


 What questions do you have with regards with insertion points and event
 retargeting?  Are you asking whether they should happen as a side effect of
 having a style isolation?


I'm saying that you can talk about isolation without insertion points or
event retargeting, which is what I got out of the thread topic.  Maybe I'm
wrong?


 I would just say that we feel event retargeting should be treated as a
  separate concern from style isolation.  I'm not denying that style isolation
  with event retargeting is a valid use case, but there are use cases in which
 style isolation without event retargeting is desirable or retargeting needs
 to be implemented by frameworks.


Now I'm quite confused.  IIRC you are the one that brought up insertion
points earlier - was someone else talking about them?  In any case, I agree
with you, it's possible to have this conversation without those two as a
start and I'd suggest we do that.




 - R. Niwa




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell
On Mon, Jan 12, 2015 at 7:23 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Jan 12, 2015, at 4:16 PM, Brian Kardell bkard...@gmail.com wrote:


 Sure, here are some use cases I can think off the top of my head:

1. Styling a navigation bar which is implemented as a list of
hyperlinks
2. Styling an article in a blog
3. Styling the comment section in a blog article
4. Styling a code snippet in a blog article

 None of these scenarios require authors to write scripts.

 - R. Niwa


 I'm sorry, this might be dense, but as use cases go those seem
 incomplete... I believe you intend to illustrate something here, but I'm
 not getting it... Is the idea that the nav bar wants to deliver "this is
 how I am styled" without interference from the page, potentially through
 some assembly on the server or preprocess or something?   Or is it just,
 like, "this is actually really hard to manage with CSS and here's
 potentially a way to make it 'scope' easier"?


 It's both that the navigation bar wants to have its own set of CSS rules
 and doesn't want to get affected by other CSS rules; and it's hard to
 manage a large number of CSS rules manually without an encapsulation
 mechanism like a style isolation boundary [1].

 [1] http://stackoverflow.com/questions/2253110/managing-css-explosion

 - R. Niwa


Yeah, ok, that's what I thought you meant.  Professionally, I come up with
this case all the time and the number of cases where I want JavaScript ops
to inadvertently poke into me is 0.  Let's use your menu case, let's say
that I have a class=title on some menu elements and use id=main -
because, hey, that's one reason we like style isolation, we don't need to
invent complex strategies.  Fairly high odds that someone else in the page
will qsa those and do something bad to me inadvertently.  Having those
respect the same boundary for the same seems very, very natural to me.

Quick note based on some of your other responses - I actually didn't see
any proposal in the suggestions about TreeScope or a non-parent/child link
connector or something that talked about insertion points... I think that
is a secondary question, as is event retargeting?  My comments are
literally limited to the bare minimum stuff above without discussion of
those.




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell
On Mon, Jan 12, 2015 at 11:51 AM, Anne van Kesteren ann...@annevk.nl
wrote:

 On Mon, Jan 12, 2015 at 5:47 PM, Brian Kardell bkard...@gmail.com wrote:
  Controlling it through CSS definitely seems to be very high-level.  To
 me at
  least it feels like it requires a lot more answering of how since it
 deals
  with identifying elements by way of rules/selection in order to
  differentially identify other elements by way of rules/selection.  At the
  end of the day you have to identify particular elements as different
 somehow
  and explain how that would work.  It seems better to start there at a
  reasonably low level and just keep in mind that it might be a future aim
 to
  move control of this sort of thing fully to CSS.

 I'm not sure I agree. Unless you make all of CSS imperative it seems
 really hard to judge what belongs where.


It's important that at least qsa/find/closest style things defined in
Selectors match the same in script as in CSS matching.  Whatever solution
likely needs to include this consideration.

I've heard some worries about async/sync requirements regarding rendering
here but I'd say it's further than that if it's rule based too from my
perspective - this seems like something we're going to have to deal with
anyway in a larger sense of extensibility.  I wouldn't (personally) let
that dictate that we can't do this in script - there are lots of places
where that seems practical/controllable enough even now and we could make
globally better with some effort.

Basing this off something in CSS matching, as opposed to DOM means that new
roots can come into play (or leave) in a document-wide sense based on the
addition or removal or rules.  This seems confusing and problematic to me
and the combination of these was relevant to my comment about what's
matching what.  It seems to me that identifying a shadow root is concrete
to an instance and once it's there, it's there.  You can consciously choose
to combinator select through it or not, but it's there unless the physical
DOM changes.



 --
 https://annevankesteren.nl/




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-12 Thread Brian Kardell
On Mon, Jan 12, 2015 at 7:04 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, Jan 9, 2015 at 10:11 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  tl;dr: Cramming a subtree into a TreeScope container and then hanging
  that off the DOM would do the job for free (because it bakes all
  that functionality in).

 Sure, or we could expose a property that when set isolates a tree.
 Both a lot simpler than requiring ShadowRoot. However, it seems to me
 that ideally you can control all of this through CSS. The ability to
 isolate parts of a tree and have them managed by some other stylesheet
 or selector mechanism.


Controlling it through CSS definitely seems to be very high-level.  To me
at least it feels like it requires a lot more answering of how since it
deals with identifying elements by way of rules/selection in order to
differentially identify other elements by way of rules/selection.  At the
end of the day you have to identify particular elements as different
somehow and explain how that would work.  It seems better to start there at
a reasonably low level and just keep in mind that it might be a future aim
to move control of this sort of thing fully to CSS.  Since CSS matching
kind of conceptually happens on 'not exactly the DOM tree' (pseudo
elements, for example) it seems kind of similar to me and it might be worth
figuring that out before attempting another high-level feature which could
make answering 'what's the path up' all that much harder.

















 --
 https://annevankesteren.nl/




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Shadow tree style isolation primitive

2015-01-09 Thread Brian Kardell
On Jan 9, 2015 8:43 AM, Anne van Kesteren ann...@annevk.nl wrote:

 I'm wondering if it's feasible to provide developers with the
 primitive that the combination of Shadow DOM and CSS Scoping provides.
 Namely a way to isolate a subtree from selector matching (of document
 stylesheets, not necessarily user and user agent stylesheets) and
 requiring a special selector, such as , to pierce through the
 boundary.

 This is a bit different from the `all` property as that just changes
 the values of all properties, it does not make a selector such as
 div no longer match.

 So to be clear, the idea is that if you have a tree such as

   <section class="example">
 <h1>Example</h1>
 <div> ... </div>
   </section>

 Then a simple div selector would not match the innermost div if we
 isolated the section. Instead you would have to use section >>> div or
 some such. Or perhaps associate a set of selectors and style
 declarations with that subtree in some manner.


 --
 https://annevankesteren.nl/


For clarity, are you suggesting you'd control the matching boundary via CSS
somehow or you'd need an indicator in the tree?  A new element/attribute or
something like a fragment root (sort of a shadowroot-lite)?


Re: Shadow tree style isolation primitive

2015-01-09 Thread Brian Kardell
On Fri, Jan 9, 2015 at 10:49 AM, Anne van Kesteren ann...@annevk.nl wrote:


 I wasn't suggesting anything since I'm not sure what the best way
 would be. It has to be some flag that eventually ends up on an element
 so when you do selector matching you know what subtrees to ignore. If
 you set that flag through a CSS property you'd get circular
 dependencies, but perhaps that can be avoided somehow. Setting it
 through an element or attribute would violate separation of style and
 markup.

 Yeah, these are the reasons I ask - shadowRoot IMO solves these
parts of the problem in really the only sensible way I can imagine, but I
think what you're saying is that it's too much - and - is there a lesser
thing, something underneath that proposal which offers just this
part?  That's why I say: kind of a fragment root, of which a shadow root
could later be a special type, if we get to Shadow DOM.  I guess you're not
proposing that, but what about a proposal like that - would it answer your
concerns?



 --
 https://annevankesteren.nl/




-- 
Brian Kardell :: @briankardell :: hitchjs.com


[shadow dom] relitigation

2014-12-17 Thread Brian Kardell
I hate to tear open a wound, but it seems to me that two important browser
vendors have yet to buy into Shadow DOM.  It's currently listed by
Microsoft as under consideration but the sense I get is that the signal
isn't very positive right now.  Firefox is planning to move forward, Blink
has it unprefixed.

Things like document.register can be polyfilled fairly well and without too
much crazy.  If imports is controversial or we determine that we need more
experimentation to figure out what's down there in terms of other systems
like modules or fetch - we can do a lot of those experiments outside any
browser implementation too and use it to lead discussions.  I am all for
that, especially if we can lead the way in getting vendors to cooperate on
the polyfills and make some efforts to find future safe ways to do this.

But Shadow DOM - this is a different story.  It might not be a fundamental
primitive or DNA level thing, but it's well down there and actually
impossible to polyfill accurately and it is dark, dark magic requiring lots
of code to even fake it reasonably well.  There's a real risk that the
fidelity could actually cause problems when you jump to native too, I
think.

There seems to be a pretty large split in sentiment on Shadow DOM, or
perceived sentiment from developers.  From my perspective, a whole lot of
people tell me that they find Shadow DOM one of the most compelling pieces
of custom elements and without it, they're holding off.  Another thing they
tell me that frustrates them is that this makes it hard to share custom
elements - should they assume a Shadow DOM or not.

With Mozilla's post the other day[1] this has opened up a whole lot of new
conversations on my part and the preeminent question seems to be whether
there will be a positive signal from Apple or Microsoft or whether we need
to consider that as good as vapor for now.  For a lot of orgs,
consideration of switching to custom element and their plan for the next
few years is probably affected, as well as the state of the landscape and
where we will be shaping it.

With this in mind, I'm asking if anyone is willing to tip their hand at all
- even to the effect that if we get two interoperable, unprefixed
versions, we will follow... Any information I think is helpful - and
asking the question at least might move the conversation forward again (I
hope)?



1 - https://hacks.mozilla.org/2014/12/mozilla-and-web-components/

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [shadow dom] relitigation

2014-12-17 Thread Brian Kardell
On Wed, Dec 17, 2014 at 3:24 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi Brian,

 The WebKit team has given a lot of feedback over the years on the Shadow
 DOM spec.  We wouldn't have done that if we didn't care about it. :)  We're
 excited to hear that Mozilla is planning to give more feedback on Custom
 Elements and Shadow DOM because we feel that much of their feedback
 resonates with us.

 Having said that, our feedback has largely been dismissed or has not been
 adequately addressed.  I'm sure you can imagine that this does not
 encourage us to invest much more time or effort into providing additional
 feedback.

 - R. Niwa


Ryosuke,

Thanks for your response.

I can definitely appreciate that when you sink time into discussion and
things don't appear to go that way it seems frustrating and doesn't promote
good feelings about investing further.  At the same time, I'm sure you can
appreciate that this leaves things in a frustrating/confusing spot for so
many developers and their orgs around the world because of where this
particular piece of the puzzle lies.  I'm glad to hear that Mozilla's
position/feedback resonates but I'm still unclear.

I have followed all of these discussions pretty closely and even today
after some searching I am not sure about which feedback regarding Shadow
DOM specifically you feel still requires addressing?  Discussion about type
1, 2 boundary seems to have died off - was there some other?  Is there any
hope of resolving that if that's what you mean, or would this require
significant change?

Here's what I actually am unclear on at the end of the day:  Technicalities
around REC or process or politics aside - If we wind up with two
interoperable implementations in FF and Blink, will you still feel there is
something that needs addressing before sending the positive signal that
it'll eventually get implemented in Webkit?  If so, I feel like, as a
developer, now is the time I'd like to learn that.  Just as it is
frustrating to you above - it seems will be all the more frustrating for
everyone if that becomes the case and honestly, the development world is
just guessing what may be wildly uninformed guesses... That seems bad.  If
we need to have hard discussions, let's have them and get it out of the way
even if the result is something somewhat less than a commitment.

That's my 2 cents, anyway.  I'm not the chair, I'm not even a member - it's
just something I hear a lot of people discussing and thought worth bringing
into the open.


Brian Kardell :: @briankardell :: hitchjs.com


Re: [shadow dom] relitigation

2014-12-17 Thread Brian Kardell
On Wed, Dec 17, 2014 at 4:59 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Dec 17, 2014, at 3:18 PM, Brian Kardell bkard...@gmail.com wrote:

 On Wed, Dec 17, 2014 at 3:24 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi Brian,

 The WebKit team has given a lot of feedback over the years on the Shadow
 DOM spec.  We wouldn't have done that if we didn't care about it. :)  We're
 excited to hear that Mozilla is planning to give more feedback on Custom
 Elements and Shadow DOM because we feel that much of their feedback
 resonates with us.

 Having said that, our feedback has largely been dismissed or has not been
 adequately addressed.  I'm sure you can imagine that this does not
 encourage us to invest much more time or effort into providing additional
 feedback.


 I can definitely appreciate that when you sink time into discussion and
 things don't appear to go that way it seems frustrating and doesn't promote
 good feelings about investing further.  At the same time, I'm sure you can
 appreciate that this leaves things in a frustrating/confusing spot for so
 many developers and their orgs around the world because of where this
 particular piece of the puzzle lies.  I'm glad to hear that Mozilla's
 position/feedback resonates but I'm still unclear.


 I sympathize with the sentiment.  However, regardless of which browsers
 implement Shadow DOM and Custom Elements today, Web developers won't be
 able to use them without fallbacks since many users would be using older
 Web browsers that don't support these features.


Hopefully you don't mean this to the extent it sounds.  A whole lot of the
Web doesn't follow this model and there is a point past which this approach
increases complexity to the point where it is unmanageable. If tomorrow
everyone shipped these APIs interoperably (I won't hold my breath, I'm just
making a point) then within a few months this would be the common target
for vast swaths of the Web and private enterprise.  Try surfing the Web
with IE6, and if you're very lucky, most of it will simply prompt you to
upgrade your browser.  If your point is simply that until this is the case,
you'll have to ship polyfills - that's my whole point.  It's impossible to
polyfill that which is not yet agreed to, and where this fits in the puzzle
it's hard to even get it close.  If we were to say 'it's going to work just
like it does in chrome when we get around to it, and we will' (note: not
advocating this, just making a point) then there would be good reason to
consider using the polyfill.  Currently, it looks more like a prollyfill
and you might be stuck with it forever.  If we were stuck with
.registerElement forever, or even HTML Imports, it's probably not the end
of the world.  If they work for you today, they should work even better for
you tomorrow as the trend of perf is always faster.  But shadow dom is big
and complex and a different kind of bet.  We might be willing to use it for
development, even in production for smaller projects if it looks pretty
probable that we'll remove it and gain perf and drop the scary code.  It's
a different proposition if you can't.




 I have followed all of these discussions pretty closely and even today
 after some searching I am not sure about which feedback regarding Shadow
 DOM specifically you feel still requires addressing?  Discussion about type
 1, 2 boundary seems to have died off - was there some other?


 We've argued not to expose the shadow root of a host by default many
 times.  In fact, we got an agreement over a year ago to add a private mode
 (type II encapsulation) to the Shadow DOM.  However, the corresponding bug
 [1], which was filed in November 2012, hasn't been resolved yet.


Yes, I commented on it then too... thanks for the other links below too, I
couldn't find them but I recall now.  To be honest, I didn't get the
relationship with the transclusion thing[2] even then - seemed to mix
concerns to me.  Did anyone else bite on it?  I don't see anything
positive, but it's possible I missed it given that I was away during this
time.  I'm very curious on where that landed though in terms of who else
thought this was a problem/needed addressing like this.  #3 seems mostly
relevant to things beyond shadow root, like how it fits with imports.  Is
there some way to limit the scope and solve Shadow DOM L1 without imports
and saying only in the same origin or something?

What about this? Is it plausible to fork the draft and the prollyfills in
polymer and work out a counter-proposal?  While some might be unhappy that
Chrome released something unprefixed/not flagged on this front, you have to
at least give the Polymer guys mad props for the effort to ship a
prollyfill that works in all of the mainstream, modern browsers.

Believe it or not, I'm not interested in the politics of right or wrong
about shipping, I'm interested in finding a path forward that allows
competition in a healthy way.  A bad answer would be bad, I don't disagree

Re: HTML imports in Firefox

2014-12-15 Thread Brian Kardell
Very generally: this is actually why I said way back that a lot of things
seem like prollyfills (we hope that's the future) rather than polyfills
(it's a done deal) and advocated we make sure it's a future-safe, forward
compatible approach.

On Dec 15, 2014 4:06 PM, Ashley Gullen ash...@scirra.com wrote:

 On 15 December 2014 at 19:09, Boris Zbarsky bzbar...@mit.edu wrote:


 But more to the point, we're not shipping imports because we've gotten
feedback from a number of people that imports are not solving the problems
they actually need solved.  We'd prefer to not ship imports and then need
to ship yet another HTML import system that solves those problems.


 Well, imports work better for us than Javascript modules, for the reasons
I gave.

The p(r)olyfill is actually pretty decent and not huge - smaller than a lot
of module loaders.  For such an integral kind of platform feature, if we
have a fairly nice polyfill and things are potentially still debatable and
there are legit-seeming wants that aren't met, why not use it?

I hadn't given any feedback because everything looked great with HTML
imports and I was simply waiting for it to arrive in browsers. Maybe the
process biases feedback towards the negative? I guess you never hear the
chorus of cool, can't wait! from everyone looking forwards to it?

Currently, I agree, some of us are working on that so that we tighten the
feedback loop with both positive and negative feedback without overwhelming
the system.


 On 15 December 2014 at 19:09, Boris Zbarsky bzbar...@mit.edu wrote:

 On 12/15/14, 1:10 PM, Ashley Gullen wrote:

 Why would modules affect the decision to ship HTML imports?


 Because the interaction of the various import systems with each other
needs to be specified, for one thing.

 But more to the point, we're not shipping imports because we've gotten
feedback from a number of people that imports are not solving the problems
they actually need solved.  We'd prefer to not ship imports and then need
to ship yet another HTML import system that solves those problems.

 The webcomponents.org http://webcomponents.org polyfill for imports
 has a couple of caveats, so it doesn't look like it's totally equivalent
 and portable with browsers with native support, like Chrome which has
 shipped it already.


 Chrome has shipped a lot of things in this space already.  Feel free to
mail me privately for my feelings on the matter.  chrome shipping something
is not sufficient reason to ship something we're pretty sure is deficient,
I'm afraid.

 -Boris



[Push] one or many

2014-10-09 Thread Brian Kardell
I'm really confused by what seems to me like contradictory prose... In
the interface definition it says


Note that just a single push registration is allowed per webapp.


But in multiple places it seems to suggest otherwise, for example, in
the section on uniqueness it says:


webapps that create multiple push registrations are responsible for
mapping the individual registrations to specific app functions as
neces


Can someone clarify why those seem contradictory?  Can a webapp have 1
registration, or many?

-Brian



Re: [Push] one or many

2014-10-09 Thread Brian Kardell
On Thu, Oct 9, 2014 at 2:01 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Oct 9, 2014 at 7:53 PM, Brian Kardell bkard...@gmail.com wrote:
 Can someone clarify why those seem contradictory?  Can a webapp have 1
 registration, or many?

 The term webapp also seems wrong. There's no such established term and
 it does not mean anything in terms of security or typical API/protocol
 boundaries. As far as I can tell you can have a push registration per
 service worker. Which with the current design of service workers means
 push registrations are bound to URL scopes (which in turn are
 origin-scoped).


 --
 https://annevankesteren.nl/

They do define it in the spec at least[1], but I don't see how it can
mean both things.


[1] http://www.w3.org/TR/push-api/#dfn-webapp

-- 
Brian Kardell :: @briankardell :: hitchjs.com



Re: XMLHttpRequest: uppercasing method names

2014-08-12 Thread Brian Kardell
On Aug 12, 2014 9:29 AM, Anne van Kesteren ann...@annevk.nl wrote:

 In https://github.com/slightlyoff/ServiceWorker/issues/120 the
 question came up whether we should perhaps always uppercase method
 names as that is what people seem to expect. mnot seemed okay with
 adding appropriate advice on the HTTP side.

 The alternative is that we stick with our current subset and make that
 consistent across APIs, and treat other method names as
 case-sensitive.

 I somewhat prefer always uppercasing, but that would require changes
 to XMLHttpRequest.


 --
 http://annevankesteren.nl/


Both seem like common enough answers to this question that I think either
works.  I prefer the later just for consistency sake with xhr and the off
chance that we forgot to consider -something- with a change.  If there's no
really good reason to change it, least change is better IMO
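For reference, the rule under discussion can be sketched like this; the method list is the whitelist in the WHATWG XHR spec at the time, and the function name is just for illustration:

```javascript
// Methods that XMLHttpRequest normalizes: matched case-insensitively,
// then byte-uppercased. Anything else is sent case-sensitively as-is.
const NORMALIZED_METHODS = ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'POST', 'PUT'];

function normalizeMethod(method) {
  const upper = method.toUpperCase();
  return NORMALIZED_METHODS.includes(upper) ? upper : method;
}

console.log(normalizeMethod('get'));   // 'GET'  (on the list, uppercased)
console.log(normalizeMethod('Patch')); // 'Patch' (not on the list, untouched)
```

The always-uppercase alternative would simply be `method.toUpperCase()` with no list at all.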


Re: XMLHttpRequest: uppercasing method names

2014-08-12 Thread Brian Kardell
On Aug 12, 2014 11:12 AM, Takeshi Yoshino tyosh...@google.com wrote:

 On Tue, Aug 12, 2014 at 10:55 PM, Anne van Kesteren ann...@annevk.nl
wrote:

 On Tue, Aug 12, 2014 at 3:37 PM, Brian Kardell bkard...@gmail.com
wrote:
  If there's no really good reason to change it, least change is better
IMO

 All I can think of is that it would be somewhat more consistent to not
 have this list and always uppercase,


 Ideally


 but yeah, I guess I'll just align
 fetch() with XMLHttpRequest.


 Isn't it an option that we use a stricter rule (all uppercase) for the
newly-introduced fetch() API but keep the list for XHR? Aligning XHR and
fetch() is basically good, but making fetch() inherit the whitelist is a
little sad.



 Some archaeology:

 - Blink recently reduced the whitelist to conform to the latest WHATWG
XHR spec. http://src.chromium.org/viewvc/blink?view=revisionrevision=176592
 - Before that, used this list ported to WebKit from Firefox's behavior
http://trac.webkit.org/changeset/13652/trunk/WebCore/xml/xmlhttprequest.cpp
 - Anne introduced the initial version of the part of the spec in Aug 2006
http://dev.w3.org/cvsweb/2006/webapi/XMLHttpRequest/Overview.html.diff?r1=1.12;r2=1.13;f=h
 -- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0124.html
 -- http://lists.w3.org/Archives/Public/public-webapi/2006Apr/0126.html


fetch should explain the magic in XMLHttpRequest et al.  I don't see how it
could differ in the way you are suggesting and still match.


Re: Blocking message passing for Workers

2014-08-09 Thread Brian Kardell
On Aug 9, 2014 10:16 AM, David Bruant bruan...@gmail.com wrote:

 Le 09/08/2014 15:51, Alan deLespinasse a écrit :

 Thanks. Apparently I did a lousy job of searching for previous
discussions.

 I just found this later, longer thread:

 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0965.html
 http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0678.html
 (same thread, different year, so they're not linked)

 Has anything changed since that thread? It seems like the discussion
stalled in early 2012. But I'm glad to find that other people want the same
thing.

 This topic is on people minds [1]. My understanding of where we're at is
that ECMAScript 7 will bring syntax (async/await keywords [2]) that looks
like sync syntax, but acts asynchronously. This should eliminate the need
for web devs for blocking message passing primitives for workers.

 There is still a case for blocking primitives for projects that compile
from other languages (C, C++, Python, Java, C#, etc.) to JS [3].


I'm glad to be switching last night's twitter discussion to a bigger
medium.  My question here is: what is the proposal (if there is any) to
balance these and simultaneously ensure that we don't wind up limiting
ourselves or providing really bad foot guns or two APIs depending on
whether you're in the main thread or a worker?

 I personally hope it won't happen as it would be a step backwards.
Blocking communication (cross-thread/process/computer) was a mistake. We
need a culture shift. The browser and Node.js are a step in the right
direction (they did not initiate it, but helped popularize it).

 David

 [1] https://twitter.com/briankardell/status/497843660680351744
 [2] https://github.com/lukehoban/ecmascript-asyncawait#example
 [3] https://bugzilla.mozilla.org/show_bug.cgi?id=783190#c26
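The async/await shape referred to above can be sketched like so; `callWorker` is a hypothetical stand-in for a promise-returning wrapper around worker messaging, stubbed with a timer here so the sketch is self-contained:

```javascript
// Hypothetical promise-returning wrapper around a worker call: in a real
// page this would postMessage() and resolve when the reply arrives.
function callWorker(request) {
  return new Promise(resolve => {
    setTimeout(() => resolve(`result for ${request}`), 10);
  });
}

// Reads like blocking code, but the thread is never actually blocked:
// the function suspends at `await` and the event loop keeps running.
async function main() {
  const result = await callWorker('compute');
  return result;
}

main().then(result => console.log(result)); // logs 'result for compute'
```

This is the sense in which async/await removes most of the need for a blocking primitive: the ergonomics of sync code without freezing the thread.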



Re: =[xhr]

2014-08-01 Thread Brian Kardell
On Aug 1, 2014 9:52 AM, nmork_consult...@cusa.canon.com wrote:

 Thank you for letting me know my input is not desired.

As Tab said, you can visually and functionally lock user input in your tab
and even provide a progress meter. Nothing you suggest is difficult with
async XHR and promises, and it's less hostile.  How is this unreasonable?
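A sketch of that pattern: lock your own page's UI (not the whole browser) while an async task runs. The `overlay` object here is a stand-in for a full-page element that captures input and shows progress.

```javascript
// Lock the page's own UI while awaiting an async request, instead of
// blocking the whole browser the way sync XHR does.
async function withUiLock(overlay, task) {
  overlay.visible = true;       // show the input blocker / progress meter
  try {
    return await task();        // async work; the browser stays responsive
  } finally {
    overlay.visible = false;    // always unlock, even if the task fails
  }
}

// Usage with a stub overlay and a stub request:
const overlay = { visible: false };
withUiLock(overlay, async () => 'response')
  .then(result => console.log(result, overlay.visible)); // 'response' false
```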


 From:Tab Atkins Jr. jackalm...@gmail.com
 To:nmork_consult...@cusa.canon.com,
 Cc:public-webapps public-webapps@w3.org
 Date:08/01/2014 06:46 AM
 Subject:Re: =[xhr]
 




 On Aug 1, 2014 8:39 AM, nmork_consult...@cusa.canon.com wrote:
 
  Spinner is not sufficient.  All user activity must stop.  They can take
 a coffee break if it takes too long.  Browser must be frozen and locked
down completely.  No other options are desirable.  All tabs, menus, etc.
must be frozen.  That is exactly the desired result.

 By spinner, I also meant freezing other parts of the page as necessary,
or obscuring them so they can't be clicked.

 Asking to freeze the rest of the browser is unnecessary and extremely
user-hostile, and we don't support allowing content to do that.

 ~TJ


Re: Fallout of non-encapsulated shadow trees

2014-07-01 Thread Brian Kardell
[snip]
On Jul 1, 2014 10:07 PM, Maciej Stachowiak m...@apple.com wrote:

 (3) A two-way membrane at the API layer between a component and a script;
approximately, this would be the Structured Clone algorithm, but extended
to also translate references to DOM objects between the worlds.

Has this all been spelled out somewhere in more detail and I missed it? In
minutes maybe?  I'm very curious about it - references between worlds could
help in a whole number of ways beyond just this.  If it can be uncovered to
explain the existing platform (or provide a possible explanation) I'd like
to hear more.


Re: [Bug 25376] - Web Components won't integrate without much testing

2014-05-23 Thread Brian Kardell
On May 23, 2014 10:18 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, May 20, 2014 at 8:41 PM, Axel Dahmen bril...@hotmail.com wrote:
  I got redirected here from a HTML5 discussion on an IFrame's SEAMLESS
  attribute:
 
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=25376
 
  Ian Hickson suggested to publish my findings here so the Web Components
team
  may consider to re-evaluate the draft and probably amending the spec.

 Could you post your findings here?

Replying to the points from the bug, quoted by Tab below ...




 Digging through the bug thread, it appears you might be talking about
this:

  Web Components require a plethora of additional browser features and
behaviours.
 
Natively though, that seems like a good thing to me.

  Web Components require loads of additional HTML, CSS and client script
code for displaying content.
 
How? CSS seems the same either way, HTML could actually be significantly
lessened, and script depends on what you actually want to do.  If it's just
a fragment, the JS for a single fragment element would potentially serve
many, and you can describe a lot declaratively.


  Web Components install complex concepts (e.g. decorators) by
introducing unique, complex, opaque behaviours, abandoning the pure nature
of presentation.

 
Decorators were dropped last i checked, but many of the new features create
a lightweight alternative to iframes and, again, give us, new powers to
create.


  Web Components require special script event handling, so existing
script code cannot be reused.

Depends, but possibly.  Can you provide a specific case that works better
with iframes in this regard?

  Web Components require special CSS handling, so existing CSS cannot be
reused.
 
Same comment as above..



  Web Components unnecessarily introduce a new clumsy “custom”, or
“undefined” element, leaving the path of presentation. Custom Elements
could as easy be achieved using CSS classes, and querySelector() in ECMA
Script.
 
Definitely not, because as you say, we add new mechanisms to treat Custom
Elements (note title casing) as first-class things with a known lifecycle,
larger meaning, etc.  You could visually and interactively achieve similar
results from a user perspective, potentially, and nothing prevents you from
maintaining that mentality for your own use going forward.  What that
approach doesn't give you is a universal means to declaratively share these
with scores of users who don't have to understand all that, and for the
community to participate easily in finding out what actually works for us,
instead of spending years in a committee debating things only to find out
that, after all, it doesn't.



  The W3C DOM MutationObserver specification already provides
functionality equivalent to
insertedCallback()/readyCallback()/removeCallback().

MutationObservers, I believe, are neutral spec-wise on when they fire in
terms of parsing, but regardless of the spec, at least Mozilla does not
fire them during parse.  That turns out to be a pretty big deal, actually.
Ideally, though, we should be connecting APIs and layering them atop one
another, so the fact that something similar is possible with another API
does not make this a bad thing.



 Is this correct?  Is this the full list of comments you wish to make?

 ~TJ



Re: CfC: to create a new developer's list for WebApps' specs; deadline May 28

2014-05-21 Thread Brian Kardell
On May 21, 2014 10:29 AM, Arthur Barstow art.bars...@gmail.com wrote:

 On 5/21/14 7:02 AM, Anne van Kesteren wrote:

 Developers seem to complain about us using mailing lists to
 communicate rather than GitHub or some other centralized platform that
 is not email. Might be worth checking with them first.


 Yes, good point Anne. I tweeted this Q with some tags that were intended
to extend the reach. If others would also reach out, I would appreciate it.

 I realize mail lists are a tool and there could be better ones to
reach/engage the developer audience.

 -Thanks, AB


+ public-nextweb...

I've kind of thought of Web Platform Docs as the developer end of things
and W3C specs as for implementers and WGs - is it possible to set something
up under that banner? As a developer, I like the push orientation of
mailing lists too, but they suck for lots of other reasons.


Re: Custom Elements: 'data-' attributes

2014-05-08 Thread Brian Kardell
On Thu, May 8, 2014 at 5:37 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 8, 2014 at 12:53 AM, Ryosuke Niwa rn...@apple.com wrote:
  The answer to that question, IMO, is no.  It's not safe to use custom
  attributes without 'data-' if one wanted to write a forward compatible
 HTML
  document.

 Note that the question is scoped to custom elements, not elements in
 general.

 It seems kind of sucky that if you have already minted a custom
 element name, you still need to prefix all your attributes too.

  <j-details open="">

 reads a lot better than

  <j-details data-open="">

 The clashes are also likely to happen on the API side. E.g. if I mint
 a custom element and support a property named selectable. If that gets
 traction that might prevent us from introducing selectable as a global
 attribute going forward.


 --
 http://annevankesteren.nl/


What do the parsing rules say about what an attribute may begin with? Is it
plausible to just allow leading-underscore or leading-dash names, as in
CSS, so that all that's really necessary is for HTML to avoid using those
natively (not hard, because why would you) and then you provide an easy
hatch for good authors and get decent protection without getting too crazy?
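As a toy illustration of that convention (purely hypothetical - nothing like this is in the HTML spec), the check authors or tooling would apply is trivial:

```javascript
// Hypothetical convention floated above: attributes starting with '-' or
// '_' are reserved for authors, the way CSS reserves dash-prefixed idents,
// so HTML could safely mint anything else in the future.
function isAuthorAttribute(name) {
  return name.startsWith('-') || name.startsWith('_');
}

console.log(isAuthorAttribute('-open'));  // true  (author-defined)
console.log(isAuthorAttribute('_mode'));  // true  (author-defined)
console.log(isAuthorAttribute('open'));   // false (potentially native)
```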


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-26 Thread Brian Kardell
On Wed, Mar 26, 2014 at 4:53 PM, Scott Miles sjmi...@google.com wrote:

 Yes, I agree with what R. Niwa says.

 I believe there are many variations on what should happen during element
 lifecycle, and the element itself is best positioned to make those choices.

 `:unresolved` is special because it exists a-priori to the element having
 any control.

 Scott


 On Wed, Mar 26, 2014 at 12:26 PM, Ryosuke Niwa rn...@apple.com wrote:

 Maybe the problem comes from not distinguishing elements being created
 and ready for API access versus elements is ready for interactions?

 I'd also imagine that the exact appearance of a custom element between
 the time the element is created and the time it is ready for interaction
 will depend on what the element does.   e.g. img behaves more or less like
 display:none at least until the dimension is available, and then updates
 the screen as the image is loaded.  iframe on the other hand will occupy
 the fixed size in accordance to its style from the beginning, and simply
 updates its content.

 Given that, I'm not certain adding another pseudo element in UA is the
 right approach here.  I suspect there could be multiple states between the
 time element is created and it's ready for user interaction for some custom
 elements.  Custom pseudo, for example, seems like a more appealing solution
 in that regard.

 - R. Niwa

 On Mar 25, 2014, at 2:31 PM, Brian Kardell bkard...@gmail.com wrote:

 I'm working with several individuals of varying skillsets on using/making
 custom elements - we are using a way cut-back subset of what we think are
 the really stable just to get started but I had an observation/thought that
 I wanted to share with the list based on feedback/experience so far...

 It turns out that we have a lot of what I am going to call async
 components - things that involve calling 1 or more services during their
 creation in order to actually draw something useful on the screen.  These
 range from something simple like an RSS element (which, of course, has to
 fetch the feed) to complex wizards which have to consult a service to
 determine which view/step they are even on and then potentially additional
 request(s) to display that view in a good way.  In both of these cases I've
 seen confusion over the :unresolved pseudo-class.  Essentially, the created
 callback has happened so from the currently defined lifecycle state it's
 :resolved, but still not useful.  This can easily be messed up at both
 ends (assuming that the thing sticking markup in a page and the CSS that
 styles it are two ends) such that we get FOUC garbage between the time
 something is :resolved and when it is actually conceptually ready.  I
 realize that there are a number of ways to work around this and maybe do it
 properly such that this doesn't happen, but there are an infinitely
 greater number of ways to barf unhappy content into the screen before its
 time.  To everyone who I see look at this, it seems they conceptually
 associate :resolved with ok it's ready, and my thought is that isn't
 necessarily an insensible thing to think since there is clearly a
 pseudo-class about 'non-readiness' of some kind and nothing else that seems
 to address this.

 I see a few options, I think all of them can be seen as enhancements, not
 necessary to a v1 spec if it is going to hold up something important.   The
 first would be to let the created callback optionally return a promise - if
 returned we can delay :resolved until the promise is fulfilled.  The other
 is to introduce another pseudo like :loaded and let the author
 participate in that somehow, perhaps the same way (optionally return a
 promise from created).  Either way, it seems to me that if we had that, my
 folks would use that over the current definition of :resolved in a lot of
 cases.



 --
 Brian Kardell :: @briankardell :: hitchjs.com





Just to be clear, so there is no confusion (because I realize after talking
to Dimitri that I was being pretty long-winded about what I was saying):
 I'm simply saying what y'all are saying - the element is in the best place
to know that it's really fully cooked.  Yes, there could be N potential
states between 0 and fully cooked too, but we do know (at least I am
seeing repeatedly) that folks would like to participate in saying ok, now
I am fully cooked so that the CSS for it can be simple and sensible.

I'm not looking to change anything specifically (except maybe a little more
explicit callout of that in the spec), I'm just providing this feedback so
that we can all think about it in light of other proposals and
conversations we're all having and - maybe - if someone has good ideas you
could share them (offlist if you prefer, or maybe in public-nextweb) so
that those of us who are experimenting can try them out in library space...



-- 
Brian Kardell :: @briankardell :: hitchjs.com


[custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
I'm working with several individuals of varying skillsets on using/making
custom elements - we are using a way cut-back subset of what we think are
the really stable just to get started but I had an observation/thought that
I wanted to share with the list based on feedback/experience so far...

It turns out that we have a lot of what I am going to call "async
components" - things that involve calling 1 or more services during their
creation in order to actually draw something useful on the screen.  These
range from something simple like an RSS element (which, of course, has to
fetch the feed) to complex wizards which have to consult a service to
determine which view/step they are even on and then potentially additional
request(s) to display that view in a good way.  In both of these cases I've
seen confusion over the :unresolved pseudo-class.  Essentially, the created
callback has happened so from the currently defined lifecycle state it's
:resolved, but still not useful.  This can easily be messed up at both
ends (assuming that the thing sticking markup in a page and the CSS that
styles it are two ends) such that we get FOUC garbage between the time
something is :resolved and when it is actually conceptually ready.  I
realize that there are a number of ways to work around this and maybe do it
properly such that this doesn't happen, but there are an infinitely
greater number of ways to barf unhappy content into the screen before its
time.  To everyone I see look at this, it seems they conceptually
associate :resolved with "ok, it's ready", and my thought is that that isn't
necessarily an insensible thing to think, since there is clearly a
pseudo-class about 'non-readiness' of some kind and nothing else that seems
to address this.

I see a few options, I think all of them can be seen as enhancements, not
necessary to a v1 spec if it is going to hold up something important.   The
first would be to let the created callback optionally return a promise - if
returned we can delay :resolved until the promise is fulfilled.  The other
is to introduce another pseudo like :loaded and let the author
participate in that somehow, perhaps the same way (optionally return a
promise from created).  Either way, it seems to me that if we had that, my
folks would use that over the current definition of :resolved in a lot of
cases.
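A rough sketch of the first option, purely hypothetical (nothing in the spec today lets created return a promise - the names below are invented to make the idea concrete): the element only flips to :resolved once the returned promise settles.

```javascript
// Sketch of the proposed lifecycle (hypothetical, not the spec): if the
// created callback returns a promise, delay the ":resolved" state until
// that promise is fulfilled; otherwise resolve on the next microtask.
function runCreatedCallback(element, createdCallback) {
  element.resolved = false; // conceptually, the element matches :unresolved
  const result = createdCallback.call(element, element);
  // Promise.resolve adopts a returned thenable; a plain return value
  // resolves immediately on the next tick.
  return Promise.resolve(result).then(() => {
    element.resolved = true; // now it would match :resolved
    return element;
  });
}
```

An "async component" like the RSS element would then do its fetch inside created and return that promise, and the CSS side stays simple.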



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
On Tue, Mar 25, 2014 at 6:10 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

  Do custom elements present any new challenges in comparison to
 non-custom elements here? I feel like you have the same issue with filling
 a select with data from a remote source.

Only really the fact that select exposes no clue already that it isn't
:unresolved or something.  You can see how the hint of an "I'm not ready
yet" can be interpreted this way.  Precisely, if someone created an
<x-select data-src=...> kind of tag, then yes, I do think most people
would think that that indicated when the actual (populated) element was
ready.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
On Tue, Mar 25, 2014 at 6:27 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 Let me try and repeat this back to you, standards-nerd-style:

 Now that we have custom elements, there's even more need for notifying a
 style engine of a change in internal elements state -- that is, without
 expressing it in attributes (class names, ids, etc.). We want the ability
 to make custom pseudo classes.

 Now, Busta Rhymes-style

 Yo, I got change
 In my internal state.
 Style resolution
 It ain't too late.
 We got solution!
 To save our a**ses
 That's right, it's custom pseudo classes.

 :DG


Probably it comes as no shock that I agree with our want to push Custom
Pseudo-Class forward, and I am *very* pro experimenting in the community
(#extendthewebforward), so - in fact, I am already experimenting with both
Custom Pseudo-Classes in general and this specific case (returning a
promise).  I'm happy to go that route entirely, but I'm sharing because I
am seeing a fair amount of confusion over :unresolved as currently defined.
 At the very least, we might make an effort to spell it out in the spec a
little more and let people know when we talk to them.  Ultimately, from
what I am seeing on the ground, it seems like :loaded or :ready or
something which is potentially component author-informed is actually way
more useful a thing for us to wind up with.  We'll see; I'm not trying to
push it on anyone, I'm just trying to pick the brains of smart people and
provide feedback into the system (tighten the feedback loop, right?).


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [Bug 24823] New: [ServiceWorker]: MAY NOT is not defined in RFC 2119

2014-02-26 Thread Brian Kardell
On Feb 26, 2014 1:01 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * bugzi...@jessica.w3.org wrote:
 The section Worker Script Caching uses the term MAY NOT, which is not
 defined in RFC 2119.  I'm assuming this is intended to be MUST NOT or
maybe
 SHOULD NOT.

 If an agent MAY $x then it also MAY not $x. It is possible that the
 author meant "must not" or "should not" in this specific instance, but
 in general such a reading would be incorrect. Of course, specifications
 should not use constructs like "may not".
 --

Your use of "should not" and the logic implies that actually they may use
"may not", they just shouldn't.  Do you mean they may not?

 Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
 Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/



Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Tue, Jan 28, 2014 at 5:11 PM, Jake Archibald jaffathec...@gmail.com wrote:

 (I'm late to this party, sorry)

 I'm really fond of the <link rel=import elements="x-foo, x-bar">
 pattern, but yeah, you could end up with a massive elements list.

 How about making link[rel=import] async by default, but make elements with
 a dash in the tagname display:none by default?

 On a news article with components, the news article would load, the
 content would be readable, then the components would appear as they load.
 Similar to images without a width & height specified.

 As with images, the site developer could apply styling for the component
 roots before they load, to avoid/minimise the layout change as components
 load. This could be visibility:hidden along with a width  height (or
 aspect ratio, which I believe is coming to CSS), or display:block and
 additional styles to provide a view of the data inside the component that's
 good enough for a pre-enhancement render.

 This gives us:

 * Performance by default (we'd have made scripts async by default if we
 could go back right?)
 * Avoids FOUC by default
 * Can be picked up by a preparser
 * Appears to block rendering on pages that are build with a root web
 component

 Thoughts?

 Cheers,
 Jake.


I think that there are clearly use cases where either way feels right.
 It's considerably easier to tack on a pattern that makes async feel sync
than the reverse.  I'd like to suggest that Jake's proposal is -almost-
really good.  As an author, I'd be happier with the proposal if there were
just a little bit of sugar that made it very very easy to opt in and I
think that this lacks that only in that it relies either on a root level
component or some script to tweak something that indicates the body
visibility or display.  If we realize that this is going to be a common
pattern, why not just provide the simple abstration as part of the system.
 This could be as simple as adding something to section 7.2[1] which says
something like


The :unresolved pseudoclass may also be applied to the body element.  The
body tag is considered :unresolved until all of the elements contained in
the original document have been resolved.  This provides authors a simple
means to manage rendering FOUC, up to and including fully delaying
rendering of the page until the Custom Element dependencies are resolved,
while still defaulting to async/non-blocking behavior.

Example:
/* Apply body styles like background coloring,
   but don't render any elements until it's all ready... */
body:unresolved * {
  display: none;
}


WDYT?


[1] -
http://w3c.github.io/webcomponents/spec/custom/#unresolved-element-pseudoclass
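The proposed wording can be modelled with a trivial counter (a sketch of the semantics only; createBodyState and its members are invented names, not anything proposed for the spec): the body counts as unresolved while any element present in the original document is still unresolved.

```javascript
// Toy model of the proposed body:unresolved semantics: the body stays
// "unresolved" until every custom element that was in the original
// document has resolved.
function createBodyState(initialUnresolvedCount) {
  let pending = initialUnresolvedCount;
  return {
    // Would back the body:unresolved pseudo-class match.
    get bodyUnresolved() { return pending > 0; },
    // Called as each parse-time custom element resolves.
    elementResolved() { if (pending > 0) pending -= 1; },
  };
}
```

Elements added later (after parse) would not affect the count, which matches the "elements contained in the original document" wording above.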



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Wed, Jan 29, 2014 at 12:09 PM, Jake Archibald jaffathec...@gmail.com wrote:

 :unresolved { display: none; } plus lazyload (
 https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html#attr-lazyload)
 would allow devs to create the non-blocking behaviour. But this is the
 wrong way around. Devs should have to opt-in to the slow thing and get the
 fast thing by default.


Isn't that what I suggested?  I suggested that it be async, just as you said
- and that all we do is add the ability to use the :unresolved pseudo class
on the body.  This provides authors a simple means of control for opting
out of rendering in blocks above the level of the component without
resorting to the need to do it via script or a root level element which
serves no other real purpose. This level of ability seems not just simpler,
but probably more desirable - like a lot of authors I've done a lot of work
with things that pop into existence and cause relayout -- often the thing I
want to block or reserve space for isn't the specific content, but a
container or something.  Seems to me with the addition of a body level
:unresolved you could answer pretty much any use case for partial rendering,
from "just don't do it" all the way to "screw it, the thing pops into
existence" (the latter being the default), very very simply - and at the
right layer (CSS).


Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Wed, Jan 29, 2014 at 12:30 PM, Jake Archibald jaffathec...@gmail.com wrote:

 My bad, many apologies. I get what you mean now.

 However, if web components are explaining the platform then body is
 :resolved by browser internals (I don't know if this is how :resolved works
 currently). Eg, imagine select as a built-in component which is resolved
 and given a shadow DOM by internals.

 7.2 of custom elements states:


The :unresolved pseudoclass *must* match all custom
elements whose created callback has not yet been invoked.


I suppose this leaves wiggle room that it may actually in theory match on
native elements as well.  As you say, this is a nice explanation maybe for
all elements - though it doesn't seem remarkable that a custom element
would have something a native one wouldn't.  Either way, I think my proposal
holds up in basic theory; the only caveat is whether the thing on body is
just a specialized meaning of resolved that only applies to custom
elements, or whether you need a specific name for that thing, right?  It's
really entirely bikesheddable what that thing should be called or maps to -
there must be a name for "the document is done upgrading elements that were
in the tree at parse" - I don't think that is DOMContentLoaded, but
hopefully you take my point.  If we could agree that that solution works,
we could then have a cage match to decide on a good name :)




 On 29 January 2014 09:19, Brian Kardell bkard...@gmail.com wrote:

 On Wed, Jan 29, 2014 at 12:09 PM, Jake Archibald 
 jaffathec...@gmail.comwrote:

 :unresolved { display: none; } plus lazyload (
 https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html#attr-lazyload)
 would allow devs to create the non-blocking behaviour. But this is the
 wrong way around. Devs should have to opt-in to the slow thing and get the
 fast thing by default.


  Isn't that what I suggested?  I suggested that it be async, just as you
 said - and that all we do is add the ability to use the :unresolved pseudo
 class on the body.  This provides authors as a simple means of control for
 opting out of rendering in blocks above the level of the component without
 resorting to the need to do it via script or a root level element which
 serves no other real purpose. This level of ability seems not just simpler,
 but probably more desirable - like a lot of authors I've done a lot of work
 with things that pop into existence and cause relayout -- often the thing I
 want to block or reserve space for isn't the specific content, but a
 container or something.  Seems to me with addition of a body level
 :unresolved you could answer pretty much any use case for partial rendering
  from "just don't do it" all the way to "screw it, the thing pops into
  existence" (the latter being the default) very very simply - and at the
 right layer (CSS).








-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-14 Thread Brian Kardell


 As an alternate suggestion, and one that might dodge the subclassing
 issues, perhaps createShadowRoot could take an optional template argument
 and clone it automatically. Then this:

 this._root = this.createShadowRoot();
 this._root.appendChild(template.content.cloneNode(true));

 Could turn into this:

 this._root = this.createShadowRoot(template);

 Which is quite a bit simpler, and involves fewer basic concepts.


Just to be totally clear, you are suggesting that the latter would desugar
into precisely the former, correct?  What would happen if you called
createShadowRoot with some other kind of element?
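For what it's worth, here is roughly how I'd expect that sugar to desugar (createShadowRootWithTemplate is an invented helper name, not a proposed API, and throwing a TypeError is just one plausible answer to the "other kind of element" question):

```javascript
// Hypothetical desugaring of createShadowRoot(template): clone the
// template's content into the freshly created root.  Passing anything
// without a .content fragment throws, so "some other kind of element"
// would fail loudly rather than silently doing nothing.
function createShadowRootWithTemplate(host, template) {
  const root = host.createShadowRoot();
  if (template !== undefined) {
    if (!template.content) {
      throw new TypeError('expected a <template> element');
    }
    root.appendChild(template.content.cloneNode(true)); // deep clone
  }
  return root;
}
```

Note the deep clone - cloning a template's content shallowly would drop its children, which is almost never what a component author wants.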


Re: [custom elements] Improving the name of document.register()

2013-12-13 Thread Brian Kardell
On Dec 13, 2013 3:40 AM, Maciej Stachowiak m...@apple.com wrote:


 Thanks, Google folks, for considering a new name for document.register. Though
a small change, I think it will be a nice improvement to code clarity.

 Since we're bikeshedding, let me add a few more notes in favor of
defineElement for consideration:

 1) In programming languages, you would normally say you "define" or
"declare" a function, class structure, variable, etc. I don't know of any
language where you "register" a function or class.

My earlier comment/concern about confusion and overloaded terms was about
this exactly.  The language we are in here is js and we define a class
structure by subclassing, right?  The element is defined, its just that
that alone isn't enough - it has to be connected/plugged in to the larger
system by way of a pattern - primarily the parser, right?


 2) registerElement sounds kind of like it would take an instance of
Element and register it for some purpose. defineElement sounds more like it
is introducing a new kind of element, rather than registering a concrete
instance of an element..

I don't disagree with that.  all proposals are partially misleading/not
quite crystal clear IMO.  I don't think registerElement is the height of
perfection either and perhaps reasonable people could disagree on which is
clearer.  At the end of the day I am inclined to not let perfect be the
enemy of good.

 3) If we someday define a standardized declarative equivalent (note that
I'm not necessarily saying we have to do so right now), defineElement has
more natural analogs. For example, a "define" or "definition" element would
convey the concept really well. But a "register" or "registration" or even
"register-element" element would be a weird name.


Seems a similar problem here - you could be defining anything, plus HTML
already has a <dfn>... What about <element>?  That's already on the table
after a lot of discussion, I thought - is it not what you meant?

 4) The analogy to registerProtocolHandler is also not a great one, in my
opinion. First, it has a different scope - it is on navigator and applies
globally for the UI, rather than being on document and having scope limited
to that document. Second, the true parallel to registerProtocolHandler
would be registerElementDefinition. After all, it's not just called
registerProtocol. That would be an odd name. But defineElement conveys the
same idea as registerElementDefinition more concisely. The Web Components
spec itself says Element registration is a process of adding an element
definition to a registry.

The scope part seems not huge to me... But by the same kind of argument,
you might just as easily make the case that what we are really lacking is a
registry member or something not entirely unlike jQuery's plugins
conceptually.


 5) "Register with the parser" is not a good description of what
document.register does, either. It has an effect regardless of whether
elements are created with the parser. The best description is what the
custom elements spec itself calls it

Can you elaborate there?  What effect?  Lifecycle stuff via new?

 I feel that the preference for registerElement over defineElement may
partly be inertia due to the old name being document.register. Think about
it - is registerElement really the name you'd come up with, starting from a
blank slate?

For me, I think it still would be, if I wound up with a document level
method as opposed to some other approach like a registry object.  But
again, I am of the opinion that none of these is perfect and to some extent
reasonable people can disagree.  I am largely not trying to convince anyone
that one way is right.  If it goes down as defineElement, the world still
wins IMO.

I hope you will give more consideration to defineElement (which seems to be
the most preferred candidate among the non-register-based names).

 Thanks,
 Maciej


 On Dec 12, 2013, at 10:09 PM, Dominic Cooney domin...@google.com wrote:




 On Fri, Dec 13, 2013 at 2:29 AM, Brian Kardell bkard...@gmail.com
wrote:


 On Dec 11, 2013 11:48 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com
wrote:
 
 ...
  El 11/12/2013 21:10, Edward O'Connor eocon...@apple.com
escribió:
 
  Hi,
 
  The name register is very generic and could mean practically
anything.
  We need to adopt a name for document.register() that makes its
purpose
  clear to authors looking to use custom elements or those reading
someone
  else's code that makes use of custom elements.
 
  I think the method should be called registerElement, for these
reasons:
 
  - It's more descriptive about the purpose of the method than just
register.
  - It's not too verbose; it doesn't have any redundant part.
  - It's nicely parallel to registerProtocolHandler.
 
 
  I'd still refer declareElement (or defineElement) since
registerElement sounds as if we're registering an instance of element with
something.  Define and declare also match SGML/XML

Re: [custom elements] Improving the name of document.register()

2013-12-12 Thread Brian Kardell
On Dec 11, 2013 11:48 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com wrote:

 On Thu, Dec 12, 2013 at 5:17 AM, pira...@gmail.com pira...@gmail.com
wrote:

 I have seen registerProtocolHandler() and it's being discused
registerServiceWorker(). I believe registerElementDefinition() or
registerCustomElement() could help to keep going on this path.

 Send from my Samsung Galaxy Note II

 El 11/12/2013 21:10, Edward O'Connor eocon...@apple.com escribió:

 Hi,

 The name register is very generic and could mean practically
anything.
 We need to adopt a name for document.register() that makes its purpose
 clear to authors looking to use custom elements or those reading
someone
 else's code that makes use of custom elements.


 I support this proposal.


 Here are some ideas:

 document.defineElement()
 document.declareElement()
 document.registerElementDefinition()
 document.defineCustomElement()
 document.declareCustomElement()
 document.registerCustomElementDefinition()

 I like document.defineCustomElement() the most, but
 document.defineElement() also works for me if people think
 document.defineCustomElement() is too long.


 I think the method should be called registerElement, for these reasons:

 - It's more descriptive about the purpose of the method than just
register.
 - It's not too verbose; it doesn't have any redundant part.
 - It's nicely parallel to registerProtocolHandler.


 I'd still refer declareElement (or defineElement) since registerElement
sounds as if we're registering an instance of element with something.
 Define and declare also match SGML/XML terminologies.

 - R. Niwa


Define/declare seem a little confusing because we are in the imperative
space where these have somewhat different connotations.  It really does
seem to me that conceptually we are registering (connecting the definition)
with the parser or something.  For whatever that comment is worth.
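To make that "connecting the definition" reading concrete, a toy model (all names invented here; this is nothing like the real spec machinery, just the shape of the idea):

```javascript
// Toy model of element registration: a definition is added to a registry
// keyed by (dasherized) tag name; an upgrade pass then consults the
// registry for each matching element it encounters.
const registry = new Map();

function registerElement(name, definition) {
  if (!name.includes('-')) {
    throw new Error('custom element names must contain a dash');
  }
  registry.set(name.toLowerCase(), definition);
}

function upgrade(node) {
  const def = registry.get(node.tagName.toLowerCase());
  if (def && typeof def.created === 'function') {
    def.created.call(node); // lifecycle callback fires on upgrade
  }
  return node;
}
```

The definition exists before registration either way - registration is the step that lets the parser/upgrade pass find it, which is why "register" reads naturally to some of us.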


Re: [custom elements] Improving the name of document.register()

2013-12-11 Thread Brian Kardell
On Wed, Dec 11, 2013 at 3:17 PM, pira...@gmail.com pira...@gmail.com wrote:

 I have seen registerProtocolHandler() and it's being discused
 registerServiceWorker(). I believe registerElementDefinition() or
 registerCustomElement() could help to keep going on this path.


 Since a custom element is the only kind of element you could register,
"custom" seems redundant - similarly, it isn't
registerCustomProtocolHandler().

.registerElement is reasonably short and, IMO, adds the descriptiveness
that Ted is looking for?


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents] HTML Imports

2013-12-05 Thread Brian Kardell
I've been putting off a response on this, but I have some things to add...
The topic on this thread was originally HTML Imports - it seems like some
of the concerns expressed extend beyond imports and are a little wider
ranging.  I am cross posting this comment to public-next...@w3.org as I
think it is related.

I share the concern about letting out an API too early, but I think my
concerns are different.  In the past we worked (meaning browsers, devs,
stds groups) in a model in which things were released into the wild -
prefixed or not - without a very wide feedback loop.  At that point, the
practical realities leave not many good options for course correction or
even for small, but significant tweaks.  I think a lot is happening to
change that model and, as we can see in the case of everything with Web
Components (esp imports and selectors perhaps) the wider we throw the net
the more feedback we get from real people trying to accomplish real things
with real concerns - not just theory.  Some of this experimentation is
happening in the native space, but it is behind a flag, so we are shielded
from the problems above - no public Web site is relying on those things.
 And some of that is happening in the prollyfill space - Github FTW - in
projects like x-tags and polymer.  When we really look down through things
it does feel like it starts to become clear where the pain points are and
where things start to feel more stable.  With this approach, we don't need
to rush standardization in the large scale - if we can reasonably do it
without that and there seems to be wide questioning - let's hold off a bit.

HTML Imports, for example, are generating an *awful* lot of discussion - it
feels like they aren't cooked to me.  But virtually every discussion
involves elements we know we'd need to experiment in that space - modules
would allow one kind of experimentation, promises seem necessary for other
kinds, and so on.  There is a danger of undercooking, yes - but there is
also a danger in overcooking in the standards space alone that I think is
less evident:  No matter how good or bad something is technically, it needs
uptake to succeed.  If you think that ES6 modules have absolutely nothing
to do with this, for example, but through experimentation in the community
that sort of approach turns out to be a winner - it is much more valuable
than theoretical debate.  Debate is really good - but the advantage I think
we need to help exploit is that folks like Steve Souders or James Burke and
W3C TAG can debate and make their cases with working code without pulling
the proverbial trigger if we prioritize the right things and tools to make
it possible.  And no ones code needs to break in the meantime - the
JS-based approach you use today will work just as well tomorrow - better
actually because the perf curve of the browser and speed of machines they
run on is always up.

I don't think that "perfect imports" is necessarily the linchpin to value
in Web Components - it needn't block other progress to slow down the
standard on this one.  Other things like document.register already feel a
lot more stable.  Finding a way to evolve the Web is tricky, but I think
doable and the Web would be a lot better for it if we can get it right.


Re: [HTML Imports]: what scope to run in

2013-11-19 Thread Brian Kardell
On Nov 19, 2013 2:22 AM, Ryosuke Niwa rn...@apple.com wrote:


 On Nov 19, 2013, at 2:10 PM, Dimitri Glazkov dglaz...@chromium.org
wrote:

 On Mon, Nov 18, 2013 at 8:26 PM, Ryosuke Niwa rn...@apple.com wrote:

 We share the concern Jonas expressed here as I've repeatedly mentioned
on another threads.

 On Nov 18, 2013, at 4:14 PM, Jonas Sicking jo...@sicking.cc wrote:

 This has several downsides:
 * Libraries can easily collide with each other by trying to insert
 themselves into the global using the same property name.
 * It means that the library is forced to hardcode the property name
 that it's accessed through, rather allowing the page importing the
 library to control this.
 * It makes it harder for the library to expose multiple entry points
 since it multiplies the problems above.
 * It means that the library is more fragile since it doesn't know what
 the global object that it runs in looks like. I.e. it can't depend on
 the global object having or not having any particular properties.


 Or for that matter, prototypes of any builtin type such as Array.

 * Internal functions that the library does not want to expose require
 ugly anonymous-function tricks to create a hidden scope.


 IMO, this is the biggest problem.

 Many platforms, including Node.js and ES6 introduces modules as a way
 to address these problems.


 Indeed.

 At the very least, I would like to see a way to write your
 HTML-importable document as a module. So that it runs in a separate
 global and that the caller can access exported symbols and grab the
 ones that it wants.

 Though I would even be interested in having that be the default way of
 accessing HTML imports.


 Yes!  I support that.

 I don't know exactly what the syntax would be. I could imagine
something like

 In markup:
 <link rel=import href=... id=mylib>

 Once imported, in script:
 new $('mylib').import.MyCommentElement;
 $('mylib').import.doStuff(12);

 or

 In markup:
 <link rel=import href=... id=mylib import="MyCommentElement
doStuff">

 Once imported, in script:
 new MyCommentElement;
 doStuff(12);


 How about this?

 In the host document:
 <link rel=import href=foo.js import="foo1 foo2">
 <script>
 foo1.bar();
 foo2();
 </script>

 In foo.js:
 module foo1 {
 export function bar() {}
 }
 function foo2() {}


 I think you just invented the <module> element:
https://github.com/jorendorff/js-loaders/blob/master/rationale.md#examples


 Putting the backward compatibility / fallback behavior concern with
respect to the HTML parsing algorithm aside, the current proposal appears
to only support js files.  Are you proposing to extend it so that it can
also load HTML documents just like link[rel=import] does?


I think James Burke proposes something to that effect:
https://gist.github.com/jrburke/7455354#comment-949905 (relevant bit is in
reply to me, comment #4, if I understand the question)


Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Brian Kardell
Mixed response here...

 I love the idea of making HTML imports *not* block rendering as the
default behavior
In terms of custom elements, this creates, as a standard, the dreaded FOUC
problem about which a whole different group of people will be blogging and
tweeting... Right?  I don't know that the current solution is entirely
awesome, I'm just making sure we are discussing the same fact.  Also, links
in the head and links in the body both work; though the spec disallows the
latter, it works de facto - the former blocks, the latter doesn't, I think.
 This creates some interesting situations for people that use something
like a CMS where they don't get to own the head upfront.

 So, for what it's worth, the Polymer team has the exact opposite
desire. I of course acknowledge use cases
 where imports are being used to enhance existing pages, but the assertion
that this is the primary use case is  at least arguable.

Scott, is that because of what I said above (why polymer has the exact
opposite desire)?

  if we allow Expressing the dependency in JS then why doesn't 'async'
(or 'sync') get us both what we want?

Just to kind of flip this on its head a bit - I feel like it is maybe
valuable to think that we should worry about how you express it in JS
*first* and worry about declarative sugar for one or more of those cases
after.  I know it seems the boat has sailed on that just a little with
imports, but nothing is really final, else I think we wouldn't be having this
conversation in the first place.  Is it plausible to excavate an
explanation for the imports magic and define a JS API and then see how we
tweak that to solve all the things?
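As a strawman for that JS-first layer, imports could reduce to a promise-returning, URL-deduplicated primitive (entirely hypothetical; importDocument and fetchFn are invented names), with the declarative sync/async flavors layered on as sugar:

```javascript
// Sketch of an imperative imports core: loads are promise-based and
// de-duplicated by URL, so declarative sync and async variants could
// both be built on the same primitive.
const importCache = new Map();

function importDocument(url, fetchFn) {
  if (!importCache.has(url)) {
    importCache.set(url, Promise.resolve(fetchFn(url)));
  }
  return importCache.get(url); // same URL -> same in-flight promise
}
```

A "sync-feeling" consumer awaits the promise before rendering; an async one just kicks it off - which is exactly the tack-a-pattern-onto-async direction argued for above.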


Re: should mutation observers be able to observe work done by the html parser

2013-09-16 Thread Brian Kardell
Was there ever agreement on this old topic?
http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0618.html
Whether by de facto implementation or spec agreements?  I am not seeing
anything in the draft but maybe I am missing it...


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 2:51 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Sep 13, 2013, at 8:26 PM, Brian Kardell bkard...@gmail.com wrote:


 On Sep 13, 2013 4:38 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Sep 11, 2013, at 11:54 AM, Francois Remy r...@adobe.com wrote:
 
  For the record, I'm equally concerned about renaming `matchesSelector`
 into `matches`.
 
  A lot of code now rely on a prefixed or unprefixed version of
 `matchesSelector` as this has shipped in an interoperable fashion in all
 browsers now.
 
 
  Which browser ships matchesSelector unprefixed?
  Neither Chrome, Firefox, nor Safari ship matchesSelector unprefixed.
 
 
  On Sep 13, 2013, at 1:12 PM, Francois Remy r...@adobe.com wrote:
 
  A lot of code now rely on a prefixed or unprefixed version of
  `matchesSelector` as this has shipped in an interoperable fashion in
 all
  browsers now.
 
 
  Unprefixed?
 
 
  Yeah. Future-proofing of existing code, mostly:
 
 
 
 https://github.com/search?q=matchesSelector+msMatchesSelector&type=Code&ref=searchresults
 
 
  That’s just broken code.  One cannot speculatively rely on unprefixed
 DOM functions until they’re actually spec’ed and shipped.
  I have no sympathy or patience to maintain the backward compatibility
 with the code that has never worked.
 

 I am not really sure why you feel this way - this piece of the draft is
 tremendously stable, and interoperable as anything else.

 It's not interoperable at all. No vendor has ever shipped matchesSelector
 unprefixed as far as I know.  i.e. it didn't work anywhere unless you
 explicitly relied upon prefixed versions.

 Prefixes bound to vendors which may or may not match final and may or may
 not disappear when final comes around or just whenever, in release channel
 is exactly why most people are against this sort of thing now.  This
 predates that shift and regardless of whether we like it, stuff will break
 for people who were just following examples and going by the
 implementation/interop and  standard perception of stability.  Websites
 will stop working correctly - some will never get fixed - others will waste
 the time of hundreds or thousands of devs...

 Anyone using the prefixed versions should have a fallback path for old
 browsers that doesn't support it.  If some websites will break, then we'll
 simply keep the old prefixed version around but this is essentially each
 vendor's decision.  Gecko might drop sooner than other vendors for example.

 So.. Ok to keep prefix working in all browsers, but not just add it?  For
 the most part, isn't that just like an alias?

 Whether a browser keeps a prefixed version working or not is each vendor's
 decision.  Given that the unprefixed version has never worked, I don't see
 why we want to use the name matchesSelector as opposed to matches.

 - R. Niwa



I think the responses/questions are getting confused.  I'm not sure about
others, but my position is actually not that complicated:  This feature has
been out there and interoperable for quite a while - it is prefixed
everywhere and called matchesSelector.  Some potentially significant group
of people assumed that when it was unprefixed it would be called matches
and others matchesSelector.  Whatever we think people should do in terms
of having a fallback or not, we know reality often doesn't match that -
people support from a certain version forward.  However much we'd like
people to switch, lots of websites are an investment that doesn't get
revisited for a long time.  Thus: 1) let's not try to repurpose matches for
anything that doesn't match this signature (I thought I heard someone
advocating that early on) 2) let's make sure we don't disable those
prefixes and risk breaking stuff that assumed improperly ~or~ if possible -
since this is so bikesheddy, let's just make an alias in the spec given the
circumstances.



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Sep 16, 2013 3:46 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 12:03 PM, Brian Kardell bkard...@gmail.com
wrote:
  I think the responses/questions are getting confused.  I'm not sure
about
  others, but my position is actually not that complicated:  This feature
has
  been out there and interoperable for quite a while - it is prefixed
  everywhere and called matchesSelector.

 No, it's called *MatchesSelector, where * is various vendor prefixes.

Yeah, that is more accurately what I intended to convey - the delta being
the selector part.

  Some potentially significant group
  of people assumed that when it was unprefixed it would be called
matches
  and others matchesSelector.

 Regardless of what they assumed, there's presumably a case to handle
 older browsers that don't support it at all.  If the scripts guessed
 wrongly about what the unprefixed name would be, then they'll fall
 into this case anyway, which should be okay.

Yes, as long as prefixes stay around and we don't repurpose .matches for
another use, that's true.  I thought someone suggested the latter earlier
in the thread(s); I'd have to go back and look.

 If they didn't support down-level browsers at all, then they're
 already broken for a lot of users, so making them broken for a few
 more shouldn't be a huge deal. ^_^

This seems like a cop-out if there is an easy way to avoid breaking them.
Just leaving the prefixed ones there goes a long way, but I think we've
shown that some libs and uses happened before the decision to switch to
.matches, so they estimated that it would be unprefixed as .matchesSelector,
and people used them (or maybe used them before the lib was updated).  It
seems really easy to unprefix matchesSelector and say "see matches, it's an
alias" to prevent breakage.  If I'm alone on that, I'm not going to keep
beating it to death; it just seems easily forward-friendly.  I know I've
gotten calls for some mom-and-pop type project where I guessed wrong on
early standards in my younger days and, well - it sucks.  I'd rather not
put anyone else through that pain unnecessarily if there is a simple fix.
Maybe I am wrong about the level of simplicity, but it seems really
bikesheddy anyway.

 ~TJ


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 5:43 PM, Scott González scott.gonza...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 5:33 PM, Brian Kardell bkard...@gmail.com wrote:

 I think Francois shared a GitHub search which shows almost 15,500 uses
 expecting matchesSelector.


 As is generally the case, that GitHub search returns the same code
 duplicated thousands of times. From this search, it's impossible to tell
 which are forks of libraries implementing a polyfill or shim, which are
 projects that actually get released, which are projects that will never be
 released, and which will update their dependencies in a timely fashion
 (resulting in use of the proper method). It seems like a fair amount of
 these are actually just a few polyfills or different versions of jQuery.
 These results are also inflated by matches in source maps.



That's a good observation.  I hadn't considered that.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 4:29 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 1:05 PM, Brian Kardell bkard...@gmail.com wrote:
  If they didn't support down-level browsers at all, then they're
  already broken for a lot of users, so making them broken for a few
  more shouldn't be a huge deal. ^_^
 
  This seems like a cop out if there is an easy way to avoid breaking them.
  Just leaving the prefixed ones there goes a long way, but I think we've
  shown that some libs and uses either happened before the decision to
 switch
  to .matches so they forward estimated that it would be .matchesSelector
 and,
  people used them (or maybe they've used them before the lib was updated
  even).  It seems really easy to unprefix matchesSelector, and say see
  matches, it's an alias and prevent breakage.  If I'm alone on that, I'm
 not
  going to keep beating it to death, it just seems easily forward
 friendly.  I
  know I've gotten calls for some mom and pop type project where I guessed
  wrong on early standards in my younger days and, well - it sucks.  I'd
  rather not put anyone else through that pain unnecessarily if there is a
  simple fix.  Maybe I am wrong about the level of simplicity, but - it
 seems
  really bikesheddy anyway.

 Aliasing cruft is *often* very simple to add; that's not the point.
 It's *cruft*, and unnecessary at that.  Aliasing is sometimes a good
 idea, if you have a well-supported bad name and there's a really good
 alternate name you want to use which is way more consistent, etc.
 This isn't the case here - you're suggesting we add an alias for a
 term that *doesn't even exist on the platform yet*.



I feel like you are taking it to mean that I am advocating aliasing
everywhere for everything, which is simply not my intent.  I am saying
that in this one very particular case, because of the timing of things, it
seems like it would be a good idea to alias and be done with it.


 There are
 literally zero scripts which depend on the name matchesSelector,
 because it's never worked anywhere.  They might depend on the prefixed
 variants, but that's a separate issue to deal with.


I think Francois shared a GitHub search which shows almost 15,500 uses
expecting matchesSelector.  I think we all agree these should work just
fine as long as prefixes remain - but that's the point... With that many,
why worry about when someone wrote their code, or about unprefixing, or
trade lots more emails?  Seems an acceptable amount of cruft to me in this
case.  Having said that, I promise I will make no further case :)




 ~TJ




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-14 Thread Brian Kardell
On Sep 14, 2013 6:07 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Sat, Sep 14, 2013 at 4:26 AM, Brian Kardell bkard...@gmail.com wrote:
  I am not really sure why you feel this way - this piece of the draft is
  tremendously stable, and interoperable as anything else.  The decision
to
  make it matches was old and popular.  It's not just random joe schmoe
doing
  this, it's illustrated and recommended by respected sources... For
example
  http://docs.webplatform.org/wiki/dom/methods/matchesSelector

 1) I don't think that's a respected source just yet. 2) When I search
 for matchesSelector on Google I get
 https://developer.mozilla.org/en-US/docs/Web/API/Element.matches which
 reflects the state of things much better. Note that the name
 matchesSelector has been gone from the standard for a long time now.


  So.. Ok to keep prefix working in all browsers, but not just add it?
 For
  the most part, isn't that just like an alias?

 Depends on the implementation details of the prefixed version. FWIW,
 I'd expect Gecko to remove support for the prefixed version. Maybe
 after some period of emitting warnings. We've done that successfully
 for a whole bunch of things.


 --
 http://annevankesteren.nl/

I think there may be confusion because of where in the thread I responded -
it was unclear whom I was responding to (multiple people).  I pointed to the
web platform link because it is an example of a respected source: a) showing
how to write it for forward compat; b) assuming that, based on the
old/popular decision, it would be called matches.

I didn't use the moz ref because I think it is misleading in that: a) unlike
a *lot* of other moz refs, it doesn't show anything regarding using it with
other prefixes/unprefixing; b) the state of that doc now still wouldn't be
what someone referenced in a project they wrote 6 months or a year ago.

My entire point is that it seems, unfortunately, in this very specific
case, kind of reasonable that:
A) Element.prototype.matches() has to mean what .mozMatchesSelector() means
today.  It shouldn't be reconsidered, repurposed, or worrisome.
B) Enough stuff assumes Element.prototype.matchesSelector() to cause me
worry that it will prevent unprefixing.
C) We could bikeshed details all day long, but why not just add both, where
one is an alias for the other?  Then what Anne said about dropping prefixes
over time becomes less troubling, as the number of people who did neither
and don't rev becomes vanishingly small (still, if it is easy, why drop at
all?).

Very succinctly, I am suggesting: .matchesSelector be unprefixed, .matches
is an alias, and docs just say "see matchesSelector, it's an alias".  And
no one breaks.  And we avoid this in the future by following better
practices.
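The aliasing being suggested is cheap to emulate today.  A rough sketch of such a shim follows - everything here is illustrative, not spec text; in a real page you would pass Element.prototype, but a mock object is used so the sketch runs anywhere:

```javascript
// Rough sketch of the proposed aliasing: pick whichever matching entry
// point exists (unprefixed or vendor-prefixed) and expose it under both
// names.  Illustrative only, not the spec'd behavior.
function aliasMatches(proto) {
  var impl = proto.matches || proto.matchesSelector ||
             proto.mozMatchesSelector || proto.webkitMatchesSelector ||
             proto.msMatchesSelector || proto.oMatchesSelector;
  proto.matches = proto.matches || impl;
  proto.matchesSelector = proto.matchesSelector || impl;
  return proto;
}

// In a browser you would call: aliasMatches(Element.prototype);
// Mock prototype standing in for Element.prototype here:
var mock = { mozMatchesSelector: function (sel) { return sel === 'div'; } };
aliasMatches(mock);
console.log(mock.matches('div'));          // true
console.log(mock.matchesSelector('span')); // false
```

Code written against either guessed name would then keep working, which is the whole point of the alias argument above.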


Re: Making selectors first-class citizens

2013-09-13 Thread Brian Kardell
On Sep 13, 2013 4:38 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Sep 11, 2013, at 11:54 AM, Francois Remy r...@adobe.com wrote:

 For the record, I'm equally concerned about renaming `matchesSelector`
into `matches`.

 A lot of code now rely on a prefixed or unprefixed version of
`matchesSelector` as this has shipped in an interoperable fashion in all
browsers now.


 Which browser ships matchesSelector unprefixed?
 Neither Chrome, Firefox, nor Safari ship matchesSelector unprefixed.


 On Sep 13, 2013, at 1:12 PM, Francois Remy r...@adobe.com wrote:

 A lot of code now rely on a prefixed or unprefixed version of
 `matchesSelector` as this has shipped in an interoperable fashion in
all
 browsers now.


 Unprefixed?


 Yeah. Future-proofing of existing code, mostly:



https://github.com/search?q=matchesSelector+msMatchesSelector&type=Code&ref=searchresults


 That’s just broken code.  One cannot speculatively rely on unprefixed DOM
functions until they’re actually spec’d and shipped.
 I have no sympathy or patience to maintain the backward compatibility
with the code that has never worked.


I am not really sure why you feel this way - this piece of the draft is
tremendously stable, and as interoperable as anything else.  The decision to
make it matches was made long ago and is popular.  It's not just random Joe
Schmoe doing this; it's illustrated and recommended by respected sources...
For example:
http://docs.webplatform.org/wiki/dom/methods/matchesSelector

Essentially, this reaches the level of a de facto standard in my book.  All
it really lacks is a vote.

Prefixes bound to vendors - which may or may not match the final name, and
may or may not disappear when the final version comes around, or just
whenever - shipping in release channels is exactly why most people are
against this sort of thing now.  This predates that shift, and regardless
of whether we like it, stuff will break for people who were just following
examples and going by the implementation/interop and standard perception of
stability.  Websites will stop working correctly - some will never get
fixed - others will waste the time of hundreds or thousands of devs... This
isn't something that was implemented by 1 or 2 browsers, was hotly
contested, or has only been around a few months: this has been out there a
long time and implemented a long time.

 Furthermore, the existing code will continue to work with the prefixed
versions since we’re not suggesting to drop the prefixed versions.

But you could just as easily drop them, because they are prefixed and
experimental.  I guess I am just not understanding why we are OK to keep
around the crappily named prefixed ones, but not alias the better name that
is widely documented and definitely used, just so we can bikeshed a bit
more.  If there is also something better, let's find a way to add it
without needing to mess with this.

 - R. Niwa


So.. Ok to keep prefix working in all browsers, but not just add it?  For
the most part, isn't that just like an alias?


Re: Making selectors first-class citizens

2013-09-12 Thread Brian Kardell
On Sep 12, 2013 2:16 AM, Garrett Smith dhtmlkitc...@gmail.com wrote:

 FWD'ing to put my reply back on list (and to others)...

 On Sep 11, 2013 6:35 AM, Anne van Kesteren ann...@annevk.nl wrote:

 As far as I can tell Element.prototype.matches() is not deployed yet.
 Should we instead make selectors first-class citizens, just like
 regular expressions, and have

 var sel = new Selectors("i > love > selectors, so[much]")
 sel.test(node)

 # 2007 David Anderson proposed the idea.

 That seems like a much nicer approach.

 (It also means this can be neatly defined in the Selectors
 specification, rather than in DOM, which means less work for me. :-))

 # 2009 the API design remerged
 http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/1445.html

 # 2010 Selectors explained in an article:
 http://www.fortybelow.ca/hosted/dhtmlkitchen/JavaScript-Query-Engines.html
 (search Query Matching Strategy).
 --
 Garrett
 Twitter: @xkit
 personx.tumblr.com



I may be the only one, but... I am unsure what you are advocating here,
Garrett.


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 9:34 AM, Anne van Kesteren ann...@annevk.nl wrote:

 As far as I can tell Element.prototype.matches() is not deployed yet.
 Should we instead make selectors first-class citizens, just like
 regular expressions, and have this:

   var sel = new Selectors("i > love > selectors, so[much]")
   sel.test(node)

 That seems like a much nicer approach.

 (It also means this can be neatly defined in the Selectors
 specification, rather than in DOM, which means less work for me. :-))


 --
 http://annevankesteren.nl/


I like the idea, but matches has been in release builds for a long time,
right?  Hitch uses it.
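For concreteness, the proposed API quoted above can be sketched as a thin wrapper over whatever matching entry point already ships.  This is illustrative only, not spec text, and is demonstrated against a mock node so it runs outside a browser:

```javascript
// Hypothetical sketch of the proposed Selectors class: the constructor
// holds the selector text, and test() delegates to the element's
// (possibly prefixed) matching method.
function Selectors(selectorText) {
  this.selectorText = selectorText;
}
Selectors.prototype.test = function (node) {
  var m = node.matches || node.matchesSelector ||
          node.mozMatchesSelector || node.webkitMatchesSelector ||
          node.msMatchesSelector;
  return m.call(node, this.selectorText);
};

// Mock node standing in for a real element:
var sel = new Selectors('div.item');
var node = { matches: function (s) { return s === 'div.item'; } };
console.log(sel.test(node)); // true
```

A real implementation could parse once at construction time (one appeal of the first-class object over per-call string parsing), but the observable API would look roughly like this.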


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 11:11 AM, James Graham ja...@hoppipolla.co.uk wrote:

 On 11/09/13 15:50, Brian Kardell wrote:

 Yes, to be clear, that is what i meant. If it is in a draft and
 widely/compatibly implemented and deployed in released browsers not
 behind a flag - people are using it.


 If people are using a prefixed — i.e. proprietary — API there is no
requirement that a standard is developed and shipped for that API. It's
then up to the individual vendor to decide whether to drop their
proprietary feature or not.



Please note carefully what I said.  I don't think I am advocating anything
that hasn't been discussed a million times.  In theory what you say was the
original intent.  In practice, that's not how things went.  Browsers have
changed what used to be standard practice to help avoid this in the
future.  We are making cross-browser prollyfills outside browser
implementations to avoid this in the future.  What is done is done though.
The reality is that real and not insignificant production code uses
prefixed things that meet the criteria I stated.  If removed, those will
break.  If something with the same name but different signature or
functionality goes out unprefixed, things will break.


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Wed, Sep 11, 2013 at 12:26 PM, Brian Kardell bkard...@gmail.com wrote:


 On Sep 11, 2013 11:11 AM, James Graham ja...@hoppipolla.co.uk wrote:
 
  On 11/09/13 15:50, Brian Kardell wrote:
 
  Yes, to be clear, that is what i meant. If it is in a draft and
  widely/compatibly implemented and deployed in released browsers not
  behind a flag - people are using it.
 
 
  If people are using a prefixed — i.e. proprietary — API there is no
 requirement that a standard is developed and shipped for that API. It's
 then up to the individual vendor to decide whether to drop their
 proprietary feature or not.
 
 

 Please note carefully what i said.  I don't think I am advocating anything
 that hasn't been discussed a million times.  In theory what you say was the
 original intent.  In practice, that's not how things went.  Browsers have
 changed what used to be standard practice to help avoid this in the
 future.  We are making cross-browser prollyfills outside browser
 implementations to avoid this in the future.  What is done is done though.
 The reality is that real and not insignificant production code uses
 prefixed things that meet the criteria I stated.  If removed, those will
 break.  If something with the same name but different signature or
 functionality goes out unprefixed, things will break.


Mozillians, just for example:
https://github.com/x-tag/x-tag/blob/master/dist/x-tag-components.js#L2161

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 12:29 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/11/13 12:26 PM, Brian Kardell wrote:

 If something with the same name but different
 signature or functionality goes out unprefixed, things will break.


 Why is this, exactly?  Is code assuming that mozFoo, webkitFoo and
foo are interchangeable?  Because they sure aren't, in general.


 In any case, there is no mozMatches or webkitMatches, so matches
should be ok.


As things mature to the manner/degree I described, yes.  But this isn't
surprising, right?  When things reach this level, we feel pretty
comfortable calling them polyfills which do exactly what you describe: We
assume prefixed and unprefixed are equivalent.  We also feel comfortable
listing them on sites like caniuse.com and even working group members have
products that effectively just unprefix.  It's the same logic used by
Robert O'Callahan regarding unprefixing CSS selectors[1] and we ended up
doing a lot of that - and even prior to that there was talk of unprefixing
.matchesSelector as .matches right here on public web-apps[2].  When things
reach this point, we really have to consider what is out there and how
widely it has been promoted for how long.  I think it is too late for
matchesSelector for sure, and I'd be lying if I said I wasn't worried about
.matches().  I for one am very glad we are taking approaches that help us
not be in this boat - but the idea that something can be called as a
constructor or not isn't new either - can we make it backwards compat and
get the best of both worlds?  Given the similarities in what they do, it
doesn't seem to me like implementation is a problem.  In the very least, I
feel like we need to retain .matchesSelector for some time.

[1] http://lists.w3.org/Archives/Public/www-style/2011Nov/0271.html

[2] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1146.html


 -Boris




Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 10:04 AM, Robin Berjon ro...@w3.org wrote:

 On 11/09/2013 15:56 , Anne van Kesteren wrote:

 On Wed, Sep 11, 2013 at 2:52 PM, Brian Kardell bkard...@gmail.com
wrote:

 I like the idea, but matches has been in release builds for a long time,
 right?  Hitch uses it.


 <!DOCTYPE html><script>w('matches' in document.body)</script>
 http://software.hixie.ch/utilities/js/live-dom-viewer/

 false in both Firefox and Chrome.


 See http://caniuse.com/#search=matches. You do get mozMatchesSelector
(and variants) in there.


 --
 Robin Berjon - http://berjon.com/ - @robinberjon

Yes, to be clear, that is what I meant: if it is in a draft and
widely/compatibly implemented and deployed in released browsers, not behind
a flag, people are using it.  That's part of why we switched the general
philosophy, right?  No doubt one can be a shorthand for the better approach
though... right?


Re: [webcomponents]: The Shadow Cat in the Hat Edition

2013-09-09 Thread Brian Kardell
On Sep 9, 2013 9:32 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 9, 2013 at 6:20 PM, Scott Miles sjmi...@google.com wrote:
  I'd greatly prefer to stick with the current plan of having to mark
  things to be exposed explicitly,
 
  Fwiw, we tried that and got in the weeds right away. See Dimitri's post
for
  details. I'm afraid of trading real-life pain (e.g. exploding part
lists)
  for what is IMO an unreal advantage (e.g. the notion components can be
  upgraded and assured never to break is just not realistic).

 Did you miss my suggestion that we allow this with a third value on
 the current allow selectors through switch?

 ~TJ


I am worried that I am not understanding one or both of you properly and,
honestly... I am feeling just a bit lost.

For purposes here, consider that I have some kind of special table
component, complete with sortable and configurable columns.  When I use
that, I honestly don't want to know what is in the sausage - just how to
style or potentially deal with some parts.  If I start writing things
depending on the gory details, shame on me.  If you leave me no choice but
to do that, shame on you.  You can fool me once but you can't get fooled
again... Or something.

Ok, so, is there a problem with things at that simple level, or do the
problems only arise as I build a playlist component out of that table and
some other stuff, and in turn a music player out of that?  Is that the
exploding parts list?  Why is exposing explicitly bad?


Re: element Needs A Beauty Nap

2013-08-13 Thread Brian Kardell
On Tue, Aug 13, 2013 at 9:15 AM, Daniel Buchner dan...@mozilla.com wrote:

 I concur. On hold doesn't mean forever, and the imperative API affords us
 nearly identical feature capability. Nailing the imperative and getting the
 APIs to market is far more important to developers at this point.
 On Aug 12, 2013 4:46 PM, Alex Russell slightly...@google.com wrote:

 As discussed face-to-face, I agree with this proposal. The declarative
 form isn't essential to the project of de-sugaring the platform and can be
 added later when we get agreement on what the right path forward is.
 Further, polymer-element is evidence that it's not even necessary so long
 as we continue to have the plumbing for loading content that is HTML
 Imports.

 +1


 On Mon, Aug 12, 2013 at 4:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

 tl;dr: I am proposing to temporarily remove declarative custom element
 syntax (aka element) from the spec. It's broken/dysfunctional as
 spec'd and I can't see how to fix it in the short term.

 We tried. We gave it a good old college try. In the end, we couldn't
 come up with an element syntax that's both functional and feasible.

 A functional element would:

 1) Provide a way to declare new or extend existing HTML/SVG elements
 using markup
 2) Allow registering prototype/lifecycle callbacks, both inline and out
 3) Be powerful enough for developers to prefer it over document.register

 A feasible element would:

 1) Be intuitive to use
 2) Have simple syntax and API surface
 3) Avoid complex over-the-wire dependency resolution machinery

 You've all watched the Great Quest unfold over in public-webapps over
 the last few months.

 The two key problems that still remain unsolved in this quest are:

 A. How do we integrate the process of creating a custom element
 declaration [1] with the process of creating a prototype registering
 lifecycle callbacks?

 B. With HTML Imports [2], how do we ensure that the declaration of a
 custom element is loaded after the declaration of the custom element
 it extends? At the very least, how do we enable developers to reason
 about dependency failures?

 We thought we solved problem A first with "the incredible this" [3],
 and then with "the last completion value" [4], but early experiments
 are showing that this last completion value technique produces brittle
 constructs, since it forces specific statement ordering. Further, the
 technique ties custom element declaration too strongly to script. Even
 at the earliest stages, the developers soundly demanded the ability to
 separate ALL the script into a single, separate file.

 The next solution was to invent another quantum of time, where

 1) declaration and
 2) prototype-building come together at
 3) some point of registration.

 Unfortunately, this further exacerbates problem B: since (3) occurs
 neither at (1) or (2), but rather at some point in the future, it
 becomes increasingly more difficult to reason about why a dependency
 failed.

 Goram! Don't even get me started on problem B. By far, the easiest
 solution here would have been to make HTML Imports block on loading,
 like scripts. Unlucky for us, the non-blocking behavior is one of the
 main benefits that HTML Imports bring to the table. From here, things
 de-escalate quickly. Spirits get broken and despair rules the land.

 As it stands, I have little choice but to make the following proposal:

 Let's let declarative custom element syntax rest for a while. Let's
 yank it out of the spec. Perhaps later, when it eats more cereal and
 gathers its strength, it shall rise again. But not today.

 :DG

 [1]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-create-custom-element-declaration
 [2]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/imports/index.html
 [3]:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0152.html
 [4]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-last-completion-value




+1 - this is my preferred route anyway.  Concepts like register and shadow
dom are the core elements... Give projects like x-tags and polymer and even
projects like Ember and Angular some room to help lead the charge on asking
those questions and helping to offer potentially competing answers -- there
need be no rush to standardize at the high level at this point IMO.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: jar protocol

2013-05-10 Thread Brian Kardell
Would it be possible (not suggesting this would be the common story) to
reference a zipped asset directly via the full URL, sans a link tag?


Re: jar protocol

2013-05-10 Thread Brian Kardell


 Can you hash out a little bit more how this would work? I'm assuming you
mean something like:

   <img src='/bundle.zip/img/dahut.jpg'>

Meh, sorta - but I was missing some context on the mitigation strategies -
thanks for filling me in offline.

Still, same kind of idea - could you add an attribute that allowed an asset
to specify that it is available in a bundle?  I'm not suggesting that this is
fully thought out, or even necessarily useful, just fleshing out the
original question in a potentially more understandable/acceptable way...

  <img src='/products/images/clock.jpg' bundle='//products/images/bundle.zip'>

That should be pretty much infinitely back-compatible, and require no
special mitigation at the server (including configuration wise which many
won't have access to) - just that they share the root concept and don't
clash, which I think is implied by the server solution too, right?  Old UAs
would ignore the unknown bundle attribute and request the src as per usual.
 New UAs could make sure that an archive was requested only once and serve
the file out of the archive.  Presumably you could just add support into
that attribute for some simple way to indicate a named link too...

Pseudo-ish code - bikeshed the details, this is just to convey the idea:

   <link rel='bundle' name='products' href='//products/images/bundle.zip'>
   <img src='/img/dahut.jpg' bundle='link:products'>

I don't know if this is wise or useful, but one problem that I run into
frequently is that I see pages that mash together content where the author
doesn't get to control the head... This can make integration a little
harder than I think it should be. I'm not sure it matters, I suppose it
depends on:

a) where the link tag will be allowed to live

b) the effects created by including the same link href multiple times in
the same doc

This might be entirely sidetracking the main conversation, so I don't want
to lose that I really like where this is going so far sans any of my
questions/comments :)


Re: jar protocol

2013-05-10 Thread Brian Kardell
 I'm not sure it matters, I suppose it depends on:

 a) where the link tag will be allowed to live


 You can use link anywhere. It might not be valid, but who cares about
 validity :) It works.

Some people :)  Why does it have to be invalid when it works?  Lame, no?


 b) the effects created by including the same link href multiple times in
 the same doc

 No effect whatsoever beyond wasted resources.

Yeah, if a UA mitigated that somehow, it would address this pretty well.
It should be cached the second time, I suppose, but there has to be
overhead in re-treating it as a fresh request.  Maybe they are smart enough
to deal with that already.
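The mitigation being discussed amounts to fetching each bundle URL at most once, no matter how many elements reference it.  A rough sketch of that dedup behavior - function names invented for illustration, with a stub standing in for the real network call:

```javascript
// Sketch: a UA or polyfill would keep a per-document cache keyed by
// bundle URL, so repeated references cost nothing extra.
var bundleCache = Object.create(null);
function fetchBundleOnce(url, fetchFn) {
  if (!(url in bundleCache)) {
    bundleCache[url] = fetchFn(url); // e.g. a Promise for the archive
  }
  return bundleCache[url];
}

// Demonstration with a counting stub in place of a real fetch:
var calls = 0;
function stubFetch(url) { calls++; return 'archive:' + url; }
fetchBundleOnce('/products/images/bundle.zip', stubFetch);
fetchBundleOnce('/products/images/bundle.zip', stubFetch);
console.log(calls); // 1
```

With something like this, including the same link href (or bundle attribute) multiple times in a document would waste no resources beyond the cache lookup.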
 --
 Robin Berjon - http://berjon.com/ - @robinberjon

--
Brian Kardell :: @briankardell :: hitchjs.com


Re: URL comparison

2013-05-01 Thread Brian Kardell
+ the public-nextweb list...

On Wed, May 1, 2013 at 9:00 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, Apr 28, 2013 at 12:56 PM, Brian Kardell bkard...@gmail.com wrote:
 We created a prollyfill for this about a year ago (called :-link-local
 instead of :local-link for forward compatibility):

 http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

 Cool!


 If you can specify the workings, we (public-nextweb community group) can rev
 the prollyfill, help create tests, collect feedback, etc so that when it
 comes time for implementation and rec there are few surprises.

 Did you get any feedback thus far about desired functionality,
 problems that are difficult to overcome, ..?


 --
 http://annevankesteren.nl/

We have not uncovered much on this one, other than that the few people
who commented were confused by what it meant - but we didn't really
make a huge effort to push it out there... By comparison to some
others it isn't a very 'exciting' fill (our :has(), for example, drew
lots of comment, as did our mathematical attribute selectors) - but we
definitely can.  I'd like to open it up to these groups in whatever way
you think might be an effective means of collecting the necessary data.
Should we ask people to contribute comments to the list?  Set up a git
project where people can pull/create issues, register tests/track fork
suggestions, etc.?  Most of our stuff for collecting information has
been admittedly all over the place (twitter, HN, reddit, blog
comments, etc.), but this predates the nextweb group and larger
coordination, so I'm *very happy* if we can begin to change that.



--
Brian Kardell :: @briankardell :: hitchjs.com



Re: URL comparison

2013-04-28 Thread Brian Kardell
On Apr 25, 2013 1:39 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 4:34 AM, Anne van Kesteren ann...@annevk.nl
wrote:
  Background reading: http://dev.w3.org/csswg/selectors/#local-pseudo
  and http://url.spec.whatwg.org/
 
  :local-link() seems like a special case API for doing URL comparison
  within the context of selectors. It seems like a great feature, but
  I'd like it if we could agree on common comparison rules so that when
  we eventually introduce the JavaScript equivalent they're not wildly
  divergent.

 My plan is to lean *entirely* on your URL spec for all parsing,
 terminology, and equality notions.  The faster you can get these
 things written, the faster I can edit Selectors to depend on them. ^_^

  Requests I've heard before I looked at :local-link():
 
  * Simple equality
  * Ignore fragment
  * Ignore fragment and query
  * Compare query, but ignore order (e.g. ?xy will be identical to
  ?yx, which is normally not the case)
  * Origin equality (ignores username/password/path/query/fragment)
  * Further normalization (browsers don't normalize as much as they
  could during parsing, but maybe this should be an operation to modify
  the URL object rather than a comparison option)
 
  :local-link() seems to ask for: Ignore fragment and query and only
  look at a subset of path segments. However, :local-link() also ignores
  port/scheme which is not typical. We try to keep everything
  origin-scoped (ignoring username/password probably makes sense).

 Yes.

  Furthermore, :local-link() ignores a final empty path segment, which
  seems to mimic some popular server architectures (although those
  ignore most empty path segments, not just the final), but does not
  match URL architecture.

 Yeah, upon further discussion with you and Simon, I agree we shouldn't
 do this.  The big convincer for me was Simon pointing out that /foo
 and /foo/ have different behavior wrt relative links, and Anne
 pointing out that the URL spec still makes example.com and
 example.com/ identical.

  For JavaScript I think the basic API will have to be something like:
 
  url.equals(url2, {query:ignore-order})
  url.equals(url2, {query:ignore-order, upto:fragment}) // ignores
fragment
  url.equals(url2, {upto:path}) // compares everything before path,
  including username/password
  url.origin == url2.origin // ignores username/password
  url.equals(url2, {pathSegments:2}) // implies ignoring query/fragment
 
  or some such. Better ideas more than welcome.

 Looks pretty reasonable.  Only problem I have is that your upto key
 implicitly orders the url components, when there are times I would
 want to ignore parts out-of-order.

 For example, sometimes the query is just used for incidental
 information, and changing it doesn't actually result in a different
 page.  So, you'd like to ignore it when comparing, but pay attention
 to everything else.

 So, perhaps in addition to upto, an ignore key that takes a string
 or array of strings naming components that should be ignored?

 This way, :local-link(n) would be equivalent to:
 linkurl.equals(docurl, {pathSegments:n, ignore:userinfo})

 :local-link would be equivalent to:
 linkurl.equals(docurl, {upto:fragment})  (Or {ignore:fragment})

 ~TJ


Anne/Tab,

We created a prollyfill for this about a year ago (called :-link-local
instead of :local-link for forward compatibility):

http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

If you can specify the workings, we (public-nextweb community group) can
rev the prollyfill, help create tests, collect feedback, etc so that when
it comes time for implementation and rec there are few surprises.
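To make the shape under discussion concrete, here is a hedged sketch of the `url.equals(url2, {...})` idea, built on the standard `URL` parser (a global in browsers and Node). The option name `ignore` and the part names below are my guesses at what's being proposed, not a specified API.

```javascript
// Sketch only: compare two URLs part by part, skipping any parts named in
// `ignore`. Part names ("scheme", "userinfo", ...) are illustrative.
function urlEquals(a, b, { ignore = [] } = {}) {
  const u1 = new URL(a);
  const u2 = new URL(b);
  const skip = new Set(ignore);
  const parts = [
    ["scheme", (u) => u.protocol],
    ["userinfo", (u) => `${u.username}:${u.password}`],
    ["host", (u) => u.host], // host includes the port, when present
    ["path", (u) => u.pathname],
    ["query", (u) => u.search],
    ["fragment", (u) => u.hash],
  ];
  return parts.every(([name, get]) => skip.has(name) || get(u1) === get(u2));
}
```

Under this shape, something like `:local-link`'s "ignore fragment and query" comparison would be `urlEquals(linkHref, docHref, { ignore: ["fragment", "query"] })`; the order-insensitive query comparison discussed above would need an extra option on top.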


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Brian Kardell
Can Scott or Daniel or someone explain the challenge with creating a
normal constructor that has been mentioned a few times (Scott mentioned
has-a).  I get the feeling that several people are playing catch up on that
challenge and the implications that are causing worry.  Until people have
some shared understanding it is difficult to impossible to reach something
acceptable all around.  Hard to solve the unknown problems.


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-13 Thread Brian Kardell
On Apr 13, 2013 8:57 PM, Daniel Buchner dan...@mozilla.com wrote:

 @Rick - if we generated a constructor that was in scope when the script
was executed, there is no need for rebinding 'this'. I'd gladly ditch the
rebinding in favor of sane, default, generated constructors.

I think we need someone to summarize where we are at this point :)

Is anyone besides Scott in favor of the

2) Invent a new element specifically for the purpose of defining prototypes

For the record, i am not.


Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Brian Kardell
On Mon, Mar 18, 2013 at 5:05 PM, Scott Miles sjmi...@google.com wrote:
 I'm already on the record with A, but I have a question about 'lossiness'.

 With my web developer hat on, I wonder why I can't say:

 div id=foo
   shadowroot
 shadow stuff
   /shadowroot

   light stuff

 /div


 and then have the value of #foo.innerHTML still be

   shadowroot
  shadow stuff
   /shadowroot

   lightstuff

 I understand that for DOM, there is a wormhole there and the reality of what
 this means is new and frightening; but as a developer it seems to be
 perfectly fine as a mental model.

 We web devs like to grossly oversimplify things. :)

 Scott

I am also a Web developer and I find that proposal (showing in
innerHTML) feels really wrong/unintuitive to me... I think that is
actually a feature, not a detriment and easily explainable.

I am in a) camp



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Brian Kardell
On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:

 So, what you quoted are thoughts I already deprecated mysefl in this
thread. :)

 If you read a bit further, see that  I realized that shadow-root is
really part of the 'outer html' of the node and not the inner html.

Yeah sorry, a connectivity issue prevented me from seeing those until after I
sent, I guess.

  I think that is actually a feature, not a detriment and easily
explainable.

 What is actually a feature? You mean that the shadow root is invisible to
innerHTML?



Yes.

 Yes, that's true. But without some special handling of Shadow DOM you get
into trouble when you start using innerHTML to serialize DOM into HTML and
transfer content from A to B. Or even from A back to itself.


I think Dimitri's implication iii is actually intuitive - that is what I am
saying... I do think that round-tripping via innerHTML would be lossy of
declarative markup used to create the instances inside the shadow... to get
that it feels like you'd need something else which I think he also
provided/mentioned.

Maybe I'm alone on this, but it's just sort of how I expected it to work
all along... Already, round-tripping can differ from the original source.  If
you aren't careful this can bite you in the hind-quarters, but it is
actually sensible.  Maybe I need to think about this a little deeper, but I
see nothing at this stage to make me think that the proposal and
implications are problematic.


Re: [webcomponents]: de-duping in HTMLImports

2013-04-09 Thread Brian Kardell
On Tue, Apr 9, 2013 at 2:42 PM, Scott Miles sjmi...@google.com wrote:
 Duplicate fetching is not observable, but duplicate parsing and duplicate
 copies are observable.

 Preventing duplicate parsing and duplicate copies allows us to use 'imports'
 without a secondary packaging mechanism. For example, I can load 100
 components that each import 'base.html' without issue. Without this feature,
 we would need to manage these dependencies somehow; either manually, via
 some kind of build tool, or with a packaging system.

 If import de-duping is possible, then ideally there would also be an
 attribute to opt-out.

 Scott


 On Tue, Apr 9, 2013 at 11:08 AM, Dimitri Glazkov dglaz...@google.com
 wrote:

 The trick here is to figure out whether de-duping is observable by the
 author (other than as a performance gain). If it's not, it's a
 performance optimization by a user agent. If it is, it's a spec
 feature.

 :DG

 On Tue, Apr 9, 2013 at 10:53 AM, Scott Miles sjmi...@google.com wrote:
  When writing polyfills for HTMLImports/CustomElements, we included a
  de-duping mechanism, so that the same document/script/stylesheet is not
  (1)
  fetched twice from the network and (2) not parsed twice.
 
  But these features are not in specification, and are not trivial as
  design
  decisions.
 
  WDYT?
 
  Scott
 



For what it is worth, I think I might have opened a bug on this
already (long ago) - but it would have been mixed in with a larger
'how to load them'...

--
Brian Kardell :: @briankardell :: hitchjs.com
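For reference, the de-duping Scott describes can be sketched in a few lines: fetch and parse each href at most once, share the result with every importer, and expose the opt-out he asks for. All names here (`makeImportLoader`, `fetchAndParse`) are illustrative stand-ins, not anything specified.

```javascript
// Minimal de-duping import loader sketch. `fetchAndParse` stands in for
// the real network-fetch-and-parse step a user agent would perform.
function makeImportLoader(fetchAndParse) {
  const cache = new Map(); // href -> parsed import, filled at most once
  return function importHref(href, { dedupe = true } = {}) {
    if (!dedupe) return fetchAndParse(href); // the proposed opt-out attribute
    if (!cache.has(href)) cache.set(href, fetchAndParse(href));
    return cache.get(href);
  };
}
```

With this, 100 components each importing base.html cost one fetch and one parse. A real loader would absolutize hrefs against the importing document's base URL before keying the cache, which is exactly where the de-duping becomes observable to authors rather than a pure optimization.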



Re: [webcomponents]: Naming the Baby

2013-03-28 Thread Brian Kardell
On Mar 28, 2013 11:45 AM, Dimitri Glazkov dglaz...@google.com wrote:

 So. :

 rel type: import

 spec name:

 1) HTML Imports
 2) Web Imports

 :DG


Makes sense to me!


Re: [webcomponents]: Naming the Baby

2013-03-27 Thread Brian Kardell
On Mar 27, 2013 2:27 PM, Scott Miles sjmi...@google.com wrote:

 The problem I'm trying to get at, is that while a 'custom element' has a
chance of meeting your 1-6 criterion, the thing on the other end of link
rel='to-be-named'... has no such qualifications. As designed, the target
of this link is basically arbitrary HTML.

 This is why I'm struggling with link rel='component' ...

 Scott


 On Wed, Mar 27, 2013 at 10:20 AM, Angelina Fabbro 
angelinafab...@gmail.com wrote:

 Just going to drop this in here for discussion. Let's try and get at
what a just a component 'is':

 A gold-standard component:

 1. Should do one thing well
 2. Should contain all the necessary code to do that one thing (HTML, JS,
CSS)
 3. Should be modular (and thus reusable)
 4. Should be encapsulated
 5. (Bonus) Should be as small as it can be

 I think it follows, then, that a 'web component' is software that fits
all of these criteria, but for explicit use in the browser to build web
applications. The tools provided - shadow DOM, custom elements etc. give
developers tools to create web components. In the case of:

 link rel=component href=..

 I would (as mentioned before) call this a 'component include' as I think
this description is pretty apt.

 It is true that widgets and components are synonymous, but that has been
that way for a couple of years now at least already. Widgets, components,
modules - they're all interchangeable depending on who you talk to. We've
stuck with 'components' to describe things so far. Let's not worry about
the synonyms. So far, the developers I've introduced to this subject
understood implicitly that they could build widgets with this stuff, all
the while I used the term 'components'.

 Cheers,

 - A

 On Tue, Mar 26, 2013 at 10:58 PM, Scott Miles sjmi...@google.com wrote:

 Forgive me if I'm perseverating, but do you imagine 'component' that is
included to be generic HTML content, and maybe some scripts or some custom
elements?

 I'm curious what is it you envision when you say 'component', to test
my previous assertion about this word.

 Scott


 On Tue, Mar 26, 2013 at 10:46 PM, Angelina Fabbro 
angelinafab...@gmail.com wrote:

 'Component Include'

 'Component Include' describes what the markup is doing, and I like
that a lot. The syntax is similar to including a stylesheet or a script and
so this name should be evocative enough for even a novice to understand
what is implied by it.

 - Angelina


 On Tue, Mar 26, 2013 at 4:19 PM, Scott Miles sjmi...@google.com
wrote:

 Fwiw, my main concern is that for my team and for lots of other
people I communicate with, 'component' is basically synonymous with 'custom
element'. In that context, 'component' referring to
chunk-of-web-resources-loaded-via-link is problematic, even if it's not
wrong, per se.

 We never complained about this before because Dimitri always wrote
the examples as link rel=components... (note the plural). When it was
changed to link rel=component... was when the rain began.

 Scott


 On Tue, Mar 26, 2013 at 4:08 PM, Ryan Seddon seddon.r...@gmail.com
wrote:

 I like the idea of package seems all encompassing which captures
the requirements nicely. That or perhaps resource, but then resource
seems singular.

 Or perhaps component-package so it is obvious that it's tied to
web components?

 -Ryan


 On Tue, Mar 26, 2013 at 6:03 AM, Dimitri Glazkov dglaz...@google.com
wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is
more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to
do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG








This is why I suggested prototype. It might be an arbitrary doc, but its
intent really is to serve as kind of a way to get things you intend to insert
into your page - which may or may not be components by the definition... I saw
no uptake, but that was the rationale: it's hard to not use widget or
component.


Re: [webcomponents]: Naming the Baby

2013-03-26 Thread Brian Kardell
On Mar 25, 2013 3:03 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG

I'm sure this is flawed and I will regret sharing it without more
consideration after it popped into my head - but what about something like
prototype?  Does that need explanation as to where I pulled that from or
is it obvious?


Re: [webcomponents]: First stab at the Web Components spec

2013-03-18 Thread Brian Kardell
On Mar 18, 2013 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Mar 18, 2013 at 7:35 AM, Karl Dubost k...@la-grange.net wrote:
  Le 7 mars 2013 à 18:25, Dimitri Glazkov a écrit :
  Here's a first rough draft of the Web Components spec:
 
https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/components/index.html
 
  Cool.
 
  I see
 
  link rel=component href=/components/heart.html
 
  Do you plan to allow the HTTP counterpart?
 
  Link: /components/heart.html; rel=component

 Does that need to be allowed?  I thought the Link header was just
 equivalent, in general, to specify a link in your head.

 ~TJ


Just bringing this up on-list as it has come up in conversations off-list:
while not currently valid for HTML, link for Web Components will work in
the body too? #justcheckin


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
On Mon, Mar 11, 2013 at 1:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/11/13 3:44 PM, Daniel Buchner wrote:

 Just to be clear, these are callbacks (right?), meaning synchronous
 executions on one specific node. That is a far cry from the old issues
 with mutation events and nightmarish bubbling scenarios.


 Where does bubbling come in?

 The issue with _synchronous_ (truly synchronous, as opposed to end of
 microtask or whatnot) callbacks is that they are required to fire in the
 middle of DOM mutation while the DOM is in an inconsistent state of some
 sort.  This has nothing to do with bubbling and everything to do with what
 happens when you append a node somewhere while it already has a parent and
 it has a removed callback that totally rearranges the DOM in the middle of
 your append.


So does it actually need to be sync at that level?  I'm not sure why it does
really.  Can someone explain just for my own clarity?

-Brian


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
Is it very difficult to provide here is an attribute I'm watching + a
callback?  Most things require us to write switches and things and receive
overly broad notifications which aren't great for performance or for code
legibility IMO.

Just curious.


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
Sorry I clicked send accidentally there... I meant to mention that I think
this is sort of the intent of attributeFilter in mutation observers


On Mon, Mar 11, 2013 at 5:59 PM, Brian Kardell bkard...@gmail.com wrote:

 Is it very difficult to provide here is an attribute I'm watching + a
 callback?  Most things require us to write switches and things and receive
 overly broad notifications which aren't great for performance or for code
 legibility IMO.

 Just curious.



 --
 Brian Kardell :: @briankardell :: hitchjs.com




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
On Mar 11, 2013 9:03 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/11/13 8:59 PM, Brian Kardell wrote:

 Is it very difficult to provide here is an attribute I'm watching + a
 callback?


 It's not super-difficult but it adds more complication to
already-complicated code

 One big question is whether in practice the attribute that will be
changing is one that the consumer cares about or not.  If it's the former,
it makes somewhat more sense to put the checking of which attribute in the
consumer.

 -Boris

Daniel can confirm, but in all of the stuff I have seen and played with so
far it is... you want changing a component attribute to have some effect.
Internally you would use mutation observers, I think.
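The "here is an attribute I'm watching + a callback" shape asked about above is roughly what MutationObserver's attributeFilter option provides. Since a real observer needs a DOM, here is a pure-JS simulation of the same filtering idea; `makeElement` and `observeAttributes` are illustrative names, not any real API.

```javascript
// Pure-JS sketch (no DOM): a fake element whose setAttribute notifies only
// the callbacks registered for that specific attribute name.
function makeElement() {
  const attrs = new Map();
  const watchers = []; // each entry: { filter: Set of names, callback }
  return {
    setAttribute(name, value) {
      const oldValue = attrs.get(name);
      attrs.set(name, value);
      for (const w of watchers) {
        // the filter spares callbacks the switch-on-name boilerplate
        if (w.filter.has(name)) w.callback({ name, oldValue, value });
      }
    },
    getAttribute: (name) => attrs.get(name),
    observeAttributes(names, callback) {
      watchers.push({ filter: new Set(names), callback });
    },
  };
}
```

The point of the filter is exactly the legibility/performance concern raised above: the callback never fires for attributes it doesn't care about, so there is no broad notification to switch over.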


Re: Feedback and questions on shadow DOM and web components

2012-11-13 Thread Brian Kardell
Brian Kardell :: @bkardell :: hitchjs.com
On Nov 13, 2012 9:34 AM, Angelina Fabbro angelinafab...@gmail.com wrote:

 Hello public-webapps,

 I'm Angelina, and I've been very interested in shadow DOM and web
components for some time now. So much so that I've tried to teach people
about them several times. There's a video from JSConfEU floating around on
Youtube if you're interested. I think I managed to get the important parts
right despite my nerves. I've given this sort of talk four times now, and
as a result I've collected some feedback and questions from the developers
I've talked to.

 1. It looks like from the spec and the code in Glazkov's polyfill that if
I add and remove the 'is' attribute, the shadow tree should apply/unapply
itself to the host element.

Two things: 1. Added in markup or dynamically?  The draft says it can't be
added dynamically just in case...  2.  The draft itself is a little unclear
on is.  Early in the text, the reference was changed to say that these
will be custom tags, in other words x-map instead of select
is=x-map.  Mozilla's x-tags is currently operating under that assumption
as well.

 I've not found this to be the case. See my examples for 2. below - I
tried applying and unapplying the 'is' attribute to remix the unordered
list using a template without success.






Re: [webcomponents] More backward-compatible templates

2012-11-02 Thread Brian Kardell
The reason is that all of the things you do in every template system
(iteration, conditionals, etc.) are also intended to be template content.

It kinda messes with the mind to get used to that idea, even for me I
occasionally need reminding...

http://memegenerator.net/instance/29459456

Brian Kardell :: @bkardell :: hitchjs.com
On Nov 2, 2012 5:18 PM, Glenn Maynard gl...@zewt.org wrote:

 I'm coming into this late, but what's the purpose of allowing nested
 templates (this part doesn't seem hard) and scripts in templates, and what
 does putting a script within a template mean?  (It sounds like it would run
 the script when you clone the template, but at least in the template
 example at the top, that doesn't look like what would happen.)  It sounds
 closer to a widget feature than a template.

 I template HTML in HTML simply by sticking templates inside a hidden div
 and cloning its contents into a DocumentFragment that I can insert wherever
 I want.  The templates never contain scripts (unless I really mean for them
 to be run at parse time).  I never nest templates this way, but there's
 nothing preventing it.

 It would be useful to have a template that works like that, which simply
 gives me a clone contents into DocumentFragment function (basically
 cloneNode(true), but returning a top-level element of DocumentFragment
 instead of HTMLTemplateElement), and hints the browser that the contents
 are a template (eg. it may want to deprioritize loading images within it).
 It wouldn't be intended to hold script, and if you did put script blocks
 inside them they'd just be run when parsed (since that's what browsers
 today will do with it).  It requires no escaping at all, and parses like
 any other tree, unlike the script approach which would just be an opaque
 block of text, so you couldn't manipulate it in-place with DOM APIs and
 it'd take a lot more work to make it viewable in developer tools, etc.

 This would essentially be a CSS rule template { display: none; } and an
 interface that gives a cloneIntoFragment (or something) method.

 With the more complicated approaches people are suggesting I assume there
 are use cases this doesn't cover--what are they?

 --
 Glenn Maynard





[Web-storage] subdomains / cooperation and limits

2012-09-17 Thread Brian Kardell
I have searched the archives and been unable to resolve this to a great
answer and I just want to make sure that my understanding is correct lest I
have to unwind things later as someone has recently made me second guess
what I thought was a logical understanding of things.  Essentially,
x.wordpress.com and y.wordpress.com both allocate and use space - no
problem, right?  Access is subject to the browser's -general- SOP (leaving
aside the ability to document.domain up one), right?  If I have two
affiliate sites who communicate across an explicit trust via postMessage -
is this problematic?  I thought not, and it doesn't seem to be - further -
I cannot imagine how it could work otherwise and still be useful for a host
of common cases (like the wordpress one I mentioned above).  I have been
told that the draft contradicts my understanding, but I don't think so.
Thought that some implementers / maybe Hixie could set me straight...?

Brian
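The understanding described above comes down to an origin comparison. As an illustration (my reading of the model, not the spec's algorithm): Web Storage is partitioned by origin, i.e. scheme + host + port, so sibling subdomains get separate storage areas and can only cooperate through explicit channels like postMessage.

```javascript
// Two pages share a localStorage area only when their origins match.
// Uses the standard URL parser's origin serialization.
function sameStorageArea(a, b) {
  return new URL(a).origin === new URL(b).origin;
}
```

So x.wordpress.com and y.wordpress.com each allocate their own space, and an explicit postMessage bridge between them does not conflict with the SOP; it is the sanctioned way around it.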


Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Aug 21, 2012 4:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
  Meh. I think this loses most of the CSS is so much more convenient
  benefits. It's mainly the fact that you don't have to worry about
whether
  the nodes exist yet that makes CSS more convenient.

 Note that this benefit is preserved.  Moving or inserting an element
 in the DOM should apply CAS to it.

 The only thing we're really losing in the dynamic-ness is that other
 types of mutations to the DOM don't change what CAS does, and some of
 the dynamic selectors like :hover don't do anything.


So if I had a selector .foo .bar and then some script inserted a .bar
inside a .foo - that would work... but if I added a .bar class to some
existing child of .foo it would not...is that right?

  That said, I share your worry that having this be dynamic would slow
down
  DOM modification too much.
 
  What if we only allowed a restricted set of selectors and made these
sheets
  dynamic instead? Simple, non-pseudo selectors have information that is
all
  local to the node itself (e.g. can be applied before the node is in the
  DOM). Maybe even just restrict it to IDs and classes. I think that would
  meet the majority use-case much better.

 I think that being able to use complex selectors is a sufficiently
 large use-case that we should keep it.

  Alternately, what if these applied the attributes asynchronously (e.g.
right
  before style resolution)?

 Can you elaborate?

 ~TJ



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Tue, Aug 21, 2012 at 4:32 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 1:30 PM, Brian Kardell bkard...@gmail.com wrote:
 On Aug 21, 2012 4:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
 Meh. I think this loses most of the CSS is so much more convenient
 benefits. It's mainly the fact that you don't have to worry about
 whether
 the nodes exist yet that makes CSS more convenient.

 Note that this benefit is preserved.  Moving or inserting an element
 in the DOM should apply CAS to it.

 The only thing we're really losing in the dynamic-ness is that other
 types of mutations to the DOM don't change what CAS does, and some of
 the dynamic selectors like :hover don't do anything.


 So if I had a selector .foo .bar and then some script inserted a .bar inside
 a .foo - that would work... but if I added a .bar class to some existing
 child of .foo it would not...is that right?

 Correct.  If we applied CAS on attribute changes, we'd have... problems.

 ~TJ

Because you could do something like:

.foo[x=123]{ x:  234; }
.foo[x=234]{ x:  123; }

?
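Exactly. A small simulation makes the problem concrete (plain-JS stand-ins for the two rules above, with an object in place of a real element): if CAS were re-applied on every attribute change, this rule pair would toggle `x` forever, which is why application is limited to insertion time or would otherwise need a pass cap.

```javascript
// Stand-ins for `.foo[x=123]{ x: 234; }` and `.foo[x=234]{ x: 123; }`.
const rules = [
  { match: (el) => el.x === "123", apply: (el) => { el.x = "234"; } },
  { match: (el) => el.x === "234", apply: (el) => { el.x = "123"; } },
];

// Re-apply rules whenever the previous pass changed something, up to a cap.
function applyCAS(el, maxPasses = 10) {
  let passes = 0;
  let changed = true;
  while (changed && passes < maxPasses) {
    changed = false;
    for (const rule of rules) {
      if (rule.match(el)) {
        rule.apply(el);
        changed = true;
      }
    }
    passes++;
  }
  return passes; // returning maxPasses means the rule set never stabilized
}
```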



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Aug 21, 2012 5:40 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 2:28 PM, Ojan Vafai o...@chromium.org wrote:
  On a somewhat unrelated note, could we somehow also incorporate jquery
style
  live event handlers here? See previous www-dom discussion about this: .
I
  suppose we'd still just want listen/unlisten(selector, handler)
methods, but
  they'd get applied at the same time as cascaded attributes. Although, we
  might want to apply those on attribute changes as well.

 Using CAS to apply an onfoo attribute is nearly the same (use a
 string value to pass the function, obviously).  It'll only allow a
 single listener to be applied, though.

 If it's considered worthwhile, we can magic up this case a bit.  CAS
 properties don't accept functions normally (or rather, as I have it
 defined in the OP, it would just accept a FUNCTION token, which is
 just the function name and opening paren, but I should tighten up that
 definition).  We could have a magic function like listen(string)
 that, when used on an onfoo attribute (more generally, on a
 host-language-defined event listener attribute) does an
 addEventListener() call rather than a setAttribute() call.

 ~TJ


Can you give some pseudo code or something that is relatively close to what
you mean here?  I'm not entirely sure I follow.
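A hedged guess at the pseudo-code being asked for, based on Tab's description: a CAS value like `listen("handleClick")` on an `onfoo` property would route to `addEventListener` instead of `setAttribute`, so multiple listeners can accumulate. Everything below, including the string-based `listen()` syntax and `applyCASProperty`, is illustrative, not anything specified.

```javascript
// Apply one CAS property to an element: the magic listen() form on an
// event-handler attribute becomes an addEventListener call; anything else
// is a plain setAttribute. The handler is looked up by name globally here
// purely for the sake of a self-contained sketch.
function applyCASProperty(el, name, value) {
  const m = /^listen\("([^"]+)"\)$/.exec(value);
  if (name.startsWith("on") && m) {
    el.addEventListener(name.slice(2), globalThis[m[1]]);
  } else {
    el.setAttribute(name, value);
  }
}
```

The key design point is the one Tab raises: setting the `onclick` attribute can only ever hold one listener, while the `listen()` form adds listeners without clobbering existing ones.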

