Re: solving the CPU usage issue for non-visible pages

2009-10-21 Thread Brian Kardell
I like window.hasAttention if you can even vaguely define what it
means... It's pointless to make it so vague that useful things will
work differently in different browsers by accident rather than by
design (for example, it might be OK for mobile devices to work
differently by design, but it would royally stink to have a particular
application (like Gmail and Wave, which I think were mentioned earlier)
work fundamentally differently in different browsers). So that's kind
of what I'm trying to get at: there really seem to be a few classes
of things - which ones mean that your window has attention?

Some things might be tougher than they are worth and probably exceed
the practices of even the best non-web-based solutions... I think that,
primarily, relying on OS-level maintenance for things like low-power
mode is more rational than requiring each JavaScript programmer to
deal with it individually.

Browser windows that are minimized and inactive tabs are certainly a
related class of problem, and seem like the simplest one.  Other things
I think are more debatable, but potentially useful... I'm not sure.  I
think that it would be tremendously hard for a programmer to guess
some of these things in such a way that they would be easily
predictable for users without some kind of prompting if you're not
careful... For example, I recently ran the Image Evolution demo from
http://www.canvasdemos.com/2009/07/15/image-evolution/ as a kind of
performance test and let it run for three days - during which it was
not visible 99.999% of the time.  Should processing stop - or just
painting?  Painting won't happen because the OS says it won't, right?
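The processing-versus-painting question can be sketched in code. The following is a hypothetical illustration, not a real API: it assumes a boolean `window.hasAttention` flag (the name floated in this thread) and shows how computation could keep running while paint work is skipped.

```javascript
// Sketch only: "hasAttention" and makeAnimator are hypothetical names,
// standing in for the API idea discussed in this thread.
function makeAnimator(win) {
  let processed = 0;
  let painted = 0;
  return {
    tick() {
      processed += 1;          // simulation/evolution work always proceeds
      if (win.hasAttention) {  // paint only when the user is (believed) present
        painted += 1;
      }
    },
    stats() { return { processed, painted }; }
  };
}
```

Under this split, the Image Evolution demo would keep evolving in the background for three days while painting only when visible.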





On Tue, Oct 20, 2009 at 8:16 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Oct 21, 2009 at 3:57 PM, Brian Kardell bkard...@gmail.com wrote:

 Is it really the visibility of the page that is being queried - or the
 some kind of state of a window?  Maybe it's a silly bit of semantics,
 but it seems clearer to me that most of the things discussed here are
 about a whole window/tab being minimized (either to a taskbar or tab
 or something).  If I have one app open and it is covering a browser
 window - the browser window is not visible (it's lower in the stacking
 order).  Likewise, a page is generally partially visible
 (scrollbars) so that seems more confusing than it needs to be too.

 There are lots of reasons why the browser might deduce that the user is not
 paying attention to a document, e.g.
 -- the browser window containing the document is minimized
 -- the tab containing the document is hidden
 -- the document is in an IFRAME and scrolled offscreen
 -- the browser window is buried behind other windows on the desktop
 -- the screen is dimmed for power saving
 -- gaze tracking detects that the user is looking somewhere else
 -- ultrasonic pings detect that the user is not there

 If we need an API beyond just animation, you might as well call it something
 like window.hasAttention so browsers can cover all of those cases.

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]





Re: solving the CPU usage issue for non-visible pages

2009-10-21 Thread Brian Kardell
Right... Not to beat the point - but page or window? :)  You said "page"
again and I'm just trying to get some clarity...


On Tue, Oct 20, 2009 at 8:05 PM, Ennals, Robert robert.enn...@intel.com wrote:
 [my last attempt at an inline reply seems to have interacted strangely with 
 Maciej's email client, so I'm going to top-post for the moment until I work 
 out what was going on]

 Good point. I don't know what other people are thinking, but when I say
 "invisible" I'm thinking about pages that have been minimized or are in an
 invisible tab. Changing the semantics of a page when it is occluded by
 another page could be confusing.

 -Rob

 -Original Message-
 From: Brian Kardell [mailto:bkard...@gmail.com]
 Sent: Tuesday, October 20, 2009 7:58 PM
 To: Maciej Stachowiak
 Cc: Ennals, Robert; Jonas Sicking; rob...@ocallahan.org; public-
 weba...@w3.org
 Subject: Re: solving the CPU usage issue for non-visible pages

 So... in describing this feature:

 Is it really the visibility of the page that is being queried - or the
 some kind of state of a window?  Maybe it's a silly bit of semantics,
 but it seems clearer to me that most of the things discussed here are
 about a whole window/tab being minimized (either to a taskbar or tab
 or something).  If I have one app open and it is covering a browser
 window - the browser window is not visible (it's lower in the stacking
 order).  Likewise, a page is generally partially visible
 (scrollbars) so that seems more confusing than it needs to be too.


 On Tue, Oct 20, 2009 at 7:41 PM, Maciej Stachowiak m...@apple.com
 wrote:
 
  On Oct 20, 2009, at 7:13 PM, Ennals, Robert wrote:
 
  One thing I like about the requestAnimationFrame approach is that it makes
  it easy to do the right thing. If the simplest approach burns CPU cycles,
  and programmers have to think a bit harder to avoid doing this, then I
  suspect the likely outcome would be that many programmers will take the
  shortest path, and not check whether their page is visible.
 
  It's nice if you are able to re-engineer your animations enough to make use
  of it. The other approaches discussed seem easier to bolt on to existing
  code.
  Note: if you really want to optimize CPU use, then the best thing IMO is to
  use CSS Transitions or CSS Animations; that way the browser is fully in
  control of the frame rate and in many cases can do most of the work on the
  GPU, with no need to execute any script as the animation goes. I think this
  has the potential to be more CPU-friendly than the requestAnimationFrame
  approach, though obviously it's not applicable in some cases (e.g. canvas
  drawing).
 
  I'd even be tempted to risk breaking existing applications a little bit and
  make the *default* behavior for HTML5 pages be that time stops when a page
  is not visible. If a programmer has a good reason to run javascript on an
  invisible page then they should have to pass an option to make it clear that
  they know what they are doing.
 
  One challenge with this approach is that there's no good way at present to
  make time stop for a plugin. I suspect more generally that this approach
  would cause compatibility bugs.
  Regards,
  Maciej
 





Re: childElements, childElementCount, and children (was: [ElementTraversal]: Feature string for DOMImplementation.hasFeature(feature, version)?)

2009-10-21 Thread Brian Kardell
@deprecated ? :)

On Tue, Oct 20, 2009 at 8:22 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Oct 21, 2009 at 4:15 PM, Maciej Stachowiak m...@apple.com wrote:

 I agree. The reason I phrased it as I did was to contrast with my previous
 remarks. The children attribute should be part of a standard, even though
 it creates what I think is a poor design pattern (mix of previous/next and
 indexed access to the same collection).

 It might be worth adding annotations to the spec to say "this API is
 terrible, do not use" and "this API is terrible, do not follow its design".

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]





Re: solving the CPU usage issue for non-visible pages

2009-10-21 Thread Brian Kardell
So.. I wound up speaking to Robert offline and in our discussion his
comments became much clearer to me and I think that it's at least
worth documenting in case anyone else misunderstands as I did (even
historically via the archive).

There are really a few proposals here which are sort of only
tangentially related in that they all happen to deal with time and
visibility in the current ways of coding.  I think that it would be a
mistake to assume that they are necessarily inherently related in
designing a new API without at least considering that they might be
different things.  The first has to do with the fact that timers are
currently used for animation which has the effect of wastefully eating
CPU cycles to do things that the OS won't ultimately respect anyway
(i.e. calculate visual updates for something that is non-visible).
What Robert has suggested specifically is that there be an API
specifically about animation which allows the user agent to
essentially publish a universal "next frame" kind of event that
animations subscribe to.

There are at least two practical side effects to this - one of which
is the thing being discussed here... That user agents _could_ (if they
choose) optimize this universally with techniques like avoiding the
"paint" part if the window is minimized.  The second practical
benefit is that the API expresses much more clearly what it is about
and is less shoe-horned into what just happens to be the way we've
gotten it to work with the existing technologies -- no more need to
create timers or set frame rates, worry about interleaving frames, etc
-- that would all be handled quite beautifully by the simplicity of
the API design.

The second - and seemingly only partially related - topic is whether the
window's state or visibility should be accessible in script... It is
this second question to which my questions are directed.  I definitely
see a potential utility there, but without well defined answers to
some of those questions - it seems that you could easily create a real
rat's nest of complexity, incompatibility and unexpected side-effects.
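The "universal next frame event" idea can be sketched as a publish/subscribe scheduler. This is an illustration, not a spec: the names `FrameScheduler`, `subscribe`, and `fireFrame` are invented, and the one-shot callback model mirrors how requestAnimationFrame later worked.

```javascript
// Sketch of the "universal next frame" idea: animations subscribe to a
// frame event that the user agent publishes. All names here are hypothetical.
class FrameScheduler {
  constructor() { this.callbacks = []; }
  // One-shot subscription: an animation re-subscribes each frame it wants.
  subscribe(callback) { this.callbacks.push(callback); }
  // The UA calls this once per frame; while the window is minimized it can
  // simply stop calling it, throttling every animation at once - no timers,
  // frame rates, or interleaving for the author to manage.
  fireFrame(timestamp) {
    const pending = this.callbacks;
    this.callbacks = [];
    for (const cb of pending) cb(timestamp);
  }
}
```

The point of the design is that throttling lives in one place (the publisher), rather than in every script's timer logic.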



On Tue, Oct 20, 2009 at 8:48 PM, Brian Kardell bkard...@gmail.com wrote:
 I suppose I should not have used that phrasing... It wasn't really
 accurate and it obscures my point...  My point was that I actually
 wanted it to run in the background... So - does time stop, or just
 rendering?  I think that you have to be very clear.



 On Tue, Oct 20, 2009 at 8:43 PM, Robert O'Callahan rob...@ocallahan.org 
 wrote:
 On Wed, Oct 21, 2009 at 4:34 PM, Brian Kardell bkard...@gmail.com wrote:

 For example, I recently ran the Image Evolution demo from
 http://www.canvasdemos.com/2009/07/15/image-evolution/ as a kind of
 performance test and let it run for three days - during which it was
 not visible 99.999% of the time.  Should processing stop - or just
 painting?  Painting won't happen because the OS says it won't, right?

 Depends on the OS, I guess. Performance testing is hard; for good
 performance testing you need a carefully set up environment. It's OK to
 require special browser configuration to make it believe that the user is
 always present and the window is always visible. I don't think we need to
 avoid Web platform or browser features because they might make performance
 testing a bit harder.

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]






Re: childElements, childElementCount, and children (was: [ElementTraversal]: Feature string for DOMImplementation.hasFeature(feature, version)?)

2009-10-21 Thread Brian Kardell
 In this particular case, I think anything that's implemented in all of the
 major browser engines should be an official standard, not just de facto.

Why only in this particular case? :)  As a rule that seems like sound
guidance.  If it's implemented everywhere, shouldn't you have to make
a pretty compelling case for it _not_ to be included in an official
standard?



On Tue, Oct 20, 2009 at 1:42 PM, Maciej Stachowiak m...@apple.com wrote:

 On Oct 18, 2009, at 4:14 AM, Jonas Sicking wrote:

 On Sun, Oct 18, 2009 at 12:12 AM, Doug Schepers schep...@w3.org wrote:

 So, rather than dwell on an admittedly imperfect spec, I personally suggest

 that we urge WebKit developers to implement .children and .children.length,

 in the anticipation that this will be in a future spec but can be useful to

 authors today.

 They already do. Which casts some amount of doubt on Maciej's argument
 that it was too performance heavy to implement in WebKit. :)

 What I said way back in the day (about childElements) was this:

 I suggest leaving this out, because it's not possible to implement
 both next/previous and indexed access in a way that is efficient for
 all cases (it's possible to make it fast for most cases but pretty
 challenging to make it efficient for all). This is especially bad
 with a live list and an element whose contents may be changing while
 you are iterating.

 If all you care about is looping through once, writing the loop with
 nextElementSibling is not significantly harder than indexing a list.

 I stand by that remark. It is indeed hard to get both indexed and
 previous/next access efficient in all cases. Of course, we are not going to
 let that stop us from interoperating with de facto standards, and we do our
 best (as for other kinds of NodeLists and HTMLCollections), but I'd rather
 not have new APIs follow this pattern.
 In this particular case, I think anything that's implemented in all of the
 major browser engines should be an official standard, not just de facto.
 Regards,
 Maciej





Re: solving the CPU usage issue for non-visible pages

2009-10-21 Thread Brian Kardell
So... in describing this feature:

Is it really the visibility of the page that is being queried - or the
some kind of state of a window?  Maybe it's a silly bit of semantics,
but it seems clearer to me that most of the things discussed here are
about a whole window/tab being minimized (either to a taskbar or tab
or something).  If I have one app open and it is covering a browser
window - the browser window is not visible (it's lower in the stacking
order).  Likewise, a page is generally partially visible
(scrollbars) so that seems more confusing than it needs to be too.


On Tue, Oct 20, 2009 at 7:41 PM, Maciej Stachowiak m...@apple.com wrote:

 On Oct 20, 2009, at 7:13 PM, Ennals, Robert wrote:

 One thing I like about the requestAnimationFrame approach is that it makes
 it easy to do the right thing. If the simplest approach burns CPU cycles,
 and programmers have to think a bit harder to avoid doing this, then I
 suspect the likely outcome would be that many programmers will take the
 shortest path, and not check whether their page is visible.

 It's nice if you are able to re-engineer your animations enough to make use
 of it. The other approaches discussed seem easier to bolt on to existing
 code.
 Note: if you really want to optimize CPU use, then the best thing IMO is to
 use CSS Transitions or CSS Animations, that way the browser is fully in
 control of the frame rate and in many cases can do most of the work on the
 GPU, with no need to execute any script as the animation goes. I think this
 has the potential to be more CPU-friendly than the requestAnimationFrame
 approach, though obviously it's not applicable in some cases (e.g. canvas
 drawing).

 I'd even be tempted to risk breaking existing applications a little bit and
 make the *default* behavior for HTML5 pages be that time stops when a page
 is not visible. If a programmer has a good reason to run javascript on an
 invisible page then they should have to pass an option to make it clear that
 they know what they are doing.

 One challenge with this approach is that there's no good way at present to
 make time stop for a plugin. I suspect more generally that this approach
 would cause compatibility bugs.
 Regards,
 Maciej





Re: solving the CPU usage issue for non-visible pages

2009-10-21 Thread Brian Kardell
I suppose I should not have used that phrasing... It wasn't really
accurate and it obscures my point...  My point was that I actually
wanted it to run in the background... So - does time stop, or just
rendering?  I think that you have to be very clear.



On Tue, Oct 20, 2009 at 8:43 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Oct 21, 2009 at 4:34 PM, Brian Kardell bkard...@gmail.com wrote:

 For example, I recently ran the Image Evolution demo from
 http://www.canvasdemos.com/2009/07/15/image-evolution/ as a kind of
 performance test and let it run for three days - during which it was
 not visible 99.999% of the time.  Should processing stop - or just
 painting?  Painting won't happen because the OS says it won't, right?

 Depends on the OS, I guess. Performance testing is hard; for good
 performance testing you need a carefully set up environment. It's OK to
 require special browser configuration to make it believe that the user is
 always present and the window is always visible. I don't think we need to
 avoid Web platform or browser features because they might make performance
 testing a bit harder.

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]





Re: Behavior Attachment Redux, was Re: HTML element content models vs. components

2011-10-03 Thread Brian Kardell
Is x-mywidget necessarily more performant?  Why?

On Oct 3, 2011 5:33 AM, Roland Steiner rolandstei...@google.com wrote:

 If I may briefly summarize the pros and cons of every approach discussed:

 X-MYWIDGET

 Pros:
 - element name is inherently immutable
 - can provide arbitrary API, can (but does not have to) derive from
arbitrary HTML element
 - best performance (in instantiation, CSS selector matching)
 Cons:
 - accessibility only for shadow tree contents, no accessibility for host
element unless ARIA roles are specified
 - parsing issues in special circumstances (<table>, auto-closing <p>,
etc.)
 - no/limited fallback (limited: user provides fallback as content of
X-MYWIDGET, won't work in special places like within tables)
 - makes it easy to conflate semantics and representation

 <button is="mywidget">

 Pros:
 - fallback behavior as per HTML element
 - accessibility as per HTML element + shadow tree contents
 - binding only at creation, or immediately thereafter
 - API is that of host element, +alpha
 Cons:
 - add'l APIs ignored for accessibility
 - harder to implement: there's a window during parsing (before reading the
button) where it's still an ordinary button, requiring binding to be added
afterwards
 - immutability of 'is' attribute not immediately obvious to authors
 - unclear what happens if a HTML element with intrinsic shadow DOM is
assigned a CSS binding

 button { BINDING: MYWIDGET; }

 Pros:
 - fallback behavior as if un-styled
 - accessibility
 - mutability depending on medium, etc.
 - host element stays unchanged
 Cons:
 - dynamic binding is hard to implement
 - shadow DOM dependent on rendering tree (something we explicitly wanted
to avoid)
 - API is immutable: that of the host element
 - unclear what happens if a HTML element with (intrinsic or explicit)
shadow DOM is assigned a CSS binding as well


 Does the above look about right?

 - Roland


Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Brian Kardell
 This is _very_ hard to do reasonably unless the browser can trust those
 functions to not do anything weird.  Which of course it can't.  So your
 options are either much slower selector matching or not having this.  Your
 pick.

This too has come up in some discussions on CSS (CSSOM, I think) that I
have had.  In the right context - I don't think it would actually be
that hard.  It would require a way to provide a sandboxed evaluation
(read-only elements) and a pattern much like jQuery's, where it is a
filter which can only return true or false.  True enough that it would
be slower than native for a few reasons - but perhaps still useful.
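The filter idea can be sketched as a small registry. This is a hypothetical illustration of the jQuery-style pattern being discussed, not a proposal's actual API: `registerPseudo` and `filterByPseudo` are invented names, and plain frozen objects stand in for the "read-only" element views.

```javascript
// Sketch: an author-supplied pseudo-selector is just a true/false filter
// over candidate elements. Elements here are plain objects, not real DOM.
const customPseudos = new Map();

function registerPseudo(name, predicate) {
  customPseudos.set(name, predicate);
}

function filterByPseudo(candidates, name, arg) {
  const predicate = customPseudos.get(name);
  if (!predicate) throw new Error('unknown pseudo: ' + name);
  // Hand the predicate frozen copies so it cannot mutate the tree mid-match.
  return candidates.filter((el, i) => !!predicate(Object.freeze({ ...el }), i, arg));
}
```

The key constraint is that the predicate can only answer yes or no; it never gets a handle it could use to rearrange the tree while matching is in progress.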


On Tue, Oct 18, 2011 at 4:40 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/18/11 4:20 PM, Yehuda Katz wrote:

  * Speeding up certain operations like `#foo` and `body`. There is *no
    excuse* for it being possible to implement userland hacks that
    improve on the performance of querySelectorAll.

 Sure there is.  One such excuse, for example, is that the userland hacks
 have different behavior from querySelectorAll in many cases.  Now the author
 happens to know that the difference doesn't matter in their case, but the
 _browser_ has no way to know that.

 The other excuse is that adding special cases (which is what you're asking
 for) slows down all the non-special-case codepaths.  That may be fine for
 _your_ usage of querySelectorAll, where you use it with a particular limited
 set of selectors, but it's not obvious that this is always a win.

 This may be the result of browsers failing to cache the result of parsing
 selectors

 Yep.  Browsers don't cache it.  There's generally no reason to.  I have yet
 to see any real-life testcase bottlenecked on this part of querySelectorAll
 performance.

    or something else, but the fact remains that qSA can be noticably
    slower than the old DOM methods, even when Sizzle needs to parse the
    selector to look for fast-paths.

 I'd love to see testcases showing this.

 jQuery also handles certain custom pseudoselectors, and it might be nice
 if it was possible to register JavaScript functions that qSA would use
 if it found an unknown pseudo

  This is _very_ hard to do reasonably unless the browser can trust those
  functions to not do anything weird.  Which of course it can't.  So your
  options are either much slower selector matching or not having this.  Your
  pick.

 -Boris





Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Brian Kardell
On Tue, Oct 18, 2011 at 5:04 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/18/11 5:01 PM, Brian Kardell wrote:

 This too has come up in some discussions on CSS (CSSOM I think) that I
 have had.  In the right context - I don't think it would actually be
 that hard.  It would require a way to provide a sand-boxed evaluation
 (read only elements)

 This is not that easy.  Especially because you can reach all DOM objects
 from elements, so you have to lock down the entire API somehow.

Right, you would need, essentially, to pass in a node list which
iterated 'lite' read-only elements.  Not impossible to imagine -
right? Maybe I'm way off, but it actually seems not that difficult to
imagine the implementation.


 and a pattern much like jquery's where it is a
 filter which can only return true or false.  True enough that it would
 be slower than native for a few reasons - but perhaps still useful.

 The slowness comes from not having a way to tell whether the world has
 changed under you or not and therefore having to assume that it has, not
 from the actual call into JS per se.

I imagine that they would be implemented as filters so if you had

div .x:foo(.bar) span

The normal CSS resolution would be to get the spans, narrow by .x's,
then throw what you have so far to the filter, removing anything that
returned false, and carrying on as normal. The slowness as I see it
would be that the filter would, yes, call across the boundary and, yes,
have to build some intermediate form - and evaluating anything too
complex in the filter would probably be very slow by comparison - but
you don't have to do much to be useful...  Is there something in that
pattern that I am missing in terms of what you are saying about
identifying what has changed out from underneath you? As far as I can
see it doesn't invalidate anything that already exists in CSS/selector
implementations in terms of indexes or anything - but I've been looking
for an answer to this exact question, so if you know something I'd be
very interested in even a pointer to some code so I can understand it
myself.



Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Brian Kardell
On Tue, Oct 18, 2011 at 5:32 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/18/11 5:23 PM, Brian Kardell wrote:

 This is not that easy.  Especially because you can reach all DOM objects
 from elements, so you have to lock down the entire API somehow.

 Right, you would need essentially, to pass in a node list which
 iterated 'lite' read-only elements.

 So the script would not get an actual DOM tree and not run in the Window
 scope?  The objects would not have an ownerDocument?  What other
 restrictions would they need to have?

They would run in their own sandbox and they would have access to the
parameters passed into the function by way of a pattern.  I think that
that pattern would look a lot like jQuery's selector plugin pattern,
something like: the match itself, the index of the match, and the
arguments to the selector itself. The 'match' in this case wouldn't be
a mutable DOM element.  You can give it a smaller API by saying that
the 'lite' version of the element that is passed in has no properties
which might give you something mutable - or you can say that all
methods/properties would also return immutable shadows of themselves.
I would be happy to walk through more detailed ideas in terms of what
specifically that would look like if there were some kind of initial
"yeah, that might work - it's worth looking into some more" :)
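The "immutable shadow" idea can be sketched with a Proxy. This is only an illustration of the concept under discussion: `readOnlyView` is an invented name, and, as Boris's objection implies, a real implementation would have to lock down the entire DOM API surface, which is the hard part.

```javascript
// Sketch: a read-only wrapper that rejects writes and hands back read-only
// views of nested objects (e.g. parentNode), so a filter cannot mutate
// anything it can reach.
function readOnlyView(target) {
  return new Proxy(target, {
    get(obj, prop) {
      const value = obj[prop];
      return value !== null && typeof value === 'object'
        ? readOnlyView(value)   // nested objects come back read-only too
        : value;
    },
    set() { return false; },           // writes are rejected
    deleteProperty() { return false; } // so are deletes
  });
}
```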


 Maybe I'm way off, but actually seems not that difficult to
 imagine the implementation.

 If we're willing to pass in some totally-not-DOM data structure and run in
 some sandbox scope, then sure.

 div .x:foo(.bar) span

 The normal CSS resolution would be to get the spans, narrow by .x's
 then throw what you have so far to the filter, removing anything that
 returned false and carrying on as normal.

  Normal CSS selector matching examines the .x part for each span as it finds
  it.  Otherwise selectors like "#foo > *" would require building up a list of
  all elements in the DOM, no?

I'm not sure that I understand the distinction of what you are saying
here, or if it matters.  My understanding of the WebKit code was that
it walks the tree (or subtree) once (as created/modified) and optimizes
fast-path indexes on classes, ids and tags (also some other
optimizations for some slightly more complex things, if I recall).  I
would have expected the querySelector** stuff to reuse that
underlying code, but I don't know - it sounds like you are saying
maybe not.


 The slowness as I see it would be that the filter would yes, call across
 the boundary and yes
 have to build some intermediate and evaluating anything too complex in
 the filter in that would be very slow by comparison probably - but you
 don't have to do much to be useful...  Is there something in that
 pattern that I am missing in terms of  what you are saying about
 identifying what has changed out from underneath you?

 _If_ the filter runs JS that can touch the DOM, then in your example for
 every span you find you'd end up calling into the filter, and then you have
 to worry about the filter rearranging the DOM under you.

 As far as I can see it doesn't invalidate anything that already exists in
 CSS/selector
 implementations in terms of indexes or anything

 At least the querySelectorAll implementations I have looked at (WebKit and
 Gecko) traverse the DOM and for each element they find check whether it
 matches the selector.  If so, they add it to the result set. Furthermore,
 selector matching itself has to walk over the tree in various ways (e.g. to
 handle combinators).  Both operations right now assume that the tree does
 NOT mutate while this is happening.

Yes - it absolutely can NOT mutate while this is happening, but it
shouldn't, right?  It would be kind of nonsensical if it did.  It
doesn't have to mutate in order to be useful - even in jQuery's model,
its purpose is to determine what _should_ mutate, not to do
the mutation itself.


 -Boris





Re: QSA, the problem with :scope, and naming

2011-10-18 Thread Brian Kardell
Some pseudos can contain selector groups, so it would be more than just
"split on comma".
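A minimal sketch of why a naive comma split breaks: commas inside functional pseudos like `:not(a, b)` must not split the group, so the split has to track nesting depth. The function name is invented, and quote handling is omitted for brevity.

```javascript
// Sketch: split a selector group on top-level commas only, so commas inside
// parentheses or attribute brackets (e.g. :not(a, b)) are left intact.
function splitSelectorGroup(selectorList) {
  const parts = [];
  let depth = 0;
  let current = '';
  for (const ch of selectorList) {
    if (ch === '(' || ch === '[') depth++;
    else if (ch === ')' || ch === ']') depth--;
    if (ch === ',' && depth === 0) {
      parts.push(current.trim());
      current = '';
    } else {
      current += ch;
    }
  }
  parts.push(current.trim());
  return parts;
}
```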
On Oct 18, 2011 7:40 PM, Alex Russell slightly...@google.com wrote:

 On Tue, Oct 18, 2011 at 6:00 PM, Erik Arvidsson a...@chromium.org wrote:
  On Tue, Oct 18, 2011 at 09:42, Alex Russell slightly...@google.com
 wrote:
  Ah, but we don't need to care what CSS thinks of our DOM-only API. We
  can live and let live by building on :scope and specifying find* as
  syntactic sugar, defined as:
 
    HTMLDocument.prototype.find =
    HTMLElement.prototype.find = function(rootedSelector) {
      return this.querySelector(":scope " + rootedSelector);
    }
 
    HTMLDocument.prototype.findAll =
    HTMLElement.prototype.findAll = function(rootedSelector) {
      return this.querySelectorAll(":scope " + rootedSelector);
    }
 
  I like the way you think. Can I subscribe to your mailing list?

 Heh. Yes ;-)

   One thing to point out with the desugar is that it has a bug, and most
   JS libs have the same bug. querySelectorAll allows multiple selectors,
   separated by a comma, and to do this correctly you need to parse the
   selector, which of course requires tons of code, so no one does this.
   Let's fix that by building this into the platform.

  I agree; I should have mentioned it. The resolution I think is
  most natural is to split on "," and assume that all selectors in the
  list are :scope-prefixed. A minor point is how the items in the
  returned flattened list are ordered (document order? the
  natural result of concat()?).
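The ordering question can be made concrete with a small sketch. This is an illustration of the document-order option only: the `docIndex` field is an invented stand-in for a real document-position comparison (such as compareDocumentPosition), and the nodes are plain objects.

```javascript
// Sketch: flatten per-selector result lists into document order and
// de-duplicate, rather than returning the raw concat() order (which can
// repeat nodes and interleave selectors).
function flattenInDocumentOrder(resultLists) {
  const seen = new Set();
  return resultLists
    .flat()
    .sort((a, b) => a.docIndex - b.docIndex)
    .filter(node => !seen.has(node) && !!seen.add(node));
}
```

This matches what querySelectorAll itself does for selector groups: one result set, in document order, with no duplicates.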




Re: QSA, the problem with :scope, and naming

2011-10-26 Thread Brian Kardell
Yeah, I have to agree with the list here.  If you allow one, it's unintuitive
to not allow it the same way in a group.  The more exceptions and complexity
you add, the harder it is for someone to learn.

 On Oct 25, 2011 10:16 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * Tab Atkins Jr. wrote:
 On Tue, Oct 25, 2011 at 4:56 PM, Ojan Vafai o...@chromium.org wrote:
  On Tue, Oct 25, 2011 at 4:44 PM, Bjoern Hoehrmann derhoe...@gmx.net
 wrote:
  * Tab Atkins Jr. wrote:
  Did you not understand my example?  el.find("+ foo, + bar") feels
  really weird and I don't like it.  I'm okay with a single selector
  starting with a combinator, like el.find("+ foo"), but not a selector
  list.
 
  Allowing "+ foo" but not "+ foo, + bar" would be really weird.
 
  Tab, what specifically is weird about el.find("+ foo, + bar")?
 
 Seeing a combinator immediately after a comma just seems weird to me.

 A list of abbreviated selectors is a more intuitive concept than a
 list of selectors where the first and only the first selector may be
 abbreviated. List of type versus special case and arbitrary limit.
 If one abbreviated selector isn't weird, then two shouldn't be either
 if two selectors aren't weird on their own.
 --
 Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
 Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/




Re: QSA, the problem with :scope, and naming

2011-11-15 Thread Brian Kardell
 Right now, the spec does however handle that use case by doing this:

  document.querySelectorAll(":scope .foo", x);

 Where x is either an individual element, or an Array, NodeList or
numerically indexed object containing 0 or more Elements.

 (It does however limit the result only to elements that are in the
document, and any disconnected elements in the collection x would not be
found.)


What spec are you referring to? I've never seen that and I am having
trouble finding it now.


Re: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?

2011-11-22 Thread Brian Kardell
Complexity and discussions about combinators seem to have prevented it from
getting into any draft despite lots of +1s.  It is really different from
the rest of the selectors that exist today, which are optimized like crazy,
so it requires more in terms of implementation than most to keep performance
sane.  As yet I think (for the same reasons) no one has implemented the
Selectors 4 subject feature, which is simpler than :has.
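A hedged toy illustration (an invented tree walk, not how any real engine works) of why :has is expensive: matching something like `div:has(span)` forces a scan of each candidate's entire subtree, whereas today's selectors can be matched by walking upward from each element.

```javascript
// Toy model: an ancestor-qualified selector like div:has(span) requires
// checking each candidate's whole subtree for a matching descendant.
function hasDescendant(node, name) {
  return (node.children || []).some(function (c) {
    return c.name === name || hasDescendant(c, name);
  });
}

var tree = { name: 'div', children: [{ name: 'p', children: [{ name: 'span' }] }] };
console.log(hasDescendant(tree, 'span')); // true
console.log(hasDescendant(tree, 'em'));   // false
```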
On Nov 22, 2011 5:06 AM, Charles Pritchard ch...@jumis.com wrote:

 On 11/22/11 1:56 AM, Sean Hogan wrote:

 On 22/11/11 7:14 PM, Roland Steiner wrote:

 On Tue, Nov 22, 2011 at 14:19, Yehuda Katz wyc...@gmail.com wrote:


 Yehuda Katz
 (ph) 718.877.1325


  On Mon, Nov 21, 2011 at 8:34 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/21/11 11:31 AM, Tab Atkins Jr. wrote:

  1)  Make sense.
 2)  Not break existing content.
 3)  Be short.


 .matches
 .is


  I like .is, the name jQuery uses for this purpose. Any reason not to go
 with it?


  IMHO 'is' seems awfully broad in meaning and doesn't very well indicate
 that the parameter should be a selector. Inasmuch I like .matches better.

  Also, FWIW, an 'is' attribute on elements was/is in discussion on this
 ML as one possibility to specify components.


 Funnily enough, I've just been talking to the DOM5 and DOM6 API designers
 and they said almost exactly the same thing.


 On the theme, "Be short", are there issues with .has?
 if(node.has('[role=button]')) node.is='button';
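Whatever the final name, pages at the time typically papered over the vendor prefixes themselves. A hedged sketch of that pattern follows — `unprefixedMatches` is a made-up helper, and the mock object stands in for a real DOM Element so the snippet is self-contained:

```javascript
// Alias whichever (possibly prefixed) matchesSelector variant exists
// onto one callable; the mock below simulates a WebKit-era element.
function unprefixedMatches(el) {
  var f = el.matches || el.matchesSelector || el.webkitMatchesSelector ||
          el.mozMatchesSelector || el.msMatchesSelector;
  return f.bind(el);
}

var mockEl = {
  webkitMatchesSelector: function (sel) { return sel === '[role=button]'; }
};

console.log(unprefixedMatches(mockEl)('[role=button]')); // true
```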




Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-20 Thread Brian Kardell
Yes, I had almost the same thought, though why not just require a prefix?

I also think some examples actually showing some handling of events and use
of css would be really helpful here... The upper boundary for css vs
inheritance I think would be made especially easier to understand with a
good example.  I think it is saying that a rule based on the selector 'div'
would not apply to div inside the shadow tree, but whatever the font size
is at that point in the calculation when it crosses over is maintained...is
that right?

Is there any implication here  beyond events?  For example, do shadow doms
run in a kind of worker or something to allow less worry of stomping all
over...or is that what you were specifically trying to avoid with your
distinction about the type of boundary?  Anything special there about
blocking for stylesheets or script?  Also, I might have missed this, but it
seems like you would still have access to document object? I understand its
not a  security related boundary you are describing but would it be
possible to just evaluate the meaning of document based on your position so
as to avoid the confusion that will likely cause?

One more thing... Is there any CSSOM or like access on ShadowRoot?  It
feels like there should be...

-Brian
On Dec 20, 2011 7:52 PM, Edward O'Connor eocon...@apple.com wrote:

 Hi Dimitri,

 You wrote:

  In the joyous spirit of sharing, I present you with a first draft of
  the Shadow DOM Specification:
 
  http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html

 Awesome. Thanks for writing this up! Obviously, I'll have to read this
 more closely while hiding upstairs at my in-law's house next week. That
 said, I wanted to quickly note something I noticed while skimming this
 just now.

 In your Event Retargeting Example[1], you have a pseudo= attribute
 which allows the author of the shadow DOM to specify the name of a
 pseudo-element which will match that element. For example, in

    <div id=player>
      <shadow-boundary>
        <div pseudo=controls>
          …
        </div>
      </shadow-boundary>
    </div>

 the web author would be able to select the player's controls by writing

#player::controls

 I'm worried that users may stomp all over the CSS WG's ability to mint
 future pseudo-element names. I'd rather use a functional syntax to
 distinguish between custom, user-defined pseudo-elements and
 engine-supplied, CSS WG-blessed ones. Something like

#player::shadow(controls)
 or
#player::custom(controls)

 could do the trick. The latter (or some other, non-shadow-DOM-specific
 name) is potentially more exciting because there may be more use cases
 for author-supplied pseudo-elements than just the shadow DOM. For
 instance, I could imagine an extension to DOM Range which would allow a
 user to name a range for selector matching.

 Anyway, thanks for the writeup, and have a wonderful break!


 Ted

 1.
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#event-retargeting-example




Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-22 Thread Brian Kardell
 ShadowRoot is a Node, so all of the typical DOM accessors apply. Is
 this what you had in mind?

CSSOM interfaces are attached to the document specifically though - right?
 And they (at least that I can recall) have no association concept with
scope (yet)... So I think that implies that unless you added at least the
stylesheets collection to the ShadowRoot, it would be kind of non-sensical
unless it is smart enough to figure out that when you say document inside
a shadow boundary, you really mean the shadow root (but that seems to
conflict with the rest of my reading).

Now that I am going back through based on your question above I am thinking
that I might have misread...Can you clarify my understanding...  Given a
document like this:


<div>A</div>

<shadow-boundary>
  <div>B</div>
  <script>
    shadowRoot.addEventListener('DOMContentLoaded', function(){
      console.log("shadow...");
      console.log("divs in the document: " + document.querySelectorAll("div").length);
      console.log("divs in the shadow boundary: " + shadowRoot.querySelectorAll('div').length);
    }, false);
  </script>
</shadow-boundary>

<div>C</div>

<script>
  document.addEventListener("DOMContentLoaded", function(){
    console.log("main...");
    console.log("divs in the document: " + document.querySelectorAll("div").length);
  });
</script>


What is the expected console output?



-Brian



On Dec 21, 2011 11:58 AM, Dimitri Glazkov dglaz...@google.com wrote:

 On Tue, Dec 20, 2011 at 5:38 PM, Brian Kardell bkard...@gmail.com wrote:
  Yes, I had almost the same thought, though why not just require a
prefix?
 
  I also think some examples actually showing some handling of events and
use
  of css would be really helpful here... The upper boundary for css vs
  inheritance I think would be made especially easier to understand with a
  good example.  I think it is saying that a rule based on the selector
'div'
  would not apply to div inside the shadow tree, but whatever the font
size is
  at that point in the calculation when it crosses over is maintained...is
  that right?

 In short, yup. I do need to write a nice example that shows how it all
 fits together (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15173).

 
  Is there any implication here  beyond events?  For example, do shadow
doms
  run in a kind of worker or something to allow less worry of stomping all
  over...or is that what you were specifically trying to avoid with your
  distinction about the type of boundary?  Anything special there about
  blocking for stylesheets or script?  Also, I might have missed this,
but it
  seems like you would still have access to document object? I understand
its
  not a  security related boundary you are describing but would it be
possible
  to just evaluate the meaning of document based on your position so as to
  avoid the confusion that will likely cause?

 There are no workers or any special considerations for things that
 aren't mentioned. It is just a DOM subtree. I wonder if there's a
 convention of stating this somehow without actually re-describing how
 HTML/DOM works?

 
  One more thing... Is there any CSSOM or like access on ShadowRoot?  It
feels
  like there should be...

 ShadowRoot is a Node, so all of the typical DOM accessors apply. Is
 this what you had in mind?

 :DG

 
  -Brian
 
  On Dec 20, 2011 7:52 PM, Edward O'Connor eocon...@apple.com
wrote:
 
  Hi Dimitri,
 
  You wrote:
 
   In the joyous spirit of sharing, I present you with a first draft of
   the Shadow DOM Specification:
  
  
http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html
 
  Awesome. Thanks for writing this up! Obviously, I'll have to read this
  more closely while hiding upstairs at my in-law's house next week. That
  said, I wanted to quickly note something I noticed while skimming this
  just now.
 
  In your Event Retargeting Example[1], you have a pseudo= attribute
  which allows the author of the shadow DOM to specify the name of a
  pseudo-element which will match that element. For example, in
 
  <div id=player>
    <shadow-boundary>
      <div pseudo=controls>
        …
      </div>
    </shadow-boundary>
  </div>
 
  the web author would be able to select the player's controls by writing
 
 #player::controls
 
  I'm worried that users may stomp all over the CSS WG's ability to mint
  future pseudo-element names. I'd rather use a functional syntax to
  distinguish between custom, user-defined pseudo-elements and
  engine-supplied, CSS WG-blessed ones. Something like
 
 #player::shadow(controls)
  or
 #player::custom(controls)
 
  could do the trick. The latter (or some other, non-shadow-DOM-specific
  name) is potentially more exciting because there may be more use cases
  for author-supplied pseudo-elements than just the shadow DOM. For
  instance, I could imagine an extension to DOM Range which would

Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-22 Thread Brian Kardell
On Thu, Dec 22, 2011 at 3:15 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Thu, Dec 22, 2011 at 7:10 AM, Brian Kardell bkard...@gmail.com wrote:
  ShadowRoot is a Node, so all of the typical DOM accessors apply. Is
  this what you had in mind?
 
  CSSOM interfaces are attached to the document specifically though -
 right?
   And they (at least that I can recall) have no association concept with
  scope (yet)... So I think that implies that unless you added at least the
  stylesheets collection to the ShadowRoot, it would be kind of
 non-sensical
  unless it is smart enough to figure out that when you say document
 inside
  a shadow boundary, you really mean the shadow root (but that seems to
  conflict with the rest of my reading).

 Ohh, I think I understand the problem. Let me say it back to see if I do:

 * The upper-boundary encapsulation
 (
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#upper-boundary-encapsulation
 )
 constraints do not mention CSSOM extensions to Document interface
 (http://dev.w3.org/csswg/cssom/#extensions-to-the-document-interface).
 * They should be included to the constraints to also say that you
 can't access stylesheets in shadow DOM subtrees.

 Yes!  You might also consider adding them to the ShadowRoot since I see no
real reason why they are relevant at the document level, but not at the
ShadowRoot level.  Either way it would have implications for CSSOM
implementation and possibly the draft - it should be linked like the other
references.  I think Anne is still listed as the editor there, but that's
not right if I recall... Maybe cross post it?



 This also implies that style blocks, defined inside of a shadow DOM
 subtree should have no effect on the document, and unless the style
 block has a scoped attribute, it should have no effect inside of a
 shadow DOM subtree, either. Right? (filed
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=15314).


Yes.  That was definitely part of what I was wondering... Explicitly
calling out those details about style blocks would definitely be helpful -
I assumed that anything inside a shadow DOM would be assumed to be scoped.
 Either way, reasonable people could interpret it differently so best to
call it out lest the worst possible thing happens and browsers implement it
differently :)




 Now that I am going back through based on your question above I am
 thinking
  that I might have misread...Can you clarify my understanding...  Given a
  document like this:
 
 
   <div>A</div>
  
   <shadow-boundary>
     <div>B</div>
     <script>
       shadowRoot.addEventListener('DOMContentLoaded', function(){
         console.log("shadow...");
         console.log("divs in the document: " + document.querySelectorAll("div").length);
         console.log("divs in the shadow boundary: " + shadowRoot.querySelectorAll('div').length);
       }, false);
     </script>
   </shadow-boundary>
  
   <div>C</div>
  
   <script>
     document.addEventListener("DOMContentLoaded", function(){
       console.log("main...");
       console.log("divs in the document: " + document.querySelectorAll("div").length);
     });
   </script>
  
  
   What is the expected console output?

 shadowRoot doesn't fire DOMContentLoaded, so the output will be:

main...
 divs in the document: 2

 There's also an interesting issue of when (and whether) script
 executes inside of a shadow DOM subtree (filed
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=15313 to track).

 Yeah that's the nature of the question - whether it acts as sort of a
"document within a document", firing DOMContentLoaded, etc - or whether
there is a way to do effectively the same thing - when scripts execute,
whether they block, etc.  I'm not sure what you mean by "whether" - the whole
events section really seems to imply that it must.  Did I misread?





 :DG

 
 
 
  -Brian
 
 
 
  On Dec 21, 2011 11:58 AM, Dimitri Glazkov dglaz...@google.com wrote:
 
  On Tue, Dec 20, 2011 at 5:38 PM, Brian Kardell bkard...@gmail.com
 wrote:
   Yes, I had almost the same thought, though why not just require a
   prefix?
  
   I also think some examples actually showing some handling of events
 and
   use
   of css would be really helpful here... The upper boundary for css vs
   inheritance I think would be made especially easier to understand
 with a
   good example.  I think it is saying that a rule based on the selector
   'div'
   would not apply to div inside the shadow tree, but whatever the font
   size is
   at that point in the calculation when it crosses over is
 maintained...is
   that right?
 
  In short, yup. I do need to write a nice example that shows how it all
  fits together (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15173).
 
  
   Is there any implication here  beyond events?  For example, do shadow
   doms
   run in a kind of worker or something to allow less worry of stomping
 all
   over

Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-22 Thread Brian Kardell
So... I was going to ask a follow-up here, but as I tried to formulate it I
went back to the draft and it became kind of clear that I don't actually
understand <shadow> or <content> elements at all...  ShadowRoot has a
constructor, but it doesn't seem to have anything in its signature that
would give you a <shadow> or <content> element (unless maybe they return node
lists that are actually specialized kinds of nodes?)...

It seems like all of the examples are using fictional markup for what I
think the draft actually says a scripted API is required to create... Is
that right?  If so, it would be great to have some kind of scripted
example, even if it is really basic, for discussion... If not... well... my
re-read seems to have gotten me a little lost.

-Brian




On Thu, Dec 22, 2011 at 4:04 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Thu, Dec 22, 2011 at 12:49 PM, Brian Kardell bkard...@gmail.com
 wrote:
 
 
  On Thu, Dec 22, 2011 at 3:15 PM, Dimitri Glazkov dglaz...@chromium.org
  wrote:
 
  On Thu, Dec 22, 2011 at 7:10 AM, Brian Kardell bkard...@gmail.com
 wrote:
   ShadowRoot is a Node, so all of the typical DOM accessors apply. Is
   this what you had in mind?
  
   CSSOM interfaces are attached to the document specifically though -
   right?
And they (at least that I can recall) have no association concept
 with
   scope (yet)... So I think that implies that unless you added at least
   the
   stylesheets collection to the ShadowRoot, it would be kind of
   non-sensical
   unless it is smart enough to figure out that when you say document
   inside
   a shadow boundary, you really mean the shadow root (but that seems to
   conflict with the rest of my reading).
 
  Ohh, I think I understand the problem. Let me say it back to see if I
 do:
 
  * The upper-boundary encapsulation
 
  (
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#upper-boundary-encapsulation
 )
  constraints do not mention CSSOM extensions to Document interface
  (http://dev.w3.org/csswg/cssom/#extensions-to-the-document-interface).
  * They should be included to the constraints to also say that you
  can't access stylesheets in shadow DOM subtrees.
 
  Yes!  You might also consider adding them to the ShadowRoot since I see
 no
  real reason why they are relevant at the document level, but not at the
  ShadowRoot level.  Either way it would implications for CSSOM
 implementation
  and possibly the draft - it should be linked like the other references.
  I
  think Anne is still listed as the editor there, but that's not right if I
  recall... Maybe cross post it?
 
 
 
  This also implies that style blocks, defined inside of a shadow DOM
  subtree should have no effect on the document, and unless the style
  block has a scoped attribute, it should have no effect inside of a
  shadow DOM subtree, either. Right? (filed
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=15314).
 
 
  Yes.  That was definitely part of what I was wondering... Explicitly
 calling
  out those details about style blocks would definitely be helpful - I
 assumed
  that anything inside a shadow DOM would be assumed to be scoped.  Either
  way, reasonable people could interpret it differently so best to call it
 out
  lest the worst possible thing happens and browsers implement it
 differently
  :)

 Sounds good. Keep an eye on the bug for updates.

 
 
 
 
   Now that I am going back through based on your question above I am
   thinking
   that I might have misread...Can you clarify my understanding...
  Given a
   document like this:
  
  
    <div>A</div>
   
    <shadow-boundary>
      <div>B</div>
      <script>
        shadowRoot.addEventListener('DOMContentLoaded', function(){
          console.log("shadow...");
          console.log("divs in the document: " + document.querySelectorAll("div").length);
          console.log("divs in the shadow boundary: " + shadowRoot.querySelectorAll('div').length);
        }, false);
      </script>
    </shadow-boundary>
   
    <div>C</div>
   
    <script>
      document.addEventListener("DOMContentLoaded", function(){
        console.log("main...");
        console.log("divs in the document: " + document.querySelectorAll("div").length);
      });
    </script>
   
   
    What is the expected console output?
 
  shadowRoot doesn't fire DOMContentLoaded, so the output will be:
 
  main...
  divs in the document: 2
 
  There's also an interesting issue of when (and whether) script
  executes inside of a shadow DOM subtree (filed
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=15313 to track).
 
  Yeah that's the nature of the question - whether it acts as sort of a
  document within a document firing DOMContentLoaded, etc - or whether
 there
  is a way to do effectively the same thing - when scripts execute, whether
  they block, etc.  I'm not sure what you mean by whether - the whole
 events

Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-23 Thread Brian Kardell
In your example, you lost me on this part:

// Insert Bob's shadow tree under the election story box.
root.appendChild(document.createElement('shadow'));

Is that wrong?  If not, can you explain it?  Also... How does this pattern
give browsers timely enough information to avoid FOUC?  It feels like there
is a piece missing..
On Dec 22, 2011 8:16 PM, Brian Kardell bkard...@gmail.com wrote:

 Quick note :  That is the single best draft prose I have ever read :)
 On Dec 22, 2011 6:56 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 BTW, added an example:

 dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#shadow-dom-example

 :DG




Re: [webcomponents]: First draft of the Shadow DOM Specification

2011-12-23 Thread Brian Kardell
On Dec 23, 2011 1:00 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Fri, Dec 23, 2011 at 5:23 AM, Brian Kardell bkard...@gmail.com wrote:
  In your example, you lost me on this part:
 
  // Insert Bob's shadow tree under the election story box.
  root.appendChild(document.createElement('shadow'));
 
  Is that wrong?  If not, can you explain it?

 Sure. Since Alice's shadow DOM subtree is added later than Bob's, his
  tree is older than hers. The way the shadow insertion point works, it
  looks for an older tree in the tree stack, hosted by the <ul> element.
  In this case, the older tree is Bob's. Thus, Bob's entire shadow DOM
  tree is inserted in place of the <shadow> element. Does that make more
 sense? What can I do to improve the example? A diagram perhaps? Please
 file a bug with ideas.

Hmmm.  So if you say document.createElement('shadow') it actually gives you
a reference to the most recently  added shadow hosted by the same element?
It doesn't really create?  What if you did that and there were none?  Would
it throw?  Seems kind of tough to wrap my head around let me think about it
some more and I will file a bug if I have any ideas.
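Dimitri's "older tree" lookup can be modeled with a plain array, purely as a hedged thought experiment (none of these names come from the spec): shadow trees added to a host stack up youngest-last, and a shadow insertion point in one tree renders the next-older tree.

```javascript
// Toy model of the tree stack: the <shadow> insertion point in a given
// tree resolves to the tree pushed immediately before it, if any.
function olderTreeFor(treeStack, tree) {
  var i = treeStack.indexOf(tree);
  return i > 0 ? treeStack[i - 1] : null; // no older tree -> nothing to show
}

var host = { treeStack: [] };
host.treeStack.push('bob');   // Bob's tree is added first (older)
host.treeStack.push('alice'); // Alice's tree is younger

console.log(olderTreeFor(host.treeStack, 'alice')); // "bob"
console.log(olderTreeFor(host.treeStack, 'bob'));   // null
```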


  also... How does this pattern
  give browsers timely enough information to avoid FOUC?  It feels like
 there
  is a piece missing..

 In this particular case, since both Bob and Alice use
 DOMContentLoaded, FOUC is not an issue. The first paint will occur
 after the shadow subtrees are in place.
A handler attached to DOMContentLoaded doesn't block painting...   That
doesn't sound right to me...  It might be generally faster than people
notice, but it still depends, right?   In practice a lot of CSS is already
applied at that point... yeah?  You could still get FOUC, right?

 :DG

 
  On Dec 22, 2011 8:16 PM, Brian Kardell bkard...@gmail.com wrote:
 
  Quick note :  That is the single best draft prose I have ever read :)
 
  On Dec 22, 2011 6:56 PM, Dimitri Glazkov dglaz...@chromium.org
wrote:
 
  BTW, added an example:
 
 
dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#shadow-dom-example
 
  :DG



Re: [webcomponents] HTML Parsing and the template element

2012-02-08 Thread Brian Kardell
Are you essentially suggesting partials?  Basically, one template can
contain another only by reference?  Then you have something like a
corresponding tag or macro-ish thing whereby you can reference
(functionally include) one template from another?

That sidesteps the whole nested template parsing pretty nicely, and it's what
a lot of logicless template approaches do.
 On Feb 8, 2012 6:34 PM, Rafael Weinstein rafa...@google.com wrote:

 On Wed, Feb 8, 2012 at 3:16 PM, Adam Barth w...@adambarth.com wrote:
  On Wed, Feb 8, 2012 at 2:47 PM, Rafael Weinstein rafa...@chromium.org
 wrote:
  Here's a real-world example, that's probably relatively simple
  compared to high traffic web pages (i.e. amazon or facebook)
 
 
 http://src.chromium.org/viewvc/chrome/trunk/src/chrome/common/extensions/docs/template/api_template.html?revision=120962content-type=text%2Fplain
 
  that produces each page of the chrome extensions API doc, e.g.
 
  http://code.google.com/chrome/extensions/contextMenus.html
 
  This uses jstemplate. Do a search in the first link. Every time you
  see jsdisplay or jsselect, think template.
 
  It's a bit hard for me to understand that example because I don't know
  how jstemplate works.

 Sorry. This example wasn't really meant to be understood so much as
 observed for:

 1) A general feel for levels of nesting.
 2) That the nested components are defined where they are used.
 3) How complex the templating already is, even given that templates
 can be nested.
 3) Imagine what this page might look like if each nested component was
 pulled out and put somewhere else (possibly the top level).

 
  I'm just suggesting that rather than trying to jam a square peg
  (template) into a round hole (the HTML parser), there might be a way
  of reshaping both the peg and the hole into an octagon.

 I get that. Unfortunately, I'm useless on this front because I know
 next to nothing about HTML parsing.

 All I can offer is an opinion as to how well various declarative
 semantics will address the templating use case.

 Maybe the best analogy I can give is this: try to imagine if someone
 proposed that C looping constructs couldn't contain a body -- only a
 function call. e.g.

  for (int i = 0; i < count; i++) doMyThing();

  You can still write all the same programs, but it'd be an
  unfortunate feature to give up.

 
  Adam
 
 
  On Wed, Feb 8, 2012 at 2:36 PM, Adam Barth w...@adambarth.com wrote:
  On Wed, Feb 8, 2012 at 2:20 PM, Erik Arvidsson a...@chromium.org
 wrote:
  On Wed, Feb 8, 2012 at 14:10, Adam Barth w...@adambarth.com wrote:
  ... Do you have a concrete example of
  where nested template declarations are required?
 
   When working with tree-like structures it is common to use recursive
  templates.
 
  http://code.google.com/p/mdv/source/browse/use_cases/tree.html
 
  I'm not sure I fully understand how templates work, so please forgive
  me if I'm butchering it, but here's how I could imagine changing that
  example:
 
   === Original ===
  
   <ul class="tree">
     <template iterate id="t1">
       <li class="{{ children | toggle('has-children') }}">{{name}}
         <ul>
           <template ref="t1" iterate="children"></template>
         </ul>
       </li>
     </template>
   </ul>
  
   === Changed ===
  
   <ul class="tree">
     <template iterate id="t1">
       <li class="{{ children | toggle('has-children') }}">{{name}}
         <ul>
           <template-reference ref="t1"
               iterate="children"></template-reference>
         </ul>
       </li>
     </template>
   </ul>
 
  (Obviously you'd want a snappier name than template-reference to
  reference another template element.)
 
  I looked at the other examples in the same directory and I didn't see
  any other examples of nested template declarations.
 
  Adam
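For comparison, here is roughly what such a recursive tree template computes, as a hedged plain-JavaScript sketch — `renderTree` is a made-up stand-in for the template engine, and the output shape only approximates the jstemplate/MDV semantics:

```javascript
// Recursively render a tree: each node becomes an <li>, and its
// children (if any) become a nested <ul>, mirroring a template that
// iterates children and references itself.
function renderTree(nodes) {
  return '<ul>' + nodes.map(function (n) {
    return '<li>' + n.name +
           (n.children ? renderTree(n.children) : '') + '</li>';
  }).join('') + '</ul>';
}

console.log(renderTree([{ name: 'a', children: [{ name: 'b' }] }]));
// "<ul><li>a<ul><li>b</li></ul></li></ul>"
```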




Re: [webcomponents] HTML Parsing and the template element

2012-02-08 Thread Brian Kardell
Then why not something like

<template id=a>world</template>
<template id=b>hello <partial with=a></template>
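A hedged sketch of how such by-reference inclusion could expand — the assumed semantics, the `expand` function, and the string-based templates are all invented for illustration, not from any spec:

```javascript
// Expand a template that includes others by reference, partial-style.
// Templates are plain strings; <partial with=ID> marks an inclusion point.
function expand(templates, id) {
  return templates[id].replace(/<partial with=(\w+)>/g,
    function (_, ref) { return expand(templates, ref); });
}

var templates = { a: 'world', b: 'hello <partial with=a>' };
console.log(expand(templates, 'b')); // "hello world"
```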
On Feb 8, 2012 10:22 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Feb 8, 2012 at 5:20 PM, Brian Kardell bkard...@gmail.com wrote:

 Are you essentially suggesting partials?  Basically, one template can
 contain another only by reference?  Then you have something like a
 corresponding tag or macro-ish thing whereby you can reference
  (functionally include) one template from another?
 
  That sidesteps the whole nested template parsing pretty nicely, and it's
  what a lot of logicless template approaches do.

 I think that's what Adam is suggesting and Erik, Dimitri, and Rafael are
 advocating nested templates.

 - Ryosuke




Re: Disallowing mutation events in shadow DOM

2012-02-23 Thread Brian Kardell
Just to be clear on this:  what is the status of mutation observers?  Is
there any chance shadow DOM beats mutation observers to standardization?  I
don't think so, but just checking...  If that turned out to be the case it
could be crippling for shadow DOM until such a time..

Brian
On Feb 23, 2012 6:46 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 Sounds good. Filed a bug here:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=16096

 :DG

 On Thu, Feb 23, 2012 at 3:38 PM, Ryosuke Niwa rn...@webkit.org wrote:
  Can we disallow mutation events inside shadow DOM?
 
  There is no legacy content that depends on mutation events API inside
 shadow
  DOM, and we have a nice spec  implementation of new mutation observer
 API
  already.
 
  FYI, https://bugs.webkit.org/show_bug.cgi?id=79278
 
  Best,
  Ryosuke Niwa
  Software Engineer
  Google Inc.
 
 




Re: Disallowing mutation events in shadow DOM

2012-02-23 Thread Brian Kardell
Yeah that was pretty much my feeling but always worth checking.
On Feb 23, 2012 7:13 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 02/24/2012 02:10 AM, Brian Kardell wrote:

 Just to be clear on this:  what is the status of mutation observers?


 They are in DOM 4. The API may still change a bit, but
 there is already one implementation, and another one close to
 ready.



 If
 there any chance shadow dom beats mutation observers to
 standardization?

 AFAIK, shadow DOM is quite far from being stable.


  I don't think so, but just checking...  If that turned
 out to be the case it could be crippling shadow dom until such a time..

 Brian

 On Feb 23, 2012 6:46 PM, Dimitri Glazkov dglaz...@chromium.org
 mailto:dglaz...@chromium.org wrote:

Sounds good. Filed a bug here:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16096

:DG

On Thu, Feb 23, 2012 at 3:38 PM, Ryosuke Niwa rn...@webkit.org
mailto:rn...@webkit.org wrote:
  Can we disallow mutation events inside shadow DOM?
 
  There is no legacy content that depends on mutation events API
inside shadow
  DOM, and we have a nice spec  implementation of new mutation
observer API
  already.
 
  FYI, https://bugs.webkit.org/show_bug.cgi?id=79278
 
  Best,
  Ryosuke Niwa
  Software Engineer
  Google Inc.
 
 




Re: [webcomponents] Progress Update

2012-03-20 Thread Brian Kardell
on: http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html
 as listed below, it returns "error: revision not found: spec".

I think it should be:
http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html



On Mon, Mar 19, 2012 at 3:42 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 Hello, public-webapps!

 Here's another summary of work, happening in Web Components.

 SHADOW DOM (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978)
 * First bits of the Shadow DOM test suite have landed:
 http://w3c-test.org/webapps/ShadowDOM/tests/submissions/Google/tests.html
 * More work in spec, long tail of edge cases and bugs:
  - You can now select elements, distributed into insertion points
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
  - A bug in adjusting event's relatedTarget was discovered and fixed
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
  - As a result of examining Viewlink (an IE feature), more events are
 now stopped at the boundary
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15804)
  - Fixed a bug around scoping of styles
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16318)
 * Started restructuring CSS-related parts of the spec to accommodate
 these new features:
  - Specify a way to select host element
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15220)
  - Consider a notion of shared stylesheet
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15818)
  - Consider a flag for resetting inherited styles at the shadow
 boundary (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15820)
 * Experimental support of Shadow DOM in WebKit is slowly, but surely
 gaining multiple shadow DOM subtree support
 (https://bugs.webkit.org/show_bug.cgi?id=77503)

 HTML TEMPLATES 
 (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=15476):
 * First draft of the specification is ready for review:
 http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html.
 * Most mechanical parts are written as deltas to the HTML spec, which
 offers an interesting question of whether this spec should just be
 part of HTML.

 CODE SAMPLES (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14956):
 * Web Components Polyfill
 (https://github.com/dglazkov/Web-Components-Polyfill) now has unit
 tests and a good bit of test coverage. Contributions are appreciated.
 Even though it may not

 ADDITIONAL WAYS TO STAY UPDATED:
 * https://plus.google.com/b/103330502635338602217/
 * http://dvcs.w3.org/hg/webcomponents/rss-log
 * follow the meta bugs for each section.

 :DG




Re: [webcomponents] Progress Update

2012-03-20 Thread Brian Kardell
Whoops... that does not appear to be the same file.  Appears that the
repo points to

http://dvcs.w3.org/hg/webcomponents/raw-file/c2f82425ba8d/spec/templates/index.html

However, in that doc, what is listed as the latest editor's draft is the
one for shadow that I included below. ??


On Tue, Mar 20, 2012 at 10:09 AM, Brian Kardell bkard...@gmail.com wrote:
 on: http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html
  as listed below, it returns error: revision not found: spec.

 I think it should be:
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html



 On Mon, Mar 19, 2012 at 3:42 PM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 Hello, public-webapps!

 Here's another summary of work, happening in Web Components.

 SHADOW DOM (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978)
 * First bits of the Shadow DOM test suite have landed:
 http://w3c-test.org/webapps/ShadowDOM/tests/submissions/Google/tests.html
 * More work in spec, long tail of edge cases and bugs:
  - You can now select elements, distributed into insertion points
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
  - A bug in adjusting event's relatedTarget was discovered and fixed
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
  - As a result of examining Viewlink (an IE feature), more events are
 now stopped at the boundary
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15804)
  - Fixed a bug around scoping of styles
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16318)
 * Started restructuring CSS-related parts of the spec to accommodate
 these new features:
  - Specify a way to select host element
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15220)
  - Consider a notion of shared stylesheet
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15818)
  - Consider a flag for resetting inherited styles at the shadow
 boundary (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15820)
 * Experimental support of Shadow DOM in WebKit is slowly, but surely
 gaining multiple shadow DOM subtree support
 (https://bugs.webkit.org/show_bug.cgi?id=77503)

 HTML TEMPLATES 
 (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=15476):
 * First draft of the specification is ready for review:
 http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html.
 * Most mechanical parts are written as deltas to the HTML spec, which
 offers an interesting question of whether this spec should just be
 part of HTML.

 CODE SAMPLES 
 (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14956):
 * Web Components Polyfill
 (https://github.com/dglazkov/Web-Components-Polyfill) now has unit
 tests and a good bit of test coverage. Contributions are appreciated.
 Even though it may not

 ADDITIONAL WAYS TO STAY UPDATED:
 * https://plus.google.com/b/103330502635338602217/
 * http://dvcs.w3.org/hg/webcomponents/rss-log
 * follow the meta bugs for each section.

 :DG




Re: [webcomponents] Progress Update

2012-03-20 Thread Brian Kardell
Sure... Note that "tip" is in the wrong link I sent too; it's just
pointing to the wrong doc :)
I was just noting that I got it from the "latest version of this doc"
link in that revision, which is (currently) actually pointing to the
tip of shadow, not templates.


On Tue, Mar 20, 2012 at 10:14 AM, Jarred Nicholls jar...@webkit.org wrote:
 On Tue, Mar 20, 2012 at 10:11 AM, Brian Kardell bkard...@gmail.com wrote:

 Whoops... that does not appear to be the same file.  Appears that the
 repo points to


 http://dvcs.w3.org/hg/webcomponents/raw-file/c2f82425ba8d/spec/templates/index.html


 FYI tip will point to the latest
 revision: http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html


 However in that doc listed as the latest editors draft is the one for
 shadow I included below. ??


 On Tue, Mar 20, 2012 at 10:09 AM, Brian Kardell bkard...@gmail.com
 wrote:
  on:
  http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html
   as listed below, it returns error: revision not found: spec.
 
  I think it should be:
  http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html
 
 
 
  On Mon, Mar 19, 2012 at 3:42 PM, Dimitri Glazkov dglaz...@chromium.org
  wrote:
  Hello, public-webapps!
 
  Here's another summary of work, happening in Web Components.
 
  SHADOW DOM
  (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978)
  * First bits of the Shadow DOM test suite have landed:
 
  http://w3c-test.org/webapps/ShadowDOM/tests/submissions/Google/tests.html
  * More work in spec, long tail of edge cases and bugs:
   - You can now select elements, distributed into insertion points
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
   - A bug in adjusting event's relatedTarget was discovered and fixed
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
   - As a result of examining Viewlink (an IE feature), more events are
  now stopped at the boundary
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15804)
   - Fixed a bug around scoping of styles
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16318)
  * Started restructuring CSS-related parts of the spec to accommodate
  these new features:
   - Specify a way to select host element
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15220)
   - Consider a notion of shared stylesheet
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15818)
   - Consider a flag for resetting inherited styles at the shadow
  boundary (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15820)
  * Experimental support of Shadow DOM in WebKit is slowly, but surely
  gaining multiple shadow DOM subtree support
  (https://bugs.webkit.org/show_bug.cgi?id=77503)
 
  HTML TEMPLATES
  (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=15476):
  * First draft of the specification is ready for review:
  http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html.
  * Most mechanical parts are written as deltas to the HTML spec, which
  offers an interesting question of whether this spec should just be
  part of HTML.
 
  CODE SAMPLES
  (https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14956):
  * Web Components Polyfill
  (https://github.com/dglazkov/Web-Components-Polyfill) now has unit
  tests and a good bit of test coverage. Contributions are appreciated.
  Even though it may not
 
  ADDITIONAL WAYS TO STAY UPDATED:
  * https://plus.google.com/b/103330502635338602217/
  * http://dvcs.w3.org/hg/webcomponents/rss-log
  * follow the meta bugs for each section.
 
  :DG
 





Re: [webcomponents] HTML Parsing and the template element

2012-04-24 Thread Brian Kardell
 Yes. I think this issue is a distraction.

 Using the <script> tag for encoding opaque text contents is a hack, but
 it works as well as it can. AFAIC, the main drawback is that the
 contents cannot contain the string "</script>". This will be the case
 for any new element we come up with for this purpose.
 If someone has an idea for how to do better than this and why it's
 worth doing, please speak up.
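The "</script>" limitation quoted above is normally worked around by escaping the closing tag when template text is emitted inside a script block. A minimal sketch of that escaping step (the function name is illustrative, not any library's real API):

```javascript
// Sketch of the usual workaround for the "</script>" limitation:
// escape the closing tag so the HTML parser no longer sees the end of
// the script element; the consumer strips the backslash before use.
// escapeForScriptTag is an invented name for illustration.
function escapeForScriptTag(text) {
  // "</script" becomes "<\/script" (case preserved, case-insensitive match)
  return text.replace(/<\/(script)/gi, '<\\/$1');
}
```

The reverse (unescaping, or simply tolerating the backslash) is left to whatever consumes the template text.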

 Part of the point of parsing the template contents as HTML is exactly
 so that template contents can contain subtemplates. It's a universal
 feature of templating systems and needs to be well supported.

I know of many, many templating systems and I have simply never (aside
from MDV) seen it in exactly this light (that is templates actually
embedded in others), regardless of whether those are for within the
browser for generating HTML (or anything else) or on the server - or
even for code generation.  It seems to me that there is a rich history
of templating text, it's very useful and in every case I have seen you
have a template and that template can contain references to other
templates, not embed them...  Am I seeing this improperly?  This seems
to be the case with freemarker, velocity, mustache, handlebars, even
jsp, asp, php, etc - (there are really a lot of them, I'm just
throwing out a bunch).  This kind of approach does not seem like it
would be out of place in HTML or even XML - we think about a lot of
things that way (in terms of references).  Are there some in
particular that someone could point to that illustrate otherwise?

If you use the same element as both a marker for the template and as a
beam off which to hang instructions (<template data-iterate> for
example) then you are bound to wind up in this situation, but
otherwise I don't see why it is so controversial to just say "not
embedded, referenced".  Perhaps if described as 2 or 3 tags instead of
1 it would be easier to discuss?

1. the <template> tag: just like <script> (can't embed itself), only
<template> (so it can contain scripts and is easily identified for what
it is)
2. a <template-ref> tag which allows you to ref another template (I
expect this won't actually be used by a lot of languages since this is
generally a feature of the template language itself - but ok)
3. a <template-instruction> tag which provides your beam off which to
hang whatever (iterate, condition, etc) and allows anyone who is
interested in building templating languages that are "fully legit
HTML" to do so (I think that unless there is a lot more to the sale,
people will tend to stick with non-HTML looking iterators, conditions,
etc - but ok)
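For illustration only, the "referenced, not embedded" model of item 2 can be sketched in a few lines; the {{ref:name}} syntax and the resolveRefs helper are invented here, not part of any proposal or library:

```javascript
// Hypothetical sketch of "not embedded, referenced": templates name
// other templates instead of containing them, and references are
// resolved by lookup at expansion time. {{ref:name}} is invented syntax.
var templates = {
  row: '<li>Hello, {{name}}!</li>',
  list: '<ul>{{#each people}}{{ref:row}}{{/each}}</ul>'
};

function resolveRefs(source) {
  return source.replace(/\{\{ref:(\w+)\}\}/g, function (_, name) {
    // referenced templates may themselves contain references
    return resolveRefs(templates[name]);
  });
}
```

This is exactly the shape of Handlebars partials, JSP includes, etc.: the outer template stays flat text and composition happens by name.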

Is 1 as I describe it really controversial?  2 and 3 seem to me to be
clearly about features of a particular templating language (or at
least a particular class of them that mostly don't exist today).

I expect, however, that there might be larger ideas behind why not to
do this in the sense of web components or declarative MDV-like data
binding and it would be good to hear the larger perspective of how
that might fit together so decisions on one front don't negate good
ideas on another.

- Brian


 On Mon, Apr 23, 2012 at 4:11 PM, Ryosuke Niwa rn...@webkit.org wrote:
 Why don't we just use script elements for that then?


 On Mon, Apr 23, 2012 at 3:52 PM, Yuval Sadan sadan.yu...@gmail.com wrote:

 You mustn't forget what we're not planning for. Templates can be great for
 so many applications - generating code (JSON, JavaScript), generating
 plain-text or otherwise formatted (markdown, reStructuredText, etc.)
 content and much more. I don't think templates should be parsed by DOM
 unless explicitly requested. The simplest scenario should also be supported
 imho, that is <script type="text/html">...</script>-ish markup with access to
 textContent.


 On Thu, Apr 19, 2012 at 1:56 AM, Rafael Weinstein rafa...@google.com
 wrote:

 On Wed, Apr 18, 2012 at 2:54 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Wed, Apr 18, 2012 at 2:31 PM, James Graham jgra...@opera.com
  wrote:
  On Wed, 18 Apr 2012, Dimitri Glazkov wrote:
 
  Wouldn't it make more sense to host the template contents as normal
  descendants of the template element and to make templating APIs
  accept
  either template elements or document fragments as template input?
   Or
  to make the template elements have a cloneAsFragment() method if the
  template fragment is designed to be cloned as the first step anyway?
 
  When implementing this, making embedded content inert is probably
  the
  most time-consuming part and just using a document fragment as a
  wrapper isn't good enough anyway, since for example img elements
  load
  their src even when not inserted into the DOM tree. Currently, Gecko
  can make embedded content inert on a per-document basis.  This
  capability is used for documents returned by XHR, createDocument and
  createHTMLDocument. It looks like the template proposal will involve
  computing inertness from the ancestor chain (template ancestor or
  DocumentFragment 

Re: [webcomponents] HTML Parsing and the template element

2012-04-24 Thread Brian Kardell
On Tue, Apr 24, 2012 at 11:48 AM, Erik Arvidsson a...@chromium.org wrote:
 On Tue, Apr 24, 2012 at 06:46, Brian Kardell bkard...@gmail.com wrote:
 I know of many, many templating systems and I have simply never (aside
 from MDV) seen it in exactly this light (that is templates actually
 embedded in others), regardless of whether those are for within the
 browser for generating HTML (or anything else) or on the server - or
 even for code generation.  It seems to me that there is a rich history
 of templating text, it's very useful and in every case I have seen you
 have a template and that template can contain references to other
 templates, not embed them...  Am I seeing this improperly?  This seems
 to be the case with freemarker, velocity, mustache, handlebars, even
 jsp, asp, php, etc - (there are really a lot of them, I'm just
 throwing out a bunch).  This kind of approach does not seem like it
 would be out of place in HTML or even XML - we think about a lot of
 things that way (in terms of references).  Are there some in
 particular that someone could point to that illustrate otherwise?

 Most systems do allow it. The syntax they use might not make it clear.

 http://emberjs.com/#toc_displaying-a-list-of-items

 <ul>
  {{#each people}}
    <li>Hello, {{name}}!</li>
  {{/each}}
 </ul>

 In here there is a template between the start "each" and end "each".

While you could think of it that way, that's not generally how we
refer to it when discussing templates - right?  Just pick any and the
documentation (it seems to me) will refer to templates separately
from instructions/macros/etc that make up the templating language (the
above are an example of handlebars each block helpers).  In your
provided example (which uses handlebars) the better analogy to what I
am arguing is partials - see https://github.com/wycats/handlebars.js/
--- about 1/2 way down the page you will find:

Partials

You can register additional templates as partials, which will be used
by Handlebars when it encounters a partial ({{> partialName}}).
Partials can either be String templates or compiled template
functions. Here's an example:

var source = "<ul>{{#people}}<li>{{> link}}</li>{{/people}}</ul>";

Handlebars.registerPartial('link', '<a href="/people/{{id}}">{{name}}</a>');
var template = Handlebars.compile(source);

var data = { people: [
    { name: "Alan", id: 1 },
    { name: "Yehuda", id: 2 }
  ]};

template(data);

// Should render:
// <ul>
//   <li><a href="/people/1">Alan</a></li>
//   <li><a href="/people/2">Yehuda</a></li>
// </ul>


Again, all of these templating systems (it seems to me) draw a
distinction between these two ideas... Am I missing something?



Re: [webcomponents] HTML Parsing and the template element

2012-04-24 Thread Brian Kardell
On Tue, Apr 24, 2012 at 1:50 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Apr 24, 2012 at 10:14 AM, Brian Kardell bkard...@gmail.com wrote:
 On Tue, Apr 24, 2012 at 12:45 PM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 On Tue, Apr 24, 2012 at 9:12 AM, Clint Hill clint.h...@gmail.com wrote:
 Hmm. I have to say that I disagree that your example below shows a
 template within a template. That is IMO 1 template wherein there is
 iteration syntax.

 The iteration syntax is basically an element - the example that Arv
 gave even used element-like syntax, with open and close tags.  That
 iteration element is inside of a template.

 But in his example, and most of the ones people have been citing or
 really want to use this for, they are not tags in the HTML sense...
 Those are handlebars (or mustache or dust or haml or whatever)
 [snip further comments along similar lines]

 As long as it's handlebars that merely *look* like elements, we have
 to ship code down the wire that is simply a functional replacement for
 the DOM that the user already has.  This is suboptimal.  It's a
 cowpath that only curves this way because the straighter path was
 blocked by a boulder, and cows don't have dynamite.  We do.

I do not think it is an entirely accurate statement to say "the path
only curves this way because the straighter path was blocked by a
boulder".  There is pretty much nothing preventing existing templating
languages (client or server) from looking (inside) just like has been
suggested, and yet the most popular ones generally don't do that...
Instead, we use things for instructions/controls/macros/etc that
purposely don't look like HTML specifically to call them out as
different because that increases readability and maintainability - and
it makes them not use-specific (I can use them to generate anything,
not just DOM).  Personally, I prefer it that way - but maybe that's
just me?  It does seem to be the case that many people are talking
about though -- using those libraries with the template tag... Maybe
they can chime in... I am very interested though in knowing whether
this would essentially be a case of allowed/works, but discouraged.

I do agree that it is less optimal in the sense that you have to
lex/parse/etc but that also means there is a lot of room for
competition in variants which I think is a good thing in general.  If
things get so tight as to require that the template tag identifies
iterators, etc as well (by way of data-* attributes) - I think that
would limit the competition for templating languages to very close to
1.  At that point, might as well go ahead and define it.  At that
point - it also seems salty enough to my taste to want to trade a few
ms for something I like more... Does anyone have any kinds of metrics
illustrating just how much better optimized this would be by
comparison?  I've used templating a lot and honestly, I've never found
it to be the bottleneck or the problem.

All that said,  maybe with some time and experience I could learn to
love it as DOM too... I'm really not trying to be the only one arguing
endlessly about it, so unless someone backs me up on at least some
point here I will rest my case :)

-Brian



Re: [webcomponents] Template element parser changes = Proposal for adding DocumentFragment.innerHTML

2012-04-25 Thread Brian Kardell
It does feel very sensible that, regardless of templates, this is a useful
feature that we've long desired.
On Apr 24, 2012 8:28 AM, Rafael Weinstein rafa...@google.com wrote:

 No, I hadn't. Let me digest this thread. Much of what I'm implicitly
 asking has already been discussed. I'll repost if I have anything to
 add here. Apologies for the noise.

 On Mon, Apr 23, 2012 at 10:32 PM, Ryosuke Niwa rn...@webkit.org wrote:
  Have you looked
  at
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0663.html ?
 
  On Mon, Apr 23, 2012 at 8:39 PM, Rafael Weinstein rafa...@google.com
  wrote:
 
  The main points of contention in the discussion about the template
 element
  are
 
  1) By what mechanism are its content elements 'inert'
  2) Do template contents reside in the document, or outside of it
 
  What doesn't appear to be controversial is the parser changes which
  would allow the template element to have arbitrary top-level content
  elements.
 
  I'd like to propose that we add DocumentFragment.innerHTML which
  parses markup into elements without a context element. This has come
  up in the past, and is in itself a useful feature. The problem it
  solves is allowing templating systems to create DOM from markup
  without having to sniff the content and only innerHTML on an
  appropriate parent element (Yehuda can speak more to this).
 
  The parser changes required for this are a subset of the changes that
  Dimitri uncovered here:
 
 
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
 
  And I've uploaded a webkit patch which implements them here:
 
  https://bugs.webkit.org/show_bug.cgi?id=84646
 
  I'm hoping this is a sensible way to make progress. Thoughts?
 
 




Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Brian Kardell
Earlier in this thread I mentioned "I expect, however, that there
might be larger ideas behind why not to do this in the sense of web
components or declarative MDV-like data binding"...

I guess this is mostly a question for Dimitri or Dominic, but:
<template> is used/referenced extensively in the Web Components
Explainer[1] -- I am wondering what using template to hold something
like a mustache template (which doesn't use an HTML-like syntax for
things like iterators and thus must be used as a string) would mean
in the context of those proposals... How would it affect one's ability
to use custom elements, decorators, etc...?

- Brian

[1] - https://dvcs.w3.org/hg/webcomponents/raw-file/tip/explainer/index.html



On Wed, Apr 25, 2012 at 9:41 AM, Clint Hill clint.h...@gmail.com wrote:
 JSONP:
 <script src="/myserver/users/{userID}/profile.js" jsonp="setProfile"></script>



 On 4/25/12 2:36 AM, Kornel Lesiński kor...@geekhood.net wrote:

On Wed, 25 Apr 2012 00:48:15 +0100, Clint Hill clint.h...@gmail.com
wrote:

  1) Templates that cleanly include </script>.

What's the use-case for including script in a template? Can't code
using
the template simply invoke functions it needs?

--
regards, Kornel Lesiński








Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Brian Kardell
On Wed, Apr 25, 2012 at 1:57 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 On Wed, Apr 25, 2012 at 10:45 AM, Brian Kardell bkard...@gmail.com wrote:
 Earlier in this thread I mentioned I expect, however, that there
 might be larger ideas behind why not to
 do this in the sense of web components or declarative MDV-like data 
 binding...

 I guess this is mostly a question for Dimitri or Dominic, but:
  <template> is used/referenced extensively in the Web Components
 Explainer[1] -- I am wondering what using template to hold something
 like a mustache template (which doesn't use an HTML-like syntax for
 things like iterators and thus must be used as a string) would mean
 in the context of those proposals... How would it affect one's ability
 to use custom elements, decorators, etc...?

 Why would we want to consider a solution that requires two-pass
 parsing and thus is guaranteed to be slower and more error-prone?


The nature of my question isn't whether you/we would want to consider
replacing the current inert parse with treat it as text... I will let
someone else address that if they care to.

Regardless, however, it definitely seems to be the case that several
people here have pointed out that nothing in this prevents one from
using <template> to send mustache or handlebars templates, then just
grabbing it with innerHTML or maybe even making some special property
(originalText or something) available and using it more or less the
way we do now...

However, the explainer uses templates as part of other ideas, like
<element> and <decorator>.  The question I am asking then is: if one
chose to use the manual two-pass parse approach above, would that
affect their ability to use those templates inside of <element> or
<decorator>?

None of the examples in the explainer actually appear to use the
<template> element as anything more than a static chunk of markup, so
I'm not sure how they are applied/whether a templating language choice
even matters... Could they (meaning templates used in <element> and
<decorator>) include token replacement or iteration, etc?

-Brian



Re: [webcomponents] Template element parser changes = Proposal for adding DocumentFragment.innerHTML

2012-04-25 Thread Brian Kardell
That would be a major leap forward at the least, right?
On Apr 25, 2012 3:41 PM, Rafael Weinstein rafa...@google.com wrote:

 Ok, so from the thread that Yehuda started last year,

 There seem to be three issues:

 1) Interop (e.g. WRT IE)
 2) Defining the behavior for all elements
 3) HTML vs SVG vs MathML

 I think what Yehuda outlined earlier is basically right, and I have a
 proposal which accomplishes everything he wants in a different way and
 also addresses the three concerns above. My approach here is to not
 let perfect be the enemy of good.

 DocumentFragment.innerHTML has the following behavior. It picks an
 *implied context element* based on the tagName of the first start tag
 token which appears in the html provided. It then operates per the
 fragment case of the spec, using the implied context element as the
 context element.

 Here's the approach for picking the implied context element:

 Let the first start tag token imply the context element. The start tag
 => implied context element mapping is as follows:

 caption, colgroup, thead, tbody, tfoot => HTMLTableElement
 tr => HTMLTableBodyElement
 col => HTMLColGroupElement
 td, th => HTMLTableRowElement
 head, body => HTMLHTMLElement
 rp, rt => HTMLRubyElement
 Any other HTML tagName => HTMLBodyElement
 Any other SVG tagName => SVGElement
 Any other MathML tagName => MathElement
 Any other tagName => HTMLBodyElement

 Note a few things about this:

 *Because this is basically a pre-processing step to the existing
 fragment case, the changes to the parser spec are purely additive (no
 new insertion modes or other parser changes needed).

 *It addresses (1) by only adding new parsing behavior to new API
 (implicitly retaining compat)

 *It explains (2)

 *The only problem with (3) is the SVG style, script, a & font tags.
 Here HTML wins and I think that's fine. This problem is inherent to
 the SVG 1.1 spec and we shouldn't let it wreak more havoc on HTML.

 *This doesn't attempt to do anything clever with sequences of markup
 that contain conflicting top-level nodes (e.g. df.innerHTML =
 '<td>Foo</td><g></g>';). There's nothing clever to be done, and IMO,
 attempting to be clever is a mistake.
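The implied-context selection step described above can be sketched as a plain lookup over the first start-tag name. This only covers the HTML rows of the mapping (SVG/MathML omitted), and impliedContextTag is an assumed helper name for illustration, not part of the proposal:

```javascript
// Sketch: pick the implied context element from the first start tag
// token, per the mapping in the proposal above (HTML rows only).
// Returned values are the tag names of the context elements.
var IMPLIED_CONTEXT = {
  caption: 'table', colgroup: 'table', thead: 'table',
  tbody: 'table', tfoot: 'table',
  tr: 'tbody',
  col: 'colgroup',
  td: 'tr', th: 'tr',
  head: 'html', body: 'html',
  rp: 'ruby', rt: 'ruby'
};

function impliedContextTag(markup) {
  // First start tag token; end tags ("</...") and comments don't match.
  var m = /<([a-zA-Z][a-zA-Z0-9]*)/.exec(markup);
  if (!m) return 'body';
  return IMPLIED_CONTEXT[m[1].toLowerCase()] || 'body';
}
```

The real pre-processing step would then run the spec's fragment-parsing case with that element as the context element.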


 Here's how some of the examples from the previous thread would be
 parsed. I've tested these by simply inspecting the output of innerHTML
 applied to the implied context element from the example.

 On Thu, Nov 10, 2011 at 3:43 AM, Henri Sivonen hsivo...@iki.fi wrote:
  What about SVG and MathML elements?
 
  I totally sympathize that this is a problem with tr, but developing
  a complete solution that works sensibly even when you do stuff like
  frag.innerHTML = "<head></head>"

 head
 body

  frag.innerHTML = "<head><div></div></head>"

 head
 body
  div

  frag.innerHTML = "<frameset></frameset><a><!-- b -->"

 a
 <!-- b -->

  frag.innerHTML = "<html><body>foo</html>bar<tr></tr>"

 foobar

  frag.innerHTML = "<html><body>foo</html><tr></tr>"

 foo

  frag.innerHTML = "<div></div><tr></tr>"

 div

  frag.innerHTML = "<tr></tr><div></div>"

 tbody
  tr
 div

  frag.innerHTML = "<g><path/></g>"

 g
  path

 [Note that innerHTML doesn't work presently on SVGElements in WebKit
 or Gecko, but this last example would result if it did]


 On Tue, Apr 24, 2012 at 5:26 AM, Rafael Weinstein rafa...@google.com
 wrote:
  No, I hadn't. Let me digest this thread. Much of what I'm implicitly
  asking has already been discussed. I'll repost if I have anything to
  add here. Apologies for the noise.
 
  On Mon, Apr 23, 2012 at 10:32 PM, Ryosuke Niwa rn...@webkit.org wrote:
  Have you looked
  at
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0663.html ?
 
  On Mon, Apr 23, 2012 at 8:39 PM, Rafael Weinstein rafa...@google.com
  wrote:
 
  The main points of contention in the discussion about the template
 element
  are
 
  1) By what mechanism are its content elements 'inert'
  2) Do template contents reside in the document, or outside of it
 
  What doesn't appear to be controversial is the parser changes which
  would allow the template element to have arbitrary top-level content
  elements.
 
  I'd like to propose that we add DocumentFragment.innerHTML which
  parses markup into elements without a context element. This has come
  up in the past, and is in itself a useful feature. The problem it
  solves is allowing templating systems to create DOM from markup
  without having to sniff the content and only innerHTML on an
  appropriate parent element (Yehuda can speak more to this).
 
  The parser changes required for this are a subset of the changes that
  Dimitri uncovered here:
 
 
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
 
  And I've uploaded a webkit patch which implements them here:
 
  https://bugs.webkit.org/show_bug.cgi?id=84646
 
  I'm hoping this is a sensible way to make progress. Thoughts?
 
 




Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Brian Kardell
And when that becomes the case, then using the source text becomes
problematic, not just less efficient, right?
On Apr 25, 2012 6:15 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Wed, Apr 25, 2012 at 1:00 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  No. Also, as spec'd today, HTML Templates
  (
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
 )
  do not have anything like token replacement or iteration.

 Though, of course, we'd like to augment Templates to have those
 capabilities in the future, tied to MDV, and then Components can have
 inert MDV-driven template iteration in their shadow DOM...

 Yay for modular specs that combine together well!

 ~TJ



Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Brian Kardell
On Apr 25, 2012 7:22 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Wed, Apr 25, 2012 at 3:31 PM, Ojan Vafai o...@chromium.org wrote:
  <script type="text/html"> works for string-based templating. Special
handling
  of </script> is not a big enough pain to justify adding a template
element.
 
  For Web Components and template systems that want to do DOM based
templating
  (e.g. MDV), the template element can meet that need much better than a
  string-based approach. If nothing else, it's more efficient (e.g. it
only
  parses the HTML once instead of for each instantiation of the template).
 
  String-based templating already works. We don't need new API for it.
  DOM-based templating and Web Components do need new API in order to
work at
  all. There's no need, and little benefit, for the template element to
try to
  meet both use-cases.

 String-based templating *doesn't* work unless you take pains to make
 it work.  This is why jQuery has to employ regex hacks to make
  $('<td>foo</td>') work the way you'd expect.  Fixing that in the
 platform is a win, so authors don't have to ship code down the wire to
 deal with this (imo quite reasonable) use-case.
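The "regex hacks" Tab refers to boil down to wrapping the markup in the ancestor chain its first tag requires before handing it to innerHTML, then unwrapping. A minimal sketch of that idea (not jQuery's actual code; the wrap table is abbreviated and wrapForParsing is an invented name):

```javascript
// Sketch of the library wrapper hack: context-sensitive fragments like
// "<td>foo</td>" are wrapped in the parent chain their first tag
// requires so innerHTML doesn't drop them. Abbreviated wrap table;
// not jQuery's real implementation.
var WRAP = {
  td: ['table', 'tbody', 'tr'],
  th: ['table', 'tbody', 'tr'],
  tr: ['table', 'tbody'],
  option: ['select']
};

function wrapForParsing(html) {
  var m = /^<([a-zA-Z][a-zA-Z0-9]*)/.exec(html.trim());
  var chain = (m && WRAP[m[1].toLowerCase()]) || [];
  for (var i = chain.length - 1; i >= 0; i--) {
    html = '<' + chain[i] + '>' + html + '</' + chain[i] + '>';
  }
  // caller assigns this to innerHTML, then descends chain.length levels
  // to recover the intended nodes
  return html;
}
```

A platform-level fragment parser with an implied context element makes this whole dance unnecessary, which is Tab's point.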

Tab, are you saying: a) fixing that in script APIs for DocumentFragment
or something is an ok solution to this, or b) this means text
templates should use <template>, or c) it should be the goal to kill string
templates altogether?


 When you want to do DOM-based templating, such as for Components or
 MDV, you run into the *exact same* problems as the above, where you
 may want to template something that, in normal HTML parsing, expects
 to be in a particular context.  Solving the problem once is nice,
 especially since we get to kill another problem at the same time.  We
 aren't even compromising - this is pretty much exactly what we want
 for full DOM-based templating.

 ~TJ


Re: [webcomponents] HTML Parsing and the template element

2012-04-25 Thread Brian Kardell
Yes!!  Thanks guys... those are exactly the distinctions and
clarifications I was looking for... assuming these are acceptable
distinctions, definitions and goals.
On Apr 25, 2012 8:16 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Wed, Apr 25, 2012 at 4:33 PM, Brian Kardell bkard...@gmail.com wrote:
  Tab, are you saying:  a) fixing that in script apis for document
 fragment or
  something is an ok solution to this, or that b) this means text templates
  should use template or c) it should be the goal to kill string
 templates
  altogether?

 I just talked with Ojan in person, because we were talking past each
 other with the terms we were using.  I now understand what he was
 saying!

 When we say string-based templating, we're really saying I want to
 embed an arbitrary foreign language into HTML.  That language happens
 to be a templating language, but the exact purpose you're putting it
 to is irrelevant for our purposes here.  For this, <script type=foo>
 is not only a satisfactory existing solution to this, but it's the
 *correct* solution to this - that's precisely what <script> is
 designed to do.
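A sketch of that embedding pattern (the type value and the template
contents are illustrative; any non-executable type works):

```html
<!-- The browser ignores script elements with unrecognized types, so
     the contents stay inert text that a templating library can read
     out (e.g. via .textContent) and compile however it likes. -->
<script type="text/x-handlebars" id="row-tmpl">
  <tr><td>{{name}}</td></tr>
</script>
```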

 I was confusing this term with the idea of programmatic DOM creation
 that doesn't suck.  Right now, the right way to create a fragment
 of HTML is to use a lot of document.createElement() and
 el.appendChild().  This sucks.  Instead, everyone wants to just write
 a string containing HTML and parse it, like what innerHTML and jQuery
 do.  For this, Raf's proposal works great - it lets you use the
 innerHTML API without having to employ hacks like jQuery uses to
 ensure that things parse in the right context.

 Related closely to this is stuff like a Web Component that wants to
 fill itself with a DOM fragment.  Right now, you're required to do
 that within script.  This sucks, partially because of the
 aforementioned existing suckiness with DOM building.  Even once we fix
 that with Raf's proposal, it will still suck, because it means that
 Components are required to run script to build themselves, even if
 their DOM structure is totally static and doesn't depend on outside
 data.  We want template to help us solve this problem, by letting us
 send HTML structure *in HTML* and potentially hook it up to a
 component declaratively, so components don't need to run script unless
 they're actually doing something dynamic.  Parsing the contents of a
 template correctly requires the same mechanism that Raf is
 proposing.

 Somewhat further away, we have another proposal, MDV, which *is*
 intending to replace the basic functionality of the current
 templating libraries.  It takes something representing an inert DOM
 structure with holes punched in it for data to fill in, hooks it up to
 a JS object full of data, and pops out a fragment of real DOM with
 all the holes filled in.  This is obviously useful when done purely
 via script (the popularity of templating libraries attests to that!),
 but there's intriguing design-space around *declarative*
 templating/iteration, where you just declare a template in markup,
 tell it how to fetch a data source to use, and it handles the rest for
 you.  No script required!  This is very similar to the no-script
 Components use-case, and so it would be nice to reuse template.
 Even if we use a differently-named element, the parsing problems are
 identical, and we still need something like Raf's proposal to solve
 them.  (Even ignoring the pure-declarative case, being able to ship
 your templates in the page HTML and just grab them with script when
 you want to use it seems useful.)
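A minimal sketch of that ship-HTML-in-HTML case, assuming the
parsed-but-inert content model being discussed (the API shape that was
later standardized as template.content; element names are illustrative):

```html
<ul id="list"></ul>

<template id="item-tmpl">
  <li class="item"><img src="placeholder.png"></li>
</template>

<script>
  // The template's content is inert: the <img> above does not fetch
  // until the fragment is cloned into the live document.
  var tmpl = document.getElementById('item-tmpl');
  document.getElementById('list').appendChild(
    document.importNode(tmpl.content, true));
</script>
```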


 A text template like handlebars (in other words, a foreign language)
 should be able to use a significant fraction of the stuff that MDV
 provides.  It will have to parse itself into a DOM structure with the
 holes set up manually, but once you've done so it should be able to
 be used in many of the same contexts.  This has nothing to do with the
 template tag, though, because it's not *trying* to parse as HTML -
 they should use script.

 This isn't *completely* ideal - if you are *almost* fine with the
 functionality that MDV provides, but need just a little bit more, you
 either have to switch from template to script (which isn't
 trivial), or embed your extra functionality in the HTML via @data-* or
 something, which may be a bit clumsy.  We'll see how bad this is in
 practice, but I suspect that once MDV matures, this will become a
 minor problem.

 ~TJ



Re: Shrinking existing libraries as a goal

2012-05-17 Thread Brian Kardell
So, out of curiosity - do you have a list of things?  I'm wondering
where some efforts fall in all of this - whether they are good or bad
on this scale, etc... For example:  querySelectorAll - it has a few
significant differences from jQuery, both in terms of what it will
return (jQuery uses getElementById when the selector is a lone ID
selector, for example, but querySelectorAll doesn't behave that way if
there are multiple instances of the same id in the tree) and
performance (this example illustrates both - since jQuery is doing the
simpler thing in all cases, it is actually able to be faster, though
technically not correct, in some very difficult cases). Previously,
this was something that the browser APIs just didn't offer at all --
now they offer it, but jQuery has mitigation to do in order to use it
effectively since the two do not have parity.

On Thu, May 17, 2012 at 2:16 PM, Yehuda Katz wyc...@gmail.com wrote:

 Yehuda Katz
 (ph) 718.877.1325


 On Thu, May 17, 2012 at 10:37 AM, John J Barton
 johnjbar...@johnjbarton.com wrote:

 On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Thu, May 17, 2012 at 9:56 AM, John J Barton
  johnjbar...@johnjbarton.com wrote:
  On Thu, May 17, 2012 at 9:29 AM, Rick Waldron waldron.r...@gmail.com
  wrote:
  Consider the cowpath metaphor - web developers have made highways out
  of
  sticks, grass and mud - what we need is someone to pour the concrete.
 
  I'm confused. Is the goal shorter load times (Yehuda) or better
  developer ergonomics (Waldron)?
 
  Of course *some* choices may do both. Some may not.
 
  Libraries generally do three things: (1) patch over browser
  inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new
  features*.
 
  #1 is just background noise; we can't do anything except write good
  specs, patch our browsers, and migrate users.
 
  #3 is the normal mode of operations here.  I'm sure there are plenty
  of features currently done purely in libraries that would benefit from
  being proposed here, like Promises, but I don't think we need to push
  too hard on this case.  It'll open itself up on its own, more or less.
   Still, something to pay attention to.
 
  #2 is the kicker, and I believe what Yehuda is mostly talking about.
  There's a *lot* of code in libraries which offers no new features,
  only a vastly more convenient syntax for existing features.  This is a
  large part of the reason why jQuery got so popular.  Fixing this both
  makes the web easier to program for and reduces library weight.

 Yes! Fixing ergonomics of APIs has dramatically improved web
 programming.  I'm convinced that concrete proposals vetted by major
 library developers would be welcomed and have good traction. (Even
 better would be a common shim library demonstrating the impact).

 Measuring these changes by the numbers of bytes removed from downloads
 seems 'nice to have' but should not be the goal IMO.


 We can use bytes removed from downloads as a proxy of developer ergonomics
 because it means that useful, ergonomics-enhancing features from libraries
 are now in the platform.

 Further, shrinking the size of libraries provides more headroom for higher
 level abstractions on resource-constrained devices, instead of wasting the
 first 35k of downloading and executing on relatively low-level primitives
 provided by jQuery because the primitives provided by the platform itself
 are unwieldy.



 jjb

 
  * Yes, #3 is basically a subset of #2 since libraries aren't rewriting
  the JS engine, but there's a line you can draw between here's an
  existing feature, but with better syntax and here's a fundamentally
  new idea, which you could do before but only with extreme
  contortions.
 
  ~TJ





Re: Shrinking existing libraries as a goal

2012-05-17 Thread Brian Kardell
On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com wrote:


 On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com wrote:

 So, out of curiosity - do you have a list of things?  I'm wondering
 where some efforts fall in all of this - whether they are good or bad
 on this scale, etc... For example:  querySelectorAll - it has a few
 significant differences from jQuery both in terms of what it will
 return (jquery uses getElementById in the case that someone does #,
 for example, but querySelectorAll doesn't do that if there are
 multiple instances of the same id in the tree)


 Which is an abomination for developers to deal with, considering the ID
 attribute value must be unique amongst all the IDs in the element's home
 subtree[1] . qSA should've been spec'ed to enforce the definition of an ID
 by only returning the first match for an ID selector - devs would've learned
 quickly how that worked; since it doesn't and since getElementById is
 faster, jQuery must take on the additional code burden, via cover API, in
 order to make a reasonably usable DOM querying interface. jQuery says
 you're welcome.




 and performance (this
 example illustrates both - since jQuery is doing the simpler thing in
 all cases, it is actually able to be faster (though technically not
 correct)



 I'd argue that qSA, in its own contradictory specification, is not
 correct.

It has been argued in the past - I'm taking no position here, just
noting.  For posterity (not you specifically, but for the benefit of
those who don't follow so closely), the HTML link also references DOM
Core, which has stated for some time that getElementById should return
the _first_ element with that ID in the document (implying that there
could be more than one) [a], and despite whatever CSS has said since
day one (ids are unique in a doc) [b], a quick check in your favorite
browser will show that CSS doesn't care: it will style all elements
whose ID matches.  So basically, qSA matches CSS, which does kind of
make sense to me... I'd love to see it corrected in CSS too (first
element with that ID if there are more than one), but it has been
argued that a lot of stuff (more than we'd like to admit) would break.
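A quick illustration of the divergence (duplicate IDs are invalid
markup, but parsers, CSS, and qSA all tolerate them):

```html
<div id="dup">first</div>
<div id="dup">second</div>

<style>#dup { color: red; }</style>  <!-- CSS styles BOTH divs -->

<script>
  document.getElementById('dup');     // only the first div
  document.querySelectorAll('#dup');  // both divs - matches CSS behavior
</script>
```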

 in some very difficult ones. Previously, this was something
 that the browser APIs just didn't offer at all -- now they offer them,
 but jQuery has mitigation to do in order to use them effectively since
 they do not have parity.


 Yes, we're trying to reduce the amount of mitigation that is required of
 libraries to implement reasonable apis. This is a multi-view discussion:
 short and long term.


So can someone name specific items?   Would qSA / find been pretty
high on that list?  Is it better for jQuery (specifically) that we
have them in their current state or worse?  Just curious.

[a] - 
http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#dom-document-getelementbyid
[b] - http://www.w3.org/TR/CSS1/#id-as-selector









 Rick


 [1] http://www.whatwg.org/specs/web-apps/current-work/#the-id-attribute




 On Thu, May 17, 2012 at 2:16 PM, Yehuda Katz wyc...@gmail.com wrote:
 
  Yehuda Katz
  (ph) 718.877.1325
 
 
  On Thu, May 17, 2012 at 10:37 AM, John J Barton
  johnjbar...@johnjbarton.com wrote:
 
  On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. jackalm...@gmail.com
  wrote:
   On Thu, May 17, 2012 at 9:56 AM, John J Barton
   johnjbar...@johnjbarton.com wrote:
   On Thu, May 17, 2012 at 9:29 AM, Rick Waldron
   waldron.r...@gmail.com
   wrote:
   Consider the cowpath metaphor - web developers have made highways
   out
   of
   sticks, grass and mud - what we need is someone to pour the
   concrete.
  
   I'm confused. Is the goal shorter load times (Yehuda) or better
   developer ergonomics (Waldron)?
  
   Of course *some* choices may do both. Some may not.
  
   Libraries generally do three things: (1) patch over browser
   inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new
   features*.
  
   #1 is just background noise; we can't do anything except write good
   specs, patch our browsers, and migrate users.
  
   #3 is the normal mode of operations here.  I'm sure there are plenty
   of features currently done purely in libraries that would benefit
   from
   being proposed here, like Promises, but I don't think we need to push
   too hard on this case.  It'll open itself up on its own, more or
   less.
    Still, something to pay attention to.
  
   #2 is the kicker, and I believe what Yehuda is mostly talking about.
   There's a *lot* of code in libraries which offers no new features,
   only a vastly more convenient syntax for existing features.  This is
   a
   large part of the reason why jQuery got so popular.  Fixing this both
   makes the web easier to program for and reduces library weight.
 
  Yes! Fixing ergonomics of APIs has dramatically improved web
  programming.  I'm convinced that concrete proposals vetted by major
  library developers would be welcomed and have good

Re: Shrinking existing libraries as a goal

2012-05-17 Thread Brian Kardell
Has anyone compiled a more general and easy-to-reference list of the
stuff jQuery has to normalize across browsers new and old?  For
example: ready, event models in general, query selector differences,
etc.?
On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:



 On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote:

 On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
 
 
  On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com
 wrote:
 
  So, out of curiosity - do you have a list of things?  I'm wondering
  where some efforts fall in all of this - whether they are good or bad
  on this scale, etc... For example:  querySelectorAll - it has a few
  significant differences from jQuery both in terms of what it will
  return (jquery uses getElementById in the case that someone does #,
  for example, but querySelectorAll doesn't do that if there are
  multiple instances of the same id in the tree)
 
 
  Which is an abomination for developers to deal with, considering
 the ID
  attribute value must be unique amongst all the IDs in the element's
 home
  subtree[1] . qSA should've been spec'ed to enforce the definition of
 an ID
  by only returning the first match for an ID selector - devs would've
 learned
  quickly how that worked; since it doesn't and since getElementById is
  faster, jQuery must take on the additional code burden, via cover API,
 in
  order to make a reasonably usable DOM querying interface. jQuery says
  you're welcome.
 
 
 
 
  and performance (this
  example illustrates both - since jQuery is doing the simpler thing in
  all cases, it is actually able to be faster (though technically not
  correct)
 
 
 
  I'd argue that qSA, in its own contradictory specification, is not
  correct.

 It has been argued in the past - I'm taking no position here, just
 noting.  For posterity (not you specifically, but for the benefit of
 those who don't follow so closely), the HTML link also references DOM
 Core, which has stated for some time that getElementById should return
 the _first_  element with that ID in the document (implying that there
 could be more than one) [a] and despite whatever CSS has said since
 day one (ids are unique in a doc) [b] a quick check in your favorite
 browser will show that CSS doesn't care, it will style all IDs that
 match.  So basically - qSA matches CSS, which does kind of make sense
 to me... I'd love to see it corrected in CSS too (first element with
 that ID if there are more than one) but it has been argued that a lot
 of stuff (more than we'd like to admit) would break.

  in some very difficult ones. Previously, this was something
  that the browser APIs just didn't offer at all -- now they offer them,
  but jQuery has mitigation to do in order to use them effectively since
  they do not have parity.
 
 
  Yes, we're trying to reduce the amount of mitigation that is required of
  libraries to implement reasonable apis. This is a multi-view discussion:
  short and long term.
 

 So can someone name specific items?   Would qSA / find been pretty
 high on that list?  Is it better for jQuery (specifically) that we
 have them in their current state or worse?  Just curious.


 TBH, the current state can't get any worse, though I'm sure it will.
 Assuming you're referring to this:
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1454.html

 ... Yes, APIs like this would be improvements, especially considering the
 pace of implementation in modern browsers - hypothetically, this could be
 in wide implementation in less than a year; by then development of a sort
 of jQuery 2.0 could happen -- same API, but perhaps modern browser only??
 This is hypothetical of course.



 Rick




 [a] -
 http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#dom-document-getelementbyid
 [b] - http://www.w3.org/TR/CSS1/#id-as-selector














 
  Rick
 
 
  [1] http://www.whatwg.org/specs/web-apps/current-work/#the-id-attribute
 
 
 
 
  On Thu, May 17, 2012 at 2:16 PM, Yehuda Katz wyc...@gmail.com wrote:
  
   Yehuda Katz
   (ph) 718.877.1325
  
  
   On Thu, May 17, 2012 at 10:37 AM, John J Barton
   johnjbar...@johnjbarton.com wrote:
  
   On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. 
 jackalm...@gmail.com
   wrote:
On Thu, May 17, 2012 at 9:56 AM, John J Barton
johnjbar...@johnjbarton.com wrote:
On Thu, May 17, 2012 at 9:29 AM, Rick Waldron
waldron.r...@gmail.com
wrote:
Consider the cowpath metaphor - web developers have made
 highways
out
of
sticks, grass and mud - what we need is someone to pour the
concrete.
   
I'm confused. Is the goal shorter load times (Yehuda) or better
developer ergonomics (Waldron)?
   
Of course *some* choices may do both. Some may not.
   
Libraries generally do three things: (1) patch over browser
inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new
features*.
   
#1 is just

Re: Shrinking existing libraries as a goal

2012-05-18 Thread Brian Kardell
A related TL;DR observation...

While we may get 5 things that really help shrink the current set of
problems, they add APIs which inevitably introduce new ones.  In the
meantime, nothing stands still - lots of specs are introducing lots of
new APIs. Today's 'modern browsers' are the ones we'll all be swearing
at a year or two from now.

New APIs allow people to think about things in new ways.  Given new APIs,
new ideas will develop (either in popular existing libraries, or even whole
new ones).  Ideas spawn more ideas - offshoots, competitors, etc. In
the long term, changes like the ones being discussed will probably
serve more to mitigate libraries' otherwise inevitable continued
growth than to shrink them outright.

More interestingly though, to Tab's point -  all of the things that he
explained will happen with all of those new APIs too.  New ideas will spawn
competitors and better APIs that are normalized by libraries, etc.  They
will compete and evolve until it eventually becomes self-evident that
there is something the user community at large still much prefers to
whatever is actually implemented in the browser.  It seems to me that
this is inevitable, happens with all software, and is actually kind of
a good thing...

I'm not exactly sure what value this observation has other than to maybe
explain why I think that on this front, libraries have a few important
advantages and wonder aloud whether somehow there is a way to change the
model/process to incorporate those advantages more directly.  Particularly,
the advantages are about real world competition and less need to be
absolutely positively fully universal.

The advantages of the competition aspect, I think, cannot be
overstated - they play in at virtually every point along the whole
lifecycle. For all of the intelligence on the committees and on these
lists (and it's a lot), it's actually a pretty small group of people
ultimately proposing things for the whole world.  By their very
nature, committees (and the vendors who are heavily involved) also
have to consider the very fringe cases, and the browser vendors have
to knowingly enter into things accepting that every change means more
potential problems and has to work without breaking anything existing.
Libraries might have a small number of authors, but their user base
starts out small too. The fact that it is the author's choice to opt
in to using a library also means that library authors are much freer
to rev and version, and to say "don't do that, instead do this" for
some of the very fringe cases - or even just consciously choose that
that is not a use case they are interested in supporting.  With the
standards process, even when we get to vendor implementations,
features start out in test builds or require flags to enable.  While
that's good, it's really more of a test for uniform compliance and a
preview for/by a group of mavens.  This means that features/APIs
cannot actually be practically used in developing real pages/sites,
and that is a huge disadvantage that libraries don't generally have.
Often it isn't obvious until thousands and thousands of average
developers have had significant time to really try to live with
something in the real world (actually delivering product) that it
becomes evident that it is overly cumbersome or somehow falls short
for what turn out to be unexpectedly common cases.  Finally, the whole
point of these committees is to arrive at standards, not to compete.
In practice, however, they also commonly resolve differences after the
fact (the standard is revised to match what is implemented and now
can't change).  Libraries are inherently usually the opposite - they
want competition first and standardization only after things have wide
consensus.  These are the kinds of things that drive innovation and
competition of ideas, which ultimately help define and evolve what the
community at large sees as good.

I'm not exactly sure how you would go about changing the model/process to
encourage/foster the sort of inverse relationship while simultaneously
focusing on standards... tricky.  Maybe some of the very smart people on
this list have some thoughts?

-Brian


On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:



 On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote:

 On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com
 wrote:
 
 
  On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com
 wrote:
 
  So, out of curiosity - do you have a list of things?  I'm wondering
  where some efforts fall in all of this - whether they are good or bad
  on this scale, etc... For example:  querySelectorAll - it has a few
  significant differences from jQuery both in terms of what it will
  return (jquery uses getElementById in the case that someone does #,
  for example, but querySelectorAll doesn't do that if there are
  multiple instances of the same id in the tree)
 
 
  Which is an abomination for developers to deal

Re: [selectors-api] Consider backporting find() behavior to querySelector()

2012-06-19 Thread Brian Kardell
I am very opposed to this; they do different things.  Having both
abilities isn't a bad thing, and numerous Web sites and libraries make
use of qSA - not just because find was not available, but because
different API shapes open up interesting new possibilities, different
ways of looking at problems, etc. We solve problems with the tools at
hand, so given that qSA has been widely implemented for a long time,
you can safely assume we have found good uses for it.

There are a vast number of libraries and websites that have use cases
for find; this is especially true because selector engines that solved
those cases evolved in the wild a long time ago.  It probably would
have been nice to have had that native first, as it would have been a
more immediate help for the vast number of users, but qSA is
definitely useful.
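The scoping difference at issue, sketched (assuming find()/findAll()
adopt the :scope-relative matching described in the Selectors API
Level 2 draft):

```html
<div id="outer">
  <div id="inner"><span>hello</span></div>
</div>

<script>
  var inner = document.getElementById('inner');

  // qSA evaluates the selector against the WHOLE document, then
  // filters to descendants of `inner` - the outer div satisfies the
  // first "div", so the <span> still matches:
  inner.querySelectorAll('div span').length;  // 1

  // findAll() would evaluate the selector relative to `inner` itself
  // (as if ":scope div span"), and no div/span pair exists among its
  // descendants:
  //   inner.findAll('div span').length  ->  0
</script>
```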
On Jun 18, 2012 10:45 AM, Simon Pieters sim...@opera.com wrote:

  So http://dev.w3.org/2006/webapi/selectors-api2/ introduces
   the methods find() and findAll() in addition to querySelector()
 and querySelectorAll() and changes the scoping behavior for the former
 methods to match what people expect them to do.

 I'm not convinced that doubling the API surface is a good idea. If we were
 to do that every time we find that a shipped API has suboptimal behavior,
 the API surface on the Web would grow exponentially and we wouldn't make
 the overall situation any better. What if we find a new problem with find()
 after it has shipped? Do we introduce yet another method?

 I think we should instead either fix the old API (if it turns out to not
 Break the Web) or live with past mistake (if it turns out it does). To find
 out whether it Breaks the Web (and the breakage can't be evangelized), I
 suggest we ship the backwards-incompatible change to querySelector() in
 nightly/aurora (or equivalent) in one or more browsers for some time.

 --
 Simon Pieters
 Opera Software




Re: [webcomponents]-ish: Visibility of work in Bugzilla

2012-08-16 Thread Brian Kardell
On Thu, Aug 16, 2012 at 12:36 PM, Dimitri Glazkov dglaz...@google.com wrote:
 Folks,

 Several peeps now mentioned to me that the visibility of work in
 Bugzilla is not very high: a special step of watching an email is
 required to get all the updates in real time. I do make the regular
 update posts (as you may have noticed), but those are somewhat
 post-factum, and don't have the same feel of immediacy.

 I was wondering what we could do to mitigate the visibility issue.

 One idea is to do the same thing HTML WG did and spam public-webapps
 with all bug updates. This may get a bit noisy, because from my own
 experience, I know I can generate quite a bit of traffic by simply
 triaging issues.

 Another idea is to have a separate mailing list for this. At least,
 there will be some opt-in step that will give other
 public-webapps-nauts a choice.

 WDYT?

 :DG


I like the last idea - opt in for that would be great... I've asked
various members if they knew of a way to do that on several occasions.



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Aug 21, 2012 4:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
  Meh. I think this loses most of the CSS is so much more convenient
  benefits. It's mainly the fact that you don't have to worry about
whether
  the nodes exist yet that makes CSS more convenient.

 Note that this benefit is preserved.  Moving or inserting an element
 in the DOM should apply CAS to it.

 The only thing we're really losing in the dynamic-ness is that other
 types of mutations to the DOM don't change what CAS does, and some of
 the dynamic selectors like :hover don't do anything.


So if I had a selector .foo .bar and then some script inserted a .bar
inside a .foo - that would work... but if I added a .bar class to some
existing child of .foo it would not... is that right?
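If that reading is right, the asymmetry could be sketched like this
(hypothetical: the `<style type="text/cas">` attachment mechanism and
the element variables are illustrative, not from the proposal text):

```html
<style type="text/cas">
  /* hypothetical cascading attribute sheet */
  .foo .bar { tabindex: 0; }
</style>

<script>
  // Applied: inserting (or moving) an element runs CAS against it,
  // so a freshly appended .bar inside a .foo gets tabindex="0".
  fooEl.appendChild(newBarEl);

  // NOT applied: mutating an existing element's class list does not
  // re-run the sheet, so this element never receives the attribute.
  existingChildOfFoo.classList.add('bar');
</script>
```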

  That said, I share your worry that having this be dynamic would slow
down
  DOM modification too much.
 
  What if we only allowed a restricted set of selectors and made these
sheets
  dynamic instead? Simple, non-pseudo selectors have information that is
all
  local to the node itself (e.g. can be applied before the node is in the
  DOM). Maybe even just restrict it to IDs and classes. I think that would
  meet the majority use-case much better.

 I think that being able to use complex selectors is a sufficiently
 large use-case that we should keep it.

  Alternately, what if these applied the attributes asynchronously (e.g.
right
  before style resolution)?

 Can you elaborate?

 ~TJ



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Tue, Aug 21, 2012 at 4:32 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 1:30 PM, Brian Kardell bkard...@gmail.com wrote:
 On Aug 21, 2012 4:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 12:37 PM, Ojan Vafai o...@chromium.org wrote:
 Meh. I think this loses most of the CSS is so much more convenient
 benefits. It's mainly the fact that you don't have to worry about
 whether
 the nodes exist yet that makes CSS more convenient.

 Note that this benefit is preserved.  Moving or inserting an element
 in the DOM should apply CAS to it.

 The only thing we're really losing in the dynamic-ness is that other
 types of mutations to the DOM don't change what CAS does, and some of
 the dynamic selectors like :hover don't do anything.


 So if I had a selector .foo .bar and then some script inserted a .bar inside
 a .foo - that would work... but if I added a .bar class to some existing
 child of .foo it would not...is that right?

 Correct.  If we applied CAS on attribute changes, we'd have... problems.

 ~TJ

Because you could do something like:

.foo[x=123]{ x:  234; }
.foo[x=234]{ x:  123; }

?



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Aug 21, 2012 5:40 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 2:28 PM, Ojan Vafai o...@chromium.org wrote:
  On a somewhat unrelated note, could we somehow also incorporate jquery
style
  live event handlers here? See previous www-dom discussion about this: .
I
  suppose we'd still just want listen/unlisten(selector, handler)
methods, but
  they'd get applied at the same time as cascaded attributes. Although, we
  might want to apply those on attribute changes as well.

 Using CAS to apply an onfoo attribute is nearly the same (use a
 string value to pass the function, obviously).  It'll only allow a
 single listener to be applied, though.

 If it's considered worthwhile, we can magic up this case a bit.  CAS
 properties don't accept functions normally (or rather, as I have it
 defined in the OP, it would just accept a FUNCTION token, which is
 just the function name and opening paren, but I should tighten up that
 definition).  We could have a magic function like listen(string)
 that, when used on an onfoo attribute (more generally, on a
 host-language-defined event listener attribute) does an
 addEventListener() call rather than a setAttribute() call.

 ~TJ


Can you give some pseudo code or something that is relatively close to what
you mean here?  I'm not entirely sure I follow.


Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-21 Thread Brian Kardell
On Aug 21, 2012 6:49 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 3:44 PM, Brian Kardell bkard...@gmail.com wrote:
  On Aug 21, 2012 6:18 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
  So, in my current proposal, you can just set an onfoo attribute:
 
  ul.special  li {
onclick: alert('You clicked me!');
  evt.target.classlist.add('clicked');;
  }
 
  Here's a suggestion for a similar API that would invoke
  addEventListener instead of setAttribute:
 
  ul.special  li {
onclick: listen(alert('You clicked me!');
  evt.target.classlist.add('clicked'););
  }
 
  This feels a lot like netscape's old actionsheets proposal.  Doesn't it
  create the same footgun I mentioned above though?  Would you be blocked
off
  from accessing dom in those handlers? Or are the read only (you may
remember
  you, borris and I discussed how that might work last year)  In other
words,
  what is preventing you from writing...
 
  .foo .bar {
    onclick: listen(create a .bar and attach it as a child of evt.target);
  }

 Nothing prevents you from writing that.  That's not problematic at
 all, though.  When you click on a .bar, it creates a sibling .bar and
 gives it the same onclick.  I think you've confused yourself into
 thinking this is an infinite loop - it's not.

 Since you can't create a mutation observer with an attribute, I don't
 think you can infinite-loop yourself at all.  Even if you could, it's
 no more troublesome than the same possibility in pure JS.

 ~TJ

You are right, I was thinking that :)  I blame it on doing this all on my
cell phone.  OK.  If you click on the inner foo, now both will fire unless
you pop the bubble, right?  That is probably fine.  Do you need the quotes
if you do it inside a typed function like listen()?  Could we not parse
around that?
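
To make the preceding exchange concrete, here is a minimal sketch (not part of the proposal text) of how a Cascading Attribute Sheets processor might apply a declared property to an element, special-casing a listen(...) value to call addEventListener instead of setAttribute, as Tab describes. The function name applyCasProperty and the use of a stub element are invented for illustration.

```javascript
// Apply one CAS declaration (name: value) to an element. A value of the
// form listen(...) on an onfoo property attaches a real event listener;
// anything else falls back to setAttribute, as in the base proposal.
function applyCasProperty(el, name, value) {
  const m = /^listen\(([\s\S]*)\)$/.exec(value.trim());
  if (m && /^on/.test(name)) {
    // Compile the body string into a handler function taking `evt`.
    const handler = new Function("evt", m[1]);
    el.addEventListener(name.slice(2), handler);
  } else {
    // Plain value: behaves like setAttribute.
    el.setAttribute(name, value);
  }
}
```

With a listen() value this adds a listener (so several can accumulate), while a plain value keeps the single-handler setAttribute semantics the thread contrasts it with.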


[Web-storage] subdomains / cooperation and limits

2012-09-17 Thread Brian Kardell
I have searched the archives and been unable to resolve this to a great
answer, and I just want to make sure that my understanding is correct lest I
have to unwind things later, as someone has recently made me second-guess
what I thought was a logical understanding of things.  Essentially,
x.wordpress.com and y.wordpress.com both allocate and use space - no
problem, right?  Access is subject to the browser's general same-origin
policy (leaving aside the ability to document.domain up one), right?  If I
have two affiliate sites that communicate across an explicit trust via
postMessage - is this problematic?  I thought not, and it doesn't seem to
be - further, I cannot imagine how it could work otherwise and still be
useful for a host of common cases (like the wordpress one I mentioned
above).  I have been told that the draft contradicts my understanding, but
I don't think so.  Thought that some browser vendors / maybe Hixie could
set me straight...?

Brian
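
The "explicit trust via postMessage" pattern described above can be sketched roughly as follows. Each origin keeps its own storage, and cross-origin cooperation happens only through messages whose origin is checked against an explicit allowlist. The helper name isTrustedOrigin and the example origins are invented for illustration.

```javascript
// Origins this page has explicitly decided to cooperate with.
const TRUSTED_ORIGINS = [
  "https://x.wordpress.com",
  "https://y.wordpress.com",
];

// Pure check used before acting on any incoming message.
function isTrustedOrigin(origin, allowlist = TRUSTED_ORIGINS) {
  return allowlist.includes(origin);
}

// In a browser, the message handler would look roughly like:
//
//   window.addEventListener("message", (evt) => {
//     if (!isTrustedOrigin(evt.origin)) return; // ignore untrusted senders
//     // Safe to act: e.g. read from this origin's own localStorage and
//     // reply to the sender explicitly, naming its origin.
//     evt.source.postMessage(localStorage.getItem("shared-key"), evt.origin);
//   });
```

Nothing here relaxes the storage same-origin policy; the trust is entirely in the application-level origin check.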


Re: [webcomponents] More backward-compatible templates

2012-11-02 Thread Brian Kardell
The reason is that all of the things that you do in every template
system (iteration, conditionals, etc.) are also intended to be templates.

It kinda messes with the mind to get used to that idea; even I
occasionally need reminding...

http://memegenerator.net/instance/29459456

Brian Kardell :: @bkardell :: hitchjs.com
On Nov 2, 2012 5:18 PM, Glenn Maynard gl...@zewt.org wrote:

 I'm coming into this late, but what's the purpose of allowing nested
 templates (this part doesn't seem hard) and scripts in templates, and what
 does putting a script within a template mean?  (It sounds like it would run
 the script when you clone the template, but at least in the template
 example at the top, that doesn't look like what would happen.)  It sounds
 closer to a widget feature than a template.

 I template HTML in HTML simply by sticking templates inside a hidden div
 and cloning its contents into a DocumentFragment that I can insert wherever
 I want.  The templates never contain scripts (unless I really mean for them
 to be run at parse time).  I never nest templates this way, but there's
 nothing preventing it.

 It would be useful to have a template that works like that, which simply
 gives me a clone contents into DocumentFragment function (basically
 cloneNode(true), but returning a top-level element of DocumentFragment
 instead of HTMLTemplateElement), and hints the browser that the contents
 are a template (eg. it may want to deprioritize loading images within it).
 It wouldn't be intended to hold script, and if you did put script blocks
 inside them they'd just be run when parsed (since that's what browsers
 today will do with it).  It requires no escaping at all, and parses like
 any other tree, unlike the script approach which would just be an opaque
 block of text, so you couldn't manipulate it in-place with DOM APIs and
 it'd take a lot more work to make it viewable in developer tools, etc.

 This would essentially be a CSS rule template { display: none; } and an
 interface that gives a cloneIntoFragment (or something) method.

 With the more complicated approaches people are suggesting I assume there
 are use cases this doesn't cover--what are they?

 --
 Glenn Maynard





Re: Feedback and questions on shadow DOM and web components

2012-11-13 Thread Brian Kardell
Brian Kardell :: @bkardell :: hitchjs.com
On Nov 13, 2012 9:34 AM, Angelina Fabbro angelinafab...@gmail.com wrote:

 Hello public-webapps,

 I'm Angelina, and I've been very interested in shadow DOM and web
components for some time now. So much so that I've tried to teach people
about them several times. There's a video from JSConfEU floating around on
YouTube if you're interested. I think I managed to get the important parts
right despite my nerves. I've given this sort of talk four times now, and
as a result I've collected some feedback and questions from the developers
I've talked to.

 1. It looks like from the spec and the code in Glazkov's polyfill that if
I add and remove the 'is' attribute, the shadow tree should apply/unapply
itself to the host element.

Two things: 1. Added in markup or dynamically?  The draft says it can't be
added dynamically, just in case...  2.  The draft itself is a little unclear
on is.  Early in the text, the reference was changed to say that these
will be custom tags - in other words x-map instead of select
is=x-map.  Mozilla's x-tags is currently operating under that assumption
as well.

 I've not found this to be the case. See my examples for 2. below - I
tried applying and unapplying the 'is' attribute to remix the unordered
list using a template without success.






Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
On Mon, Mar 11, 2013 at 1:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/11/13 3:44 PM, Daniel Buchner wrote:

 Just to be clear, these are callbacks (right?), meaning synchronous
 executions on one specific node. That is a far cry from the old issues
 with mutation events and nightmarish bubbling scenarios.


 Where does bubbling come in?

 The issue with _synchronous_ (truly synchronous, as opposed to end of
 microtask or whatnot) callbacks is that they are required to fire in the
 middle of DOM mutation while the DOM is in an inconsistent state of some
 sort.  This has nothing to do with bubbling and everything to do with what
 happens when you append a node somewhere while it already has a parent and
 it has a removed callback that totally rearranges the DOM in the middle of
 your append.


So does it actually need to be sync at that level?  I'm not sure why it
does, really.  Can someone explain, just for my own clarity?

-Brian


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
Is it very difficult to provide here is an attribute I'm watching + a
callback?  Most things require us to write switches and things and receive
overly broad notifications which aren't great for performance or for code
legibility IMO.

Just curious.


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
Sorry I clicked send accidentally there... I meant to mention that I think
this is sort of the intent of attributeFilter in mutation observers


On Mon, Mar 11, 2013 at 5:59 PM, Brian Kardell bkard...@gmail.com wrote:

 Is it very difficult to provide here is an attribute I'm watching + a
 callback?  Most things require us to write switches and things and receive
 overly broad notifications which aren't great for performance or for code
 legibility IMO.

 Just curious.



 --
 Brian Kardell :: @briankardell :: hitchjs.com




-- 
Brian Kardell :: @briankardell :: hitchjs.com
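
The "here is an attribute I'm watching + a callback" idea from the messages above can be sketched as a tiny dispatcher over mutation records. The function makeAttributeDispatcher and the plain record objects are invented for illustration; in a browser the records would come from a MutationObserver created with { attributes: true, attributeFilter: [...] }.

```javascript
// watchers maps an attribute name directly to its callback, so no switch
// statement or broad notification handling is needed at the call site.
function makeAttributeDispatcher(watchers) {
  return function dispatch(records) {
    const fired = [];
    for (const record of records) {
      const cb = watchers[record.attributeName];
      if (record.type === "attributes" && cb) {
        cb(record); // only the narrowly-watched attribute reaches its callback
        fired.push(record.attributeName);
      }
    }
    return fired;
  };
}
```

Records for unwatched attributes (or non-attribute mutations) fall through untouched, which is the performance and legibility win the message argues for.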


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Brian Kardell
On Mar 11, 2013 9:03 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/11/13 8:59 PM, Brian Kardell wrote:

 Is it very difficult to provide here is an attribute I'm watching + a
 callback?


 It's not super-difficult but it adds more complication to
already-complicated code

 One big question is whether in practice the attribute that will be
changing is one that the consumer cares about or not.  If it's the former,
it makes somewhat more sense to put the checking of which attribute in the
consumer.

 -Boris

Daniel can confirm, but in all of the stuff I have seen and played with so
far it is... you want changing a component attribute to have some effect.
Internally you would use mutation observers, I think.


Re: [webcomponents]: First stab at the Web Components spec

2013-03-18 Thread Brian Kardell
On Mar 18, 2013 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Mar 18, 2013 at 7:35 AM, Karl Dubost k...@la-grange.net wrote:
  Le 7 mars 2013 à 18:25, Dimitri Glazkov a écrit :
  Here's a first rough draft of the Web Components spec:
 
https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/components/index.html
 
  Cool.
 
  I see
 
  link rel=component href=/components/heart.html
 
  Do you plan to allow the HTTP counterpart?
 
  Link: /components/heart.html; rel=component

 Does that need to be allowed?  I thought the Link header was just
 equivalent, in general, to specify a link in your head.

 ~TJ


Just bringing this up on-list as it has come up in conversations off-list:
while not currently valid for HTML, link for Web Components will work in
the body too? #justchecking


Re: [webcomponents]: Naming the Baby

2013-03-26 Thread Brian Kardell
On Mar 25, 2013 3:03 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG

I'm sure this is flawed and I will regret sharing it without more
consideration after it popped into my head - but what about something like
prototype?  Does that need explanation as to where I pulled that from, or
is it obvious?


Re: [webcomponents]: Naming the Baby

2013-03-27 Thread Brian Kardell
On Mar 27, 2013 2:27 PM, Scott Miles sjmi...@google.com wrote:

 The problem I'm trying to get at, is that while a 'custom element' has a
chance of meeting your 1-6 criterion, the thing on the other end of link
rel='to-be-named'... has no such qualifications. As designed, the target
of this link is basically arbitrary HTML.

 This is why I'm struggling with link rel='component' ...

 Scott


 On Wed, Mar 27, 2013 at 10:20 AM, Angelina Fabbro 
angelinafab...@gmail.com wrote:

 Just going to drop this in here for discussion. Let's try and get at
what a just a component 'is':

 A gold-standard component:

 1. Should do one thing well
 2. Should contain all the necessary code to do that one thing (HTML, JS,
CSS)
 3. Should be modular (and thus reusable)
 4. Should be encapsulated
 5. (Bonus) Should be as small as it can be

 I think it follows, then, that a 'web component' is software that fits
all of these criteria, but for explicit use in the browser to build web
applications. The tools provided - shadow DOM, custom elements etc. give
developers tools to create web components. In the case of:

 link rel=component href=..

 I would (as mentioned before) call this a 'component include' as I think
this description is pretty apt.

 It is true that widgets and components are synonymous, but that has been
that way for a couple of years now at least already. Widgets, components,
modules - they're all interchangeable depending on who you talk to. We've
stuck with 'components' to describe things so far. Let's not worry about
the synonyms. So far, the developers I've introduced to this subject
understood implicitly that they could build widgets with this stuff, all
the while I used the term 'components'.

 Cheers,

 - A

 On Tue, Mar 26, 2013 at 10:58 PM, Scott Miles sjmi...@google.com wrote:

 Forgive me if I'm perseverating, but do you imagine 'component' that is
included to be generic HTML content, and maybe some scripts or some custom
elements?

 I'm curious what is it you envision when you say 'component', to test
my previous assertion about this word.

 Scott


 On Tue, Mar 26, 2013 at 10:46 PM, Angelina Fabbro 
angelinafab...@gmail.com wrote:

 'Component Include'

 'Component Include' describes what the markup is doing, and I like
that a lot. The syntax is similar to including a stylesheet or a script and
so this name should be evocative enough for even a novice to understand
what is implied by it.

 - Angelina


 On Tue, Mar 26, 2013 at 4:19 PM, Scott Miles sjmi...@google.com
wrote:

 Fwiw, my main concern is that for my team and for lots of other
people I communicate with, 'component' is basically synonymous with 'custom
element'. In that context, 'component' referring to
chunk-of-web-resources-loaded-via-link is problematic, even if it's not
wrong, per se.

 We never complained about this before because Dimitri always wrote
the examples as link rel=components... (note the plural). When it was
changed to link rel=component... was when the rain began.

 Scott


 On Tue, Mar 26, 2013 at 4:08 PM, Ryan Seddon seddon.r...@gmail.com
wrote:

 I like the idea of package seems all encompassing which captures
the requirements nicely. That or perhaps resource, but then resource
seems singular.

 Or perhaps component-package so it is obvious that it's tied to
web components?

 -Ryan


 On Tue, Mar 26, 2013 at 6:03 AM, Dimitri Glazkov dglaz...@google.com
wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is
more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to
do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG








This is why I suggested prototype... It might be an arbitrary doc, but its
intent really is to serve as kind of a way to get things you intend to
insert into your page - they may or may not be components by the
definition... I saw no uptake, but that was the rationale: it's hard not to
use widget or component.


Re: [webcomponents]: Naming the Baby

2013-03-28 Thread Brian Kardell
On Mar 28, 2013 11:45 AM, Dimitri Glazkov dglaz...@google.com wrote:

 So. :

 rel type: import

 spec name:

 1) HTML Imports
 2) Web Imports

 :DG


Makes sense to me!


Re: [webcomponents]: de-duping in HTMLImports

2013-04-09 Thread Brian Kardell
On Tue, Apr 9, 2013 at 2:42 PM, Scott Miles sjmi...@google.com wrote:
 Duplicate fetching is not observable, but duplicate parsing and duplicate
 copies are observable.

 Preventing duplicate parsing and duplicate copies allows us to use 'imports'
 without a secondary packaging mechanism. For example, I can load 100
 components that each import 'base.html' without issue. Without this feature,
 we would need to manage these dependencies somehow; either manually, via
 some kind of build tool, or with a packaging system.

 If import de-duping is possible, then ideally there would also be an
 attribute to opt-out.

 Scott


 On Tue, Apr 9, 2013 at 11:08 AM, Dimitri Glazkov dglaz...@google.com
 wrote:

 The trick here is to figure out whether de-duping is observable by the
 author (other than as a performance gain). If it's not, it's a
 performance optimization by a user agent. If it is, it's a spec
 feature.

 :DG

 On Tue, Apr 9, 2013 at 10:53 AM, Scott Miles sjmi...@google.com wrote:
  When writing polyfills for HTMLImports/CustomElements, we included a
  de-duping mechanism, so that the same document/script/stylesheet is not
  (1)
  fetched twice from the network and (2) not parsed twice.
 
  But these features are not in specification, and are not trivial as
  design
  decisions.
 
  WDYT?
 
  Scott
 



For what it is worth, I think I might have opened a bug on this
already (long ago) - but it would have been mixed in with a larger
'how to load them'...

--
Brian Kardell :: @briankardell :: hitchjs.com
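
The de-duping behavior Scott describes (100 components can each import base.html without issue) can be sketched as a memoizing loader keyed by resolved URL, with an opt-out flag like the one he suggests. loadImport, the injected fetchFn, and the base URL are invented names so the sketch stays self-contained; this is not the spec's algorithm.

```javascript
// One entry per resolved URL: each import is fetched/parsed at most once.
const importCache = new Map();

function loadImport(url, fetchFn, { dedupe = true } = {}) {
  // Resolve and normalize so "./base.html" and "base.html" share an entry.
  const key = new URL(url, "https://example.com/").href;
  if (dedupe && importCache.has(key)) {
    return importCache.get(key); // reuse the single parsed copy
  }
  const doc = fetchFn(key); // stands in for fetch-then-parse
  if (dedupe) importCache.set(key, doc);
  return doc;
}
```

Because every caller gets the same cached object, the de-duping is observable (shared state), which is exactly why Dimitri notes it would be a spec feature rather than a mere optimization.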



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Brian Kardell
On Mon, Mar 18, 2013 at 5:05 PM, Scott Miles sjmi...@google.com wrote:
 I'm already on the record with A, but I have a question about 'lossiness'.

 With my web developer hat on, I wonder why I can't say:

 <div id="foo">
   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 </div>


 and then have the value of #foo.innerHTML still be

   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 I understand that for DOM, there is a wormhole there and the reality of what
 this means is new and frightening; but as a developer it seems to be
 perfectly fine as a mental model.

 We web devs like to grossly oversimplify things. :)

 Scott

I am also a Web developer, and I find that proposal (showing the shadow
root in innerHTML) really wrong/unintuitive... I think that hiding it is
actually a feature, not a detriment, and easily explainable.

I am in the a) camp.



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Brian Kardell
On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:

 So, what you quoted are thoughts I already deprecated myself in this
thread. :)

 If you read a bit further, you'll see that I realized that shadow-root is
really part of the 'outer html' of the node and not the inner html.

Yeah, sorry - a connectivity issue prevented me from seeing those until
after I sent, I guess.

  I think that is actually a feature, not a detriment and easily
explainable.

 What is actually a feature? You mean that the shadow root is invisible to
innerHTML?



Yes.

 Yes, that's true. But without some special handling of Shadow DOM you get
into trouble when you start using innerHTML to serialize DOM into HTML and
transfer content from A to B. Or even from A back to itself.


I think Dimitri's implication iii is actually intuitive - that is what I am
saying... I do think that round-tripping via innerHTML would be lossy with
respect to declarative markup used to create the instances inside the
shadow... to get that, it feels like you'd need something else, which I
think he also provided/mentioned.

Maybe I'm alone on this, but it's just sort of how I expected it to work
all along... Already, round-tripping can differ from the original source.
If you aren't careful this can bite you in the hind-quarters, but it is
actually sensible.  Maybe I need to think about this a little deeper, but I
see nothing at this stage to make me think that the proposal and its
implications are problematic.


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-13 Thread Brian Kardell
On Apr 13, 2013 8:57 PM, Daniel Buchner dan...@mozilla.com wrote:

 @Rick - if we generated a constructor that was in scope when the script
was executed, there is no need for rebinding 'this'. I'd gladly ditch the
rebinding in favor of sane, default, generated constructors.

I think we need someone to summarize where we are at this point :)

Is anyone besides Scott in favor of option

2) Invent a new element specifically for the purpose of defining prototypes?

For the record, I am not.


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Brian Kardell
Can Scott or Daniel or someone explain the challenge with creating a
normal constructor that has been mentioned a few times (Scott mentioned
has-a)?  I get the feeling that several people are playing catch-up on that
challenge and the implications that are causing worry.  Until people have
some shared understanding it is difficult, if not impossible, to reach
something acceptable all around.  It's hard to solve unknown problems.


Re: URL comparison

2013-04-28 Thread Brian Kardell
On Apr 25, 2013 1:39 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 4:34 AM, Anne van Kesteren ann...@annevk.nl
wrote:
  Background reading: http://dev.w3.org/csswg/selectors/#local-pseudo
  and http://url.spec.whatwg.org/
 
  :local-link() seems like a special case API for doing URL comparison
  within the context of selectors. It seems like a great feature, but
  I'd like it if we could agree on common comparison rules so that when
  we eventually introduce the JavaScript equivalent they're not wildly
  divergent.

 My plan is to lean *entirely* on your URL spec for all parsing,
 terminology, and equality notions.  The faster you can get these
 things written, the faster I can edit Selectors to depend on them. ^_^

  Requests I've heard before I looked at :local-link():
 
  * Simple equality
  * Ignore fragment
  * Ignore fragment and query
  * Compare query, but ignore order (e.g. ?x&y will be identical to
  ?y&x, which is normally not the case)
  * Origin equality (ignores username/password/path/query/fragment)
  * Further normalization (browsers don't normalize as much as they
  could during parsing, but maybe this should be an operation to modify
  the URL object rather than a comparison option)
 
  :local-link() seems to ask for: Ignore fragment and query and only
  look at a subset of path segments. However, :local-link() also ignores
  port/scheme which is not typical. We try to keep everything
  origin-scoped (ignoring username/password probably makes sense).

 Yes.

  Furthermore, :local-link() ignores a final empty path segment, which
  seems to mimic some popular server architectures (although those
  ignore most empty path segments, not just the final), but does not
  match URL architecture.

 Yeah, upon further discussion with you and Simon, I agree we shouldn't
 do this.  The big convincer for me was Simon pointing out that /foo
 and /foo/ have different behavior wrt relative links, and Anne
 pointing out that the URL spec still makes example.com and
 example.com/ identical.

  For JavaScript I think the basic API will have to be something like:
 
  url.equals(url2, {query:ignore-order})
  url.equals(url2, {query:ignore-order, upto:fragment}) // ignores
fragment
  url.equals(url2, {upto:path}) // compares everything before path,
  including username/password
  url.origin == url2.origin // ignores username/password
  url.equals(url2, {pathSegments:2}) // implies ignoring query/fragment
 
  or some such. Better ideas more than welcome.

 Looks pretty reasonable.  Only problem I have is that your upto key
 implicitly orders the url components, when there are times I would
 want to ignore parts out-of-order.

 For example, sometimes the query is just used for incidental
 information, and changing it doesn't actually result in a different
 page.  So, you'd like to ignore it when comparing, but pay attention
 to everything else.

 So, perhaps in addition to upto, an ignore key that takes a string
 or array of strings naming components that should be ignored?

 This way, :local-link(n) would be equivalent to:
 linkurl.equals(docurl, {pathSegments:n, ignore:userinfo})

 :local-link would be equivalent to:
 linkurl.equals(docurl, {upto:fragment})  (Or {ignore:fragment})

 ~TJ


Anne/Tab,

We created a prollyfill for this about a year ago (called :-link-local
instead of :local-link for forward compatibility):

http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

If you can specify the workings, we (public-nextweb community group) can
rev the prollyfill, help create tests, collect feedback, etc so that when
it comes time for implementation and rec there are few surprises.
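
The comparison options discussed in this thread (ignore fragment, order-insensitive query) can be sketched on top of the WHATWG URL API, which exists in both browsers and Node. urlEquals is an invented name, and the option names only loosely mirror the url.equals() strawman above; this is a sketch, not the proposed API.

```javascript
function urlEquals(a, b, { ignoreFragment = false, queryIgnoreOrder = false } = {}) {
  const ua = new URL(a);
  const ub = new URL(b);
  if (ignoreFragment) {
    // Drop fragments before comparing.
    ua.hash = "";
    ub.hash = "";
  }
  if (queryIgnoreOrder) {
    // Compare query params as sorted key/value pairs, then drop the query
    // so the remaining components are compared as usual.
    const sorted = (u) => [...u.searchParams.entries()].sort().join("&");
    if (sorted(ua) !== sorted(ub)) return false;
    ua.search = "";
    ub.search = "";
  }
  return ua.href === ub.href;
}
```

With both options off this reduces to simple serialized equality, so the stricter modes in the list above are just this function with different flags.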


Re: URL comparison

2013-05-01 Thread Brian Kardell
+ the public-nextweb list...

On Wed, May 1, 2013 at 9:00 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, Apr 28, 2013 at 12:56 PM, Brian Kardell bkard...@gmail.com wrote:
 We created a prollyfill for this about a year ago (called :-link-local
 instead of :local-link for forward compatibility):

 http://hitchjs.wordpress.com/2012/05/18/content-based-css-link/

 Cool!


 If you can specify the workings, we (public-nextweb community group) can rev
 the prollyfill, help create tests, collect feedback, etc so that when it
 comes time for implementation and rec there are few surprises.

 Did you get any feedback thus far about desired functionality,
 problems that are difficult to overcome, ..?


 --
 http://annevankesteren.nl/

We have not uncovered much on this one, other than that the few people
who commented were confused by what it meant - but we didn't really
make a huge effort to push it out there... By comparison to some
others it isn't a very 'exciting' fill (our :has(), for example, had
lots of comment, as did our mathematical attribute selectors) - but we
definitely can.  I'd like to open it up to these groups - where/how do
you think would be an effective means of collecting the necessary data?
Should we ask people to contribute comments to the list? Set up a git
project where people can pull/create issues, register tests/track fork
suggestions, etc.?  Most of our stuff for collecting information has
been admittedly all over the place (Twitter, HN, reddit, blog
comments, etc.), but this predates the nextweb group and larger
coordination, so I'm *very happy* if we can begin to change that.



--
Brian Kardell :: @briankardell :: hitchjs.com



Re: jar protocol

2013-05-10 Thread Brian Kardell
Would it be possible (not suggesting this would be the common story) to
reference a zipped asset directly via the full URL, sans a link tag?


Re: jar protocol

2013-05-10 Thread Brian Kardell


 Can you hash out a little bit more how this would work? I'm assuming you
mean something like:

   <img src='/bundle.zip/img/dahut.jpg'>

Meh, sorta - but I was missing some context on the mitigation strategies -
thanks for filling me in offline.

Still, same kinda idea, could you add an attribute that allowed for it to
specify that it is available in a bundle?  I'm not suggesting that this is
fully thought out, or even necessarily useful, just fleshing out the
original question in a potentially more understandable/acceptable way...

  <img src='/products/images/clock.jpg'
       bundle='//products/images/bundle.zip'>

That should be pretty much infinitely back-compatible, and require no
special mitigation at the server (including configuration wise which many
won't have access to) - just that they share the root concept and don't
clash, which I think is implied by the server solution too, right?  Old UAs
would ignore the unknown bundle attribute and request the src as per usual.
 New UAs could make sure that an archive was requested only once and serve
the file out of the archive.  Presumably you could just add support into
that attribute for some simple way to indicate a named link too...

Pseudo-ish code - bikeshed the details; this is just to convey the idea:

   <link rel="bundle" name="products" href="//products/images/bundle.zip">
   <img src='/img/dahut.jpg' bundle='link:products'>

I don't know if this is wise or useful, but one problem that I run into
frequently is that I see pages that mash together content where the author
doesn't get to control the head... This can make integration a little
harder than I think it should be. I'm not sure it matters, I suppose it
depends on:

a) where the link tag will be allowed to live

b) the effects created by including the same link href multiple times in
the same doc

This might be entirely sidetracking the main conversation, so I don't want
to lose that I really like where this is going so far sans any of my
questions/comments :)


Re: jar protocol

2013-05-10 Thread Brian Kardell
 I'm not sure it matters, I suppose it depends on:

 a) where the link tag will be allowed to live


 You can use link anywhere. It might not be valid, but who cares about
 validity :) It works.

Some people :)  Why does it have to be invalid when it works?  Lame, no?


 b) the effects created by including the same link href multiple times in
 the same doc

 No effect whatsoever beyond wasted resources.

Yeah, if a UA mitigated that somehow it would address this pretty well.  It
should be cached the second time, I suppose, but there has to be overhead
in re-treating it as a fresh request.  Maybe they are smart enough to deal
with that already.
 --
 Robin Berjon - http://berjon.com/ - @robinberjon

--
Brian Kardell :: @briankardell :: hitchjs.com


Re: element Needs A Beauty Nap

2013-08-13 Thread Brian Kardell
On Tue, Aug 13, 2013 at 9:15 AM, Daniel Buchner dan...@mozilla.com wrote:

 I concur. On hold doesn't mean forever, and the imperative API affords us
 nearly identical feature capability. Nailing the imperative and getting the
 APIs to market is far more important to developers at this point.
 On Aug 12, 2013 4:46 PM, Alex Russell slightly...@google.com wrote:

 As discussed face-to-face, I agree with this proposal. The declarative
 form isn't essential to the project of de-sugaring the platform and can be
 added later when we get agreement on what the right path forward is.
 Further, polymer-element is evidence that it's not even necessary so long
 as we continue to have the plumbing for loading content that is HTML
 Imports.

 +1


 On Mon, Aug 12, 2013 at 4:40 PM, Dimitri Glazkov dglaz...@google.comwrote:

 tl;dr: I am proposing to temporarily remove declarative custom element
 syntax (aka element) from the spec. It's broken/dysfunctional as
 spec'd and I can't see how to fix it in the short term.

 We tried. We gave it a good old college try. In the end, we couldn't
 come up with an element syntax that's both functional and feasible.

 A functional element would:

 1) Provide a way to declare new or extend existing HTML/SVG elements
 using markup
 2) Allow registering prototype/lifecycle callbacks, both inline and out
 3) Be powerful enough for developers to prefer it over document.register

 A feasible element would:

 1) Be intuitive to use
 2) Have simple syntax and API surface
 3) Avoid complex over-the-wire dependency resolution machinery

 You've all watched the Great Quest unfold over in public-webapps over
 the last few months.

 The two key problems that still remain unsolved in this quest are:

 A. How do we integrate the process of creating a custom element
 declaration [1] with the process of creating a prototype registering
 lifecycle callbacks?

 B. With HTML Imports [2], how do we ensure that the declaration of a
 custom element is loaded after the declaration of the custom element
 it extends? At the very least, how do we enable developers to reason
 about dependency failures?

 We thought we solved problem A first with the incredible this [3],
 and then with the last completion value [4], but early experiments
 are showing that this last completion value technique produces brittle
 constructs, since it forces specific statement ordering. Further, the
 technique ties custom element declaration too strongly to script. Even
 at the earliest stages, the developers soundly demanded the ability to
 separate ALL the script into a single, separate file.

 The next solution was to invent another quantum of time, where

 1) declaration and
 2) prototype-building come together at
 3) some point of registration.

 Unfortunately, this further exacerbates problem B: since (3) occurs
 neither at (1) or (2), but rather at some point in the future, it
 becomes increasingly more difficult to reason about why a dependency
 failed.

 Goram! Don't even get me started on problem B. By far, the easiest
 solution here would have been to make HTML Imports block on loading,
 like scripts. Unlucky for us, the non-blocking behavior is one of the
 main benefits that HTML Imports bring to the table. From here, things
 de-escalate quickly. Spirits get broken and despair rules the land.

 As it stands, I have little choice but to make the following proposal:

 Let's let declarative custom element syntax rest for a while. Let's
 yank it out of the spec. Perhaps later, when it eats more cereal and
 gathers its strength, it shall rise again. But not today.

 :DG

 [1]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-create-custom-element-declaration
 [2]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/imports/index.html
 [3]:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0152.html
 [4]:
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-last-completion-value




+1 - this is my preferred route anyway.  Concepts like register and shadow
DOM are the core elements... Give projects like x-tags and Polymer, and even
projects like Ember and Angular, some room to help lead the charge on asking
those questions and helping to offer potentially competing answers -- there
need be no rush to standardize at the high level at this point, IMO.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [webcomponents]: The Shadow Cat in the Hat Edition

2013-09-09 Thread Brian Kardell
On Sep 9, 2013 9:32 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 9, 2013 at 6:20 PM, Scott Miles sjmi...@google.com wrote:
  I'd greatly prefer to stick with the current plan of having to mark
  things to be exposed explicitly,
 
  Fwiw, we tried that and got in the weeds right away. See Dimitri's post
for
  details. I'm afraid of trading real-life pain (e.g. exploding part
lists)
  for what is IMO an unreal advantage (e.g. the notion components can be
  upgraded and assured never to break is just not realistic).

 Did you miss my suggestion that we allow this with a third value on
 the current allow selectors through switch?

 ~TJ


I am worried that I am not understanding one or both of you properly and
honestly ... I am feeling just a bit lost.

For purposes here, consider that I have some kind of a special table component,
complete with sortable and configurable columns.  When I use that, I
honestly don't want to know what is in the sausage - just how to style or
potentially deal with some parts.  If I start writing things depending on
the gory details, shame on me.  If you leave me no choice but to do that,
shame on you.  You can fool me once but you can't get fooled again... Or
something.

Ok, so, is there a problem with things at that simple level, or do the
problems only arise as I build a playlist component out of that table and
some other stuff, and in turn a music player out of that?  Is that the
exploding parts list?  Why is exposing explicitly bad?


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 9:34 AM, Anne van Kesteren ann...@annevk.nl wrote:

 As far as I can tell Element.prototype.matches() is not deployed yet.
 Should we instead make selectors first-class citizens, just like
 regular expressions, and have this:

    var sel = new Selectors("i > love > selectors, so[much]")
   sel.test(node)

 That seems like a much nicer approach.

 (It also means this can be neatly defined in the Selectors
 specification, rather than in DOM, which means less work for me. :-))


 --
 http://annevankesteren.nl/


I like the idea, but matches has been in release builds for a long time,
right?  Hitch uses it.
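For what it's worth, the proposed shape is easy to approximate as a small library today - a sketch, built on whichever (possibly prefixed) matchesSelector variant a node exposes; the per-node fallback chain is my own illustration, not part of Anne's proposal:

```javascript
// Sketch of the proposed first-class Selectors object, approximated
// on top of the matches/matchesSelector variants browsers ship.
function Selectors(selectorText) {
  this.selectorText = String(selectorText);
}

Selectors.prototype.test = function (node) {
  // Fall back through the vendor-prefixed names still in the wild.
  var fn = node.matches || node.matchesSelector ||
           node.webkitMatchesSelector || node.mozMatchesSelector ||
           node.msMatchesSelector;
  return fn.call(node, this.selectorText);
};

// Usage, mirroring the proposal:
// var sel = new Selectors('i > love > selectors, so[much]');
// sel.test(someNode);
```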


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 11:11 AM, James Graham ja...@hoppipolla.co.uk wrote:

 On 11/09/13 15:50, Brian Kardell wrote:

 Yes, to be clear, that is what i meant. If it is in a draft and
 widely/compatibly implemented and deployed in released browsers not
 behind a flag - people are using it.


 If people are using a prefixed — i.e. proprietary — API there is no
requirement that a standard is developed and shipped for that API. It's
then up to the individual vendor to decide whether to drop their
proprietary feature or not.



Please note carefully what I said.  I don't think I am advocating anything
that hasn't been discussed a million times.  In theory what you say was the
original intent.  In practice, that's not how things went.  Browsers have
changed what used to be standard practice to help avoid this in the
future.  We are making cross-browser prollyfills outside browser
implementations to avoid this in the future.  What is done is done though.
The reality is that real and not insignificant production code uses
prefixed things that meet the criteria I stated.  If removed, those will
break.  If something with the same name but different signature or
functionality goes out unprefixed, things will break.


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Wed, Sep 11, 2013 at 12:26 PM, Brian Kardell bkard...@gmail.com wrote:


 On Sep 11, 2013 11:11 AM, James Graham ja...@hoppipolla.co.uk wrote:
 
  On 11/09/13 15:50, Brian Kardell wrote:
 
  Yes, to be clear, that is what i meant. If it is in a draft and
  widely/compatibly implemented and deployed in released browsers not
  behind a flag - people are using it.
 
 
  If people are using a prefixed — i.e. proprietary — API there is no
 requirement that a standard is developed and shipped for that API. It's
 then up to the individual vendor to decide whether to drop their
 proprietary feature or not.
 
 

 Please note carefully what i said.  I don't think I am advocating anything
 that hasn't been discussed a million times.  In theory what you say was the
 original intent.  In practice, that's not how things went.  Browsers have
 changed what used to be standard practice to help avoid this in the
 future.  We are making cross-browser prollyfills outside browser
 implementations to avoid this in the future.  What is done is done though.
 The reality is that real and not insignificant production code uses
 prefixed things that meet the criteria I stated.  If removed, those will
 break.  If something with the same name but different signature or
 functionality goes out unprefixed, things will break.


Mozillians, just for example:
https://github.com/x-tag/x-tag/blob/master/dist/x-tag-components.js#L2161

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 12:29 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/11/13 12:26 PM, Brian Kardell wrote:

 If something with the same name but different
 signature or functionality goes out unprefixed, things will break.


 Why is this, exactly?  Is code assuming that mozFoo, webkitFoo and
foo are interchangeable?  Because they sure aren't, in general.


 In any case, there is no mozMatches or webkitMatches, so matches
should be ok.


As things mature to the manner/degree i described, yes.  But, this isn't
surprising, right?  When things reach this level, we feel pretty
comfortable calling them polyfills which do exactly what you describe: We
assume prefixed and unprefixed are equivalent.  We also feel comfortable
listing them on sites like caniuse.com and even working group members have
products that effectively just unprefix.  It's the same logic used by
Robert O'Callahan regarding unprefixing CSS selectors[1] and we ended up
doing a lot of that - and even prior to that there was talk of unprefixing
.matchesSelector as .matches right here on public web-apps[2].  When things
reach this point, we really have to consider what is out there and how
widely it has been promoted for how long.  I think it is too late for
matchesSelector for sure, and I'd be lying if I said I wasn't worried about
.matches().  I for one am very glad we are taking approaches that help us
not be in this boat - but the idea that something can be called as a
constructor or not isn't new either - can we make it backwards compat and
get the best of both worlds?  Given the similarities in what they do, it
doesn't seem to me like implementation is a problem.  At the very least, I
feel like we need to retain .matchesSelector for some time.

[1] http://lists.w3.org/Archives/Public/www-style/2011Nov/0271.html

[2] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1146.html


 -Boris




Re: Making selectors first-class citizens

2013-09-11 Thread Brian Kardell
On Sep 11, 2013 10:04 AM, Robin Berjon ro...@w3.org wrote:

 On 11/09/2013 15:56 , Anne van Kesteren wrote:

 On Wed, Sep 11, 2013 at 2:52 PM, Brian Kardell bkard...@gmail.com
wrote:

 I like the idea, but matches has been in release builds for a long time,
 right?  Hitch uses it.


  <!DOCTYPE html><script>w('matches' in document.body)</script>
 http://software.hixie.ch/utilities/js/live-dom-viewer/

 false in both Firefox and Chrome.


 See http://caniuse.com/#search=matches. You do get mozMatchesSelector
(and variants) in there.


 --
 Robin Berjon - http://berjon.com/ - @robinberjon

Yes, to be clear, that is what I meant. If it is in a draft and
widely/compatibly implemented and deployed in released browsers, not behind
a flag - people are using it.  That's part of why we switched the general
philosophy, right? No doubt one can be a shorthand for the better approach
though... right?


Re: Making selectors first-class citizens

2013-09-12 Thread Brian Kardell
On Sep 12, 2013 2:16 AM, Garrett Smith dhtmlkitc...@gmail.com wrote:

 FWD'ing to put my reply back on list (and to others)...

 On Sep 11, 2013 6:35 AM, Anne van Kesteren ann...@annevk.nl wrote:

 As far as I can tell Element.prototype.matches() is not deployed yet.
 Should we instead make selectors first-class citizens, just like
 regular expressions, and have

  var sel = new Selectors("i > love > selectors, so[much]")
 sel.test(node)

 # 2007 David Anderson proposed the idea.

 That seems like a much nicer approach.

 (It also means this can be neatly defined in the Selectors
 specification, rather than in DOM, which means less work for me. :-))

  # 2009 the API design re-emerged
 http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/1445.html

 # 2010 Selectors explained in an article:
 http://www.fortybelow.ca/hosted/dhtmlkitchen/JavaScript-Query-Engines.html
  (search "Query Matching Strategy").
 --
 Garrett
 Twitter: @xkit
 personx.tumblr.com



I may be the only one, but... I am unsure what you are advocating here,
Garrett.


Re: Making selectors first-class citizens

2013-09-13 Thread Brian Kardell
On Sep 13, 2013 4:38 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Sep 11, 2013, at 11:54 AM, Francois Remy r...@adobe.com wrote:

 For the record, I'm equally concerned about renaming `matchesSelector`
into `matches`.

 A lot of code now rely on a prefixed or unprefixed version of
`matchesSelector` as this has shipped in an interoperable fashion in all
browsers now.


 Which browser ships matchesSelector unprefixed?
 Neither Chrome, Firefox, nor Safari ship matchesSelector unprefixed.


 On Sep 13, 2013, at 1:12 PM, Francois Remy r...@adobe.com wrote:

 A lot of code now rely on a prefixed or unprefixed version of
 `matchesSelector` as this has shipped in an interoperable fashion in
all
 browsers now.


 Unprefixed?


 Yeah. Future-proofing of existing code, mostly:



https://github.com/search?q=matchesSelector+msMatchesSelector&type=Code&ref=searchresults


 That’s just broken code.  One cannot speculatively rely on unprefixed DOM
functions until they’re actually spec’ed and shipped.
 I have no sympathy or patience to maintain the backward compatibility
with the code that has never worked.


I am not really sure why you feel this way - this piece of the draft is
tremendously stable, and as interoperable as anything else.  The decision to
make it matches was old and popular.  It's not just random Joe Schmoe doing
this; it's illustrated and recommended by respected sources... For example
http://docs.webplatform.org/wiki/dom/methods/matchesSelector

Essentially, this reaches the level of de facto standard in my book.  All
it really lacks is a vote.

Prefixes bound to vendors which may or may not match final and may or may
not disappear when final comes around or just whenever, in release channel
is exactly why most people are against this sort of thing now.  This
predates that shift and regardless of whether we like it, stuff will break
for people who were just following examples and going by the
implementation/interop and  standard perception of stability.  Websites
will stop working correctly - some will never get fixed - others will waste
the time of hundreds or thousands of devs... This isn't something that was
implemented by 1 or 2 browsers, was hotly contested or has only been around
a few months: This is out there a long time and implemented a long time.

 Furthermore, the existing code will continue to work with the prefixed
versions since we’re not suggesting to drop the prefixed versions.

But, you could just as easily because it is prefixed and experimental.  I
guess i am just not understanding why we are ok to keep around the crappy
named prefix ones but not alias the better name that is widely documented
and definitely used just so we can bikeshed a bit more?  If there is also
something better, let's find a way to add without needing to mess with this.

 - R. Niwa


So.. Ok to keep prefix working in all browsers, but not just add it?  For
the most part, isn't that just like an alias?


Re: Making selectors first-class citizens

2013-09-14 Thread Brian Kardell
On Sep 14, 2013 6:07 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Sat, Sep 14, 2013 at 4:26 AM, Brian Kardell bkard...@gmail.com wrote:
  I am not really sure why you feel this way - this piece of the draft is
  tremendously stable, and interoperable as anything else.  The decision
to
  make it matches was old and popular.  It's not just random joe schmoe
doing
  this, it's illustrated and recommended by respected sources... For
example
  http://docs.webplatform.org/wiki/dom/methods/matchesSelector

 1) I don't think that's a respected source just yet. 2) When I search
 for matchesSelector on Google I get
 https://developer.mozilla.org/en-US/docs/Web/API/Element.matches which
 reflects the state of things much better. Note that the name
 matchesSelector has been gone from the standard for a long time now.


  So.. Ok to keep prefix working in all browsers, but not just add it?
 For
  the most part, isn't that just like an alias?

 Depends on the implementation details of the prefixed version. FWIW,
 I'd expect Gecko to remove support for the prefixed version. Maybe
 after some period of emitting warnings. We've done that successfully
 for a whole bunch of things.


 --
 http://annevankesteren.nl/

I think there may be confusion because of where in the thread I responded -
it was unclear who I was responding to (multiple people).  I pointed to the
web platform link because it is an example of a respected source: a) showing
how to write it for forward compat b) assuming that, based on the
old/popular decision, it would be called matches.

I didn't use the moz ref because I think it is misleading in that: a) unlike
a *lot* of other moz refs, it doesn't show anything regarding using it with
other prefixes/unprefixing b) the state of that doc now still wouldn't be
what someone referenced in a project they wrote 6 months or a year ago.

My entire point is that it seems, unfortunately, in this very specific
case, kind of reasonable that:
A) Element.prototype.matches() has to mean what .mozMatchesSelector() means
today.  It shouldn't be reconsidered, repurposed, or worrisome.
B) Enough stuff assumes Element.prototype.matchesSelector() to cause me
worry that it will prevent unprefixing.
C) We could bikeshed details all day long, but why not just add both where
one is an alias for the other.  Then, what Anne said about dropping prefix
over time becomes less troubling as the number of people who did neither
and don't rev becomes vanishingly small (still, if it is easy why drop at
all).

Very succinctly, I am suggesting:
.matchesSelector be unprefixed, .matches is an alias, and docs just say "see
matchesSelector, it's an alias."  Then no one breaks, and we avoid this in
the future by following better practices.
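Concretely, the aliasing would amount to something like the following prollyfill-style sketch (my own illustration, not spec text; `aliasMatches` is an invented name, and in a browser one would apply it to Element.prototype):

```javascript
// Sketch: expose both names, each an alias for whatever native
// (possibly prefixed) implementation exists on the given prototype.
function aliasMatches(proto) {
  var native = proto.matches || proto.matchesSelector ||
               proto.webkitMatchesSelector || proto.mozMatchesSelector ||
               proto.msMatchesSelector;
  if (native) {
    if (!proto.matches) proto.matches = native;
    if (!proto.matchesSelector) proto.matchesSelector = native;
  }
  return proto;
}

// In a browser: aliasMatches(Element.prototype);
// Old code calling .matchesSelector() and new code calling .matches()
// would then both reach the same native implementation.
```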


Re: should mutation observers be able to observe work done by the html parser

2013-09-16 Thread Brian Kardell
Was there ever agreement on this old topic -
http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0618.html -
whether by de facto implementation or spec agreement?  I am not seeing
anything in the draft, but maybe I am missing it...


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 2:51 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Sep 13, 2013, at 8:26 PM, Brian Kardell bkard...@gmail.com wrote:


 On Sep 13, 2013 4:38 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Sep 11, 2013, at 11:54 AM, Francois Remy r...@adobe.com wrote:
 
  For the record, I'm equally concerned about renaming `matchesSelector`
 into `matches`.
 
  A lot of code now rely on a prefixed or unprefixed version of
 `matchesSelector` as this has shipped in an interoperable fashion in all
 browsers now.
 
 
  Which browser ships matchesSelector unprefixed?
  Neither Chrome, Firefox, nor Safari ship matchesSelector unprefixed.
 
 
  On Sep 13, 2013, at 1:12 PM, Francois Remy r...@adobe.com wrote:
 
  A lot of code now rely on a prefixed or unprefixed version of
  `matchesSelector` as this has shipped in an interoperable fashion in
 all
  browsers now.
 
 
  Unprefixed?
 
 
  Yeah. Future-proofing of existing code, mostly:
 
 
 
  https://github.com/search?q=matchesSelector+msMatchesSelector&type=Code&ref=searchresults
 
 
  That’s just broken code.  One cannot speculatively rely on unprefixed
  DOM functions until they’re actually spec’ed and shipped.
  I have no sympathy or patience to maintain the backward compatibility
 with the code that has never worked.
 

 I am not really sure why you feel this way - this piece of the draft is
 tremendously stable, and interoperable as anything else.

 It's not interoperable at all. No vendor has ever shipped matchesSelector
 unprefixed as far as I know.  i.e. it didn't work anywhere unless you
 explicitly relied upon prefixed versions.

 Prefixes bound to vendors which may or may not match final and may or may
 not disappear when final comes around or just whenever, in release channel
 is exactly why most people are against this sort of thing now.  This
 predates that shift and regardless of whether we like it, stuff will break
 for people who were just following examples and going by the
 implementation/interop and  standard perception of stability.  Websites
 will stop working correctly - some will never get fixed - others will waste
 the time of hundreds or thousands of devs...

 Anyone using the prefixed versions should have a fallback path for old
 browsers that doesn't support it.  If some websites will break, then we'll
 simply keep the old prefixed version around but this is essentially each
 vendor's decision.  Gecko might drop sooner than other vendors for example.

 So.. Ok to keep prefix working in all browsers, but not just add it?  For
 the most part, isn't that just like an alias?

 Whether a browser keeps a prefixed version working or not is each vendor's
 decision.  Given that the unprefixed version has never worked, I don't see
 why we want to use the name matchesSelector as opposed to matches.

 - R. Niwa



I think the responses/questions are getting confused.  I'm not sure about
others, but my position is actually not that complicated:  This feature has
been out there and interoperable for quite a while - it is prefixed
everywhere and called matchesSelector.  Some potentially significant group
of people assumed that when it was unprefixed it would be called matches
and others matchesSelector.  Whatever we think people should do in terms
of whether there is a fallback or what not, we know reality often doesn't
match that - people support a certain version forward.  However much we'd
like people to switch, lots of websites are an investment that doesn't get
revisited for a long time.  Thus: 1) let's not try to repurpose matches for
anything that doesn't match this signature (I thought I heard someone
advocating that early on) 2) let's make sure we don't disable those
prefixes and risk breaking stuff that assumed improperly ~or~ if possible -
since this is so bikesheddy, let's just make an alias in the spec given the
circumstances.



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Sep 16, 2013 3:46 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 12:03 PM, Brian Kardell bkard...@gmail.com
wrote:
  I think the responses/questions are getting confused.  I'm not sure
about
  others, but my position is actually not that complicated:  This feature
has
  been out there and interoperable for quite a while - it is prefixed
  everywhere and called matchesSelector.

 No, it's called *MatchesSelector, where * is various vendor prefixes.

Yeah, that is more accurately what I intended to convey - the delta being
the selector part.

  Some potentially significant group
  of people assumed that when it was unprefixed it would be called
matches
  and others matchesSelector.

 Regardless of what they assumed, there's presumably a case to handle
 older browsers that don't support it at all.  If the scripts guessed
 wrongly about what the unprefixed name would be, then they'll fall
 into this case anyway, which should be okay.

Yes, as long as prefixes stay around, and we don't repurpose
.matches for another use, that's true.  I thought someone suggested the
latter earlier in the thread(s); have to go back and look.

 If they didn't support down-level browsers at all, then they're
 already broken for a lot of users, so making them broken for a few
 more shouldn't be a huge deal. ^_^

This seems like a cop-out if there is an easy way to avoid breaking them.
 Just leaving the prefixed ones there goes a long way, but I think we've
shown that some libs and uses happened before the decision to switch
to .matches, so they forward-estimated that it would be .matchesSelector
and people used them (or maybe they've used them before the lib was
updated).  It seems really easy to unprefix matchesSelector, and say
"see matches, it's an alias" to prevent breakage.  If I'm alone on that,
I'm not going to keep beating it to death, it just seems easily forward
friendly.  I know I've gotten calls for some mom and pop type project where
I guessed wrong on early standards in my younger days and, well - it sucks.
 I'd rather not put anyone else through that pain unnecessarily if there is
a simple fix.  Maybe I am wrong about the level of simplicity, but - it
seems really bikesheddy anyway.

 ~TJ


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 5:43 PM, Scott González scott.gonza...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 5:33 PM, Brian Kardell bkard...@gmail.com wrote:

 I think Francois shared a github search which shows almost 15,500 uses
 expecting matchesSelector.


 As is generally the case, that GitHub search returns the same code
 duplicated thousands of times. From this search, it's impossible to tell
 which are forks of libraries implementing a polyfill or shim, which are
 projects that actually get released, which are projects that will never be
 released, and which will update their dependencies in a timely fashion
 (resulting in use of the proper method). It seems like a fair amount of
 these are actually just a few polyfills or different versions of jQuery.
 These results are also inflated by matches in source maps.



That's a good observation.  I hadn't considered that.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Making selectors first-class citizens

2013-09-16 Thread Brian Kardell
On Mon, Sep 16, 2013 at 4:29 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 1:05 PM, Brian Kardell bkard...@gmail.com wrote:
  If they didn't support down-level browsers at all, then they're
  already broken for a lot of users, so making them broken for a few
  more shouldn't be a huge deal. ^_^
 
  This seems like a cop out if there is an easy way to avoid breaking them.
  Just leaving the prefixed ones there goes a long way, but I think we've
  shown that some libs and uses either happened before the decision to
 switch
  to .matches so they forward estimated that it would be .matchesSelector
 and,
  people used them (or maybe they've used them before the lib was updated
  even).  It seems really easy to unprefix matchesSelector, and say see
  matches, it's an alias and prevent breakage.  If I'm alone on that, I'm
 not
  going to keep beating it to death, it just seems easily forward
 friendly.  I
  know I've gotten calls for some mom and pop type project where I guessed
  wrong on early standards in my younger days and, well - it sucks.  I'd
  rather not put anyone else through that pain unnecessarily if there is a
  simple fix.  Maybe I am wrong about the level of simplicity, but - it
 seems
  really bikesheddy anyway.

 Aliasing cruft is *often* very simple to add; that's not the point.
 It's *cruft*, and unnecessary at that.  Aliasing is sometimes a good
 idea, if you have a well-supported bad name and there's a really good
 alternate name you want to use which is way more consistent, etc.
 This isn't the case here - you're suggesting we add an alias for a
 term that *doesn't even exist on the platform yet*.



I feel like you are taking it to mean that I am advocating aliasing
everywhere for everything, which is simply not my intent.  I am saying
that in this one very particular case, because of the timing of things, it
seems like it would be a good idea to alias and be done with it.


 There are
 literally zero scripts which depend on the name matchesSelector,
 because it's never worked anywhere.  They might depend on the prefixed
 variants, but that's a separate issue to deal with.


I think Francois shared a github search which shows almost 15,500 uses
expecting matchesSelector.  I think we all agree these should work just
fine as long as prefixes remain - but that's the point... With that many,
why worry about when someone wrote their code or unprefixing or lots more
emails.  Seems an acceptable amount of cruft to me in this case.  Having
said that, I promise I will make no further case :)




 ~TJ




-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Brian Kardell
Mixed response here...

 I love the idea of making HTML imports *not* block rendering as the
default behavior
In terms of custom elements, this creates, as a standard, the dreaded FOUC
problem about which a whole different group of people will be blogging and
tweeting... Right?  I don't know that the current solution is entirely
awesome; I'm just making sure we are discussing the same fact.  Also, links
in the head and links in the body both work: though the spec disallows the
latter, it is de facto supported - the former blocks, the latter doesn't, I
think.
 This creates some interesting situations for people who use something
like a CMS, where they don't get to own the head upfront.

 So, for what it's worth, the Polymer team has the exact opposite
desire. I of course acknowledge use cases
 where imports are being used to enhance existing pages, but the assertion
that this is the primary use case is  at least arguable.

Scott, is that because of what I said above (why Polymer has the exact
opposite desire)?

  if we allow Expressing the dependency in JS then why doesn't 'async'
(or 'sync') get us both what we want?

Just to kind of flip this on its head a bit - I feel like it is maybe
valuable to think that we should worry about how you express it in JS
*first* and worry about declarative sugar for one or more of those cases
after.  I know it seems the boat has sailed on that just a little with
imports, but nothing is really final, else I think we wouldn't be having
this conversation in the first place.  Is it plausible to excavate an
explanation for the imports magic, define a JS API, and then see how we
tweak that to solve all the things?


Re: [HTML Imports]: what scope to run in

2013-11-19 Thread Brian Kardell
On Nov 19, 2013 2:22 AM, Ryosuke Niwa rn...@apple.com wrote:


 On Nov 19, 2013, at 2:10 PM, Dimitri Glazkov dglaz...@chromium.org
wrote:

 On Mon, Nov 18, 2013 at 8:26 PM, Ryosuke Niwa rn...@apple.com wrote:

 We share the concern Jonas expressed here as I've repeatedly mentioned
on another threads.

 On Nov 18, 2013, at 4:14 PM, Jonas Sicking jo...@sicking.cc wrote:

 This has several downsides:
 * Libraries can easily collide with each other by trying to insert
 themselves into the global using the same property name.
 * It means that the library is forced to hardcode the property name
 that it's accessed through, rather allowing the page importing the
 library to control this.
 * It makes it harder for the library to expose multiple entry points
 since it multiplies the problems above.
 * It means that the library is more fragile since it doesn't know what
 the global object that it runs in looks like. I.e. it can't depend on
 the global object having or not having any particular properties.


 Or for that matter, prototypes of any builtin type such as Array.

 * Internal functions that the library does not want to expose require
 ugly anonymous-function tricks to create a hidden scope.


 IMO, this is the biggest problem.

 Many platforms, including Node.js and ES6 introduces modules as a way
 to address these problems.


 Indeed.

 At the very least, I would like to see a way to write your
 HTML-importable document as a module. So that it runs in a separate
 global and that the caller can access exported symbols and grab the
 ones that it wants.

 Though I would even be interested in having that be the default way of
 accessing HTML imports.


 Yes!  I support that.

 I don't know exactly what the syntax would be. I could imagine
something like

 In markup:
 <link rel="import" href="..." id="mylib">

 Once imported, in script:
 new $('mylib').import.MyCommentElement;
 $('mylib').import.doStuff(12);

 or

 In markup:
 <link rel="import" href="..." id="mylib" import="MyCommentElement doStuff">

 Once imported, in script:
 new MyCommentElement;
 doStuff(12);


 How about this?

 In the host document:
 <link rel="import" href="foo.js" import="foo1 foo2">
 <script>
 foo1.bar();
 foo2();
 </script>

 In foo.js:
 module foo1 {
   export function bar() {}
 }
 function foo2() {}


 I think you just invented the <module> element:
https://github.com/jorendorff/js-loaders/blob/master/rationale.md#examples


 Putting the backward compatibility / fallback behavior concern with
respect to the HTML parsing algorithm aside, the current proposal appears
to only support js files.  Are you proposing to extend it so that it can
also load HTML documents just like link[rel=import] does?


I think James Burke proposes something to that effect:
https://gist.github.com/jrburke/7455354#comment-949905 (the relevant bit is
in reply to me, comment #4, if I understand the question)
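(As an aside: the "ugly anonymous-function tricks" Jonas mentions earlier are the familiar revealing-module pattern. A minimal plain-ES5 illustration, with invented names, of the boilerplate that real module scoping would replace:)

```javascript
// Pre-module hidden scope: everything not explicitly returned stays
// private to the closure; the page sees exactly one global, "mylib".
var mylib = (function () {
  function internalHelper(n) { return n * 2; } // never exposed
  function doStuff(n) { return internalHelper(n) + 1; }
  return { doStuff: doStuff }; // the only entry point
})();
```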


Re: [webcomponents] HTML Imports

2013-12-05 Thread Brian Kardell
I've been putting off a response on this, but I have some things to add...
The topic on this thread was originally HTML Imports - it seems like some
of the concerns expressed extend beyond imports and are a little wider
ranging.  I am cross posting this comment to public-next...@w3.org as I
think it is related.

I share the concern about letting out an API too early, but I think my
concerns are different.  In the past we worked (meaning browsers, devs,
stds groups) in a model in which things were released into the wild -
prefixed or not - without a very wide feedback loop.  At that point, the
practical realities leave not many good options for course correction or
even for small, but significant tweaks.  I think a lot is happening to
change that model and, as we can see in the case of everything with Web
Components (esp imports and selectors perhaps) the wider we throw the net
the more feedback we get from real people trying to accomplish real things
with real concerns - not just theory.  Some of this experimentation is
happening in the native space, but it is behind a flag, so we are shielded
from the problems above - no public Web site is relying on those things.
 And some of that is happening in the prollyfill space - Github FTW - in
projects like x-tags and polymer.  When we really look down through things
it does feel like it starts to become clear where the pain points are and
where things start to feel more stable.  With this approach, we don't need
to rush standardization in the large scale - if we can reasonably do it
without that and there seems to be wide questioning - let's hold off a bit.

HTML Imports, for example, are generating an *awful* lot of discussion - it
feels like they aren't cooked to me.  But virtually every discussion
involves elements we know we'd need to experiment in that space - modules
would allow one kind of experimentation, promises seem necessary for other
kinds, and so on.  There is a danger of undercooking, yes - but there is
also a danger in overcooking in the standards space alone that I think is
less evident:  No matter how good or bad something is technically, it needs
uptake to succeed.  If you think that ES6 modules have absolutely nothing
to do with this, for example, but through experimentation in the community
that sort of approach turns out to be a winner - it is much more valuable
than theoretical debate.  Debate is really good - but the advantage I think
we need to help exploit is that folks like Steve Souders or James Burke and
W3C TAG can debate and make their cases with working code without pulling
the proverbial trigger if we prioritize the right things and tools to make
it possible.  And no one's code needs to break in the meantime - the
JS-based approach you use today will work just as well tomorrow - better
actually because the perf curve of the browser and speed of machines they
run on is always up.

I don't think that perfect imports is necessarily the lynch-pin to value
in Web Components - it needn't block other progress to slow down the
standard on this one.  Other things like document.register already feel a
lot more stable.  Finding a way to evolve the Web is tricky, but I think
doable and the Web would be a lot better for it if we can get it right.


Re: [custom elements] Improving the name of document.register()

2013-12-11 Thread Brian Kardell
On Wed, Dec 11, 2013 at 3:17 PM, pira...@gmail.com pira...@gmail.comwrote:

 I have seen registerProtocolHandler() and it's being discused
 registerServiceWorker(). I believe registerElementDefinition() or
 registerCustomElement() could help to keep going on this path.


 Since a custom element is the only kind of element you could register,
"custom" seems redundant - similarly - it isn't
registerCustomProtocolHandler().

.registerElement is reasonably short and, IMO, adds the descriptiveness
that Ted is looking for?


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom elements] Improving the name of document.register()

2013-12-12 Thread Brian Kardell
On Dec 11, 2013 11:48 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com wrote:

 On Thu, Dec 12, 2013 at 5:17 AM, pira...@gmail.com pira...@gmail.com
wrote:

 I have seen registerProtocolHandler() and it's being discused
registerServiceWorker(). I believe registerElementDefinition() or
registerCustomElement() could help to keep going on this path.

 Send from my Samsung Galaxy Note II

 El 11/12/2013 21:10, Edward O'Connor eocon...@apple.com escribió:

 Hi,

 The name register is very generic and could mean practically
anything.
 We need to adopt a name for document.register() that makes its purpose
 clear to authors looking to use custom elements or those reading
someone
 else's code that makes use of custom elements.


 I support this proposal.


 Here are some ideas:

 document.defineElement()
 document.declareElement()
 document.registerElementDefinition()
 document.defineCustomElement()
 document.declareCustomElement()
 document.registerCustomElementDefinition()

 I like document.defineCustomElement() the most, but
 document.defineElement() also works for me if people think
 document.defineCustomElement() is too long.


 I think the method should be called registerElement, for these reasons:

 - It's more descriptive about the purpose of the method than just
register.
 - It's not too verbose; it doesn't have any redundant part.
 - It's nicely parallel to registerProtocolHandler.


 I'd still prefer declareElement (or defineElement) since registerElement
sounds as if we're registering an instance of element with something.
 Define and declare also match SGML/XML terminologies.

 - R. Niwa


Define/declare seem a little confusing because we are in the imperative
space where these have somewhat different connotations.  It really does
seem to me that conceptually we are registering (connecting the definition)
with the parser or something.  For whatever that comment is worth.
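To make the "registering connects the definition" framing concrete, here is a toy sketch (not the real Custom Elements API; all names are illustrative): the class definition exists on its own, and registration merely binds it to a tag name that element creation can look up.

```javascript
// Toy registry: "register" in the sense argued above -- the definition
// already exists; registering connects it to a name.
const registry = new Map();

function registerElement(name, definition) {
  if (!name.includes('-')) {
    throw new Error('custom element names must contain a dash');
  }
  registry.set(name, definition);
}

function createElement(name) {
  const Definition = registry.get(name);
  // unknown names fall back to a plain, "unresolved" object
  return Definition ? new Definition() : { unresolved: true };
}

class XComment {
  constructor() { this.resolved = true; }
}
registerElement('x-comment', XComment);
```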


Re: [custom elements] Improving the name of document.register()

2013-12-13 Thread Brian Kardell
On Dec 13, 2013 3:40 AM, Maciej Stachowiak m...@apple.com wrote:


 Thanks, Google folks, for considering a name to document.register. Though
a small change, I think it will be a nice improvement to code clarity.

 Since we're bikeshedding, let me add a few more notes in favor of
defineElement for consideration:

 1) In programming languages, you would normally say you define or
declare a function, class structure, variable, etc. I don't know of any
language where you register a function or class.

My earlier comment/concern about confusion and overloaded terms was about
this exactly.  The language we are in here is js and we define a class
structure by subclassing, right?  The element is defined, it's just that
that alone isn't enough - it has to be connected/plugged in to the larger
system by way of a pattern - primarily the parser, right?


 2) registerElement sounds kind of like it would take an instance of
Element and register it for some purpose. defineElement sounds more like it
is introducing a new kind of element, rather than registering a concrete
instance of an element.

I don't disagree with that.  all proposals are partially misleading/not
quite crystal clear IMO.  I don't think registerElement is the height of
perfection either and perhaps reasonable people could disagree on which is
clearer.  At the end of the day I am inclined to not let perfect be the
enemy of good.

 3) If we someday define a standardized declarative equivalent (note that
I'm not necessarily saying we have to do so right now), defineElement has
more natural analogs. For example, a define or definition element would
convey the concept really well. But a register or registration or even
register-element element would be a weird name.


Seems a similar problem here - you could be defining anything, plus HTML
already has a <dfn>... What about <element>?  That's already on the table
after a lot of discussion I thought - is it not what you meant?

 4) The analogy to registerProtocolHandler is also not a great one, in my
opinion. First, it has a different scope - it is on navigator and applies
globally for the UI, rather than being on document and having scope limited
to that document. Second, the true parallel to registerProtocolHandler
would be registerElementDefinition. After all, it's not just called
registerProtocol. That would be an odd name. But defineElement conveys the
same idea as registerElementDefinition more concisely. The Web Components
spec itself says Element registration is a process of adding an element
definition to a registry.

The scope part seems not huge to me... But by the same kind of argument,
you might just as easily make the case that what we are really lacking is a
registry member or something not entirely unlike jQuery's plugins
conceptually.


 5) Register with the parser is not a good description of what
document.register does, either. It has an effect regardless of whether
elements are created with the parser. The best description is what the
custom elements spec itself calls it

Can you elaborate there?  What effect?  Lifecycle stuff via new?

 I feel that the preference for registerElement over defineElement may
partly be inertia due to the old name being document.register. Think about
it - is registerElement really the name you'd come up with, starting from a
blank slate?

For me, I think it still would be if I wound up with a document level
method as opposed to some other approach like a registry object.  But
again, I am of the opinion that none of these is perfect and to some extent
reasonable people can disagree.  I am largely not trying to convince anyone
that one way is right.  If it goes down as defineElement, the world still
wins IMO.

I hope you will give more consideration to defineElement (which seems to be
the most preferred candidate among the non-register-based names).

 Thanks,
 Maciej


 On Dec 12, 2013, at 10:09 PM, Dominic Cooney domin...@google.com wrote:




 On Fri, Dec 13, 2013 at 2:29 AM, Brian Kardell bkard...@gmail.com
wrote:


 On Dec 11, 2013 11:48 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com
wrote:
 
 ...
  El 11/12/2013 21:10, Edward O'Connor eocon...@apple.com
escribió:
 
  Hi,
 
  The name register is very generic and could mean practically
anything.
  We need to adopt a name for document.register() that makes its
purpose
  clear to authors looking to use custom elements or those reading
someone
  else's code that makes use of custom elements.
 
  I think the method should be called registerElement, for these
reasons:
 
  - It's more descriptive about the purpose of the method than just
register.
  - It's not too verbose; it doesn't have any redundant part.
  - It's nicely parallel to registerProtocolHandler.
 
 
  I'd still prefer declareElement (or defineElement) since
registerElement sounds as if we're registering an instance of element with
something.  Define and declare also match SGML/XML

Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-14 Thread Brian Kardell


 As an alternate suggestion, and one that might dodge the subclassing
 issues, perhaps createShadowRoot could take an optional template argument
 and clone it automatically. Then this:

 this._root = this.createShadowRoot();
 this._root.appendChild(template.content.cloneNode());

 Could turn into this:

 this._root = this.createShadowRoot(template);

 Which is quite a bit simpler, and involves fewer basic contents.


Just to be totally clear, you are suggesting that the latter would desugar
into precisely the former, correct?  What would happen if you called
createShadowRoot with some other kind of element?
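A minimal sketch of that desugaring, using plain objects as stand-ins for the host and template so it runs outside a browser; in a real DOM the body would be `host.createShadowRoot()` plus `root.appendChild(template.content.cloneNode(true))`. Names here are illustrative.

```javascript
// Sketch: createShadowRoot(template) desugars to create-then-clone.
function createShadowRoot(host, template) {
  const root = { children: [] };   // stand-in for a ShadowRoot
  host.shadowRoot = root;
  if (template !== undefined) {
    // clone, don't share: each host gets its own copy of the content
    root.children.push(JSON.parse(JSON.stringify(template.content)));
  }
  return root;
}

const template = { content: { tag: 'span', text: 'hello' } };
const host = {};
const root = createShadowRoot(host, template);
// root.children[0] is a copy of template.content, not the same object
```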


Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Tue, Jan 28, 2014 at 5:11 PM, Jake Archibald jaffathec...@gmail.comwrote:

 (I'm late to this party, sorry)

 I'm really fond of the <link rel="import" elements="x-foo, x-bar">
 pattern, but yeah, you could end up with a massive elements list.

 How about making link[rel=import] async by default, but make elements with
 a dash in the tagname display:none by default?

 On a news article with components, the news article would load, the
 content would be readable, then the components would appear as they load.
 Similar to images without a width & height specified.

 As with images, the site developer could apply styling for the component
 roots before they load, to avoid/minimise the layout change as components
 load. This could be visibility:hidden along with a width & height (or
 aspect ratio, which I believe is coming to CSS), or display:block and
 additional styles to provide a view of the data inside the component that's
 good enough for a pre-enhancement render.

 This gives us:

 * Performance by default (we'd have made scripts async by default if we
 could go back right?)
 * Avoids FOUC by default
 * Can be picked up by a preparser
 * Appears to block rendering on pages that are build with a root web
 component

 Thoughts?

 Cheers,
 Jake.


I think that there are clearly use cases where either way feels right.
 It's considerably easier to tack on a pattern that makes async feel sync
than the reverse.  I'd like to suggest that Jake's proposal is -almost-
really good.  As an author, I'd be happier with the proposal if there were
just a little bit of sugar that made it very very easy to opt in and I
think that this lacks that only in that it relies either on a root level
component or some script to tweak something that indicates the body
visibility or display.  If we realize that this is going to be a common
pattern, why not just provide the simple abstraction as part of the system.
 This could be as simple as adding something to section 7.2[1] which says
something like


The :unresolved pseudoclass may also be applied to the body element.  The
body tag is considered :unresolved until all of the elements contained in
the original document have been resolved.  This provides authors a simple
means to additionally manage rendering FOUC including and all the way up to
fully delaying rendering of the page until the Custom Element dependencies
are resolved, while still defaulting to async/non-blocking behavior.

Example:
-
/* Apply body styles like background coloring,
   but don't render any elements until it's all ready... */
body:unresolved * {
  display: none;
}


WDYT?


[1] -
http://w3c.github.io/webcomponents/spec/custom/#unresolved-element-pseudoclass



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Wed, Jan 29, 2014 at 12:09 PM, Jake Archibald jaffathec...@gmail.comwrote:

 :unresolved { display: none; } plus lazyload (
 https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html#attr-lazyload)
 would allow devs to create the non-blocking behaviour. But this is the
 wrong way around. Devs should have to opt-in to the slow thing and get the
 fast thing by default.


Isn't that what I suggested?  I suggested that it be async, just as you said
- and that all we do is add the ability to use the :unresolved pseudo class
on the body.  This provides authors a simple means of control for opting
out of rendering in blocks above the level of the component without
resorting to the need to do it via script or a root level element which
serves no other real purpose. This level of ability seems not just simpler,
but probably more desirable - like a lot of authors I've done a lot of work
with things that pop into existence and cause relayout -- often the thing I
want to block or reserve space for isn't the specific content, but a
container or something.  Seems to me with addition of a body level
:unresolved you could answer pretty much any use case for partial rendering
from just don't do it all the way to screw it, the thing pops into
existence (the latter being the default) very very simply - and at the
right layer (CSS).


Re: [HTML Imports]: Sync, async, -ish?

2014-01-29 Thread Brian Kardell
On Wed, Jan 29, 2014 at 12:30 PM, Jake Archibald jaffathec...@gmail.comwrote:

 My bad, many apologies. I get what you mean now.

 However, if web components are explaining the platform then body is
 :resolved by browser internals (I don't know if this is how :resolved works
 currently). Eg, imagine select as a built-in component which is resolved
 and given a shadow DOM by internals.

 7.2 of custom elements states:


The :unresolved pseudoclass *must* match all custom
elements whose created callback has not yet been invoked.


I suppose this leaves wiggle room that it may actually in theory match on
native elements as well.  As you say, this is a nice explanation maybe for
all elements - though it doesn't seem remarkable that a custom element
would have something a native one wouldn't.  Either way, I think my proposal
holds up in basic theory, the only caveat is whether the thing on body is
just a specialized meaning of resolved that only applies to custom
elements, or whether you need a specific name for that thing, right?  It's
really entirely bikesheddable what that thing should be called or maps to -
there must be a name for the document is done upgrading elements that were
in the tree at parse - I don't think that is DOMContentLoaded, but
hopefully you take my point.  If we could agree that that solution works,
we could then have a cage match to decide on a good name :)




 On 29 January 2014 09:19, Brian Kardell bkard...@gmail.com wrote:

 On Wed, Jan 29, 2014 at 12:09 PM, Jake Archibald 
 jaffathec...@gmail.comwrote:

 :unresolved { display: none; } plus lazyload (
 https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourcePriorities/Overview.html#attr-lazyload)
 would allow devs to create the non-blocking behaviour. But this is the
 wrong way around. Devs should have to opt-in to the slow thing and get the
 fast thing by default.


 Isn't that what I suggested?  I suggested that it be async, just as you
 said - and that all we do is add the ability to use the :unresolved pseudo
 class on the body.  This provides authors a simple means of control for
 opting out of rendering in blocks above the level of the component without
 resorting to the need to do it via script or a root level element which
 serves no other real purpose. This level of ability seems not just simpler,
 but probably more desirable - like a lot of authors I've done a lot of work
 with things that pop into existence and cause relayout -- often the thing I
 want to block or reserve space for isn't the specific content, but a
 container or something.  Seems to me with addition of a body level
 :unresolved you could answer pretty much any use case for partial rendering
 from just don't do it all the way to screw it, the thing pops into
 existence (the latter being the default) very very simply - and at the
 right layer (CSS).








-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [Bug 24823] New: [ServiceWorker]: MAY NOT is not defined in RFC 2119

2014-02-26 Thread Brian Kardell
On Feb 26, 2014 1:01 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * bugzi...@jessica.w3.org wrote:
 The section Worker Script Caching uses the term MAY NOT, which is not
 defined in RFC 2119.  I'm assuming this is intended to be MUST NOT or
maybe
 SHOULD NOT.

 If an agent MAY $x then it also MAY not $x. It is possible that the
 author meant must not or should not in this specific instance, but
 in general such a reading would be incorrect. Of course, specifications
 should not use constructs like may not.
 --

Your use of "should not" and the logic implies that actually they may use
"may not", they just shouldn't.  Do you mean they may not?

 Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
 Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/



[custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
I'm working with several individuals of varying skillsets on using/making
custom elements - we are using a way cut-back subset of what we think are
the really stable just to get started but I had an observation/thought that
I wanted to share with the list based on feedback/experience so far...

It turns out that we have a lot of what I am going to call async
components - things that involve calling 1 or more services during their
creation in order to actually draw something useful on the screen.  These
range from something simple like an RSS element (which, of course, has to
fetch the feed) to complex wizards which have to consult a service to
determine which view/step they are even on and then potentially additional
request(s) to display that view in a good way.  In both of these cases I've
seen confusion over the :unresolved pseudo-class.  Essentially, the created
callback has happened so from the currently defined lifecycle state it's
:resolved, but still not useful.  This can easily be messed up at both
ends (assuming that the thing sticking markup in a page and the CSS that
styles it are two ends) such that we get FOUC garbage between the time
something is :resolved and when it is actually conceptually ready.  I
realize that there are a number of ways to work around this and maybe do it
properly such that this doesn't happen, but there are an infinitely
greater number of ways to barf unhappy content into the screen before its
time.  To everyone who I see look at this, it seems they conceptually
associate :resolved with ok it's ready, and my thought is that isn't
necessarily an insensible thing to think since there is clearly a
pseudo-class about 'non-readiness' of some kind and nothing else that seems
to address this.

I see a few options, I think all of them can be seen as enhancements, not
necessary to a v1 spec if it is going to hold up something important.   The
first would be to let the created callback optionally return a promise - if
returned we can delay :resolved until the promise is fulfilled.  The other
is to introduce another pseudo like :loaded and let the author
participate in that somehow, perhaps the same way (optionally return a
promise from created).  Either way, it seems to me that if we had that, my
folks would use that over the current definition of :resolved in a lot of
cases.



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
On Tue, Mar 25, 2014 at 6:10 PM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

  Do custom elements present any new challenges in comparison to
 non-custom elements here? I feel like you have the same issue with filling
 a select with data from a remote source.

Only really the fact that select exposes no clue already that it isn't
:unresolved or something.  You can see how the hint of an I'm not ready
yet can be interpreted this way.  Precisely, if someone created an
x-select data-src=... kind of tag, then yes, I do think most people
would think that that indicated when the actual (populated) element was
ready.

-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-25 Thread Brian Kardell
On Tue, Mar 25, 2014 at 6:27 PM, Dimitri Glazkov dglaz...@chromium.orgwrote:

 Let me try and repeat this back to you, standards-nerd-style:

 Now that we have custom elements, there's even more need for notifying a
 style engine of a change in internal elements state -- that is, without
 expressing it in attributes (class names, ids, etc.). We want the ability
 to make custom pseudo classes.

 Now, Busta Rhymes-style

 Yo, I got change
 In my internal state.
 Style resolution
 It ain't too late.
 We got solution!
 To save our a**ses
 That's right, it's custom pseudo classes.

 :DG


Probably it comes as no shock that I agree with our want to push Custom
Pseudo-Class forward, and I am *very* pro experimenting in the community
(#extendthewebforward), so - in fact, I am already experimenting with both
Custom Pseudo-Classes in general and this specific case (returning a
promise).  I'm happy to go that route entirely, but I'm sharing because I
am seeing a fair amount of confusion over :unresolved as currently defined.
 In the least case, we might make an effort to spell it out in the spec a
little more and let people know when we talk to them.  Ultimately, from
what I am seeing on the ground, it seems like :loaded or :ready or
something which is potentially component author-informed is actually
way more useful a thing for us to wind up with.  We'll see, I'm
not trying to push it on anyone, I'm just trying to pick the brains of
smart people and provide feedback into the system (tighten the feedback
loop, right).


-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: [custom-elements] :unresolved and :psych

2014-03-26 Thread Brian Kardell
On Wed, Mar 26, 2014 at 4:53 PM, Scott Miles sjmi...@google.com wrote:

 Yes, I agree with what R. Niwa says.

 I believe there are many variations on what should happen during element
 lifecycle, and the element itself is best positioned to make those choices.

 `:unresolved` is special because it exists a-priori to the element having
 any control.

 Scott


 On Wed, Mar 26, 2014 at 12:26 PM, Ryosuke Niwa rn...@apple.com wrote:

 Maybe the problem comes from not distinguishing elements being created
 and ready for API access versus elements is ready for interactions?

 I'd also imagine that the exact appearance of a custom element between
 the time the element is created and the time it is ready for interaction
 will depend on what the element does.   e.g. img behaves more or less like
 display:none at least until the dimension is available, and then updates
 the screen as the image is loaded.  iframe on the other hand will occupy
 the fixed size in accordance to its style from the beginning, and simply
 updates its content.

 Given that, I'm not certain adding another pseudo element in UA is the
 right approach here.  I suspect there could be multiple states between the
 time element is created and it's ready for user interaction for some custom
 elements.  Custom pseudo, for example, seems like a more appealing solution
 in that regard.

 - R. Niwa

 On Mar 25, 2014, at 2:31 PM, Brian Kardell bkard...@gmail.com wrote:

 I'm working with several individuals of varying skillsets on using/making
 custom elements - we are using a way cut-back subset of what we think are
 the really stable just to get started but I had an observation/thought that
 I wanted to share with the list based on feedback/experience so far...

 It turns out that we have a lot of what I am going to call async
 components - things that involve calling 1 or more services during their
 creation in order to actually draw something useful on the screen.  These
 range from something simple like an RSS element (which, of course, has to
 fetch the feed) to complex wizards which have to consult a service to
 determine which view/step they are even on and then potentially additional
 request(s) to display that view in a good way.  In both of these cases I've
 seen confusion over the :unresolved pseudo-class.  Essentially, the created
 callback has happened so from the currently defined lifecycle state it's
 :resolved, but still not useful.  This can easily be messed up at both
 ends (assuming that the thing sticking markup in a page and the CSS that
 styles it are two ends) such that we get FOUC garbage between the time
 something is :resolved and when it is actually conceptually ready.  I
 realize that there are a number of ways to work around this and maybe do it
 properly such that this doesn't happen, but there are an infinitely
 greater number of ways to barf unhappy content into the screen before its
 time.  To everyone who I see look at this, it seems they conceptually
 associate :resolved with ok it's ready, and my thought is that isn't
 necessarily an insensible thing to think since there is clearly a
 pseudo-class about 'non-readiness' of some kind and nothing else that seems
 to address this.

 I see a few options, I think all of them can be seen as enhancements, not
 necessary to a v1 spec if it is going to hold up something important.   The
 first would be to let the created callback optionally return a promise - if
 returned we can delay :resolved until the promise is fulfilled.  The other
 is to introduce another pseudo like :loaded and let the author
 participate in that somehow, perhaps the same way (optionally return a
 promise from created).  Either way, it seems to me that if we had that, my
 folks would use that over the current definition of :resolved in a lot of
 cases.



 --
 Brian Kardell :: @briankardell :: hitchjs.com





Just to be clear, so there is no confusion (because I realize after talking
to Dimitri that I was being pretty long winded about what I was saying):
 I'm simply saying what y'all are saying - the element is in the best place
to know that it's really fully cooked.  Yes, there could be N potential
states between 0 and fully cooked too, but we do know (at least I am
seeing repeatedly) that folks would like to participate in saying ok, now
I am fully cooked so that the CSS for it can be simple and sensible.

I'm not looking to change anything specifically (except maybe a little more
explicit callout of that in the spec), I'm just providing this feedback so
that we can all think about it in light of other proposals and
conversations we're all having and - maybe - if someone has good ideas you
could share them (offlist if you prefer, or maybe in public-nextweb) so
that those of us who are experimenting can try them out in library space...



-- 
Brian Kardell :: @briankardell :: hitchjs.com


Re: Custom Elements: 'data-' attributes

2014-05-08 Thread Brian Kardell
On Thu, May 8, 2014 at 5:37 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 8, 2014 at 12:53 AM, Ryosuke Niwa rn...@apple.com wrote:
  The answer to that question, IMO, is no.  It's not safe to use custom
  attributes without 'data-' if one wanted to write a forward compatible
 HTML
  document.

 Note that the question is scoped to custom elements, not elements in
 general.

 It seems kind of sucky that if you have already minted a custom
 element name, you still need to prefix all your attributes too.

 <j-details open="">

 reads a lot better than

 <j-details data-open="">

 The clashes are also likely to happen on the API side. E.g. if I mint
 a custom element and support a property named selectable. If that gets
 traction that might prevent us from introducing selectable as a global
 attribute going forward.


 --
 http://annevankesteren.nl/


What do the parsing rules say about what an attribute may begin with? Is it
plausible to just allow a leading underscore or leading dash, as in CSS, so
that all that's really necessary is for HTML to avoid using those natively
(not hard, because why would you) and then you provide an easy hatch for
good authors and get decent protection without getting too crazy?
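The convention floated here could be as small as the following check (hypothetical, not actual HTML parser rules): treat "-" or "_" prefixed attributes as author-reserved, the way CSS reserves "-"-prefixed properties for vendors.

```javascript
// Hypothetical convention: dash/underscore-prefixed attributes are
// author-reserved, so HTML never mints them natively and they can't clash.
function isAuthorReservedAttr(name) {
  return name.startsWith('-') || name.startsWith('_');
}

isAuthorReservedAttr('-open');  // true: safe for authors
isAuthorReservedAttr('open');   // false: could clash with a future global
```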


-- 
Brian Kardell :: @briankardell :: hitchjs.com

