Re: Does JS bound to element need to inherit from HTMLElement?
On Sat, Apr 13, 2013 at 12:03 PM, John J Barton johnjbar...@johnjbarton.com wrote: While I completely understand the beauty of having any JS object bound to an element inherit functions that make that object 'be an element', I'm unsure of the practical value. To me the critical relationship between the JS and the element is JS object access to its corresponding element instance without global operations. That is, no document.querySelector() must be required, because the result could depend upon the environment of the component instance. The critical issue to me is that there is a canonical object that script uses to interact with the element. With ad-hoc wrapping of elements in JavaScript, there are two objects (the native element wrapper provided by the UA and the object provided by the page author) which results in tedium at best (I did querySelector, now I need to do some other step to find the author's wrapper if it exists) and bugs at worst (the author's wrapper is trying to maintain some abstraction but that is violated by direct access to the native element wrapper.) Whether that access is through |this| is way down the list of critical issues for me. Given a reference to the element I guess I can do everything I want. In fact I believe the vast majority of the JS code used in components will never override HTMLElement operations for the same reason we rarely override Object operations. The Object interface is not terribly specific and mostly dedicated to metaprogramming the object model, so it is not surprising that it isn't heavily overridden. Elements are more specific so overriding their operations seems more useful. If I design a new kind of form input, it's very useful to hook HTMLInputElement.value to do some de/serialization and checking. Extending HTMLElement et al is not just about overriding methods. 
It is also to let the component author define new properties alongside existing ones, as most HTMLElement subtypes do alongside HTMLElement's existing properties and methods. And to enable authors to do this in a way consistent with the way the UA does it, so authors using Web Components don't need to be constantly observant that some particular functionality is provided by the UA and some particular functionality is provided by libraries. So is the inheritance thing really worth the effort? It seems to complicate the component story as far as I can tell. I think it is worth the effort. -- http://goto.google.com/dc-email-sla
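The value-hooking idea from the message above can be sketched in plain ES5. This is an illustrative model only: `DateInput` and `_raw` are made-up names, and a plain object stands in for a real element prototype, since real code would extend HTMLInputElement.

```javascript
// Illustrative sketch: hook a `value` accessor to de/serialize and check
// on the way in and out, as a custom form input might.
function DateInput() {
  this._raw = null; // what would be stored on the underlying element
}

Object.defineProperty(DateInput.prototype, 'value', {
  get: function () {
    // Deserialize the stored string back into a richer object.
    return this._raw === null ? null : new Date(this._raw);
  },
  set: function (v) {
    // Validate and serialize before storing.
    if (!(v instanceof Date) || isNaN(v.getTime())) {
      throw new TypeError('DateInput.value expects a valid Date');
    }
    this._raw = v.toISOString();
  }
});

var input = new DateInput();
input.value = new Date('2013-04-16T00:00:00Z');
```

Callers read and write `input.value` as a Date while the stored representation stays a string, which is the kind of abstraction an ad-hoc wrapper would otherwise have to maintain alongside the native element.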
Re: RE: MathML and Clipboard API and events
I suspect that the MathML community would be eager to help define what needs to get stripped out of MathML to maintain security. However, speaking for myself, I do not know what kinds of things are considered dangerous. For example, MathML has markup that lets a math expression act as a hyperlink. Do we need to strip that out completely or is that dependent on the URL? See the initial list of bad stuff in https://www.w3.org/Bugs/Public/show_bug.cgi?id=21700 Basically, the attack scenario is: trick a user into trying to copy something from an attacker's site to a rich text element on a target site. If this process can make some code execute inside the target site, the attack can succeed. (There is also some scope for doing malice with CSS and form elements, but probably much less.) -- Hallvord R. M. Steen Core tester, Opera Software
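The stripping discussed above can be illustrated with a toy allowlist filter. The element and attribute lists here are assumptions for illustration, not a vetted MathML sanitization policy, and a real sanitizer must operate on the parser's output tree, never on strings.

```javascript
// Purely illustrative allowlist filter over a parsed-tree model.
var ALLOWED_ELEMENTS = ['math', 'mrow', 'mi', 'mo', 'mn', 'mfrac', 'msqrt'];
var DROPPED_ATTRIBUTES = ['href', 'xlink:href']; // the hyperlink markup in question

function sanitizeNode(node) {
  // Unknown elements are stripped entirely.
  if (ALLOWED_ELEMENTS.indexOf(node.name) === -1) return null;
  // Known elements keep only attributes outside the drop list.
  var attrs = {};
  Object.keys(node.attributes || {}).forEach(function (key) {
    if (DROPPED_ATTRIBUTES.indexOf(key) === -1) attrs[key] = node.attributes[key];
  });
  var children = (node.children || []).map(sanitizeNode).filter(function (c) {
    return c !== null;
  });
  return { name: node.name, attributes: attrs, children: children };
}
```

Whether hyperlinking markup should be dropped outright, as here, or allowed for safe URL schemes is exactly the policy question raised above.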
Re: File API: auto-revoking blob URLs
On Tue, Apr 16, 2013 at 2:48 AM, Glenn Maynard gl...@zewt.org wrote: The solution I propose is the same as it's always been. Have a synchronous algorithm, e.g. parse and capture the URL, which is invoked at the time (e.g.) .src is set. This 1: invokes the URL spec's parser; 2: if the result is a blob URL, grabs the associated blob data and puts a reference to it in the resulting parsed URL; then 3: returns the result. Assigning .src would then synchronously invoke this, so the blob is always captured immediately, even if the fetch isn't. This way, we can synchronously resolve all this stuff, even if the fetch won't happen for a while. As Ian pointed out (see WHATWG IRC reference above) you don't always want to parse synchronously as the base URL might change at a later stage. This also does not work for CSS, where it is even less well defined when parsing happens (which might benefit projects such as Servo). This would also fix https://www.w3.org/Bugs/Public/show_bug.cgi?id=21058, because URLs would be resolved against base href synchronously. That would make img behave differently from e.g. <a download>. Pretty sure <a> needs to resolve at the point it is actually clicked. -- http://annevankesteren.nl/
Re: File API: auto-revoking blob URLs
On Tue, Apr 16, 2013 at 4:57 AM, Anne van Kesteren ann...@annevk.nl wrote: As Ian pointed out (see WHATWG IRC reference above) you don't always want to parse synchronously as the base URL might change at a later stage. For images, that's what you want--if the base URL changes after you assign .src, the old base should still be used. Most of the time this is what you get now with images. The only time you don't is the images on demand path, which I think is a bug (this would just align those two paths). If there are cases where base changes do need to be picked up after assignment, we might need a bit of a hack to deal with this. First, parse and capture the URL synchronously, as above. Then, at fetch time parse the URL again. If the resulting parsed URL is the same, use the original one, so you retain any captured blob. If the parsed URL has changed (because of a base change), discard the original parsed URL and use the new one instead. That means that if the base doesn't change (or if the URL is absolute, as with blob URLs), the captured blob data is still there. If the URL did change, it'll use the new parsed URL. This also does not work for CSS, where it is even less well defined when parsing happens (which might benefit projects such as Servo). If the time CSS parses its URLs isn't defined, then I think blob URLs are fundamentally incompatible with being put into CSS. Either CSS's parse time needs to be defined, or we should disallow blob URLs in CSS. I know putting blob URLs in CSS is a major case for some people, but we can only support it if we can define it interoperably. (This applies to non-autorevoke blobs, too, I think, depending on how undefined it is.) This would also fix https://www.w3.org/Bugs/Public/show_bug.cgi?id=21058, because URLs would be resolved against base href synchronously. That would make img behave differently from e.g. <a download>. Pretty sure <a> needs to resolve at the point it is actually clicked. The above hack would deal with this.
-- Glenn Maynard
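The parse-and-capture steps and the re-parse-at-fetch hack described above can be modeled in a few lines of plain JavaScript. The registry object and helper names are illustrative stand-ins, not spec text, and resolve() is a deliberately crude substitute for real URL parsing.

```javascript
// Toy model: blobStore stands in for the real blob URL store.
var blobStore = { 'blob:abc': { bytes: 'payload' } };

function resolve(url, base) {
  // Crude stand-in: anything with a scheme is treated as absolute.
  return /^[a-z]+:/.test(url) ? url : base + url;
}

function parseAndCapture(url, base) {
  var parsed = { href: resolve(url, base) };          // 1: parse
  if (Object.prototype.hasOwnProperty.call(blobStore, parsed.href)) {
    parsed.capturedBlob = blobStore[parsed.href];     // 2: capture blob data
  }
  return parsed;                                      // 3: return result
}

// The re-parse-at-fetch "hack" for late base changes: keep the original
// parsed URL (and its captured blob) only if re-parsing yields the same URL.
function urlForFetch(original, url, base) {
  var reparsed = parseAndCapture(url, base);
  return reparsed.href === original.href ? original : reparsed;
}
```

Because blob URLs are absolute, re-parsing them always yields the same result, so the capture made at assignment time survives even after revocation; only a genuine base change discards it.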
Re: Does JS bound to element need to inherit from HTMLElement?
I wonder if there may be a cultural difference involved in our different points of view. As a C++ developer I think your point of view makes a lot of sense. As a JavaScript developer I find it puzzling. Given a JS object I can override its value getter and add new properties operating on the object or inheriting from it. Pre-ES6, the number of failure modes in both paths looms large. Anyone looking at the end result won't be able to tell the difference. Anyway the group seems keen on inheritance so I hope it works out. On Mon, Apr 15, 2013 at 11:24 PM, Dominic Cooney domin...@google.com wrote: On Sat, Apr 13, 2013 at 12:03 PM, John J Barton johnjbar...@johnjbarton.com wrote: While I completely understand the beauty of having any JS object bound to an element inherit functions that make that object 'be an element', I'm unsure of the practical value. To me the critical relationship between the JS and the element is JS object access to its corresponding element instance without global operations. That is, no document.querySelector() must be required, because the result could depend upon the environment of the component instance. The critical issue to me is that there is a canonical object that script uses to interact with the element. With ad-hoc wrapping of elements in JavaScript, there are two objects (the native element wrapper provided by the UA and the object provided by the page author) which results in tedium at best (I did querySelector, now I need to do some other step to find the author's wrapper if it exists) and bugs at worst (the author's wrapper is trying to maintain some abstraction but that is violated by direct access to the native element wrapper.) Whether that access is through |this| is way down the list of critical issues for me. Given a reference to the element I guess I can do everything I want. 
In fact I believe the vast majority of the JS code used in components will never override HTMLElement operations for the same reason we rarely override Object operations. The Object interface is not terribly specific and mostly dedicated to metaprogramming the object model, so it is not surprising that it isn't heavily overridden. Elements are more specific so overriding their operations seems more useful. If I design a new kind of form input, it's very useful to hook HTMLInputElement.value to do some de/serialization and checking. Extending HTMLElement et al is not just about overriding methods. It is also to let the component author define new properties alongside existing ones, as most HTMLElement subtypes do alongside HTMLElement's existing properties and methods. And to enable authors to do this in a way consistent with the way the UA does it, so authors using Web Components don't need to be constantly observant that some particular functionality is provided by the UA and some particular functionality is provided by libraries. So is the inheritance thing really worth the effort? It seems to complicate the component story as far as I can tell. I think it is worth the effort. -- http://goto.google.com/dc-email-sla
Upload progress events and redirects
I request a URL using POST that redirects using 307 to some other URL. This will result in the request entity body being transmitted twice. It seems, however, that user agents only fire progress events as if only one fetch happened. Is that how we want to define this? I guess arguably that's how we define fetching in general (ignoring redirects) but it does mean you have to wait for the response (to see if it's a redirect or 401/407) before you can dispatch loadend on .upload which is contrary to what we decided a little over a year ago: http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0749.html What do people think? -- http://annevankesteren.nl/
Re: [webcomponents]: Re-imagining shadow root as Element
On Mon, Apr 15, 2013 at 11:05 PM, Dominic Cooney domin...@google.com wrote: On Thu, Apr 11, 2013 at 5:53 AM, Erik Arvidsson a...@chromium.org wrote: For the record I'm opposed to what you are proposing. I don't like that you lose the symmetry between innerHTML and outerHTML. Sorry for replying to such a cold thread. Could you elaborate on what symmetry is being broken here? outerHTML is innerHTML with a prefix and a suffix. In this proposal the prefix includes shadow-root. What problems are likely to result from that? outerHTML has always been start tag + innerHTML + end tag. Now it would become start tag + shadow dom + innerHTML + end tag. I remember when IE used to include those pesky ?import directives and the confusion it caused. Let's not make a similar mistake. Dominic On Wed, Apr 10, 2013 at 4:34 PM, Scott Miles sjmi...@google.com wrote: I made an attempt to describe how these things can be non-lossy here: https://gist.github.com/sjmiles/5358120 On Wed, Apr 10, 2013 at 12:19 PM, Scott Miles sjmi...@google.com wrote: input/video would have intrinsic Shadow DOM, so it would not ever be part of outerHTML. I don't have a precise way to differentiate intrinsic Shadow DOM from non-intrinsic Shadow DOM, but my rule of thumb is this: 'node.outerHTML' should produce markup that parses back into 'node' (assuming all dependencies exist). On Wed, Apr 10, 2013 at 12:15 PM, Erik Arvidsson a...@chromium.org wrote: Once again, how would this work for input/video? Are you suggesting that `createShadowRoot` behaves differently than when you create the shadow root using markup? On Wed, Apr 10, 2013 at 3:11 PM, Scott Miles sjmi...@google.com wrote: I think we all agree that node.innerHTML should not reveal node's ShadowDOM, ever. 
What I am arguing is that, if we have a shadow-root element that you can use to install shadow DOM into an arbitrary node, like this: <div> <shadow-root> Decoration <content></content> Decoration </shadow-root> Light DOM </div> Then, as we agree, innerHTML is Light DOM but outerHTML would be <div> <shadow-root> Decoration <content></content> Decoration </shadow-root> Light DOM </div> I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by which I mean Shadow DOM that would never exist on a node unless you had specifically put it there (as opposed to Shadow DOM intrinsic to a particular element type). With this inner/outer rule, all serialization cases I can think of work in a sane fashion, no lossiness. Scott On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson a...@chromium.org wrote: Maybe I'm missing something but we clearly don't want to include shadow-root in the general innerHTML getter case. If I implement input[type=range] using custom elements + shadow DOM I don't want innerHTML to show the internal guts. On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.com wrote: I don't see any reason why my document markup for some div should not be serializable back to how I wrote it via innerHTML. That seems just plain bad. I hope you can take a look at what I'm saying about outerHTML. I believe at least the concept there solves all cases. On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.com wrote: On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote: So, what you quoted are thoughts I already deprecated myself in this thread. :) If you read a bit further, see that I realized that shadow-root is really part of the 'outer html' of the node and not the inner html. Yeah sorry, connectivity issue prevented me from seeing those until after i sent i guess. I think that is actually a feature, not a detriment and easily explainable. What is actually a feature? You mean that the shadow root is invisible to innerHTML? Yes. Yes, that's true. 
But without some special handling of Shadow DOM you get into trouble when you start using innerHTML to serialize DOM into HTML and transfer content from A to B. Or even from A back to itself. I think Dimitri's implication iii is actually intuitive - that is what I am saying... I do think that round-tripping via innerHTML would be lossy of declarative markup used to create the instances inside the shadow... to get that it feels like you'd need something else which I think he also provided/mentioned. Maybe I'm alone on this, but it's just sort of how I expected it to work all along... Already, roundtripping can differ from the original source. If you aren't careful, this can bite you in the hind-quarters, but it is actually sensible. Maybe I need to think about this a little deeper, but I see nothing at this stage to make me think that the proposal and implications are problematic. -- erik
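The inner/outer rule Scott describes can be modeled with plain objects. This is a toy serializer under assumed names (`lightChildren`, `intrinsic`, `markup`), not the proposed API: innerHTML never reveals shadow DOM, while outerHTML prepends a shadow-root section only for non-intrinsic (author-installed) shadow roots.

```javascript
// innerHTML: light DOM only, never the shadow tree.
function innerHTML(node) {
  return node.lightChildren.join('');
}

// outerHTML: start tag + (non-intrinsic shadow DOM) + innerHTML + end tag.
function outerHTML(node) {
  var shadow = (node.shadowRoot && !node.shadowRoot.intrinsic)
    ? '<shadow-root>' + node.shadowRoot.markup + '</shadow-root>'
    : '';
  return '<' + node.tag + '>' + shadow + innerHTML(node) + '</' + node.tag + '>';
}

// An author-decorated div round-trips; a built-in's guts never leak.
var authored = {
  tag: 'div',
  lightChildren: ['Light DOM'],
  shadowRoot: { intrinsic: false, markup: 'Decoration<content></content>' }
};
var builtin = {
  tag: 'input',
  lightChildren: [],
  shadowRoot: { intrinsic: true, markup: 'slider guts' }
};
```

Under this rule, outerHTML of the authored node reproduces markup that would parse back into an equivalent node, which is the non-lossiness property being argued for.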
Re: Upload progress events and redirects
On 4/16/13 11:59 AM, Anne van Kesteren wrote: I guess arguably that's how we define fetching in general (ignoring redirects) but it does mean you have to wait for the response (to see if it's a redirect or 401/407) before you can dispatch loadend on .upload which is contrary to what we decided a little over a year ago: http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0749.html What do people think? I think, as I did then, that there's a reason Gecko only claims the upload is done when it starts getting a response... ;) -Boris
Using readyCallback for built-in elements, was: [webcomponents]: Of weird script elements and Benadryl
On Mon, Apr 15, 2013 at 6:59 AM, Anne van Kesteren ann...@annevk.nl wrote: I think we should go for one interface per element here. abstract classes not being constructable seems fine. Node/CharacterData are similar to that. This would mean HTMLH1Element, ..., whose compatibility impact has not been measured. The other problem we need to solve is that document.createElement(x) currently gives different results from new x's interface. E.g. new Audio() sets an attribute, document.createElement('audio') does not. I think we should settle for document.createElement('audio') also creating an attribute here. What if we use the newly-found power of readyCallback here? Suppose that HTMLAudioElement has a readyCallback that, among other things, does: if (!this.parentNode) // aha! I am created imperatively this.setAttribute('controls', ''); Several HTML elements will need to use the callback to build their shadow trees and set internal state, like textarea, input, details, fieldset, etc. If we just build readyCallback into DOM, we have cake. :DG
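The parentNode test in the sketch above can be exercised against a minimal element stand-in (no real DOM involved; `FakeElement` is a made-up name, and the attribute chosen follows the example in the message).

```javascript
// Minimal stand-in for an element: just parentNode and an attribute map.
function FakeElement() {
  this.parentNode = null;
  this.attributes = {};
}
FakeElement.prototype.setAttribute = function (name, value) {
  this.attributes[name] = value;
};

function audioReadyCallback() {
  // Parser-created elements are already in a tree when the callback runs,
  // so only imperatively created ones take this branch.
  if (!this.parentNode) this.setAttribute('controls', '');
}

var scripted = new FakeElement();      // as if via new Audio()
audioReadyCallback.call(scripted);

var parsed = new FakeElement();        // as if built by the parser...
parsed.parentNode = {};                // ...and already inserted in a tree
audioReadyCallback.call(parsed);
```

The interesting design point is that one callback serves both creation paths, using tree membership at callback time to tell them apart.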
Re: [webcomponents]: Of weird script elements and Benadryl
Wow. What a thread. I look away for a day, and this magic beanstalk is all the way to the clouds. I am happy to see that all newcomers are now up to speed. I am heartened to recognize the same WTFs and grumbling that we went through along the path. I feel your pain -- I've been there myself. As Hixie once told me (paraphrasing, can't remember exact words), "All the good choices have been made. We're only left with terrible ones." I could be wrong (please correct me), but we didn't birth any new ideas so far, now that everyone has caught up with the constraints. The good news is that the imperative syntax is solid. It's nicely compatible with ES6, ES3/5, and can even be used to make built-in HTML elements (modulo security/isolation problem, which we shouldn't tackle here). I am going to offer a cop-out option: maybe we simply don't offer imperative syntax as part of the spec? Should we let libraries/frameworks build their own custom elements (with opinion and flair) to implement declarative syntax systems? :DG
Re: [webcomponents]: Of weird script elements and Benadryl
*I am going to offer a cop-out option: maybe we simply don't offer imperative syntax as part of the spec?* Why would we do this if the imperative syntax is solid, nicely compatible, and relatively uncontentious? Did you mean to say declarative? Daniel J. Buchner Product Manager, Developer Ecosystem Mozilla Corporation On Tue, Apr 16, 2013 at 2:56 PM, Dimitri Glazkov dglaz...@google.com wrote: Wow. What a thread. I look away for a day, and this magic beanstalk is all the way to the clouds. I am happy to see that all newcomers are now up to speed. I am heartened to recognize the same WTFs and grumbling that we went through along the path. I feel your pain -- I've been there myself. As Hixie once told me (paraphrasing, can't remember exact words), "All the good choices have been made. We're only left with terrible ones." I could be wrong (please correct me), but we didn't birth any new ideas so far, now that everyone has caught up with the constraints. The good news is that the imperative syntax is solid. It's nicely compatible with ES6, ES3/5, and can even be used to make built-in HTML elements (modulo security/isolation problem, which we shouldn't tackle here). I am going to offer a cop-out option: maybe we simply don't offer imperative syntax as part of the spec? Should we let libraries/frameworks build their own custom elements (with opinion and flair) to implement declarative syntax systems? :DG
Re: [webcomponents]: Of weird script elements and Benadryl
On Tue, Apr 16, 2013 at 3:00 PM, Daniel Buchner dan...@mozilla.com wrote: I am going to offer a cop-out option: maybe we simply don't offer imperative syntax as part of the spec? Why would we do this if the imperative syntax is solid, nicely compatible, and relatively uncontentious? Did you mean to say declarative? DERP. Yes, thank you Daniel. I mean to say: I am going to offer a cop-out option: maybe we simply don't offer DECLARATIVE syntax as part of the spec? Should we let libraries/frameworks build their own custom elements (with opinion and flair) to implement declarative syntax systems? :DG
Re: [webcomponents]: Of weird script elements and Benadryl
One thing I've heard from many of our in-house developers is that they prefer the imperative syntax, with one caveat: we provide an easy way to allow components import/require/rely-upon other components. This could obviously be done using ES6 Modules, but is there anything we can do to address that use case for the web of today? On Tue, Apr 16, 2013 at 3:02 PM, Dimitri Glazkov dglaz...@google.com wrote: On Tue, Apr 16, 2013 at 3:00 PM, Daniel Buchner dan...@mozilla.com wrote: I am going to offer a cop-out option: maybe we simply don't offer imperative syntax as part of the spec? Why would we do this if the imperative syntax is solid, nicely compatible, and relatively uncontentious? Did you mean to say declarative? DERP. Yes, thank you Daniel. I mean to say: I am going to offer a cop-out option: maybe we simply don't offer DECLARATIVE syntax as part of the spec? Should we let libraries/frameworks build their own custom elements (with opinion and flair) to implement declarative syntax systems? :DG
Re: [webcomponents]: Of weird script elements and Benadryl
On Tue, Apr 16, 2013 at 3:07 PM, Daniel Buchner dan...@mozilla.com wrote: One thing I've heard from many of our in-house developers, is that they prefer the imperative syntax, with one caveat: we provide an easy way to allow components import/require/rely-upon other components. This could obviously be done using ES6 Modules, but is there anything we can do to address that use case for the web of today? Yes, one key ability we lose here is the declarative quality -- with the declarative syntax, you don't have to run script in order to comprehend what custom elements could be used by a document. :DG
Re: Using readyCallback for built-in elements, was: [webcomponents]: Of weird script elements and Benadryl
On Tue, Apr 16, 2013 at 5:33 PM, Dimitri Glazkov dglaz...@google.com wrote: On Mon, Apr 15, 2013 at 6:59 AM, Anne van Kesteren ann...@annevk.nl wrote: I think we should go for one interface per element here. abstract classes not being constructable seems fine. Node/CharacterData are similar to that. This would mean HTMLH1Element, ..., whose compatibility impact has not been measured. The other problem we need to solve is that document.createElement(x) currently gives different results from new x's interface. E.g. new Audio() sets an attribute, document.createElement('audio') does not. I think we should settle for document.createElement('audio') also creating an attribute here. What if we use the newly-found power of readyCallback here? Suppose that HTMLAudioElement has a readyCallback that, among other things, does: if (!this.parentNode) // aha! I am created imperatively this.setAttribute('controls', ''); Several HTML elements will need to use the callback to build their shadow trees and set internal state, like textarea, input, details, fieldset, etc. If we just build readyCallback into DOM, we have cake. Can someone point me to the discussion that led to the name choice readyCallback? Thanks Rick :DG
Re: Using readyCallback for built-in elements, was: [webcomponents]: Of weird script elements and Benadryl
I think there were several f2f conversations around that. I can't remember if we had an email thread around this. It used to be called created, but the timing at which the callback is called makes the name misleading. For example, when parsing, by the time the callback is invoked, the custom element has not only been created, but also populated with attributes and put in the tree. It's essentially now ready for operation. :DG
Re: [webcomponents]: Of weird script elements and Benadryl
On Apr 16, 2013, at 3:13 PM, Dimitri Glazkov wrote: On Tue, Apr 16, 2013 at 3:07 PM, Daniel Buchner dan...@mozilla.com wrote: One thing I've heard from many of our in-house developers, is that they prefer the imperative syntax, with one caveat: we provide an easy way to allow components import/require/rely-upon other components. This could obviously be done using ES6 Modules, but is there anything we can do to address that use case for the web of today? Yes, one key ability we lose here is the declarative quality -- with the declarative syntax, you don't have to run script in order to comprehend what custom elements could be used by a document. My sense is that the issues of concern (at least on this thread) with declaratively defining custom elements all relate to how custom behavior (i.e., script stuff) is declaratively associated. I'm not aware of (but also not very familiar with) similar issues relating to template and other possible element subelements. I also imagine that there is probably a set of use cases that don't actually need any custom behavior. That suggests to me, that a possible middle ground, for now, is to still have declarative custom element definitions but don't provide any declarative mechanism for associating script with them. Imperative code could presumably make that association, if it needed to. I've been primarily concerned about approaches that would be future hostile toward the use of applicable ES features that are emerging. I think we'll see those features in browsers within the next 12 months. Deferring just the script features of element would help with the timing and probably allow a better long term solution to be designed. Allen
Re: [webcomponents]: Of weird script elements and Benadryl
*Deferring just the script features of element would help with the timing and probably allow a better long term solution to be designed.* If the callbacks are not mutable or become inert after registration (as I believe was the case), how would a developer do this -- *Imperative code could presumably make that association, if it needed to.* On Tue, Apr 16, 2013 at 3:47 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote: On Apr 16, 2013, at 3:13 PM, Dimitri Glazkov wrote: On Tue, Apr 16, 2013 at 3:07 PM, Daniel Buchner dan...@mozilla.com wrote: One thing I've heard from many of our in-house developers, is that they prefer the imperative syntax, with one caveat: we provide an easy way to allow components import/require/rely-upon other components. This could obviously be done using ES6 Modules, but is there anything we can do to address that use case for the web of today? Yes, one key ability we lose here is the declarative quality -- with the declarative syntax, you don't have to run script in order to comprehend what custom elements could be used by a document. My sense is that the issues of concern (at least on this thread) with declaratively defining custom elements all relate to how custom behavior (i.e., script stuff) is declaratively associated. I'm not aware of (but also not very familiar with) similar issues relating to template and other possible element subelements. I also imagine that there is probably a set of use cases that don't actually need any custom behavior. That suggests to me, that a possible middle ground, for now, is to still have declarative custom element definitions but don't provide any declarative mechanism for associating script with them. Imperative code could presumably make that association, if it needed to. I've been primarily concerned about approaches that would be future hostile toward the use of applicable ES features that are emerging. I think we'll see those features in browsers within the next 12 months. 
Deferring just the script features of element would help with the timing and probably allow a better long term solution to be designed. Allen
Re: [webcomponents]: Of weird script elements and Benadryl
On Apr 16, 2013, at 4:08 PM, Daniel Buchner wrote: Deferring just the script features of element would help with the timing and probably allow a better long term solution to be designed. If the callbacks are not mutable or become inert after registration (as I believe was the case), how would a developer do this -- Imperative code could presumably make that association, if it needed to. Here is what I suggested earlier on this thread for what to do if a constructor= attribute wasn't supplied, when we were talking about that scheme: 1) create a new anonymous constructor object that inherits from HTMLElement. It wouldn't have any unique behavior but it would be uniquely associated with the particular element that defined it and it might be useful for doing instanceof tests. It would be the constructor that you register with the tag. If that was done, it seems reasonable that the provided constructor object could be available as the value of an attribute of the HTMLElementElement that corresponds to the element. So, imperative code could look up the HTMLElementElement based on its name property and retrieve the constructor object. The constructor object would have a prototype whose value is the actual prototype object used for these custom element objects and the imperative code could assign methods. The script that assigns such methods would need to be placed to run after the element is parsed but before any other imperative code that actually makes use of those methods. Prototype objects are not normally immutable so there is no problem with delaying the installation of such methods even until after instances of the custom element have actually been created by the HTML parser. 
Allen On Tue, Apr 16, 2013 at 3:47 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote: On Apr 16, 2013, at 3:13 PM, Dimitri Glazkov wrote: On Tue, Apr 16, 2013 at 3:07 PM, Daniel Buchner dan...@mozilla.com wrote: One thing I've heard from many of our in-house developers, is that they prefer the imperative syntax, with one caveat: we provide an easy way to allow components import/require/rely-upon other components. This could obviously be done using ES6 Modules, but is there anything we can do to address that use case for the web of today? Yes, one key ability we lose here is the declarative quality -- with the declarative syntax, you don't have to run script in order to comprehend what custom elements could be used by a document. My sense is that the issues of concern (at least on this thread) with declaratively defining custom elements all related to how custom behavior (ie, script stuff) is declaratively associated. I'm not aware (but also not very familiar) with similar issues relating to template and other possible element subelement. I also imagine that there is probably a set of use cases that don't actually need any custom behavior. That suggests to me, that a possible middle ground, for now, is to still have declarative custom element definitions but don't provide any declarative mechanism for associating script with them. Imperative code could presumably make that association, if it needed to. I've been primarily concerned about approaches that would be future hostile toward the use of applicable ES features that are emerging. I think we'll be see those features in browsers within the next 12 months. Deferring just the script features of element would help with the timing and probably allow a better long term solution to be designed. Allen
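Allen's suggestion reads naturally as ES5. The sketch below uses illustrative names (`registerElement`, `registry`, `HTMLElementStandIn`) rather than any proposed API: an anonymous constructor is created per element name, registered for later lookup, and methods land on its mutable prototype after an instance already exists.

```javascript
function HTMLElementStandIn() {}   // stand-in for the real HTMLElement

var registry = {};

function registerElement(name) {
  var Ctor = function () {};                              // anonymous constructor
  Ctor.prototype = Object.create(HTMLElementStandIn.prototype);
  Ctor.prototype.constructor = Ctor;
  registry[name] = Ctor;                                  // look it up by name later
  return Ctor;
}

var XFoo = registerElement('x-foo');
var early = new XFoo();            // e.g. an instance the parser already created

// Imperative code runs later, finds the constructor, and adds behavior;
// the existing instance sees it through the prototype chain.
registry['x-foo'].prototype.greet = function () { return 'hello'; };
```

This is exactly the "delayed installation" point: because prototypes are mutable, instances created before the methods were assigned still pick them up.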
[Bug 21725] New: Specify id for Step 1 of the Web Workers Processing Model
https://www.w3.org/Bugs/Public/show_bug.cgi?id=21725

Bug ID: 21725
Summary: Specify id for Step 1 of the Web Workers Processing Model
Classification: Unclassified
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: Windows NT
Status: NEW
Severity: normal
Priority: P2
Component: Web Workers (editor: Ian Hickson)
Assignee: i...@hixie.ch
Reporter: jm...@microsoft.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

The High Resolution Time Level 2 specification [1] adds support for the performance.now() method in the worker context. Specifically, it defines the time origin for a shared worker to be the time immediately before the creation of the shared worker. To more clearly define the time of creation, I'd like to link to Step 1 of the Web Worker processing model. Please specify an id for step 1. Thanks, Jatinder [1] https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/HighResolutionTime2/Overview.html

-- You are receiving this mail because: You are on the CC list for the bug.
Re: [webcomponents]: Of weird script elements and Benadryl
* Rick Waldron wrote: Of course, but we'd also eat scraps from the trash if that was the only edible food left on earth. document.createElement() is and has always been the wrong way—the numbers shown in those graphs are grossly skewed by a complete lack of any real alternative. If I want to make a new button to put in the document, the first thing my JS programming experience tells me: new Button(); And if you read code like `new A();` your programming experience would probably tell you that you are looking at machine-generated code. And if you read `new Time();` you would have no idea whether this creates some `new Date();`-like object, or throws an exception because the browser you try to run that code on does not support the `<time>` element yet or anymore (the element was proposed, withdrawn, and then proposed again) and if it's something like var font = new Font("Arial 12pt"); canvas.drawText("Hello World!", font); The idea that you are constructing `<font>` elements probably wouldn't cross your mind much. And between new HTMLButtonElement(); and new Element('button'); I don't see why anyone would want the former in an environment where you cannot rely on `HTMLHGroupElement` existing (the `hgroup` element had been proposed, and is currently withdrawn, or not, depending on where you get your news from). Furthermore, there actually are a number of dependencies to take into account, like in var agent = new XMLHttpRequest(); ... agent.open('GET', 'example'); Should that fail because the code does not say where to get `example` from, or should it succeed by picking up some base reference magically from the environment (and which one: is `example` relative to the script code, or to the document the code has been transcluded into, and when is that decision made as code moves across global objects, and so on)? Same question for `new Element('a')`, if the object exposes some method to obtain the absolute value of the `href` attribute in some way. 
But I live in the bad old days (assuming my children won't have to use garbage APIs to program the web) and my reality is still here: document.createElement("button"); That very clearly binds the return value to `document` so you actually can do var button = document.createElement("button"); ... button.ownerDocument.example(...); in contrast to, if you will, var button = new Button(); button.ownerDocument.example(...); where `button.ownerDocument` could only have a Document value if there is some dependency on global state that your own code did not create. I would expect that code to fail because the ownerDocument has not been specified, and even if I would expect that particular code to succeed, I would be unable to tell what would happen if `example` was invoked in some other way, especially when `example` comes from another global. -- Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/
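The ownerDocument point above can be made concrete with a toy model: a factory method on a document object can bind the new element to it at creation time, while a bare constructor has no document in scope to bind to. All names here are illustrative, not real DOM API.

```javascript
// Stand-in document whose factory binds ownerDocument on creation.
function Doc() {}
Doc.prototype.createElement = function (tag) {
  return { tag: tag, ownerDocument: this };
};

// Bare constructor: nothing to bind to without reaching for global state.
function Button() {
  this.tag = 'button';
  this.ownerDocument = null;
}

var doc = new Doc();
var fromFactory = doc.createElement('button');
var fromCtor = new Button();
```

Any later operation that needs a document (base URL resolution, adoption, event dispatch) has an unambiguous answer for `fromFactory` and an open question for `fromCtor`, which is the dependency Björn is pointing at.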