Re: do not deprecate synchronous XMLHttpRequest
Given that implementations already support synchronous behavior, the only reason to deprecate would be if continuing to support that behavior imposes a significant burden on future versions of these implementations. I would conjecture it does not, so I oppose deprecation on the principle that policy should be determined by the user/author, not the standard. Morality should not be legislated! On Tue, Feb 10, 2015 at 7:47 AM, Ashley Gullen ash...@scirra.com wrote: I am on the side that synchronous AJAX should definitely be deprecated, except in web workers where sync stuff is OK. Especially on the modern web, there are two really good alternatives: - write your code in a web worker where synchronous calls don't hang the browser - write async code which doesn't hang the browser With modern tools like Promises and the new Fetch API, I can't think of any reason to write a synchronous AJAX request on the main thread when an async one could have been written instead with probably little extra effort. Alas, existing codebases rely on it, so it cannot be removed easily. But I can't see why anyone would argue that it's a good design principle to make possibly seconds-long synchronous calls on the UI thread. On 9 February 2015 at 19:33, George Calvert george.calv...@loudthink.com wrote: I third Michaela and Gregg. It is the app and site developers' job to decide whether the user should wait on the server — not the standard's and, 99.9% of the time, not the browser's either. I agree a well-designed site avoids synchronous calls. BUT — there still are plenty of real-world cases where the best choice is having the user wait: like when subsequent options depend on the server's reply, or more nuanced, app/content-specific cases where rewinding after an earlier transaction fails is detrimental to the overall UX or simply impractical to code. Let's focus our energies elsewhere — dispensing with browser warnings that tell me what I already know and with deprecating features that are well-entrenched and, on occasion, incredibly useful. Thanks, George Calvert
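For readers weighing the trade-off described above, a minimal sketch of the two styles follows; the /api/data endpoint is illustrative only, not something from the thread:

    // Synchronous XHR: blocks the UI thread until the server responds.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/data", false); // third argument false = synchronous
    xhr.send();
    console.log(xhr.responseText);

    // Asynchronous alternative using the Fetch API and Promises.
    fetch("/api/data")
      .then(function (response) { return response.text(); })
      .then(function (text) { console.log(text); })
      .catch(function (err) { console.error("Request failed:", err); });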
Re: Are web components *seriously* not namespaced?
On Wed, Feb 4, 2015 at 2:31 PM, Glen glen...@gmail.com wrote: I know I'm rather late to the party, but I've been doing a lot of reading lately about web components and related technologies, and the one thing that confounds me is the fact that web components appear not to have any real namespacing. There is a serious antipathy towards XML in some quarters. So I believe the vocabulary was designed for non-XML namespace aware parsers. Others can verify my understanding (or not). Can someone explain why this is so, and what the justification is? Or is it just a case of "it was too complicated, this is good enough"? I see this has been brought up once before @ http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0964.html, but nothing changed. It's not going to be long before x-tabs has been defined by 1,000,000 people (slight exaggeration), and you have no idea what it is or where it came from without looking through imports/scripts etc. Also you want to keep things short, so you call your element ms-panel (you work for Monkey Solutions LLC), but then someone else on the team is importing ms-panel from Microsoft, and BAM!, you have another problem. Why can't we do something like this?

    <!-- /scripts/monkey-solutions/panel.js -->
    <script>
      var panel = document.registerElement("panel", {
        namespace: ms https://monkey-solutions.com/namespace
      });
    </script>

    <!-- /scripts/microsoft/panel.js -->
    <script>
      var panel = document.registerElement("panel", {
        namespace: ms https://microsoft.com/namespace
      });
    </script>

    <!-- Uses last defined element, as it currently works. -->
    <ms-panel>

    <!-- Redefine the namespace prefix for one of the custom elements. -->
    <element name="panel" namespace="https://microsoft.com/namespace" prefix="msft" />
    <ms-panel>
    <msft-panel>

You could also assign a prefix to all elements within a namespace like this:

    <element name="*" namespace="https://microsoft.com/namespace" prefix="msft" />

You can override the prefix multiple times and the closest element definition is used. Please note that the above syntax is just an example of what could be used. Another BIG pro here is that IDEs can pull in information about the elements by sending an HTTP request to the namespace URI so that a tooltip could be displayed with an element description, author, sample usage, etc. I really do hope that it's not too late to make such a change. Regards, Glen.
Re: CfC: publish Proposed Recommendation of Server-Sent Events; deadline November 28
+1 On Fri, Nov 21, 2014 at 7:02 AM, Arthur Barstow art.bars...@gmail.com wrote: The latest interop data Zhiqiang generated for Server-sent Events [All] indicates 102/124 passes and [2] isolates the 22 failures with less than two implementations, including 9 failures which are due to Web IDL implementation bugs (thus, not counting the WebIDL failures the pass rate is 111/124 or ~90%). The non-Web IDL failures are: 1. http://www.w3c-test.org/eventsource/dedicated-worker/eventsource-constructor-non-same-origin.htm 2. http://www.w3c-test.org/eventsource/shared-worker/eventsource-constructor-non-same-origin.htm 3. http://www.w3c-test.org/eventsource/format-field-retry-bogus.htm My take on these failures is: #1 and #2 test the UA's error handling of URLs that cannot be resolved (f.ex. unsupported URL scheme, URL doesn't exist). The failures appear to be relatively low priority implementation bugs (see [Bug119974]) that seem unlikely to occur in a tested deployment. #3 tests the UA's handling of an invalid data value for the retry (constructor) parameter. This test actually now passes when I run it on FF beta 34.0, so it should be removed from [2]. Regardless, the failure appears to be a relatively low priority implementation bug that seems unlikely to occur in a tested deployment. As such, this is a Call for Consensus to publish SSE as a Proposed Recommendation. If you have any comments or concerns about this CfC, please reply to this e-mail by November 28 at the latest. Positive response is preferred and encouraged, and silence will be considered as agreement with the proposal. The [ED] has changed since the [CR] was published (see [Diff]) so this proposal assumes that if/when there is a resource commitment to include changes on the TR track, that will be done separately. -Thanks, AB [All] http://w3c.github.io/test-results/eventsource/less-than-2.html [2] http://w3c.github.io/test-results/eventsource/less-than-2.html [Bug119974] https://bugs.webkit.org/show_bug.cgi?id=119974 [CR] http://www.w3.org/TR/2012/CR-eventsource-20121211/ [ED] http://dev.w3.org/html5/eventsource/ [Diff] http://services.w3.org/htmldiff?doc1=http%3A%2F%2Fdev.w3.org%2Fcvsweb%2F~checkout~%2Fhtml5%2Feventsource%2FOverview.html%3Frev%3D1.233%3Bcontent-type%3Dtext%252Fhtml&doc2=http%3A%2F%2Fdev.w3.org%2Fcvsweb%2F~checkout~%2Fhtml5%2Feventsource%2FOverview.html%3Frev%3D1.258%3Bcontent-type%3Dtext%252Fhtml
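For context, a minimal EventSource sketch of the behavior those tests exercise; the /events URL and the retry value are illustrative, not taken from the test suite:

    // Client: subscribe to a server-sent event stream.
    var source = new EventSource("/events");
    source.onmessage = function (e) { console.log("data:", e.data); };
    source.onerror = function () {
      console.log("connection error, readyState =", source.readyState);
    };

    // Server response body (Content-Type: text/event-stream). A non-numeric
    // value in the retry field is the kind of bogus input that
    // format-field-retry-bogus.htm expects the UA to ignore.
    //   retry: 3000
    //   data: hello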
Re: publishing new WD of URL spec
On Thu, Sep 11, 2014 at 2:20 PM, Robin Berjon ro...@w3.org wrote: On 11/09/2014 00:14 , Glenn Adams wrote: WHATWG specs are not legitimate for reference by W3C specs. Their IPR status is indeterminate and they do not follow a consensus process. This is blatant trolling as well as factually wrong in every single statement that it makes. I would say something impolite and vulgar at this point. But you've already pulled down this thread to the gutter Robin by resorting to name calling, so I won't do that. I would invite all of you to not feed this thread as it can't possibly lead anywhere useful. You mean, don't feed it like you are doing? As a W3T member, I would think it your duty to stay out of the fray. Best listen to your duty. -- Robin Berjon - http://berjon.com/ - @robinberjon
Re: publishing new WD of URL spec
WHATWG specs are not legitimate for reference by W3C specs. Their IPR status is indeterminate and they do not follow a consensus process. On Wed, Sep 10, 2014 at 11:58 PM, Domenic Denicola dome...@domenicdenicola.com wrote: This is a formal objection to the publication of this specification. My arguments against publishing this specification include that copying the spec from the WHATWG is an unnecessarily combative way of working with another standards body, especially with regard to the URL Standard wherein we/they have been trying hard to address the issues of IP coverage and stable references on the W3C's terms. I would rather see this talked through and agreement come to regarding how the W3C can work to reference WHATWG specs in the same way that they reference Ecma or IETF specs. On the technical side, I argue that previous efforts to copy WHATWG specs, even well-intentioned ones like the DOM, have led to out-of-date snapshots permeating the internet and causing developer and implementer confusion. (See links in [1]; see also the contrast between one implementer's policies at [2] and another's at [3].) We can't even fall back to "never look at TR because it is always out of date; use ED instead!" because in the case of e.g. DOM 4, the ED is five months out of date. I acknowledge that Dan is going to great lengths to make sure that this copying is done right, insofar as it can be. E.g., he is copying not plagiarizing; he is stating that he wants feedback to flow through the upstream version instead of diverging; and he says that he will add more clear signposting to the document to help direct implementers and developers to the upstream version. However, I think this plan is merely a band-aid on a larger problem, akin to feeding the W3C's spec-copying addiction with a nicotine patch instead of a full-on cancer stick. An improvement, but I'd really prefer we break the addiction entirely. There are a number of remedies that would address this formal objection. The most preferable would be for the W3C to work amicably with the WHATWG to figure out a way to treat them and their specs as legitimate, instead of constantly copying them. This could include e.g. issuing a call to the AC reps in the webapps working group to commit to patent protection via the WHATWG's patent mechanism [4]. In the category of "these proposals MAY be vague or incomplete" [5], I would like the W3C to consider seriously how to react to the world wherein standards best serve the web by being living, and find some way to get out of the outmoded and bug-encouraging mode of thinking that stands behind stable references. An alternate way of addressing the formal objection would be to outline a very clear process for avoiding the dangers that have cropped up in previous WHATWG copies. 
This would include, among other things: an automated system for ensuring that the latest version of the upstream spec is always copied to TR; a blacklisting of outdated snapshots from search engines via robots.txt; some way of dealing with the fact that webapps patent commitments will be made to an outdated snapshot, but that snapshot should not be given any prominence for implementers or authors visiting the W3C website; and a public acknowledgement that implementers should not look at any outdated snapshots such as CR (so, the normal call for implementations would have to be modified, so we don't get ridiculous situations like the one HTML 5.0 is currently undergoing, where you call for implementations of a spec that is multiple years behind what implementations actually need to implement for interoperability). [1]: http://wiki.whatwg.org/wiki/TR_strikes_again [2]: https://github.com/mozilla/servo/wiki/Relevant-spec-links [3]: http://status.modern.ie/ [4]: http://blog.whatwg.org/make-patent-commitments [5]: http://www.w3.org/2014/Process-20140801/#FormalObjection -Original Message- From: Arthur Barstow [mailto:art.bars...@gmail.com] Sent: Wednesday, September 10, 2014 18:40 To: public-webapps; www-...@w3.org Subject: PSA: publishing new WD of URL spec [ Sorry for the cross-posting but this is about a joint WD publication between WebApps and TAG. ] This is a heads-up (aka PublicServiceAnnouncement) about the intent to publish a new WD of the URL spec (on or around Sept 16) using this ED as the basis: http://w3ctag.github.io/url/ As previously agreed, and codified in WebApps' current [Charter], the WD will be published jointly by WebApps and the TAG. I realize some people do not support W3C publishing the URL spec, so as a reminder - as defined in WebApps' off-topic discussion policy ([OffTopic]) - if anyone has any _process-type_ comments, concerns, etc. about this publication - please send that feedback to the public-w3process list [w3process]. Please do _not_ send such feedback to public-webapps nor www-tag. -Thanks, AB [Charter]
Re: publishing new WD of URL spec
On Thu, Sep 11, 2014 at 12:27 AM, James Robinson jam...@google.com wrote: On Wed, Sep 10, 2014 at 3:14 PM, Glenn Adams gl...@skynav.com wrote: WHATWG specs are not legitimate for reference by W3C specs. Do you have a citation to back up this claim? If it isn't obvious, I am stating my opinion regarding the matter of legitimacy. Just like Domenic is stating his opinion. My opinion is based on 20 years of experience with the W3C and 40 years of experience with standards bodies. In contrast, my claim regarding IPR policy and lack of consensus is not an opinion, but an uncontested fact. Or would you dispute this? Their IPR status is indeterminate and they do not follow a consensus process. Do you have citations for where this is listed as part of the requirements for references in W3C specifications? The current W3C normative references guidelines [1], only recently published, are the only written policy of which I'm aware. This document does not prohibit referencing a WHATWG document. Ultimately, only TBL (or his delegate) will make a decision on such matters. But I trust they will take input from their members into account. Reading [1], one wonders how the WHATWG would fare on the question of "Stability of the Referenced Document", including "Stability of the Organization/Group". Since there is no organization per se, and since the philosophy of the WHATWG is explicitly contrasted with the notion of stability, then there are serious questions to be asked about permitting such a reference. [1] http://www.w3.org/2013/09/normative-references I know these are your personal opinions but am not aware of anything that states this is W3C process. I agree, but that doesn't mean that it is acceptable or even a good idea to permit normative references to a WHATWG work, i.e., a work of Hixie and friends. Personally, I don't care how good the technical content of this work is if its authors refuse to participate in accepted processes. Why, for instance, is this work [URL] not being taken to the IETF? That is the natural home for such work. From all appearances, the answer is that Hixie et al. don't like playing the normally accepted standards process game and wish to sideline it for their own ends. While I admire Hixie and his group of friends for their technical proclivity, I do not admire their refusal to work within accepted practices. There is good value in following the W3C IPR policies and participating in a consensus process. So I object to efforts that would diminish or destabilize the value proposition of the W3C, the IETF, and other accepted organizations, for no other apparent purpose than to satisfy the whim and impatience of a gang of Young Turks. I think this is all I need to say on this subject, so I will avoid continuing this thread. Take it as you will. - James
Re: First Draft of W3C version of URL Spec
On Thu, Aug 28, 2014 at 10:04 AM, Ian Hickson i...@hixie.ch wrote: On Wed, 27 Aug 2014, Daniel Appelquist wrote: As you might know, the new charter for webapps includes a new version of the URL spec. I am acting as editor of this spec. What's the purpose of the W3C republishing this spec? Quite obviously, to have a reference to a stable document that follows the W3C REC process, while WHATWG documents satisfy neither condition. -- Ian Hickson, http://ln.hixie.ch/
Re: First Draft of W3C version of URL Spec
On Wed, Aug 27, 2014 at 2:50 PM, Daniel Appelquist appelqu...@gmail.com wrote: Hello URL fans - As you might know, the new charter for webapps[1] includes a new version of the URL spec. I am acting as editor of this spec. With some help from Robin and PLH I've produced a first draft[2] which imports the latest work by Anne on the upstream WHATWG URL spec[3] with a few minimal editorial changes. Also note that the document is licensed as CC-BY. The intention is to keep this version in sync with the WHATWG version of the URL spec. This means that ideally any changes should be fed back through the WHATWG bug tracker[4]. The intention is to follow the model laid down by the DOM spec.[5] It’s my further intention to ensure confusion is minimized by clearly sign-posting in the w3c version that the WHATWG version is the living spec. Make sure this is informative text. I don't know if the term living spec[ification] has any formal meaning in the W3C. [Correct me if I missed the memo that defines it.] This version updates and supersedes the previous W3C version published in May 2012[6] and later updated in November 2012[7]. The goal is to move ahead fairly aggressively with the publication time-line for this spec. Please feed back any comments here. Thanks, Dan Appelquist 1. http://www.w3.org/2014/06/webapps-charter.html 2. http://w3ctag.github.io/url/ 3. http://url.spec.whatwg.org 4. https://www.w3.org/Bugs/Public/buglist.cgi?component=URL&list_id=42864&product=WHATWG&resolution=--- 5. http://www.w3.org/TR/dom/ 6. http://www.w3.org/TR/url/ 7. https://dvcs.w3.org/hg/url/raw-file/tip/Overview.html
Re: WebIDL Spec Status
On Thu, Jun 26, 2014 at 10:18 AM, Ian Hickson i...@hixie.ch wrote: On Wed, 25 Jun 2014, Glenn Adams wrote: On Tue, Jun 24, 2014 at 8:28 PM, Ian Hickson i...@hixie.ch wrote: Comparing implementations to anything but the very latest draft is not only a waste of time, it's actively harmful to interoperability. At no point should any implementor even remotely consider making a change from implementing what is currently specified to what was previously specified, that would literally be going backwards. That sounds reasonable, but it's not always true (an exception to every rule, eh). For example, in order to ship a device that must satisfy compliance testing to be certified, e.g., to be granted a branding label, to satisfy a government mandate, etc., it may be necessary to implement and ship support for an earlier version. For pointless certification purposes, you can use any random revision of the spec -- just say what the revision number is and use that (and honestly, who cares how well you implement that version -- it's not like the testing process is going to be thorough). Don't ship that, though. Whatever you ship should be regularly kept up to date with changes to the spec as they occur. (It's not an option to not be able to ship fixes, since otherwise you'd be unable to fix security vulnerabilities either, which is obviously a non-starter.) What you ship, and subsequent revisions thereto, is what you should be spending any serious amount of time testing. And for that, you shouldn't use a snapshot, you should use the latest revision of the spec. For the pointless certification, just as for the patent coverage, we should publish whatever revision we have and just stamp it as a REC. It doesn't matter what bugs it has. We know it'll have bugs -- the day after it's published, maybe even earlier, we'll find new bugs that will need fixing. It doesn't really matter, since it's not for use by implementors, just by lawyers and pointless certification teams. I would respond, but it would be ... pointless. -- Ian Hickson, http://ln.hixie.ch/
Re: WebIDL Spec Status
On Fri, Jun 27, 2014 at 10:29 AM, Ian Hickson i...@hixie.ch wrote: On Fri, 27 Jun 2014, Glenn Adams wrote: For pointless certification purposes, you can use any random revision of the spec -- just say what the revision number is and use that (and honestly, who cares how well you implement that version -- it's not like the testing process is going to be thorough). Don't ship that, though. Whatever you ship should be regularly kept up to date with changes to the spec as they occur. (It's not an option to not be able to ship fixes, since otherwise you'd be unable to fix security vulnerabilities either, which is obviously a non-starter.) What you ship, and subsequent revisions thereto, is what you should be spending any serious amount of time testing. And for that, you shouldn't use a snapshot, you should use the latest revision of the spec. For the pointless certification, just as for the patent coverage, we should publish whatever revision we have and just stamp it as a REC. It doesn't matter what bugs it has. We know it'll have bugs -- the day after it's published, maybe even earlier, we'll find new bugs that will need fixing. It doesn't really matter, since it's not for use by implementors, just by lawyers and pointless certification teams. I would respond, but it would be ... pointless. I'm guessing you misinterpreted what I said, specifically, that you interpreted the "pointless" in "pointless certification" as an insult of some sort. To clarify, I did not mean it that way; I meant it literally, as in, specifically the kinds of certifications that you may be required to pursue for political or bureaucratic reasons but which have no practical purpose, as opposed to the kind of certification that serves an important purpose, like certifying that some software that's going to run a rocket passes all its tests. No, I did not take it as an insult. I have too thick a skin to be insulted. In any case, most insults thrown my way are probably true. :) My use of "pointless" was intended to mean that it is pointless to argue with you about whether certification required for political or bureaucratic reasons (by which I understand you to include legal reasons as well) is or is not "pointless", to use your phrase. Clearly I don't agree with your position. Certifying that software passes tests for an obsolete version of a standard, when the standard's purpose is interoperability and achieving that interoperability requires converging on a target that we're only slowly reaching over many years, is at best pointless, and at worst harmful, which is why I stand by the advice above. We have different understandings of the meaning of interoperability. My interpretation of your definition of interoperability is that it is a ghost: in the sense that it has no fixed point of reference, i.e., no fixed set of specifications against which it (interoperability) can be certified. Clearly we operate in different business regimes. -- Ian Hickson, http://ln.hixie.ch/
Re: WebIDL Spec Status
On Thu, Jun 26, 2014 at 4:52 AM, Arthur Barstow art.bars...@gmail.com wrote: On 6/25/14 11:58 AM, Glenn Adams wrote: In the case of WebIDL, my personal preference would be to not spend precious effort on WebIDL 1 CR, but instead to: (1) publish WebIDL 1 CR as a WG Note without attempting to resolve outstanding issues, other than by clearly annotating the existence of those issues in the Note; (2) focus on moving WebIDL 2E (2nd edition) to FPWD and thence to LC, etc. If this process is followed, then it also may be useful to relabel these two works a bit, e.g., by calling what is now WebIDL CR something like "WebIDL Legacy" in a WG Note, and then using the generic name WebIDL for what is now called WebIDL 2E. Just an idea to consider. Well, I admit I like this proposal, quite a lot actually; however, I don't know if it will satisfy the relevant process requirements (f.ex. [NormRef]). (Perhaps I should move this proposal to the public-w3process list ...) Philippe, Yves, Cindy - what are your thoughts on Glenn's proposal for v1? Glenn - would your v1 WG Note proposal satisfy all of the WebIDL reference cases that concern you (I'm wondering in particular about specs from other SSOs that reference WebIDL)? The reference cases I'm working with (primarily DLNA specs) dereference WebIDL via the HTML5 references list, which, in turn, refers to WebIDL 2E. So at this point, I have no issue with the existing CR being moved to a WG Note. Given the limited editorial resources, I prefer effort going into progressing 2E. All - feedback on Glenn's proposal is certainly welcome. -Thanks, AB [NormRef] http://www.w3.org/2013/09/normative-references
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 8:28 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 24 Jun 2014, Boris Zbarsky wrote: On 6/24/14, 1:05 PM, Glenn Adams wrote: Such device certification regimes cannot work unless the referenced specifications are locked down and clearly implementable. I see. So this is not about actual spec implementations or spec authors but effectively about a QA cycle that compares the implementations to the specs, and which needs to know which spec to compare the implementations to. Comparing implementations to anything but the very latest draft is not only a waste of time, it's actively harmful to interoperability. At no point should any implementor even remotely consider making a change from implementing what is currently specified to what was previously specified, that would literally be going backwards. That sounds reasonable, but it's not always true (an exception to every rule, eh). For example, in order to ship a device that must satisfy compliance testing to be certified, e.g., to be granted a branding label, to satisfy a government mandate, etc., it may be necessary to implement and ship support for an earlier version. In the case of WebIDL, my personal preference would be to not spend precious effort on WebIDL 1 CR, but instead to: (1) publish WebIDL 1 CR as a WG Note without attempting to resolve outstanding issues, other than by clearly annotating the existence of those issues in the Note; (2) focus on moving WebIDL 2E (2nd edition) to FPWD and thence to LC, etc. If this process is followed, then it also may be useful to relabel these two works a bit, e.g., by calling what is now WebIDL CR something like "WebIDL Legacy" in a WG Note, and then using the generic name WebIDL for what is now called WebIDL 2E. Just an idea to consider.
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 7:14 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/24/14, 6:56 AM, Charles McCathie Nevile wrote: While nobody is offering an editor who can get the work done, this argument is in any case academic (unless the editor's availability is predicated on the outcome, in which case it would be mere political machinations). I strongly disagree with that characterization. The fact is, for browser vendors a stable v1 Web IDL snapshot as we have right now is not very useful, since that's not what they need to implement in practice: there are too many APIs currently being specified that cannot be expressed in that snapshot. So it's really hard to justify devoting resources to such a snapshot. On the other hand, making Web IDL reflect ongoing specification reality is something that's really useful to browser vendors, so it might be easier to convince them to spend time on that. No political machinations involved. A more recent snapshot might be more useful, but is still likely to end up not being an actual implementation target because there are still too many changes happening in terms of ES integration and the like. I don't have a good solution to this problem, unfortunately. :( On the other hand, the only audience I see for a snapshot are specification writers who don't want/need the newer things we're adding to Web IDL. Are there other audiences? Are there actually such specification writers? The recent set of changes to Web IDL have all been driven by specification needs. There are organizations attempting to create device certification regimes based on specifications that normatively reference HTML5, DOM4, XHR2, Canvas2D, WebGL, etc., and many other W3C API specs, all of which have a normative dependency on WebIDL in the sense that they must implement IDL features in ECMAScript according to the ECMAScript binding semantics in WebIDL, which in turn become dependencies for testing. Such device certification regimes cannot work unless the referenced specifications are locked down and clearly implementable. Having a WebIDL that is always in a state of flux makes such work well-nigh impossible, or at best extremely difficult and untrustworthy. -Boris
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 11:08 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/24/14, 1:05 PM, Glenn Adams wrote: Such device certification regimes cannot work unless the referenced specifications are locked down and clearly implementable. I see. So this is not about actual spec implementations or spec authors but effectively about a QA cycle that compares the implementations to the specs, and which needs to know which spec to compare the implementations to. Not at all. This is not about one or even a group of organizations or about QA. It is about fulfilling the process goals of the W3C and the WebApps WG. The primary goal of the W3C is to produce Technical Reports that reach a stable level of maturity. The charter of each WG includes the creation of technical reports at the REC maturity level, i.e., reports that undergo the REC track process. If a WG fails to move a technical report to REC then it has failed its chartered purpose (as far as that report is concerned). Alternatively, it could formally decide to abandon the work by moving it to a WG Note, which implies it won't be further progressed to REC. The W3C has customers other than browser vendors. It portrays itself as a standards organization (at least informally) and talks about its work products being standards (at least informally). Standards organizations must move their work to some status that is recognized as complete, otherwise they will become a joke in the larger community of SDOs and customers. In my capacity in this WG, I represent a Full Member who pays for membership in order to see technical work reach completion. An ED or a CR does not represent completion. They are willing to help wherever possible, and devote considerable resources to the W3C at large. If at the end of the day I have to tell them that key technical work, such as WebIDL, will never reach REC, and that means that most key specifications (HTML5, DOM4) are technically incomplete or at least untrustworthy (as concrete, well-defined technical works), then it will have a negative impact on their use of those specs as well as a negative impact on future investment in the W3C process. In the current situation, I think the best course would be for the chair and team members of this group to attempt to work with the editor to define a reasonable schedule for moving it forward to REC, and, if necessary, call for volunteer co-editors if the current editor is unable to invest sufficient time to see through that process. [I would note that Cameron has done and is doing an outstanding job, but appears to be negatively impacted by constant requests for new IDL features by ongoing spec writers.] The bottom line: this is about fulfilling the WG's charter and the W3C process goals. In an ideal alignment of incentives, the organizations that need this sort of snapshot would step up to produce it, but I'm not sure how likely that is to happen in practice... -Boris
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 11:57 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/24/14, 1:46 PM, Glenn Adams wrote: The primary goal of the W3C is to produce Technical Reports that reach a stable level of maturity. The Technical Reports are not an end in themselves. They're a means to an end. This is why we don't produce Technical Reports that just say "do whatever" if we can avoid it, because that would fail to fulfill our _actual_ goals (which might differ for different W3C members of course; for some of them maybe "do whatever" is good enough for their purposes). You're correct that sometimes the production of the Technical Report is viewed as an end in itself in an attempt to bridge the different members' actual goals. Sometimes this works ok, and sometimes the result is a TR that is useless to some subset of members. I happen to be affiliated with a member for whom most TRs (possibly all of them) as practiced by the W3C tend to be somewhat useless compared to the process of putting together the TR, so I have certain biases in that regard. If a WG fails to move a technical report to REC then it has failed its chartered purpose (as far as that report is concerned). Yes, agreed, as the W3C process is set up right now. It's a bug, not a feature. ;) In my capacity in this WG, I represent a Full Member who pays for membership in order to see technical work reach completion. Is this Member willing to devote resources to getting there? They are. By having me test IDL features, by having me report them to Cameron, by having me participate in this WG. Are you asking if they can supply an editor? That would best be handled by having the chairs issue a call for volunteers for co-editor on WebIDL. Again, I'm not saying we shouldn't have a REC of Web IDL. I'm saying that currently there's a perverse incentives problem: the only people who have stepped up to edit the spec are the ones who are affiliated with a Member which can't make much use of a Web IDL REC in its current state. Which means that they end up, consciously or not, not prioritizing reaching REC on Web IDL v1, say, particularly highly. In the current situation, I think the best course would be for the chair and team members of this group to attempt to work with the editor to define a reasonable schedule for moving it forward to REC, and, if necessary, call for volunteer co-editors if the current editor is unable to invest sufficient time to see through that process. Yep, we agree. -Boris
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 12:36 PM, Marcos mar...@marcosc.com wrote: On June 24, 2014 at 2:33:41 PM, Glenn Adams (gl...@skynav.com) wrote: They are. By having me test IDL features, by having me report them to Cameron, by having me participate in this WG. Are you asking if they can supply an editor? That would best be handled by having the chairs issue a call for volunteers for co-editor on WebIDL. Anyone can edit the spec. It's just a github repo. Send a PR. There is no need for a special call from the Chairs as an excuse to do work. I realize you think that zero process is the best process, but I don't agree. Becoming a co-editor should be a public commitment and not simply random PRs. The chairs are well advised to call for volunteers if that is needed. -- Marcos Caceres
Re: WebIDL Spec Status
On Tue, Jun 24, 2014 at 3:28 PM, Cameron McCormack c...@mcc.id.au wrote: On 24/06/14 20:50, Arthur Barstow wrote: On 6/23/14 4:04 PM, Glenn Adams wrote: What is the plan, i.e., schedule timeline, for moving WebIDL to REC? We have now a two year old CR that appears to be stuck and a 2nd Edition that I'm not sure has made it to FPWD. Hi Glenn, All, I don't have any new info re v1 beyond what Boris said a few weeks ago in this thread: http://lists.w3.org/Archives/Public/public-script-coord/2014AprJun/0162.html. Cameron, Boris - please reply to Glenn's question. I've put Web IDL work on my list of Q3 goals, so I will resume work on it next month. I still think that before publishing another draft on TR/ that we should resolve the open issues that apply to v1. Boris and I will be dividing up the open issues to split the work. Sounds good.
WebIDL Spec Status
What is the plan, i.e., schedule timeline, for moving WebIDL to REC? We have now a two year old CR that appears to be stuck and a 2nd Edition that I'm not sure has made it to FPWD. Given the high degree of dependency from other specs and implementations on this work, we really need to find a way to wrap up this work or at least publish something reasonably stable. Regards, Glenn
Re: WebIDL Spec Status
On Mon, Jun 23, 2014 at 3:05 PM, Marcos mar...@marcosc.com wrote: On June 23, 2014 at 4:07:09 PM, Glenn Adams (gl...@skynav.com) wrote: What is the plan, i.e., schedule timeline, for moving WebIDL to REC? We have now a two year old CR that appears to be stuck and a 2nd Edition that I'm not sure has made it to FPWD. Given the high degree of dependency from other specs and implementations on this work, we really need to find a way to wrap up this work or at least publish something reasonably stable. IMO, we should just concede that this document needs to be a Living Standard (tm). I don't mind there being a living standard form of the document. But that is not sufficient. There must be some final REC version of some edition/snapshot of this work that provides a non-movable mark for real-world compliance testing and device certification. Trying to move this through the W3C process is clearly not working. There is no reason it can't or shouldn't. Even if we were able to take the V1 bits to Rec (a lot of which is now obsolete), the V2 stuff is already widely supported and heavily relied on by browser vendors. IMO, it's a waste of everyone's time to try to maintain multiple versions. I agree that the V1 CR should be abandoned or replaced with a completed snapshot of V2. Though it would be useful to ask a wider community about the value of moving some flavor of V1 to REC. -- Marcos Caceres
Re: [webappsec + webapps] CORS to PR plans
On Wed, Aug 7, 2013 at 8:54 AM, Glenn Adams gl...@skynav.com wrote: On Mon, Aug 5, 2013 at 5:48 PM, Brad Hill hillb...@gmail.com wrote: I'd like to issue this as a formal Call for Consensus at this point. If you have any objections to CORS advancing to Proposed Recommendation, please reply to public-webapp...@w3.org. Affirmative responses are also encouraged, and silence will be taken as assent. The proposed draft is available at: http://webappsec-test.info/~bhill2/pub/CORS/index.html This CfC will end and be ratified by the WebAppSec WG on Tuesday, August 13, 2013. Thank you, Brad Hill On Tue, Jul 16, 2013 at 12:47 PM, Brad Hill hillb...@gmail.com wrote: WebAppSec and WebApps WGs, CORS advanced to Candidate Recommendation this January, and I believe it is time we consider advancing it to Proposed Recommendation. In the absence of an editor, I have been collecting bug reports sent to the public-webappsec list, and now have a proposed draft incorporating these fixes I would like to run by both WGs. The proposed draft can be found at: http://webappsec-test.info/~bhill2/pub/CORS/index.html A diff-marked version is available at: http://services.w3.org/htmldiff?doc1=http%3A%2F%2Fwww.w3.org%2FTR%2F2013%2FCR-cors-20130129%2F&doc2=http%3A%2F%2Fwebappsec-test.info%2F~bhill2%2Fpub%2FCORS%2Findex.html (pardon some spurious diffs indicated in pre-formatted text that has not actually changed) A list of changes is as follows: 1. Changed Fetch references. The CR document referenced the WHATWG Fetch spec in a number of places. This was problematic due to the maturity / stability requirements of the W3C for document advancement, and I feel also inappropriate, as the current Fetch spec positions itself as a successor to CORS, not a reference in terms of which CORS is defined. The proposal is to substitute these references for the Fetching Resources section of the HTML5 spec at: http://www.w3.org/TR/html5/infrastructure.html#fetching-resources I do not believe this produces substantive changes in the reading of CORS 2. In the Terminology section, added a comma after "Concept" in response to: http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0055.html 3. Per discussion to clarify the interaction of HTTP Authorization headers with the user credentials flag, https://www.w3.org/Bugs/Public/show_bug.cgi?id=21013 and http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0366.html, I have inserted the following clarification: user credentials for the purposes of this specification means cookies, HTTP authentication, and client-side SSL certificates <!-- begin change --> that would be sent based on the user agent's previous interactions with the origin. <!-- end change --> 4. In the definition of the Access-Control-Allow-Methods header, in response to http://lists.w3.org/Archives/Public/public-webappsec/2013Apr/0046.html, clarified that the Allow header is not relevant for the purposes of the CORS protocol. 5. http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0055.html and http://lists.w3.org/Archives/Public/public-webappsec/2013Mar/0096.html point out that "header" and "method" are not defined correctly in the response headers for preflight requests. 
It appears that the intent was to respond with the list provided as part of the preflight request, rather than the potentially unbounded list the resource may actually support. The following clarifications were made: (for methods) "Since the list of methods can be unbounded, simply returning the method indicated by Access-Control-Request-Method (if supported) can be enough." (for headers) "Since the list of headers can be unbounded, simply returning the headers from Access-Control-Allow-Headers (if supported) can be enough." I would suggest using "is sufficient" or "is adequate" rather than "can be enough". "Can be" implies that it may be or may not be. Need to be more definite. 6. In response to: http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0055.html, removed spurious 'than' 7. In response to: http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0055.html, added comma after "Concept" 8. In response to thread beginning at: http://lists.w3.org/Archives/Public/public-webappsec/2013Feb/0078.html, added 204 as a valid code equivalent to 200 for the CORS algorithm. Would this be considered a substantive technical change? Or a correction to an editorial oversight? I see below that 204 (and 308) tests need to be added, which makes it sound a little like a technical change. If these changes are acceptable to the WGs, I believe the only remaining steps are to prepare an implementation report and update the test suite to cover the 204 and 308 status codes. I'll let us discuss these for a bit here before beginning a formal call for consensus. What is the status on resolving the open bugs at [1
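To make items 5 and 8 above concrete, here is a minimal sketch of a preflight responder using Node's http module; the path, origin, and port are illustrative only, and the echo-back behaviour reflects the clarifications quoted above rather than normative CORS text:

    var http = require("http");

    http.createServer(function (req, res) {
      if (req.method === "OPTIONS") {
        // Preflight: echo back what the client asked for, rather than
        // enumerating every method/header the resource might support.
        res.writeHead(204, { // 204 treated as equivalent to 200 (item 8)
          "Access-Control-Allow-Origin": "https://example.org",
          "Access-Control-Allow-Methods":
            req.headers["access-control-request-method"] || "GET",
          "Access-Control-Allow-Headers":
            req.headers["access-control-request-headers"] || ""
        });
        res.end();
        return;
      }
      // Actual request.
      res.writeHead(200, { "Access-Control-Allow-Origin": "https://example.org" });
      res.end("hello");
    }).listen(8080);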
Re: Updated idlharness.js
On Thu, Jan 24, 2013 at 8:19 AM, Robin Berjon ro...@w3.org wrote: On 23/01/2013 19:11, Glenn Adams wrote: were you able to incorporate the improvements I suggested at [1]? [1] https://github.com/darobin/webidl.js/pull/16 Well, it's an entirely different code base, so certainly not as is. But at least some of what you describe in there should be supported. (In webidl2.js that is, not idlharness.js). Sure. If the following are supported, then I'll be able to switch to this new implementation: - ability to provide string-valued extended attributes, e.g., Documentation - ability to provide extended attributes that apply to other extended attributes, e.g.,

    [Documentation="Foo ...",
     [Documentation="Constructor of Foo ..."] Constructor(DOMString foo)]
    interface Foo { }
Re: Updated idlharness.js
were you able to incorporate the improvements I suggested at [1]? [1] https://github.com/darobin/webidl.js/pull/16 On Wed, Jan 23, 2013 at 9:31 AM, Robin Berjon ro...@w3.org wrote: Hi all, as you know, one of the tools that we have for testing is idlharness. What it does is basically that it processes some WebIDL, is given some objects that correspond to it, and it tests them for a bunch of pesky aspects that one should not have to test by hand. One of the issues with idlharness is that it has long been based on webidl.js which was a quick and dirty WebIDL parser that I'd written because I needed it for a project that petered out. This meant that it increasingly didn't support newer constructs in WebIDL that are now in common use. In order to remedy this, I have now made an updated version of idlharness that uses webidl2.js, a much better parser that is believed to be rather complete and correct (at least, it tests well against the WebIDL tests that we have). The newer webidl2.js does bring as much backwards compatibility with webidl.js as possible, but in a number of cases that simply wasn't possible (because WebIDL has changed too much to fit well into the previous model, and also because mistakes were made with it). You can find the updated version of idlharness in this branch: https://github.com/w3c/testharness.js/tree/webidl2 The reason I'm prodding you is that idlharness, ironically enough, does not have a test suite. Because of that, I can't be entirely comfortable that the updated version works well and doesn't break existing usage. I've tested it with some existing content (e.g. http://berjon.com/tmp/geotest/) but that's no guarantee. So if you've been using idlharness, I'd like to hear about it. If you could give the new version a ride to see if you get the same results it'd be lovely. Once I hear back from enough people that it works (or if no one says anything) I'll merge the changes to the master branch. Thanks! -- Robin Berjon - http://berjon.com/ - @robinberjon
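For anyone wanting to give the branch a spin, typical idlharness usage looks roughly like the following; the Foo interface and its instance are placeholders, and this assumes testharness.js, testharnessreport.js, and idlharness.js are already loaded in the test page:

    // Collect the WebIDL under test.
    var idl_array = new IdlArray();
    idl_array.add_idls("interface Foo { attribute DOMString name; };");

    // Tell idlharness which live objects claim to implement it.
    idl_array.add_objects({ Foo: ["new Foo()"] });

    // Generate and run the per-member tests.
    idl_array.test();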
Re: CfC: publish WD of XHR; deadline November 29
On Sat, Dec 1, 2012 at 1:34 PM, Ms2ger ms2...@gmail.com wrote: On 11/27/2012 02:16 PM, Arthur Barstow wrote: On 11/27/12 12:21 AM, ext Jungkee Song wrote: From: Arthur Barstow [mailto:art.bars...@nokia.com] Sent: Tuesday, November 27, 2012 3:05 AM I think the next step is for the XHR Editors to create a TR version using the WD template so that everyone can see exactly what is being proposed for publication as a TR. Please create that version as soon as you can so that this CfC can be based on that version (rather than the ED) and reply with the URL of the TR version. (Please use 6 December 2012 as the publication date.) We prepared a proposed TR version at: http://dvcs.w3.org/hg/xhr/raw-file/tip/TR/Overview.html Thanks Jungkee. All - http://dvcs.w3.org/hg/xhr/raw-file/tip/TR/Overview.html is the document proposed for publication as a TR and thus is the basis for this CfC. I object to this publication because of this change: http://dvcs.w3.org/hg/xhr/rev/2341e31323a4 pushed with a misleading commit message. since you don't say what is misleading, and since commit messages are irrelevant for W3C process, this objection is immaterial
Re: CfC: publish WD of XHR; deadline November 29
On Sat, Dec 1, 2012 at 7:07 PM, James Robinson jam...@google.com wrote: On Sat, Dec 1, 2012 at 5:54 PM, Glenn Adams gl...@skynav.com wrote: On Sat, Dec 1, 2012 at 6:34 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Sat, Dec 1, 2012 at 4:44 PM, Glenn Adams gl...@skynav.com wrote: On Sat, Dec 1, 2012 at 1:34 PM, Ms2ger ms2...@gmail.com wrote: I object to this publication because of this change: http://dvcs.w3.org/hg/xhr/rev/2341e31323a4 pushed with a misleading commit message. since you don't say what is misleading, and since commit messages are irrelevant for W3C process, this objection is immaterial Ms2ger objected to the change, not the commit message, so your objection to the objection is misplaced. However, the commit message isn't long, so it's not difficult to puzzle out what ey might be referring to. In this case, it's the implication that changing a bunch of normative references from WHATWG specs to W3C copies of the specs is somehow necessary according to pubrules. Then whoever Ms2ger is should say so. In any case, there is no reason to reference a WHATWG document if there is a W3C counterpart. Sure there is if the W3C version is stale, as is the case here. That commit replaced a link to http://xhr.spec.whatwg.org/, last updated roughly a week ago, with a link to http://www.w3.org/TR/XMLHttpRequest/ which is dated January 17th and is missing an entire section (section 6). It also replaced a link to http://fetch.spec.whatwg.org/# with http://www.w3.org/TR/cors/# which is similarly out of date by the better part of a year and lacking handling for some HTTP status codes. Every single reference updated in this commit changed the document to point to an out-of-date and less valuable resource. It seems that you, like the author of the commit message, mistakenly think it's a goal to replace all links to point to W3C resources even when they are strictly worse. That's not in the W3C pub rules or a good idea. I didn't suggest this was demanded by pubrules, and indeed, I pointed out in a prior message that the pub rules do not dictate what documents are referenced. My position w.r.t. WHATWG documents is that they should never be referenced by a W3C document unless there is no other option. Why do I say this? Because WHATWG documents are never final, at least according to their principals. The W3C should not reference a document that is by definition never going to reach a final state, at least that is my opinion. Further, the W3C should not reference a document for which the IPR status is not sufficiently well defined, again, this is my opinion. You or others may disagree. In the cases in point, someone needs to determine if the referenced documents will continue to move forward in the W3C, and if so, then they need to be updated according to the W3C Process rules. [1] http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/0501.html
Re: CfC: publish WD of XHR; deadline November 29
I need to clarify one point: I don't mind W3C docs making informative references to WHATWG docs. For example, I wouldn't mind a W3C doc making a normative reference to a snapshot of a WHATWG doc that has been republished in the W3C while making an informative reference to its living counterpart in the WHATWG.
Re: CfC: publish WD of XHR; deadline November 29
On Thu, Nov 22, 2012 at 6:27 AM, Anne van Kesteren ann...@annevk.nl wrote: If you have any comments or concerns about this proposal, please reply to this e-mail by December 29 at the latest. Putting my name as "former editor" while all the text is either written by me or copied from me seems disingenuous. Note that the label "editor" does not imply authorship; authors of W3C specs do not necessarily correspond to editors; in other cases in the W3C where editors change over the document's lifetime, all of the editors are often listed without marking which are current and which are not current; perhaps that would serve here, i.e., just include Anne in the list of editors.
Re: CfC: publish WD of XHR; deadline November 29
On Fri, Nov 23, 2012 at 12:09 AM, Adam Barth w...@adambarth.com wrote: On Thu, Nov 22, 2012 at 9:16 AM, Ms2ger ms2...@gmail.com wrote: On 11/22/2012 02:01 PM, Arthur Barstow wrote: The XHR Editors would like to publish a new WD of XHR and this is a Call for Consensus to do so using the following ED (not yet using the WD template) as the basis: http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html. Agreement to this proposal: a) indicates support for publishing a new WD; and b) does not necessarily indicate support of the contents of the WD. If you have any comments or concerns about this proposal, please reply to this e-mail by December 29 at the latest. Positive response to this CfC is preferred and encouraged, and silence will be assumed to mean agreement with the proposal. I object unless the draft contains a clear pointer to the canonical spec on whatwg.org. I agree. The W3C should not be in the business of plagiarizing the work of others. Are you claiming that the W3C is in the business of plagiarizing? plagiarism. n. The practice of taking someone else's work or ideas and passing them off as one's own. The Status of this Document section should state clearly that this document is not an original work of authorship of the W3C. The SotD section need only refer to the working group that produced the document. Authorship is not noted or tracked in W3C documents. If Anne's work was submitted to and prepared in the context of the WebApps WG, then it is a product of the WG, and there is no obligation to refer to other, prior or variant versions. Referring to an earlier, draft version published outside of the W3C process does not serve any purpose nor is it required by the W3C Process. If there is a question on the status of the Copyright declaration of the material or its origin, then that should be taken up by the W3C Pubs team. G.
Re: CfC: publish WD of XHR; deadline November 29
On Fri, Nov 23, 2012 at 9:36 AM, Adam Barth w...@adambarth.com wrote: My concern is not about copyright. My concern is about passing off Anne's work as our own. As I have pointed out above, W3C specs do not track authorship or individual contributions to the WG process. If Anne performed his work as author in the context of participating in the W3C process, then there is no obligation to acknowledge that, though there is a long standing practice of including an Acknowledgments section or paragraph that enumerates contributors. I would think that listing Anne as Editor or Former Editor and listing Anne in an Acknowledgments paragraph should be entirely consistent with all existing W3C practice. Are you asking for more than this? And if so, then what is the basis for that?
Re: CfC: publish WD of XHR; deadline November 29
On Fri, Nov 23, 2012 at 10:23 AM, Anne van Kesteren ann...@annevk.nl wrote: On Fri, Nov 23, 2012 at 6:11 PM, Glenn Adams gl...@skynav.com wrote: As I have pointed out above, W3C specs do not track authorship or individual contributions to the WG process. If Anne performed his work as author in the context of participating in the W3C process, ... It seems you are missing the fact that I am neither a Member nor an Invited Expert of this WG since August this year. The W3C does have the legal right to publish my work, since I publish it under CC0, but the way the W3C goes about it is not appreciated. I see nothing inconsistent or disingenuous with regard to W3C process here. There seems to be a suggestion here that the process is broken, and I just don't see that. If you as a contributor wish to have more prominent mention in the W3C version, then it would be appropriate for you to discuss this with the current editors. Since it sounds like this is a cooperative process, I would expect you and the editors to find a satisfactory solution. However, I think this solution need not include making a normative reference to the ongoing WHATWG work in this area. It certainly wouldn't hurt to include an informative reference, with sufficient qualification as to why that reference is used. G.
Re: CfC: publish WD of XHR; deadline November 29
On Fri, Nov 23, 2012 at 10:28 AM, Adam Barth w...@adambarth.com wrote: On Fri, Nov 23, 2012 at 9:11 AM, Glenn Adams gl...@skynav.com wrote: On Fri, Nov 23, 2012 at 9:36 AM, Adam Barth w...@adambarth.com wrote: My concern is not about copyright. My concern is about passing off Anne's work as our own. As I have pointed out above, W3C specs do not track authorship or individual contributions to the WG process. If Anne performed his work as author in the context of participating in the W3C process, This premise is false. We're discussing the work that he is currently performing outside the W3C process. Specifically, the changes noted as Merge Anne's change in the past 11 days: http://dvcs.w3.org/hg/xhr/shortlog How is this different from the process being used in the HTML WG w.r.t. bringing WHATWG ongoing work by Ian back into the W3C draft? It seems like whatever solution is used here to satisfy Anne's concerns should be coordinated with Ian and the HTML5 editor team so that we don't end up with two methods for acknowledgment.
Re: [admin] XHR ED Boilerplate
Is Anne the *sole* author? Did the WG or others not contribute any text or suggested text to the spec? It seems like a bit of a slippery slope to attempt to designate a sole author for any W3C product. You might want to check with the pubs team on this matter. On Fri, Nov 23, 2012 at 11:44 AM, Arthur Barstow art.bars...@nokia.com wrote: [ Sorry for the delayed response, I was choking on some turkey ... ] Here's what I did for the URL spec re the boilerplate to help address the attribution issue re Anne and WHATWG: [[ http://dvcs.w3.org/hg/url/raw-file/tip/Overview.html This Version: http://dvcs.w3.org/hg/url/raw-file/tip/Overview.html Latest WHATWG Version: http://url.spec.whatwg.org/ Previous Versions: http://www.w3.org/TR/2012/ED-url-20120524/ Author: Anne van Kesteren ann...@annevk.nl Editor: Web Applications Working Group public-webapps@w3.org Former editors: Adam Barth w...@adambarth.com Erik Arvidsson a...@chromium.org Michael[tm] Smith m...@w3.org ]] In the case of XHR, the new Editors would be listed as Editors and if they made contributions to the spec, they would also be added to the Author list too. If something like that would not be acceptable for the XHR ED, what specific change(s) do you request? -Thanks, AB
Re: [admin] XHR ED Boilerplate
On Fri, Nov 23, 2012 at 2:22 PM, Ian Hickson i...@hixie.ch wrote: What I don't really understand, though, is why any of this is needed at all. What value is the W3C adding by creating these forks? The problem as I see it is that the WHATWG documents are living documents and never final per se. If the WHATWG documents were published (by WHATWG) as fixed snapshots during their lifecycle, then perhaps it wouldn't be necessary for the W3C to create snapshots. For business and legal purposes, it is often a requirement to have such snapshots that are known to never change.
Re: Call for Consensus: CORS to Candidate Recommendation
Before going to CR, I believe the [HTML] entry in the references section needs to be changed to reference an appropriate W3C specification. At present, it references a non-W3C document. On Fri, Nov 16, 2012 at 6:17 AM, Arthur Barstow art.bars...@nokia.com wrote: On 11/15/12 5:31 PM, ext Hill, Brad wrote: I have placed a draft for review at: http://www.w3.org/2011/webappsec/cors-draft/ And this is a Call for Consensus among the WebAppSec and WebApps WGs to take this particular text (with necessary additions to the Status of this Document section if approved) forward to Candidate Recommendation. I support this CfC although I am wondering about the CR exit criteria. Do you expect to re-use the CSP1.0 criteria: [[ The entrance criteria for this document to enter the Proposed Recommendation stage is to have a minimum of two independent and interoperable user agents that implement all the features of this specification, which will be determined by passing the user agent tests defined in the test suite developed by the Working Group. ]] My preference is what WebApps has used in other CRs because I think it is clearer that a single implementation is not required to pass every test but that at least two implementations must pass every test. F.ex.: http://www.w3.org/TR/2012/CR-websockets-20120920/#crec -Thanks, AB
Re: Call for Consensus: CORS to Candidate Recommendation
Cox will file an FO (as a W3C member) if it is not fixed. On Fri, Nov 16, 2012 at 6:51 AM, Ms2ger ms2...@gmail.com wrote: I object to making such a change. On 11/16/2012 02:32 PM, Glenn Adams wrote: Before going to CR, I believe the [HTML] entry in the references section needs to be changed to reference an appropriate W3C specification. At present, it references a non-W3C document. On Fri, Nov 16, 2012 at 6:17 AM, Arthur Barstow art.bars...@nokia.com wrote: On 11/15/12 5:31 PM, ext Hill, Brad wrote: I have placed a draft for review at: http://www.w3.org/2011/webappsec/cors-draft/ And this is a Call for Consensus among the WebAppSec and WebApps WGs to take this particular text (with necessary additions to the Status of this Document section if approved) forward to Candidate Recommendation. I support this CfC although I am wondering about the CR exit criteria. Do you expect to re-use the CSP1.0 criteria: [[ The entrance criteria for this document to enter the Proposed Recommendation stage is to have a minimum of two independent and interoperable user agents that implement all the features of this specification, which will be determined by passing the user agent tests defined in the test suite developed by the Working Group. ]] My preference is what WebApps has used in other CRs because I think it is clearer that a single implementation is not required to pass every test but that at least two implementations must pass every test. F.ex.: http://www.w3.org/TR/2012/CR-websockets-20120920/#crec -Thanks, AB
Re: [admin] Publication Rules [Was Re: Call for Consensus: CORS to Candidate Recommendation]
Unless I've missed it, I don't believe the #PubRules provides guidelines on what documents are referenced by a spec and whether the reference is normative or non-normative. If I'm wrong, please point out the policy or pubrules text that addresses this issue. Just to be clear, I don't object to including a non-normative reference to the WHATWG variant specification; however, if it is to be a normative reference, I'd like to insist it be the official W3C document that is referenced. On Fri, Nov 16, 2012 at 7:14 AM, Arthur Barstow art.bars...@nokia.com wrote: The W3C's process documents (e.g. #PubRules) define the policies for publications and this issue will be addressed if/when the CR is actually published. WebApps is simply a user of the publication policy. If you want to discuss W3C processes such as PubRules, please use some other list - and not any of WebApps' lists - such as public-w3cprocess #ProcCG. -Thanks, AB #PubRules http://www.w3.org/2005/07/pubrules?uimode=filter #ProcCG http://lists.w3.org/Archives/Public/public-w3process/ On 11/16/12 8:51 AM, ext Ms2ger wrote: I object to making such a change. On 11/16/2012 02:32 PM, Glenn Adams wrote: Before going to CR, I believe the [HTML] entry in the references section needs to be changed to reference an appropriate W3C specification. At present, it references a non-W3C document.
Re: [admin] Call for Editor for DOM4 REC track spec
It is worth noting that this is a critical path blocker for publishing HTML5 as a REC. On Fri, Sep 28, 2012 at 2:59 AM, Arthur Barstow art.bars...@nokia.com wrote: Hi All, The current Editors of the DOM4 spec are not interested in moving that spec toward Recommendation (in the context of WebApps WG). Consequently, we need an Editor(s) to work on the DOM4 Recommendation track document. If you are interested in this Editor position and have relevant experience, please contact me offlist. -Thanks, ArtB [DOM4] http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html
Re: [XHR] Setting the User-Agent header
On Wed, Sep 5, 2012 at 12:03 PM, Mark Nottingham m...@mnot.net wrote: The current draft of XHR2 doesn't allow clients to set the UA header. Presumably, by clients you mean client-side script, and not the [client] implementation of the UA. That's unfortunate, because part of the intent of the UA header is to identify the software making the request, for debugging / tracing purposes. Since client-side script, whether in a library referenced by a page or directly from the page, is not part of the UA, then it should not be able to modify the UA string. Given that lots of libraries generate XHR requests, it would be natural for them to identify themselves in UA, by appending a token to the browser's UA (the header is a list of product tokens). As it is, they have to use a separate header. And, IMO, should stay that way (use a separate header).
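For illustration, a minimal sketch of the separate-header approach described above. The header name and product token here are invented for the example and are not defined by XHR or HTTP; this is just one way a library might identify itself without touching User-Agent.

// A hypothetical library identifies itself in its own request header,
// since XHR does not allow client script to set or append to User-Agent.
function libraryGet(url, onLoad) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  // Not User-Agent: a separate, library-chosen header carries the product token.
  xhr.setRequestHeader("X-Requested-With", "ExampleLib/1.0");
  xhr.onload = function () { onLoad(xhr.response); };
  xhr.send();
}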
Re: Lazy Blob
On Tue, Aug 7, 2012 at 6:53 PM, Jungkee Song jungkee.s...@samsung.com wrote: - URLObject represents a resource that can be fetched, FileReader'd, createObjectURL'd, and cloned, but without any knowledge of the contents (no size attribute, no type attribute) and no slice() as URLObjects may not be seekable. - Blob extends URLObject, adding size, type, slice(), and the notion of representing an immutable piece of data (URLObject might return different data on different reads; Blob can not). +1 from me on this one. +1. I get a sense that this could possibly be a consensus position (or at least I'm going to claim that it is so as to get disagreement to manifest). Assuming it is, the next steps are: . Having agreed on a solution, do we agree on the problem? (i.e. would this get implemented?) . If so, we can bake this as a standalone delta spec but it would make more sense to me to make the changes directly to the relevant specs, namely FileAPI and XHR. I've copied Anne, Arun, and Jonas - any thoughts? In either case, I'm happy to provide the content. Having hammered out a consensus, I would like to contribute to providing the content. I would suggest using a different name than URLObject. I think that name will cause a lot of head scratching.
Re: Lazy Blob
On Tue, Aug 7, 2012 at 7:38 PM, Glenn Maynard gl...@zewt.org wrote: On Tue, Aug 7, 2012 at 8:14 PM, Glenn Adams gl...@skynav.com wrote: I would suggest using a different name than URLObject. I think that name will cause a lot of head scratching. No disagreement there; that was just a placeholder. I'd suggest waiting for further input from Anne, Jonas and Arun (the editors of the specs in question) before spending much time coming up with a name, though. sure... i don't have a suggested replacement, but i know a bad name when i see one; i'll defer to the editors to come up with something reasonable
Re: Lazy Blob
On Mon, Aug 6, 2012 at 6:53 AM, Robin Berjon ro...@berjon.com wrote: So if you do have a use case, by all means please share it. If not, I maintain that you simply have no grounds for objection. I did share a couple of use cases in my response to Ian: On Thu, Aug 2, 2012 at 11:39 AM, Glenn Adams gl...@skynav.com wrote: On Thu, Aug 2, 2012 at 11:26 AM, Ian Hickson i...@hixie.ch wrote: On Thu, 2 Aug 2012, Glenn Adams wrote: Are you asking for use cases for a remote/lazy blob in general? i.e., as would apply to the proposed XHR usage and any other underlying supported data source? or are you asking about high level use cases that are particular to a WS binding but not an XHR binding? Both would be useful, but my primary concern is Web Sockets, since I edit that spec. Before I can consider proposals that affect Web Sockets, I need to know what use case it is we're trying to address. I will let Robin and Jungkee reply to the more general use case requirements. As far as WS is concerned, I don't see any impact of this thread on the WS API or WSP specs, it's really simply an application of WS/WSP to remote/lazy blobs. Clearly, there are many high level use cases that involve a repetitive send/response message paradigm, which can certainly be implemented with XHR, but some application authors would prefer using WS for various efficiency reasons. My suggestion is essentially: if we are going to define a remote blob bound to an XHR source for a one-shot send-response, then perhaps we should define a remote blob bound to a WS source for multiple send-response pairs. For example, a symmetric presence protocol or IM protocol would typically fall into this usage category. Using remote blobs for either the send or response data (or both) would be useful for certain architectures and provide more deployment flexibility and perhaps greater efficiencies.
Re: Lazy Blob
On Mon, Aug 6, 2012 at 11:27 AM, Ian Hickson i...@hixie.ch wrote: On Mon, 6 Aug 2012, Glenn Adams wrote: I did share a couple of use cases in my response to Ian: I will let Robin and Jungkee reply to the more general use case requirements. As far as WS is concerned, I don't see any impact of this thread on the WS API or WSP specs, its really simply an application of WS/WSP to remote/lazy blobs. Clearly, there are many high level use cases that involve a repetitive send/response message paradigm, which can certainly be implemented with XHR, but some application authors would prefer using WS for various efficiency reasons. My suggestion is essentially: if we are going to define a remote blob bound to an XHR source for a one-shot send-response, then perhaps we should define a remote blob bound to a WS source for multiple send-response pairs. For example, a symmetric presence protocol or IM protocol would typically fall into this usage category. Using remote blobs for either the send or response data (or both) would be useful for certain architectures and provide more deployment flexibility and perhaps greater efficiencies. Those are still not use cases, for the record. I tried explaining what a use case was here: http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0302.html http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0288.html I'll leave the translation of IM protocol to user facing use case as homework for the reader. It is trivial. My intent is to show a use case where one would use a persistent connection and a series of send/response messages that easily maps to WS. Instant Messaging is such a use case.
Re: Lazy Blob
On Mon, Aug 6, 2012 at 1:31 PM, Florian Bösch pya...@gmail.com wrote: On Mon, Aug 6, 2012 at 8:39 PM, Glenn Adams gl...@skynav.com wrote: I'll leave the translation of IM protocol to user facing use case as homework for the reader. It is trivial. My intent is to show a use case where one would use a persistent connection and a series of send/response messages that easily maps to WS. Instant Messaging is such a use case. What is it exactly that requires you to use a remote blob with type blob in the browser over a WS you cannot achieve with a WS and array buffers? The same reason that a remote blob would be useful with XHR.
Re: Lazy Blob
On Mon, Aug 6, 2012 at 2:06 PM, Florian Bösch pya...@gmail.com wrote: On Mon, Aug 6, 2012 at 9:33 PM, Glenn Adams gl...@skynav.com wrote: The same reason that a remote blob would be useful with XHR. Since you're steadfastly refusing to detail your use case, that'll just mean none to me. I feel I don't have any obligation to justify the use of WS any more than that necessary for XHR. It is simply short-sighted to define a remote blob only for XHR. If you can't see that, then let's not waste our time continuing this thread.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 10:44 PM, Ian Hickson i...@hixie.ch wrote: On Wed, 1 Aug 2012, Glenn Adams wrote: Of course, implementers are free to ignore whatever they want, but last time I checked, the W3C was a consensus based standards organization which means agreement needs to be reached on what specs go out the door and what are in those specs. Doesn't really matter what's in the specs going out the door if it's not what's implemented... I don't really care about the XHR side of this (happy to let Anne figure that out), but since WebSockets was mentioned: what's the use case that involves Web Socket? I don't really understand what problem we're trying to solve here. based on the pattern proposed by

partial interface BlobBuilder { Blob getBlobFromURL (XMLHttpRequest xhr); };

i would like to support two use cases: (1) simple - single blob send, single blob response (2) multiple - multiple instances of simple, i.e., send/response pairs these could be handled with the following:

partial interface BlobBuilder {
  // simple
  Blob getBlobFromSource (WebSocket ws, Blob send);
  // multiple
  Blob getBlobFromSource (WebSocket ws, EventHandler sendHandler);
};

in the simple case, the creator of the lazy blob provides the initial send blob, which is sent by the underlying lazy blob implementation upon a read on the lazy blob, then the next (and only) response blob is returned as a result from the read; in the multiple case, the creator of the lazy blob provides an event handler to invoke to send a blob corresponding to a read on the lazy blob, thus providing for multiple send/receive blob message pairs, with one lazy blob for each pair; of course, the simple case could be folded into the multiple case, leaving only one method to define:

partial interface BlobBuilder { Blob getBlobFromSource (WebSocket ws, EventHandler sendHandler); };

a use of this might be as follows:

var bb = new BlobBuilder();
var ws = new WebSocket("ws://example.com:/test");
var lb = [];

ws.onopen = function() {
  lb.push( bb.getBlobFromSource(ws, function() { ws.send(getSendMessageAsBlob(1)); }) );
  lb.push( bb.getBlobFromSource(ws, function() { ws.send(getSendMessageAsBlob(2)); }) );
  lb.push( bb.getBlobFromSource(ws, function() { ws.send(getSendMessageAsBlob(3)); }) );
  setTimeout(sendAndReceive);
}

function getSendMessageAsBlob(msgNum) {
  return new Blob( [ String(msgNum) ] );
}

function sendAndReceive() {
  var numMsgs = 0;
  var numBytes = 0;
  // trigger read on queued lazy blobs
  while ( lb.length > 0 ) {
    var b = lb.shift();
    // read on size triggers stored send 'promise'
    numBytes += b.size;
    numMsgs += 1;
  }
  alert('Received ' + numMsgs + ' messages, containing ' + numBytes + ' bytes.');
  ws.close();
}

of course, this example makes use of a particular message paradigm (send/recv pairs); while this may capture only a subset of interchange patterns, one could easily generalize the above to provide more flexibility;
Re: Lazy Blob
On Thu, Aug 2, 2012 at 1:04 AM, Ian Hickson i...@hixie.ch wrote: On Thu, 2 Aug 2012, Glenn Adams wrote: I don't really care about the XHR side of this (happy to let Anne figure that out), but since WebSockets was mentioned: what's the use case that involves Web Socket? I don't really understand what problem we're trying to solve here. i would like to support two use cases: (1) simple - single blob send, single blob response (2) multiple - multiple instances of simple, i.e., send/response pairs Sorry, I was vague. What I mean is what user-facing problem is it that we're trying to solve? see DAR's initial message in this thread; bringing WS into scope doesn't change the problem statement, it merely enlarges the solution space, or keeps it from being unnecessarily narrow
Re: Lazy Blob
On Thu, Aug 2, 2012 at 2:36 AM, Robin Berjon ro...@berjon.com wrote: On Aug 1, 2012, at 22:13 , Glenn Adams wrote: The subject line says Lazy Blob, not Lazy Blob and XHR. For the record, I will object to a LazyBlob solution that is tied solely to XHR, so deal with it now rather than later. Objections need to be built on something — just objecting for the fun of it does not carry some weight. Up to this point, you have provided no real world use case that requires the feature you propose and your sole justification for the whole subthread is that you don't like the idea. Are you saying I am objecting for the fun of it? Where did I say I don't like the idea? You'd best reread my messages. As far as I'm concerned, barring the introduction of better arguments the objection is dealt with hic et nunc. No it hasn't. If you want a real world use case it is this: my architectural constraints as an author for some particular usage requires that I use WS rather than XHR. I wish to have support for the construct being discussed with WS. How is that not a real world requirement?
Re: Lazy Blob
On Thu, Aug 2, 2012 at 9:51 AM, Florian Bösch pya...@gmail.com wrote: On Thu, Aug 2, 2012 at 5:45 PM, Glenn Adams gl...@skynav.com wrote: No it hasn't. If you want a real world use case it is this: my architectural constraints as an author for some particular usage requires that I use WS rather than XHR. I wish to have support for the construct being discussed with WS. How is that not a real world requirement? Your particular use-case of content/range acquisition over WS requires a particular implementation on the server in order to understand the WS application layer protocol. This particular implementation on the server of yours is not implemented by any other common hosting infrastructure based on any kind of existing standard. You should specify this particular protocol standard to be used on top of WS first before you can even discuss how your custom implementation of this protocol justifies enshrining it in a browser standard. All WS usage requires a particular (application specific) implementation on the server, does it not? Notwithstanding that fact, such usage will fall into certain messaging patterns. I happened to give an example of two possible message patterns and showed how the proposal under discussion could address those patterns. It is not necessary to marry my proposal to a specific sub-protocol on WS in order to provide useful functionality that can be exploited by applications that use those functions.
Re: Lazy Blob
On Thu, Aug 2, 2012 at 10:04 AM, Florian Bösch pya...@gmail.com wrote: On Thu, Aug 2, 2012 at 5:58 PM, Glenn Adams gl...@skynav.com wrote: All WS usage requires a particular (application specific) implementation on the server, does it not? Notwithstanding that fact, such usage will fall into certain messaging patterns. I happened to give an example of two possible message patterns and showed how the proposal under discussion could address those patterns. It is not necessary to marry my proposal to a specific sub-protocol on WS in order to provide useful functionality that can be exploited by applications that use those functions. If you wish to introduce a particular browser supported semantic for which a specific implementation on the server is required, then people should be able to consult a standard that tells them how they have to provide this implementation. Therefore it is quite necessary to marry your desire to extend remote blobs to WS to a protocol, otherwise you'll have a browser implemented protocol that nobody knows how to implement. I am not proposing a particular browser supported semantic for a specific implementation on the server. I have suggested, by way of example, two particular patterns be supported independently of any such implementation. I did not restrict the results to just those patterns in case someone wishes to generalize. That is little different from the proposed or implied XHR patterns being discussed.
Re: Lazy Blob
On Thu, Aug 2, 2012 at 11:01 AM, Ian Hickson i...@hixie.ch wrote: On Thu, 2 Aug 2012, Glenn Adams wrote: Sorry, I was vague. What I mean is what user-facing problem is it that we're trying to solve? see DAR's initial message in this thread; bringing WS into scope doesn't change the problem statement, it merely enlarges the solution space, or keeps it from being unnecessarily narrow Do you have a link to a specific message? I went through the archives and couldn't find any e-mails in this thread that came close to describing a use case for anything, let alone anything that would be related to persistent bi-directional full-duplex communication with a remote server. I was referring to [1]. [1] http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0264.html While that message does not specifically refer to a full-duplex comm path, it states the general problem in terms of It is increasingly common that data may flow from a server to an in-browser page, that may then pass that data on to another in-browser page (typically running at a different origin). In many cases, such data will be captured as Blobs. It goes on to describe a solution space oriented towards XHR as the comm path. It occurred to me that the same problem could apply to WS comm path patterns, which is why I suggested enlarging the solution space to include WS.
Re: Lazy Blob
On Thu, Aug 2, 2012 at 11:09 AM, Florian Bösch pya...@gmail.com wrote: On Thu, Aug 2, 2012 at 6:37 PM, Glenn Adams gl...@skynav.com wrote: I am not proposing a particular browser supported semantic for a specific implementation on the server. I have suggested, by way of example, two particular patterns be supported independently of any such implementation. I did not restrict the results to just those patterns in case someone wishes to generalize. That is little different from the proposed or implied XHR patterns being discussed. So I'll take a stab, the remote blob resource/range protocol over WS 1.0:
1) A websocket to a URL is opened by the browser, the path and query of the URL is interpreted to specify a resource.
2) During the lifetime of a websocket session onto a wsblob resource, the resource is guaranteed to be reflected unchanged to the session, it cannot be changed, appended or removed.
3) The client has to send these bytes <handshake> as a first message.
4) The server has to respond with a <handshake><length> message to indicate that he understands this protocol and indicate the byte length of the resource.
5) After successful setup the client may request ranges from the server by sending this message: <range><start><end>, where start and end have to be in range of the byte resource.
6) The server will respond to each range request in the form of <range><start><end><bytes> in case that a range request is valid, and the length of <bytes> has to be <start> - <end>. In case a range is not valid the server will respond with <invalid><start><end>.
These are the protocol field definitions:
handshake := wsblob
length := unsigned int 4 bytes
start := unsigned int 4 bytes
end := unsigned int 4 bytes
bytes := string of bytes
range := 0x01
invalid := 0x02
ok, that is fine, but I never suggested limiting the semantics of interchange to a resource/range protocol; as is clear, the above application specific protocol does in fact use the multiple pattern I described, i.e., each interchange consists of a pair of send-response messages, each of which can be encapsulated in a blob, and each response blob could be implemented as a remotable 'promise' encapsulating a send blob and its resultant response blob;
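For concreteness, a rough client-side sketch of the range protocol outlined above. It assumes big-endian 4-byte integers, binary frames, and the 0x01/0x02 field codes as listed; none of this is specified anywhere and all names are illustrative only.

// Hypothetical "wsblob" client: handshake, learn the resource length, then
// request byte ranges over the open WebSocket.
function openWsBlob(url, onReady) {
  var ws = new WebSocket(url);
  ws.binaryType = "arraybuffer";
  var resourceLength = null;
  ws.onopen = function () {
    // 3) send the handshake bytes first ("wsblob")
    ws.send(new Uint8Array([0x77, 0x73, 0x62, 0x6c, 0x6f, 0x62]));
  };
  ws.onmessage = function (e) {
    var view = new DataView(e.data);
    if (resourceLength === null) {
      // 4) handshake response: "wsblob" (6 bytes) followed by a 4-byte length
      resourceLength = view.getUint32(6);
      onReady({
        size: resourceLength,
        // 5) request a byte range: 0x01 + start + end
        requestRange: function (start, end) {
          var msg = new DataView(new ArrayBuffer(9));
          msg.setUint8(0, 0x01);
          msg.setUint32(1, start);
          msg.setUint32(5, end);
          ws.send(msg.buffer);
        }
      });
    } else if (view.getUint8(0) === 0x01) {
      // 6) range response: 0x01 + start + end + bytes
      console.log("received " + (e.data.byteLength - 9) + " bytes of the range");
    } else {
      // 0x02: the server rejected the range request
      console.log("invalid range request");
    }
  };
}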
Re: Lazy Blob
On Thu, Aug 2, 2012 at 11:19 AM, Ian Hickson i...@hixie.ch wrote: On Thu, 2 Aug 2012, Glenn Adams wrote: I was referring to http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0264.html While that message does not specifically refer to a full-duplex comm path, it states the general problem in terms of It is increasingly common that data may flow from a server to an in-browser page, that may then pass that data on to another in-browser page (typically running at a different origin). In a many cases, such data will be captured as Blobs. The above isn't a use case, it's a description of an architectural design, the first step towards the description of a solution. What I'm trying to understand is the underlying _problem_ that the technology is trying to solve. Something like I want to be able to sell plane tickets for people to go on holiday, say. Or I want to provide a service to users that lets them merge data from a medical drugs database and a patient database, without giving me their credentials to those databases. Or some such. I don't know exactly what the use case here would be, hence my questions. Are you asking for use cases for a remote/lazy blob in general? i.e., as would apply to the proposed XHR usage and any other underlying supported data source? or are you asking about high level use cases that are particular to a WS binding but not an XHR binding?
Re: Lazy Blob
On Thu, Aug 2, 2012 at 11:26 AM, Ian Hickson i...@hixie.ch wrote: On Thu, 2 Aug 2012, Glenn Adams wrote: Are you asking for use cases for a remote/lazy blob in general? i.e., as would apply to the proposed XHR usage and any other underlying supported data source? or are you asking about high level use cases that are particular to a WS binding but not an XHR binding? Both would be useful, but my primary concern is Web Sockets, since I edit that spec. Before I can consider proposals that affect Web Sockets, I need to know what use case it is we're trying to address. I will let Robin and Jungkee reply to the more general use case requirements. As far as WS is concerned, I don't see any impact of this thread on the WS API or WSP specs, it's really simply an application of WS/WSP to remote/lazy blobs. Clearly, there are many high level use cases that involve a repetitive send/response message paradigm, which can certainly be implemented with XHR, but some application authors would prefer using WS for various efficiency reasons. My suggestion is essentially: if we are going to define a remote blob bound to an XHR source for a one-shot send-response, then perhaps we should define a remote blob bound to a WS source for multiple send-response pairs. For example, a symmetric presence protocol or IM protocol would typically fall into this usage category. Using remote blobs for either the send or response data (or both) would be useful for certain architectures and provide more deployment flexibility and perhaps greater efficiencies.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 9:44 AM, Glenn Maynard gl...@zewt.org wrote: On Wed, Aug 1, 2012 at 9:59 AM, Robin Berjon ro...@berjon.com wrote:

var bb = new BlobBuilder()
  , blob = bb.getBlobFromURL("http://specifiction.com/kitten.png", "GET", { Authorization: "Basic DEADBEEF" });

Everything is the same as the previous version but the method and some headers can be set by enumerating the Object. I *think* that those are all that would ever be needed. We already have an API to allow scripts to make network requests: XHR. Please don't create a new API that will end up duplicating all of that. However this might be done, it should hang off of XHR. Why restrict to XHR? How about WebSocket as data source?
Re: Lazy Blob
On Wed, Aug 1, 2012 at 10:46 AM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 6:40 PM, Glenn Adams gl...@skynav.com wrote: Why restrict to XHR? How about WebSocket as data source? Websockets support array buffers and therefore by extension any blob/file object. However as a stream oriented API websockets have no content acquisition, negotiation, range and transfer semantics unless you prop those up by yourself as an application layer protocol. I'm questioning defining a LazyBlob that is solely usable with XHR. It would be better to have a more generic version IMO.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 11:13 AM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 6:51 PM, Glenn Adams gl...@skynav.com wrote: I'm questioning defining a LazyBlob that is solely usable with XHR. It would be better to have a more generic version IMO. Websockets have no content semantics, therefore any lazy content negotiating reader cannot deal with websockets unless an additional application layer protocol and implementation on the server side is introduced, something that does not exist for websockets otherwise. You could for instance implement HTTP over websockets to get the content semantics, and if your server gets a websocket request, it could be proxied to a domain socket which happened to have a webserver listening which would understand the HTTP request and deliver the resource/range. Now instead of implementing HTTP over websockets over HTTP over sockets, you could just use XHRs which implement HTTP over sockets. Which is why generalising lazy readers to websockets does not make sense. Given the Simple approach suggested by DAR:

partial interface BlobBuilder { Blob getBlobFromURL (DOMString url); };

Usage:

var bb = new BlobBuilder()
  , blob = bb.getBlobFromURL("http://specifiction.com/kitten.png");

I don't see why the following isn't feasible:

blob = bb.getBlobFromURL("ws://specifiction.com/image/kitten.png")

Or, given the Using XHR for Options approach:

partial interface BlobBuilder { Blob getBlobFromURL (XMLHttpRequest xhr); };

Usage:

var bb = new BlobBuilder()
  , xhr = new XMLHttpRequest();
xhr.open("GET", "/kitten.png", true);
xhr.setRequestHeader("Authorization", "Basic DEADBEEF");
var blob = bb.getBlobFromURL(xhr);

why one couldn't have:

partial interface BlobBuilder { Blob getBlobFromURL (WebSocket ws); };

var bb = new BlobBuilder()
  , ws = new WebSocket("ws://specifiction.com/image");
ws.onopen = function(){ ws.send("kitten.png"); }
var blob = bb.getBlobFromURL(ws);
Re: Lazy Blob
On Wed, Aug 1, 2012 at 12:03 PM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 7:57 PM, Glenn Adams gl...@skynav.com wrote: blob = bb.getBlobFromURL("ws://specifiction.com/image/kitten.png") There is no application layer transfer protocol inherent in websockets. Requesting a resource does not have any inherent meaning other than that you are opening a channel onto /image/kitten.png. Whoever receives that request is free to respond to that however he likes. You would need to introduce an application layer content protocol on top of websockets, and introduce a default websocket server framework capable of understanding such content requests. You're not just extending lazy reading to websockets. You're putting the burden on yourself to also specify a completely new standard application layer protocol for transfer and range and acquisition of resources over websocket channels. So? Why should lazy blob be specific to HTTP specific semantics when an arbitrary URL is not specific to HTTP?
Re: Lazy Blob
On Wed, Aug 1, 2012 at 1:36 PM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 9:26 PM, Glenn Adams gl...@skynav.com wrote: So? Why should lazy blob be specific to HTTP specific semantics when an arbitrary URL is not specific to HTTP? So if you want to have a lazy reader on Websockets you have either: 1) respecify the websocket protocol to include content semantics for accessing resources defined by an URL and having a specified size OR 2) define an additional protocol on top of websockets, which websockets know nothing about, that allows a custom implementation at the server side to respond in a meaningful fashion to resource range requests. OR define a mechanism for LazyBlob that permits the injection of app specific code into the underlying LazyBlob reader loop.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 1:47 PM, Glenn Adams gl...@skynav.com wrote: On Wed, Aug 1, 2012 at 1:36 PM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 9:26 PM, Glenn Adams gl...@skynav.com wrote: So? Why should lazy blob be specific to HTTP specific semantics when an arbitrary URL is not specific to HTTP? So if you want to have a lazy reader on Websockets you have either: 1) respecify the websocket protocol to include content semantics for accessing resources defined by an URL and having a specified size OR 2) define an additional protocol on top of websockets, which websockets know nothing about, that allows a custom implementation at the server side to respond in a meaningful fashion to resource range requests. OR define a mechanism for LazyBlob that permits the injection of app specific code into the underlying LazyBlob reader loop. Further, a default behavior in the absence of such an injection might be defined simply to read data from the WS and stuff into the blob.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 1:54 PM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 9:50 PM, Glenn Adams gl...@skynav.com wrote: Further, a default behavior in the absence of such an injection might be defined simply to read data from the WS and stuff into the blob. Which kind of defeats the purpose because you wanted to read ranges, so not a whole resource has to be transferred, and you can already read binary data from websockets if you wish to do that, without having to invent another blob. A default behavior does not have to handle all uses cases. App specific code injection could handle this if the author wished it, provided a mechanism supported it.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 2:04 PM, Glenn Maynard gl...@zewt.org wrote: Can we please stop saying lazy blob? It's a confused and confusing phrase. Blobs are lazy by design. On Wed, Aug 1, 2012 at 2:26 PM, Glenn Adams gl...@skynav.com wrote: So? Why should lazy blob be specific to HTTP specific semantics when an arbitrary URL is not specific to HTTP? XHR is no more specific to HTTP than it is to XML. It serves as the primary JavaScript API for performing generic network fetches. WebSockets has an entirely different API from blobs, and bringing them up is only derailing the thread. The subject line says Lazy Blob, not Lazy Blob and XHR. For the record, I will object to a LazyBlob solution that is tied solely to XHR, so deal with it now rather than later.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 2:35 PM, Florian Bösch pya...@gmail.com wrote: On Wed, Aug 1, 2012 at 10:13 PM, Glenn Adams gl...@skynav.com wrote: The subject line says Lazy Blob, not Lazy Blob and XHR. For the record, I will object to a LazyBlob solution that is tied solely to XHR, so deal with it now rather than later. Then you better get onto specifying a resource/range transfer protocol on top of websockets alongside with web-server modules/extensions to be able to understand that protocol, because other than that there is no way that you'll get what you want. I don't think so. There is nothing about Blob that would require a data source to implement range access. Blob.slice() does not require the underlying source to provide range access. The source could be read in entirety and buffered by a Blob instance. If a reasonable WS enabled mechanism were defined for a Lazy Blob that permitted an application injected range access, then that could be used to perform actual range access. There is no need for WS/WSP to support those semantics directly. I don't particularly care if a default behavior for WS is provided that buffers the entire read stream. It's fine to mandate that an application defined function implement those semantics on a WS instance. My concern is that use of WS be recognized as a legitimate source for filling a lazy blob, and that an author should have an option to use WS, depending on app injected code as needed, instead of mandating XHR for this purpose. I'll leave the details of defining this to the proposers of lazy blob.
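A minimal sketch of the buffer-everything default mentioned above: collect the entire WS read stream and expose it as an ordinary (non-lazy) Blob once the connection closes. The function name and shape are invented for this example; a real lazy-blob mechanism would defer the reads rather than buffering eagerly.

// Buffer every binary message received on a WebSocket into a single Blob.
function blobFromWebSocket(url, onBlob) {
  var ws = new WebSocket(url);
  ws.binaryType = "blob";                       // each message arrives as a Blob
  var chunks = [];
  ws.onmessage = function (e) { chunks.push(e.data); };
  ws.onclose = function () { onBlob(new Blob(chunks)); };
}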
Re: Lazy Blob
On Wed, Aug 1, 2012 at 9:35 PM, Glenn Maynard gl...@zewt.org wrote: On Wed, Aug 1, 2012 at 9:54 PM, Glenn Adams gl...@skynav.com wrote: I don't particularly care if a default behavior for WS is provided that buffers the entire read stream. Sorry, but that doesn't make sense. You don't access a message-based protocol (Web Sockets) using a character-based API (Blob). They're utterly different APIs. Have you read the Blob interface spec? To quote: This interface represents *immutable* raw data. It provides a method to slice data objects between ranges of bytes into further chunks of raw data (http://dev.w3.org/2006/webapi/FileAPI/#dfn-slice). The last time I checked, bytes are bytes, not characters. The fact that the interface provides access to those bytes via a particular string encoding is irrelevant. I'll leave the details of defining this to the proposers of lazy blob. You're free to come up with your own proposal, of course, and editors and vendors will choose among them (or come up with something else, or reject the idea entirely) as they always do, but others are not obligated to twist their proposals to your demands. Of course, implementers are free to ignore whatever they want, but last time I checked, the W3C was a consensus based standards organization which means agreement needs to be reached on what specs go out the door and what are in those specs. Since this is a W3C ML and not an implementers' forum, then I will continue to assume that the W3C process applies. There is a fixed obligation for editors and WG to address comments. They can't simply be rejected because they require work on the part of the editors or proposers.
Re: Lazy Blob
On Wed, Aug 1, 2012 at 2:13 PM, Glenn Adams gl...@skynav.com wrote: On Wed, Aug 1, 2012 at 2:04 PM, Glenn Maynard gl...@zewt.org wrote: Can we please stop saying lazy blob? It's a confused and confusing phrase. Blobs are lazy by design. On Wed, Aug 1, 2012 at 2:26 PM, Glenn Adams gl...@skynav.com wrote: So? Why should lazy blob be specific to HTTP specific semantics when an arbitrary URL is not specific to HTTP? XHR is no more specific to HTTP than it is to XML. It serves as the primary JavaScript API for performing generic network fetches. WebSockets has an entirely different API from blobs, and bringing them up is only derailing the thread. The subject line says Lazy Blob, not Lazy Blob and XHR. For the record, I will object to a LazyBlob solution that is tied solely to XHR, so deal with it now rather than later. Just to make it clear, I support the idea of defining a lazy blob mechanism. However, I am not satisfied that a solution that is tied solely to XHR is sufficient. I would like to see a mechanism that supports both XHR and WS [and others?]. Despite the repeated claims of Florian and GlennM that it doesn't make sense, etc., I think it does make sense and can be reasonably (and simply) defined to handle such use cases. If necessary I can volunteer a strawman to that end. However, I would prefer that DAR or other proposers take the time to consider this use case and factor it into their proposals.
Re: CfC: publish Candidate Recommendation of Web Sockets API; deadline July 18
On Wed, Jul 11, 2012 at 10:52 AM, Edward O'Connor eocon...@apple.com wrote: Art wrote: As such, this is a Call for Consensus to publish a Candidate Recommendation of Web Sockets. Ship it! :) +1
Re: Howto spec
On Wed, May 23, 2012 at 6:45 AM, Anne van Kesteren ann...@annevk.nl wrote: Hi, I have made some updates to the howto spec wiki page outlining how you should go about writing a specification, with some emphasis on specifications for APIs. http://wiki.whatwg.org/wiki/Howto_spec In particular the Patterns and Legacy DOM-style sections are probably of interest. I would love to have feedback to see what else people would like to see explained or how what is explained thus far can be done better. I would like to see more explanation of some statements under the Legacy DOM-style section, particularly: - what is the particular style of defining methods and attributes that is to be discouraged? - how does ReSpec.js use or promote the discouraged particulars?
Re: Howto spec
On Wed, May 23, 2012 at 9:55 AM, Dimitri Glazkov dglaz...@chromium.orgwrote: This is neat! I especially appreciated the Method/Attribute patterns. I will use this. Should I be concerned about what seems to be a lively competition between ReSpec and Anolis. Do we need this tussle? Can we not just decide which tool to use? editor tools are at the editors' prerogative, so we should not mandate specific tools i think in fact, i am not completely happy with either respec or anolis, and have put together something of a hybrid i'm using on cssom*; in particular, what i'm doing is: - writing all IDL and related documentation in WebIDL format, with each top-level definition in a distinct file, while using 'Documentation' extended attributes in the WebIDL files that contains both substitution patterns and markup, rather akin to javadoc but with different substitution keywords that better pertain to the WebIDL usage context; - use a driver file with CPP includes, then running (gnu) CPP to create a single IDL resource for the subsequent processing - use robin's WebIDLParser.js [1] (via node.js) to validate and dump JSON representation of IDL - use Aria Stewart's (aredridel) HTML5.js parser [2] (via node.js) to parse then serialize with substitution replacement based on the JSON IDL, e.g., !--widl(MediaList)-- is replaced with an HTML5 representation of the MediaList IDL, !--widl-intro(MediaList)--, !--widl-attrs(MediaList)--, !--widl-methods(MediaList)--, etc., get the associated documentation - finally use anolis to perform other substitutions, toc generation, etc. the reason I'm doing this is because i prefer embedding documentation in IDL sources than embedding IDL in HTML sources; i also want to do all processing at authoring time, and not at load time via the ReSpec approach once i'm satisfied with this approach, i'll post it and document with a wiki in case some other editor wishes to use this method; but, again, i think which approach is used should be left to specific editors, since it affects their productivity cheers, glenn
Re: Howto spec
On Wed, May 23, 2012 at 11:29 AM, Glenn Adams gl...@skynav.com wrote: On Wed, May 23, 2012 at 9:55 AM, Dimitri Glazkov dglaz...@chromium.orgwrote: This is neat! I especially appreciated the Method/Attribute patterns. I will use this. Should I be concerned about what seems to be a lively competition between ReSpec and Anolis. Do we need this tussle? Can we not just decide which tool to use? editor tools are at the editors' prerogative, so we should not mandate specific tools i think in fact, i am not completely happy with either respec or anolis, and have put together something of a hybrid i'm using on cssom*; in particular, what i'm doing is: - writing all IDL and related documentation in WebIDL format, with each top-level definition in a distinct file, while using 'Documentation' extended attributes in the WebIDL files that contains both substitution patterns and markup, rather akin to javadoc but with different substitution keywords that better pertain to the WebIDL usage context; - use a driver file with CPP includes, then running (gnu) CPP to create a single IDL resource for the subsequent processing - use robin's WebIDLParser.js [1] (via node.js) to validate and dump JSON representation of IDL - use Aria Stewart's (aredridel) HTML5.js parser [2] (via node.js) to parse then serialize with substitution replacement based on the JSON IDL, e.g., !--widl(MediaList)-- is replaced with an HTML5 representation of the MediaList IDL, !--widl-intro(MediaList)--, !--widl-attrs(MediaList)--, !--widl-methods(MediaList)--, etc., get the associated documentation - finally use anolis to perform other substitutions, toc generation, etc. the reason I'm doing this is because i prefer embedding documentation in IDL sources than embedding IDL in HTML sources; i also want to do all processing at authoring time, and not at load time via the ReSpec approach once i'm satisfied with this approach, i'll post it and document with a wiki in case some other editor wishes to use this method; but, again, i think which approach is used should be left to specific editors, since it affects their productivity cheers, glenn relevant links [1] https://github.com/darobin/webidl.js [2] https://github.com/aredridel/html5
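As a rough illustration of the substitution step described above, assuming a JSON dump of the parsed IDL is already available. The shape of idlIndex and the marker handling are invented for this sketch; the real pipeline uses webidl.js and an HTML5 parser rather than a regex.

// Replace <!--widl(Name)-->, <!--widl-intro(Name)-->, <!--widl-attrs(Name)-->,
// <!--widl-methods(Name)--> markers in an HTML source with markup generated
// from a (hypothetical) JSON index of the parsed IDL.
var fs = require("fs");

function expandWidlMarkers(htmlPath, idlIndex) {
  var html = fs.readFileSync(htmlPath, "utf8");
  return html.replace(/<!--widl(?:-(\w+))?\(([^)]+)\)-->/g, function (m, kind, name) {
    var def = idlIndex[name];
    if (!def) return m;                      // leave unknown markers untouched
    switch (kind) {
      case "intro":   return "<p>" + def.documentation + "</p>";
      case "attrs":   return "<pre>" + def.attributes.join("\n") + "</pre>";
      case "methods": return "<pre>" + def.methods.join("\n") + "</pre>";
      default:        return "<pre class='idl'>" + def.idl + "</pre>";
    }
  });
}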
Re: [widgets] HTML5 dependency blocking Widget Interface Proposed Recommendation
On Thu, Apr 19, 2012 at 7:06 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 13:48, Arthur Barstow wrote: Marcos - would you please enumerate the CR's uses of HTML5 and state whether each usage is to a stable part of HTML5? 3. When getting or setting the preferences attribute, if the origin of a widget instance is mutable (e.g., if the user agent allows document.domain to be dynamically changed), then the user agent must perform the preference-origin security check. The concept of origin is defined in [HTML]. Origin is a concept that is well understood - as is the same origin policy used by browsers. TWI [1] does not define the origin of a widget instance. Nor does HTML5. It is also confusing to say that HTML5 defines the 'concept of origin', given that it normatively refers to The Web Origin Concept [2]. TWI needs to be more specific about what aspect of Origin is being referenced and where that specific aspect is defined. [1] http://www.w3.org/TR/2011/CR-widgets-apis-20111213/ [2] http://tools.ietf.org/html/rfc6454
Re: [widgets] HTML5 dependency blocking Widget Interface Proposed Recommendation
On Thu, Apr 19, 2012 at 9:02 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 15:58, Glenn Adams wrote: On Thu, Apr 19, 2012 at 7:06 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 13:48, Arthur Barstow wrote: Marcos - would you please enumerate the CR's uses of HTML5 and state whether each usage is to a stable part of HTML5? 3. When getting or setting the preferences attribute, if the origin of a widget instance is mutable (e.g., if the user agent allows document.domain to be dynamically changed), then the user agent must perform the preference-origin security check. The concept of origin is defined in [HTML]. Origin is a concept that is well understood - as is the same origin policy used by browsers. TWI [1] does not define the origin of a widget instance. That's because they are not bound to any particular URI scheme. Just to some origin. Nor does HTML5. It is also confusing to say that HTML5 defines the 'concept of origin', given that it normatively refers to The Web Origin Concept [2]. TWI needs to be more specific about what aspect of Origin is being referenced and where that specific aspect is defined. As there are no interoperability issues, I don't agree the TWI spec needs to be updated any further. It's just a simple spec and any further clarifications would just be academic. [1] http://www.w3.org/TR/2011/CR-widgets-apis-20111213/ [2] http://tools.ietf.org/html/rfc6454 in that case, please record an objection on my part
Re: [widgets] HTML5 dependency blocking Widget Interface Proposed Recommendation
On Thu, Apr 19, 2012 at 9:04 AM, Glenn Adams gl...@skynav.com wrote: On Thu, Apr 19, 2012 at 9:02 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 15:58, Glenn Adams wrote: On Thu, Apr 19, 2012 at 7:06 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 13:48, Arthur Barstow wrote: Marcos - would you please enumerate the CR's uses of HTML5 and state whether each usage is to a stable part of HTML5? 3. When getting or setting the preferences attribute, if the origin of a widget instance is mutable (e.g., if the user agent allows document.domain to be dynamically changed), then the user agent must perform the preference-origin security check. The concept of origin is defined in [HTML]. Origin is a concept that is well understood - as is the same origin policy used by browsers. TWI [1] does not define the origin of a widget instance. That's because they are not bound to any particular URI scheme. Just to some origin. Nor does HTML5. It is also confusing to say that HTML5 defines the 'concept of origin', given that it normatively refers to The Web Origin Concept [2]. TWI needs to be more specific about what aspect of Origin is being referenced and where that specific aspect is defined. As there are no interoperability issues, I don't agree the TWI spec needs to be updated any further. It's just a simple spec and any further clarifications would just be academic. [1] http://www.w3.org/TR/2011/CR-widgets-apis-20111213/ [2] http://tools.ietf.org/html/rfc6454 in that case, please record an objection on my part just to be clear, I mean an objection to publishing as PR unless this is clarified; i believe this is an issue because the concept and use of origin is (1) very complex and (2) thus prone to misinterpretation; for example, it is not well recognized that HTML5 itself does not require a UA to send an Origin header in a URL request (see [3]) [3] https://www.w3.org/Bugs/Public/show_bug.cgi?id=16574
Re: [widgets] HTML5 dependency blocking Widget Interface Proposed Recommendation
On Thu, Apr 19, 2012 at 9:49 AM, Marcos Caceres w...@marcosc.com wrote: On Thursday, 19 April 2012 at 16:14, Marcos Caceres wrote: On Thursday, 19 April 2012 at 16:11, Glenn Adams wrote: in that case, please record an objection on my part just to be clear, I mean an objection to publishing as PR unless this is clarified; i believe this is an issue because the concept and use of origin is (1) very complex and (2) thus prone to misinterpretation; for example, it is not well recognized that HTML5 itself does not require a UA to send an Origin header in a URL request (see [3]) Yes, but those are issues of RFC6454 and the HTML5 spec (as well as Web Storage). But what does this have to do with Widgets? Glenn and I discussed this on IRC. Glenn suggested I add the following to the definition of a widget instance: The origin of a widget instance is the origin of the Document object associated with the widget instance's browsing context. I agree with Glenn's recommendation, so I've gone ahead and added that: http://dev.w3.org/2006/waf/widgets-api/#widget-instance thanks Marcos, I drop my objection; regarding the reference to HTML5, it would be an improvement if you could change section 6.5 from: The concept of origin is defined in [HTML] (http://dev.w3.org/2006/waf/widgets-api/#html5). to The concept of origin of a Document object is defined in [HTML] (http://dev.w3.org/2006/waf/widgets-api/#html5).
Re: [widgets] HTML5 dependency blocking Widget Interface Proposed Recommendation
On Thu, Apr 19, 2012 at 10:02 AM, Marcos Caceres marcosscace...@gmail.com wrote: On Thursday, 19 April 2012 at 16:57, Glenn Adams wrote: thanks Marcos, I drop my objection; regarding the reference to HTML5, Yay! :) it would be an improvement if you could change section 6.5 from: The concept of origin is defined in [HTML] (http://dev.w3.org/2006/waf/widgets-api/#html5). to The concept of origin of a Document object is defined in [HTML] (http://dev.w3.org/2006/waf/widgets-api/#html5). Done, and committed: http://dev.w3.org/2006/waf/widgets-api/#origin thanks for the speed of light resolution! :)
Re: [xhr] statusText is underdefined
On Wed, Mar 28, 2012 at 1:33 AM, Julian Reschke julian.resc...@gmx.de wrote: On 2012-03-28 00:35, Glenn Adams wrote: On Tue, Mar 27, 2012 at 4:17 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 3/27/12 2:46 PM, Glenn Adams wrote: Is this really a problem? Yes. We've run into bug reports in the past of sites sending some pretty random bytes in the HTTP status text, then reading .statusText from script. If we want interop here, we need to define the conversion. HTTP defines the form and encoding of the status text Except it doesn't, last I checked. Has that changed? RFC 2616 states (on pages 39, 40, and 15):

Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF

Reason-Phrase = *<TEXT, excluding CR, LF>

The TEXT rule is only used for descriptive field contents and values that are not intended to be interpreted by the message parser. Words of *TEXT MAY contain characters from character sets other than ISO-8859-1 [22] only when encoded according to the rules of RFC 2047 [14].

TEXT = <any OCTET except CTLs, but including LWS>

This makes it pretty clear that Reason-Phrase must use ISO-8859-1 (Latin1) unless it uses the encoded-word extension from RFC 2047. If the latter is used, then a charset must be designated. Given this, I don't see any spec bug (though there may be implementation bugs in case the client side does not correctly implement the above HTTP requirements). It's time to stop citing RFC 2616. Please have a look at http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p2-semantics-19.html#rfc.section.4 . Since 2616 is published and HTTPbis is not, I will go on citing it. Summary: HTTPbis does not attempt to define the character encoding anymore; if you use anything other than US-ASCII, you are on your own. RFC 2047 encoding never was used in practice, and has been removed. The right thing to do is the same as for header field values: use a US-ASCII compatible encoding that is most likely to work, and which is non-lossy, so a UTF-8 field value *can* be retrieved when needed. That encoding is ISO-8859-1. I'm not sure what you mean by citing ISO-8859-1 and UTF-8 in the same context. Please elaborate. (And HTTPBis doesn't talk about this because it defines octets on the wire, not an API). If HTTPbis doesn't define the character encoding of bytes on the wire when serializing reason status, then it leaves much to be desired.
Re: [xhr] statusText is underdefined
On Wed, Mar 28, 2012 at 2:33 AM, Anne van Kesteren ann...@opera.com wrote: On Tue, 27 Mar 2012 22:23:15 +0100, Boris Zbarsky bzbar...@mit.edu wrote: But the HTTP status text is a sequence of bytes, while the return value for statusText is a DOMString. The conversion from one to the other needs to be defined. Would using http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#inflate-a-byte-sequence-into-a-domstring be sufficient or is there something in particular we should do? Well, that would define a specific, definite algorithm. Never mind that it would introduce random bytes into DOMStrings that may or may not have anything to do with character data. I personally think a better solution is simply to dictate that reason status *always* be interpreted as ISO-8859-1, which would, in effect, make the inflate algorithm well defined; i.e., no longer simply random bytes.
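For reference, the inflate operation amounts to mapping each byte to the code unit with the same value, which is exactly what a strict ISO-8859-1 (Latin-1) decode does; a minimal JavaScript sketch (the function name is illustrative, not from the draft):

  // Inflate a byte sequence into a DOMString: each octet 0x00-0xFF becomes
  // the code unit U+0000-U+00FF with the same value (a strict Latin-1 decode).
  function inflateBytes(bytes) {
    var s = "";
    for (var i = 0; i < bytes.length; i++) {
      s += String.fromCharCode(bytes[i] & 0xFF);
    }
    return s;
  }
  // inflateBytes([0x4E, 0x6F, 0x74, 0x20, 0x46, 0x6F, 0x75, 0x6E, 0x64]) === "Not Found"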
Re: [xhr] statusText is underdefined
On Wed, Mar 28, 2012 at 3:50 AM, Anne van Kesteren ann...@opera.com wrote: On Wed, 28 Mar 2012 08:52:25 +0100, Glenn Adams gl...@skynav.com wrote: Well, that would define a specific, definite algorithm. Never mind that it would introduce random bytes into DOMStrings that may or may not have anything to do with character data. That's false. What is false? At present, the inflate algorithm does not make reference to any character encoding, so it just treats the data as bytes; therefore, it is *not* well defined when no character encoding is associated with the input byte sequence. Using iso-8859-1 is ambiguous as it is a common alias for windows-1252 which is definitely not what we want here. I'm not sure what you mean by ambiguous. If users/servers mislabel content as 8859-1 or if they insert non-8859-1 data into byte strings that are defined to be 8859-1, then that is a usage problem, not a spec problem. My point about introducing random bytes has to do with whether the inflate algorithm is employed as is or in conjunction with a normative statement about how to (semantically) interpret the input byte string (to the inflate algorithm). If we declare (normatively, in the spec) that it is 8859-1 then the algorithm and spec are now well defined. However, absent of declaring the encoding of the input byte string, the inflate algorithm output is not semantically known. I am assuming here that neither the inflate algorithm nor the (http) client is attempting to guess/sniff the encoding of the reason status string. Or are you suggesting otherwise?
Re: [xhr] statusText is underdefined
On Wed, Mar 28, 2012 at 4:48 AM, Julian Reschke julian.resc...@gmx.de wrote: On 2012-03-28 09:48, Glenn Adams wrote: I'm not sure what you mean by citing ISO-8859-1 and UTF-8 in the same context. Please elaborate. If you have UTF-8 on the wire and the client handles it as ISO-8859-1, the API user can extract the original octets from the string and re-decode from UTF-8. Of course that requires either heuristics or out-of-band information that this actually was UTF-8 in the first place. The problem I have with this is now you have DOMString serving as a container for an arbitrary byte string; i.e., no longer having any relation to a UTF-16 code unit sequence. Naive uses of DOMString should be able to assume it denotes UTF-16 encoded strings. Any use of DOMString to serve as a holder for arbitrary binary data (including inflating from UTF-8 bytes into 16-bit code units) should be specifically marked as such, since user-authored code will need to know it is in fact not UTF-16 data. Let's call these two modes jekyll and hyde. When the inflate algorithm's input coding is not specified or known, then the output is a hyde mode DOMString, which is in fact not a character string, but merely an unsigned short[] array with no other semantics. It is certainly possible to define reasonStatus in this fashion, but if done this way, it should be made abundantly clear in the spec that this usage of DOMString is of the hyde variety, which has the effect of placing the burden of charset sniffing on the user-defined code. This is certainly a possible strategy for XHR client implementations to use in order to deal with the mess of actual usage in the web (wherein the 8859 dictum was ignored).
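To illustrate the burden this places on user code, here is a sketch of recovering a UTF-8 reason phrase from a byte-inflated (hyde mode) DOMString; it assumes a TextDecoder-style API, which was not available at the time of this thread:

  // Recover a UTF-8 string from a hyde-mode DOMString whose code units are
  // really raw octets (each in the range 0x00-0xFF).
  function recoverUtf8(hydeString) {
    var bytes = new Uint8Array(hydeString.length);
    for (var i = 0; i < hydeString.length; i++) {
      bytes[i] = hydeString.charCodeAt(i) & 0xFF; // each code unit holds one original octet
    }
    return new TextDecoder("utf-8").decode(bytes); // re-decode the octets as UTF-8
  }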
Re: [xhr] statusText is underdefined
On Tue, Mar 27, 2012 at 3:23 PM, Boris Zbarsky bzbar...@mit.edu wrote: The spec says: Return the HTTP status text. But the HTTP status text is a sequence of bytes, while the return value for statusText is a DOMString. The conversion from one to the other needs to be defined. If I may summarize: (1) although RFC2616 prescribes the use of 8859-1 for the on-the-wire representation of status text, this has not been followed in practice, and indeed, arbitrary character encodings are being used when serializing the reason status; (2) xhr client implementations have two options for exposing status text: - do not interpret status text in terms of character encoding; rather, simply expose the byte string to the user-defined code and leave encoding determination up to the user-defined code; - do interpret status text encoding, and convert to a semantically well defined character string, possibly requiring sniffing the serialized byte sequence; (3) in both of these options, it is possible to use DOMString to return the results: - in the first case, using what I have called hyde mode, the DOMString merely serves as an unsigned short[] for which the originally serialized byte sequence (of status text) is stuffed into the lower bytes (having no necessary relationship to a Unicode coded character sequence); - in the second case, using what I have called jekyll mode, the DOMString is interpreted (as normal) as a UTF-16 encoded Unicode string (corresponding to a well-defined Unicode coded character sequence); Is this an accurate summary? I agree that if the first option above is chosen, then the inflate algorithm is adequate. However, the specification text should make it abundantly clear that the hyde mode flavor of DOMString is being employed, and that the user-defined code has the burden of decoding. As a web-content author and user, I would prefer that option #2 be adopted; or, if I were very particular, I would prefer that two accessors were provided: one for obtaining the raw input bytes (e.g., as a BLOB) and another for obtaining the client's best guess at a decoded Unicode string. In this latter case, I could make the decision on which to use. Overall, I could accept option #1 if the spec makes clear that hyde mode applies. G.
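For concreteness, the two-accessor idea might look roughly like the following; statusTextBytes is purely hypothetical and not part of any draft:

  // Hypothetical sketch only: statusTextBytes is an invented accessor, not a real XHR member.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/resource");
  xhr.onload = function () {
    var raw = xhr.statusTextBytes; // hypothetical: the raw reason-phrase octets (e.g., an ArrayBuffer)
    var text = xhr.statusText;     // the client's best-guess decoded (jekyll mode) string
    // the caller decides which to trust, e.g. re-decoding raw as UTF-8 when out-of-band
    // information says the server sent UTF-8
  };
  xhr.send();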
Re: [xhr] statusText is underdefined
Is this really a problem? HTTP defines the form and encoding of the status text, and WebIDL/ES defines the form and encoding of DOMString. Adding an explicit conversion definition seems redundant and overspecified. I would argue the same for all other cases in the spec where it calls out an explicit (and unnecessary) conversion. On Tue, Mar 27, 2012 at 3:23 PM, Boris Zbarsky bzbar...@mit.edu wrote: The spec says: Return the HTTP status text. But the HTTP status text is a sequence of bytes, while the return value for statusText is a DOMString. The conversion from one to the other needs to be defined. -Boris
Re: [xhr] statusText is underdefined
On Tue, Mar 27, 2012 at 4:17 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 3/27/12 2:46 PM, Glenn Adams wrote: Is this really a problem? Yes. We've run into bug reports in the past of sites sending some pretty random bytes in the HTTP status text, then reading .statusText from script. If we want interop here, we need to define the conversion. HTTP defines the form and encoding of the status text Except it doesn't, last I checked. Has that changed? RFC2616 states (on pages 39, 40, and 15): Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF Reason-Phrase = *<TEXT, excluding CR, LF> The TEXT rule is only used for descriptive field contents and values that are not intended to be interpreted by the message parser. Words of *TEXT MAY contain characters from character sets other than ISO-8859-1 [22] only when encoded according to the rules of RFC 2047 [14]. TEXT = <any OCTET except CTLs, but including LWS> This makes it pretty clear that Reason-Phrase must use ISO-8859-1 (Latin-1) unless it uses the encoded-word extension from RFC2047. If the latter is used, then a charset must be designated. Given this, I don't see any spec bug (though there may be implementation bugs in case the client side does not correctly implement the above HTTP requirements).
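For illustration only (hypothetical status lines, not taken from any server): a Latin-1 reason phrase may appear directly in the octets, while anything outside ISO-8859-1 would have to be wrapped as an RFC 2047 encoded-word:

  HTTP/1.1 403 Accès refusé                        (ISO-8859-1 octets, permitted directly by the TEXT rule)
  HTTP/1.1 403 =?ISO-8859-1?Q?Acc=E8s_refus=E9?=   (the same phrase as an RFC 2047 Q-encoded word)
  HTTP/1.1 404 =?UTF-8?B?Tm90IEZvdW5k?=            (UTF-8 "Not Found", base64-encoded per RFC 2047)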
Re: [xhr] statusText is underdefined
On Tue, Mar 27, 2012 at 4:38 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 3/27/12 3:36 PM, Boris Zbarsky wrote: On 3/27/12 3:35 PM, Glenn Adams wrote: The TEXT rule is only used for descriptive field contents and values that are not intended to be interpreted by the message parser. Words of *TEXT MAY contain characters from character sets other than ISO-8859-1 [22] only when encoded according to the rules of RFC 2047 [14]. I believe that does not actually match server reality, unfortunately... And one more thing. Even the text you quoted does not define what happens if the rules from RFC 2047 are followed incorrectly (e.g. declaring a UTF-8 encoding but then having byte sequences that are not valid UTF-8 in the data). The behavior needs to actually be defined here for all values of the status text, whichever spec that happens in. Since there are so many places in XHR, HTML5, etc., that interact with HTTP semantics, it would be better to define this in one place for all uses, and not attempt to redefine at every place where conversion to DOMString occurs. DRY.
informal survey - on spec philosophy
It has been stated to me that, at least for open web platform standards, the following statement is true and is shared by the majority: "if it isn't written in the spec, it isn't allowed by the spec". I happen to disagree with the truth of this, based on my personal experience both with spec writing and with implementation/use of specs, but I would be curious to see who agrees with this idea or not. The case in point is an instance of a possible ambiguity in a spec because a particular assumption/convention is not documented; i.e., an assumption that something isn't allowed even though it isn't explicitly disallowed. While I agree it is, in general, impossible (or at least impractical) to document all disallowances, I do believe it is important to document significant disallowances, particularly when there are concerns raised about spec ambiguity. Regards, Glenn
Re: informal survey - on spec philosophy
On Mon, Mar 26, 2012 at 2:46 PM, Marcos Caceres w...@marcosc.com wrote: On Monday, 26 March 2012 at 21:40, Glenn Adams wrote: It has been stated to me that, at least for open web platform standards, the following statement is true and is shared by the majority: "if it isn't written in the spec, it isn't allowed by the spec". Can you provide some examples of what you mean? This seems a little out of the blue? the spec phrase "associated with" can be interpreted as any of the following relations [1]: - injective and surjective (one-to-one and onto) - injective and non-surjective (one-to-one but not onto) - non-injective and surjective (not one-to-one but onto) - non-injective and non-surjective (not one-to-one and not onto) [1] http://en.wikipedia.org/wiki/Bijection,_injection_and_surjection it has been claimed that "associated with" means at least injective and perhaps also surjective, and that since the spec does not say it can be non-injective, then the last two could not apply; my position is that, unless it is documented somewhere what the convention "associated with" means, it is (1) ambiguous, and (2) can be interpreted in any of the above four ways; this also goes to the issue of whether "if it is not documented in the spec it is not allowed" applies; my position is that if the spec is ambiguous (allows for multiple reasonable readings), then it is allowed (even though that may not have been the author's intent); I happen to disagree with the truth of this, based on my personal experience both with spec writing and with implementation/use of specs, but I would be curious to see who agrees with this idea or not. The case in point is an instance of a possible ambiguity in a spec because a particular assumption/convention is not documented; Which one? see above i.e., an assumption that something isn't allowed even though it isn't explicitly disallowed. While I agree it is, in general, impossible (or at least impractical) to document all disallowances, I do believe it is important to document significant disallowances, particularly when there are concerns raised about spec ambiguity. I guess it's a case-by-case thing. But generally, if the spec is written with a "not in spec, not allowed" state machine, then it would hold. there are two issues here: (1) whether the spec is ambiguous or not (permits multiple interpretations), and (2) whether there is an unwritten convention ("if the spec doesn't say it then it is not allowed") that applies or not; my position is that ambiguities should be avoided wherever possible and that important conventions should be documented; further, I'm not sure I would agree with a convention of "if the spec doesn't say it then it is not allowed"; or at least, that is the point of this thread, to see what others think...
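As a concrete reading of the kind of phrase at issue: treat "each widget instance is associated with a browsing context" as a mapping f from widget instances to browsing contexts; injective means f(a) = f(b) implies a = b (no two instances share a browsing context), and surjective means every browsing context is f(a) for some instance a. The four bullets above are then the four ways the prose can be read when the spec does not say which of these properties f has.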
Re: informal survey - on spec philosophy
On Mon, Mar 26, 2012 at 4:23 PM, Kang-Hao (Kenny) Lu kennyl...@csail.mit.edu wrote: (12/03/27 5:43), Glenn Adams wrote: my position is that, unless somewhere it is documented what the convention "associated with" means, that it is (1) ambiguous, and (2) can be interpreted in any of the above four ways; This is still lacking context, but in general I agree with you. The specific context this came up in is [1]. [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=16299 this also goes to the issue of whether "if it is not documented in the spec it is not allowed" applies; my position is that if the spec is ambiguous (allows for multiple reasonable readings), then it is allowed (even though that may not have been the author's intent); Agreed. (12/03/27 4:40), Glenn Adams wrote: It has been stated to me that, at least for open web platform standards, the following statement is true and is shared by the majority: "if it isn't written in the spec, it isn't allowed by the spec" What context was this statement in? For the spec for API A, you can't really write a test that asserts the non-existence of API B of course. A WebApps spec editor made this assertion to me: "if it isn't written in the spec, it isn't allowed by the spec". I did (do) not agree. I wondered what others think. The specific context is how to interpret "associated with" and whether it means one-to-one or not. Since the spec doesn't define "associated with", I argue that it need not be interpreted as one-to-one. However, the editor argued that if the spec doesn't say that it can be interpreted as non-injective (not one-to-one), then this interpretation is not allowed.
Re: WebSockets -- only TCP?
No. On Mon, Mar 19, 2012 at 1:09 AM, Rick van Rein r...@openfortress.nl wrote: Hello, See PeerConnection in http://dev.w3.org/2011/webrtc/editor/webrtc.html Ah, I was looking in the wrong place then :) Is this considered as part of the HTML5 specification?
Re: WebSockets -- only TCP?
RFC 6455 defines WSP as a TCP protocol [1] [1] http://tools.ietf.org/html/rfc6455#section-1.5 at present the WebSocket API is nothing more than a thin layer over WSP, and references WSP for all protocol bindings; there is no discarding of UDP involved; it simply is/was not a requirement driving WSP; if someone defines a new flavor of WSP in the future based on UDP, e.g., WSPU, then the WebSocket API could be updated to make reference to it; in conclusion, I don't see any cause to change the WebSocket API draft to explicitly suggest use of an alternative protocol (to WSP) since none exists at this time; On Thu, Mar 15, 2012 at 5:28 AM, Rick van Rein r...@openfortress.nl wrote: Hello, I would like to comment on the current (20120313) WebSockets specification. The text sounds to me like it implicitly assumes that all protocols are run over TCP. It could be said that the choice of URL makes it sufficiently general to include UDP (and possibly SCTP), but the usage of terms like connecting sends a hint to implementers that support of TCP would suffice. If the intention is to create a TCP-only WebSocket, then I think this should be made explicit. And if UDP would also be supported, then a remark around connection states that some apply only to connection-oriented URL protocols would send a clearer message to implementers. I do think UDP is too important to discard from WebSockets; among the things we can do with current technology (Flash or Java) is a softphone running in a browser; in a TCP-only HTML5 environment with deprecated support for these technologies such options would have no standing ground. I hope this is helpful feedback. Best wishes, Rick van Rein OpenFortress
Re: IME API Use cases editorial feedback
On Wed, Feb 29, 2012 at 2:59 AM, Kang-Hao (Kenny) Lu kennyl...@csail.mit.edu wrote: http://dvcs.w3.org/hg/ime-api/raw-file/default/use-cases/Overview.html SUN Haitao found the description of the Traditional Chinese IME used as an example in this use cases document somewhat inaccurate. 3.1.2 Radical composer # typing ‘o’ produces ‘人’ on a Traditional-Chinese (or Bopomofo) # keyboard s/Bopomofo/Changjie/ I would suggest Cangjie as the preferred spelling for 倉頡 (仓颉) [1]. [1] http://en.wikipedia.org/wiki/Cangjie_input_method (It's not clear to me if Changjie radicals are phonetic but I am totally ignorant on this subject) they are not phonetic, nor are they semantic; they are geometric only (as graphical mnemonics) 3.2 Converter # Bopomofo characters ‘人弓’ matches Traditional-Chinese # ideographic characters ‘乞’, ‘亿’, ‘亇’, etc. s/Bopomofo characters/Changjie components/ s/Changjie/Cangjie/ Cheers, Kenny
Re: [FileAPI, common] UTF-16 to UTF-8 conversion
On Wed, Feb 29, 2012 at 2:36 PM, Arun Ranganathan aranganat...@mozilla.com wrote: On Tue, Feb 28, 2012 at 6:46 PM, Arun Ranganathan aranganat...@mozilla.com wrote: Should the actual UTF-8 encoding algorithm be specified by HTML? I don't know, since I think that Unicode to UTF-8 is pretty common. Might help if it was part of the common infrastructure. what needs to be specified that isn't already found in Unicode [1], clause D92, p92ff? [1] http://www.unicode.org/versions/Unicode6.1.0/ch03.pdf
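For reference, a sketch of the scalar-value-to-byte mapping that clause D92 defines (illustrative code, not normative text; surrogate code points U+D800-U+DFFF are excluded by definition):

  // UTF-8 encoding of one Unicode scalar value, per the table in Unicode clause D92.
  function encodeUtf8(cp) {
    if (cp <= 0x7F)   return [cp];                                            // 1 byte
    if (cp <= 0x7FF)  return [0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)];          // 2 bytes
    if (cp <= 0xFFFF) return [0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F),
                              0x80 | (cp & 0x3F)];                            // 3 bytes
    return [0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
            0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)];                   // 4 bytes
  }
  // encodeUtf8(0x20AC) -> [0xE2, 0x82, 0xAC] (EURO SIGN)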
Re: [FileAPI, common] UTF-16 to UTF-8 conversion
On Wed, Feb 29, 2012 at 2:58 PM, Arun Ranganathan aranganat...@mozilla.com wrote: On Wed, Feb 29, 2012 at 2:36 PM, Arun Ranganathan aranganat...@mozilla.com wrote: On Tue, Feb 28, 2012 at 6:46 PM, Arun Ranganathan aranganat...@mozilla.com wrote: Should the actual UTF-8 encoding algorithm be specified by HTML? I don't know, since I think that Unicode to UTF-8 is pretty common. Might help if it was part of the common infrastructure. what needs to be specified that isn't already found in Unicode [1], clause D92, p92ff? [1] http://www.unicode.org/versions/Unicode6.1.0/ch03.pdf I think that gets us by. Do you think we need a reference in FileAPI? Or can we merely say to encode as UTF-8 and leave it to implementations (a reasonable assumption IMHO). I think you should have a reference. You could either use the following, as does HTML5: [RFC3629] UTF-8, a transformation format of ISO 10646 (http://tools.ietf.org/html/rfc3629), F. Yergeau. IETF. or you could modify the language in Section 4 Terminology and Algorithms to read: The terms and algorithms *UTF-8*, fragment, scheme, document, unloading document cleanup steps, event handler attributes, event handler event type, origin, same origin, event loops, task, task source, URL, and queue a task are defined by the HTML specification [HTML]. A conforming user agent MUST support at least the subset of the functionality defined in HTML that this specification relies upon; in particular, it must support event loops and event handler attributes. [HTML]
Re: [FileAPI, common] UTF-16 to UTF-8 conversion
On Wed, Feb 29, 2012 at 3:43 PM, Glenn Adams gl...@skynav.com wrote: A conforming user agent MUST support at least the subset of the functionality defined in HTML that this specification relies upon; in particular, it must support event loops and event handler attributes. [HTML] Ignore the above paragraph.
Re: CfC by 02-14: Add IME API to the charter
will there be liaison/participation with I18N Core WG on this work? On Wed, Feb 8, 2012 at 5:29 AM, Charles McCathieNevile cha...@opera.com wrote: Hi, thanks to Mike and the Google guys, we have http://dvcs.w3.org/hg/ime-api/raw-file/default/use-cases/Overview.html which explains what an IME API would do and why it would be useful. I believe we have editors but it doesn't name a test facilitator (don't blame me, Art chose that as the name ;) ) and we need one. I am assuming that will be forthcoming, so this is a formal call for Consensus to add this item to the charter. Silence will be considered assent, positive response is preferred, and the deadline is the end of Tuesday 14th February. cheers Chaals -- Charles 'chaals' McCathieNevile Opera Software, Standards Group je parle français -- hablo español -- jeg kan litt norsk http://my.opera.com/chaals Try Opera: http://www.opera.com
Re: CfC by 02-14: Add IME API to the charter
thanks, i was just checking; i'll defer to Addison and the editor of the proposed work to handle the details On Wed, Feb 8, 2012 at 9:02 AM, Michael[tm] Smith m...@w3.org wrote: Hi Glenn, @2012-02-08 08:33 -0700: will there be liaison/participation with I18N Core WG on this work? I've already given Richard Ishida and Felix Sasaki a heads-up about it. I believe Richard is planning to propose an agenda item for it on the i18n WG call today. But anyway certainly there shall be active liaise-ing with i18n folk on this API. If you believe we need to capture that in the charter then I can work with the chairs to make sure we do that. --Mike -- Michael[tm] Smith http://people.w3.org/mike/+
Re: [webcomponents] Considering declarative event handlers
On Tue, Feb 7, 2012 at 12:41 PM, Dimitri Glazkov dglaz...@chromium.org wrote: To make Web Components more usable, I would like to consider providing a way to declare event handlers in markup. As I look over the use cases and try to implement them using the proposed syntax (http://dvcs.w3.org/hg/webcomponents/raw-file/tip/explainer/index.html), a pattern emerges, where a bunch of event handlers is declared and registered early in the lifecycle of the custom elements ( http://dvcs.w3.org/hg/webcomponents/raw-file/tip/samples/entry-helper.html , http://dglazkov.github.com/Tabs/tabs-control.js as rough examples). Is there a reason not to use (modifying as required) XML Events [1] for this purpose? [1] http://www.w3.org/TR/2003/REC-xml-events-20031014/
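For context, XML Events registers a handler declaratively with a listener element whose handler attribute points at an element defining the action; a rough sketch (the ids and event name are invented for illustration, not taken from the Web Components drafts):

  <!-- XML Events style declarative registration (sketch; ids are hypothetical) -->
  <listener xmlns="http://www.w3.org/2001/xml-events"
            event="click" observer="tab-strip" handler="#select-tab-action"/>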
Re: CfC: Charter addition for Fullscreen API
On Thu, Feb 2, 2012 at 11:37 AM, Anne van Kesteren ann...@opera.com wrote: On Tue, 31 Jan 2012 18:07:39 +0100, Arthur Barstow art.bars...@nokia.com wrote: On 1/31/12 11:04 AM, ext Robin Berjon wrote: We have a draft http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html I'm pretty sure that I've seen implementer interest, and it's very obvious that there's a lot of developer interest in it. My understanding is that it has an editor. It would be good to get confirmation from Anne and/or Tantek. I'm fine with publishing this through WebApps. is there any reason this should be done as part of CSSOM View? I notice a to-do listed at [1] as: - CSSOM should have a mechanism for taking elements full-screen [1] http://wiki.csswg.org/spec/cssom
Re: CfC: Charter addition for Fullscreen API
On Thu, Feb 2, 2012 at 4:14 PM, Anne van Kesteren ann...@opera.com wrote: On Fri, 03 Feb 2012 00:09:44 +0100, Glenn Adams gl...@skynav.com wrote: On Thu, Feb 2, 2012 at 11:37 AM, Anne van Kesteren ann...@opera.com wrote: I'm fine with publishing this through WebApps. is there any reason this should be done as part of CSSOM View? I notice a to-do listed at [1] as: - CSSOM should have a mechanism for taking elements full-screen [1] http://wiki.csswg.org/spec/cssom That was mostly because it was tangentially related (and at some point suggested to be put there), but now that it's drafted as a separate document I think it's fine to keep it as such. ok, sounds good
Re: Obsolescence notices on old specifications, again
On Tue, Jan 24, 2012 at 4:58 AM, Arthur Barstow art.bars...@nokia.com wrote: Ms2ger, Last September, some obsolescence text was added to the DOM 2 Views REC: [[ http://www.w3.org/TR/DOM-Level-2-Views/#notice-20110922 http://www.w3.org/TR/2000/REC-DOM-Level-2-Views-20001113/ *Document Status Update 2011-09-22*: This paragraph is informative. The concepts this document defines are obsolete. The 'document' and 'defaultView' attributes are defined in the HTML5 http://www.w3.org/TR/html5/ specification with simplified semantics. The Web Applications Working Group http://www.w3.org/2008/webapps/ encourages implementation of these concepts as defined by HTML5. ]] I think the proponents for adding obsolescence text to the other RECs should make a specific proposal for each REC. I would support a notice akin to this; however, I am concerned about using the term obsolete without having a normative substitute/replacement to reference. I realize that the potential substitutes are not yet in REC status, and will take some time to get there, and that it is possible to add informative references to work in progress, but this doesn't quite satisfy my notion of what obsolete means.
Re: Obsolescence notices on old specifications, again
On Tue, Jan 24, 2012 at 12:32 AM, Henri Sivonen hsivo...@iki.fi wrote: On Mon, Jan 23, 2012 at 10:38 PM, Glenn Adams gl...@skynav.com wrote: I work in an industry where devices are certified against final specifications, some of which are mandated by laws and regulations. The current DOM-2 specs are still relevant with respect to these certification processes and regulations. Which laws or regulations require compliance with some of the above-mentioned specs? Have bugs been filed on those laws and regulations? I am referring to laws, regulations, and formal processes adopted by various governments (e.g., U.S. and EU) and recognized international standards organizations (e.g., ITU). One does not file bugs against laws and regulations of this type. The industry I am referring to is television broadcast, cable, satellite, and broadband services, much of which is subject to national and international laws and regulations, some of which refer (directly or indirectly) to W3C RECs, including the DOM RECs being discussed here. With very few exceptions, the processes that govern these laws and regulations require that any externally referenced document be final, which, in the W3C process, means REC.
Re: Obsolescence notices on old specifications, again
The problem is that the proposal (as I understand it) is to insert something like: DOM2 (a REC) is obsolete. Use DOM4 (a work in progress). This addition is tantamount (by the reading of some) to demoting the status of DOM2 to a work in progress. 2012/1/24 Bronislav Klučka bronislav.klu...@bauglir.com Hello, I do understand the objection, but how relevant should it be here? If some regulation/law dictates that work must follow e.g. DOM 2, then it does not matter that it's obsolete... The law takes precedence here regardless of the status of the document. Technically in such a case one doesn't need to worry about any progress or status of such a document or specification. On 23.1.2012 19:06, Glenn Adams wrote: I object to adding such notice until all of the proposed replacement specs reach REC status. G. Brona
Re: Obsolescence notices on old specifications, again
I'm sorry, but for some, saying DOM2 (a REC) = DOM4 (a WIP) is the same as saying DOM2 is a WIP. This is because the former can be read as saying that the normative content of DOM2 is now replaced with DOM4. I'm not sure what you mean by "[DOM2] is a work on which progress has stopped". DOM2 is a REC, and is only subject to errata [1] and rescinding [2]. [1] http://www.w3.org/2005/10/Process-20051014/tr.html#rec-modify [2] http://www.w3.org/2005/10/Process-20051014/tr.html#rec-rescind I'm not sure where the proposed obsolescence message falls in terms of [1] or [2]. Perhaps you could clarify, since presumably the process document will apply to any proposed change. On Tue, Jan 24, 2012 at 12:36 PM, Ms2ger ms2...@gmail.com wrote: On 01/24/2012 08:33 PM, Glenn Adams wrote: The problem is that the proposal (as I understand it) is to insert something like: DOM2 (a REC) is obsolete. Use DOM4 (a work in progress). This addition is tantamount (by the reading of some) to demoting the status of DOM2 to a work in progress. Not at all; it's a work on which progress has stopped long ago.
Re: Obsolescence notices on old specifications, again
On Tue, Jan 24, 2012 at 12:39 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 24 Jan 2012, Glenn Adams wrote: The problem is that the proposal (as I understand it) is to insert something like: DOM2 (a REC) is obsolete. Use DOM4 (a work in progress). This addition is tantamount (by the reading of some) to demoting the status of DOM2 to a work in progress. It should be: DOM2 (a stale document) is obsolete. Use DOM4 (a work that is actively maintained). It would be more accurate perhaps to say that DOM4 is a work that is under active development. In the minds of most readers, maintenance is an errata process that follows completion (REC status). It doesn't demote DOM2 to a work in progress, because a work in progress is a step _up_ from where DOM2 is now. Many (most?) government, industry, and business activities that formally utilize W3C specifications would view a work in progress as less mature than a REC. That results in the former being assigned a lower value than the latter. So, yes, demote is the correct word. I understand your agenda is to reverse this way of thinking. I have no objection to that agenda per se. But it is not an agenda shared by many members of the W3C. If you think I'm wrong about this, then I'd like to see a poll or ballot that quantifies the membership's perspective on this issue.
Re: Obsolescence notices on old specifications, again
On Tue, Jan 24, 2012 at 12:58 PM, Ojan Vafai o...@chromium.org wrote: You keep saying this throughout this thread without pointing to specifics. It's impossible to argue with broad, sweeping generalizations like this. So far, you have yet to point to one concrete organization/statute that cares about REC status. Ojan, apparently you are not familiar with international or national standards bodies. To mention just a few, ANSI, ISO, and ITU care. I could give you a list of hundreds if you wish, all having encoded such rules into their formal processes.