Re: [clipops][editing] document.execCommand('copy', false, 'some data') ?
On Mon, Apr 13, 2015 at 3:18 PM, Hallvord Reiar Michaelsen Steen hst...@mozilla.com wrote: So.. are you suggesting something like window.Clipboard.setData('text/plain', 'foo') ? Maybe. I don't know what a good name would be.
Re: [clipops][editing] document.execCommand('copy', false, 'some data') ?
On Fri, Apr 10, 2015 at 2:44 PM, Hallvord Reiar Michaelsen Steen hst...@mozilla.com wrote: However, document.execCommand() is spec'ed as having a value argument. What about actually using it here? Simplifying the above code to: element.onclick = function(){ document.execCommand('copy', false, 'foo'); } Is this really copying? I think a new function for "set clipboard contents to specified value" would make more sense than overloading execCommand('copy') to mean something more than the standard text-editor meaning of copy. Besides, execCommand() is awful and we should prefer other APIs when possible.
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On Thu, Feb 12, 2015 at 4:45 AM, Marc Fawzi marc.fa...@gmail.com wrote: how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare? It already is, but there's no way out. This is true everywhere in computing. Look closely at almost any protocol, API, language, etc. that dates back 20 years or more and has evolved a lot since then, and you'll see tons of cruft that just causes headaches but can't be eliminated. Like the fact that Internet traffic is largely in 1500-byte packets because that's the maximum size you could have on ancient shared cables without ambiguity in the case of collision. Or that e-mail is mostly sent in plaintext, with no authentication of authorship, because that's what made sense in the 80s (or whatever). Or how almost all web traffic winds up going over TCP, which performs horribly on all kinds of modern usage patterns. For that matter, I'm typing this with a keyboard layout that was designed well over a century ago to meet the needs of mechanical typewriters, but it became standard, so now everyone uses it due to inertia. This is all horrible, but that's life.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Tue, Jan 27, 2015 at 4:49 PM, Koji Ishii kojii...@gmail.com wrote: 3 proposals so far: Proposal A: Leave it undefined. If it's not causing interop issues, we can leave it. Proposal B: Clone. Proposal C: Live. I can live with any, but prefer B.
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Tue, Jan 27, 2015 at 4:31 PM, Koji Ishii kojii...@gmail.com wrote: It's true that you could use multi-range selections to select in visual order. But there are a bunch of operations defined against selections. What does it copy? What will happen when you type 'a' to replace the selected text? Spell check? Bi-di algorithm? Almost every text algorithm is built on top of the model, which is the DOM today; we can't just replace it. In all of these cases, typically, the most correct thing you can do is do the operation on each range separately in sequence, probably in DOM order for lack of a better option. Copy should concatenate the selected ranges into the clipboard. Replacement probably would delete all the ranges and replace the first one. I don't see what spellcheck or bidi have to do with selections at all. All this is certainly not simple to work out, and in some cases there will be no good answer for what to do, but it's something you have to do if you want to deal with non-contiguous selections. I think visual-order selections, if they ever happen, should have a different architecture, and they should not be handled together with multi-range selections. What do you mean by visual-order selections, and can you give a specific example of something that should behave differently for visual-order and multi-range selections?
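The per-range approach described here can be sketched as a small helper over the standard Selection interface shape (rangeCount / getRangeAt). The helper names are mine and this is an illustration of the idea, not anything specced:

```javascript
// Apply an operation to every range in a selection, in order,
// instead of assuming there is only one range.
function forEachRange(selection, fn) {
  for (let i = 0; i < selection.rangeCount; i++) {
    fn(selection.getRangeAt(i), i);
  }
}

// "Copy should concatenate the selected ranges": collect the text
// of every range, not just getRangeAt(0).
function getSelectedText(selection) {
  const parts = [];
  forEachRange(selection, range => parts.push(range.toString()));
  return parts.join('');
}
```

In a browser this would be called with window.getSelection(); only the interface shape matters for the sketch.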
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Sat, Jan 24, 2015 at 9:18 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: Though I believe browsers will soon have much more pressure to support multiple ranges as a matter of course, as increased design with Flexbox and Grid will mean that highlighting from one point to another -- in a world where a range is defined by two DOM endpoints and contains everything between them in DOM order -- can mean highlighting random additional parts of the page that are completely unexpected. Switching to a model of visual highlighting for selections will require multi-range support. In other words, it'll switch from being a rare thing to much more common. Most sites will probably not use flexbox or grid for a long time to come, and on sites that do, non-contiguous selections will probably be rare, so I wouldn't rely on this as a mitigating factor. I once went through some common selection use-cases with the new selection API that I suggested before (returning a list of selected nodes or such), and for at least some common cases (like "wrap the selection in a span") it handled non-contiguous selections automatically, and was easier to use as well. For typical selection use-cases, the author wants to deal with the selected nodes anyway, not the endpoints.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Sun, Jan 25, 2015 at 1:31 AM, Mats Palmgren m...@mozilla.com wrote: Gecko knows if a Range is part of a Selection or not. Authors don't, I don't think. Of course, we could expose this info to authors if we wanted, so that's not a big problem. True, I'm just saying that I don't see any practical problems in implementing live ranges to manipulate the Selection if we want to. I don't think there are any implementation problems, I just think it's an API that's confusing to authors relative to the alternative (returning copies). And it's probably easier for the UAs that return references to switch to returning copies than the reverse, so it increases the chance of convergence in the near term. Also, if mutating the range throws, it will break author code; but if it fails silently, it creates a "what on earth is going wrong?!" head-banging scenario for authors. And anything authors can do with a reference, they can do with a copy just as well, by mutating the copy and then calling .removeRange() and .addRange(). So I think returning a copy makes much more sense.
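The copy-based pattern described here -- mutate a copy, then swap it back in with .removeRange()/.addRange() -- can be sketched as follows. The helper is hypothetical; the Selection and Range method names are the standard ones:

```javascript
// Update the selection by cloning its primary range, mutating the
// clone, and reinstalling it -- no live reference needed.
function updatePrimaryRange(selection, mutate) {
  if (selection.rangeCount === 0) return;
  const copy = selection.getRangeAt(0).cloneRange();
  mutate(copy);                               // e.g. copy.setStart(node, 0)
  selection.removeRange(selection.getRangeAt(0));
  selection.addRange(copy);
}
```

This works identically whether getRangeAt returns a reference or a copy, which is part of the argument for returning copies.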
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Sat, Jan 24, 2015 at 3:28 PM, Koji Ishii kojii...@gmail.com wrote: Looks like we're in consensus that a) it doesn't really cause issues today, and b) there are scenarios where live-ness is nice. I don't agree that it doesn't cause issues now. Unless we want Range methods to behave differently based on whether they're in a Selection, returning a live range means you can't restrict what nodes are in the selection, e.g., detached nodes. This has caused at least one bug in Gecko. It would be much easier for IE/Gecko to switch to returning copies than for WebKit/Blink to switch to returning live ranges. And this opens up the possibility of normalizing the selection in a way that makes writing code to handle selections significantly easier, e.g., limiting the types of nodes that the selection can be in. So I think it makes more sense to spec returning a copy. I don't have any opinion on how this should be prioritized relative to other editing work. I will note that it would be quite easy for Gecko to switch to returning a copy, so it doesn't have to take significant implementation work away from other projects.
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Sat, Jan 17, 2015 at 10:12 PM, Olivier Forget teleclim...@gmail.com wrote: I'd be interested in hearing more about what didn't work with that API by both devs who tried to make use of it and the implementors too. For the record: web developers don't usually take advantage of additional functionality that is provided by only one browser, or implemented in differing unpolished ways by different browsers. When possible we take the lowest common denominator approach to offer a consistent experience from browser to browser, and to avoid spending resources writing code that only a subset of users will be able to use anyways. What I'm saying is that the fact that few devs worked with multiple ranges may not be a reflection of the quality of the API, but rather that because it wasn't implemented across browsers it wasn't worth it from a cost-benefit point of view. And no I'm not saying the API is great either, just that saying "developers won't do it" is not really fair to anybody. It's not just that it was only implemented by one UA. It's also that even in Firefox, multiple-range selections practically never occur. The only way for a user to create them is to either Ctrl-select multiple things, which practically nobody knows you can do; or select a table column, which is also extremely uncommon; or maybe some other obscure ways. In evidence of this fact, Gecko code doesn't handle them properly either. Ehsan might be able to provide more details on this if you're interested.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Wed, Jan 21, 2015 at 5:20 PM, Mats Palmgren m...@mozilla.com wrote: It seems fine to me. WebKit/Blink already rejects(*) a range with detached nodes in the addRange call. Imposing the same restriction on a (live) Selection range is consistent with that. I don't think it's consistent at all. In one case, you're calling a Selection method. In the other case, you're calling a Range method. Range methods shouldn't behave differently based on whether the Range is attached to a Selection. You actually have no way of telling whether a given Range is part of a Selection, right? Selection methods wouldn't provide the same functionality though. Selection.setStart* would presumably be equivalent to setStart* on the first range in the Selection, but how do you modify the start boundary point on other ranges when there are more than one? I guess we could add them as convenience methods, making setStart* operate on the first range and setEnd* on the last, but it's still an incomplete API for multi-range Selections. True. You can still use the Range methods, you just have to do .removeRange() and .addRange() to update it. So it's not a significant issue, I think.
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Mon, Jan 12, 2015 at 9:59 PM, Ben Peters ben.pet...@microsoft.com wrote: Multiple selection is an important feature in the future. Table columns are important, but we also need to think about BIDI. Depending on who you talk to, BIDI should support selection in document order or layout order. Layout order is not possible without multi-selection. I do not believe “everyone wants to kill it” is accurate. I agree with Olivier that it’s crucial to a full-featured editor. We don’t want to make sites implement this themselves. If implementers are interested, then that's fine by me. I was summarizing the result of a previous discussion or two I was aware of, and the current implementation reality. However, I think thought should go into an API that supports non-contiguous selections without making authors try to handle the non-contiguous case specially, because they won't. Exposing a list of selected nodes/parts of CharacterData nodes is a possibility that has occurred to me -- like returning a list of SelectedNodes, where SelectedNode has .node, .start, and .end properties, and .start and .end are null unless it's a partially-selected CharacterData node, and no node is in the list if it has an ancestor in the list. So fo[o<b>bar<i>baz</i></b>]quuz (where [ and ] mark the selection endpoints) would expose [{node: foo, start: 2, end: 3}, {node: b, start: null, end: null}, {node: quuz, start: 0, end: 0}] as the selected nodes. Then authors would use it by iterating over the selected nodes, and non-contiguous selections would be handled automatically. I once thought over some use-cases and concluded that a lot of them would Just Work for non-contiguous selections this way -- although doubtless some cases would still break. (Obvious disadvantages of this approach include a) authors will still continue using the old API, and b) calculating the list might be somewhat expensive.
(a) might be mitigated by the fact that it's easier to use for some applications, particularly editing-related ones -- it saves you from having to walk through the range yourself.) I certainly agree that non-contiguous selection is a good feature to have! But as far as I'm aware, in Gecko's implementation experience, multiple Ranges per Selection has proven to be a bad way to expose them to authors. Ehsan could tell you more.
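To make the proposal concrete, here is a sketch of consuming the hypothetical SelectedNode list described above. None of this is a real API; textOf is a stand-in for reading a node's character data:

```javascript
// Iterate the proposed list of SelectedNode objects ({node, start,
// end}). Non-contiguous selections need no special casing -- the
// list is simply longer. Partially-selected CharacterData nodes
// carry non-null start/end offsets; fully-selected nodes have nulls.
function selectedTextFrom(selectedNodes, textOf) {
  return selectedNodes
    .map(({ node, start, end }) =>
      start === null ? textOf(node) : textOf(node).slice(start, end))
    .join('');
}
```

An author using this shape never branches on whether the selection is contiguous, which is the point of the design.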
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
I just said it in the other thread, but just to clarify in this thread too: I think non-contiguous selections are a great feature. I think exposing them to authors as multiple Ranges in a Selection has proven not to be a good way to do it, because authors basically without exception just ignore any ranges beyond the first. When writing the Selection code, I reviewed a decent amount of author code, and all of it (I don't think I found an exception) just did .getRangeAt(0) and ignored the rest. Gecko's own internal code has been found to misuse them as well, as Ehsan demonstrated to me once. If we want non-contiguous selections to work in author code that's not specially written to accommodate them, we should think of a different API, perhaps the one I suggested in the other thread. Also, to clarify, my initial selection spec accommodated multiple ranges. I deliberately removed support when it looked like no one wanted to support the feature: https://dvcs.w3.org/hg/editing/rev/b1598801692d. Speccing it is not the problem. The bug was here, where I say that Ehsan and Ryosuke agreed with it (at a face-to-face meeting we had at Mozilla Toronto): http://www.w3.org/Bugs/Public/show_bug.cgi?id=13975 On Wed, Jan 14, 2015 at 6:14 PM, Mats Palmgren m...@mozilla.com wrote: On 01/09/2015 12:40 PM, Aryeh Gregor wrote: The advantage of the IE/Gecko behavior is you can alter the selection using Range methods. The advantage of the WebKit/Blink behavior is you can restrict the ranges in the selection in some sane fashion, e.g., not letting them be in detached nodes. It would be easy to change Gecko to ignore addRange() calls if the range start/end node is detached. We could easily do the same for range.setStart*/setEnd* for ranges that are in the Selection. Would that address your concern about detached nodes? I think so, yes, but it would mean making Range methods behave differently depending on whether the range is in a selection. Is that really sane?
What are the reasons to return a clone anyway? Is it important to be able to call (mutating) Range methods on a Selection? If we really want authors to have convenience methods like setStartBefore() on Selection, we could add them to Selection.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Mon, Jan 12, 2015 at 3:40 AM, Karl Dubost k...@la-grange.net wrote: I'm using multiple range selection very often. From every day to a couple of times a week. My main usage is when I use bookmarking services and I want to keep parts of an article that are distant. Basically a text1 […] text2 […] text3 scenario. I select while reading, and when finished I can bookmark all my selections at once. Also when I want to remove some noise from a paragraph. It's a lot more practical than having to create an individual bookmark for each part OR to have to select, bookmark and cut. As a user I find that essential and very practical (implementation details apart). I think the proposal for Gecko was to leave multiple-range selections functional, but not expose anything beyond the primary range to author JavaScript. So there would be no change for users, only to authors.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Fri, Jan 9, 2015 at 8:29 PM, Olivier Forget teleclim...@gmail.com wrote: On Fri Jan 09 2015 at 4:43:49 AM Aryeh Gregor a...@aryeh.name wrote: - It may never happen, but when multiple ranges are supported, are they bound to index? Everyone wants to kill this feature, so it's moot. Could you please point me to the discussion where this conclusion was reached? I searched the mailing list but I only found a few ambivalent threads, none indicating that everyone wants to kill this. Thanks. I don't remember whether it was ever discussed on the mailing list in depth. The gist is that no one has ever implemented it except Gecko, and I'm pretty sure no one else is interested in implementing it. The Selection interface was invented by Netscape to support multiple ranges to begin with, but all the other UAs that reverse-engineered it and/or implemented from the DOM Range specs deliberately made it support only one range (in incompatible UA-specific ways, naturally). Ehsan Akhgari, maintainer of the editor component for Gecko, is in favor of removing (user-visible) support for multiple selection ranges from Gecko, and last I heard no one objected in principle. So the consensus of implementers is to support only one range. As far as I know, the only reason Gecko still supports multiple ranges is because no one has gotten around to removing them. (Ehsan would know more about that.) The reason for all this is that while it makes wonderful theoretical sense to support multiple ranges for a selection, and is necessary for extremely sensible features like allowing a user to select columns of a table, multi-range selections are nonexistent in practice. A selection that has multiple ranges in it is guaranteed to be mistreated by author code, because no one actually tests their code on multi-range selections. 
More than that, Gecko code -- which is much higher-quality than typical author code and much more likely to take multiple ranges into account -- has tons of bugs with multi-range selections and behaves nonsensically in all sorts of cases. So in practice, multi-range selections break everyone's code in the rare cases where they actually occur. In general, an API that has a special case that will almost never occur is guaranteed to be used in a way that will break the special case, and that's very poor API design. In theory, a redesigned selection API that allows for non-contiguous selections *without* making them a special case would be great. Perhaps a list of selected nodes/character ranges. But multiple ranges is not the way to do things.
Re: [Selection] Should selection.getRangeAt return a clone or a reference?
On Wed, Jan 7, 2015 at 12:32 AM, Ryosuke Niwa rn...@apple.com wrote: Trident (since IE10) and Gecko both return a live Range, which can be modified to update the selection. WebKit and Blink both return a cloned Range so that any changes to the Range don't update the selection. It appears that there is a moderate interest at Mozilla in changing Gecko's behavior. Does anyone have a strong opinion about this? The advantage of the IE/Gecko behavior is you can alter the selection using Range methods. The advantage of the WebKit/Blink behavior is you can restrict the ranges in the selection in some sane fashion, e.g., not letting them be in detached nodes. WebKit/Blink cannot change to return a reference unless they allow arbitrary ranges in selections, which last I checked they don't, and I'm guessing they would have trouble supporting it. Whereas IE/Gecko could easily change, and authors who already supported WebKit/Blink wouldn't lose any features. So I guess returning a value makes the most sense. (If you return a reference, you must allow arbitrary ranges, because the author could call setStart() on the returned range with any value they want, and they will expect that the range's new start will be exactly what they set it to.) On Wed, Jan 7, 2015 at 12:08 PM, Koji Ishii kojii...@gmail.com wrote: I also guess that we need to ask the spec editor for more work to support the liveness, such as: My old spec had no trouble answering these questions. I don't think it's particularly complicated, except it requires allowing arbitrary ranges to be in selections, and I suspect WebKit/Blink would have trouble dealing with that. - What will happen to the live-object on removeAllRanges()? The range is detached from the selection, so further changes have no effect. - Would the live-object keep the same reference for removeAllRanges() + addRanges()? No. If you use addRange(), a reference to your existing range is put in the selection.
- It may never happen, but when multiple ranges are supported, are they bound to index? Everyone wants to kill this feature, so it's moot. Speccing them in detail and writing tests for all these cases would be quite a bit of work. I already wrote the spec and the tests, although I'm sure there are still some gaps. I think WebKit/Blink are the bigger obstacle.
Re: [clipboard events] click-to-copy support could be hasFeature discoverable?
On Wed, May 21, 2014 at 2:01 AM, Glenn Maynard gl...@zewt.org wrote: I think I'd suggest avoiding the mess of execCommand altogether, and adding new methods, e.g. window.copy() and window.cut() (or maybe just one method, with a cut option). execCommand is such a nonsensical way to expose an API that trying to stay consistent with its commands is probably not much of a win. I'm inclined to agree, FWIW. If the command is really strictly editor-related, and makes sense only in conjunction with an editor based on existing commands, I would add it to execCommand for consistency (like defaultParagraphSeparator or fontSizePt). But anything else should stay far away. (Actually, if contenteditable wasn't an unsalvageable trainwreck, I would rather write a new API that actually follows JS norms, like window.editor.bold() or similar, but it is, so there's no point in doing anything beyond *maybe* trying to get it a bit more interoperable.)
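For reference, the pattern that works within the clipboard-events model being discussed is intercepting the copy event and calling clipboardData.setData, which is the specced ClipboardEvent interface; the wrapper function here is just an illustration:

```javascript
// During a 'copy' event, put author-chosen data on the clipboard and
// suppress the default copy of the current selection.
// In a page this would be wired up as:
//   document.addEventListener('copy', e => setClipboard(e, 'foo'));
function setClipboard(event, text) {
  event.clipboardData.setData('text/plain', text);
  event.preventDefault();  // keep the browser from overwriting our data
}
```

Without preventDefault(), the browser's default copy action replaces whatever setData stored.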
Re: [selection] [editing] Selection API Bugzilla is available
On Mon, Apr 21, 2014 at 9:19 PM, Ben Peters ben.pet...@microsoft.com wrote: The Selection API Bugzilla component [1] is now available for bugs in the Selection API spec [2]. I propose that we move selection-related bugs from the HTML Editing APIs spec [3] to this new component. Are there any objections? If not, we will be moving some bugs over (in case you're tracking them). Please go ahead. Thanks!
Re: [editing] insertHorizontalRule into p while its ancestor is non-editable
On Thu, Mar 20, 2014 at 6:38 PM, Marta Pawlowska m.pawlow...@samsung.com wrote: Specification details that lead me to my conclusions: https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#the-inserthorizontalrule-command - step 2: "If p is not an allowed child of the editing host of node, abort these steps." -- https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#fix-disallowed-ancestors -- p is not allowed child of p - step 4: "While node is not an allowed child of its parent, split the parent of the one-node list consisting of node." -- split the parent: --- https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#split-the-parent Wow, you're one of the few people ever to actually try to understand the spec algorithms in detail. Kudos. In step 2, "abort these steps" means abort the whole algorithm, like a return statement in a function. So the algorithm fails at that point, and no further steps are executed. It doesn't just leave the if statement. I think that addresses your concern, correct? At one point I considered defining and hyperlinking terms like "abort these steps" in case they were unclear, but I never got around to it. Thanks for the feedback!
Re: [Editing] Splitting Selection API Into a Separate Specification
On Mon, Mar 17, 2014 at 1:59 PM, Robin Berjon ro...@w3.org wrote: My understanding from talking to various people is that at least part of the problem comes from the type of code that is currently deployed in the wild. An awful lot of it works around browser inconsistencies not through feature testing but through user agent switching. This means that when a given browser fixes a bug in order to become more in line with others (and presumably the spec), it actually breaks deployed code (some of which is deployed an awful lot). I don't think this is the primary issue. Most of the users of execCommand I've seen don't depend very much on specific behaviors, and only browser-switch on a few things, and this is a problem whenever browsers converge on common behavior. Generally browsers work around this by someone taking the hit and changing their behavior and dealing with a bit of interop fallout by evangelism, or in IE's case mode-switching. The major issue is that the feature is extremely complex, so it would require tons of resources invested by all the browsers to get interoperable, and this would introduce zillions of clear-cut bugs that would have to be fixed at the cost of even more resources. There just aren't enough consumers to be worth it. Sites that make non-trivial use of editing features mostly have given up and use JS libraries anyway. One suggestion has been to make at least the selection API interoperable, which seems achievable. So I'm very glad to see Ryosuke propose it here, I was about to suggest the same. Yes, this should mostly not be difficult, with a couple of exceptions (.modify and stringification come to mind). Another that I've been mulling over is to have something like contenteditable=minimal (bikeshed syntax at will). This would give you a caret with attendant keyboard motion and selection, but no ability to actually edit the content. Editing would happen by having a script listen to key events and act directly on the content itself. 
The hope is that not only is this a saner architecture for an editor, but it can also bypass most (possibly all, if the selection API is improved somewhat) browser bugs to do with editing. This would be possible using the beforeinput/input events that are already specced. Per spec, various standard actions like "delete next character" could be intercepted at a high level -- watch the beforeinput event, and if you see .command == "delete", cancel it and do your own thing. I don't think browsers actually implement the necessary bits for this, though. Also, it's not so hard to do this yourself with key handlers, although it might require a bit of work to get it to not be error-prone. On Mon, Mar 17, 2014 at 10:58 PM, Ryosuke Niwa rn...@apple.com wrote: I'm very pessimistic about the prospect of fixing execCommand. I think we have a much better chance of coming up with some lower-level API that JS libraries could use to build editors. Yes, especially the bits that are very hard to get right -- like "wrap this list of consecutive nodes in tag X" https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#wrapping-a-list-of-nodes and such. I've tried to spec some such algorithms separately in the spec (without speccing APIs for authors), but they're likely to still be buggy. Browsers mess this sort of basic operation up a lot, which is a good reason to expect authors won't get it right! We still should have execCommand specced well enough that a new browser could theoretically write a web-compatible implementation based only on the spec, but it's probably not worth the effort relative to other things. The biggest piece we're missing on the web platform today is mapping of key events to intended editing actions. e.g. how do you know that Shift+Enter should insert a line break as opposed to starting a new paragraph, or that Shift+Control+Left should extend the selection to the beginning of the line.
Relative to the difficulty of writing a full editing implementation, a JS editor implementation should be able to do this pretty easily, shouldn't it?
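A sketch of the beforeinput interception discussed in this thread. Note that the field names here follow the later Input Events drafts (inputType values like 'deleteContentForward') rather than the older .command == "delete" shape mentioned above, and browser support at the time of this thread was incomplete:

```javascript
// Cancel the browser's default "delete next character" action and run
// the editor's own logic instead. In a page this would be wired up as:
//   editor.addEventListener('beforeinput', e => onBeforeInput(e, myDelete));
function onBeforeInput(event, deleteNextChar) {
  if (event.inputType === 'deleteContentForward') {
    event.preventDefault();   // stop the built-in editing behavior
    deleteNextChar();         // the editor's custom implementation
  }
}
```

The same dispatch pattern extends to other intentions (insertParagraph, insertLineBreak, and so on), which is what makes intention-level events preferable to raw key handlers.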
Re: [Editing] Splitting Selection API Into a Separate Specification
On Fri, Mar 14, 2014 at 1:43 AM, Ryosuke Niwa rn...@apple.com wrote: It appears that there are a lot of new features such as CSS regions and shadow DOM that have significant implications for the selection API, and we really need a spec for the selection API that these specifications can refer to. Thankfully, Aryeh has done great work writing the spec for the selection API as part of the HTML Editing APIs specification [1], but no browser vendor has been able to give meaningful feedback or has implemented the spec due to the inherent complexity in HTML editing. As a result, the specification hasn't made much progress towards reaching Last Call or CR. Given the situation, I think it's valuable to extract the parts of the spec that define the selection API into their own specification and move it forward in the standards process so that we can make it more interoperable between browsers, and let CSS regions, shadow DOM, and other specifications refer to the specification. Any thoughts and opinions? If someone wants to work on part or all of the spec, I'm all in favor of them taking it over in whatever form they find useful. I don't have time to own a spec and don't expect to for the foreseeable future, so the entire spec is up for grabs from my perspective. The important thing is someone has to be willing to take it over. If you're volunteering, please feel free! I'm also available to answer any questions you have, albeit not always promptly. On Fri, Mar 14, 2014 at 3:36 AM, Ryosuke Niwa rn...@apple.com wrote: The separation helps move the selection API forward in the standards process. The problem here is that reviewing and agreeing on exact details of execCommand and other parts of the existing HTML Editing APIs specification is significantly harder than just reviewing and agreeing on the part of the spec that defines the selection API. FWIW, when I edited the spec, it was never in a standards process anyway, so this was historically moot. I wrote it Living Standard-style.
If someone else wants to take over part or all of it, they could write it either in the W3C Process or not, as they/their employer chose. I do agree that if someone wants to get the spec through the W3C Recommendation track, all the details of execCommand() implementation would have to be dropped, while almost all the selection stuff could be gotten through. IIRC, selection isn't so far from having two interoperable implementations, although there are doubtless a couple of nontrivial blockers. There are fairly reasonable tests as well, although probably lots more could be usefully written (mine mostly just test lots of permutations of a limited set of things). On Sat, Mar 15, 2014 at 7:44 PM, Johannes Wilm johan...@fiduswriter.com wrote: Hey, yes btw -- where should one go to lobby in favor of the editing spec? I have been communicating with several other browser-based editor projects, and there seems to be a general interest in more communication with the browser creators and spec writers. Currently the situation is that it's so broken in all the browsers that one needs to use a 100% JavaScript approach, painting the caret manually and creating a separate system for selections, to circumvent the main problems of contenteditable (for example: https://bugzilla.mozilla.org/show_bug.cgi?id=873883 ). Codemirror is a good example of that. I think it would be a good idea to hear everyone's (and especially the browser makers') thoughts on what should happen to contenteditable and the rest of it -- are there any plans to fix the main issues? Will it just never be fixed and eventually just be removed from browsers? If this is the case, a clear message concerning this will help all of us editor-makers make more informed decisions on whether to hope for browsers being fixed or just forget about this option.
As far as I know, none of the major browser implementers are expending significant resources on contenteditable right now, and JavaScript-based editing is likely to be the way to do things for a long time to come. Ryosuke could probably tell you more about WebKit.
Re: [editing] nested contenteditable
On Sun, Dec 22, 2013 at 2:22 AM, Johannes Wilm johannesw...@gmail.com wrote: Hey, is there any news on this or on content editable in general? Would it be a better idea to just forget about contenteditable and instead implement everything using javascript, the way Codemirror has done it ( http://codemirror.net/demo/variableheight.html)? I am not aware of any news on this. Authors should definitely use whatever tool works best for them -- last I checked, editors tend to need to use contenteditable for at least some things if they want the editing area to integrate nicely and behave as all users expect, but you need a lot of JavaScript to get it working acceptably. Browser implementers still need to care about contenteditable, because many websites still use it, so they can't just forget about it.
Re: [editing] Multiple ranges in a single selection support
On Mon, Aug 5, 2013 at 10:25 AM, Mihnea-Vlad Ovidenie mih...@adobe.com wrote: I would like to know more about the corner cases mentioned above and the problems encountered when trying to implement this feature. Are they documented somewhere I can take a look? The basic issue is that in ~100% of cases, the selection will contain only one range, so both web developer and implementer code will not be written with the multi-range case in mind and will therefore not handle it properly. There's lots of code inside Gecko itself that just handles only the first range and ignores all others. I'm also pretty sure I remember fixing some code once that iterated over the selection's ranges incorrectly in such a fashion that it crashed if there were multiple ranges -- the case was just never hit in testing, so no one noticed for years. And this is of course better than web developer-written code, which in my experience universally assumes .rangeCount is always either 0 or 1. Allowing non-contiguous selections is a very useful feature and would be great to support, not just for regions but even for more basic things like selecting a column of a table. But the way it's exposed by the API is unusable in practice. A better API would ensure that non-contiguous selections are not a special case -- for instance, exposing a list of selected nodes/data characters instead of a range. The code iterating over the selected items wouldn't have to behave differently depending on whether the selection is contiguous. You can't expect anyone to properly test codepaths that are only hit when the user has selected a table column. On top of that, it's also not always trivial to write code that properly supports multiple ranges. Suppose I write some code that indents the selection by wrapping the selected block(s) in blockquote. 
To support multiple ranges, I couldn't just take the one-range case and run it separately over each range, because if I had <p>[foo] bar [baz]</p> (both foo and baz selected in the same paragraph), that would indent the same paragraph twice. I'd have to rewrite the code to obtain a list of blocks to indent for each range separately, delete duplicates, and only then indent, or something like that. It is not reasonable to expect anyone to even try to do this for such a small corner case as multiple ranges in the selection, let alone to do it correctly.
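The de-duplication step described above can be sketched in plain JavaScript. This is an illustrative sketch, not the spec's algorithm: blocksInRange stands in for whatever per-range block-collection logic an editor actually uses, and blocks are compared by object identity.

```javascript
// Illustrative sketch: to indent a multi-range selection correctly,
// collect the blocks for each range separately, then de-duplicate
// before indenting, so a paragraph containing two selected ranges is
// only indented once. `blocksInRange` is a stand-in for real logic.
function blocksToIndent(ranges, blocksInRange) {
  const seen = new Set();
  const result = [];
  for (const range of ranges) {
    for (const block of blocksInRange(range)) {
      if (!seen.has(block)) { // identity comparison skips duplicates
        seen.add(block);
        result.push(block);
      }
    }
  }
  return result;
}
```

Naively running the one-range indent code per range would process the shared paragraph twice; collecting and de-duplicating first avoids that.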
Re: [editing] nested contenteditable
On Sat, Jun 1, 2013 at 1:27 AM, Ojan Vafai o...@chromium.org wrote: The main use case I can think of for mixed editability is an image with a caption. If anyone has other use-cases, that would be helpful in reasoning about this. http://jsfiddle.net/UAJKe/ A video with JavaScript controls comes to mind. Any embedded widget, really. Looking at that, I think we should make it so that a selection can never cross an editing boundary. So, in the image caption example, put your cursor right before the uneditable div, then: 1. Right arrow should move your cursor into the caption text. 2. Shift+right arrow should select the whole uneditable div. And delete/backspace can just be defined as extending the selection one position and then removing the selected DOM. Relatedly, if you are at the beginning of the caption text and hit backspace, nothing happens because the backspace had nothing to select (i.e. selections are contained within their first contentEditable=true ancestor). Delete/backspace are more complicated than just selecting one position and removing. For instance, backspacing at the beginning of a block is complicated, and the spec says (following Word and OpenOffice) that backspacing after a link should unlink it rather than delete the last character. (Browsers don't do the latter yet, but it's particularly essential when autolinking is supported -- otherwise it's annoying to unlink something that the browser helpfully linked without asking you.) The rest of what you say sounds reasonable. As to the question of whether delete/backspace should select or remove non-editable elements, I'm not opposed to giving this a try in Chromium and seeing if users are confused by it, but I'm skeptical it will make sense to people. I'm not sure either. It's what the "behavior when typing in contentEditable elements" document recommends for tables. Maybe it makes more sense to just delete it, and assume the user is clever enough to undo if they didn't want it deleted.
Re: [editing] defaultParagraphSeparator
On Fri, Feb 22, 2013 at 2:42 AM, Alex Mogilevsky alex...@microsoft.com wrote: Thanks for background, it helps a lot. I don't see a need to comment it by point so let me just reference it [1] and try to summarize. 1. There is no consensus on what the default should be. There are implementations favoring each of br, p and div. 2. It would be good to set it per contenteditable, but we are not sure how to express that. 3. There are no options other than p and div because there is no obvious reason why it would be useful. (my interpretation: if it is something that can't be done with a div, it probably wouldn't work as a default block element anyway). 4. Sometimes Enter inserts br or EOL character instead of a new element. That behavior is independent of the choice between p and div (but deserves a good definition too). Is that a fair summary? Yep, that's about right. I think the main reason it's hard to get a consensus is that it is not clear what problem the feature is trying to solve... In an ideal world, I would prefer that p be the only option, as IE does it. This is in fact what the spec originally required. But there are two reasons for the switch, which I added to the spec by request of people from WebKit and Opera: Opera already implemented the command by the name of opera-defaultblock, because it outputs p but they found some apps were changing it to div to avoid the margins: http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0045.html So that's evidence of a real-world use-case for switching from p to div -- although you'll have to ask Opera for details of what sites these are. Presumably they'd have the same problem with IE. The use-case for switching from div to p is as a transition feature for WebKit, which defaults to div. Now authors can at least opt into p for IE/WebKit/Opera, and deal with only two distinct browser behaviors instead of three. 
WebKit is not willing to change their default away from div because of the risk of breaking WebKit-specific content. If the only problem is the uncollapsed top border that annoyingly appears on first Enter - then it is overkill to globally change what HTML element should represent a paragraph of text (if you saw it on paper, you would call it a paragraph, right?), just because it has a different default style. At least someone thinks otherwise, according to Opera. Notably, a major use-case for contenteditable is rich-text e-mail. E-mail clients don't reliably support CSS at all, and I don't know if any of them support non-inline CSS, so you don't have any nice way to get rid of p's margins when sending.
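To make the p-versus-div choice concrete, here is a hypothetical helper (not from the spec) showing what the defaultParagraphSeparator setting amounts to: each paragraph the user creates with Enter gets serialized with the configured wrapper element, and only 'p' and 'div' are valid values.

```javascript
// Hypothetical illustration of what defaultParagraphSeparator
// controls: the wrapper element produced for each paragraph. Per the
// spec discussion, only 'p' and 'div' are permitted values.
function wrapParagraph(text, separator = 'p') {
  if (separator !== 'p' && separator !== 'div') {
    throw new Error('defaultParagraphSeparator must be "p" or "div"');
  }
  return `<${separator}>${text}</${separator}>`;
}
```

With 'p' the output inherits the default paragraph margins discussed above; with 'div' it does not, which is exactly why some apps switch.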
Re: [editing] defaultParagraphSeparator
Sorry for the delayed response -- I've been busy with other things and didn't have time to check my e-mail. Thanks a lot for the feedback and questions! On Mon, Feb 4, 2013 at 9:59 PM, Alex Mogilevsky alex...@microsoft.com wrote: There was a discussion here a while ago on desired default behavior for Enter in contenteditable and options for execCommand("defaultParagraphSeparator"): http://lists.w3.org/Archives/Public/public-whatwg-archive/2011May/thread.html#msg171 Did it ever get to consensus? Or is there new thinking on how that should work? I don't remember if there was consensus. I wound up speccing something based on IE/Opera's behavior (p by default). Unfortunately, the complexity of editing and the level of detail I write the spec in means that everyone other than me seems to have a hard time understanding most of the spec (and so do I a lot of the time . . .), but I can answer any questions people have about what I thought was best and why. On Tue, Feb 5, 2013 at 2:41 AM, Alex Mogilevsky alex...@microsoft.com wrote: * default styles (if 'p' is default, it adds default 1em margin before first line, which most people consider undesirable) I initially thought this was a significant issue, but then I realized the same issue exists anyway for lists and indent, and you can't get around it for them. You have to have at least some CSS if you want it to look nice -- particularly for indent, where the top/bottom margin is rarely desirable (since it hijacks blockquote for indentation). Also, the default margins for p match recent versions of Word, IIRC. Simon Pieters did point out that for e-mail, you can't add styles, so this is a reason to support div as well. But I do think that p is a better default. If IE and Opera would be willing to change to div and WebKit is not willing to match IE/Opera, I'd be in favor of changing the default to div for interop's sake. Otherwise I think it should stay p. * when should Enter insert a line break instead of block (e.g. 
when inside pre)? This is specified in the insertParagraph command, which behaves the same as hitting Enter: https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#the-insertparagraph-command The actual spec text might prove just a tad difficult to read, but the note at the top explains some of the important parts. The spec currently says a br should be inserted instead of a new block element for address, listing, and pre. My notes (View comments at the side in the normative text) explain the reasoning for this exact list: IE9 and Chrome 13 dev just break pre up into multiple pres. Firefox 5.0a2 and Opera 11.10 insert a br instead, treating it differently from p. The latter makes more sense. What might make the most sense is to just insert an actual newline character, though, since this is a pre after all . . . IE9 and Chrome 13 dev also break address up into multiple addresses. Firefox 5.0a2 inserts br instead. Opera 11.10 nests ps inside. I don't like Opera's behavior, because it means we nest formatBlock candidates inside one another, so I'll go with Firefox. listing and xmp work the same as pre in all browsers. For Firefox and Opera, this results in trying to put a br inside an xmp, so I go with IE/Chrome for xmp. TODO: In cases where hitting enter in a header doesn't break out of the header, we should probably follow this code path too, instead of creating an adjoining header. No browser does this, though, so we don't. For other elements, of course, you can use Shift-Enter (or platform equivalent) to produce a br instead, e.g., to produce a multi-line list item. * can/should the default block be set per editable area and how? 
This is bug 15522: https://www.w3.org/Bugs/Public/show_bug.cgi?id=15522 I really wanted this to be per-editing host only, with no document-wide flag, because in my experience document-wide flags mean authors have to write wrappers like function myCommand(a, b, c) { document.execCommand('usecss', false, true); document.execCommand(a, b, c); } just in case something else sneakily changed the flag when they weren't looking. But I decided not to block one on the other, so currently the spec has no way to do it per-editing host. * why only 'p' and 'div'? Because no implementation supports any other wrapper for the default paragraph separator, and there's no obvious reason why it would be useful. If people really wanted to allow blockquote as a default line separator, we could add it to the spec easily enough. (I considered adding br as an option, like Firefox now, but it would require an extra code path, which I don't think is worth it unless we really have a good reason.)
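The defensive wrapper pattern quoted above can be fleshed out as follows. This is a sketch of the authoring pattern, not a spec requirement; doc stands in for document so the pattern can be shown (and tested) without a browser.

```javascript
// Sketch of the defensive wrapper authors end up writing when a flag
// (here the legacy "usecss" flag) is document-wide: reassert it before
// every command, in case something else changed it. `doc` stands in
// for the browser's `document` object.
function makeCommandRunner(doc) {
  return function runCommand(command, showUI, value) {
    doc.execCommand('usecss', false, true); // reassert the global flag
    return doc.execCommand(command, showUI, value);
  };
}
```

A per-editing-host flag would make this boilerplate unnecessary, which is the point of the bug.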
Re: [editing] Is this the right list to discuss editing?
On Tue, Feb 19, 2013 at 11:26 AM, Ms2ger ms2...@gmail.com wrote: FWIW, Aryeh is currently studying full time and doesn't follow web standards discussions regularly. I do check them from time to time, though, and will check any personal e-mail I receive for the time being. In particular, I'm happy to answer any questions in public or private about the spec, particularly to help a new editor get the hang of it. It's giant and complicated and very hard to read -- which I suspect is an accurate description of implementations' source code as well! (At least I've heard terrible things about WebKit's implementation, and Gecko's I've seen. As specs get more precise, their complexity eventually matches that of implementations . . .)
Re: [XHR] Open issue: allow setting User-Agent?
(I noticed people talking about this on IRC and commented, and zcorpan pointed me to this thread.) On Tue, Oct 16, 2012 at 7:08 PM, Boris Zbarsky bzbar...@mit.edu wrote: The point is that a browser can act as if every single server response included Vary: User-Agent. And perhaps should. Intermediary caches _certainly_ should. In terms of correctness, yes, but that will make them useless as caches. If a moderate number of users with an assortment of browsers are using the same caching proxy, it's entirely possible that no two of them have the same exact User-Agent string. Varying on User-Agent in a heterogeneous browser environment is going to drop your cache hit rate to the point where the cache hurts performance more than it helps. Proxy caching is always going to break some pages, because not all pages serve correct caching headers. This can cause them to break just due to browser cache too, but more caching is going to break them more. So proxy caching is always a correctness-performance tradeoff. In practice, the loss in correctness is not worth the added performance for most users, which is why most Internet users are not (I think?) behind any sort of client-side proxy caching layer. (I'm not counting reverse proxies here.) Where the performance gain is worth it, such as behind an expensive or high-latency link, users will just have to be trained to try pressing Ctrl-F5 if pages break.
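The hit-rate argument above can be made concrete with a toy model. This is purely illustrative (not a real HTTP cache): key the shared cache by URL alone versus by URL plus User-Agent, and compare hit rates when every user behind the proxy has a distinct UA string.

```javascript
// Toy illustration of why an implied "Vary: User-Agent" kills shared
// caches: with per-UA keys, users with different UA strings never
// share cache entries. Not a real HTTP cache; just counts hits.
function hitRate(requests, varyOnUA) {
  const cache = new Set();
  let hits = 0;
  for (const { url, ua } of requests) {
    const key = varyOnUA ? `${ua}|${url}` : url;
    if (cache.has(key)) hits++;
    else cache.add(key);
  }
  return hits / requests.length;
}
```

Four users with distinct UA strings fetching the same URL yield a 75% hit rate without varying, and 0% with it, which is the "cache hurts more than it helps" scenario described above.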
Re: [IndexedDB] Problems unprefixing IndexedDB
On Thu, Aug 9, 2012 at 3:53 PM, Robin Berjon ro...@berjon.com wrote: Trying to evangelise that something is experimental is unlikely to succeed. But when trying out a new API people do look at the console a lot (you tend to have to :). It might be useful to emit a warning upon the first usage of an experimental interface, of the kind You are using WormholeTeleportation which is an experimental API and may change radically at any time. You have been warned. IMO, this is just the wrong attitude to take to the problem. The problem is not that authors are unwisely using experimental features and we should pressure them not to. The problem is that authors are quite rationally using features that are useful in the real world, and some people are sad that this means we have to actually stop changing them once they're used. The solution is not to get authors to use shipped features less. You aren't going to convince authors to stop using useful features no matter how much you insist they're experimental. The solution is for implementers to consider all shipped features frozen until proven otherwise, and stop maintaining the pretense that widely-used features are experimental or changeable just because they're behind a vendor prefix. It would help a lot if implementers stopped shipping new prefixed features to stable channels. I believe Mozilla already intends to do that for CSS features, and I hope it does so for DOM features too. If a feature is really unstable, don't ship it to enough users that you're creating a compat burden on yourself.
Re: [selectors-api] NAMESPACE_ERR or SYNTAX_ERR when both applied
On Sun, Jun 17, 2012 at 4:43 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/17/12 9:33 AM, Anne van Kesteren wrote: Always throwing SyntaxError is probably better. Also probably incompatible with a depth-first recursive descent parser implementation. Are we sure we want to overconstrain implementations like that? I'm not sure what Anne meant, but I'd think we should just always require SyntaxError, including for namespace errors. Do enough people really use namespaces that they deserve a separate exception? CSS itself treats namespace errors the same as syntax errors in stylesheets (right?), so it doesn't make sense to require Selectors APIs to distinguish them.
Re: Clipboard API spec should specify beforecopy, beforecut, and beforepaste events
On Wed, May 2, 2012 at 4:57 PM, Boris Zbarsky bzbar...@mit.edu wrote: I would think that disabling cut/copy/paste would apply to main menus too, not just context menus. Most people I know who use menus for this (which is precious few, btw, for the most part people I know seem to use keyboard shortcuts for cut/copy/paste) use the main menu, not the context menu... Ah, good point. One question is whether this use case (which I agree seems worth addressing) requires a new event. That is, is it better for the browser to fire events every time a menu is opened, or is it better for apps that want to maintain state like this to update some property on the editable area whenever their state changes and for browsers to just read those properties when opening a menu? The latter is, from my point of view as a browser implementor, somewhat simpler to deal with, since it doesn't involve having to worry about alert() or sync XHR or window.close() in menu-opening code. But I can see that it might be more complicated to author against... I'd have said that it would be easier as an author to not have to track state. But I'll certainly defer to Ojan's expertise here.
Re: [editing] input event should have a data property WAS: [D3E] Where did textInput go?
On Thu, May 3, 2012 at 12:44 AM, Ojan Vafai o...@chromium.org wrote: As I've said before, I don't think command/value should be restricted to contentEditable beforeInput/input events. I don't see any downside to making command, value and text all available for all three cases. It simplifies things for authors. The code they use for plaintext inputs can be the same as for rich-text inputs. If command/value make any sense for plaintext inputs, yes. As specced, and AFAICT as implemented in Gecko and Opera, execCommand() only operates on the contents of contenteditable areas, not plaintext inputs. If that were changed to match (AFAICT) IE and WebKit, then I'd agree that it would make sense to expose the same properties.
Re: Clipboard API spec should specify beforecopy, beforecut, and beforepaste events
On Wed, May 2, 2012 at 3:56 AM, Ryosuke Niwa rn...@webkit.org wrote: That might make sense given how confusing these before* events are. On the other hand, there are use cases to communicate enabledness of cut, copy, paste with UA. Maybe we can address this use case by letting websites override queryCommandEnabled('cut'), queryCommandEnabled('copy'), queryCommandEnabled('paste')? Aryeh, any opinions here? queryCommandEnabled() is pretty useless as it stands, so I have no problem making it more useful for specific commands. But browsers don't generally support the cut/copy/paste commands in public web pages at all. It would be quite confusing for queryCommandEnabled('cut') to mean can the *user* perform a cut rather than will execCommand('cut') do anything.
Re: Clipboard API spec should specify beforecopy, beforecut, and beforepaste events
On Wed, May 2, 2012 at 9:04 AM, Ryosuke Niwa rn...@webkit.org wrote: That's a good point. What would be a viable alternative then? The use-case is disable cut/copy/paste and also hide those options from context menus, right? I think these are two separate features. First, you want to prevent the action from showing in the context menu. Second, you want to prevent it from occurring. These shouldn't be conflated, because 1) users can cut/copy/paste without using the context menu, and 2) maybe when they opened the context menu you wanted to allow the cut/copy/paste but when they click the option something has changed and you no longer want to allow it, or vice versa. So I think we should have two separate sets of events: one type of event that fires when cut/copy/paste options would appear in a context menu, and one that fires when the cut/copy/paste is actually attempted, both sets cancelable. For the former, I'd suggest onbeforecontextmenu, with some way to disable specific options, like extra boolean parameters (or a dictionary) on the event. So you'd do something like addEventListener(beforecontextmenu, function(e) { if (foo()) { e.enabledOptions.cut = e.enabledOptions.copy = false }}). The logical name for the latter is onbeforecut/onbeforecopy/onbeforepaste, but those are taken. :( Can we maybe repurpose them anyway? (Aside: does it really make sense to fire separate events for cut and copy? cut is equivalent to copy followed by delete, so it would make sense for it to fire events like that. This way, oncopy handlers will fire for cut too, which is almost surely what's wanted. And if we support some type of ondelete or onbeforedelete event, they should fire for cut too. This means separate events probably aren't needed. But they should be treated separately for onbeforecontextmenu, if we have such an event.)
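The proposed event shape can be mocked up as follows. Everything here is hypothetical: neither beforecontextmenu nor enabledOptions exists in any engine; this just makes the proposal in the message above concrete enough to poke at.

```javascript
// Hypothetical mock of the proposed beforecontextmenu event: the
// browser would fire it before showing the menu, and the page would
// toggle individual entries via enabledOptions. None of this exists
// in any browser; it only models the proposal.
function fireBeforeContextMenu(handler) {
  const event = {
    type: 'beforecontextmenu',
    enabledOptions: { cut: true, copy: true, paste: true },
  };
  handler(event);
  return event.enabledOptions; // the browser would consult this
}

// Usage in the style of the message above (foo() is a stand-in for
// whatever app-specific condition disables the options):
// addEventListener('beforecontextmenu', function (e) {
//   if (foo()) { e.enabledOptions.cut = e.enabledOptions.copy = false; }
// });
```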
Re: Clipboard API spec should specify beforecopy, beforecut, and beforepaste events
On Wed, May 2, 2012 at 9:27 AM, Ryosuke Niwa rn...@webkit.org wrote: Sounds like beforecut, beforecopy, and beforepaste suffice then... Maybe these events are useful after all. I think they're useful, but very badly named -- authors will think they fire before every cut, copy, and paste. So while it's normally best to specify whatever browsers already support, in this case I think it would be best to introduce a new event and try to get rid of the old ones. The old ones are named too confusingly. Events for the latter are cut, copy, paste. Despite their names, they fire before editing commands are executed. Ugh. Well, that's confusing but not as bad as it could be. If browsers are interoperable on this score, I guess we have to keep them.
Re: [editing] input event should have a data property WAS: [D3E] Where did textInput go?
On Wed, Apr 4, 2012 at 10:07 PM, Ojan Vafai o...@chromium.org wrote: The original proposal to drop textInput included that beforeInput/input would have a data property of the plain text being inserted. Aryeh, how does that sound to you? Maybe the property should be called 'text'? 'data' is probably too generic. Sounds reasonable. Per spec, the editing variant of these events has .command and .value. I think .text is a good name for the plaintext version. It should just have the value that the input/textarea would have if the beforeinput event isn't canceled.
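One way to picture the proposed .text property (hypothetical; no engine exposes it) is as the value the control would end up with if the beforeinput event is not canceled: the current value with the selected span replaced by the inserted text.

```javascript
// Hypothetical model of the proposed .text property on beforeinput
// for plaintext inputs/textareas: the value the control would have if
// the event is not canceled, i.e. the current value with the
// selection [selStart, selEnd) replaced by the inserted text.
function proposedTextProperty(value, selStart, selEnd, inserted) {
  return value.slice(0, selStart) + inserted + value.slice(selEnd);
}
```

Typing with a collapsed selection is the selStart === selEnd case; pasting over a selection replaces the selected span.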
Re: Selection of a document that doesn't have a window
On Fri, Jan 13, 2012 at 5:12 PM, Ojan Vafai o...@chromium.org wrote: We could define it in terms of defaultView (or browsing context) and put our effort into getting interoperability on defaultView? This is what I've done for now: http://dvcs.w3.org/hg/editing/rev/4dc4d65cc87e At least behavior is pretty clear in the easy case of document.implementation.createHTMLDocument() or such. In more complicated cases, we probably want the same behavior as defaultView anyway, so if we're going to define such behavior precisely we may as well do it for defaultView instead of getSelection(). I've also filed a Mozilla bug: https://bugzilla.mozilla.org/show_bug.cgi?id=718741
Re: Selection of a document that doesn't have a window
On Fri, Jan 13, 2012 at 12:34 PM, Boris Zbarsky bzbar...@mit.edu wrote: I would prefer a definition that doesn't involve defaultView, actually. I don't expect browsers to converge defaultView behavior any time in the near or medium future, so the testability would be illusory: tests would just depend on whether browsers implement defaultView correctly... What well-defined alternative do you suggest? Is the .document of some Window? That would be easy enough to test in simple cases, but what if there's navigation and a reference to the Document is kept but the Window is no longer accessible, or something like that?
Selection of a document that doesn't have a window
What does document.implementation.createHTMLDocument().getSelection() return? * IE9 returns a Selection object unique to that document. * Firefox 12.0a1 and Opera Next 12.00 alpha return the same thing as document.getSelection(). * Chrome 17 dev returns null. I prefer IE's behavior just for the sake of simplicity. If we go with Gecko/WebKit/Opera, we have to decide how to identify which documents get their own selections and which don't. The definition should probably be something like documents that are returned by the .document property of some window, but I have no idea if that's a sane way to phrase it. So should the spec follow IE? If not, what definition should we use to determine which documents get selections?
Re: [editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements
On Wed, Jan 11, 2012 at 3:09 PM, Charles Pritchard ch...@jumis.com wrote: The reason is listed in WCAG2 section 2.1.2 and CR5. http://www.w3.org/TR/WCAG/ The items suggest that a standard means of moving focus be maintained. Users should be given simple instructions on how to move focus if the keyboard is trapped. When the tab key is trapped, I recommend having the escape key move focus and untrap tab. That said, that can interfere with full screen mode, which may also use escape with varying success. What do programs like Word do? Do they allow the user to escape the page and use tab to navigate the UI somehow?
Re: Pressing Enter in contenteditable: p or br or div?
On Thu, Jan 12, 2012 at 4:58 AM, Hallvord R. M. Steen hallv...@opera.com wrote: Probably a stupid question, but one I've always wanted to ask: couldn't we default to a different, smaller, possibly 0 margin for P when in editable content? As Markus says: it breaks WYSIWYG. The idea of contenteditable is you can write a blog post or something in a contenteditable area, then post the resulting HTML to your web page in non-editable form and have it look the same. Having contenteditable behave differently means that you write the post, get it looking the way you want it -- and then suddenly when you post it, it looks different for no obvious reason. On Thu, Jan 12, 2012 at 5:50 AM, Simon Pieters sim...@opera.com wrote: Currently the editing options available, other than enabling and disabling contenteditable, use the execCommand API. I don't see why we should switch to attributes for new editing options. To make editing options per editing host, I prefer this proposal: . . . As do I -- I suggested new attributes before I saw Ojan's suggestion. Indeed, e.g. shift+enter doesn't break out of lists, so it's not equivalent. Making it equivalent would be adding some complexity. Good point. I didn't think of that. So what's the use case? :-) If none are presented, I object to adding it based on the Avoid Needless Complexity and Solve Real Problems design principles. Agreed. That some authors are using it is not a strong enough reason to support it.
Re: Pressing Enter in contenteditable: p or br or div?
On Tue, Jan 10, 2012 at 3:50 PM, Ryosuke Niwa rn...@webkit.org wrote: p has default margins. That alone is enough for us not to adopt p as the default paragraph separator. On Wed, Jan 11, 2012 at 5:15 AM, Simon Pieters sim...@opera.com wrote: Sure, but some apps like to send their stuff in HTML email to clients that don't support styling, or some such. I used to think that this was a strong argument, but then I realized blockquote and ol and ul have default margins too. So if you want it to look right, you'll have to use a stylesheet. Also, it's worth pointing out that recent versions of Word have margins by default when you hit Enter. But Simon makes a good point: for the e-mail use-case, styling might not be available. So this is a decent reason to support div. Also, unfortunately, there is a lot of legacy content that relies on the fact that WebKit uses div as the paragraph separator, so we need a global or per editing-host switch regardless. This is also a good reason -- it lets preexisting apps that expect div opt into that behavior in new browsers, instead of being rewritten to support p. Okay, so what API should we use? I'd really prefer this be per-editing host. In which case, how about we make it a content attribute on the editing host? It can be a DOMSettableTokenList. Maybe something like <div editoptions="tab-indent"> where the attribute is a whitespace-separated list of tokens. To start with, we can maybe have tab-indent (hitting Tab indents) and div-separator (hitting Enter produces div). Does this sound like a good approach? If so, what should we call the attribute? And should it imply contenteditable=true, or should the author have to specify that separately? Also: are there any good use-cases for br? Allowing div instead of p adds basically no extra complexity, but allowing br would make things significantly more complicated. I almost want a global switch to toggle between legacy UA-specific behavior and new spec-compliant behavior. 
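The editoptions attribute floated above is hypothetical (it exists in no browser). As a whitespace-separated token list it would parse like this; the recognized tokens here are just the two suggested in the thread.

```javascript
// Sketch of parsing the proposed (hypothetical) editoptions content
// attribute, a whitespace-separated token list in the style of
// DOMSettableTokenList. Only the two tokens floated in the thread are
// recognized; anything else is ignored.
const KNOWN_TOKENS = new Set(['tab-indent', 'div-separator']);

function parseEditOptions(attrValue) {
  const tokens = (attrValue || '').trim().split(/\s+/).filter(Boolean);
  return new Set(tokens.filter(t => KNOWN_TOKENS.has(t)));
}
```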
That's something we definitely shouldn't have. If WebKit wants to go down the IE route and keep its legacy behavior for WebKit-specific content, it's welcome to, but web-facing behavior should be entirely standard. If we had a nonstandard mode for editing, it would be quirks mode all over again -- eventually we'd have to standardize that too so browsers are interoperable on pages that don't opt in to the standard behavior, and we'd just make everything more painful in the end. There's really no way to make this painless. We just have to be careful to make it as painless as possible. On Wed, Jan 11, 2012 at 4:43 AM, Markus Ernst derer...@gmx.ch wrote: IMO the ability to create clean, state-of-the-art HTML code should be one of the main goals of a new spec. The overriding goal of the spec is to get interop as quickly and painlessly as possible. Everything else is secondary. Once we have interop, we can talk about significantly improving the utility of the features.
Re: [editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements
On Tue, Jan 10, 2012 at 4:48 PM, Charles Pritchard ch...@jumis.com wrote: Would users press Esc to get out of the tab lock? Do they need to be able to get out of it? They can't in a regular word processor, so why should they be able to in Google Docs? If some users need to be able to override the feature, that's a good reason to have it supported by browsers, so browsers can override it. If the page just intercepts tab, you can't get around it. On Tue, Jan 10, 2012 at 7:28 PM, Ojan Vafai o...@chromium.org wrote: I agree the API is not the best. We should put execCommand, et al. on Element. That would solve the global flag thing for useCss/styleWithCss as well. It's also more often what a website actually wants. They have a toolbar associated with each editing host. They don't want a click on the toolbar to modify content in a different editing host. This is a change we should make regardless of what we decide for tabbing behavior IMO. What would be the behavior on Element? Something like * If the element is not an editing host, throw. * For things like styleWithCSS, set the flag for that editing host and its descendants only. * For regular commands like bold, run the command restricted to the descendants of that editing host. Whereas calling it on document would affect all nodes in the document. This sounds like an interesting idea. You're right that you don't want the bold button for one editing host affecting other editing hosts, which in my spec it currently does. I've filed a bug: https://www.w3.org/Bugs/Public/show_bug.cgi?id=15522 Calling indent doesn't actually match tabbing behavior (e.g. inserting a tab/spaces or, in a table cell, going to the next cell), right? I guess another way we could approach this is to add document.execCommand('Tab') that does the text-editing tabbing behavior. I'd be OK with that (the command name could probably be better). 
Current indentation behavior is here: http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#indenting-and-outdenting You're right that it doesn't match up with how tab works at all. The way I make other keystrokes work (Enter, Delete, etc.) is by mapping them to some command, following WebKit: http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#additional-requirements So I need to define a tab command. I've filed a bug: https://www.w3.org/Bugs/Public/show_bug.cgi?id=15523 The bitmask is not a great idea, but there are certainly editors that would want tabbing in lists to work, but tab outside of lists to do the normal web tabbing behavior. What are examples, and why? Historically, one of my biggest frustrations with contentEditable is that you have to take it all or none. The lack of configurability is frustrating as a developer. Maybe the solution is to come up with a lower level set of editing primitives in place of contentEditable instead of trying to extend it though. Yes, that's definitely something we need to do. There are algorithms I've defined that would probably be really useful to web authors, like wrap a list of nodes or some version of set the value of the selection (= inline formatting algorithm). I've been holding off on exposing these to authors because I don't know if these algorithms are correct yet, and I don't want implementers jumping the gun and exposing them before using them internally so they're well-tested. I expect they'll need to be refactored a bunch once implementers try actually reimplementing their editing commands in terms of them, and don't want to break them for authors when that happens.
[editing] Avoiding selections with no corresponding range, to simplify authoring
Anne asked me to investigate how exactly Ranges are added to Selections (bug: https://www.w3.org/Bugs/Public/show_bug.cgi?id=15470). It turns out browsers mostly don't interoperate. One interesting thing I found out is that in Gecko, if no one calls addRange/removeRange/removeAllRanges, rangeCount is always exactly one. This means getRangeAt(0) will never throw. This is actually great, because it avoids a common authoring bug -- rangeCount is rarely 0 in any browser, so authors often will call getRangeAt(0) unconditionally, which risks throwing IndexSizeError. I plan to change the spec to match Gecko, in requiring that user-created selections always have exactly one range (which is initially collapsed at (document, 0)). I'd like to go further, though. addRange() already doesn't allow more than one range per spec -- if there's an existing range, it replaces it. How about removeRange() and removeAllRanges() remove the range and then add a new one collapsed at (document, 0)? The common pattern of remove(All)Range(s) followed by addRange will still work the same, because addRange will replace the dummy range. But now rangeCount will *always* be 1, so getRangeAt(0) will *never* throw. This seems like it would prevent an entire class of authoring bugs (although I'm admittedly not totally sure about compat impact). Also, while I'm at it, how about collapsing at (document.documentElement, 0) instead of (document, 0)? This has the minor added benefit of avoiding Selection boundary points that aren't in an Element or Text node, which again makes things simpler for authors. If implementers are okay with this, I'll update the spec.
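Whatever the spec ends up requiring, the defensive pattern that avoids the IndexSizeError bug described above is short. A sketch -- the selection argument can be any object exposing the Selection interface's rangeCount and getRangeAt members:

```javascript
// Defensive pattern for the authoring bug described above: never call
// getRangeAt(0) without checking rangeCount first, since a selection may
// currently have no ranges at all.
function getFirstRange(selection) {
  return selection.rangeCount > 0 ? selection.getRangeAt(0) : null;
}
// In a browser: const range = getFirstRange(getSelection());
```

The common buggy pattern is calling getSelection().getRangeAt(0) unconditionally; the guard costs one comparison.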
Re: [editing] Avoiding selections with no corresponding range, to simplify authoring
On Wed, Jan 11, 2012 at 12:27 PM, Ryosuke Niwa rn...@webkit.org wrote: Does Gecko return a Range at (document, 0) for getRangeAt(0) in such cases? Okay, it looks like my testing before was off. Actually, all browsers have no range in the selection initially. But I was testing in Live DOM Viewer, which didn't fully reset the document state when the source code changed, because not all browsers clear the selection's range on unload. I fixed the spec to require the range to initially be null (like all browsers), and specified that the range has to be reset to null when the document is unloaded (like IE/Opera, not like Gecko/WebKit): http://dvcs.w3.org/hg/editing/rev/6aaa4b8455c9 I also added a test for the latter condition, and filed a Gecko bug (WebKit is also now buggy per spec): http://dvcs.w3.org/hg/editing/raw-file/6aaa4b8455c9/selecttest/unload.html https://bugzilla.mozilla.org/show_bug.cgi?id=717339 Since we seem to have interop on the selection's rangeCount initially being 0, I'm no longer enthusiastic about changing that. I'm fine with leaving the spec as-is now, unless implementers would prefer to change. On Wed, Jan 11, 2012 at 11:54 AM, Boris Zbarsky bzbar...@mit.edu wrote: Then you have to handle the case when document.documentElement is null. And yes, this has come up before; there are scripts out there that remove documentElements, do some stuff, insert new documentElements, etc. . . . This would happen anyway if you set up a selection inside document.documentElement and someone removes the documentElement; the normal range algorithm will give you endpoints inside the Document. so you really can't enforce this condition. Well, yes, and you can also do addRange() with whatever you like. But we can at least try to make the condition rarer, so bugs are less likely to crop up in practice when authors inevitably write incorrect code. Anyway, as noted, I retract my suggestion for other reasons, unless someone else is still interested.
Re: Pressing Enter in contenteditable: p or br or div?
On Wed, Jan 11, 2012 at 12:38 PM, Ryosuke Niwa rn...@webkit.org wrote: That sounds like a great idea. . . . I'm not sure if we should add just editoptions though given we might need to add more elaborate options in the future. It might make more sense to add a new attribute per option as in: <div contenteditable paragraphSeparator="p" tabIndentation> Ojan suggested in the other thread that we instead allow calling execCommand() on Element, and have the result restricted to that Element. That solves the global-flags problem too, and doesn't require new attributes. So you'd do div.execCommand('tabindent', false, true); or whatever. Someone could still call document.execCommand('tabindent', false, false), but that would be overridden if it was called on the editing host. I filed a bug on it: https://www.w3.org/Bugs/Public/show_bug.cgi?id=15522 Does that sound good too? Should enter behave like shift+enter when br is the default paragraph separator? Default paragraph separators are used in a couple of other places too, so it would be a little more work than that. But I just looked, and it wouldn't be as bad as I thought. So this is doable if people have any good use-cases.
Re: Pressing Enter in contenteditable: p or br or div?
On Wed, Jan 11, 2012 at 3:15 PM, Ryosuke Niwa rn...@webkit.org wrote: That sounds workable. Presumably it's only available on the editing host (as opposed to any element or any element with contenteditable content attribute). Right.
Re: [editing] tab in an editable area WAS: [whatwg] behavior when typing in contentEditable elements
On Fri, Jan 6, 2012 at 10:12 PM, Ojan Vafai o...@chromium.org wrote: There are strong use-cases for both. In an app like Google Docs you certainly want tab to act like indent. In a mail app, it's more of a toss-up. In something like the Google+ sharing widget, you certainly want it to maintain normal web tabbing behavior. Anecdotally, gmail has an internal lab to enable document-like tabbing behavior and it is crazy popular. People gush over it. Hmm, good point. Google Docs definitely wants tab to indent. We should make this configurable via execCommand: document.execCommand('TabBehavior', false, bitmask); I'm leery of global flags like that, because they mean that if you have two editors on the same page, they can interfere with each other unwittingly. useCss/styleWithCss is bad enough; I've seen author code that just sets useCss or styleWithCss before every single command in case something changed it in between. Could the author just intercept the keystroke and run document.execCommand('indent') themselves? It's not as convenient, I admit. Alternatively, perhaps the flag could be set per editing host somehow, and only function when that editing host has focus, although I'm not sure what API to use. The bitmask is because you might want a different set of behaviors:
- Tabbing in lists
- Tabbing in table cells
- Tabbing in blockquotes
- Tab in none of the above inserts a tab
- Tab in none of the above inserts X spaces (X is controlled by the CSS tab-size property?)
Bitmasks are bad -- many JavaScript authors don't understand binary well, if at all. Also, what are use-cases where you'd want to toggle indentation in all these cases separately? More complexity without supporting use-cases is a bad idea -- browsers have enough trouble being interoperable as it stands, and more complexity just makes it harder.
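The author-side workaround mentioned above (intercepting the keystroke and running the indent command) is only a few lines. A sketch -- the exec callback is a parameter purely so the logic can be shown without a live DOM; in a page it would be cmd => document.execCommand(cmd):

```javascript
// Sketch of intercepting Tab in a keydown listener and mapping it to
// indent/outdent instead of the normal focus-moving behavior.
function makeTabHandler(exec) {
  return function (event) {
    if (event.key !== "Tab") return;   // leave every other key alone
    event.preventDefault();            // stop focus from leaving the editor
    exec(event.shiftKey ? "outdent" : "indent");
  };
}
// In a browser, roughly:
//   editor.addEventListener("keydown",
//       makeTabHandler(cmd => document.execCommand(cmd)));
```

This is the inconvenient-but-possible version; a per-editing-host flag would let browsers do the same thing natively.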
Re: Pressing Enter in contenteditable: p or br or div?
On Fri, Jan 6, 2012 at 9:57 PM, Ojan Vafai o...@chromium.org wrote: I'm OK with this conclusion, but I still strongly prefer div to be the default single-line container name. Why? I don't like using div as a line separator at all, because it's also used as a block-level wrapper, while p is specifically meant to wrap lines and br is specifically meant to separate them. I wish that UAs never generated div to wrap lines to start with -- it means that authors can't insert div-wrapped editable content without the risk that it will be interpreted as a line wrapper instead of a block wrapper. Also, I'd really like the default single-line container name to be configurable in some way. Different apps have different needs and it's crappy for them to have to handle enter themselves just to get a different block type on enter. What's a use-case for wanting div or br rather than p? Something like document.execCommand('DefaultBlock', false, tagName). I really don't want more document-global flags. If such a switch is added, it should be per editing host. What values are valid for tagName are open to discussion. At a minimum, I'd want to see div, p and br. As one proof that this is valuable, the Closure editor supports these three with custom code and they are all used in different apps. That's not proof that they're valuable, just that people will use them if given the option. What are examples of apps that use div and br? Do you know why they use them? I'm tempted to say that any block type should be allowed, but I'd be OK with starting with the three above. For example, I could see a use-case for li if you wanted an editable widget that only contained a single list. As Simon says, making the list element itself contenteditable will work for that use-case. Then hitting Enter will make an li no matter what. On Tue, Jan 10, 2012 at 3:40 PM, Ryosuke Niwa rn...@webkit.org wrote: Single br tag is shorter than pairs of div tags when serialized.
True, but only slightly, and the difference is even smaller if you use p instead of div. This isn't enough of a reason by itself to justify the extra complexity of another mode. Are there other reasons?
Re: [editing] Feedback Link?
On Sun, Jan 8, 2012 at 2:28 PM, Doug Schepers schep...@w3.org wrote: In the status section of the HTML Editing APIs spec [1], you have detailed instructions for how people should provide feedback, but the links you provide are to the public-webapps archive and to your personal email, rather than a mailto link to the list. It might be handy to provide an encoded mailto link to make it easier for people to start a discussion, like this: <a href="mailto:public-webapps@w3.org?cc=a...@aryeh.name&subject=%5Bediting%5D%20">discussion on the HTML Editing APIs specification</a> (<a href="http://www.w3.org/Search/Mail/Public/search?type-index=public-webapps&index-type=t&keywords=%5Bediting%5D">public-webapps archive</a>) Thanks for the suggestion! I've made the change: http://dvcs.w3.org/hg/editing/rev/d278ee615900 http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html
Affiliation change
This is just a heads-up that as of the new year, I'm contracting for Mozilla instead of Google. I'll continue to work on specifications and tests as before, in particular including the HTML Editing specification.
Re: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?
On Tue, Nov 22, 2011 at 12:19 AM, Yehuda Katz wyc...@gmail.com wrote: I like .is, the name jQuery uses for this purpose. Any reason not to go with it? We might want it for something else. .matches clearly sounds like it's selector-related, and I have more trouble thinking of another meaning we'd ever really want for it.
Re: Adding methods to Element.prototype WAS: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?
On Tue, Nov 22, 2011 at 1:04 PM, Boris Zbarsky bzbar...@mit.edu wrote: Again, some decent data on what pages actually do in on* handlers would be really good. I have no idea how to get it. :( Can't browsers add instrumentation for this? You have users who have opted in to sending anonymized data. So for each user, on a small percentage of pages, intercept all bare-name property accesses in on*. Record the property name, and which object in the scope chain it wound up resolving to. Send info back to the mothership. There will be some perf impact, but it should be no big deal if you only do it a small percentage of the time for each user. Of course, it might require a bunch of work to actually code this kind of thing -- that I'm not in a position to judge. Moving forward, this kind of info-gathering will be really essential for us to figure out how we can change stuff. Right now we have to be super-conservative when making changes because we have no idea in advance what impact they'll have. This is not a good thing for the web platform, IMO. (Aside: If we're just looking at some binary question like whether a specific name like matches is doable, you should be able to do this even without user opt-in, with no privacy breach. Just send back noise with probability (n - 1)/n, and the real value with probability 1/n, for n fairly large (say 100,000). Then average all the values together, subtract (n - 1)/n times the mean of the distribution you picked the noise values from, multiply by n, and you get something very close to the true average, by the law of large numbers. E.g., if the data is a bit, send a random bit 99.999% of the time and the real value 0.001% of the time. Average all the values, subtract 0.499995 (that is, (n - 1)/n times 0.5), multiply by 100,000, and you have roughly the true average (error bars easily calculable). But the bit sent back by any given user would yield negligible information about that user to either the browser vendor or an eavesdropper, because it's almost surely noise.
The same approach would work for any value, provided you can come up with a plausible distribution for the noise -- which is almost certainly not the case for string values, say. This would all have to be reviewed by security teams, but it should be doable in principle. The advantage is your sample would actually be representative, which could be important in some cases.)
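The noise-averaging scheme in the aside is easy to simulate. A sketch, with a small deterministic generator standing in for Math.random so runs are reproducible, and a small n so a modest sample converges (all names here are illustrative):

```javascript
// Simulation of the randomized-response idea described above: each user
// reports a uniformly random bit with probability (n-1)/n and the true bit
// with probability 1/n; inverting the mixture recovers the population mean.
// A tiny linear congruential generator replaces Math.random so the run is
// reproducible.
function makeRand(seed) {
  let state = seed >>> 0;
  return function () {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296;
  };
}

function estimateRate(trueRate, users, n, rand) {
  let sum = 0;
  for (let i = 0; i < users; i++) {
    const truth = rand() < trueRate ? 1 : 0;
    // With probability 1/n report the truth, otherwise a random bit.
    sum += rand() < 1 / n ? truth : (rand() < 0.5 ? 1 : 0);
  }
  const observed = sum / users;
  // observed ~= (1/n) * trueRate + ((n-1)/n) * 0.5, so invert the mixture:
  return (observed - ((n - 1) / n) * 0.5) * n;
}
```

Note the trade-off the text alludes to: the larger n is (more privacy per user), the more reports you need before the inverted average is accurate; with n = 100,000 the sample would have to be enormous.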
Re: XPath and find/findAll methods
On Tue, Nov 22, 2011 at 7:08 PM, Jonas Sicking jo...@sicking.cc wrote: This expression finds all div elements which have at least 6 span descendants and where an odd number of those span elements have a data-foo attribute equal to its parent's data-bar attribute. It is obviously trivial to add arbitrary additional complexity to this expression. Trying to do the same thing in Selectors will just result in an incomprehensible mess. At the same time, XPath can't ever compete in expressiveness with JavaScript. Finding all div elements with a data-foo attribute that contains a prime number is not possible in XPath but trivial in JavaScript. I'm not convinced that it's worth investing in XPath. At least not beyond the low-hanging fruit of making most of the arguments to .evaluate optional. But I think trying to make selectors compete in expressiveness with XPath is a losing battle. This is the key thing. We're talking about JS APIs, so you can already walk the DOM and do anything you want. Or you can use selectors and get a limited set of effects much more concisely and efficiently. There is no need for yet a third language that's at an intermediate level of expressiveness and conciseness. In the cases that selectors can't fully handle, use selectors plus extra JS logic. This means knowing only two languages instead of three, and those two languages are ones authors have to know anyway. Authors would just have no reason to learn XPath even if it were easier to use, because the value it adds is too limited.
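To make the "selectors plus extra JS logic" point concrete, Jonas's prime-number example takes only a few lines: a selector handles the structural part, and plain JavaScript handles what selectors can't express. The isPrime helper is illustrative:

```javascript
// "Selectors plus extra JS logic": find div elements whose data-foo
// attribute holds a prime number -- impossible in Selectors or XPath alone,
// trivial with a selector plus a trial-division primality check.
function isPrime(n) {
  if (!Number.isInteger(n) || n < 2) return false;
  for (let d = 2; d * d <= n; d++) {
    if (n % d === 0) return false;
  }
  return true;
}
// In a browser:
//   const hits = [...document.querySelectorAll("div[data-foo]")]
//       .filter(el => isPrime(Number(el.dataset.foo)));
```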
Re: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?
On Mon, Nov 21, 2011 at 11:34 AM, Boris Zbarsky bzbar...@mit.edu wrote: The sticking point here is obviously item #2. Data needed on use of matches and is as barewords in on* attributes, specifically. I don't follow. matchesSelector is on Element, not Node, right? Why is it relevant to on* attributes? The lookup chain is first document then window, with no elements anywhere, right? Or am I misunderstanding you? I see why new proposed methods on Node like .prepend could be an issue (although we could leave most of those off Document too, as noted). If this is a recurring problem, could we consider implementing magic so that new methods on Document (or Node) that might cause problems are ignored in on* unless you prefix with document.? So generally a bare name will check for variables on the document first and then the window, but for the magic blacklist (matches, is, whatever causes problems) it will only check the window. Obviously this is not a great solution -- but I'd really hate us to lose out on the ideal names for common methods just because of a tiny number of sites using on*. It's possible that I'm just completely not understanding what you mean here, though.
Re: window.find already exists
On Mon, Nov 21, 2011 at 11:29 AM, Tab Atkins Jr. jackalm...@gmail.com wrote: That only interferes if .find() for selectors is defined on window. qSA is only defined on Document and Element, though, and I see no reason that .find wouldn't be the same. So then we get another built-in method that will do different things if you call it by its bare name in an on* attribute vs. in normal JS? find() in on* would be document.find(), while anywhere else it would be window.find(). I ran into this once with getSelection() on window vs. document, when Gecko had different implementations for the two, and it was really confusing -- there was no way I'd have figured it out if I were a typical web author. Hopefully we can just drop window.find(), though.
Re: [Selectors API 2] Is matchesSelector stable enough to unprefix in implementations?
On Mon, Nov 21, 2011 at 8:54 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: You're not misunderstanding, but you're wrong. ^_^ The element itself is put in the lookup chain before document. See this testcase:

<!DOCTYPE html>
<button onclick="alert(namespaceURI)">foo</button>

(namespaceURI was the first property I could think of that's on Element but not Document.) Awesome. It seems on* is even more pathological than I realized. So definitely, I don't think we want to avoid adding short names to Node or Element or Document forever just because of this. If the cost is making bare name lookup in on* slightly more pathological than it already is, I don't think that's a big deal. Authors who want to preserve their sanity should already be prefixing everything with window. or document. or whatever is appropriate. Let's add .matches() and just make it not triggered as a bare name from on*.
Re: HTML Editing APIs is already in scope [Was: Re: Draft Minutes: 31 October 2011 f2f meeting]
On Tue, Nov 8, 2011 at 9:17 AM, Arthur Barstow art.bars...@nokia.com wrote: My summary is: although HTML Editing APIs is in scope for WebApps, and we agreed to use public-webapps for related discussions [1], given no one has agreed to actively drive the spec in WebApps, we will not include it as an explicit deliverable in WebApps' charter update. If anyone disagrees with this summary, please speak up. I think this situation is ideal for the time being.
Re: Disabling non-collapsed selection
(sorry for the delay in responding, I was on vacation for about ten days) On Sat, Oct 15, 2011 at 1:51 PM, Ryosuke Niwa rn...@webkit.org wrote: Is there an interest in providing a way to prevent non-collapsed selection under some node in a document? And if there is, what are use cases? Authors periodically file a WebKit bug against our implementation of the selectstart event that they can't use it to disable selection. WebKit supports -webkit-user-select: none to do this but some authors apparently want to allow collapsed selection. I personally don't quite understand why authors ever want to do this but I'm not totally against the idea of providing a new mechanism for this if there are good use cases. As far as I know, the use-case is to prevent users from copying text easily. For instance, on this page: http://www.snopes.com/science/dhmo.asp Sites that have paid content only available to subscribers don't want subscribers to copy text to other places. Also, sites that are ad-supported might want users to come visit the original page (with the ads) instead of reading the text elsewhere. Or authors might just want credit for their work. There's no way we can stop authors from making things inconvenient for users -- they could always call getSelection().collapseToStart() every 50 ms or something. There's also no way we can stop users from copying if they're determined -- they could save the HTML and copy from there, say. I don't think we need to add features to the spec to make it easier for authors to stop users from copying, because a lot of authors will misuse them. I also don't personally think browsers need to add features to make it easier for users to evade anti-copying measures, because a lot of users will misuse them. The browser can't decide what copying is good or bad, and shouldn't assume that the author or the user is right. So I wouldn't worry about this much either way.
I certainly don't think a declarative feature to prevent all non-collapsed selections (or all copying) is a good idea. A lot of authors are overprotective of their content and would stop totally legitimate copying if given the chance.
Re: Disabling non-collapsed selection
On Mon, Oct 24, 2011 at 11:08 AM, Aryeh Gregor a...@aryeh.name wrote: As far as I know, the use-case is to prevent users from copying text easily. . . . although on second thought, why would authors want to allow collapsed selections, then? Maybe I'm just confused.
Re: CfC: publish a Candidate Recommendation of DOM 3 Events; deadline October 21
On Fri, Oct 21, 2011 at 4:42 PM, Ms2ger ms2...@gmail.com wrote: However, *I do object to the publication* of this specification because of the inappropriate resolution of the following issues (in no particular order): First (issue 123), it contradicts an uncontested requirement [1] in DOM4 forbidding the minting of new DOM feature strings, as reported by Anne. [2] Second (issue 179), it ignores the consensus about using DOMException instead of custom exception types like EventException, as noted in WebIDL, [3] which I reported. [4] Third (issue 130), the resolution made to add an informative WebIDL appendix is insufficient. The editors did not list any technical reason for this decision in their reply, [5] despite this being required by the Process document. [6] I agree with all three of these objections, and don't think the specification should progress to CR unless they're fixed.
Re: [editing] Using public-webapps for editing discussion
On Fri, Sep 23, 2011 at 4:09 PM, Arthur Barstow art.bars...@nokia.com wrote: With this understanding, and having not noticed any objections to Aryeh's proposal, I think we should consider Aryeh's proposal as accepted. Thank you. I've updated the spec and will use this list accordingly in the future: http://dvcs.w3.org/hg/editing/rev/20505f74e222
Re: [editing] Using public-webapps for editing discussion
On Thu, Sep 22, 2011 at 7:33 AM, Arthur Barstow art.bars...@nokia.com wrote: It seems to me, that by virtue of using public-webapps, it does give WebApps WG a role e.g. to at least comment on the CG's editing spec. [Whether such a role is official or not is probably just splitting hairs.] I absolutely would like comments from everyone who's interested, whether individuals or organizations or Working Groups. That applies no more to the WebApps WG than anyone else, though. I'm more interested in what the comments are than where they come from. And speaking of the spec, would you please clarify which spec is in scope for the CG: http://aryeh.name/spec/editing/editing.html or: https://dvcs.w3.org/hg/editing/ They're the same. As you can see, the aryeh.name spec links to dvcs.w3.org as its primary version control. The script I use to update the aryeh.name spec (https://dvcs.w3.org/hg/editing/file/ee2791b98b92/publish) also pushes the updates from my local git repository to dvcs.w3.org. Actually, I just realized you can view the same spec here: http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html. DOM4 uses a dvcs.w3.org URL for its latest Editor's Draft, so I suppose I might as well too. I've changed the aryeh.name URLs to redirect to dvcs.w3.org, and updated the spec to link to those. There are no longer any aryeh.name URLs left in the spec except my e-mail address. Would you also please explain what you mean by your hoping it will *not* be necessary for the editing spec to move to the W3C's Recommendation track (f.ex. why do you feel this way)? 
I've explained myself at some length elsewhere, such as the first comment by me here: https://plus.google.com/105458233028934590147/posts/h7nsT7wuNmX I later explained why I think Community Groups address a lot of the issues I see with the standard W3C procedures: https://plus.google.com/100662365103380396132/posts/TSCsoGYSC2h I hope that the Community Group initiative will be successful enough that it isn't perceived as necessary to move specs developed there to traditional W3C Working Groups. I'd like to see CGs become an alternative to WGs, not just a gateway to them. Is there consensus within the CG to not move the spec to the REC track? The spec is in the public domain and anyone can theoretically submit it to the REC track, so consensus isn't an issue either way. However, I hope others will not try to undermine the new Community Group process by taking its specs away until we've had a chance to give it a fair try. Perhaps experience will wind up demonstrating that the Process still serves a useful purpose for specs like HTML Editing APIs, but we won't know unless we try.
Re: [editing] Using public-webapps for editing discussion
On Thu, Sep 22, 2011 at 1:04 PM, Charles Pritchard ch...@jumis.com wrote: Does it have to be an either-or situation? Given that there are pressures to publish in REC, to have a version which follows various procedures, it seems plausible that the two can coexist. That's true, but there's no rush to create an extra copy. The spec wouldn't be ready for CR for at least a year or two, so there's no advantage at all to having extra EDs and WDs floating around. People can give feedback on the preliminary drafts just as well whether it's officially on REC track or not. If it proves to be useful to have a copy published in the WebApps WG too, that can easily be arranged later. For the time being, I would like to use this opportunity to test whether Community Groups can stand on their own *without* merely being satellites of regular Working Groups. For instance, Community Groups have their own patent policy, and it remains to be seen whether that will be effective enough without the regular patent policy being applicable to the same drafts. We won't find out if the same draft is covered by the regular patent policy as well. If there are any deficiencies with Community Groups as compared to regular Working Groups, we won't find out if the draft is a Working Group deliverable too. Again, none of this is to deny the possibility of the draft eventually being moved to REC track. But I don't yet want to deny the possibility of the draft *not* being moved to REC track, either. We should keep our options open until we see how well CGs work. Though it's sometimes cumbersome, I've accepted that I must review at least two drafts when looking at specs these days. I'm at peace with that, now. I'm not. I would like to avoid multiple drafts if at all possible. Fortunately, no notable spec but HTML5 (and semi-broken-off parts like Web Sockets or Web Workers) has multiple versions that are appreciably different. 
If there wind up being multiple drafts for licensing or patent reasons, I'd expect them to be exact mirrors, as with DOM4.
Re: [editing] Using public-webapps for editing discussion
On Thu, Sep 22, 2011 at 2:44 PM, Arthur Barstow art.bars...@nokia.com wrote: It appears you are intentionally using comments here to differentiate contributions. Is that right? Right. I ask because, as I understand the CG process: before a person can make a contribution to a CG spec, they must agree to a CLA for all of the CG's specs; and a CG is only supposed to accept contributions from its CG members. If your CG uses WebApps' list, how will contributions from non-CG people be managed/tracked and how will the FSA be managed e.g. if non-CG contributions are accepted? I spoke with Ian Jacobs about this. He clarified that contributions only means spec text. To date, I've written all actual spec text myself, and I expect this to continue. It's usual that only the editor writes the actual text of the specifications they edit. If for some reason I wanted to accept spec text from someone else, they'd have to submit it through the CG and we'd ensure it was properly tracked for legal reasons. As I understand it, it couldn't be submitted on public-webapps, but that's not a problem -- I just want to use public-webapps for discussion.
Re: [editing] Using public-webapps for editing discussion
On Mon, Sep 19, 2011 at 12:48 PM, Arthur Barstow art.bars...@nokia.com wrote: Aryeh - coming back to your question below ... Since you are the Chair of the HTML Editing APIs CG [CG], would you please explain what you see as the relationship between the CG and WebApps vis-à-vis the Editing spec? In particular, what role(s) do the CG and WG have? For example [1] indicates the CG already has a mail list (public-editing) so when would it be used versus public-webapps? I do not intend to use any of the mailing lists created for the CG at all. We don't need our own mailing lists -- it will just fragment discussion. The editing spec is too small to deserve its own list. If it turns out we can use public-webapps, I'll ask that the links on the CG page point only to that, and that the CG lists be deleted. If the CG lists have to continue to exist for whatever reason, I'll make sure to tell anyone who uses them to use public-webapps instead. If we can't use public-webapps, then I'll continue using the whatwg list instead. I won't use editing-only lists regardless.
Re: [editing] Using public-webapps for editing discussion
On Mon, Sep 19, 2011 at 12:48 PM, Arthur Barstow art.bars...@nokia.com wrote: Since you are the Chair of the HTML Editing APIs CG [CG], would you please explain what you see as the relationship between the CG and WebApps vis-à-vis the Editing spec? In particular, what role(s) do the CG and WG have? I notice you asked a more general question here too that I didn't answer. My take is that the CG will be the group that publishes the editing spec for the foreseeable future. However, all discussion and development should occur in preexisting, established fora, preferably in the W3C. This means using fora that are specific to particular Working Groups, such as public-webapps, even though those Working Groups aren't formally involved in developing the editing spec. So currently, I don't see the WebApps WG as having any official role in developing the editing spec. I'd only like to be able to use its discussion list, since a lot of interested parties are already subscribed. Eventually, if it turns out to be necessary to move the spec to the REC track (although I hope it's not), I expect that will be at the WebApps WG, given its charter. But that's not an immediate consideration.
Re: [editing] Using public-webapps for editing discussion
On Fri, Sep 16, 2011 at 1:44 PM, Charles Pritchard ch...@jumis.com wrote: I don't think it's malicious. But, Google has unprecedented control over these W3C specs. They are absolutely using that control to benefit their priorities. Google has exercised no control over my spec. I've written it entirely at my own discretion. Various individuals have given me feedback publicly or privately about the spec, and I've taken their feedback into consideration based on what I think its technical merits are. The two people who have the most influence are Ehsan Akhgari (Mozilla) and Ryosuke Niwa (Google), because they're the ones who will be implementing it. I don't give Ryosuke any more say than Ehsan just because he works for Google. Nor do I care more about Google products than others, except to the extent that they're more popular or I'm more familiar with them or the teams that develop them give more or better feedback. Just to be absolutely clear here: I'm an outside contractor working for Google. I have never set foot inside a Google office, nor do I have access to any internal Google mailing lists or other resources. The only time I've met in person with anyone from Google about my work was at a two-day Mozilla/Google meetup a few weeks back at Mozilla Toronto. The only person within Google who has any direct authority over my work is Ian Hickson, and he hasn't read most of the spec, let alone told me how I should write it. Google employees send me feedback publicly and privately, but so do others. The extent of Google's involvement with my work is Hixie suggesting I work on HTML editing, and me submitting an invoice occasionally and getting paid. If you want to say that in the end I only care what browser implementers think, that's a fair point. But Google has nothing to do with it. This puts non-vendors in a bad situation. Where Google has purchased the space to play both sides of the game, the rest of us are struggling to have our use cases accepted as legitimate. 
By funding so many editors, for so many years, they gained control of the specs. Google has no control over the specs in practice. Individuals do, who in some cases are paid by Google. I am not receiving any marching orders from higher-ups beyond write specs for browsers to implement, and from what I've heard, the same is true for regular employees of Google too. If you would like to criticize our approaches to spec writing, criticize them as the individual opinions they are, not as part of a plot by Google. They use that position to knock-down use cases. When a use case serves Google Docs, or Gmail, it's heard. When it does not, it's shuttered. Point me to anywhere where I ignore use-cases because of who presented them. (Obviously, except for the fact that I'll prioritize use-cases that affect more users.) I'll listen very seriously to what anyone on the Gmail or Docs team says, but no more than Yahoo! Mail or TinyMCE or any other major HTML editing developers. The goal is to make APIs that anyone can use. All this is beside the point, though. If you want more feedback from W3C stakeholders, then you should be happy that I want to use the public-webapps list.
Re: [editing] Using public-webapps for editing discussion
On Wed, Sep 14, 2011 at 7:30 PM, Arthur Barstow art.bars...@nokia.com wrote: Since some related functionality was included (at one point) in the HTML5 spec, it seems like we should ask the HTML WG for feedback on Aryeh's email. Aryeh told me there are some related bugs: http://www.w3.org/Bugs/Public/show_bug.cgi?id=13423 http://www.w3.org/Bugs/Public/show_bug.cgi?id=13425 Maciej, Sam, Ruby - do you have a sense of whether the HTML WG has a (strong) opinion on Aryeh's question below? I should point out that the WebApps WG's charter lets it take on specs split out from HTML5. A fortiori, merely discussing such specs here should be no impingement on the HTML WG's scope. On Thu, Sep 15, 2011 at 12:31 AM, Charles Pritchard ch...@jumis.com wrote: I don't see Shelley Powers' objection being addressed. She has expressed concerns that the HTML Editing APIs have been taken out of W3C WGs and associated processes. Your wording suggests that the functionality was once meaningfully specified within a W3C WG. This is not the case. The specification text in the HTML5 draft was unusable and would have had to be removed eventually anyway, because it was untestably vague. The current HTML Editing APIs specification was written from scratch and was never within the W3C until now, when it was moved into a Community Group. Community Groups are within the W3C. Presumably the W3C created Community Groups because it would like people to use them for specification development, so using them for that purpose should be uncontroversial. The specification is not covered by W3C's Process, but in my opinion that's a good thing, for reasons I have laid out elsewhere in detail. Apple, Google and Microsoft representatives have vetoed rich text editing as a supported use case for public-canvas-api, so the Google/WHATWG editing specification is now the -only- supported solution for developers to author editing environments. 
It is not accurate to refer to the specification as Google or WHATWG. It's in the public domain, so Google has no more right to it than anyone else. Google paid for its development up to this point, but no one from Google but me has exercised any discretion as to its contents, and I'll continue working on it under different employment if necessary. The spec has nothing to do with the WHATWG, except that I used their mailing list for a while. You can refer to it as the HTML editing specification, since it's the only one. Or the HTML Editing APIs specification, to use its title. If you would like to disambiguate, you can refer to it as mine, since I'm the author and editor. Aryeh, consider releasing more authority to the W3C process. The specification is fairly mature, I'm not seeing push-back on this spec, and I know that there are several voices which would better served through formal process. I disagree. I don't believe that the W3C Process is useful, and in fact I think it's actively harmful, at least for the type of spec I'm working on. I support the W3C Community Groups initiative and believe it will serve a very valuable purpose, and I object to others' attempts to undermine the W3C's goals in undertaking that initiative. If it eventually does prove useful to move the specification to REC track, that can easily be done at any later date. There is nothing to gain and much to lose by prematurely abandoning this trial of the W3C's bold and commendable attempt to introduce alternative, less cumbersome ways to develop web specifications. Also, try to get this onto the hg repositories, in the same style that DOM4 has been entered. It works well for maintaining your CC0/WHATWG labels while also providing the W3C with a publishable draft under their own restrictions. 
The authoritative version control history has been at dvcs.w3.org since Ian Jacobs gave me access a couple of days ago: https://dvcs.w3.org/hg/editing Note that this is the first link for version history at the top of the draft, with the second one being a github mirror for those who prefer git: http://aryeh.name/spec/editing/editing.html Currently the specification itself is still hosted at aryeh.name because the Community Group technical infrastructure isn't finished yet. As soon as I'm able to post an up-to-date version of the spec at w3.org, I'll move it there and change the aryeh.name URL to a redirect.
[editing] Using public-webapps for editing discussion
For the last several months, I've been working on a new specification, which I hosted on aryeh.name. Now we've created a new Community Group at the W3C to host it: http://aryeh.name/spec/editing/editing.html http://www.w3.org/community/editing/ Things are still being worked out, but one issue is what mailing list to use for discussion. I don't want to create new tiny mailing lists -- I think we should reuse some existing established list where the stakeholders are already present. Previously I was using the whatwg list, but as a token of good faith toward the W3C, I'd prefer to switch to public-webapps, even though my spec is not a WebApps WG deliverable. (If it ever does move to the REC track, which the Community Group process makes easy, it will undoubtedly be in the WebApps WG.) Does anyone object to using this list to discuss the editing spec?
Re: RfC: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]
On Sun, Sep 4, 2011 at 9:12 AM, Arthur Barstow art.bars...@nokia.com wrote: Some members of the group consider the D3E spec as the highest priority of our DOM-related specs and they have put considerable resources into that spec. Doug and Jacob will continue to lead that spec effort, and as I understand it, a CR for D3E is imminent. I expect the group to help progress that spec. At the same time, others members have put substantial resources into DOM Core (and closely related functionality such as DOM Range). Naturally, they want to preserve that investment and I support that work continuing. The real question is not who's invested what, it's what browsers will implement. If we're moving toward a situation where IE will implement D3E and everyone else will implement DOM Core's idea of events, and both groups will claim to be implementing the standard, that's an absolutely terrible idea and we need to put a stop to it right now. If the only real reason for it is because different editors or employers have made investments in different bodies of spec text, instead of because browser implementers actually disagree on what they should implement, that's even worse. I would object in the strongest terms to progressing any standard to CR if it contains features that are specified differently in a different standard, if it looks plausible that different implementers will follow different versions. (I have not looked at the content of D3E or DOM Core, though, so I can't say specifically how bad the situation would be if this happened, nor which should be retired in favor of the other.)
Re: before/after editaction
On Wed, Sep 7, 2011 at 5:47 AM, Olli Pettay olli.pet...@helsinki.fi wrote: What happens if beforeeditaction tears down the document, or changes the document significantly or closes the window etc. (Those are generic problems with before* events) It shouldn't make any difference. The behavior of all the edit actions is well-defined for any document state. This kind of thing is only a problem for something like a mutation event, where the exact action to be performed is predetermined and might no longer make sense after DOM changes. But if you're just doing document.execCommand("foreColor", false, "red"), then it doesn't matter what any code does that runs before it. If it destroys the document or gets rid of the selection or whatnot, execCommand() will behave as it normally does in such a situation, probably either doing nothing or throwing an exception.
Re: Some way to change an element's name would be useful
I filed a bug against DOM Core: http://www.w3.org/Bugs/Public/show_bug.cgi?id=13971
Re: [selectors-api] Return an Array instead of a static NodeList
On Tue, Aug 30, 2011 at 4:33 AM, Jonas Sicking jo...@sicking.cc wrote: My point was that it was a mistake for querySelectorAll to return a NodeList. It should have returned an Array. Sounds like people agree with that then? I don't have a problem with that, if it can be changed safely. However, some things do have to return NodeLists, at least if the returned list is live. In that case, it's still useful to have the Array methods available.
Re: Some way to change an element's name would be useful
On Tue, Aug 30, 2011 at 4:44 PM, Karl Dubost ka...@opera.com wrote: On 29 August 2011 at 14:57, Aryeh Gregor wrote: In editing, it's common to want to change an element's name. For instance, document.execCommand("formatBlock", false, "h1") will change the current line's wrapper to an h1. Unbolding <b id=foo> should produce <span id=foo>. Does that also mean that if you feed completely different markup, you could reassign a different document in the browser? Could it be the equivalent of an XSLT transform at the top of the document? Sorry, I don't understand the question. Could you elaborate?
Re: [selectors-api] Return an Array instead of a static NodeList
On Thu, Aug 25, 2011 at 7:17 PM, Jonas Sicking jo...@sicking.cc wrote: .push and .pop are generic and work on anything that looks like an Array. However they don't work on NodeList because NodeList isn't mutable. . . . None of these are *mutable* functions. Oh, right. I misunderstood you. Yes, obviously we wouldn't expose things like .push or .pop on NodeList, since they wouldn't make sense. But we should expose things like .forEach, etc. Any reason not to?
Re: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]
On Fri, Aug 26, 2011 at 12:48 AM, Jonas Sicking jo...@sicking.cc wrote: The point is that if it's just a chapter in a larger spec, how do I know that there isn't other important information in the larger spec that I have to read in order to get an understanding of the full feature. The same applies if it's a standalone spec. Microdata is an example of a spec with so many dependencies on HTML5 that having it in its own spec is kind of silly: http://dev.w3.org/html5/md/Overview.html#dependencies A lot of features just aren't orthogonal. DOM mutation events are a great example of something that's tightly coupled to DOM operations, such that everything DOM-related needs to account for them, and it makes little sense to have them in a separate spec from DOM Core. Things like Traversal and Range could be in separate specs, but they're related enough and short enough that having them in Core also makes sense to me, and I think we should just go with whatever the editor finds most convenient. If they delay LC or we want them to progress faster for patent policy reasons, that's a separate story. I do think the HTML5 spec is ridiculously large and could do with being split up into several mostly independent chunks. A spec shouldn't be so large that you don't want to close the tab because it takes too long to reopen. But it also shouldn't be so small that you have to keep a dozen different tabs open to figure out anything nontrivial. CSS3 specs are far too small. I think DOM Core is currently in a reasonable middle ground where it's still fair to add more material to it if it's relevant, just not an excessive amount more. I'm not talking about authors, I'm talking about browser vendors. As someone looking to implement a spec, I'm very interested in knowing which parts of the spec have consensus and which ones don't. This is a separate issue. New features and old features have to go in the same drafts regardless, for sanity's sake. 
If we want to mark them up clearly, we have to do this whether they're in a big spec or a small spec.
Some way to change an element's name would be useful
In editing, it's common to want to change an element's name. For instance, document.execCommand("formatBlock", false, "h1") will change the current line's wrapper to an h1. Unbolding <b id=foo> should produce <span id=foo>. My editing spec defines an algorithm for this http://aryeh.name/spec/editing/editing.html#set-the-tag-name which is used in a bunch of places. The thing is, one requirement for editing is that you want to preserve the user's selection. Real-world use-case: some editors want to produce <strong> instead of <b> for bold. One way to do this is to let the browser create <b> tags via the bold command, then iterate through them all and change them to <strong>. But this will involve removing the <b> element from the DOM, and if the selection was inside it, it will now be collapsed outside it. There's no way for the author to avoid this except manually saving and restoring the selection with fixups. And that won't work either for ranges other than the selection. We can't actually change the tag name of the node in place, because then it would have to implement a different interface in general. But we could have a setTagName() method that creates a new Element with the given tag name, moves the children, copies the attributes, puts it in the right place, fixes up the Ranges, then returns the newly-created Element. Does this seem reasonable to anyone, or is it too confusing that the object will be different?
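To make the proposal concrete, here's a rough sketch of what the script-level part of such a method would do. The name setTagName() is from the proposal above; everything else is an assumption for illustration, and a native version would additionally fix up live Ranges and the selection, which script cannot fully do:

```javascript
// Hypothetical helper mirroring the proposed setTagName(): create a
// replacement element, copy the attributes, move the children, swap
// the new element into the tree, and return it. The original element
// ends up detached, so callers must use the return value.
function setTagName(element, newName) {
  var replacement = element.ownerDocument.createElement(newName);
  // Copy attributes onto the replacement.
  for (var i = 0; i < element.attributes.length; i++) {
    var attr = element.attributes[i];
    replacement.setAttribute(attr.name, attr.value);
  }
  // Move all children across (firstChild shifts as nodes are removed).
  while (element.firstChild) {
    replacement.appendChild(element.firstChild);
  }
  // Swap the replacement into the original's position.
  if (element.parentNode) {
    element.parentNode.replaceChild(replacement, element);
  }
  return replacement;
}

// Usage: turn <b id=foo>text</b> into <span id=foo>text</span>:
//   var span = setTagName(document.querySelector("b#foo"), "span");
```

The selection-preservation problem described above is exactly what this script version cannot solve, which is the argument for a native method.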
Re: Some way to change an element's name would be useful
On Mon, Aug 29, 2011 at 3:40 PM, Boris Zbarsky bzbar...@mit.edu wrote: Shades of http://www.w3.org/TR/DOM-Level-3-Core/core.html#Document3-renameNode That has some good catches I hadn't thought of -- it preserves event handlers and custom JS attributes too.
Re: [selectors-api] Return an Array instead of a static NodeList
On Thu, Aug 25, 2011 at 2:33 AM, Jonas Sicking jo...@sicking.cc wrote: That works, but what is the advantage? The same advantage as having those methods work for Array. :) They're useful for lots of stuff. And .push/.pop or other mutating functions wouldn't work. Right. I'm only talking about the methods that are already generic and work with anything that looks like an Array: filter, forEach, every, etc. On Thu, Aug 25, 2011 at 3:03 AM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Aug 24, 2011 at 11:47 PM, Julien Richard-Foy jul...@richard-foy.fr wrote: All mutable functions will work (forEach, map, etc.) and bring a better expressiveness to the code. Not if the 'this' object is a NodeList. This works fine right now: alert( [].filter.call(document.querySelectorAll("*"), function(elem) { return elem.textContent.length > 6 }) .map(function(elem) { return elem.tagName }) .join(", ") ); And I use that pattern a *lot*, but it's both verbose and extremely unintuitive. Why can't that be just this? alert( document.querySelectorAll("*") .filter(function(elem) { return elem.textContent.length > 6 }) .map(function(elem) { return elem.tagName }) .join(", ") ); Likewise for all Array-like types, but NodeList is the most common.
Re: [selectors-api] Return an Array instead of a static NodeList
On Sun, Aug 21, 2011 at 1:52 PM, Julien Richard-Foy jul...@richard-foy.fr wrote: Since Javascript 1.6, a lot of useful collection functions are defined for Array [1]. Unfortunately, they can’t be used directly with results returned by .querySelectorAll, or even .getElementsByTagName since these functions return NodeLists. You can already use these methods with .call() if you want, like: [].forEach.call(nodeList, fn). But this is a highly unintuitive hack -- I don't see why nodeList.forEach(fn) shouldn't work. I understand the DOM API is defined without a language in mind, but these collection functions are really useful, easy to implement and already available in most mainstream languages. Therefore, why not create a base Traversable type which would be implemented by all collection types (like NodeList) and which would provide the so useful bunch of iteration methods? Are there some issues or drawbacks I did not think of? This sounds like a good idea. It's not what the subject of your e-mail says, though (Return an Array instead of a static NodeList). I think we should keep returning a NodeList, just make it have the same iteration methods as an Array. On Wed, Aug 24, 2011 at 1:27 PM, Jonas Sicking jo...@sicking.cc wrote: I agree with this, but it might be too late to make this change. The problem is that if we returned an Array object, it would not have a .item function, which the currently returned NodeList has. I guess we could return a Array object and add a .item function to it. Or return a NodeList and add .forEach/.filter/etc. to it?
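As a sketch of that last option, grafting the non-mutating Array iteration methods onto NodeList could look something like this (illustrative only; the function name is made up for the example):

```javascript
// Copy the generic, non-mutating Array iteration methods onto a
// prototype so that e.g. nodeList.forEach(fn) works directly.
// Mutating methods like push/pop are deliberately excluded, since
// NodeList isn't mutable.
function addArrayMethods(proto) {
  ["forEach", "map", "filter", "some", "every",
   "reduce", "reduceRight", "indexOf"].forEach(function (name) {
    if (!proto[name]) {
      proto[name] = Array.prototype[name];
    }
  });
}

// Usage in a browser:
//   addArrayMethods(NodeList.prototype);
//   document.querySelectorAll("a").forEach(function (a) { /* ... */ });
```

This works because those Array methods are generic: they only require a `this` object with `length` and indexed properties.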
Re: [IndexedDB] Transaction Auto-Commit
On Mon, Aug 15, 2011 at 11:23 PM, Shawn Wilsher m...@shawnwilsher.com wrote: On 8/3/2011 10:33 AM, Jonas Sicking wrote: IndexedDB does however not allow readers to start once a writing transaction has started. I thought that that was common behavior even for MVCC databases. Is that not the case? Is it more common that readers can start whenever and always just see the data that was committed by the time the reading transaction started? This is one of the many benefits of MVCC (but Mozilla's implementation cannot provide this). I can definitely say that InnoDB (now MySQL's default storage engine) normally allows lockless reads even if a write lock is being held on the relevant rows. A SELECT will not normally block or be blocked by any other reads or writes, and will always read some committed value regardless of what uncommitted changes have been made. This is affected by the transaction isolation level and by the optional IN SHARE MODE and FOR UPDATE flags you can pass to SELECT, plus there are other details like that some SELECTs are always FOR UPDATE (like INSERT ... SELECT), but that's the basic picture. Of course, a statement that *writes* to a row will take out an exclusive lock and hold it until the transaction is committed, but that only blocks other locking statements such as writes. Also, obviously there are some locks being taken behind the scenes here so that different threads don't trample each other in various exciting ways, but they aren't logically exposed to the database user. My impression is that all this is standard for MVCC databases.
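For readers unfamiliar with the term, here's a toy sketch of the snapshot behavior being described: readers always see the last committed value, even while a writer holds uncommitted changes. This is purely illustrative of the concept, not how InnoDB or any real MVCC engine is implemented:

```javascript
// Toy MVCC-style store: reads never block on an open write
// transaction; they just see the committed snapshot.
function MvccStore() {
  this.committed = {}; // visible to all readers
  this.pending = null; // open write transaction, invisible until commit
}
MvccStore.prototype.read = function (key) {
  return this.committed[key]; // ignores uncommitted writes
};
MvccStore.prototype.beginWrite = function () {
  this.pending = {};
};
MvccStore.prototype.write = function (key, value) {
  this.pending[key] = value;
};
MvccStore.prototype.commit = function () {
  for (var key in this.pending) {
    this.committed[key] = this.pending[key];
  }
  this.pending = null;
};

var store = new MvccStore();
store.beginWrite();
store.write("row", "new");
var seen = store.read("row");  // undefined: write not yet committed
store.commit();
var after = store.read("row"); // "new": visible after commit
```

A real engine also keeps old row versions around so that long-running read transactions keep seeing the snapshot they started with, which this sketch omits.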
Re: [indexeddb] Handing negative parameters for the advance method
On Fri, Aug 12, 2011 at 6:16 PM, Jonas Sicking jo...@sicking.cc wrote: Yup. Though I think WebIDL will take care of the handling for when the author specifies a negative value. I.e. WebIDL will specify what exception to throw, so we don't need to. Similar to how WebIDL specifies what exception to throw if the author specifies too few parameters, or parameters of the wrong type. It doesn't throw an exception -- the input is wrapped. It basically calls the ToUInt32 algorithm from ECMAScript: http://dev.w3.org/2006/webapi/WebIDL/#es-unsigned-long This behavior is apparently needed for compat, or so I was told when I complained that it's ridiculous to treat JS longs like C. It does have the one (arguable) advantage that authors can use -1 for maximum allowed value. But anyway, yes: if your IDL says unsigned, then your algorithm can't define behavior for what happens when the input is negative, because WebIDL will ensure the algorithm never sees a value outside the allowed range. If you want special behavior for negative values, you have to use a regular long.
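The wrapping in question can be demonstrated in plain JavaScript, since the unsigned right shift operator performs the same ToUint32 conversion that WebIDL applies to an unsigned long argument:

```javascript
// ToUint32 as WebIDL applies it to an `unsigned long` argument:
// negative inputs wrap modulo 2^32 instead of throwing.
function toUint32(value) {
  return value >>> 0; // >>> applies ECMAScript ToUint32 to its operand
}

var wrapped = toUint32(-1);  // 4294967295, the maximum unsigned long
var wrapped2 = toUint32(-5); // 4294967291
var passthru = toUint32(7);  // 7, non-negative values are unchanged
```

This is why `-1` can serve as a shorthand for "maximum allowed value," as noted above.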
Re: Rescinding the DOM 2 View Recommendation?
On Fri, Aug 12, 2011 at 7:42 AM, Arthur Barstow art.bars...@nokia.com wrote: Anne, Ms2ger, All - can you live with adding a note to D2V rather than going down the rescind path? I'm fine with having prominent notices in obsolescent standards pointing readers to the up-to-date work. If rescinding is too much of a hassle, there's no reason to go to the trouble. Also, from a Process point of view I doubt it makes sense to rescind a Recommendation in favor of an Editor's Draft.
Re: RfC: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]
On Thu, Aug 11, 2011 at 6:28 AM, Arthur Barstow art.bars...@nokia.com wrote: Before we publish a new WD of Anne's DOM spec, I would like comments on how the DOM specs should be organized. In particular: a) whether you prefer the status quo (currently that is DOM Core plus D3E) or if you want various additional features of DOM e.g. Traversal, Mutation Events, etc. to be specified in separate specs; and b) why. Additionally, if you prefer features be spec'ed separately, please indicate your willingness and availability to contribute as an editor vis-à-vis the editor requirements above. While I think HTML/Web Applications 1.0 might be overboard when it comes to spec length, I strongly feel that we should not be splitting things up into lots of little specs of a few pages each. DOM Core as it stands is a reasonable length and covers a pretty logical grouping of material: everything related to the DOM itself without dependence on the host language. I think it would be logical to add some more things to it, even -- Anne and Ms2ger and I have discussed merging Ms2ger's/my DOM Range spec into DOM Core (Range only, with the HTML-specific Selection part removed). We don't have to feel bound by the way things were divided up before. Historically, we've had lots of little specs in some working groups partly because we had lots of people putting in small amounts of time. These days we have more editors capable of handling larger specs, so it's logical to merge things that were once separate. As long as there are no substantive issues people have with the contents of the spec, I don't think it's productive at all to tell willing and capable editors that they can't edit something or that they have to write it in a more complicated and awkward fashion because some people have an aesthetic preference for smaller specs or because that's the way we used to do it. It's true that procedurally, the more we add to a spec the harder it will be to get it to REC. 
I have not made any secret of the fact that I view this part of the Process as a harmful anachronism at best, but in any event, it shouldn't be prohibitive. Given that we have to make REC snapshots, the way it's realistically going to have to work is we'll split off a version (say DOM 4 Core) and start stabilizing it, while continuing new work in a new ED (say DOM 5 Core). We can drop features that aren't stable enough from the old draft when necessary -- we don't have to drop them preemptively. That's the whole idea of at-risk features. Also, a lot of the features we're talking about are actually very stable. I've written very extensive test cases for DOM Range, for instance, and I can assure you that the large majority of requirements in the Range portion (as opposed to Selection) have at least two independent interoperable implementations, and often four. I don't think that merging Range in would have to significantly slow progress on the REC track. I imagine Traversal is also very stable. Things like a DOM mutation events replacement would obviously not be suitable for a draft we want to get to REC anytime soon, but again, it can be put in the next DOM Core instead of a separate small spec. I also definitely think that DOM mutation events have to be in DOM Core. Things like Range and Traversal can reasonably be defined on top of Core as separate specs, since Core has no real dependency on them. Mutation events, on the other hand, are intimately tied to some of the basic features of DOM Core and it isn't reasonable to separate them.
Re: CfC: publish new WD of DOM Core; deadline August 10
On Wed, Aug 3, 2011 at 10:12 AM, Arthur Barstow art.bars...@nokia.com wrote: Anne would like to publish a new WD of DOM Core and this is a Call for Consensus (CfC) to do so: http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html Agreeing with this proposal: a) indicates support for publishing a new WD; and b) does not necessarily indicate support for the contents of the WD. I personally support both publishing a new WD, and the contents of the WD. (I cannot speak for Google.)
Re: Element.create(): a proposal for more convenient element creation
On Mon, Aug 1, 2011 at 9:33 PM, Maciej Stachowiak m...@apple.com wrote: In an IRC discussion with Ian Hickson and Tab Atkins, we came up with the following idea for convenient element creation: Element.create(tagName, attributeMap, children…) Creates an element with the specified tag, attributes, and children. How does this compare to popular JS helper libraries like jQuery? It would be useful to know what convenience APIs authors are using now before introducing our own.
Re: Element.create(): a proposal for more convenient element creation
On Tue, Aug 2, 2011 at 2:05 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: Read again - the idea is to auto-expand arrays. (I don't have much of a preference between "just use an array" and "use varargs, but expand arrays". I agree that using only varargs without expansion would be bad.) I'm against "use varargs, but expand arrays" if it would make sense to have one of the arguments be an array itself, as with Array.concat(), since then your code is going to mysteriously fail in the varargs case if you provide one argument that happens to be an array one time. It's also bad if we might want to add more arguments later. But in this case it seems fine. On Tue, Aug 2, 2011 at 2:12 PM, Charles Pritchard ch...@jumis.com wrote: http://mootools.net/docs/core/Element/Element . . . // moo enables: new Element('a.className') as a shorthand. . . . http://api.jquery.com/attr/ var myAnchor = $('<a href="http://api.jquery.com/"></a>'); These both seem interesting. Allowing class or id to be specified as part of the first argument instead of the second one would make some very common use-cases simpler. Often you want to create some kind of wrapper with a static class or id, and Element.create("div.myClass") is both immediately understandable and significantly shorter than Element.create("div", {class: "myClass"}). The second one seems like it might be useful as a separate API. It could be a function that accepts a text/html string, and if the resulting tree has a single root node, it could return that. If not, it could return a DocumentFragment containing the result. Or maybe it could return a DocumentFragment unconditionally for consistency, so it would be like a much easier-to-use version of Range.createContextualFragment. Of course, this kind of thing is bad because it doesn't allow easy escaping, so maybe we shouldn't make it easier.
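A rough sketch of how the proposal plus the "div.myClass" shorthand could fit together. Everything here is an assumption from this thread, not a shipped API; the sketch takes children as a plain array rather than varargs to sidestep the expansion question:

```javascript
// Hypothetical Element.create()-style helper with mootools-like
// "div#myId.a.b" shorthand for id and classes in the first argument.
function createElement(selector, attributes, children) {
  var match = /^([a-zA-Z0-9-]+)(?:#([^.]+))?(?:\.(.+))?$/.exec(selector);
  var el = document.createElement(match[1]);
  if (match[2]) el.id = match[2];
  if (match[3]) el.className = match[3].split(".").join(" ");
  // Set any remaining attributes from the map.
  for (var name in attributes || {}) {
    el.setAttribute(name, attributes[name]);
  }
  // Append children; bare strings become text nodes (no HTML parsing,
  // so there's no escaping hazard).
  (children || []).forEach(function (child) {
    el.appendChild(typeof child === "string"
      ? document.createTextNode(child) : child);
  });
  return el;
}

// Usage:
//   var div = createElement("div.myClass", {title: "hi"}, ["Hello"]);
```

Treating string children as text rather than markup is one way to get the convenience without the escaping problem raised at the end of the message above.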
Re: More use-cases for mutation events replacement
On Mon, Jul 25, 2011 at 11:12 PM, Sean Hogan shogu...@westnet.com.au wrote: I assume you are referring to the NodeWatch proposal from Microsoft. 1st draft: http://www.w3.org/2008/webapps/wiki/Selector-based_Mutation_Events 2nd draft: http://www.w3.org/2008/webapps/wiki/MutationReplacement#NodeWatch_.28A_Microsoft_Proposal.29 I wasn't aware of that proposal. It seems like we came up with the same basic idea independently. I think the utility of this proposal is unnecessarily limited by the restriction of one watcher per node. Also, it is not clear that handlers would be called before page reflow / repaint. Yeah, those are two immediate problems I see. Also (based on looking at the second draft, not the first): * I'm not sure what the use-case is for a minimum frequency. If it's not going to be really really common, it shouldn't be part of the API, because authors can always fake it with setTimeout() and some globals. * I don't think we want to return a handle -- don't other APIs let you unwatch by just passing the same callback you originally passed? That makes more sense, IMO. * It says it throws an INDEX_SIZE_ERR if the minimum frequency is negative, but it's an unsigned long, so WebIDL already specifies different behavior if it's negative (it wraps).
Re: [From-Origin] on theft
On Sat, Jul 23, 2011 at 10:04 AM, Glenn Adams gl...@skynav.com wrote: I would suggest not using the word theft, even if placed in quotes. Call it bandwidth leeching or something like that. It certainly is by no means theft by any reasonable definition. Theft is a broad term that can informally encompass pretty much any activity that one person does to gain something at the expense of others. Like many words with strong connotations, it's very commonly used when the speaker wishes to apply the word's connotations to other things that they think are conceptually related to the point of deserving those connotations. Supposing that theft has the same meaning as stealing, which is what your dictionary definition says, it's entirely unremarkable to speak of stealing ideas, stealing a kiss, stealing the show, stealing a base, and so on. The intent is to emphasize the act's injustice, sneakiness, or unexpectedness. However, I agree that there's no need to use loaded language here, even in quotation marks. Bandwidth leeching is probably neutral enough. If not, we could go with something even more neutral, like using others' bandwidth.
Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)
On Thu, Jul 21, 2011 at 1:02 PM, Adrian Bateman adria...@microsoft.com wrote: For platform features that directly affect web developers' pages that might sometimes be true. However, compression is also optional in HTTP and it doesn't appear to have caused problems or made some sites work and others not based on some dominant implementation. Do you think it would be feasible in practice for a mainstream web browser to not support HTTP compression? For instance, if Internet Explorer removed support for it, would you expect to get a sufficient number of bug reports that you'd be forced to re-add support? If so, then HTTP compression is in practice mandatory for web browsers, but optional for web servers. This is exactly the state of affairs proposed for WebSockets compression.
Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)
On Mon, Jul 25, 2011 at 4:58 PM, Adrian Bateman adria...@microsoft.com wrote: First, I don't think that's the same thing at all. Why not? Second, the IETF HyBi working group has asked members of this working group for Last Call feedback. If you think the protocol has the wrong mix of required/optional features then you should provide that feedback through the requested channel. I'm saying that it would be perfectly acceptable for a feature to be optional on the protocol level (what the IETF specifies) but mandatory for web browsers (what WebApps specifies). If HTML5 were to require that conforming user-agents must support HTTP compression, for the sake of argument, that would not contradict the RFCs that make it optional. An HTTP client that didn't support compression would be a conforming HTTP client but a non-conforming HTML5 user agent. There's nothing wrong with that: specifications are supposed to add requirements beyond the specs they normatively reference. Thus this is a question for us, not the IETF. From the discussion here, it sounds like there are problems with WebSockets compression as currently defined. If that's the case, it might be better for the IETF to just drop it from the protocol for now and leave it for a future version, but that's up to them. As far as we're concerned, if the option is really a bad idea to start with, it might make sense for us to prohibit it rather than require it, but there's no reason at all we have to leave it optional for web browsers just because it's optional for other WebSockets implementations.
Re: Mutation events replacement
On Fri, Jul 22, 2011 at 11:54 AM, Boris Zbarsky bzbar...@mit.edu wrote: Actually, you can pretty easily do it in the other order (move the text into the <b>, and then put the <b> in the DOM), and may want to, so as to minimize the number of changes to the live DOM; that's something that's often recommended as a performance enhancement. Hmm. Interesting. So far I've been writing my draft on the theory that my move preserving ranges operation would actually be implemented as I've specced it, so that all Ranges (not just the current selection) would remain in logically the same place after the DOM operations. The way I've designed it, you have to move stuff around within the tree rather than removing and re-adding it. But of course, that design could always be changed. Either I could just give up on preserving anything other than the current Selection, or I could define different primitives. So point taken. Editing doesn't *have* to involve moving nodes at all. I don't need software that uses mutation events. I need software that triggers editing operations, so I can then actually measure what DOM mutations are performed in the course of these editing operations. What use do you have here for software that doesn't want to use DOM mutations to start with? The question is what users of mutation handlers will need, right? If you do need such software, though, some of the most important WYSIWYG editors out there are TinyMCE and CKEditor, which have easy-to-use online demos: http://tinymce.moxiecode.com/ http://ckeditor.com/ A typical workload is paste in the contents of some blog post or other that you grab from someplace (often this would come preloaded if you're editing or quoting an existing post), then change around some text, type a couple of paragraphs, add an image or some smilies or something, make some links, that sort of thing.
On Fri, Jul 22, 2011 at 6:57 PM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Jul 22, 2011 at 2:08 AM, Dave Raggett d...@w3.org wrote: But if you are going to, *don't* coalesce mutations when the resulting DOM tree is dependent on the order in which those mutations took place. This is critical to distributed editing applications. The DOM should have no such behavior. The only exception to this rule that I know of is script elements. They execute their contained script the first time they are inserted into a Document, but don't undo that action when removed (for obvious reasons), nor do they redo it when inserted again. The order of mutations makes a big difference if you're recording them as things like insert node X into node Y at offset Z. If you append two children to a given node, the order you append them in will affect the resulting DOM. Likewise if you insert two nodes at the same index, or before the same existing child, or if you insert two nodes at particular indices but remove some child in between. How could you record arbitrary DOM mutations such that the order wouldn't matter in general? On Fri, Jul 22, 2011 at 6:58 PM, Jonas Sicking jo...@sicking.cc wrote: We should have much richer events to aid with rich text editing. Using mutation notifications for this will not create a good experience for the page author. Agreed. I'd be really interested in specific use-cases if people are using mutation events for editing here.
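The order-dependence is easy to demonstrate with a toy model, where a plain array stands in for a child list and recorded mutations are "insert node at index" operations (applyOps is an invented helper, not a DOM API):

```javascript
// Toy model: a child list as an array, recorded mutations as
// "insert node at index" operations. Replaying the same two operations
// in different orders yields different trees.
function applyOps(children, ops) {
  const result = children.slice();
  for (const op of ops) {
    result.splice(op.index, 0, op.node);
  }
  return result;
}

const ops = [
  { index: 0, node: 'X' },
  { index: 0, node: 'Y' },
];
// Applied in order: X goes in first, then Y lands before it.
// Applied reversed: Y goes in first, then X lands before it.
```

Any coalescing scheme that discards the ordering of such records can't reconstruct which of the two resulting trees was intended.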
More use-cases for mutation events replacement
When discussing mutation events use-cases, people have mostly been talking about editors so far. However, I think mutation event replacements would have a much more general appeal if they were easily usable in certain cases with little performance impact. Specifically, one use-case I've run into a lot is that I want to modify some class of node soon after it gets added to the DOM, but before it's actually painted. Examples of where this has come up for me in practice: 1) Some images in Wikipedia articles are offensive to some users, who may want to block them by default. However, we want to serve the same page content to different users for caching reasons, only varying the HTML used for the interface. One way to solve this would be to add classes to potentially offensive images, then have a script run that replaces the image with a placeholder before it's visible to the user. Currently, as far as I can tell, the only way to do this is to put a script immediately after the <img>. This could theoretically fail if the script winds up getting parsed too long after the <img>, like if it winds up in a different TCP segment that then gets dropped and takes a few seconds to resend while the image loads from cache. More practically, it's incompatible with CSP, which prohibits inline script. It also can't be used in situations like Wikipedia, where administrators can add scripts to the <head> but cannot add inline script. It's also excessively verbose if there are lots of places per page you need to do it. (Actual writeup of requirements, albeit abandoned: http://www.mediawiki.org/wiki/User:Simetrical/Censorship) 2) Some pages have content that should be collapsed by default with a way for the user to un-collapse it, but they should be uncollapsed if the user has script disabled, since otherwise the user won't be able to access the contents. This is true for some Wikipedia infoboxes, for instance. 
<details> might solve this use-case without the need for script, but it might not (e.g., styling might not be flexible enough). Supposing it doesn't, the way you'd currently have to do this is add a script right after the opening tag that collapses it and adds the uncollapse button. But again, inline script is incompatible with CSP, and incompatible with setups like Wikipedia where you might not be allowed to add inline script, and excessively verbose if there are many occurrences per page. 3) In current HTML drafts, <details> auto-closes <p>. I just filed a bug asking that it be made an inline element that doesn't auto-close <p>: http://www.w3.org/Bugs/Public/show_bug.cgi?id=13345. I want this because smaug complained that my specs didn't contain rationale, and when I pointed out that I had detailed rationale in comments, he said I should make it visible to the reader. So I want to have inline <details> at the end of some <li>s or <p>s. If the change I request is made, <details> will still auto-close <p> in current browsers, so I'd want to work around that with a shim for browsers using the current HTML parser. The obvious thing to do would be to run some script after every <details> is added that's the next sibling of a <p>, and move it inside the <p>. Again, this would require a lot of repetitive use of script. 4) Prior to the invention of the autofocus attribute, just like in all the cases above, the only way to reliably autofocus inputs was to add a script immediately after the input. This case is moot now that autofocus exists, but it illustrates that there are more use-cases in the same vein. What would solve all of these use-cases is a way to register a handler that would get notified every time an element is added to the DOM that matches a particular CSS selector, which is guaranteed to run at some point before the element is actually painted. 
Thus it could be a special type of event that runs when the event loop spins *before* there's any opportunity to paint, or it could be semi-synchronous, or whatever, as long as it runs before paint. Then I could easily solve all the use-cases: 1) Register a handler for img that changes the .src of the newly-added Element. 2) Register a handler for .collapsed or whatever that sets the appropriate part to display: none and adds the uncollapse button. 3) Register a handler for p + details that moves the <details> inside the <p>. (This would be trickier if I sometimes put <details> in the middle of a <p>, but still doable, and anyway I don't plan to.) 4) Register a handler for .autofocus or whatever that calls focus(). It seems to me this dovetails pretty nicely with some of the proposed mutation events replacement APIs. Specifically, people have been talking about allowing filtering of events, so this use-case should be solved easily enough if you can use CSS selectors as filters. In that case, the perf hit from using such events should be negligible, right? I think there are lots of cases like the four I gave above where this sort of API would be handy for very general-purpose use.
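Here's a rough sketch of what such an API might look like from the author's side. Everything here is invented for illustration — watchSelector is not a real or proposed function name, a real version would use CSS selector matching and hook into the event loop before paint, and a plain predicate function stands in for the selector so the dispatch logic is self-contained:

```javascript
// Hypothetical registry of "element added" handlers, keyed by a match
// predicate standing in for a CSS selector. The engine would call
// notifyAdded() for each newly inserted element before the next paint.
const watchers = [];

function watchSelector(matches, handler) {
  watchers.push({ matches: matches, handler: handler });
}

function notifyAdded(element) {
  for (const w of watchers) {
    if (w.matches(element)) w.handler(element);
  }
}

// Use-case 1: swap potentially offensive images for a placeholder
// before they're ever painted.
watchSelector(
  function (el) { return el.tag === 'img' && el.offensive; },
  function (el) { el.src = 'placeholder.png'; }
);
```

Because the handler runs before paint, the user never sees the original image, which is the whole point of the before-paint guarantee.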
Re: Mutation events replacement
On Thu, Jul 21, 2011 at 4:21 PM, Boris Zbarsky bzbar...@mit.edu wrote: I'd really like numbers. Having looked at the Gecko editor code in the past, I don't share your assurance that this is how it works. That said, if you point to a workload, I (or anyone else; it's open source!) can probably generate some numbers by instrumenting the Gecko DOM. But I need a workload. Pretty much any formatting command is going to involve adding and removing wrapper elements. To add a wrapper element, say adding a <b> around some text to make it bold, you first have to insert the wrapper before or after the thing you want to wrap, then move all the nodes to wrap into the wrapper. Likewise, to remove a wrapper, you have to first move all its contents adjacent to it, then actually remove it from its parent. Or, for instance, suppose you delete some text that spans blocks, like: <p>foo[bar</p><div>baz]quz</div>. The result will be something like <p>foo[]quz</p>. How do you do that? First delete bar and baz, then move quz to the <p>, then remove the <div>. Or let's say you have <p>foo[]bar</p> and the user hits Enter -- you first create an empty <p> after the existing one, then you move bar into it. Of the 37 execCommand()s I've defined, every single one will commonly move at least one node within the DOM, except for insertHorizontalRule and the ones that don't actually change the DOM (copy, selectAll, styleWithCSS, useCSS). I defined an algorithm move preserving ranges to handle this because of the range mutation problem: http://aryeh.name/spec/editcommands/editcommands.html#preserving-ranges It's invoked in 17 places in my draft currently, and nearly all of those are in general algorithms that are themselves invoked in multiple places. So I don't have any numbers, but anecdotally, editing things definitely does a lot of moving. If you want numbers, though, you probably don't want to look at my implementation -- you want some real-world software that actually uses mutation events.
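As a toy model of the wrapping dance described above (plain arrays standing in for child lists, since the point is the sequence of tree operations rather than real DOM calls; wrapChild is an invented helper):

```javascript
// Toy model of adding a wrapper (e.g. a <b>) without ever detaching the
// wrapped node from the tree: insert the empty wrapper next to the
// target, then move the target into it.
function wrapChild(parent, index, wrapper) {
  parent.splice(index, 0, wrapper);              // insert wrapper before target
  const target = parent.splice(index + 1, 1)[0]; // take target out of parent...
  wrapper.children.push(target);                 // ...and move it into wrapper
}
```

Removing a wrapper is the same two steps in reverse: move the contents out next to the wrapper, then remove the now-empty wrapper from its parent.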
Re: Mutation events replacement
On Wed, Jul 20, 2011 at 3:11 PM, Ryosuke Niwa rn...@webkit.org wrote: But internally, a node movement is a removal then an insertion. There's always a possibility that a node gets removed then inserted again after mutation observers are invoked. Also, what happens if a function removed a bunch of nodes and then inserted back one of them? I'm suggesting that we change insertNode()/appendChild()/etc. so that they're *not* internally a removal then an insertion: they're internally atomic. If you call foo.removeChild(bar); foo.appendChild(bar) then that would be a remove/insert no matter what. But if you call foo.appendChild(bar) and bar has a parent and bar is not the last child of foo, that would be a move. Yes, this causes problems as long as mutation events exist. But when mutation event handlers modify the DOM, behavior is undefined and is totally inconsistent between browsers in practice, so I don't think it's a big deal. Just do whatever's convenient and leave the behavior inconsistent in this case like in others. We don't need to standardize behavior here unless we're going to standardize behavior in all other cases where DOM mutation listeners mutate the DOM, which we aren't. On Wed, Jul 20, 2011 at 10:17 PM, Boris Zbarsky bzbar...@mit.edu wrote: What I do have a strong opinion on is that it would be good to have some data on how common move operations are compared to remove and insert on the web. Then we'll at least know how common or edge-case the situation is and hence how much effort we should spend on optimizing for it... I can say that it's very common and critical for editors. Tons of what you're doing is shuffling nodes around: splitting up text nodes and wrapping bits of them in new elements that you just inserted before them, moving all the contents of an element next to it before you remove it, etc. Editors of various types seem like they're one of the big use-cases for a mutation events replacement anyway, so my guess is it's important. 
But nobody's even made a list of use-cases for mutation listeners, have they? I don't think moving nodes is as common a use-case for typical sites. But typical sites don't want mutation listeners either, so they aren't what we should be concerned about here.
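The atomic-move proposal could surface to observers as a single "move" record rather than a remove/insert pair. A sketch of the difference, with invented record shapes and an array standing in for the child list:

```javascript
// Toy mutation log: an atomic move produces one record carrying both
// the old and new position, instead of separate "remove" and "insert"
// records that lose the connection between the two.
function moveChild(children, node, newIndex, log) {
  const oldIndex = children.indexOf(node);
  children.splice(oldIndex, 1);
  children.splice(newIndex, 0, node);
  log.push({ type: 'move', node: node, from: oldIndex, to: newIndex });
}
```

An observer receiving the single record can tell at a glance that the node never left the tree, which is exactly the information a remove/insert pair destroys.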
Re: Mutation events replacement
On Wed, Jul 20, 2011 at 1:43 AM, David Flanagan dflana...@mozilla.com wrote: Finally, I still think it is worth thinking about trying to model the distinction between removing nodes from the document tree and moving them (atomically) within the tree. I'll chip in that I think this is useful. It makes things somewhat more complicated, but remove/insert and move are conceptually very different. I'd really want to handle them differently for range mutations, as I previously explained: http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-March/031053.html A move operation is unnecessary if the goal is to synchronize changes between DOMs, but it's useful if the goal is to store and update data about the nodes themselves. In that case, moving a node could imply a very different sort of change to the data than removing/inserting. Specifically, you might want to throw away data when a node is removed, but keep it when the node is moved. Like: * If some nodes get moved to a nearby position and are in a Range to start with, they might conceptually belong in the Range afterward. See the examples in the e-mail I linked to above. If they're removed and re-inserted, you have to keep extra state somewhere to track that. In my edit commands spec, I had to work around this in many different places by defining special primitives like move preserving ranges, or in some cases by manually saying For every Range with boundary point X, do Y. * If you're associating spellcheck data with text nodes in an editable region, then if a node gets moved elsewhere within the region, you want to keep the data. If it gets removed, you want to throw away the data. * Other things? Of course, we'd have to update every method anywhere that moves nodes to do so atomically instead of removing then inserting. Do we have a list of use-cases for mutation events anywhere?
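The keep-on-move, discard-on-remove pattern for per-node data (like the spellcheck example) might look like this in handler code, assuming a hypothetical observer that distinguishes "move" from "remove" records:

```javascript
// Per-node annotation store (e.g. spellcheck results), keyed by node
// identity. A "move" record leaves the data alone; a "remove" record
// throws it away.
const nodeData = new Map();

function handleRecord(record) {
  if (record.type === 'remove') {
    nodeData.delete(record.node); // node left the region: drop its data
  }
  // 'move' needs no action: data keyed by node identity survives the move
}
```

If moves are instead reported as remove-then-insert, the handler would have to defer the delete and watch for a matching re-insertion, which is exactly the extra bookkeeping a distinct move record avoids.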
Re: Test suites and RFC2119
On Sun, Jul 10, 2011 at 5:04 PM, Charles McCathieNevile cha...@opera.com wrote: Not quite. I'm saying that there are cases where violating the requirement is reasonable, so test results shouldn't determine simple conformance. On the other hand, where these are things that in *most* cases we want interoperability, it makes sense to have test suites so people who don't violate the requirement can check that what they are doing is consistent with what others do. More specifically, we should have a set of requirements that we expect all major browsers to follow out of the box, and a full test suite associated with those requirements. (I assume browser-targeted standards here, for the sake of argument.) That way other implementations can verify that, in practice, they interoperate with all the major browsers. If we expect some implementations to only implement parts of the specification, it's useful to separate the requirements into classes, so that (for instance) a non-interactive HTML5 processor can verify that it's parsing HTML the same as major browsers, without being distracted by the noise about all the interaction-related requirements it fails. On the other hand, if you use should for requirements that all major browsers intend to conform to, but also for things that they don't, you reduce the usefulness of any should-related test suite. Some of the tests are things that any implementation wants to pass if it intends to be fully compatible, and some are things that they can ignore in practice. Separating these into distinct test suites is valuable, and that's what must requirements with multiple conformance classes permit. On Sun, Jul 10, 2011 at 6:17 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote: And if I make an implementation that does not fit in any of the classes I can just argue that the specification did not anticipate the class my implementation falls in. 
You would have to explain how arguing about a missing conformance class is better than arguing about whether should level requirements have been met. With your model you would have more clarity, but you would also be more wrong, and require more effort to make things right, in addition to inhibiting innovation. I think that's a very difficult argument to make and we have should because of that. The standards we're discussing are not coercive. They and their associated test suites exist solely to assist interested parties in writing software that will operate interoperably. To that end, conformance requirements and test suites should be designed so as to best permit interested implementers to write interoperable implementations. The point isn't whether you can argue that you do or don't conform to the letter of the standard, it's whether your implementation is interoperable in practice, and that's the only thing tests should be targeted at. Having a suite of should tests that mixes together things we expect content might depend on (because major browsers agree) and things we don't think it will depend on (because they don't agree, or the difference isn't visible to content) is less useful than dividing up test suites based on interoperability impact. I don't foresee any major arguments here on the specifics. It's usually pretty clear whether a given requirement affects interoperability or not. If it does, and it looks like we can get the major implementations to agree, make it must. If it doesn't, or we can't, make it should or may. How to split the must requirements up into conformance classes is unlikely to be particularly controversial or hard to decide -- I'm not aware of any significant disagreement that's come up in HTML5. There aren't going to be that many classes of UAs in practice, even for a huge standard like HTML5. 
So I continue to feel that if a requirement has potential interoperability implications and we can get browsers to agree on it, it should be a must. This covers practically all requirements that are readily testable anyway, so it would leave little need to consider a should test suite.
Re: Test suites and RFC2119
On Sun, Jul 10, 2011 at 3:59 PM, Charles McCathieNevile cha...@opera.com wrote: Privacy and security restrictions leap to mind. There are things that really are should requirements because there are valid use cases for not applying them, and no reason to break those cases by making the requirement a must. In the normal case where they are applied you want to be able to test. Were you thinking of specific examples? I can't think of any offhand. But the difference between should and must is already two sets of conformance profiles (or whatever you want to call it), where one applies always and the other applies unless there's a reason not to do the thing that is assumed to be normal. The difference is that if you have must requirements that are specific to a single conformance class, you can write a test suite and expect every implementation in that class to pass it. For should requirements, you're saying it's okay to violate it, so test suites don't make a lot of sense.
Re: [WebIDL] Exceptions
On Thu, Jul 7, 2011 at 3:47 PM, Ian Hickson i...@hixie.ch wrote: Anything that allows us to _not_ coordinate is an epic disaster, IMHO. We absolutely should be coordinating. How else can we ensure the platform is a consistent platform? This is a feature, not a bug. Maybe, but I still think the .code system is bad: 1) It's excessively verbose. e.code == DOMException.HIERARCHY_REQUEST_ERR is gratuitously hard to type compared to e.name == HIERARCHY_REQUEST_ERR. 2) It's harder to debug. DOMException.HIERARCHY_REQUEST_ERR shows up in your debugger as 3, which is totally incomprehensible. 3) It clutters every exception object with dozens of useless member variables that you have to sift through in your debugger. There's absolutely no reason to use numbers here. The tendency to use named constants instead of strings comes from people who write in C or C++. They're used to making code harder to write for the sake of performance. In C, using a named constant instead of a pointer to a string might mean you can save ten or more bytes per struct, save a malloc() on every initialization and a free() on every deinitialization, test integer equality instead of having to use strcmp(), switch() on the code, etc. In JavaScript, none of these reasons make any sense -- there's just no reason to use named constants for anything at all. It's a bad pattern and we should move away from it. My concern is with having newer parts of the platform use entirely different models (e.g. new exception interfaces) relative to older parts of the platform (which e.g. use codes). It leads to the kind of problem you describe with JS vs DOM, except that we'd have JS vs DOM vs new DOM vs even new DOM, etc. I don't think it would lead to much long-term inconsistency if we don't introduce new codes for new exception types. Yes, some exceptions would have a .code attribute and some wouldn't, but that doesn't strike me as a big deal. 
I do agree that we should really be coordinating here, not having random specs make up new exception types that aren't added to DOM Core. For instance, I don't get why we need FileError and FileException duplicating NOT_FOUND_ERR, SECURITY_ERR, and ABORT_ERR (with different codes!) from DOMException and adding NOT_READABLE_ERR and ENCODING_ERR. Why can't we just add those to DOMException?
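For comparison, the two styles look like this side by side. (DOMException is a global in browsers and in modern Node; note that the DOM specs eventually settled on camel-case string names like "HierarchyRequestError" rather than the constant-style names used in this thread.)

```javascript
// The legacy numeric .code style vs. the string .name style for
// identifying a DOMException.
try {
  throw new DOMException('node cannot go there', 'HierarchyRequestError');
} catch (e) {
  // Numeric style: opaque in a debugger (the constant is just the number 3).
  if (e.code === DOMException.HIERARCHY_REQUEST_ERR) {
    // handle hierarchy error
  }
  // String style: self-describing in logs and debuggers.
  if (e.name === 'HierarchyRequestError') {
    // handle hierarchy error
  }
}
```

The debugging complaint above is visible directly: inspecting e.code shows only 3, while e.name shows a readable error name.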
Re: [WebIDL] Exceptions
On Fri, Jul 8, 2011 at 3:56 PM, Ian Hickson i...@hixie.ch wrote: If the proposal is to make all exceptions have a name property (or whatever we call it) whether in ES, in DOM, or anywhere else, and to have everyone pick consistent exception names, then I'm fine with that. If we do do that then I'd still say we should just have one exception interface object, or at least no more objects than we have today, purely because there would be no advantage to having more in such a situation. I think this is the proposal as it stands, yes. I don't *think* anyone participating in this discussion supports adding new interfaces for every exception type at this point. Exactly. This is the kind of problem that occurs if we can avoid coordination. We shouldn't avoid coordination. Authors don't care that there's six working groups or twelve. They just have one platform they're authoring to. We need to act like one. Agreed.