Re: [whatwg] Why children of datalist elements are barred from constraint validation?
On Fri, 2011-07-29 at 15:20 -0700, Jonas Sicking wrote: On Fri, Jul 29, 2011 at 2:59 PM, Aryeh Gregor simetrical+...@gmail.com wrote: On Fri, Jul 29, 2011 at 5:51 PM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Jul 29, 2011 at 9:43 AM, Ian Hickson i...@hixie.ch wrote: Looking specifically at datagrid's ability to fall back to select, I agree that it's not necessarily going to be widely used, but given that it's so simple to support and provides such a clean way to do fallback, I really don't see the harm in supporting it. I haven't looked at datagrid yet, so I can't comment. I think he meant datalist. datagrid was axed quite some time ago and hasn't made a reappearance that I know of. Ah, well, then it definitely seems like we should get rid of this feature. The harm is definitely there in that it's adding a feature without solving any problem. The current design solves the problem that the datalist feature needs to Degrade Gracefully (and preferably without having to import a script library). I think the solution is quite elegant and don't see a need to drop it. -- Henri Sivonen hsivo...@iki.fi http://hsivonen.iki.fi/
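For readers unfamiliar with the pattern under discussion, the select-based datalist fallback looks roughly like this (a sketch only; the field name and options are illustrative, not from the thread):

```html
<!-- Browsers that support datalist offer the options as typeahead
     suggestions for the input and do not render the select; browsers
     that don't know datalist ignore it and render the select instead. -->
<label>Favourite fruit:
  <input name="fruit" list="fruits">
</label>
<datalist id="fruits">
  <select name="fruit_fallback">
    <option value="Apple">Apple</option>
    <option value="Banana">Banana</option>
    <option value="Cherry">Cherry</option>
  </select>
</datalist>
```

This is the "degrade gracefully without a script library" property Henri refers to: the same markup serves both old and new browsers.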
Re: [whatwg] sic element
On Fri, 2011-07-29 at 22:39 +0000, Ian Hickson wrote: If it's ok if it's entirely ignored, then it's presentational, and not conveying any useful information. Presentational markup may convey useful information, for example that a quotation from printed matter contains an underlined word. HTML is the wrong language for this kind of thing. I disagree. From time to time, people want to take printed matter and publish it on the Web. In practice, the formats available are PDF and HTML. HTML works more nicely in browsers and for practical purposes works generally better when the person taking printed matter to the Web decides that the exact line breaks and the exact font aren't of importance. They may still consider it of importance to preserve bold, italic and underline and maybe even delegate that preservation to OCR software that has no clue about semantics. (Yes, bold, italic and underline are qualitatively different from line breaks and the exact font even if you could broadly categorize them all as presentational matters.) I think it's not useful for the Web for you to decree that HTML is the wrong language for this kind of thing. There's really no opportunity to launch a new format precisely for that use case. Furthermore, in practice, HTML already works fine for this kind of thing. The technical solution is there already. You just decree it wrong as a matter of principle. When introducing new Web formats is prohibitively hard and expensive, I think it doesn't make sense to take the position that something that already works is the wrong language. I think you are confused as to the goals here. The presentational markup that was u, i, b, font, small, etc, is gone. I think the reason why Jukka and others seem to be confused about your goals is that your goals here are literally incredible from the point of view of other people.
Even though you've told me f2f what you believe and I want to trust that you are sincere in your belief, I still have a really hard time believing that you believe what you say you believe about the definitions of b, i and u. When, after discussing this with you f2f, I still find your position incredible, I think it's not at all strange if other people, when reading the spec text, interpret your goals inaccurately, because your goals don't seem like plausible goals to them. If the word presentational carries too much negative baggage, I suggest defining b, i and u as typographic elements on visual media (and distinctive elements on other media) and adjusting the rhetoric that HTML is a semantic markup language to HTML being a mildly semantic markup language that also has common phrase-level typographic features. -- Henri Sivonen hsivo...@iki.fi http://hsivonen.iki.fi/
Re: [whatwg] PeerConnection, MediaStream, getUserMedia(), and other feedback
On 2011-07-26 07:30, Ian Hickson wrote: On Tue, 19 Jul 2011, Per-Erik Brodin wrote: Perhaps now that there is no longer any relation to tracks on the media elements we could also change Track to something else, maybe Component. I have had people complaining to me that Track is not really a good name here. I'm happy to change the name if there's a better one. I'm not sure Component is any better than Track though. OK, let's keep Track until someone comes up with a better name then. Good. Could we still keep audio and video in separate lists though? It makes it easier to check the number of audio or video components and you can avoid loops that have to check the kind for each iteration if you only want to operate on one media type. Well in most (almost all?) cases, there'll be at most one audio track and at most one video track, which is why I didn't put them in separate lists. What use cases did you have in mind where there would be enough tracks that it would be better for them to be separate lists? Yes, you're right, but even with zero or one track it's more convenient to have them separate because that way you can more easily check if the stream contains any audio and/or video tracks and check the number of tracks of each kind. I also think it will be problematic if we would like to add another kind at a later stage if all tracks are in the same list since people will make assumptions that audio and video are the only kinds. I also think that it would be easier to construct new MediaStream objects from individual components rather than temporarily disabling the ones you do not want to copy to the new MediaStream object and then re-enabling them again afterwards. Re-enabling them afterwards would re-include them in the copies, too. Why is this needed? If a new MediaStream object is constructed from another MediaStream I think it would be simpler to just let that be a clone of the stream with all tracks present (with the enabled/disabled states independently set). 
The main use case here is temporarily disabling a video or audio track in a video conference. I don't understand how your proposal would work for that. Can you elaborate? A new MediaStream object is created from the video track of a LocalMediaStream to be used as self-view. The LocalMediaStream can then be sent over PeerConnection and the video track disabled without affecting the MediaStream being played back locally in the self-view. In addition, my proposal opens up for additional use cases that require combining tracks from different streams, such as recording a conversation (a number of audio tracks from various streams, local and remote combined to a single stream). It is also unclear to me what happens to a LocalMediaStream object that is currently being consumed in that case. Not sure what you mean. Can you elaborate? I was under the impression that, if a stream of audio and video is being sent to one peer and then another peer joins but only audio should be sent, then video would have to be temporarily disabled in the first stream in order to construct a new MediaStream object containing only the audio track. Again, it would be simpler to construct a new MediaStream object from just the audio track and send that. Why should the label the same as the parent on the newly constructed MediaStream object? The label identifies the source of the media. It's the same source, so, same label. I agree, but usually you have more than one source in a MediaStream and if you construct a new MediaStream from it which doesn't contain all of the sources from the parent I don't think the label should be the same. By the way, what happens if you call getUserMedia() twice and get the same set of sources both times, do you get the same label then? What if the user selects different sources the second time? If you send two MediaStream objects constructed from the same LocalMediaStream over a PeerConnection there needs to be a way to separate them on the receiving side. 
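The construct-from-tracks idea argued for here can be sketched with plain objects. To be clear, this is not the proposed MediaStream API; every name below is made up to illustrate the intended semantics (shared track sources, but per-stream enabled state):

```javascript
// Illustrative model of the construct-from-tracks proposal.
// A track is a shared source; each stream keeps its own enabled flag
// per track, so muting a track in one stream leaves other streams alone.

function makeTrack(kind, label) {
  return { kind: kind, label: label };
}

function makeStream(tracks) {
  return {
    tracks: tracks.map(function (t) {
      return { source: t, enabled: true };
    })
  };
}

var audio = makeTrack('audio', 'Built-in Microphone');
var video = makeTrack('video', 'Built-in Camera');

// The stream sent over PeerConnection carries both tracks.
var sentStream = makeStream([audio, video]);

// The self-view is a new stream built from just the video track.
var selfView = makeStream([video]);

// Muting video on the sent stream does not affect the self-view,
// which is the behavior the proposal argues for.
sentStream.tracks[1].enabled = false;
```

The same constructor shape also covers the recording use case mentioned above: a new stream can be assembled from audio tracks drawn from several local and remote streams.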
What's the use case for sending the same feed twice? If the labels are the same then that should indicate that it's essentially the same stream and there should be no need to send it twice. If the streams are not composed of the same underlying sources then you may want to send them both and the labels should differ. I also think it is a bit unfortunate that we now have a 'label' property on the track objects that means something else than the 'label' property on MediaStream; perhaps 'description' would be a more suitable name for the former. In what sense do they mean different things? I don't understand the problem here. Can you elaborate? As Tommy pointed out, label on MediaStream is an identifier for the stream whereas label on MediaStreamTrack is a description of the source. The current design is just the result of needing to define what happens when you call getRecordedData() twice in a row. Could you
Re: [whatwg] Why children of datalist elements are barred from constraint validation?
On Tue, Aug 2, 2011 at 1:30 AM, Henri Sivonen hsivo...@iki.fi wrote: On Fri, 2011-07-29 at 15:20 -0700, Jonas Sicking wrote: On Fri, Jul 29, 2011 at 2:59 PM, Aryeh Gregor simetrical+...@gmail.com wrote: On Fri, Jul 29, 2011 at 5:51 PM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Jul 29, 2011 at 9:43 AM, Ian Hickson i...@hixie.ch wrote: Looking specifically at datagrid's ability to fall back to select, I agree that it's not necessarily going to be widely used, but given that it's so simple to support and provides such a clean way to do fallback, I really don't see the harm in supporting it. I haven't looked at datagrid yet, so I can't comment. I think he meant datalist. datagrid was axed quite some time ago and hasn't made a reappearance that I know of. Ah, well, then it definitely seems like we should get rid of this feature. The harm is definitely there in that it's adding a feature without solving any problem. The current design solves the problem that the datalist feature needs to Degrade Gracefully (and preferably without having to import a script library). I think the solution is quite elegant and don't see a need to drop it. What's the purpose of a degrading mechanism if it produces a result which is ugly enough that sites will not want to use it? It's not a goal in and of itself to have a fallback mechanism. The goal is helping sites deploy the feature. Looking at it some more, the example in Jeremy's blog post does not, in fact, work that well, since it adds "please choose..." and "Other" as suggestions when the datalist is supported. This is likely not acceptable for websites. video and canvas provide good data points. Both have fallback mechanisms which are supposed to work without script. Yet in by far the most common case people don't use these fallback mechanisms since they don't result in a rendering which lives up to their requirements. Instead they use script based feature detection and deal with lack of support by generating a wholly different DOM.
I'd be very curious to know what percentage of sites that use video or canvas support a non-scripted fallback mechanism with a usable result. frameset is another good example where by far the most common use of the fallback mechanism was to deliver the wholly unhelpful "Your browser doesn't support frames" message. I talked this over with Mounir some more. The current design of wanting fallback for datalist results in three behavioral requirements:
1. Elements inside a datalist are not supposed to be submitted.
2. Elements inside a datalist are not supposed to be subject to constraint validation.
3. When looking for options, the search is recursive rather than just looking at direct children of the datalist.
It's not clear what problem 1 solves. It's easy for sites to ignore the value submitted for the select if the contents of the input is non-empty. Same thing with 2: it's easy to create fallback which works in all browsers by simply not adding any constraint requirements. 3 does indeed provide some value in theory, though I'm still very skeptical that anyone will use it and thus it will just be feature bloat, especially since I have yet to see a decent example of good UI that can be created with it. It is, however, the easiest one to implement, at least in Gecko, since we have a simple switch which allows us to choose between a deep or a shallow search. But as I've stated before, ease of implementation is not a good reason to add a feature. It seems to me that if we looked at any other feature with this small a set of sites that we expect to use it (sites that are ok with imperfect rendering and which target browsers with JavaScript turned off), and for such a short period of time (only until datalist is supported in all major browsers), we would not add such a feature. I'm all for having a sensible upgrade path, but I think we have that anyway, which is simply that users will have to type the value. / Jonas
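The script-based feature detection Jonas describes for video and canvas would, for datalist, look something like the following browser-only sketch. The detection check is a commonly used heuristic, not anything normative from the spec:

```html
<script>
  // Sketch of script-based fallback: detect datalist support and build a
  // different DOM when it's missing, rather than relying on declarative
  // fallback content inside the datalist element.
  var supportsDatalist = !!(window.HTMLDataListElement &&
                            'list' in document.createElement('input'));
  if (!supportsDatalist) {
    // e.g. replace the input with a select, or attach a script-driven
    // autocomplete widget here.
  }
</script>
```

This is the "wholly different DOM" approach: no fallback markup ships at all, and the page branches on the detection result instead.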
Re: [whatwg] Prevent a document from being manipulated by a top document
Hello Anne, I took a look at X-Frame-Options and it only disallows displaying the page in a frame; it cannot forbid just script access. Also, this is another case of an HTTP header that would also find a good place in the HTML itself, like the Content-Disposition attribute I suggested (and which has now been standardized). On 02.08.2011, 12:30, Anne van Kesteren ann...@opera.com wrote: On Tue, 02 Aug 2011 12:21:31 +0200, Dennis Joachimsthaler den...@efjot.de wrote: [...] The X-Frame-Options header addresses this if I understand the concern correctly.
Re: [whatwg] Prevent a document from being manipulated by a top document
On 02.08.2011, 12:38, Anne van Kesteren ann...@opera.com wrote: On Tue, 02 Aug 2011 12:33:18 +0200, Dennis Joachimsthaler den...@efjot.de wrote: I took a look at the X-Frame-Options and it only disallows displaying in a frame, not forbidding only script access. What kind of script access is allowed cross-origin that you are concerned about? I agree that just disallowing that the page gets shown is one solution, but I am mainly concerned about reading important information out of a site in an iframe. Say there's a site which uses an autologin facility to automatically log its users in when the site is opened. Malicious guy #1 prepares a site that loads the same site in an iframe. The site with the precious information could now do either: a) Use JavaScript to try to bust out of the iframe (top.location). If it's sandboxed and top.location is disallowed, this doesn't help. b) Use the X-Frame-Options header. Doesn't work in all browsers! (But seriously, this would also be a weakness of my proposition, so I give it that.) Also, what if he wants to allow his content to be framed? This is a use case when there's a cross-site login system using a frame. Of course the login provider does not want the site that uses it to spy on its clients' login info. I just had another idea: the same protection would apply to pop-ups.
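Option (a) above, the classic frame-busting script, is usually written along these lines (a sketch only; as noted in the message, a sandboxed iframe that blocks top navigation defeats it):

```html
<script>
  // If this page finds itself framed, try to navigate the top-level
  // browsing context to itself. An attacker who sandboxes the iframe
  // can block this navigation.
  if (window.top !== window.self) {
    try {
      window.top.location = window.self.location.href;
    } catch (e) {
      // Access to top.location was blocked.
    }
  }
</script>
```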
Re: [whatwg] Prevent a document from being manipulated by a top document
On 02.08.2011, 13:00, Anne van Kesteren ann...@opera.com wrote: On Tue, 02 Aug 2011 12:48:06 +0200, Dennis Joachimsthaler den...@efjot.de wrote: Say there's a site which uses an autologin facility to automatically log its users in when the site is opened. Malicious guy #1 prepares a site that loads the same site in an iframe. You cannot get to that information cross-origin. It is not possible anyway? That kind of renders my worries baseless. But this use case still holds: userscripts and addons could still read out everything from the sites. It might be way too much of a niche case, though.
Re: [whatwg] Prevent a document from being manipulated by a top document
On Tue, 02 Aug 2011 13:05:07 +0200, Dennis Joachimsthaler den...@efjot.de wrote: It is not possible anyway? That kind of renders my worries baseless. Right. But this use case still holds: userscripts and addons could still read out everything from the sites. It might be way too much of a niche case, though. If users cannot trust their userscripts and addons (provided they can do unsafe things) they have lost already. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] Prevent a document from being manipulated by a top document
On 02.08.2011, 13:12, Anne van Kesteren ann...@opera.com wrote: If users cannot trust their userscripts and addons (provided they can do unsafe things) they have lost already. True. We do not make standards solely to protect inexperienced users. Thank you for your insight on this matter, though.
Re: [whatwg] sic element
On Tue, 2 Aug 2011 09:04, Henri Sivonen wrote: On Fri, 2011-07-29 at 22:39 +0000, Ian Hickson wrote: Presentational markup may convey useful information, for example that a quotation from printed matter contains an underlined word. HTML is the wrong language for this kind of thing. I disagree. From time to time, people want to take printed matter and publish it on the Web. In practice, the formats available are PDF and HTML. HTML works more nicely in browsers and for practical purposes works generally better when the person taking printed matter to the Web decides that the exact line breaks and the exact font aren't of importance. They may still consider it of importance to preserve bold, italic and underline and maybe even delegate that preservation to OCR software that has no clue about semantics. (Yes, bold, italic and underline are qualitatively different from line breaks and the exact font even if you could broadly categorize them all as presentational matters.) I think it's not useful for the Web for you to decree that HTML is the wrong language for this kind of thing. There's really no opportunity to launch a new format precisely for that use case. Furthermore, in practice, HTML already works fine for this kind of thing. The technical solution is there already. You just decree it wrong as a matter of principle. When introducing new Web formats is prohibitively hard and expensive, I think it doesn't make sense to take the position that something that already works is the wrong language. So you're arguing that a subset of HTML should be favored over presentational markup languages for marking up digital retypes of printed matter, with b, i, u, font, small and big being redefined to their HTML 3 typographical meanings. And perhaps blockquote standardized to mean indent. If you simply retype print without any interpretation of the typography used, a valid speech rendering would e.g. cue bold text with bold and unbold marks to convey the meaning: this text was bold.
The current definition of b does not exactly hint at such renderings. If all you want is to suggest original typographic rendering, then (save for Excerpt/Blockquote, Nofill/Pre and Lang/@lang) CSS does the job better, and is vastly more powerful. I think the reason why Jukka and others seem to be confused about your goals is that your goals here are literally incredible from the point of view of other people. Even though you've told me f2f what you believe and I want to trust that you are sincere in your belief, I still have a really hard time believing that you believe what you say you believe about the definitions of b, i and u. When, after discussing this with you f2f, I still find your position incredible, I think it's not at all strange if other people, when reading the spec text, interpret your goals inaccurately, because your goals don't seem like plausible goals to them. If the word presentational carries too much negative baggage, I suggest defining b, i and u as typographic elements on visual media (and distinctive elements on other media) and adjusting the rhetoric that HTML is a semantic markup language to HTML being a mildly semantic markup language that also has common phrase-level typographic features. The problem is that the facts that something was written underlined, spoken with a stress, and that style guides recommend underlining the text when printed to convey its semantics are not all equal. They might all be conveyed in print by underlining the text, but the semantics differ and thus each needs an element of its own. Much as authors must use ol, ul and blockquote to convey their defined meanings, even though some UAs might render all of them the same way.
Re: [whatwg] Prevent a document from being manipulated by a top document
On Tue, Aug 2, 2011 at 7:15 AM, Dennis Joachimsthaler den...@efjot.de wrote: On 02.08.2011, 13:12, Anne van Kesteren ann...@opera.com wrote: If users cannot trust their userscripts and addons (provided they can do unsafe things) they have lost already. True. We do not make standards solely to protect inexperienced users. Thank you for your insight on this matter, though. If you need to run untrusted code, consider Caja (http://code.google.com/p/google-caja/). JS itself doesn't provide the necessary mechanisms to safely execute untrusted code, so either you trust the code you are running completely (at least to the limits of what you can enforce running it in an iframe jail) or you do something like Caja. -- John A. Tamplin Software Engineer (GWT), Google
Re: [whatwg] Support for RDFa in HTML5
On Tue, 2011-08-02 at 13:55 +, aykut.sen...@bild.de wrote: I would like to know if these attributes will be part of HTML5 or is there another valid method to integrate RDFa into HTML5? Why do you need RDFa? -- Henri Sivonen hsivo...@iki.fi http://hsivonen.iki.fi/
Re: [whatwg] Support for RDFa in HTML5
On Tue, 2 Aug 2011, aykut.sen...@bild.de wrote: according to the W3C Specification: http://www.w3.org/TR/rdfa-in-html/ 1. the xmlns attribute has been replaced with the prefix attribute, example: <html prefix="rdfa: http://www.w3.org/ns/rdfa#"> 2. the RDFa declaration must be defined with the version attribute, example: <html version="HTML+RDFa 1.1"> Complete example: <html version="HTML+RDFa 1.1" prefix="rdfa: http://www.w3.org/ns/rdfa#"> But both attributes are not supported in HTML5. I would like to know if these attributes will be part of HTML5 or is there another valid method to integrate RDFa into HTML5? Any specification can define extensions to HTML that are then allowed if you use that specification. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction
This is a very nice proposal; thanks for working on this, Ryosuke! I have a number of questions and concerns on it, and I would appreciate it if you could comment on these:
1. The definition of @undoscope seems to not address the question of whether the document element should be an Undo Scope or not.
2. @undoscope seems to make it very hard to support the use case of moving the undo scope from an element to another. (I'm not sure if that is a concern that we need to address at all, though.)
3. In regard to "Should apply return a boolean value indicating whether it successfully applied or not?", answering yes means that we should make sure that we're going to be able to cleanly revert a transaction when it fails, right? Also, saying yes here means that we should decide what happens if that transaction is in a transaction group. ... all of which makes me want to say no. :-)
4. In regard to "Need to restore selection as well", is that something which we want all the time? Imagine an indent transaction which indents a paragraph by increasing its start margin; should it change the selection when it's undone?
5. I have serious doubts about the current specification of manual transactions. I don't know why we need to exclude them from group transactions, but honestly, I'm not sure why we need to have them at all. What use cases are we trying to address by manual transactions that would otherwise be impossible to address with managed transactions?
6. I think if we want to address selection saving/restoring, that part belongs to the Mutation of DOM section. We might also need to address some other editing related stuff in the DOM state, such as the keyboard layout language, selection, etc.
7. I'm not sure if we should leave the interaction of @contenteditable and @undoscope unaddressed. At the very least, we need to specify whether by default all contenteditable elements on a web page share the same undo manager or not.
If I were to pick the default, I would suggest that by default they should all share the document's undo manager.
8. As a last comment, I think a better name for UndoManager is TransactionManager, since, well, that's what it really is! :-) Cheers, Ehsan
Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction
On Tue, Aug 2, 2011 at 11:30 AM, Ehsan Akhgari eh...@mozilla.com wrote: This is a very nice proposal; thanks for working on this, Ryosuke! I have a number of questions and concerns on it, and I would appreciate it if you could comment on these: Nope! I just REALLY want to fix this. 1. The definition of @undoscope seems to not address the question of whether the document element should be an Undo Scope or not. Each document has its own undo scope: http://rniwa.com/editing/undomanager.html#undo-scope 2. @undoscope seems to make it very hard to support the use case of moving the undo scope from an element to another. (I'm not sure if that is a concern that we need to address at all, though). Right, I don't support that use case, but then I couldn't think of a case where this is useful. Also, I was concerned that this would make browsers' undo management much harder, since I don't know how Opera and IE manage undo transaction history. 3. In regard to "Should apply return a boolean value indicating whether it successfully applied or not?", answering yes means that we should make sure that we're going to be able to cleanly revert a transaction when it fails, right? Also, saying yes here means that we should decide what happens if that transaction is in a transaction group. ... all of which makes me want to say no. :-) Not necessarily. I think saying yes means that the apply function returned true and we've successfully added a new entry to the undoManager, i.e. neither apply nor the DOM mutation handlers did something insane like removing the undoManager or interfering with DOM mutation, etc. Also, if we add an editAction/transaction event, we may want to make it cancelable so that the entire transaction may be prevented (not individual mutations). So returning a boolean will let websites figure out whether a transaction was really added to the list or not. 4. In regard to "Need to restore selection as well", is that something which we want all the time?
Imagine an indent transaction which indents a paragraph by increasing its start margin; should it change the selection when it's undone? Oh, so what I mean is that the selection needs to be restored to the state before the transaction was applied. e.g. when I select then delete "hello world" and undo, I should be selecting "hello world". 5. I have serious doubts about the current specification of manual transactions. I don't know why we need to exclude them from group transactions, but honestly, I'm not sure why we need to have them at all. What use cases are we trying to address by manual transactions that would otherwise be impossible to address with managed transactions? In collaborative editing apps, it's infeasible for the UA to manage undo transaction history because their undo history will be a tree, or an arbitrary graph. Also, if you wanted to make an app that modifies both a contenteditable region and a canvas, you'll almost certainly need to modify the canvas by script manually, and yet you may want to let the UA manage the undo transaction history of the text fields. And the reason scripts want to use a manual transaction, as opposed to just modifying the document, is to update the UA's native UI. Without manual transactions or a comparable mechanism, the UA won't be able to enable undo/redo menu items or show a list of undoable items in its menu. 6. I think if we want to address selection saving/restoring, that part belongs to the Mutation of DOM section. We might also need to address some other editing related stuff in the DOM state, such as the keyboard layout language, selection, etc. That's a good point. I'd have to look into what each UA does and what needs to be preserved. Aryeh, do you have any idea as to what UAs do for native editing actions? 7. I'm not sure if we should leave the interaction of @contenteditable and @undoscope unaddressed. At the very least, we need to specify whether by default all contenteditable elements on a web page share the same undo manager or not.
If I were to pick the default, I would suggest that by default they should all share the document's undo manager. Yes, they do share the document's undo manager. I'll make sure to explicitly say that in the document. 8. As a last comment, I think a better name for UndoManager is TransactionManager, since, well, that's what it really is! :-) Alternatively, we can change the name transaction to something else, because transaction sounds too general. - Ryosuke
Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction
On Tue, Aug 2, 2011 at 1:51 PM, Eric U er...@google.com wrote: I think the manual transaction is what I'd want to make undo/redo in the edit menu work with jV [https://addons.mozilla.org/en-US/firefox/addon/jv/]. That's great to hear! I've spent so much time reconciling the way managed transactions and manual transactions interact, so it's good to know my work wasn't in vain. It looks like using manual transactions would be the straightforward way to make this work... I assume it could also be made to work with managed transactions, but I'm having trouble picturing how that would look from this early spec. Perhaps you could add a little sample code of an app making a number of small changes and merging them into a single undo record after each? Sure. The following example will add two transactions, one inserting hello and the other a br before the selection anchor, and group them into one transaction group:

myEditor.undoManager.transact(new ManualTransaction(
  function () {
    this.text = document.createTextNode('hello');
    this.nodeBefore = window.getSelection().anchorNode;
    this.nodeBefore.parentNode.insertBefore(this.text, this.nodeBefore);
  },
  function () {
    this.text.parentNode.removeChild(this.text);
  },
  function () {
    this.nodeBefore.parentNode.insertBefore(this.text, this.nodeBefore);
  }));

myEditor.undoManager.transact(new ManualTransaction(
  function () {
    this.br = document.createElement('br');
    this.nodeBefore = window.getSelection().anchorNode;
    this.nodeBefore.parentNode.insertBefore(this.br, this.nodeBefore);
  },
  function () {
    this.br.parentNode.removeChild(this.br);
  },
  function () {
    this.nodeBefore.parentNode.insertBefore(this.br, this.nodeBefore);
  }), true);

- Ryosuke
Re: [whatwg] [editing] HTML Editing APIs specification ready for implementer feedback
On Tue, Jul 26, 2011 at 5:26 PM, Aryeh Gregor simetrical+...@gmail.com wrote: Anyone reviewing the spec should be advised that I put extensive rationale in HTML comments. If you want to know why the spec says what it does, check the HTML source. I plan to change this to use details or such in the near future. Since the comments were relatively hard to spot, I've rewritten them to be visible as you read the spec. There are now tons of Comments buttons floated to the right, which contain lots of rationale and other commentary. Some are pretty terse and are reminders to me as much as anything, others are detailed explanations of the reasons behind various decisions (some inordinately long, with the toggle lists one being the most egregious). I probably introduced some editorial mistakes in the course of converting the comments, but they should be extremely helpful for review.
Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction
On Tue, Aug 2, 2011 at 2:17 PM, Ryosuke Niwa rn...@webkit.org wrote: [earlier discussion and example code snipped] Ah, sorry--I wasn't clear. How to do it with manual transactions was pretty obvious. That's one of the things I like about the API--it's very straightforward. Could you add an example of the user typing e.g. h ... e ... l ... l ...
o, via an app that's doing the DOM modifications, using managed transactions, such that a browser undo/redo will act on the whole word hello? It looks like you'd have an open transaction for a while, adding a letter at a time, and then you'd close it at some point? Thanks, Eric
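The merging Eric asks about hinges on the second (merge) argument to transact() shown in the example above. The following toy stand-in, not the spec's UndoManager API (ToyUndoManager and its shape are invented here purely for illustration), shows how a merge flag can coalesce five single-letter transactions into one undo entry:

```javascript
// Toy model (names invented; not the spec API): transact(t) opens a new
// undo entry, while transact(t, true) merges t into the most recent
// entry, so typing "h", "e", "l", "l", "o" one transaction at a time
// still undoes as a single step.
class ToyUndoManager {
  constructor() { this.entries = []; }
  transact(transaction, merge) {
    transaction.execute();
    if (merge && this.entries.length > 0) {
      this.entries[this.entries.length - 1].push(transaction);
    } else {
      this.entries.push([transaction]);
    }
  }
  undo() {
    const group = this.entries.pop() || [];
    // Unapply the grouped transactions in reverse order.
    for (let i = group.length - 1; i >= 0; i--) group[i].unexecute();
  }
}
```

Driving it with one transaction per keystroke, merging every keystroke after the first, leaves a single entry whose undo removes the whole word.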
Re: [whatwg] AppCache-related e-mails
On the subject of diagnostics for appcache: On Wed, 8 Jun 2011, Patrick Mueller wrote: On Wed, Jun 8, 2011 at 15:21, Ian Hickson i...@hixie.ch wrote: On Tue, 1 Feb 2011, Patrick Mueller wrote: I just tested Chrome beta this morning and saw nothing interesting in appcache error events, however progress events have now grown loaded and total properties (think those were the names, and I think they're new-ish). That's nice, as I can provide a progress meter during cache load/reload. I wouldn't mind having the URL of the resource being loaded (that was just loaded?) as well as those numbers. And for errors it would be nice to know, in the case of an error caused by a cache manifest entry 404'ing (or otherwise being unavailable), what URL it was. HTTP error code, if appropriate, etc. In theory, we don't want to expose this information because it can be used to introspect intranets. I never considered that introspect intranets angle. I guess the thought is that a rogue site could send a manifest with pointers to files inside someone's intranet, and then get someone inside that intranet to load that manifest, at which point JavaScript could have access to which URLs returned 200's, etc. Nasty. Right. Is this just an issue if the manifest or originating document's origin is different than a file listed in the manifest itself? Perhaps errors on these entries would have less diagnostic data available for them - perhaps no diagnostic data. That would kind of fit with other cross-origin access capabilities. That might work. What kind of information would be most useful? Should it be in the same format from every browser or should it be detailed and freeform? Start with URL, because we know a URL was involved. Then allow for an optional vendor-specific freeform message. Vendor-specific messages end up being parsed by scripts, and shortly after that we end up having to hard-code those messages as the spec. So I'd rather not add a freeform message! What is the URL for? 
Can you describe the way this information would be used in a user interface or however it would be used? I'm just trying to make sure we address the actual problems that need addressing. Regarding TLS and cross-origin requests: On Thu, 16 Jun 2011, Michael Nordman wrote: On Tue, 8 Feb 2011, Michael Nordman wrote: Just had an offline discussion about this and I think the answer can be much simpler than what's been proposed so far. All we have to do for cross-origin HTTPS resources is respect the cache-control no-store header. Let me explain the rationale... first let's back up to the motivation for the restrictions on HTTPS. They're there to defeat attacks that involve physical access to the client system, so the attacker cannot look at the cross-origin HTTPS data stored in the appcache on disk. But the regular disk cache stores HTTPS data provided the cache-control header doesn't say no-store, so excluding this data from appcaching does nothing to defeat that attack. Maybe the spec changes to make are... 1) Examine the cache-control header for all cross-origin resources (not just HTTPS), and only allow them if they don't contain the no-store directive. 2) Remove the special-case restriction that is currently in place only for HTTPS cross-origin resources. On Wed, 30 Mar 2011, Michael Nordman wrote: Fyi: This change has been made in chrome. * respect no-store headers for cross-origin resources (only for HTTPS) * allow HTTPS cross-origin resources to be listed in manifest hosted on HTTPS This seems reasonable. Done. I had proposed respecting the no-store directive only for cross-origin resources. The current draft is examining the no-store directive for all resources without regard for their origin. The intent behind the proposed change was to allow authors to continue to override the no-store header for resources in their origin, and to disallow that override only for cross-origin resources. 
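Michael's original proposal (explicit manifest listing may override no-store for same-origin resources, but not cross-origin ones) can be sketched as a small predicate. The function name and shape here are illustrative only, not anything from the spec:

```javascript
// Sketch (assumptions mine) of the rule as originally proposed: a
// resource listed in the manifest is storable in the appcache unless it
// is cross-origin AND its response carries Cache-Control: no-store.
function mayAppCache(resourceOrigin, manifestOrigin, cacheControl) {
  const noStore = /\bno-store\b/i.test(cacheControl || "");
  const crossOrigin = resourceOrigin !== manifestOrigin;
  return !(crossOrigin && noStore);
}
```

The variant the draft adopted instead would drop the crossOrigin test and honour no-store everywhere, which is the simpler rule Ian argues for below.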
The proposed change is less likely to break existing apps, and I think there are valid use cases for the existing behavior where no-store can be overridden by explicit inclusion in an appcache. I guess we can restrict no-store to cross-origin HTTPS resources, but it seems far easier to explain that no-store in general is honoured. Otherwise you end up with these weird situations where some resources can be cached and some can't, and the only reason one can or can't be stored is where the manifest is, but only if it has no-store, etc... It gets rather confusing. Also, what use cases are there for specifying no-store that don't apply across all resources? On the topic of appcache being used to cache everything but the main page: On Wed, 29 Jun 2011, Felix Halim wrote: On Thu, Jun 9, 2011 at 3:21 AM, Ian Hickson i...@hixie.ch wrote: If
Re: [whatwg] Proposal for a web application descriptor
On Wed, 27 Jul 2011, Mike Hanson wrote: On Jul 26, 2011, at 2:44 PM, Ian Hickson wrote: On Fri, 29 Apr 2011, Simon Heckmann wrote: I have read a lot in the last month about the future of HTML and web applications and I am very impressed by the progress being made. However, I have come across something that annoys me: permissions. I know they are important and I know they are needed, but currently I find this quite inconvenient. And with more and more permissions coming up this might get worse, so I spent some time thinking about it. [...] In short, the better solution isn't to ask for permissions up-front, but to ask for fewer permissions. The ideal solution is to not ask for any permission but to base the permission on a natural user gesture. For example, drag-and-drop of files to a site doesn't require permissions, but it is an implicit permission grant. Same with input type=file. With getUserMedia() we are doing something similar: instead of asking for permission, the user is asked for a specific input to be selected. snip Indeed. The system shouldn't ask for any permissions. For example instead of reading contact data, it could cause the OS to pop up a contacts list from which you can pick a contact to give access to it to the app. The challenging use case, and one that we had trouble with when we prototyped the Contacts API, is for ongoing or persistent access. The best approach we have right now is to use explicit markup to sandbox the permissions grant away from untrusted code. In the Contacts case, for example, autocomplete of email addresses, names, and phone numbers was a desired use case. A naive approach is to let the web app read the entire database and perform autocompletion in content. The safe approach, which was harder and less flexible, is to attach autocomplete behavior to input type=tel and email, and to set the autocompleted value only when the user has selected it. 
There are definite UX limitations to that approach - the content can't provide visual hinting during the autocomplete, for example (it would be nice if a photo gallery could trim down the set of photos from my friends as I narrow in on the name). The limitations create an incentive for content to try to get the full set of data anyway, through some other channel. As Roc commented, finding a way to be comfortable with a higher-level permissions grant that persisted over a longer span could be one way to address that. One way to grant access to the whole database is to have the user drag the database into the app. Without knowing more about the concrete use case, though, it's hard to say exactly what the right solution is. Can you elaborate? What kind of application is this for, and what is the expected user interaction? Going forward, it's possible that the address book would actually be just in a Web app, and granting access might really consist of dragging a MessagePort from the address book app to the other app. This would then allow the address book app to grant rights to the other app, including potentially bootstrapping a longer-term relationship (e.g. server-side). On Wed, 27 Jul 2011, Cameron Heavon-Jones wrote: The mapping of tel and email inputs to a contacts list is functional, not systematic. Might this be extended for some other inputs: date(*), url, search etc? This functionality may be declared and defined through a new attribute, since autocomplete is already used, something like autoassist? Maybe this would be able to over-ride the default file input behaviour of launching a popup in the case i just want to manually enter a file:/// I'm not sure I really understand what you are describing here. There are definite UX limitations to that approach - the content can't provide visual hinting during the autocomplete, for example (it would be nice if a photo gallery could trim down the set of photos from my friends as I narrow in on the name). 
This would seem to be ok as long as the contents remain sandboxed until selection is confirmed. Assuming the photos are server-side, there's no way to do this without giving the app essentially full read access to the contacts. It would be nice for a page/site/app to still be able to access a full contacts list if desired. Though it would seem to extend the integration into the full Contacts API which is of far larger scope. There is definitely a question of whether there should be an API for this specific case or if there are so many that we need a generic solution. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
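The input-level approach discussed in this thread might look something like the following hypothetical markup sketch, where the UA (not the page) attaches contact-based autocompletion to the typed inputs, and the page only ever receives the single value the user picks:

```html
<!-- Hypothetical sketch, not an agreed-upon API: the UA offers
     contact-based autocompletion on these inputs; the page receives
     only the selected value, never the address book itself. -->
<label>To: <input type="email" name="to"></label>
<label>Phone: <input type="tel" name="phone"></label>
```

This is the "safe approach" Mike describes: the grant is implicit in the user's selection, at the cost of the page being unable to style or filter the suggestion list.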
Re: [whatwg] Small consistency issue with HTML5 nav element examples
On Wed, 4 May 2011, Bjartur Thorlacius wrote: On 5/4/11, Ian Hickson i...@hixie.ch wrote: IMO browsers should implement link. link should be implementable cross-browser in CSS. Unfortunately, what we want and what we get don't always match. :-) On a more serious note, implementing link can't be that hard. It's not a matter of it being hard. Some browsers have even implemented it and then dropped support. I'll probably patch my UA myself when I get the graphics layer working on my system (or just use links2). But I'm slowly coming to the conclusion that a should be used for creating hyperlinks that seem to belong to head, in a tree of html > body > aside > a, for compatibility with mainstream UAs. That seems fine to me. My actual concern is with navigation links not forming a part of the linear body of the document, but still being in body. Navigation links will most likely be rendered out of band, potentially only on demand and paged/scrolled separately from the body, or at the end of the document in one-dimensional renderings (such as audio and text streams). They might even be triggered without being rendered at all, such as by scrolling out of range of the current document. It seems most authors desire far more control over their navigation links. On many pages, it's almost as if the navigation links are more important to the authors than the content, at least when you look at the amount of effort put into them... Sadly, the things authors desire may conflict with the things users desire. I also desire control over navigation links (among many other things). From authors, I desire only content. Unfortunately, as I said above... what we want and what we get don't always match. There's not much we can do here to push authors further. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] sic element
On Tue, Aug 2, 2011 at 11:10 PM, Bjartur Thorlacius svartma...@gmail.com wrote: On Tue, 2 Aug 2011 at 09:04, Henri Sivonen wrote: On Fri, 2011-07-29 at 22:39 +, Ian Hickson wrote: Presentational markup may convey useful information, for example that a quotation from printed matter contains an underlined word. HTML is the wrong language for this kind of thing. I disagree. From time to time, people want to take printed matter and publish it on the Web. In practice, the formats available are PDF and HTML. HTML works more nicely in browsers and for practical purposes works generally better when the person taking printed matter to the Web decides that the exact line breaks and the exact font aren't of importance. They may still consider it of importance to preserve bold, italic and underline and maybe even delegate that preservation to OCR software that has no clue about semantics. (Yes, bold, italic and underline are qualitatively different from line breaks and the exact font even if you could broadly categorize them all as presentational matters.) I think it's not useful for the Web for you to decree that HTML is the wrong language for this kind of thing. There's really no opportunity to launch a new format precisely for that use case. Furthermore, in practice, HTML already works fine for this kind of thing. The technical solution is there already. You just decree it wrong as a matter of principle. When introducing new Web formats is prohibitively hard and expensive, I think it doesn't make sense to take the position that something that already works is the wrong language. So you're arguing that a subset of HTML should be favored over presentational markup languages for marking up digital retypes of printed matter, with b, i, u, font, small and big redefined to their HTML 3 typographical meanings. And perhaps blockquote standardized to mean indent. If you simply retype print without any interpretation of the typography used, a valid speech rendering would e.g. 
cue bold text with bold and unbold marks to convey the meaning: this text was bold. The current definition of b does not exactly hint at such renderings. If all you want is to suggest original typographic rendering, then (save for Excerpt/Blockquote, Nofill/Pre and Lang/@lang) CSS does the job, better - and is vastly more powerful. I think the reason why Jukka and others seem to be confused about your goals is that your goals here are literally incredible from the point of view of other people. Even though you've told me f2f what you believe and I want to trust that you are sincere in your belief, I still have a really hard time believing that you believe what you say you believe about the definitions of b, i and u. When after discussing this with you f2f, I still find your position incredible, I think it's not at all strange if other people when reading the spec text interpret your goals inaccurately because your goals don't seem like plausible goals to them. If the word presentational carries too much negative baggage, I suggest defining b, i and u as typographic elements on visual media (and distinctive elements on other media) and adjusting the rhetoric that HTML is a semantic markup language to HTML being a mildly semantic markup language that also has common phrase-level typographic features. The problem is that the facts that something was written underlined, spoken with a stress and that style guides recommend underlining the text when printed to convey its semantics are not all equal. They might all be conveyed in print by underlining the text, but the semantics differ and thus each needs an element of its own. Much as authors must use ol, ul and blockquote to convey their defined meanings, even though some UAs might render all of them the same way. I don't see why we need to throw out the baby with the bathwater. In my mind, HTML5 is good both for semantic markup (i.e. application development) and for content presentation (i.e. document publication). 
Some elements serve one purpose better than the other (such as u, b, i being mostly presentational), others serve both purposes equally (like ul, ol). It's been a mix from the start and both a blessing and a curse. Trying to ignore that history will only give us confused users, not better markup. Cheers, Silvia.
Re: [whatwg] Browsers delay window.print() action until page load finishes
On Wed, 4 May 2011, Alexey Proskuryakov wrote: 04.05.2011, at 15:38, Ian Hickson wrote: There seems to be no provision in the spec for a behavior Firefox and IE (and now WebKit-based browsers, too) have. If window.print() is called during page load, then its action is delayed until loading is finished. I haven't been able to reproduce this. I can reproduce cases where the browser entirely ignores a window.print() call (as allowed by the spec), but none where the call is delayed until later. Do you have a test case demonstrating this? Yes - for example, http://leiz.org/chromium/25027.htm. Basically, it's: <script>window.print();</script> <p>Print me</p> Safari 5 prints a blank page, while IE and Firefox print "Print me". WebKit nightly builds print "Print me", too. Notably, we've seen different results in Firefox when printing file: vs. http: documents. I'd be happy to spec this, I'm just trying to work out what it means with respect to event firing, etc (the rest of the algorithm). Presumably, in many cases we want it to be synchronous as now, since pages presumably depend on being able to edit the DOM before and after. There are a number of subtleties, some of which we know about from a discussion in https://bugs.webkit.org/show_bug.cgi?id=43658. E.g. what happens if window.print() is called multiple times during loading, or if window.close() is called immediately afterwards (which happens on live sites in the window.open()/write()/print()/close() scenario). And yes, we only defer window.print() if the document is still loading at the time of the call. There are obviously multiple definitions of loading possible for this feature. On Wed, 4 May 2011, Boris Zbarsky wrote: In Gecko's case if a print operation is pending then further calls to print() are effectively ignored. In Gecko, if window.close() is called while a print operation is pending or while printing is in progress (printing is async), the close is deferred until the printing stuff is done. 
Note that the tab/window the user sees may still appear to go away in the meantime, but the internal data structures are kept alive until the print operation completes. Or at least that's what the code is trying to do; I haven't tested how it works in practice. I _think_ Gecko's current code just aims for "has onload started firing?" I've tried to spec this. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
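The deferral behavior described in this thread can be modeled as a small wrapper. This is a hypothetical sketch, not spec text or any browser's implementation; makeDeferredPrint and its mock-friendly parameters are invented here so the logic can be exercised outside a browser:

```javascript
// Sketch of the behavior under discussion: a print() call made while
// the document is still loading is queued and performed once after the
// load event; further calls while one is pending are ignored, matching
// the Gecko behavior Boris describes.
function makeDeferredPrint(doc, win, realPrint) {
  let pending = false;
  return function print() {
    if (doc.readyState === "complete") {
      realPrint();               // document loaded: print right away
    } else if (!pending) {       // still loading: queue at most one print
      pending = true;
      win.addEventListener("load", function () {
        pending = false;
        realPrint();
      });
    }
  };
}
```

Whether "still loading" should mean "onload has not started firing" or something else is exactly the ambiguity the thread is trying to pin down.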
Re: [whatwg] AppCache-related e-mails
A common request that maybe we can agree upon is the ability to list the manifests that are cached and to delete them via script. Something like... String[] window.applicationCache.getManifests(); // returns appcache manifests for the origin void window.applicationCache.deleteManifest(manifestUrl); This is trivial to do already; just return 404s for all the manifests you no longer want to keep around. It involves creating hidden iframes loaded with pages that refer to the manifests to be deleted, straightforward but gunky. 0. [DONE] A means of not invoking the fallback resource for some error responses that would generally result in the fallback resource being returned. An additional response header would suit their needs... something like... x-chromium-appcache-fallback-override: disallow-fallback If a response header is present with that value, the fallback response would not be returned. http://code.google.com/p/chromium/issues/detail?id=82066 What's the use case? When would you ever want to show the user an error yet really desire to indicate that it's an error and not a 200 OK response? Google Docs. Instead of seeing a fallback page that erroneously says You must be offline and this document is not available., they wanted to show the actual error page generated by the server in the case of a deleted document or when the user doesn't have rights to access that doc.
Re: [whatwg] AppCache-related e-mails
On Tue, 2 Aug 2011, Michael Nordman wrote: A common request that maybe we can agree upon is the ability to list the manifests that are cached and to delete them via script. Something like... String[] window.applicationCache.getManifests(); // returns appcache manifests for the origin void window.applicationCache.deleteManifest(manifestUrl); This is trivial to do already; just return 404s for all the manifests you no longer want to keep around. It involves creating hidden iframes loaded with pages that refer to the manifests to be deleted, straightforward but gunky. If you actively want to seek out old manifests, sure, but what's the use case for doing that? It would be like trying to actively evict things from HTTP caches. 0. [DONE] A means of not invoking the fallback resource for some error responses that would generally result in the fallback resource being returned. An additional response header would suit their needs... something like... x-chromium-appcache-fallback-override: disallow-fallback If a response header is present with that value, the fallback response would not be returned. http://code.google.com/p/chromium/issues/detail?id=82066 What's the use case? When would you ever want to show the user an error yet really desire to indicate that it's an error and not a 200 OK response? Google Docs. Instead of seeing a fallback page that erroneously says You must be offline and this document is not available., they wanted to show the actual error page generated by the server in the case of a deleted document or when the user doesn't have rights to access that doc. I don't see what's wrong with using 200 OK for that case. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] AppCache-related e-mails
On Tue, Aug 2, 2011 at 4:40 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 2 Aug 2011, Michael Nordman wrote: A common request that maybe we can agree upon is the ability to list the manifests that are cached and to delete them via script. Something like... String[] window.applicationCache.getManifests(); // returns appcache manifests for the origin void window.applicationCache.deleteManifest(manifestUrl); This is trivial to do already; just return 404s for all the manifests you no longer want to keep around. It involves creating hidden iframes loaded with pages that refer to the manifests to be deleted, straightforward but gunky. If you actively want to seek out old manifests, sure, but what's the use case for doing that? It would be like trying to actively evict things from HTTP caches. 0. [DONE] A means of not invoking the fallback resource for some error responses that would generally result in the fallback resource being returned. An additional response header would suit their needs... something like... x-chromium-appcache-fallback-override: disallow-fallback If a response header is present with that value, the fallback response would not be returned. http://code.google.com/p/chromium/issues/detail?id=82066 What's the use case? When would you ever want to show the user an error yet really desire to indicate that it's an error and not a 200 OK response? Google Docs. Instead of seeing a fallback page that erroneously says You must be offline and this document is not available., they wanted to show the actual error page generated by the server in the case of a deleted document or when the user doesn't have rights to access that doc. I don't see what's wrong with using 200 OK for that case. You should talk to the app developers. I think there are other consumers of these urls besides the browser. To change the status code to 200 would break those other consumers. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. 
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] AppCache-related e-mails
On Tue, Aug 2, 2011 at 4:40 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 2 Aug 2011, Michael Nordman wrote: A common request that maybe we can agree upon is the ability to list the manifests that are cached and to delete them via script. Something like... String[] window.applicationCache.getManifests(); // returns appcache manifests for the origin void window.applicationCache.deleteManifest(manifestUrl); This is trivial to do already; just return 404s for all the manifests you no longer want to keep around. It involves creating hidden iframes loaded with pages that refer to the manifests to be deleted, straightforward but gunky. If you actively want to seek out old manifests, sure, but what's the use case for doing that? It would be like trying to actively evict things from HTTP caches. You should talk to some app developers. View source on angry birds for a use case, they are doing this to get rid of stale versions tied to old manifest urls. 0. [DONE] A means of not invoking the fallback resource for some error responses that would generally result in the fallback resource being returned. An additional response header would suit their needs... something like... x-chromium-appcache-fallback-override: disallow-fallback If a response header is present with that value, the fallback response would not be returned. http://code.google.com/p/chromium/issues/detail?id=82066 What's the use case? When would you ever want to show the user an error yet really desire to indicate that it's an error and not a 200 OK response? Google Docs. Instead of seeing a fallback page that erroneously says You must be offline and this document is not available., they wanted to show the actual error page generated by the server in the case of a deleted document or when the user doesn't have rights to access that doc. I don't see what's wrong with using 200 OK for that case. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. 
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] AppCache-related e-mails
On Tue, 2 Aug 2011, Michael Nordman wrote: If you actively want to seek out old manifests, sure, but what's the use case for doing that? It would be like trying to actively evict things from HTTP caches. You should talk to some app developers. View source on angry birds for a use case, they are doing this to get rid of stale versions tied to old manifest urls. But why? I couldn't figure out the use case from the source you mention. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] AppCache-related e-mails
On Mon, 13 Jun 2011, Michael Nordman wrote: Let's say there's a page in the cache to be used as a fallback resource, which refers to the manifest by relative url... <html manifest='x'> Depending on the url that invokes the fallback resource, 'x' will be resolved to different absolute urls. When it doesn't match the actual manifest url, the fallback resource will get tagged as FOREIGN and will no longer be used to satisfy main resource loads. I'm not sure if this is a bug in chrome or a bug in the appcache spec just yet. I'm pretty certain that Safari will have the same behavior as chrome in this respect (the same bug). The value of the manifest attribute is interpreted as relative to the location of the loaded document in chrome and all webkit based browsers and that value is used to detect foreign'ness. The workaround/solution for this is to NOT put a manifest attribute in the html tag of the fallback resource (or to put either an absolute url or host relative url as the manifest attribute value). Or just make sure you always use relative URLs, even in the manifest. I don't really understand the problem here. Can you elaborate further? Suppose the fallback resource is set up like this... FALLBACK: / FallbackPage.html ... and that page contains a relative link to the manifest in its html tag like so... <html manifest="file.manifest"> Any server request that fails under / will get FallbackPage.html in response. For example... /SomePage.html When the fallback is used in this case the manifest url will be interpreted as /file.manifest /Some/Other/Page.html And in this case the manifest url will be interpreted as /Some/Other/file.manifest On Fri, 1 Jul 2011, Michael Nordman wrote: Cross-origin resources listed in the CACHE section aren't retrieved with the 'Origin' header This is incorrect. They are fetched with the origin of the manifest. What makes you say no Origin header is included? I don't see mention of that in the draft? If that were the case then this wouldn't be an issue. 
I'm not familiar with CORS usage. Do xorigin subresource loads of all kinds (.js, .css, .png) carry the Origin header? I can imagine a server implementation that would examine the Origin header upfront and, if it didn't like what it saw, instead of computing the full response and merely omitting the origin from the Access-Control-Allow-Origin response header, it just wouldn't compute the response body at all and would return an empty response without the origin listed in Access-Control-Allow-Origin. If general subresource loads aren't sent with the Origin header, fetching all manifest-listed resources with that header set could cause problems.
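The server behavior Michael imagines can be sketched as pure request-handling logic. This is a hypothetical illustration (ALLOWED and corsResponse are invented names, not anything from CORS or the appcache spec), showing why adding an Origin header to manifest fetches could change what such a server sends back:

```javascript
// Sketch of a server that inspects the Origin request header up front.
// Requests with no Origin header (ordinary subresource loads) are served
// normally; requests from an unrecognized origin get an empty body and
// no Access-Control-Allow-Origin header at all.
const ALLOWED = new Set(["https://app.example.com"]);

function corsResponse(requestHeaders, computeBody) {
  const origin = requestHeaders["origin"];
  if (origin === undefined) {
    // No Origin header: a plain subresource load, served normally.
    return { headers: {}, body: computeBody() };
  }
  if (ALLOWED.has(origin)) {
    return { headers: { "Access-Control-Allow-Origin": origin },
             body: computeBody() };
  }
  // Origin present but not allowed: don't even compute the body.
  return { headers: {}, body: "" };
}
```

Under this model, a resource that loads fine as a plain subresource would come back empty once the appcache fetch starts sending an Origin header the server doesn't recognize, which is exactly the breakage being worried about.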
Re: [whatwg] [editing] HTML Editing APIs specification ready for implementer feedback
On Wed, Jul 27, 2011 at 4:51 PM, Ryosuke Niwa rn...@webkit.org wrote:

Feedback on sections 1 through 3:
- WebKit treats any font-weight above or equal to 600 as bold because that's what the user sees, and boldness is a binary concept in execCommand; Firefox 5 appears to do the same.
- WebKit compares colors in rgb/rgba format; e.g. red is first parsed as rgb(255, 0, 0). Firefox 5 seems to do the same as well.
- WebKit compares font sizes in the legacy font size used in the font element; see CSSStyleSelector::legacyFontSize or legacyFontSizeFromCSSValue in EditingStyle.cpp.
- Throwing SYNTAX_ERR might cause a backward compatibility issue because the UAs don't throw an error now. We can probably throw console messages first to give authors some grace period to transition.
- For the font element vs. span with style issue, we could add another boolean flag that forces UAs to toggle font elements; i.e. add a StyleWithFont command.
- 3.2: The prefix webkit- doesn't seem natural given all editing commands use camel case. Maybe Ms, Gecko, WebKit and Opera? e.g. WebKitFontSizeDelta. But again this might cause a backward compatibility issue because we do implement a few editing commands that are not in the spec and they are not prefixed.
- 3.3: The return value of queryCommandEnabled depends on beforecut, beforecopy, and beforepaste events and selection state in WebKit; WebKit returns false if default actions are prevented in those events or the selection is not a range. Firefox 5 appears to do the same for selection but doesn't seem to fire beforecut, beforecopy, and beforepaste.

Feedback on sections 5 through 7:
- The definition of collapsed line break isn't clear. Maybe it's a br immediately before the end of a block?
- Isn't this essentially the collapsed line break and a br inside a block where br is the sole visible node?
- The definition of visible should definitely take display: none and visibility: hidden into account. Also, you should take collapsed whitespace into account; e.g. br is invisible even though there are Text nodes before and after br. CSS2.1 spec section 16.6.1 has some elaboration on how whitespace is collapsed.
- Step 3 in remove extraneous line breaks before seems redundant because we traverse the tree in the reversed tree order in step 4.
- What are sibling criteria and new parent instructions in 6.2?
- Also, where would new parent be inserted if new parent's parent was not null? Or will it stay where it was?
- I'm not sure why we'd need to add br's so aggressively in this algorithm.
- Can 6.3 be tied to the phrasing content concept used in the rest of the HTML5 spec?
- 7.2: Firefox appears to differentiate backColor and hiliteColor; namely, backColor is always the first non-transparent background color of the block ancestors.
- 7.2: The last time I checked, only WebKit respected vertical-align for sub and sup, so I'm not sure we should keep that. It appears that all other browsers ignore vertical-align.
- 7.6: In WebKit, we have the concept of *typing style*, which is a collection of CSS styles that will be applied when the user types characters (uses b, i, etc. when StyleWithCSS is false). Maybe we can introduce this concept in the spec, and step 2 in 7.6 can update that?
- 7.7: Should we assume two colors are the same if both of them have alpha=0, since they are transparent anyway?
- 7.8: WebKit (and Firefox 5, as far as I checked) regards 700, 800, 900 as bold.
- The algorithm to compute the legacy font size in 7.11 doesn't really match the one WebKit and Firefox implement. Maybe it's better to say it's the number between 1 and 7 such that it would have produced the closest font size to the resolved value of font-size in pixels when used in the font element's size attribute.
- 7.15: WebKit uses a blacklist. And IE doesn't modify any inline style declaration, so I'd vote for black-listing. 
I did extensive research about this when I recently re-implemented WebKit's RemoveFormat: https://bugs.webkit.org/show_bug.cgi?id=43017 I've read a part of sections 7 and 8 but I'm kind of lost here. The spec is very detailed and I can't really get a high-level view of what will happen. It might be helpful to have some high-level summary of what it tries to do for each algorithm, something like the one at the beginning of 7.6. I'm mainly concerned that there doesn't seem to be a good way for me to check whether the current implementation is consistent with your spec because the spec is defined in terms of algorithms. Indeed, it's an NP-hard problem :( Also, I'm not certain there's much value in each browser matching the spec exactly. I feel like we need to have some kind of tolerance level as done in browserscope's RTE2 test suite (+roland since he worked on this stuff). Test suites like