[Bug 27418] New: [Shadow]: Need to define what .styleSheets actually does on a shadow root
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27418

Bug ID: 27418
Summary: [Shadow]: Need to define what .styleSheets actually does on a shadow root
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: All
Status: NEW
Severity: normal
Priority: P2
Component: Component Model
Assignee: dglaz...@chromium.org
Reporter: bzbar...@mit.edu
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 14978

The spec says:

  On getting, the attribute must return a StyleSheetList sequence containing
  the shadow root style sheets.

The term "shadow root style sheets" is not defined anywhere. In fact, the string "sheet" does not appear anywhere else in this spec. This needs to actually be defined, with particular attention to older shadow roots (e.g. do things in older shadow trees even load/parse their stylesheets?).

-- 
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 27420] New: [Custom]: need a hook for transferring data while cloning elements
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27420

Bug ID: 27420
Summary: [Custom]: need a hook for transferring data while cloning elements
Product: WebAppsWG
Version: unspecified
Hardware: PC
OS: Windows NT
Status: NEW
Severity: normal
Priority: P2
Component: Component Model
Assignee: dglaz...@chromium.org
Reporter: d...@domenic.me
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 14968

Many elements have internal state that should be cloned when using `cloneNode()`. From https://dom.spec.whatwg.org/#concept-node-clone:

  Run any cloning steps defined for node in other applicable specifications
  and pass copy, node, document and the clone children flag if set, as
  parameters.

For example, in HTML the input element specifies:

  The cloning steps for input elements must propagate the value, dirty value
  flag, checkedness, and dirty checkedness flag from the node being cloned to
  the copy.

This behavior should be hookable by authors as well for their custom elements. My proposal is that we introduce a clonedCallback(source, dest) that, for cloned nodes, is called after the createdCallback with the original as the source and the new clone as the dest.

Ideally (probably later) we should also redefine DOM to delegate to clonedCallback instead of to "other applicable specifications", and then HTML should specify the clonedCallback behavior instead of specifying the cloning steps. That way, if you e.g. create a custom element that extends an input element, you can call `super.clonedCallback(source, dest)` inside your own `clonedCallback`.
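The proposed hook order (createdCallback on the copy first, then clonedCallback with the original and the clone) can be sketched with plain objects standing in for DOM nodes. This is a simulation of the proposal only: clonedCallback is not a shipping API, and the FancyInput class and cloneElement helper are invented for illustration.

```javascript
// Simulated custom element. The constructor plays the role of
// createdCallback: it sets up default internal state for a fresh node.
class FancyInput {
  constructor() {
    this.value = "";
    this.dirtyValue = false;
  }

  // Proposed hook: invoked on the clone after creation, with the original
  // node as `source` and the new clone as `dest`.
  clonedCallback(source, dest) {
    dest.value = source.value;
    dest.dirtyValue = source.dirtyValue;
  }
}

// Simulated cloning steps: create the copy (running "createdCallback" via
// the constructor), then call clonedCallback(original, copy).
function cloneElement(original) {
  const copy = new FancyInput();
  copy.clonedCallback(original, copy);
  return copy;
}

const input = new FancyInput();
input.value = "hello";
input.dirtyValue = true;

const clone = cloneElement(input);
console.log(clone.value);      // "hello"
console.log(clone.dirtyValue); // true
```

A subclass could propagate its own extra state and then delegate upward with `super.clonedCallback(source, dest)`, which is the layering the second half of the proposal is after.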
[Bug 24338] Spec should have Fetch for Blob URLs
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24338

Arun a...@mozilla.com changed:

  Status: REOPENED → RESOLVED
  Resolution: --- → FIXED

--- Comment #25 from Arun a...@mozilla.com ---
Resolving this. Fetch's body now has an error flag which is set by the read operation: http://dev.w3.org/2006/webapi/FileAPI/#readOperationSection
[Bug 27420] [Custom]: need a hook for transferring data while cloning elements
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27420

Adam Klein ad...@chromium.org changed:

  Status: NEW → RESOLVED
  CC: added ad...@chromium.org
  Resolution: --- → DUPLICATE

--- Comment #4 from Adam Klein ad...@chromium.org ---
*** This bug has been marked as a duplicate of bug 24570 ***
RE: =[xhr]
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt]

> IMO, exposing such degree of (low level) control should be avoided.

I disagree on principle :). If we want true webapps we need to not be afraid to give them capabilities (like POSTing data to S3) that native apps have.

> In cases where the size of the body is known beforehand, Content-Length
> should be generated automatically; in cases where it is not, chunked
> encoding should be used.

I agree this is a nice default. However it should be overridable for cases where you know the server in question doesn't support chunked encoding.
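The default being discussed, plus the override, can be sketched as a small framing decision: known-length bodies get an automatic Content-Length, unknown-length (streamed) bodies fall back to chunked encoding, and the author may force a declared length for servers without chunked support. The `framingHeaders` function and its `forceContentLength` option are invented names for illustration, not part of any shipping API.

```javascript
// Decide how a request body should be framed, per the default discussed
// above. Returns the header the user agent would generate.
function framingHeaders(body, options = {}) {
  // Length is knowable up front for strings and byte arrays; a stream's
  // total length is generally unknown.
  const knownLength =
    typeof body === "string" ? Buffer.byteLength(body) :
    body instanceof Uint8Array ? body.byteLength :
    null;

  if (knownLength !== null) {
    // Size known beforehand: generate Content-Length automatically.
    return { "Content-Length": String(knownLength) };
  }
  if (options.forceContentLength !== undefined) {
    // Author override for servers that do not support chunked encoding:
    // declare a length up front even though the body is streamed.
    return { "Content-Length": String(options.forceContentLength) };
  }
  // Size unknown: fall back to chunked transfer coding.
  return { "Transfer-Encoding": "chunked" };
}

console.log(framingHeaders("hello"));          // Content-Length: 5
console.log(framingHeaders({ stream: true })); // Transfer-Encoding: chunked
```

The override is exactly what creates the mismatch scenarios debated later in this thread, since the declared length and the bytes actually written can then disagree.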
RE: =[xhr]
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt]

> If you absolutely need to stream content whose length is unknown beforehand
> to a server not supporting chunked encoding, construct your web service so
> that it supports multiple POSTs (or whatever), one per piece of data to
> upload.

Unfortunately I don't control Amazon's services or servers :(
Re: =[xhr]
On Wed, Nov 19, 2014 at 1:45 AM, Domenic Denicola d...@domenic.me wrote:

From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of Anne van Kesteren

On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:

  How about padding the remaining bytes forcefully with e.g. 0x20 if the
  WritableStream doesn't provide enough bytes to us?

How would that work? At some point when the browser decides it wants to terminate the fetch (e.g. due to timeout, tab being closed) it attempts to transmit a bunch of useless bytes? What if the value is really large?

It's a problem that we'll provide a very easy way (compared to building a big ArrayBuffer by doubling its size repeatedly) for a malicious script to have a user agent send very large data. So, we might want to place a limit on the maximum size of Content-Length that doesn't hurt the benefit of streaming upload too much.

I think there are several different scenarios under consideration:

1. The author says Content-Length 100, writes 50 bytes, then closes the stream.
2. The author says Content-Length 100, writes 50 bytes, and never closes the stream.
3. The author says Content-Length 100, writes 150 bytes, then closes the stream.
4. The author says Content-Length 100, writes 150 bytes, and never closes the stream.

It would be helpful to know how most servers handle these. (Perhaps HTTP specifies a mandatory behavior.) My guess is that they are very capable of handling such situations; 2 in particular resembles a long-polling setup.

As for whether we consider this kind of thing an attack, instead of just a new capability, I'd love to get some security folks to weigh in. If they think it's indeed a bad idea, then we can discuss mitigation strategies; 3 and 4 are easily mitigable, whereas 1 could be addressed by an idea like Takeshi's. I don't think mitigating 2 makes much sense, as we can't know when the author intends to send more data.
The extra 50 bytes in cases 3 and 4 should definitely be ignored by the user agent, which should probably also error the WritableStream when extra bytes are written. Case 2 is useful but a new situation for web apps. I agree that we should consult security experts.
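The mitigation suggested above for cases 3 and 4 (ignore the excess bytes and error the stream) can be sketched as a byte-counting sink. The LengthCheckedSink class and its method names are invented for illustration; a real user agent would do this inside its fetch upload machinery, not in script.

```javascript
// Track bytes written against the declared Content-Length and error the
// stream as soon as a write would exceed it (cases 3 and 4 above).
class LengthCheckedSink {
  constructor(declaredLength) {
    this.declaredLength = declaredLength;
    this.bytesWritten = 0;
    this.errored = false;
  }

  write(chunk) {
    if (this.errored) throw new Error("stream already errored");
    if (this.bytesWritten + chunk.byteLength > this.declaredLength) {
      // Excess bytes are dropped, and the stream is put in an errored
      // state so further writes fail.
      this.errored = true;
      throw new Error("body exceeds declared Content-Length");
    }
    this.bytesWritten += chunk.byteLength;
  }

  close() {
    // Case 1 shows up here: closing short of the declared length means the
    // server receives fewer bytes than Content-Length promised. Report
    // whether the declared length was actually met.
    return this.bytesWritten === this.declaredLength;
  }
}
```

Case 2 (never closing) is invisible to this check by construction, which matches the point above that it cannot be mitigated without guessing the author's intent.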