[Bug 23772] New: Just return read result when there's data synchronously readable
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23772

Bug ID: 23772
Summary: Just return read result when there's data synchronously readable
Product: WebAppsWG
Version: unspecified
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P2
Component: Streams API
Assignee: tyosh...@google.com
Reporter: tyosh...@google.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

It's currently implemented in the preview version: https://dvcs.w3.org/hg/streams-api/raw-file/tip/preview.html

ByteStreamReadResult.data is a Promise when no data is synchronously readable. Otherwise, data is the actual data. This is inspired by e.g. Chrome's network stack code (ERR_IO_PENDING). It's efficient, but I got feedback against this from Elliott. Quoting:

- It's really weird that sometimes this thing is a Promise and sometimes it isn't.
- It also doesn't make sense that the data property is a Promise, since the size and eof properties cannot be known until the promise is resolved, so we might as well return a PromiseResult instead of a Result with a Promise property and size/eof properties external to it.
- Having the value jump between a Promise and real data based on network latency is going to be very error-prone for authors; you can probably write your code all one way if your network is fast and then see it fail when sometimes it's slower.
- I'd suggest always returning a Promise and dealing with it at microtask time for the synchronous case.
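Elliott's last point — always return a Promise, resolving at microtask time even when data is already buffered — can be sketched roughly like this. The `Reader` class, `deliver()` method, and result shape are illustrative assumptions, not spec text:

```javascript
// Rough sketch of "always return a Promise": read() has a single return
// shape, and synchronously available data is still delivered at microtask
// time via an already-resolved Promise. Supports one pending read, for brevity.
class Reader {
  constructor() {
    this.buffer = [];           // data already received from the source
    this.pendingResolve = null; // resolver for an outstanding read()
  }
  read() {
    if (this.buffer.length > 0) {
      // Synchronous case: wrap buffered data in a resolved Promise instead
      // of returning it directly, so callers handle exactly one shape.
      return Promise.resolve({ data: this.buffer.shift(), eof: false });
    }
    // Asynchronous case: resolve once data arrives.
    return new Promise(resolve => { this.pendingResolve = resolve; });
  }
  deliver(data) {
    if (this.pendingResolve) {
      const resolve = this.pendingResolve;
      this.pendingResolve = null;
      resolve({ data, eof: false });
    } else {
      this.buffer.push(data);
    }
  }
}
```

Either way, the caller writes `reader.read().then(({ data, eof }) => …)` and never has to branch on whether the result is a Promise or real data.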
Re: Thoughts behind the Streams API ED
Please see here: https://github.com/whatwg/streams/issues/33. I realized that this would apply to operations like TextDecoder too, without the need for an explicit stream option, so it's no longer related only to WebCrypto.

Regards,
Aymeric

Le 07/11/2013 11:25, Aymeric Vitte a écrit :

Le 07/11/2013 10:42, Takeshi Yoshino a écrit :

On Thu, Nov 7, 2013 at 6:27 PM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

Le 07/11/2013 10:21, Takeshi Yoshino a écrit :

On Thu, Nov 7, 2013 at 6:05 PM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

stop/resume: Indeed, as I mentioned, this is related to WebCrypto Issue22, but I don't think this is a unique case. Issue22 was closed for lack of proposals to solve it; apparently I was the only one to care about it (though I saw recently some other messages that seem related), and ultimately it would involve a public clone method with associated security concerns. But with Streams it could be different: the application will internally clone the state of the operation, probably eliminating the security issues, as simple as that.

To describe the use case simply, let's take a progressive hash computing 4 bytes by 4 bytes:

- incoming stream: ABCDE bytes
- hash operation: process ABCD, keep E for the next computation
- incoming stream: FGHI bytes + STOP-EOF
- hash operation: process EFGH; on STOP-EOF: clone the state of the hash, close the operation, digest the hash with I

So, here, a partial hash for ABCDEFGH is output

No, you get the digest for ABCDEFGHI, and you get a cloned operation which will restart from ABCDEFGH

OK.

resume:

- incoming stream: JKLF
- hash operation (clone): process IJKL, keep F for the next computation
- etc...

and if we close the stream here we'll get a hash for ABCDEFGHIJKLFPPP (P is padding). Right?

If you close the stream here you get the digest for ABCDEFGHIJKLF

resume happens implicitly when new data comes in, without an explicit method call, say resume()?
Good question. I would say yes, so we don't need resume after all, but maybe others have a different opinion; let's ask the WHATWG whether they foresee this case.

So you do not restart the operation as if it were the first time it was receiving data; you just continue it from the state it was in when stop was received. That's not so unusual; it has been requested many times in node.

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Re: [coord] Request for Inter-Group Coordination at TPAC
Here’s the updated draft we’ll be discussing in Monday’s meeting. FWIW, it’s a ground-up rewrite since last week, so it’s still loaded with todos and editorial notes.

https://dvcs.w3.org/hg/IndieUI/raw-file/default/src/indie-ui-context.html#toc

I know many of you are traveling, so if you don’t have time to do a thorough review before then, I’d suggest spending 5 minutes to read the Introduction and then Section 1.1, including the example in 1.1.2 “Example restricted call to matchMedia”…

Thanks for your consideration.

On Nov 1, 2013, at 7:59 AM, James Craig <jcr...@apple.com> wrote:

+ public-webapps

On Nov 1, 2013, at 7:44 AM, James Craig <jcr...@apple.com> wrote:

Hi Art.

On Nov 1, 2013, at 5:32 AM, Arthur Barstow <art.bars...@nokia.com> wrote:

To help WebApps prepare for the discussion, I think it would be helpful if you would please provide some background information. For example, the URL of the IndieUI User Context specification, and the proposal for a new way for user agents to restrict access to certain groups of media resources.

The new approach we’re hoping to discuss is pretty new, so these aren’t in spec form yet. I’m hoping to change that before TPAC. The current editor’s draft still has the old key/value pair approach (which we’d decided to scrap), and can be found here: https://dvcs.w3.org/hg/IndieUI/raw-file/default/src/indie-ui-context.html
Re: Thoughts behind the Streams API ED
On Fri, Nov 8, 2013 at 5:38 PM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

Please see here: https://github.com/whatwg/streams/issues/33. I realized that this would apply to operations like TextDecoder too, without the need for an explicit stream option, so it's no longer related only to WebCrypto.

Similar but a bit different? For clarification, could you review the following?

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1);
textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd1 0x81 0xd0 0xb8 0xd1 0x8f);

This generates the DOMString stream [Рос, сия]. Right? Or do you want to get [Рос, Россия]?
Re: Thoughts behind the Streams API ED
Sorry, I cut the input at the wrong position.

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1 0x81 0xd1);
textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd0 0xb8 0xd1 0x8f);
Re: Thoughts behind the Streams API ED
I would expect Рос (stop, keep 0xd1 for the next data) and сия.

It can be seen as a bit different indeed: with crypto you expect the finalization of the operation since the beginning (but only by computing the latest bytes), while here you cannot expect the string since the beginning, of course. It just depends on how the Operation (here TextDecoder) handles stop, but I find it very similar: TextDecoder closes the operation with the bytes it has and clones its state (i.e. does nothing here except clearing resolved bytes and keeping unresolved ones for the data to come).

Regards,
Aymeric

Le 08/11/2013 11:33, Takeshi Yoshino a écrit :

Sorry, I cut the input at the wrong position.

textDecoderStream.write(arraybuffer of 0xd0 0xa0 0xd0 0xbe 0xd1 0x81 0xd1);
textDecoderStream.stop();
textDecoderStream.write(arraybuffer of 0x81 0xd0 0xb8 0xd1 0x8f);

--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Re: Thoughts behind the Streams API ED
On Fri, Nov 8, 2013 at 8:54 PM, Aymeric Vitte <vitteayme...@gmail.com> wrote:

I would expect Рос (stop, keep 0xd1 for the next data) and сия. It can be seen as a bit different indeed: with crypto you expect the finalization of the operation since the beginning (but only by computing the latest bytes), while here you cannot expect the string since the beginning, of course. It just depends on how the Operation (here TextDecoder) handles stop, but I find it very similar: TextDecoder closes the operation with the bytes it has and clones its state (i.e. does nothing here except clearing resolved bytes and keeping unresolved ones for the data to come).

I'd say more generally that stop() is kind of an in-band control signal that is inserted between elements of the stream and is distinguishable from the elements. As you said, interpretation of the stop() symbol depends on what the destination is.

One thing I'm still not sure about: I think you could just add a stop() equivalent method to the destination, and

- pipe() data until the point where you would have called stop()
- call the stop() equivalent on e.g. the hash
- restart pipe()

At least our spec allows for this. Of course, it's convenient that a Stream can carry such a signal, but there's a trade-off between the convenience and API size, similar to the decision whether to include abort() on WritableByteStream or not.

In the extreme, abort(), close() and stop() could be merged into one method (unless abort() has the functionality of abandoning already-written data). They're all signal-inserting methods:

- close() → signal(FIN)
- stop(info) → signal(CONTROL, info)
- abort(error) → signal(ABORT, error)

and the signal is packed and inserted into the stream's internal buffer.
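The "everything is an inserted signal" idea can be sketched with a toy buffer. The SignalingStream name and record shapes are illustrative assumptions; only the FIN/CONTROL/ABORT mapping comes from the message above.

```javascript
// Toy sketch: close(), stop() and abort() all reduce to inserting a tagged
// marker into the stream's internal buffer, interleaved with data elements,
// so the reader can distinguish control signals from payload.
class SignalingStream {
  constructor() { this.queue = []; }
  write(data)        { this.queue.push({ kind: "data", data }); }
  signal(type, info) { this.queue.push({ kind: "signal", type, info }); }
  close()            { this.signal("FIN"); }
  stop(info)         { this.signal("CONTROL", info); }
  abort(error)       { this.signal("ABORT", error); }
  read()             { return this.queue.shift(); } // undefined when empty
}
```

A destination draining such a stream would switch on `kind`, treating a CONTROL signal as "finalize what you have and keep going" and FIN as end-of-stream.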
Re: [FileAPI] LC Comment Tracking
Hi Art,

On Nov 7, 2013, at 9:40 AM, Arthur Barstow wrote:

Since it appears you will not be at WebApps' f2f meeting next week, I would appreciate it if you would please summarize the status of the comment processing, your next steps, etc. I am especially interested in whether or not you consider any of the bug fixes you applied as substantive and/or as adding a new feature (which would require a new LC).

Most LC commentary that was substantive became a spec bug; I've fixed most such spec bugs, and the contributor/commenter has been notified. In my opinion, the biggest change is to the File constructor. This is https://www.w3.org/Bugs/Public/show_bug.cgi?id=23479. I don't think this is a new feature, since the previous document pushed to /TR had a constructor, although with a different signature. Other changes include moving Blob URL to be redefined in terms of terminology in the WHATWG URL spec, in lieu of ABNFs.

If you provide a dial-in on the day that you discuss File + FileSystem, I can try to dial in, but this depends on time. There will be others present from Mozilla :)

The LC commentary is tracked at http://www.w3.org/wiki/Webapps/LCWD-FileAPI-20130912

-- A*

-Thanks, ArtB

[1] http://www.w3.org/wiki/Webapps/LCWD-FileAPI-20130912

On 9/12/13 10:39 AM, ext Arthur Barstow wrote:

[ Bcc public-sysapps ; comments from SysApps are welcome ]

This is a Request for Comments for the 12 September 2013 Last Call Working Draft of File API: http://www.w3.org/TR/2013/WD-FileAPI-20130912/

The comment deadline is October 24 and all comments should be sent to the public-webapps@w3.org list with a subject: prefix of [FileAPI]. The spec's bug list is [Bugs] and the few `approved` tests we have can be run in a browser at [Tests].

-Thanks, ArtB

[Bugs] http://tinyurl.com/Bugs-FileAPI
[Tests] http://w3c-test.org/web-platform-tests/master/FileAPI/
[webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax
Hi all,

We have been discussing the cross-origin use case and declarative syntax of web components internally at Apple, and here is our straw man proposal to amend the existing Web Components specifications to support it.

1. Modify HTML Imports to run scripts in the imported document itself

This allows the importee and the importer to not share the same script context, etc…

2. Add an “importcomponents” content attribute on the link element

It defines the list of custom element tag names to be imported from the imported HTML document. e.g.

<link rel="import" href="~" importcomponents="tag-1 tag-2">

will export custom elements of tag names tag-1 and tag-2 from ~. Any name that doesn't have a definition in the imported document is ignored (i.e. if tag-2 was not defined in ~, it would be skipped, but tag-1 would still be imported). This mechanism prevents the imported document from defining arbitrary components in the host document.

3. Support static (write-once) binding of an HTML template, e.g.

<template id="cardTemplate">Name: {{name}}<br>Email: {{email}}</template>
<script>
document.body.appendChild(cardTemplate.instantiate({name: "Ryosuke Niwa", email: "rn...@webkit.org"}));
</script>

4. Add an “interface” content attribute to the template element

This content attribute specifies the name of the JavaScript constructor function to be created in the global scope. The UA creates one, and it will be used to instantiate a given custom element. The author can then set up the prototype chain as needed:

<template defines="name-card" interface="NameCardElement">
Name: {{name}}<br>Email: {{email}}
</template>
<script>
NameCardElement.prototype.name = function () {...}
NameCardElement.prototype.email = function () {...}
</script>

This is similar to doing:

var NameCardElement = document.register('name-card');

5. Add a “defines” content attribute on the HTML template element to define a custom element

This new attribute defines a custom element of the given name for the template content. e.g.
<template defines="nestedDiv"><div><div></div></div></template>

will let you use <nestedDiv></nestedDiv>.

We didn’t think having a separate custom element tag was useful, because we couldn’t think of a use case where you wanted to define a custom element declaratively and not use a template by default, and having to associate the first template element with the custom element seemed like unnecessary complexity.

5.1. When a custom element is instantiated, automatically instantiate the template inside a shadow root after statically binding the template with dataset

This allows statically declaring arguments to a component. e.g.

<template defines="name-card">Name: {{name}}<br>Email: {{email}}</template>
<name-card data-name="Ryosuke Niwa" data-email="rn...@webkit.org">

5.2. When a new custom element object is constructed, the created callback is called with a shadow root

Unfortunately, we can't let the author define a constructor, because the element hasn't been properly initialized with the right JS wrapper at the time of its construction. So just as we can't do new HTMLTitleElement, we're not going to let the author do interesting things inside a custom element's constructor. Instead, we're going to call the created function on its prototype chain:

<template defines="name-card" interface="NameCardElement">
Name: {{name}}<br>Email: {{email}}
</template>
<script>
NameCardElement.prototype.name = function () {...}
NameCardElement.prototype.email = function () {...}
NameCardElement.prototype.created = function (shadowRoot) {
    ... // Initialize the shadowRoot here.
}
</script>

This is similar to the way document.register works, in that document.register creates a constructor automatically.

6. The cross-origin component does not have access to the shadow host element, and the host document doesn’t have access to the element object

When member functions of the element are called, the “this” object will be undefined.
This is necessary because exposing the object to cross-origin content would result in tricky security issues, forcing us to have proxy objects, etc… Inside the document that imported a component, the element doesn’t use the prototype defined by the component, as that would expose JS objects cross-origin. e.g. even if LikeButtonElement was defined in facebook.com/~/like-button.html, the document that uses this component wouldn’t see the prototype or the constructor; it’ll be HTMLUnknownElement. (We could create a new custom element type such as HTMLCrossOriginCustomElement if we think that’s necessary.)

7. Expose the shadow host’s dataset on the shadow root

This allows the component to communicate with the host document in a limited fashion without exposing the element directly.

This design allows us to have an iframe-like boundary between the shadow host (the custom element itself) and the shadow root (implementation details), and addresses our cross-origin use case elegantly as follows:

rniwa.com/webkit.html -

<!DOCTYPE html>
<html>
<head>
<link rel="import" href="https://webkit.org/components.html"