Re: [whatwg] Video processing ability for MediaStreamTrack
On Fri, Aug 7, 2015 at 8:56 AM, Chia-Hung Tai <c...@mozilla.com> wrote:

> http://chiahungtai.github.io/mediacapture-worker/

Given that removeVideoProcessor() does not take arguments, should addVideoProcessor() not check for duplicates?

"VideoProcessEventThe" looks like a typo.

The events don't define constructors. They probably should. You should also use IDL [Exposed=] syntax to indicate they are available in workers. (Or dedicated workers, anyway.)

The outputImageBitmap = null syntax is wrong. You want to remove "= null" there and define the default in prose.

You also want to define the processing model a bit more carefully, I think. E.g., it seems that for processing the event instance is modified and then, once the dispatch flag is unset, that data is copied somehow. But how is it copied? Is it a structured clone transfer that detaches the buffer?

-- 
https://annevankesteren.nl/
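[For readers following along: a hypothetical sketch of what the event interface might look like with the suggested fixes applied — a constructor, [Exposed=] covering dedicated workers, and the outputImageBitmap default moved out of the IDL into prose. The member names are guesses based on the draft under discussion and may not match its actual text:]

```webidl
[Exposed=(Window,DedicatedWorker),
 Constructor(DOMString type, optional VideoProcessEventInit eventInitDict)]
interface VideoProcessEvent : Event {
  readonly attribute ImageBitmap? inputImageBitmap;
  // No "= null" here; the default value is defined in prose instead.
  attribute ImageBitmap? outputImageBitmap;
};
```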
[whatwg] APIs to interrogate default outgoing HTTP headers, i.e. Accept-Encoding
I posted this at http://discourse.wicg.io/ a long time ago and forgot to email the list about it, so here goes...

## original post

There's currently no good way to determine whether or not a browser / environment supports GZIP-deflated content entirely from the front-end. Servers can interrogate the Accept-Encoding header when they receive the request, but client-side JavaScript cannot see this value at all.

This is important when using a CDN that doesn't facilitate selection of appropriately deflated content (e.g. AWS CloudFront). I've had projects where the initial HTML content is dynamically generated only so that the server can pass the Accept-Encoding header back to the client. That way, the client can adjust the other URLs it uses to pick pre-GZIPed files, e.g. blah.js.gz instead of blah.js, all the time.

I was initially thinking that navigator.acceptEncoding could just be specified to contain the default outgoing value of this header, but it occurred to me that there are probably other headers where this is handy. Should this be a function such as XMLHttpRequest.getDefaultRequestHeaders()? Should all such headers just dangle from the navigator object, as in my previous example?

## summary of responses

http://discourse.wicg.io/users/stuartpb seemed interested at the time.

- it's probably more appropriate to store them in navigator instead of XMLHttpRequest
- it's probably a good idea to discuss this with the Fetch specification people
- for security reasons, certain headers (e.g. Cookies) should not be available this way
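[The workaround described above can be sketched as follows. window.acceptEncoding is a hypothetical global that the dynamically generated HTML would set by reflecting the request header into an inlined script tag, and selectAssetUrl is an illustrative helper, not part of any proposal; q-values are ignored for brevity:]

```javascript
// Hypothetical: the server-rendered HTML would inline something like
//   <script>window.acceptEncoding = "gzip, deflate";</script>
// so client-side code can choose between raw and pre-compressed assets.

// Given the reflected Accept-Encoding value, pick the pre-GZIPed
// variant of a URL when the UA advertises gzip support.
// (Simplified: q-values such as "gzip;q=0" are not handled.)
function selectAssetUrl(acceptEncoding, url) {
  var supportsGzip = (acceptEncoding || "")
    .split(",")
    .some(function (token) {
      return token.trim().split(";")[0] === "gzip";
    });
  return supportsGzip ? url + ".gz" : url;
}

// selectAssetUrl("gzip, deflate", "blah.js") returns "blah.js.gz"
// selectAssetUrl("identity", "blah.js")      returns "blah.js"
```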
Re: [whatwg] Video processing ability for MediaStreamTrack
I'm incredibly excited to see this moving forward.
Re: [whatwg] APIs to interrogate default outgoing HTTP headers, i.e. Accept-Encoding
Ron Waldon <jokeyrh...@gmail.com> writes:

> I posted this at http://discourse.wicg.io/ a long time ago and forgot to
> email the list about it, so here goes...
>
> ## original post
>
> There's currently no good way to determine whether or not a browser /
> environment supports GZIP-deflated content entirely from the front-end.
> Servers can interrogate the Accept-Encoding header when they receive the
> request, but client-side JavaScript cannot see this value at all.
>
> This is important when using a CDN that doesn't facilitate selection of
> appropriately deflated content (e.g. AWS CloudFront). I've had projects
> where the initial HTML content is dynamically generated only so that the
> server can pass the Accept-Encoding header back to the client. That way,
> the client can adjust the other URLs it uses to pick pre-GZIPed files,
> e.g. blah.js.gz instead of blah.js, all the time.

I do not understand that use case. It reads incredibly convoluted to me. The UA controls the transport anyway – it should not make any practical difference to a script how the data is transmitted.

Btw, why can AWS CloudFront not into compressed content?

-- 
Nils Dagsson Moskopp // erlehmann
http://dieweltistgarnichtso.net
Re: [whatwg] APIs to interrogate default outgoing HTTP headers, i.e. Accept-Encoding
On Tue, 11 Aug 2015 at 08:31 Nils Dagsson Moskopp <n...@dieweltistgarnichtso.net> wrote:

> I do not understand that use case. It reads incredibly convoluted to me.
> The UA controls the transport anyway – it should not make any practical
> difference to a script how the data is transmitted.

My use case is centred around trying to optimise network usage when requesting content from AWS CloudFront backed by S3. I 100% agree with you that this should not be a script's problem. However, it is.

When the server (CloudFront in this case) has raw and GZIP'ed copies of content, and no automatic server-side selection between the two, the only way to optimise network usage is for the script to make this determination. Unfortunately, there is no way to gain access to the default Accept-Encoding header from JavaScript, which is necessary to figure out whether to download raw or GZIP'ed content.

So we currently do more hoop-jumping by serving a dynamic initial HTML, where the server constructing it can reflect the UA's Accept-Encoding header back to the client in a generated script tag. It's yucky.

Beyond our own need for access to the Accept-Encoding header, there may be other use cases that are supported by providing access to other headers.

> Btw, why can AWS CloudFront not into compressed content?

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

> CloudFront doesn't compress the files itself
> Amazon S3 doesn't compress files automatically

AWS CloudFront will do the right thing if it is backed by a Custom Origin that honours the Accept-Encoding header (not S3). We have repeatedly requested improvements from AWS, and they are likely on the way, but we have many hoops to jump until then.

When last I checked, many sites using AWS CloudFront and S3, including the first-party AWS Console itself, do not serve GZIP'ed resources, which is sub-optimal.
Re: [whatwg] APIs to interrogate default outgoing HTTP headers, i.e. Accept-Encoding
Ron Waldon <jokeyrh...@gmail.com> writes:

> On Tue, 11 Aug 2015 at 08:31 Nils Dagsson Moskopp
> <n...@dieweltistgarnichtso.net> wrote:
>
>> I do not understand that use case. It reads incredibly convoluted to me.
>> The UA controls the transport anyway – it should not make any practical
>> difference to a script how the data is transmitted.
>
> My use case is centred around trying to optimise network usage when
> requesting content from AWS CloudFront backed by S3. I 100% agree with
> you that this should not be a script's problem. However, it is.
>
> When the server (CloudFront in this case) has raw and GZIP'ed copies of
> content, and no automatic server-side selection between the two, the
> only way to optimise network usage is for the script to make this
> determination. Unfortunately, there is no way to gain access to the
> default Accept-Encoding header from JavaScript, which is necessary to
> figure out whether to download raw or GZIP'ed content.
>
> So we currently do more hoop-jumping by serving a dynamic initial HTML,
> where the server constructing it can reflect the UA's Accept-Encoding
> header back to the client in a generated script tag. It's yucky.

So the server is buggy (for your purposes) and you want to have another feature to work around it. Introducing that will take considerable time and resources. I suggest just waiting until the server is able to serve compressed content transparently, or using a more functional server setup.

> Beyond our own need for access to the Accept-Encoding header, there may
> be other use cases that are supported by providing access to other
> headers.
>
>> Btw, why can AWS CloudFront not into compressed content?
>
> http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
>
>> CloudFront doesn't compress the files itself
>> Amazon S3 doesn't compress files automatically
>
> AWS CloudFront will do the right thing if it is backed by a Custom
> Origin that honours the Accept-Encoding header (not S3).
>
> We have repeatedly requested improvements from AWS, and they are likely
> on the way, but we have many hoops to jump until then. When last I
> checked, many sites using AWS CloudFront and S3, including the
> first-party AWS Console itself, do not serve GZIP'ed resources, which is
> sub-optimal.

It may take considerably longer to spec a feature, ship it, and wait for user agents to catch up than to just fix your server setup. If your setup really is so buggy that it cannot serve compressed content (Am I understanding it correctly? It sounds so stupid!), switch to something that can, instead of externalizing the costs to UA implementors and future web developers who will have one more interface to learn.

-- 
Nils Dagsson Moskopp // erlehmann
http://dieweltistgarnichtso.net