Re: [whatwg] Appcache feedback (various threads)
On 8/12/10 6:29 PM, Ian Hickson wrote: On Thu, 29 Jul 2010, Anne van Kesteren wrote: XML would be much too complex for what is needed. We could possibly remove the media type check and resort to using the CACHE MANIFEST identifier (i.e. sniffing), but the HTTP gods will get angry. Yeah, that's pretty much the way it is. Although I haven't personally had a problem dealing with the content-type requirement, I have heard from at least one other colleague who did; their server was harder to configure. I had assumed the reason for having the specific text/cache-manifest content type was to force people to opt in to support, instead of being able to just read a random URL and have it interpreted, perhaps maliciously, as a manifest. If that's not a concern, then I'd like to understand the ramifications of getting the HTTP gods angry by ignoring the content-type. -- Patrick Mueller - http://muellerware.org
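[For reference, the sniffing alternative mentioned above would key on the magic signature on the manifest's first line rather than on the Content-Type header. A minimal cache manifest looks like this; the file names are made up for illustration:]

```
CACHE MANIFEST
# version 1 (the comment syntax and section names are per the HTML5 spec)
CACHE:
index.html
app.js

NETWORK:
*
```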
Re: [whatwg] Appcache feedback (various threads)
On 8/12/10 6:29 PM, Ian Hickson wrote: On Wed, 19 May 2010, Patrick Mueller wrote: I've been playing with application cache for a while now, and found the diagnostic information available to be sorely lacking. For example, to diagnose user-land errors that occur when using appcache, this is the only practical tool I have at my disposal: tail -f /var/log/apache2/access_log /var/log/apache2/error_log I'd like to be able to get the following information: - during progress events, as identified in step 17 of the application cache download process steps (in 6.6.4, Downloading or updating an application cache), I'd like to have the URL of the resource that is about to be downloaded. The progress event from step 18 (indicating all resources have been downloaded) doesn't need this. What do you need this for? See the first sentence: diagnostic information. - for all error conditions, some indication of WHAT error occurred. Presumably an error code. If the error involved a particular resource, I'd like the URL of the resource as well. I'm not sure what the best mechanisms might be to provide this info: - extend the events used to add this information - provide this information in the ApplicationCache interface - lastErrorCode, lastResourceDownloaded, etc. - define a new object as the target for these events (currently undefined, or at least not clear to me), and add that info to the target - something else Could you describe how you would use this information? What would you do differently based on this information? Again: diagnostic information. Of course, there's not much I can do differently based on this information, since there's little I can do with app-cache to begin with, it being largely declarative. I understand the typical response here is: use a debugger. That's fine, and that's right, for most of my purposes, but it means I'm relying on a tool to get information that a normal application might not be able to retrieve.
As an example, an application might collect a log of errors and post them back to a server for diagnostic purposes later. Not possible if the only way to get app-cache diagnostics is with a web debugger. There's a good argument for not providing this information, I suppose: you can't get it for non-app-cache scenarios, so why should you get it for app-cache scenarios? (I'm assuming here that you can't get HTTP-transport-level information from anything but XHR, WebSocket, etc. sorts of APIs - you don't get that sort of information about a .css file you referenced in your .html file, for instance.) Additionally, are there security issues that I'm not aware of (haven't thought enough about)? Nonetheless, we do have these nice events coming in, there's plenty of room for more information in them, and they would serve a useful purpose, since app-cache has proven, to me, to be an occasionally challenging corner of the room to play in. -- Patrick Mueller - http://muellerware.org
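[A sketch of the kind of user-land logging described above, using only the appcache events the HTML5 spec already defines. Note that none of the wished-for details (error codes, resource URLs) exist on these events today; a page can only record that an event fired. The /diagnostics endpoint is a hypothetical name:]

```javascript
// Minimal appcache diagnostic logger (sketch). The event names are
// from the HTML5 spec; the collected log could later be posted back
// to the server, e.g. via XMLHttpRequest to /diagnostics.
var appcacheLog = [];

function logAppcacheEvent(type) {
  // Each entry records only the event type and a timestamp -- the
  // spec currently exposes no error code or resource URL.
  appcacheLog.push({ type: type, time: Date.now() });
}

if (typeof window !== 'undefined' && window.applicationCache) {
  ['checking', 'downloading', 'progress', 'error',
   'cached', 'updateready', 'obsolete', 'noupdate'].forEach(function (type) {
    window.applicationCache.addEventListener(type, function () {
      logAppcacheEvent(type);
    }, false);
  });
}
```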
Re: [whatwg] Appcache feedback (various threads)
On Fri, 13 Aug 2010 15:02:01 +0200, Patrick Mueller pmue...@muellerware.org wrote: On 8/12/10 6:29 PM, Ian Hickson wrote: On Thu, 29 Jul 2010, Anne van Kesteren wrote: XML would be much too complex for what is needed. We could possibly remove the media type check and resort to using the CACHE MANIFEST identifier (i.e. sniffing), but the HTTP gods will get angry. Yeah, that's pretty much the way it is. Although I haven't personally had a problem dealing with the content-type requirement, I have heard from at least one other colleague who did; their server was harder to configure. I had assumed the reason for having the specific text/cache-manifest content type was to force people to opt in to support, instead of being able to just read a random URL and have it interpreted, perhaps maliciously, as a manifest. If that's not a concern, then I'd like to understand the ramifications of getting the HTTP gods angry by ignoring the content-type. In HTTP (starting with HTTP/1.0), entity bodies are identified by the Content-Type header, not by their contents. We violate that for a number of scenarios, but we try to steer clear of doing so in new features, until such time as we give up completely on Content-Type. It's a compromise. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] HTML5 video source dimensions and bitrate
On Thu, Aug 12, 2010 at 10:03 PM, Silvia Pfeiffer silviapfeiff...@gmail.com wrote: As far as I am aware, the adaptive HTTP streaming approaches work with an ordinary HTTP server such as Apache, so do not need anything special on the server. It's more about authoring the right set of resources. The publisher has to carefully create a set of video copies at different bandwidths and maybe even resolutions, such that switching between them can happen at specific points. Then he puts them on the server together with a manifest file that links to these resources and states what they provide and when switching can happen. So, while authoring is challenging, no new server software is used, and it also wouldn't listen to any information that the client would want to send. However, the big challenge is to support the switching between resources on the client. And the client needs to gather as much information as possible about the quality of the playback to make the switching decision. Switching is then simply a different HTTP request. It would be nice if we had such switching functionality available for HTML5 video. [It's Friday, so this is a bit more lighthearted than usual] Completely correct. I was thinking of FMS (RTMP with dynamic streaming) and in a momentary lapse of reason [1], wrongly believed that the server would automatically switch bitrates when the client hit a certain threshold of dropped frames. It's at least somewhat ironic given that one of my favorite function names of all time is the Flash 10 Netstream's play2() [2], which is the flux capacitor of Flash bitrate switching: it makes dynamic streaming possible [3]. It would still be nice if the video made dropped frame information available, but that's probably not in the cards. Microsoft Smooth Streaming also uses a SMIL variant. Is yours compatible? Currently no, but perhaps in the future. Yes, that sounds like a sensible approach.
It's also what Apple do when they use Live Streaming: they put the m3u8 file into the @src element. It would be nice if that was all we would need to enable adaptive HTTP streaming. It might be good to standardise on a baseline file format for the manifest though - unless we want to support all types of files that people will come up with to do adaptive HTTP streaming. Agreed. Right now we only support MRSS, but SMIL and m3u8 would all work as well. Given that SMIL is a W3C format, it seems to be the logical choice, no? [1] http://en.wikipedia.org/wiki/A_Momentary_Lapse_of_Reason [2] http://help.adobe.com/en_US/AS3LCR/Flash_10.0/flash/net/NetStream.html [3] http://www.imdb.com/title/tt0088763/quotes?qt0416300
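[For reference, the Apple approach described above needs no new markup at all; the playlist URL goes straight into @src. The URL below is a made-up example:]

```html
<!-- Sketch: the browser (Safari, in Apple's case) fetches the m3u8
     playlist and handles the bitrate switching itself. -->
<video src="http://example.com/stream/playlist.m3u8" controls></video>
```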
Re: [whatwg] Appcache feedback (various threads)
On 2010/8/13, at 6:42 AM, Anne van Kesteren wrote: On Fri, 13 Aug 2010 15:02:01 +0200, Patrick Mueller pmue...@muellerware.org wrote: On 8/12/10 6:29 PM, Ian Hickson wrote: On Thu, 29 Jul 2010, Anne van Kesteren wrote: XML would be much too complex for what is needed. We could possibly remove the media type check and resort to using the CACHE MANIFEST identifier (i.e. sniffing), but the HTTP gods will get angry. Yeah, that's pretty much the way it is. Although I haven't personally had a problem dealing with the content-type requirement, I have heard from at least one other colleague who did; their server was harder to configure. I had assumed the reason for having the specific text/cache-manifest content type was to force people to opt in to support, instead of being able to just read a random URL and have it interpreted, perhaps maliciously, as a manifest. If that's not a concern, then I'd like to understand the ramifications of getting the HTTP gods angry by ignoring the content-type. In HTTP (starting with HTTP/1.0), entity bodies are identified by the Content-Type header, not by their contents. We violate that for a number of scenarios, but we try to steer clear of doing so in new features, until such time as we give up completely on Content-Type. It's a compromise. I can understand wanting to do things right, in terms of using Content-Type for the file. I can also attest that it can be a royal pain to diagnose when this is set wrong. I wonder if it would make sense to have a recommended file extension for the manifest (e.g. .cachemanifest, so myapp.cachemanifest). (Maybe .manifest is a fine extension, as implied in the spec. It seems a bit generic of a name to me, though.) This way, web server developers could add this to their default configurations. That is, life will be a lot easier for page developers in the future if (say) Apache ships with a rule that automatically delivers .cachemanifest (or whatever) files with the text/cache-manifest content type.
That way everything will just work for normal situations. David
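[A minimal sketch of what such a default rule could look like, using Apache's AddType directive; the .cachemanifest extension is the hypothetical one suggested above:]

```apache
# Hypothetical default rule: serve *.cachemanifest files with the
# appcache media type, so page developers need no extra server setup.
AddType text/cache-manifest .cachemanifest
```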
Re: [whatwg] HTML5 video source dimensions and bitrate
On Sat, Aug 14, 2010 at 2:05 AM, Zachary Ozer z...@longtailvideo.com wrote: On Thu, Aug 12, 2010 at 10:03 PM, Silvia Pfeiffer silviapfeiff...@gmail.com wrote: As far as I am aware, the adaptive HTTP streaming approaches work with an ordinary HTTP server such as Apache, so do not need anything special on the server. It's more about authoring the right set of resources. The publisher has to carefully create a set of video copies at different bandwidths and maybe even resolutions, such that switching between them can happen at specific points. Then he puts them on the server together with a manifest file that links to these resources and states what they provide and when switching can happen. So, while authoring is challenging, no new server software is used, and it also wouldn't listen to any information that the client would want to send. However, the big challenge is to support the switching between resources on the client. And the client needs to gather as much information as possible about the quality of the playback to make the switching decision. Switching is then simply a different HTTP request. It would be nice if we had such switching functionality available for HTML5 video. [It's Friday, so this is a bit more lighthearted than usual] Completely correct. I was thinking of FMS (RTMP with dynamic streaming) and in a momentary lapse of reason [1], wrongly believed that the server would automatically switch bitrates when the client hit a certain threshold of dropped frames. It's at least somewhat ironic given that one of my favorite function names of all time is the Flash 10 Netstream's play2() [2], which is the flux capacitor of Flash bitrate switching: it makes dynamic streaming possible [3]. LOL - that made my Saturday. :-) It would still be nice if the video made dropped frame information available, but that's probably not in the cards. Microsoft Smooth Streaming also uses a SMIL variant. Is yours compatible? Currently no, but perhaps in the future.
Yes, that sounds like a sensible approach. It's also what Apple do when they use Live Streaming: they put the m3u8 file into the @src element. It would be nice if that was all we would need to enable adaptive HTTP streaming. It might be good to standardise on a baseline file format for the manifest though - unless we want to support all types of files that people will come up with to do adaptive HTTP streaming. Agreed. Right now we only support MRSS, but SMIL and m3u8 would all work as well. Given that SMIL is a W3C format, it seems to be the logical choice, no? All of SMIL is a bit much for the little need that there is. But a small subpart similar to what MS Smooth Streaming does isn't out of the question. It's something that should be investigated IMO. Just like we are investigating the different subtitle formats that can be used. Sharing your experiences here and your new file format when you have something to show would be a start, IMHO. Cheers, Silvia.
Re: [whatwg] HTML5 video source dimensions and bitrate
On Sat, Aug 14, 2010 at 4:05 AM, Zachary Ozer z...@longtailvideo.com wrote: It would still be nice if the video made dropped frame information available, but that's probably not in the cards. I have a work in progress bug with patch that adds this to the video implementation in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=580531 It adds a 'mozDroppedFrames' as well as a couple of other stats people have queried about here (download rate, framerate, etc). I'd be keen to see something like this get discussed/added. Chris. -- http://www.bluishcoder.co.nz
[whatwg] Please (re)consider explicitly allowing marking up speaker names with cite (new information)
Summary: there has been longstanding discussion about the use (or not) of cite to mark up names of speakers. From the original intent of cite, to references to the Chicago Manual of Style, to the practicality of it just being an alias for i. I (and others) have done a bunch of research and documentation of additional examples, discussions, and follow-ups regarding the use of cite for marking up names of speakers, including follow-ups to common counter-arguments. Please (re)consider explicitly allowing marking up speaker names with cite. More details, use-cases, and research here: http://wiki.whatwg.org/wiki/Cite_element#Speaker I encourage fellow web authors and browser implementers to add their opinions/comments to that wiki page section, *INSTEAD OF* in an email thread - as I couldn't even find previous email threads on this topic. Thanks! Tantek -- http://tantek.com/ - I made an HTML5 tutorial! http://tantek.com/html5
Re: [whatwg] HTML5 video source dimensions and bitrate
On Sat, Aug 14, 2010 at 11:48 AM, Chris Double chris.dou...@double.co.nz wrote: On Sat, Aug 14, 2010 at 4:05 AM, Zachary Ozer z...@longtailvideo.com wrote: It would still be nice if the video made dropped frame information available, but that's probably not in the cards. I have a work in progress bug with patch that adds this to the video implementation in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=580531 It adds a 'mozDroppedFrames' as well as a couple of other stats people have queried about here (download rate, framerate, etc). I'd be keen to see something like this get discussed/added. Chris. -- http://www.bluishcoder.co.nz I've checked your code, and apparently you have the following extra IDL attributes for video:

readonly attribute unsigned long mozDroppedFrames;
readonly attribute float mozPlaybackRate;
readonly attribute float mozDownloadRate;
readonly attribute unsigned long mozFrameCount;

These are very useful indeed, and I would like to see them added to the spec - at least then an implementation of HTTP adaptive streaming in JavaScript can be done, and other functionalities such as making analysis graphs in JavaScript about video performance are possible. Cheers, Silvia.
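[A rough sketch of the JavaScript adaptive-streaming idea above, assuming the moz* attributes from the Firefox patch. The 10% dropped-frame threshold and the polling interval are arbitrary assumptions for illustration, not part of the patch:]

```javascript
// Pure decision helper: switch down when more than 10% of the frames
// presented in the last sampling window were dropped. The threshold
// is an assumed value, chosen only for illustration.
function shouldSwitchDown(droppedDelta, displayedDelta) {
  var total = droppedDelta + displayedDelta;
  if (total === 0) return false;
  return droppedDelta / total > 0.10;
}

// Hypothetical polling loop against the patch's mozDroppedFrames /
// mozFrameCount attributes; does nothing where they don't exist.
function pollVideoStats(video, onSwitchDown) {
  var lastDropped = 0, lastDisplayed = 0;
  setInterval(function () {
    if (typeof video.mozDroppedFrames !== 'number') return;
    var droppedDelta = video.mozDroppedFrames - lastDropped;
    var displayedDelta = video.mozFrameCount - lastDisplayed;
    lastDropped = video.mozDroppedFrames;
    lastDisplayed = video.mozFrameCount;
    if (shouldSwitchDown(droppedDelta, displayedDelta)) {
      onSwitchDown(); // e.g. point @src at a lower-bitrate copy
    }
  }, 2000);
}
```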
Re: [whatwg] HTML5 video source dimensions and bitrate
Hi Chris - On Aug 13, 2010, at 6:48 PM, Chris Double wrote: On Sat, Aug 14, 2010 at 4:05 AM, Zachary Ozer z...@longtailvideo.com wrote: It would still be nice if the video made dropped frame information available, but that's probably not in the cards. I have a work in progress bug with patch that adds this to the video implementation in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=580531 It adds a 'mozDroppedFrames' as well as a couple of other stats people have queried about here (download rate, framerate, etc). I'd be keen to see something like this get discussed/added. I see the following additions:

interface HTMLMediaElement {
  readonly attribute float mozDownloadRate;
  readonly attribute float mozPlaybackRate;
};

interface HTMLVideoElement {
  readonly attribute unsigned long mozFrameCount;
  readonly attribute unsigned long mozDroppedFrames;
};

A few questions: mozDownloadRate - What are the units, bits per second? mozPlaybackRate - Is this the movie's data rate (total bytes / duration)? mozFrameCount - What do you propose a UA report for a partially downloaded VBR movie, or for a movie in a container that doesn't have a header (i.e. one where you don't know the frame count until you have examined every byte in the file)? eric
Re: [whatwg] HTML5 video source dimensions and bitrate
On Sat, Aug 14, 2010 at 2:26 PM, Eric Carlson eric.carl...@apple.com wrote: mozDownloadRate - What are the units, bits per second? Bytes per second. mozPlaybackRate - Is this the movie's data rate (total bytes / duration)? Yes. This and mozDownloadRate were available internally already for 'can play through' calculations, so I just exposed what we already have. 'mozPlaybackRate' is a bad name though, since there is already a concept of 'playback rate' for playback speed. mozFrameCount - What do you propose a UA report for a partially downloaded VBR movie, or for a movie in a container that doesn't have a header (i.e. one where you don't know the frame count until you have examined every byte in the file)? This is another bad name for what it actually is. It's a count of each frame as it is displayed. So the inverse of mozDroppedFrames, really - it probably should be mozDisplayedFrames or something. I'm not sure if it's useful, but it's the original stat I had for computing framerate playback in JavaScript in my tests. Chris. -- http://www.bluishcoder.co.nz
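[Since mozFrameCount counts frames as they are displayed, the effective playback framerate Chris mentions can be computed from deltas between two samples. A minimal sketch, with no assumptions beyond the semantics described above:]

```javascript
// Effective displayed framerate over a sampling window: frames
// displayed in the window divided by elapsed time in seconds.
function displayedFramerate(frameCountDelta, elapsedMs) {
  if (elapsedMs <= 0) return 0;
  return frameCountDelta / (elapsedMs / 1000);
}
```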