Re: [whatwg] Event loop processing model, and current time
On Mon, Feb 23, 2015 at 4:57 PM, Simon Fraser s...@me.com wrote:
> https://html.spec.whatwg.org/multipage/webappapis.html#processing-model-9 says:
>
> 1. Let now be the value that would be returned by the Performance object's now() method.
> 2. Let docs be the list of Document objects associated with the event loop in question...
> ...
> 4. For each fully active Document in docs, run the resize steps for that Document, passing in now as the timestamp.
> ...
>
> This makes no sense, as performance.now() is per-document (it's relative to the document's start time), so passing the same value to all documents in the browsing context is bogus. What may be intended is to "freeze" the performance.now() time in all documents before processing those documents, but give each document its own performance.now() time.

That is the intent. The algorithm should grab a timestamp for each document at the same time (which is really just a matter of grabbing one timestamp and applying the correct offset for each document).

- James
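A minimal JavaScript sketch of "one timestamp plus per-document offsets" (the function and field names here are illustrative, not spec text):

```javascript
// Sketch of "grab one timestamp, apply each document's offset".
// `docs` is a hypothetical list of documents, each recording the
// shared-clock time at which it was created (its time origin).
function timestampsForDocs(sharedNow, docs) {
  // One clock read, frozen for the whole rendering update...
  return docs.map(doc => ({
    doc: doc.name,
    // ...then converted into each document's own performance.now()
    // scale by subtracting that document's time origin.
    now: sharedNow - doc.timeOrigin,
  }));
}

const docs = [
  { name: 'top', timeOrigin: 1000 },    // created at shared time 1000ms
  { name: 'iframe', timeOrigin: 1500 }, // created 500ms later
];

// At shared time 2000ms, the top document sees 1000 and the iframe 500:
const stamps = timestampsForDocs(2000, docs);
```

Every document observes the same frozen instant, but each in its own performance.now() timebase.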
[whatwg] Move RequestAnimationFrame steps into HTML?
Cameron and I are the editors of the Timing control for script-based animations spec, more commonly known as the spec for requestAnimationFrame. This spec has some outstanding feedback from folks like Anne that needs to be addressed at a basic editorial level, which I haven't had the bandwidth to address. It also needs to integrate more tightly with HTML's rendering model to get proper timing. I think that adding the appropriate hooks to both specs will be complicated, and I know I don't have the bandwidth to do this correctly, so I propose that we simply move this algorithm into HTML itself and ask that the HTML editors (aka Hixie) take over this part of the spec. If this doesn't happen, I'm afraid the spec will languish and it'll be hard to correctly specify the various things that are supposed to coordinate with each other to produce a smoothly functioning and consistent system.

Specifically, the spec defines the following WebIDL:

partial interface Window {
  long requestAnimationFrame(FrameRequestCallback callback);
  void cancelAnimationFrame(long handle);
};

callback FrameRequestCallback = void (DOMHighResTimeStamp time);

and a relatively simple processing model that should integrate tightly with the "Update the rendering" step of HTML's 8.4.1.2 Processing model: https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/RequestAnimationFrame/Overview.html#processingmodel

I'm happy to help with any technical issues here, but can't promise to actually edit anything useful. Does this sound useful to folks?

- James
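The bookkeeping behind that WebIDL is small. A hedged JavaScript sketch of it (this is an illustration of the model, not spec text): each Window keeps a callback list and a handle counter; requestAnimationFrame appends, cancelAnimationFrame removes, and the rendering update swaps the list out and runs every entry with the frame's timestamp.

```javascript
// Sketch of requestAnimationFrame bookkeeping: a per-window callback
// map, a handle counter, and a "run the animation frame callbacks"
// step driven by the rendering update.
class AnimationFrameList {
  constructor() {
    this.nextHandle = 1;
    this.callbacks = new Map(); // handle -> callback
  }
  requestAnimationFrame(callback) {
    const handle = this.nextHandle++;
    this.callbacks.set(handle, callback);
    return handle;
  }
  cancelAnimationFrame(handle) {
    this.callbacks.delete(handle);
  }
  // Invoked once per rendering update with the frame's timestamp.
  runFrameCallbacks(now) {
    // Swap the list out first: callbacks registered while running
    // belong to the *next* frame.
    const toRun = this.callbacks;
    this.callbacks = new Map();
    for (const cb of toRun.values()) cb(now);
  }
}

const frames = new AnimationFrameList();
const fired = [];
frames.requestAnimationFrame(t => fired.push(['a', t]));
const h = frames.requestAnimationFrame(t => fired.push(['b', t]));
frames.cancelAnimationFrame(h); // 'b' never fires
// Re-registering inside a callback defers it to the next frame:
frames.requestAnimationFrame(t =>
  frames.requestAnimationFrame(x => fired.push(['c', x])));
frames.runFrameCallbacks(16);
frames.runFrameCallbacks(32);
```

The swap-before-run detail is what ties the model to a specific point in the event loop: a callback that re-registers itself runs once per frame, not in a tight loop within one frame.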
Re: [whatwg] Move RequestAnimationFrame steps into HTML?
On Wed, Sep 17, 2014 at 3:53 PM, Ian Hickson i...@hixie.ch wrote:
> On Wed, 17 Sep 2014, James Robinson wrote:
> I'd be happy to do this. I've filed a bug to track it: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26839
> This will actually help substantially with resolving this issue also: https://www.w3.org/Bugs/Public/show_bug.cgi?id=26636
> Do you have a log of the issues that are outstanding on this spec?

http://lists.w3.org/Archives/Public/public-web-perf/2014Jul/0019.html and http://lists.w3.org/Archives/Public/public-web-perf/2014Jun/0035.html describe two open issues with the spec. The resolutions for both are (I believe) pretty simple and described in those threads. https://www.w3.org/Bugs/Public/show_bug.cgi?id=26440 and https://www.w3.org/Bugs/Public/show_bug.cgi?id=26636 (which you mentioned) are issues that would be a lot easier to resolve if we had the processing model for this system described in one place.

The Web Animations spec defines a fuzzy hook into the requestAnimationFrame processing model here: http://w3c.github.io/web-animations/#script-execution-and-live-updates-to-the-model, but that should probably be tightened up, as it doesn't specify many details exactly, such as the order in which things happen across frames or exactly what sorts of things can happen between sampling a Web Animations animation and running a requestAnimationFrame callback. I believe the editors (+cc Shane) of that spec would really appreciate having a better model to hook into in order to precisely define these things.

Thanks,
- James
Re: [whatwg] Fullscreen API and out-of-process iframe
On Tue, Jul 29, 2014 at 8:46 AM, Anne van Kesteren ann...@annevk.nl wrote:
> On Tue, Jul 29, 2014 at 5:29 PM, Adam Barth w...@adambarth.com wrote:
> > Given that you haven't produced a black-box experiment that distinguishes the two approaches, different implementations can use different approaches and be interoperable.
>
> I guess. That still doesn't help us much in defining it. However, I'm not convinced that just because I can't come up with an example, there is none. B is nested through A. A invokes requestFullscreen() and then does a synchronous XMLHttpRequest, locking its event loop. B also invokes requestFullscreen(), then posts a message to A about updating its state. A's synchronous XMLHttpRequest stops, A updates its state per B, and then gets to the point of putting its own element fullscreen. The end result is something that the current specification deems impossible, which seems bad.

The race you describe is possible today if you assume A and B are collaborating by communicating with servers that can talk to each other. It's even true if A and B are loaded into different tabs, or different browsers entirely. A invokes requestFullscreen() or requestPointerLock() or anything else that touches a machine-global resource and then sends a message over the network to A's server, which forwards to B's server, which forwards to B. B then invokes requestWhatever() with the 'knowledge' that its invocation has a happens-after relationship with A's, even though in practice A's requestWhatever() may not have propagated through the browser far enough to touch the shared resource (i.e. ask the OS for fullscreen support / pointer lock / whatnot). This isn't new, but it's so rare and convoluted that I really doubt it ever happens in practice.

A more practical race of this nature can happen with NPAPI plugins, which in multi-process browsers are a shared, global resource. Chrome (and I believe other multi-process browsers as well) does not run the event loops for different tabs referencing the same plugin in lockstep, so it's very easy for otherwise unrelated tabs to communicate information to each other through a plugin. This can include cross-origin pages if the plugin's same-origin policy is relaxed or relaxable, as it is in some cases. It is theoretically possible to construct all sorts of cases where the behavior is black-box distinguishable from running all such tabs in lockstep with each other, but in practice nobody cares.

I strongly suspect the situation with cross-origin iframes is similar. While you can construct scenarios where different frames communicate with each other through various channels and come up with answers that seem to contradict knowledge about shared global state, in practice it won't matter. In practice one frame will 'win' and one will 'lose', both will run the appropriate promise handler, and both will continue executing normally.

> I guess what needs to happen is that when requestFullscreen() is invoked it needs to do synchronous checks, and those need to be done again just before the document changes state. And the only check that involves an out-of-process iframe (nested browsing contexts) will block, I guess, but that only needs to be made at the initial invocation, I think.

You could, of course, come up with a synchronous checking scheme similar to the storage mutex, but barring somebody discovering a significant web compat issue I suspect that, as with the storage mutex, it would be completely ignored by all vendors.

- James
Re: [whatwg] Fullscreen API and out-of-process iframe
On Mon, Jul 28, 2014 at 9:03 AM, Anne van Kesteren ann...@annevk.nl wrote:
> (How are animation frames synchronized across iframe boundaries?)

requestAnimationFrame specifies that the callbacks fire for all iframes within the same task, but it's not black-box observable between cross-origin iframes so it doesn't matter.

- James
Re: [whatwg] Proposal: requestBackgroundProcessing()
On Thu, Feb 20, 2014 at 7:25 AM, Ashley Gullen ash...@scirra.com wrote:
> The host is effectively acting as the game server, and this basically hangs the server. If there were 20 peers connected to the host, the game hangs for all 20 players.

That's a bug in your application design. If one web page is performing operations necessary for things orthogonal to that page's visual display, those operations should not be tied to a requestAnimationFrame loop. If the host is responding to network updates from other clients, for instance, then it could perform that work in response to the network events coming in. The page may also be performing the normal game updates for that one client in a rAF loop concurrently.

- James
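A hedged sketch of the split being suggested (names are illustrative, not from any real game): simulation state advances from network events, while the rAF loop only draws.

```javascript
// Illustrative sketch: game-server state updates happen in response
// to network messages, independent of any rendering loop.
const game = {
  state: { tick: 0 },
  draws: 0,
  // Called from e.g. a WebSocket/WebRTC message handler -- this runs
  // even when the page isn't producing frames (hidden tab, etc.).
  onNetworkMessage(msg) {
    if (msg.type === 'tick') this.state.tick += 1;
  },
  // Called from a requestAnimationFrame loop -- visual output only.
  render() {
    this.draws += 1;
  },
};

// Two peers send updates while no frames are being produced:
game.onNetworkMessage({ type: 'tick' });
game.onNetworkMessage({ type: 'tick' });
// The "server" state has advanced without a single render.
```

Because nothing in onNetworkMessage depends on render() having run, throttling the rAF loop in a hidden tab no longer stalls the other players.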
Re: [whatwg] Canvas in workers
On Thu, Oct 24, 2013 at 6:59 AM, Glenn Maynard gl...@zewt.org wrote:
> ----- Original Message -----
> From: Robert O'Callahan rob...@ocallahan.org
> > We talked through this proposal with a lot of Mozilla people in a meeting and collectively decided that we don't care about the case of workers that commit multiple frames to a canvas without yielding --- at least for now. So we want to remove commit() and copy the main-thread semantics that a canvas frame is eligible for presentation whenever script is not running in the worker.
>
> On Thu, Oct 24, 2013 at 7:25 AM, Jeff Gilbert jgilb...@mozilla.com wrote:
> > This is not the current WebGL semantics: "WebGL presents its drawing buffer to the HTML page compositor immediately before a compositing operation[...]"
>
> (Can you please quote correctly? Having one person top-quoting makes a mess of the whole thread, and it looked like you were saying that the WebGL spec language you were quoting was incorrect.)
>
> The assumption WebGL is making here is that compositing is a synchronous task in the event loop, which happens while no script is running. That is, the semantics Robert describes are the same as what the WebGL spec is trying to say. That's not necessarily how compositing actually works, though, and that language also won't make sense with threaded rendering. It might be better for WebGL to define this using the "global script clean-up jobs" task that HTML now defines: http://www.whatwg.org/specs/web-apps/current-work/#run-the-global-script-clean-up-jobs
>
> I'd recommend spinning off a separate thread if we want to go into this further.

The time that compositing occurs is already specified by the HTML event loop processing model (7.1.4.2): http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#processing-model-4

An event loop must continually run through the following steps for as long as it exists:

1. Run the oldest task on one of the event loop's task queues, if any, ignoring tasks whose associated Documents are not fully active. The user agent may pick any task queue.
2. If the storage mutex is now owned by the event loop, release it so that it is once again free.
3. If a task was run in the first step above, remove that task from its task queue.
4. If this event loop is not a worker's event loop, run these substeps:
   1. Perform a microtask checkpoint.
   2. Provide a stable state.
   3. If necessary, update the rendering or user interface of any Document or browsing context to reflect the current state.
5. Otherwise, if this event loop is running for a WorkerGlobalScope, but there are no events in the event loop's task queues and the WorkerGlobalScope object's closing flag is true, then destroy the event loop, aborting these steps.
6. Return to the first step of the event loop.
[whatwg] Bug in 12.2.5.4.8 (The text insertion mode) when invoking the spin the event loop algorithm
12.2.5.4.8 (The "text" insertion mode) defines the following algorithm for dealing with inline script tags that aren't ready to execute when parsed. I believe there are some subtle bugs in the way the algorithm is specified. More importantly, the invocation of the "spin the event loop" algorithm makes it harder to reason about the system as a whole.

The algorithm in question runs when parsing a </script> at a script nesting level of zero (i.e. not one generated by document.write()):

1. Let *the script* be the pending parsing-blocking script. There is no longer a pending parsing-blocking script.
2. Block the tokenizer for this instance of the HTML parser, such that the event loop will not run tasks that invoke the tokenizer.
3. If the parser's Document has a style sheet that is blocking scripts or *the script*'s "ready to be parser-executed" flag is not set: spin the event loop until the parser's Document has no style sheet that is blocking scripts and *the script*'s "ready to be parser-executed" flag is set.
4. Unblock the tokenizer for this instance of the HTML parser, such that tasks that invoke the tokenizer can again be run.
5. Let the insertion point be just before the next input character.
6. Increment the parser's script nesting level by one (it should be zero before this step, so this sets it to one).
7. Execute *the script*.
...

Step 3 spins the event loop. The issue is that while the tokenizer is blocked, other tasks can run whenever the event loop is spun and cause changes that make the rest of the algorithm incorrect. For example, consider:

<!DOCTYPE html>
<script>
window.setTimeout(function() { document.write("Goodbye"); }, 50);
</script>
<link rel=stylesheet type=text/css href=long_loading.css></link>
<script>
window.alert("Hello");
</script>

The algorithm in question will run when parsing the last </script>. The second script can't execute until the stylesheet loads, so the spec spins the event loop until that happens. However, if the setTimeout fires before long_loading.css loads, then the document.write() call will first perform an implicit document.open(), since there is no insertion point, and blow away the entire Document. This cancels any pending tasks but doesn't (as far as I can tell) cancel already-started tasks. By my reading of the spec, the rest of the steps of the algorithm should still run and the script should execute. However, what actually happens in every browser I can test (Chrome Canary / Firefox 22 / IE10) is that the alert never fires. What the Blink code actually does is simply suspend the tokenizer and then return control to the underlying (non-nested, usually) event loop.

I propose that instead of spinning the event loop, step 3 instead enter an asynchronous section if the script isn't ready to run yet, which queues a task once the script is ready to run. Since this algorithm only runs at a script nesting level of zero this is a fairly minor tweak in
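A hedged JavaScript sketch of the proposed shape (all names are illustrative): rather than spinning a nested event loop, the parser blocks the tokenizer, registers a "ready" continuation, and resumes from an ordinary queued task.

```javascript
// Sketch of the proposed change: block the tokenizer, then queue a
// task to resume once the script becomes ready, instead of spinning
// the event loop inside the algorithm.
function makeParser(queueTask) {
  return {
    tokenizerBlocked: false,
    executed: [],
    onScriptEndTag(script) {
      this.tokenizerBlocked = true;
      // Continuation runs as a normal task from the event loop,
      // so no nested event-loop spin is needed.
      script.whenReady(() => queueTask(() => {
        this.tokenizerBlocked = false;
        this.executed.push(script.name);
      }));
    },
  };
}

// Trivial task queue standing in for the event loop:
const tasks = [];
const parser = makeParser(t => tasks.push(t));

const script = {
  name: 'inline',
  readyCallbacks: [],
  whenReady(cb) { this.readyCallbacks.push(cb); },
  becomeReady() { this.readyCallbacks.forEach(cb => cb()); },
};

parser.onScriptEndTag(script);
// Tokenizer is blocked; nothing has executed yet.
script.becomeReady(); // e.g. the blocking stylesheet finished loading
tasks.shift()();      // the event loop runs the queued task
```

Control returns to the (non-nested) event loop between blocking and execution, which matches what the Blink code described above actually does.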
Re: [whatwg] Challenging canvas.supportsContext
On Wed, Jun 19, 2013 at 1:22 PM, Brandon Benvie bben...@mozilla.com wrote:
> On 6/19/2013 12:46 PM, Boris Zbarsky wrote:
> > On 6/19/13 3:43 PM, Kenneth Russell wrote:
> > > Accurate feature detection in libraries like Modernizr was mentioned as a key use case: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-September/037249.html
> >
> > Right, this is the use case that's not really making sense to me. The fact that Modernizr was doing this _eagerly_ sounds like a bug in Modernizr to me...
>
> The point of using Modernizr or something like it is to detect availability of features on page load, and then conditionally load polyfills/alternate fallback implementations. It specifically does need to do eager detection to be useful. It can't wait until the first usage to do feature detection; it needs to be done up front when preparing dependencies for the main application. This is also why Modernizr provides a custom build tool. It allows users to only do the feature detection on features they know they need to care about, because each check has some cost that needs to be paid early on in a page load.

What would a page using Modernizr (or another library) to feature-detect WebGL do if the supportsContext('webgl') call succeeds but the later getContext('webgl') call fails?

I'm also failing to see the utility of the supportsContext() call. It's impossible for a browser to promise that supportsContext('webgl') implies that getContext('webgl') will succeed without doing all of the expensive work, so any correctly authored page will have to handle a getContext('webgl') failure anyway.

- James
Re: [whatwg] Challenging canvas.supportsContext
On Wed, Jun 19, 2013 at 3:04 PM, Kenneth Russell k...@google.com wrote:
> On Wed, Jun 19, 2013 at 2:20 PM, Brandon Benvie bben...@mozilla.com wrote:
> > On 6/19/2013 2:05 PM, James Robinson wrote:
> > > What would a page using Modernizr (or other library) to feature detect WebGL do if the supportsContext('webgl') call succeeds but the later getContext('webgl') call fails?
> >
> > I don't have an example, I was just explaining how Modernizr is often used.
> >
> > > I'm also failing to see the utility of the supportsContext() call. It's impossible for a browser to promise that supportsContext('webgl') implies that getContext('webgl') will succeed without doing all of the expensive work, so any correctly authored page will have to handle a getContext('webgl') failure anyway.
> >
> > Given this, it would seem supportsContext is completely useless. The whole purpose of a feature detection check is to detect if a feature actually works or not. Accuracy is more important than cost.
>
> supportsContext() can give a much more accurate answer than !!window.WebGLRenderingContext. I can only speak for Chromium, but in that browser it can take into account factors such as whether the GPU sub-process was able to start, whether WebGL is blacklisted on the current card, whether WebGL is disabled on the current domain due to previous GPU resets, and whether WebGL initialization succeeded on any other page. All of these checks can be done without the heavyweight operation of actually creating an OpenGL context.

That's true, but the answer still doesn't promise anything about what getContext() will do. It may still return null, and code will have to check for that. What's the use case for calling supportsContext() without calling getContext()?

- James
Re: [whatwg] Challenging canvas.supportsContext
On Wed, Jun 19, 2013 at 3:24 PM, Kenneth Russell k...@google.com wrote:
> On Wed, Jun 19, 2013 at 3:06 PM, James Robinson jam...@google.com wrote:
> > That's true, but the answer still doesn't promise anything about what getContext() will do. It may still return null and code will have to check for that. What's the use case for calling supportsContext() without calling getContext()?
>
> Any application which has a complex set of fallback paths. For example:
> - Preference 1: supportsContext('webgl', { softwareRendered: true })
> - Preference 2: supportsContext('2d', { gpuAccelerated: true })
> - Preference 3: supportsContext('webgl', { softwareRendered: false })
> - Fallback: 2D canvas

I'm assuming you have (1) and (3) flipped here, and that both supportsContext() and getContext() support additional attributes to dictate whether a software-provided context can be supplied. In that case, in order to write correct code I'd still have to attempt to fetch the contexts before using them, i.e.:

var ctx = canvas.getContext('webgl', { 'allowSoftware': false });
if (ctx) {
  doPreference1(ctx);
  return;
}
ctx = canvas.getContext('2d', { 'allowSoftware': false });
if (ctx) {
  doPreference2(ctx);
  return;
}
// etc.

How could I simplify this code using supportsContext()?

> I agree that ideally, if supportsContext returns true then -- without any other state changes that might affect supportsContext's result -- getContext should return a valid rendering context. It's simply impossible to guarantee this correspondence 100% of the time, but if supportsContext's spec were tightened somehow, and conformance tests were added which asserted consistent results between supportsContext and getContext, would that address your concern?

It seems overwhelmingly likely that one of the state changes that might affect the result will be attempting to instantiate a real context. I don't see how supportsContext() could be as accurate as getContext() without doing all of the work getContext() does. If it's not 100% accurate, when is it useful?

- James
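The objection can be made concrete with a stub (everything here is illustrative, including the option names): however accurate supportsContext() is, correct code still has to handle getContext() returning null, so the fallback chain looks the same with or without the extra call.

```javascript
// Stub canvas showing why supportsContext() doesn't remove the null
// check: real context creation can still fail later (GPU reset,
// resource exhaustion, ...). All names here are illustrative.
function makeCanvas({ webglWillActuallyWork }) {
  return {
    supportsContext(type) {
      // Cheap checks only -- may be optimistic.
      return type === 'webgl' || type === '2d';
    },
    getContext(type) {
      if (type === 'webgl' && !webglWillActuallyWork) return null;
      return { type };
    },
  };
}

// Correct code looks identical with or without supportsContext():
function pickContext(canvas) {
  return canvas.getContext('webgl') || canvas.getContext('2d');
}

const optimistic = makeCanvas({ webglWillActuallyWork: false });
// supportsContext says yes...
const claimed = optimistic.supportsContext('webgl');
// ...but getContext still fails, so the fallback path must exist:
const ctx = pickContext(optimistic);
```

The stub's supportsContext() is "accurate" by its own cheap checks, yet the page still lands on the 2D fallback only because it attempted the real getContext() call.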
Re: [whatwg] Enabling LCD Text and antialiasing in canvas
Fonts are not vector art and are not rendered as paths at commonly read sizes. I don't think anyone is using, or would be tempted to use, LCD subpixel AA for anything other than text.

- James

On Wed, Apr 3, 2013 at 5:07 PM, Gregg Tavares g...@google.com wrote:
> On Wed, Apr 3, 2013 at 5:04 PM, Rik Cabanier caban...@gmail.com wrote:
> > On Wed, Apr 3, 2013 at 9:04 AM, Gregg Tavares g...@google.com wrote:
> > > On Wed, Apr 3, 2013 at 8:41 AM, Stephen White senorbla...@chromium.org wrote:
> > > > Would Mozilla (or other browser vendors) be interested in implementing the hint as Gregg described above? If so, we could break out the LCD text issue from canvas opacity, and consider the latter on its own merits, since it has benefits apart from LCD text (i.e., performance). Regarding that, if I'm reading correctly, Vladimir Vukicevic has expressed support on webkit-dev for the ctx.getContext('2d', { alpha: false }) proposal (basically, a syntactic rewrite of canvas opaque). Does this indeed have traction with other browser vendors? As for naming, I would prefer something like ctx.fontSmoothing or ctx.fontSmoothingHint, to align more closely with canvas's ctx.imageSmoothingEnabled and WebKit's -webkit-font-smoothing CSS property. -webkit-font-smoothing has "none", "antialiased" and "subpixel-antialiased" as options. I think it's ok to explicitly call out subpixel antialiasing, even if the platform (or UA) does not support it, especially if the attribute explicitly describes itself as a hint.
> > >
> > > Why call it font smoothing? Shouldn't a UA be able to also render paths using the same hint?
> >
> > I have not heard of anyone using sub-pixel antialiasing for vector art. It might look weird...
>
> ??? Fonts are vector art. Why should this flag be specific to fonts? So if I decide tomorrow that I want vector art to be prettier than the competition by implementing LCD anti-aliasing, I'll have to lobby for a new flag to turn it on? Why?
>
> > > On Sun, Mar 17, 2013 at 11:17 PM, Gregg Tavares g...@google.com wrote:
> > > > On Sun, Mar 17, 2013 at 1:40 PM, Robert O'Callahan rob...@ocallahan.org wrote:
> > > > > On Sat, Mar 16, 2013 at 5:52 PM, Gregg Tavares g...@google.com wrote:
> > > > > > Let me ask again in a different way ;-) Specifically about LCD-style antialiasing. What about a context attribute antialiasRenderingQualityHint, for now with two settings, "default" and "displayDependent": context.antialiasRenderingQualityHint = "displayDependent"
> > > > >
> > > > > How would this interact with canvas opacity? E.g. if the author uses "displayDependent" and then draws text over transparent pixels in the canvas, what is the UA supposed to do?
> > > >
> > > > Whatever the UA wants. It's a hint. From my POV, since the spec doesn't say anything about anti-aliasing, it really doesn't matter. My preference, if I were programming a UA, would be: if the user sets "displayDependent" and the UA is running on a lo-dpi machine, I'd unconditionally render LCD-AA with the assumption that the canvas is composited on white. If they want some other color they'd fill the canvas with a solid color first. Personally I don't think that needs to be specced, but it would be my suggestion. As I mentioned, even without this hint the spec doesn't prevent a UA from unconditionally using LCD-AA. Very few developers are going to run into issues. Most developers that use canvas aren't going to set the hint. Most developers that use canvas don't make it transparent, nor do they CSS rotate/scale it. Those few developers that do happen to blend and/or rotate/scale AND set the hint will probably get some fringing, but (a) there was no guarantee they wouldn't already have that problem, since as pointed out the spec doesn't specify AA nor what kind, and (b) if they care they'll either stop using the hint or they'll search for "why is my canvas fringy" and the answer will pop up on Stack Overflow and they can choose one of the solutions.
Re: [whatwg] Hardware accelerated canvas
I believe this ship has already sailed for the most part: several major browsers (starting with IE9) have shipped GPU-based canvas 2D implementations that simply lose the image buffer on a lost context. Given that there are a fair number of benchmarks (of varying quality) around canvas 2D speed, I doubt vendors will be able to give up speed.

It's also important to note that, unlike WebGL, the only thing lost on a lost context is the image buffer itself. With WebGL, the page has to regenerate a large number of resources (shaders, buffers, textures) before it can render the next frame. With canvas, the page can just start drawing. Many applications redraw the entire canvas on every frame, so lost-context recovery is identical to normal operation: just draw the thing. All other resources are managed by, and can be regenerated by, the browser without script intervention.

On Mon, Sep 3, 2012 at 9:11 AM, Ian Hickson i...@hixie.ch wrote:
> There are ways to make it work without forgoing acceleration, e.g. taking regular backups of the canvas contents, remembering every instruction that was sent to the canvas, etc.

We investigated these and other options when first looking at GPU acceleration in Chrome. None seemed feasible. Readbacks are expensive: bandwidth from GPU to main memory on split-memory systems is limited, and doing a readback is a pipeline stall. Recording draw commands works for some path-only use cases, but many canvases draw from dynamic sources such as videos or other canvases. Preserving those resources is quite expensive, especially when they might be GPU-resident to start with and require a readback. The more basic problem with all of these approaches is that they require considerable complexity, time, and memory to deal with a (hopefully) rare situation. There will never be a benchmark that involves a context loss in the middle, so any time spent on recovery is time wasted.

> On Mon, 3 Sep 2012, Benoit Jacob wrote:
> > Remember this adage from high-performance computing, which applies here as well: "The fast drives out the slow even if the fast is wrong."
>
> This isn't an issue of the spec -- there is existing content that would be affected.

It is the spec's problem insofar as the spec wants to reflect reality. I really doubt UAs are going to be able to implement something significantly more complicated or slower than what they have been shipping for a few years.

I think it would be useful for some sorts of applications to be notified when the image buffer data is lost so that they could regenerate it. This would be useful for applications that use a canvas to cache mostly-static intermediate data, or for applications that only repaint dirty rectangles in normal operation.

- James
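A hedged sketch of the kind of notification being suggested. The 'bufferlost' event name is hypothetical (no browser shipped it at the time); the point is that an app caching mostly-static content would repaint its cache only when told the buffer was lost, not defensively on every frame.

```javascript
// Sketch of "notify the page when the 2D image buffer is lost".
// The 'bufferlost' event name here is hypothetical.
class CachingCanvasApp {
  constructor() {
    this.cacheRepaints = 0;
    this.listeners = {};
  }
  addEventListener(name, fn) { this.listeners[name] = fn; }
  dispatch(name) { if (this.listeners[name]) this.listeners[name](); }
  // Expensive: regenerates the mostly-static intermediate data.
  repaintCache() { this.cacheRepaints += 1; }
}

const app = new CachingCanvasApp();
// Paint the cached intermediate data once up front...
app.repaintCache();
// ...and again only when the browser reports the buffer was lost.
app.addEventListener('bufferlost', () => app.repaintCache());
app.dispatch('bufferlost'); // e.g. GPU reset
```

Without such a notification, a dirty-rectangle app has no way to learn that its untouched pixels are gone.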
Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio
If we are adding new APIs for manipulating the backing directly, can we make them asynchronous? This would allow for many optimization opportunities that are currently difficult or impossible. - James

On Mar 20, 2012 10:29 AM, Edward O'Connor eocon...@apple.com wrote: Hi, Unfortunately, lots of canvas content (especially content which calls {create,get,put}ImageData methods) assumes that the canvas's backing store pixels correspond 1:1 to CSS pixels, even though the spec has been written to allow for the backing store to be at a different scale factor. Especially problematic is that developers have to round trip image data through a canvas in order to detect that a different scale factor is being used. I'd like to propose the addition of a backingStorePixelRatio property to the 2D context object. Just as window.devicePixelRatio expresses the ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would express the ratio of backing store pixels to CSS pixels. This allows developers to easily branch to handle different backing store scale factors. Additionally, I think the existing {create,get,put}ImageData API needs to be defined to be in terms of CSS pixels, since that's what existing content assumes. I propose the addition of a new set of methods for working directly with backing store image data. (New methods are easier to feature detect than adding optional arguments to the existing methods.) At the moment I'm calling these {create,get,put}ImageDataHD, but I'm not wedded to the names. (Nor do I want to bikeshed them.) Thanks for your consideration, Ted
Re: [whatwg] API for encoding/decoding ArrayBuffers into text
On Fri, Mar 16, 2012 at 4:25 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 3/16/12 5:25 PM, Brandon Jones wrote: Everyone knows that typed arrays /can/ be Big Endian, but I'm not aware of any devices available right now that support WebGL that are. I believe that recent Firefox on a SPARC processor would fit that description. Of course the number of web developers that have a SPARC-based machine is 0 to a very good approximation You can s/web developers/users/ and the statement would still apply, wouldn't it? - James -Boris
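The endianness question in this thread can be probed directly from script with two typed-array views over one buffer; a small sketch that runs in any engine with typed arrays:

```javascript
// Detect the platform's typed-array byte order. A Uint16Array view
// shares its buffer with a Uint8Array view, so the position of the
// low-order byte reveals the endianness.
function isLittleEndian() {
  const buf = new ArrayBuffer(2);
  new Uint16Array(buf)[0] = 0x0102;
  return new Uint8Array(buf)[0] === 0x02; // low byte first => little-endian
}

const endian = isLittleEndian() ? "little" : "big";
```

On the x86 and ARM devices that dominate WebGL usage this reports "little", which is the asymmetry Brandon is pointing at.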
Re: [whatwg] should we add beforeload/afterload events to the web platform?
On Sun, Jan 15, 2012 at 1:23 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 1/12/12 9:22 AM, Boris Zbarsky wrote: On 1/12/12 5:16 AM, Simon Pieters wrote: Note that it removes the root element when the script element is executed, not at DOMContentLoaded. Ah, I missed that. I guess the HTML5 parsing algorithm means that now the elements are parsed into the other document, eh? That's actually pretty cute. I wonder whether we can get the mobify folks to switch to this Thinking back on this, this still has the issue of not preventing preloads. Again, preventing preloads on a per-load basis is a hard problem if you want to have sane parallelism. Preventing _all_ loads for a document based on some declarative thing near the start of the document, on the other hand, should not be too bad. Even this scheme doesn't work with a model like SPDY push or other bundling techniques or with more aggressive preloading that initiates loads before the main resource is loaded. It seems like there are two use cases: 1.) Monitoring/modifying/preventing network activity for a given resource load 2.) Monitoring/modifying/preventing DOM modifications that occur as the result of a resource load For (1) I can't think of any web-facing needs. For extensions, I believe this is better addressed by APIs that target the network layer more directly - for example proxy auto config scripts, or things like http://code.google.com/chrome/extensions/trunk/webRequest.html. For (2) I think this would be better addressed by using next-generation mutation events to observe (and potentially react) to the changes that occur when an img is loaded, for example. I struggle to think of good web-facing use cases for this, though. In any event I think that beforeload as it exists today is a bad API for the web and hope that we can stop exposing it to the web in WebKit (although I suspect it'll stick around for extension contexts, which is more acceptable in my view). 
- James If that plus a beforeprocess event addresses the majority of the web-facing use cases, we should consider adding that. -Boris
Re: [whatwg] should we add beforeload/afterload events to the web platform?
On Tue, Jan 17, 2012 at 4:29 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 1/17/12 7:24 PM, James Robinson wrote: Even this scheme doesn't work with a model like SPDY push or other bundling techniques or with more aggressive preloading that initiates loads before the main resource is loaded. Er... you mean it initiates loads before it has any idea whether the main resource might have changed such that it no longer links to the objects in question? The way that these sorts of schemes work is that the server knows that a set of resources are needed in addition to the main resource and it starts sending them down before the client has received/parsed the main resource. The server serving foo.html can have a pretty good idea about whether foo.html contains the string script src=foo.js so there isn't any real reason for it to not serve foo.js at the same time assuming that the underlying protocol can handle such a thing. In situations with high RTTs and reasonable bandwidth (like common mobile networks) this can be a big win. I bring this up to make sure that we aren't making promises about resource loads that we can't keep. - James I agree that such aggressive preloading is impossible to control from the source document; an interesting question is whether it's desirable. I know that in the past when Gecko preloaded too aggressively we got huge complaints from various ad providers about bogus impressions 1.) Monitoring/modifying/preventing network activity for a given resource load 2.) Monitoring/modifying/preventing DOM modifications that occur as the result of a resource load For (1) I can't think of any web-facing needs. I believe mobify does in fact want (1) as much as it can to conserve bandwidth... In any event I think that beforeload as it exists today is a bad API for the web Good, we agree on that. ;) (although I suspect it'll stick around for extension contexts, which is more acceptable in my view). It's obviously just fine from my pov at that point.
;) -Boris
Re: [whatwg] Node inDocument
On Thu, Sep 1, 2011 at 1:39 AM, Anne van Kesteren ann...@opera.com wrote: On Thu, 01 Sep 2011 00:18:26 +0200, Erik Arvidsson a...@chromium.org wrote: After thinking more about this we believe that moving contains to Node is a better alternative. The problem with Node inDocument is that it does not say which document it is in so code would need to also check ownerDocument to be robust in the presence of frames and multiple windows. You got it. Now everyone please implement :-) http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#dom-node-contains

What is the expected behavior for nodes in iframes? IOW, with this sort of DOM:

body
  iframe id=a
    iframe id=b
      div id=node

what is the return value for:

b.contentDocument.contains(node); // I'd expect true, the spec seems to say true
a.contentDocument.contains(node); // I'd expect false, the spec seems to say true
document.contains(node); // I'd expect false, the spec seems to say true

- James -- Anne van Kesteren http://annevankesteren.nl/
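The semantics under discussion - contains() walks the parent chain within a single tree and does not cross document boundaries - can be sketched with plain objects standing in for nodes. All names here are illustrative, not a real DOM implementation:

```javascript
// Minimal sketch of Node contains() semantics: walk the parent chain
// of the candidate node and check whether we reach `ancestor`.
function makeNode(name, parent = null) {
  return { name, parent };
}

function contains(ancestor, node) {
  for (let n = node; n !== null; n = n.parent) {
    if (n === ancestor) return true; // contains() is inclusive
  }
  return false;
}

// Two separate trees stand in for two documents: an iframe's
// contentDocument is a distinct tree, so a parent-chain walk never
// crosses the boundary.
const docA = makeNode("docA");
const iframeB = makeNode("iframeB", docA); // the iframe element lives in docA
const docB = makeNode("docB");             // separate tree root
const div = makeNode("div", docB);

contains(docB, div); // true: same tree
contains(docA, div); // false: div lives in the other document
```

Under this reading, James's second and third cases would return false, which is what the thread is asking the spec to pin down.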
Re: [whatwg] Timing API proposal for measuring intervals
The discussion of audio concerns seems to have died down. I think anything specific to audio will be additive to this API, and there are still several non-audio use cases, so here's an updated proposal: double window.performance.now(); The idea of putting it on window.performance is that this seems closely related to the UserTiming spec currently under development by the Web Perf working group ( http://w3c-test.org/webperf/specs/UserTiming/#the-usertiming-interface). When window.performance.now() is called, the browser returns a double representing the number of milliseconds that have elapsed since the most recent top-level navigation in that context. That's the same time value that is used by the navigation and user timing specs. Defining this as a function instead of an attribute makes it clearer that the returned value can change on every access and is not fixed while script is running. This is important for the script profiling use case. When a fixed time value is needed, as for example in requestAnimationFrame, it should be provided by that API. By putting this on window.performance, we can pick a nice short function name without having to worry so much about potential collisions with existing content. - James
[whatwg] Timing API proposal for measuring intervals
PROBLEM

It is not possible to accurately measure time intervals using existing web platform APIs, or to specify times at a given interval from the current time. Date.now() and DOM timestamps are inadequate for this purpose; see ISSUES WITH EXISTING APIS below for reasons why this is so.

USE CASES

1.) When updating an imperative animation state from script, authors need to know how much time has elapsed in the animation so far in order to properly update the animation.
2.) When synchronizing imperative animation updates with audio, authors need to know how much time has elapsed in the animation and in the audio sample's progression, and be able to schedule future audio cues to specific points in the animation.
3.) When measuring the time that a given operation has taken (for example, a network request or an application process), authors need to be able to measure the amount of time elapsed from script.

ISSUES WITH EXISTING APIS

In ECMAScript the Date object is typically used for timing. It is defined (in ECMA-262 5th edition section 15.9.1.1) as representing milliseconds since the unix epoch, Jan 1 1970 00:00:00 UTC, ignoring leap seconds. DOM timestamps are defined in a similar way, although the spec doesn't seem to say anything about leap seconds. In practice, implementations depend on the system clock for these APIs and are likely to use the same implementation for both. This poses a problem whenever the system clock is adjusted. In all implementations I tested, Date.now() varies whenever the system clock is adjusted. This means that, for example, the following snippet:

var start = Date.now();
dosomething();
window.alert(Date.now() - start);

may alert a positive number, negative number, or zero if the system clock is adjusted in between the two calls to Date.now(). Similarly, timestamps from a series of DOM events may be increasing, decreasing, or unchanging if the system clock adjusts in between event dispatches.
System clock adjustments are not as rare as you might think; many systems are configured to receive clock updates over the network via NTP or similar systems. When developing and implementing the navigation timing spec we ran into many reported time intervals from users in the wild that were bogus in one way or another, either negative (easily detectable) or artificially inflated (very difficult to detect). I've put a simple test page up here: http://webstuff.nfshost.com/timers.html.

Additionally, there's a practical concern that querying the system clock on some systems is more expensive and/or less reliable than other timing APIs. On Windows, for instance, GetSystemTimeAsFileTime() has a resolution of ~15.5ms, so browsers use a combination of GetSystemTimeAsFileTime() with higher-resolution timing APIs like QueryPerformanceCounter() that provide better resolution but are not affected by adjustments to the system clock. See http://drdobbs.com/windows/184416651?pgno=1 and https://bugzilla.mozilla.org/show_bug.cgi?id=363258 for some background information.

PROPOSAL

I propose that we add a new attribute to the Window interface that provides a monotonic, uniformly increasing timestamp suitable for interval measurements.

bikeshed-topic
partial interface Window {
  readonly attribute double monotonicTime;
};
/bikeshed-topic

bikeshed-topic
I propose that monotonicTime be defined as the number of milliseconds elapsed since the window creation. There is likely to be no meaningful relationship between the value exposed by this interval and a date and time in the past (such as the unix epoch), so starting at zero seems as good a choice as any.
/bikeshed-topic

I do not believe we can change the meaning of Date.now() in ECMAScript since the current behavior has existed for a very long time and is genuinely useful when the author wants to know the system clock's current value, for example in a calendar-type application.
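The failure mode described above can be simulated with a fake adjustable wall clock next to a monotonic counter. All names here (wallClock, monoClock, ntpAdjust) are invented for illustration:

```javascript
// Simulation of the Date.now() problem: `wallClock` stands in for the
// system clock (adjustable by NTP), `monoClock` for the proposed
// monotonic timestamp.
let wallTime = 1000000;
let monoTime = 0;
const wallClock = () => wallTime;
const monoClock = () => monoTime;

function tick(ms) { wallTime += ms; monoTime += ms; } // real time passing
function ntpAdjust(ms) { wallTime += ms; }            // only the system clock jumps

const wallStart = wallClock();
const monoStart = monoClock();
tick(10);        // 10ms of actual work
ntpAdjust(-50);  // NTP steps the clock back 50ms mid-measurement

const wallElapsed = wallClock() - wallStart; // -40: a bogus, negative interval
const monoElapsed = monoClock() - monoStart; // 10: the correct interval
```

This is exactly the negative-interval data the navigation timing work observed in the wild.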
RELATIONSHIP TO EXISTING WORK, IMPLEMENTATION NOTES

The setTimeout() and setInterval() algorithms (http://www.whatwg.org/specs/web-apps/current-work/multipage/timers.html#timers) implicitly depend on a uniformly monotonic clock in the various "wait for X milliseconds" phases, since there is no allowance in this text for adjustments to the system clock to change when the timer actually fires. All browsers except for WebKit ignore system clock changes for timer scheduling, and the WebKit behavior is a bug which I plan to fix.

The Web Perf WG has run into similar issues and defined a monotonic clock as part of the Navigation Timing API: http://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html#mono-clock. This clock is very similar to the above proposal but is not exposed directly to authors. I expect that implementations of the Navigation Timing API would use the same mechanism to implement this proposal.

The proposed Web Audio API (http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html#AudioContext-section) exposes a
Re: [whatwg] Timing API proposal for measuring intervals
On Thu, Jul 7, 2011 at 6:47 PM, Ojan Vafai o...@chromium.org wrote: On Thu, Jul 7, 2011 at 6:15 PM, James Robinson jam...@google.com wrote: PROBLEM It is not possible to accurately measure time intervals using existing web platform APIs, or to specify times at a given interval from the current time. Date.now() and DOM timestamps are inadequate for this purpose; see ISSUES WITH EXISTING APIS below for reasons why this is so. USE CASES 1.) When updating an imperative animation state from script, authors need to know how much time has elapsed in the animation so far in order to properly update the animation. 2.) When synchronizing imperative animation updates with audio, authors need to know how much time has elapsed in the animation and in the audio sample's progression and be able to schedule future audio cues to specific points in the animation. 3.) When measuring the time that a given operation has taken (for example, a network request or an application process), authors need to be able to measure the amount of time elapsed from script. ISSUES WITH EXISTING APIS In ECMAScript the Date object is typically used for timing. It is defined (in ECMA-262 5th edition section 15.9.1.1) as representing milliseconds since the unix epoch, Jan 1 1970 00:00:00 UTC, ignoring leap seconds. DOM timestamps are defined in a similar way, although it doesn't seem to specify anything about leap seconds. In practice, implementations depend on the system clock for these APIs and are likely to use the same implementation for both. This poses a problem whenever the system clock is adjusted. In all implementations I tested, Date.now() varies whenever the system clock is adjusted. This means that, for example, the following snippet: var start = Date.now(); dosomething(); window.alert(Date.now() - start); may alert a positive number, negative number, or zero if the system clock is adjusted in between the two calls to Date.now().
Similarly, timestamps from a series of DOM events may be increasing, decreasing, or unchanging if the system clock adjusts in between event dispatches. System clock adjustments are not as rare as you might think; many systems are configured to receive clock updates over the network via NTP or similar systems. When developing and implementing the navigation timing spec we ran into many reported time intervals from users in the wild that were bogus in one way or another, either negative (easily detectable) or artificially inflated (very difficult to detect). I've put a simple test page up here: http://webstuff.nfshost.com/timers.html. Additionally, there's a practical concern that querying the system clock on some systems is more expensive and/or less reliable than other timing APIs. On Windows, for instance, GetSystemTimeAsFileTime() has a resolution of ~15.5ms, so browsers use a combination of GetSystemTimeAsFileTime() with higher-resolution timing APIs like QueryPerformanceCounter() that provide better resolution but are not affected by adjustments to the system clock. See http://drdobbs.com/windows/184416651?pgno=1 and https://bugzilla.mozilla.org/show_bug.cgi?id=363258 for some background information. PROPOSAL I propose that we add a new attribute to the Window interface that provides a monotonic, uniformly increasing timestamp suitable for interval measurements. bikeshed-topic partial interface Window { readonly attribute double monotonicTime; }; /bikeshed-topic bikeshed-topic I propose that monotonicTime be defined as the number of milliseconds bikeshed-nit Is milliseconds sufficient? Could we use seconds and encourage implementations to do decimal values? Would be nice to support microseconds on most modern hardware. /bikeshed-nit It's a double, so implementations can provide higher resolution if they like.
setTimeout() and setInterval() clamp to milliseconds, so that seems to be the de facto resolution of the platform today, but I don't have any issue with supporting higher resolution times. - James elapsed since the window creation. There is likely to be no meaningful relationship between the value exposed by this interval and a date and time in the past (such as the unix epoch), so starting at zero seems as good a choice as any. /bikeshed-topic I do not believe we can change the meaning of Date.now() in ECMAScript since the current behavior has existed for a very long time and is genuinely useful when the author wants to know the system clock's current value, for example in a calendar type application. RELATIONSHIP TO EXISTING WORK, IMPLEMENTATION NOTES The setTimeout() and setInterval() algorithms ( http://www.whatwg.org/specs/web-apps/current-work/multipage/timers.html#timers ) implicitly depend on a uniformly monotonic clock in the various wait for X milliseconds phase, since there is no allowance in this text for adjustments to the system clock to change when
Re: [whatwg] Timing API proposal for measuring intervals
On Thu, Jul 7, 2011 at 7:36 PM, Robert O'Callahan rob...@ocallahan.org wrote: I like it so far, modulo bikeshedding. (I might call it window.currentTime.) One question is whether you allow the value to change while a script runs. When using the value to schedule animations, it would be helpful for the value to only change between stable states. True. It's also useful to be able to query the now time multiple times from script when trying to time some action. I think that animation APIs (or anything else providing a timestamp) should provide a fixed value but this attribute should always be up to date. That said, maybe it should be a function rather than an attribute to make that point clearer. If you refer to this value during requestAnimationFrame, does it give you the current time, or the predicted time at which the frame will render? I think we'll want to use the same mechanism for requestAnimationFrame, but I'm not yet sure whether we want to provide the current time, predicted frame time, or both. (CVDisplayLink provides both, for example http://developer.apple.com/library/mac/#qa/qa1385/_index.html). There's a thread on this topic on public-web-perf currently, I think we can hash out the details there. Using this value as a clock for media synchronization sounds appealing but is complicated by audio clock drift. When you play N seconds of audio, it might take slightly more or less time to actually play, so it's hard to keep media times perfectly in sync with another timing source. Just something to keep in mind. True. On OS X, however, the CoreVideo and CoreAudio APIs are specified to use a unified time base (see http://developer.apple.com/library/ios/#documentation/QuartzCore/Reference/CVTimeRef/Reference/reference.html) so if we do end up with APIs saying play this sound at time X, like Chris Rogers's proposed Web Audio API provides, it'll be really handy if we have a unified timescale for everyone to refer to.
- James Rob -- If we claim to be without sin, we deceive ourselves and the truth is not in us. If we confess our sins, he is faithful and just and will forgive us our sins and purify us from all unrighteousness. If we claim we have not sinned, we make him out to be a liar and his word is not in us. [1 John 1:8-10]
Re: [whatwg] Proposal for separating script downloads and execution
On Thu, May 26, 2011 at 3:49 PM, Aryeh Gregor simetrical+...@gmail.com wrote: On Thu, May 26, 2011 at 11:56 AM, Nicholas Zakas nza...@yahoo-inc.com wrote: I'm a little surprised that this conversation has swooped back around to performance and whether or not there's a valid use case here. In addition to standalone solutions like Steve's ControlJS and Kyle's LABjs, the Mozilla and Chrome teams were also trying to come up with solutions to enable preloading of JavaScript. What I was hoping for was a consolidation of the efforts rather than a discussion as to whether or not such efforts should continue. The question isn't whether or not such efforts should continue, it's whether any features need to be added to web standards to help the efforts continue. This is a web standards discussion list, after all, not a list about user JavaScript library development, or browser implementation. If it turns out that the libraries can be developed just fine with existing standard features, like perhaps if browsers improve script async handling, then no further discussion is needed here. Moving parts of the JavaScript download/execution process doesn't allow me to control when that script will be executed, which as I mentioned in a previous email, is really the crux of the issue. So now we're back to the question of, why can't you just wrap all the code in a function, put the function in a script async, and not execute the function until you want the code to execute? This is assuming that future browsers parse/preprocess/whatever script async on a background thread. This isn't practical if the contents of the script are not under the author's direct control. For example, an author that wanted to use jquery would create a script tag with the src set to one of the popular jquery mirrors (to maximize the chance of the resource being cached), but then have no control over when the script is actually evaluated.
It's easy to imagine a case where the author wants to initiate the network load as soon as possible but might not need to actually start using the code until some point further along in the loading sequence, possibly after a simple form of the page is made visible to the user. For this use case I think it would be handy to have a way to express please download this script but do not start evaluating it until I'm ready. As a straw man, what about using the disabled attribute? When the load completes, if the disabled attribute is set then the script is not evaluated until the disabled attribute is unset. After the script evaluates it goes into a state where the disabled attribute is ignored. Browsers that ignored this behavior would evaluate the script sooner than the author might expect, but it's usually very easy to detect when this is happening and react appropriately. - James
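The straw-man semantics can be simulated with plain objects standing in for script elements. This is a toy sketch of the proposed behavior only; nothing here is a real DOM API:

```javascript
// Toy simulation of the straw man: a loaded script is not evaluated
// while `disabled` is set; clearing the flag after load triggers
// evaluation exactly once, after which `disabled` is ignored.
function makeScript(source) {
  return { source, loaded: false, disabled: true, evaluated: false };
}

function onLoadComplete(script) {
  script.loaded = true;
  if (!script.disabled) evaluate(script);
}

function setDisabled(script, value) {
  script.disabled = value;
  if (!value && script.loaded && !script.evaluated) evaluate(script);
}

function evaluate(script) {
  if (script.evaluated) return; // evaluation happens at most once
  script.evaluated = true;
}

const s = makeScript("/* e.g. a library fetched from a mirror */");
onLoadComplete(s);           // load finishes while disabled: no evaluation
const early = s.evaluated;   // still false
setDisabled(s, false);       // author is ready: script evaluates now
const late = s.evaluated;    // true
```

A browser that ignored the attribute would simply evaluate on load, which as noted above is detectable by the page.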
Re: [whatwg] Proposal for a tab visibility API
On Thu, Apr 28, 2011 at 12:40 PM, Ian Hickson i...@hixie.ch wrote: Back in December there was a discussion about a tab visibility API. I haven't added this feature to the HTML specification at this time, for a couple of reasons: first, it seems like something we'd really want to have implementation experience before deciding on a specific API, and second, it seems like something that belongs more in the CSSOM API spec than in the primarily media-independent HTML spec. I have saved all the feedback on the topic in case anyone is interested in working on a specification for this. For what it's worth the Web Performance working group has added this to its charter and has started work on this API: http://www.w3.org/2011/04/webperf.html. - James -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
Re: [whatwg] Canvas feedback (various threads)
On Thu, Feb 10, 2011 at 8:39 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 2/10/11 11:31 PM, Ian Hickson wrote: I think you had a typo in your test. As far as I can tell, all WebKit-based browsers act the same as Opera and Firefox 3 on this: http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'transparent'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A On that test, Safari 5.0.3 on Mac outputs red and transparent for the two strings. And this test: http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'orly%2C%20do%20you%20think%20so'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A outputs red and orly, do you think so in the same browser. Does Safari on Mac behave differently from Safari on Windows here? The version of WebKit used by Safari 5.0.3 is rather antiquated at this point. Using the latest WebKit nightly build, or Chrome 10.0.648.45 dev (which has a significantly newer version of WebKit), I get #ff0000 and rgba(0, 0, 0, 0.0) on the first test and #ff0000 / #ff0000 on the second. Presumably at some point Apple will release a new version of Safari that matches the behavior nightlies currently have. - James Which is less interop than it seems (due to Safari's behavior), and about to disappear completely, since both IE9 and Firefox 4 will ship with the 0 instead of 0.0 :( Is there no chance to fix this in Firefox 4? It _is_ a regression. :-) At this point, probably not. If it's not actively breaking websites it's not being changed before final release.
If it is, we'd at least think about it... -Boris
Re: [whatwg] suggestion for HTML5 spec
On Mon, Aug 2, 2010 at 6:43 PM, Dirk Pranke dpra...@chromium.org wrote: On Mon, Aug 2, 2010 at 5:53 PM, Ian Hickson i...@hixie.ch wrote: On Sat, 1 May 2010, rya...@mail.com wrote: My suggestion for the HTML5 spec is that the video tag should have a feature that can enable GPU acceleration on a user's graphics card, so it will take some stress off the CPU. Do you like my suggestion? Why would a user ever want anyone to disable their GPU acceleration? I believe I've heard people say that they might sometimes want this for power management, i.e. performing the same computation on the GPU might take more power than performing it more slowly on the CPU. I imagine this would depend on the specific configuration and computations involved, though. That's a decision for either a user or a user agent, not an author. It should not be toggleable from HTML. - James -- Dirk
Re: [whatwg] Canvas: clarification of compositing operations needed
On Wed, Jul 28, 2010 at 2:46 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Jul 28, 2010 at 2:43 PM, David Flanagan da...@davidflanagan.com wrote: Firefox and Chrome disagree about the implementation of the destination-atop, source-in, destination-in, and source-out compositing operators. Test code is attached. I don't think your attachment made it through. https://developer.mozilla.org/samples/canvas-tutorial/6_1_canvas_composite.html shows some of the differences, although it does not cover all cases. Chrome doesn't touch any destination pixels that are not underneath the source pixels. Firefox, on the other hand, treats the entire canvas (inside the clipping region) as the destination and if you use the destination-in operator, for example, will erase any pixels outside of whatever you are drawing. I suspect, based on the reference to an infinite transparent black bitmap in 4.8.11.1.13 Drawing model that Firefox gets this right and Chrome gets it wrong, but it would be nice to have that confirmed. I suggest clarifying 4.8.11.1.3 Compositing to mention that the compositing operation takes place on all pixels within the clipping region, and that some compositing operators clear large portions of the canvas. The spec is completely clear on this matter - Firefox is right, Chrome/Safari are wrong. They do it wrongly because that's how CoreGraphics, their graphics library, does things natively. The spec is certainly clear but that does not make the behavior it specifies good. I find the spec's behavior pretty bizarre and Microsoft has expressed a preference for the Safari/Chrome interpretation: http://lists.w3.org/Archives/Public/public-canvas-api/2010AprJun/0046.html - although that thread did not get much discussion. For example, I think drawing a 20x20 image into a 500x500 canvas without scaling with a globalCompositeOperation of 'copy' should result in only the 20x20 region being cleared out, not the entire canvas.
In informal discussions I got the impression that most folks would be happy to standardize on something closer to the Safari/Chrome model if it could be specified exactly. In particular, there has to be a precise definition of what region the compositing operation should apply in. - James ~TJ
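The disputed region question can be made concrete on a one-dimensional row of alpha values. Here destinationInWholeSurface follows the spec/Firefox reading (the operator applies to every pixel in the clip) and destinationInSourceRect follows the WebKit/Chrome reading (pixels outside the source rect are untouched); both functions are purely illustrative, not a real canvas backend:

```javascript
// destination-in keeps destination alpha only where the source is
// opaque: result = dest_alpha * src_alpha per pixel.
function destinationInWholeSurface(dest, src) {
  // Spec/Firefox model: every destination pixel participates; pixels
  // with no source coverage (src alpha 0) are cleared.
  return dest.map((d, i) => d * (src[i] ?? 0));
}

function destinationInSourceRect(dest, src, offset) {
  // WebKit/Chrome model: only pixels under the source rect change.
  const out = dest.slice();
  for (let i = 0; i < src.length; i++) {
    out[offset + i] = dest[offset + i] * src[i];
  }
  return out;
}

const dest = [1, 1, 1, 1, 1];    // fully opaque destination row
const srcFull = [0, 0, 1, 1, 0]; // an opaque 2-pixel source drawn at offset 2

const whole = destinationInWholeSurface(dest, srcFull);
// [0, 0, 1, 1, 0]: everything outside the source is erased

const local = destinationInSourceRect(dest, [1, 1], 2);
// [1, 1, 1, 1, 1]: pixels outside the source rect are left alone
```

The gap between `whole` and `local` is exactly the behavioral difference David Flanagan observed between Firefox and Chrome.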
Re: [whatwg] The real issue with HTML5's sectioning model
Is this sort of reply really necessary? I have not been following the surrounding discussion, but this email showed up as a new thread in my mail client. Based on this tone, I now have no desire to catch up on the rest of the discussion. - James On Fri, Apr 30, 2010 at 6:26 PM, Anne van Kesteren ann...@opera.com wrote: On Sat, 01 May 2010 03:57:42 +0900, Eduard Pascual herenva...@gmail.com wrote: XHTML2's approach was clean and simple: section, h, and @role do everything. Period. Bullshit: http://www.w3.org/TR/2006/WD-xhtml2-20060726/mod-structural.html#sec_8.5. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] HTML Cookie API
On Tue, Feb 23, 2010 at 9:21 PM, Adam Barth w...@adambarth.com wrote: On Tue, Feb 23, 2010 at 9:15 PM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Feb 23, 2010 at 8:56 PM, Adam Barth w...@adambarth.com wrote: The document.cookie API is kind of terrible. Web developers shouldn't have to parse a cookie-string or prepare a properly formatted set-cookie-string. Here's a proposal for an HTML cookie API that isn't as terrible: https://docs.google.com/Doc?docid=0AZpchfQ5mBrEZGQ0cDh3YzRfMTRmdHFma21kMghl=en I'd like to propose we include this API in a future version of HTML. As always, feedback welcome. I really think the API should be asynchronous, as to avoid the mess that .localStorage currently is. Done. The array-like object containing the Cookies for the document should be a read-only copy of a set of objects that represent all the applicable cookies at some point between the request and the response. This needs to be really clear and it needs to be clear what happens if a user, say, calls setCookie() in the middle of iterating through the array-like object (imho the iteration should be unaffected). It's probably best to specify the ordering of Cookies in this array-like object to match rfc2965's ordering rules so that users of the API don't have to implement this ordering themselves. Accessing cookies from script is inherently racy - there is no way to promise that the browser will or will not return a cookie being set by some HTTP response arriving at the same time as the getCookies() call. There's nothing really you can do about this but I think that this fact should be highlighted in the spec. If a U-A's privacy settings disallow script from accessing cookies, there should be some clear behavior. It looks like a U-A could make setCookie() a no-op and always invoke the getCookies() callback with an empty list now - should that be specified? - James Adam
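Parsing the cookie-string by hand is exactly the chore the proposed API removes. A minimal sketch of such a parser; it ignores RFC corner cases like quoted values and is only illustrative:

```javascript
// Parse a document.cookie-style string ("a=1; b=2") into a plain
// object. Skips malformed pairs and percent-decodes values.
function parseCookieString(cookieString) {
  const jar = {};
  for (const pair of cookieString.split(";")) {
    const trimmed = pair.trim();
    if (!trimmed) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // no name=value structure: skip it
    const name = trimmed.slice(0, eq).trim();
    const value = trimmed.slice(eq + 1).trim();
    jar[name] = decodeURIComponent(value);
  }
  return jar;
}

const jar = parseCookieString("session=abc123; theme=dark; path=%2Fhome");
// jar.session === "abc123", jar.theme === "dark", jar.path === "/home"
```

Every page that reads document.cookie today carries some variant of this code, which is the motivation for a structured getCookies() API.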
Re: [whatwg] style sheet blocking scripts
2009/12/9 tali garsiel t_gars...@hotmail.com Well, not completely. Regarding the first question- Webkit guys told me (on their IRC channel) that they don't block the parser and only block scripts that request visual information, so I'm still confused. Here's my understanding of the implementation inside WebKit currently: During parsing, WebKit does not block the parser on stylesheet loads, but does block external scripts from running until previously-encountered stylesheets have loaded. WebKit does not suspend script execution on requests for visual information if stylesheets have not loaded (for example for inline scripts or in the case of stylesheets added dynamically after parsing has completed). WebKit does suspend parsing of the document on script loads, but has a speculative preloader to attempt to start fetches for resources past the script tag. - James Date: Wed, 9 Dec 2009 17:01:30 + From: i...@hixie.ch To: t_gars...@hotmail.com; bzbar...@mit.edu CC: wha...@whatwg.org Subject: Re: [whatwg] style sheet blocking scripts On Wed, 28 Oct 2009, tali garsiel wrote: This is a quote from Section 4.2 of the spec: A style sheet in the context of the Document of an HTML parser or XML parser is said to be a style sheet blocking scripts if the element was created by that Document's parser, and the element is either a style element or a link element that was an external resource link that contributes to the styling processing model when the element was created by the parser, and the element's style sheet was enabled when the element was created by the parser, and the element's style sheet ready flag is not yet set, and, the last time the event loop reached step 1, the element was in that Document And the section about parsing - the script tag says that before executing a script the parser must: 3. Spin the event loop until there is no style sheet blocking scripts and the script's ready to be parser-executed flag is set. I have two questions: 1. 
As far as I know, Firefox and Webkit have a stall on demand behavior, where a stylesheet blocks a script only if the script asks for style information. According to the spec the style sheet always blocks a script, am I right? 2. Can you clarify the condition - the element's style sheet was enabled when the element was created by the parser, and the element's style sheet ready flag is not yet set, and, the last time the event loop reached step 1, the element was in that Document Does it mean the style sheet blocks scripts only if it's currently being parsed? On Wed, 28 Oct 2009, Boris Zbarsky wrote: On 10/28/09 2:59 AM, tali garsiel wrote: 1. As far as I know, Firefox and Webkit have a stall on demand behavior, where a stylesheet blocks a script only if the script asks for style information. You know wrong, sorry. Firefox has the behavior the spec describes; webkit blocks the parser completely on stylesheets (the behavior Firefox used to have). Last I checked, at least. 2. Can you clarify the condition - the element's style sheet was enabled when the element was created by the parser, and the element's style sheet ready flag is not yet set, and, the last time the event loop reached step 1, the element was in that Document The parts of that condition basically mean: 1) When the element was created by the parser, it was in the then-enabled stylesheet set (i.e. not an alternate stylesheet). 2) The stylesheet, or one of its @import descendants, is still loading. 3) The stylesheet linking element is still in the document (so the stylesheet still applies). Thanks Boris. Tali, does this answer your question to your satisfaction? -- Ian Hickson U+1047E )\._.,--,'``. fL http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.' 
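Boris's three-part breakdown of the spec condition can be restated as a small predicate. The flag names here are invented for illustration and don't correspond to any real implementation's fields:

```javascript
// Toy restatement of the "style sheet blocking scripts" condition
// discussed above: all four parts must hold for the sheet to block.
function isStyleSheetBlockingScripts(link) {
  return (
    link.createdByParser &&     // element came from this Document's parser
    link.enabledWhenCreated &&  // was not an alternate stylesheet
    !link.readyFlagSet &&       // the sheet (or an @import) is still loading
    link.stillInDocument        // the linking element was still in the
                                // Document at the last event-loop check
  );
}
```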
Re: [whatwg] style sheet blocking scripts
On Wed, Dec 9, 2009 at 2:57 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Dec 9, 2009 at 2:10 PM, James Robinson jam...@google.com wrote: 2009/12/9 tali garsiel t_gars...@hotmail.com Well, not completely. Regarding the first question- Webkit guys told me (on their IRC channel) that they don't block the parser and only block scripts that request visual information, so I'm still confused. Here's my understanding of the implementation inside WebKit currently: During parsing, WebKit does not block the parser on stylesheet loads, but does block external scripts from running until previously-encountered stylesheets have loaded. WebKit does not suspend script execution on requests for visual information if stylesheets have not loaded (for example for inline scripts or in the case of stylesheets added dynamically after parsing has completed). WebKit does suspend parsing of the document on script loads, but has a speculative preloader to attempt to start fetches for resources past the script tag. Why does webkit treat external scripts differently from inline scripts here? I.e. why is an inline script allowed to run even if there are pending stylesheet loads, but external scripts not? That seems inconsistent and confusing. Is this considered a bug or desired behavior? The former: http://trac.webkit.org/browser/trunk/WebCore/html/HTMLTokenizer.cpp#L2017 I'm not sure how much this matters in practice. In theory, this is unobservable to the page unless it queries the loaded stylesheets directly or a property derived from layout, both of which should suspend script execution. - James / Jonas
Re: [whatwg] style sheet blocking scripts
On Wed, Dec 9, 2009 at 3:18 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 12/9/09 3:06 PM, James Robinson wrote: On Wed, Dec 9, 2009 at 2:10 PM, James Robinson jam...@google.com mailto:jam...@google.com wrote: WebKit does not suspend script execution on requests for visual information if stylesheets have not loaded In theory, this is unobservable to the page unless it queries the loaded stylesheets directly or a property derived from layout both of which should suspend script execution. I'm having a hard time reconciling the above two claims. Hence the in theory. If WebKit did suspend script execution on requests for information that pending stylesheets might influence, then theory would match practice. It currently does not (which I believe is contrary to what the spec says). I'm curious if this actually negatively impacts anyone in the wild, as suspending script execution in the middle of a block to wait for a network load is generally not ideal. - James -Boris
Re: [whatwg] style sheet blocking scripts
On Wed, Dec 9, 2009 at 3:19 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Dec 9, 2009 at 3:18 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Dec 9, 2009 at 3:06 PM, James Robinson jam...@google.com wrote: On Wed, Dec 9, 2009 at 2:57 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, Dec 9, 2009 at 2:10 PM, James Robinson jam...@google.com wrote: 2009/12/9 tali garsiel t_gars...@hotmail.com Well, not completely. Regarding the first question- Webkit guys told me (on their IRC channel) that they don't block the parser and only block scripts that request visual information, so I'm still confused. Here's my understanding of the implementation inside WebKit currently: During parsing, WebKit does not block the parser on stylesheet loads, but does block external scripts from running until previously-encountered stylesheets have loaded. WebKit does not suspend script execution on requests for visual information if stylesheets have not loaded (for example for inline scripts or in the case of stylesheets added dynamically after parsing has completed). WebKit does suspend parsing of the document on script loads, but has a speculative preloader to attempt to start fetches for resources past the script tag. Why does webkit treat external scripts differently from inline scripts here? I.e. why is an inline script allowed to run even if there are pending stylesheet loads, but external scripts not? That seems inconsistent and confusing. Is this considered a bug or desired behavior? The former: http://trac.webkit.org/browser/trunk/WebCore/html/HTMLTokenizer.cpp#L2017 I'm not sure how much this matters in practice. In theory, this is unobservable to the page unless it queries the loaded stylesheets directly or a property derived from layout, both of which should suspend script execution. Why is this more in theory for inline scripts than for external scripts? Or rather, why is this more unobservable for inline scripts than for external scripts? 
You're right, there's no real difference in the observability of this behavior for inline vs external scripts. - James / Jonas
Re: [whatwg] LocalStorage in workers
On Wed, Sep 16, 2009 at 10:53 AM, Michael Nordman micha...@google.com wrote: On Wed, Sep 16, 2009 at 9:58 AM, Drew Wilson atwil...@google.com wrote: Jeremy, what's the use case here - do developers want workers to have access to shared local storage with pages? Or do they just want workers to have access to their own non-shared local storage? Because we could just give workers their own separate WorkerLocalStorage and let them have at it. A worker could block all the other accesses to WorkerLocalStorage within that domain, but so be it - it wouldn't affect page access, and we already had that issue with the (now removed?) synchronous SQL API. I think a much better case can be made for WorkerLocalStorage than for give workers access to page LocalStorage, and the design issues are much simpler. Putting workers in their own storage silo doesn't really make much sense? Sure it may be simpler for browser vendors, but does that make life simpler for app developers, or just have them scratching their heads about how to read/write the same data set from either flavor of context in their application? I see no rhyme or reason for the arbitrary barrier except for browser vendors to work around the awkward implicit locks on LocalStorage (the source of much grief). Consider this... would it make sense to cordon off the databases workers vs pages can see? I would think not, and I would hope others agree. The difference is that the database interface is purely asynchronous whereas storage is synchronous. If multiple threads have synchronous access to the same shared resource then there has to be a consistency model. ECMAScript does not provide for one so it has to be done at a higher level. Since there was not a solution in the first versions that shipped, the awkward implicit locks you mention were suggested as a workaround. However it's far from clear that these solve the problem and are implementable. 
It seems like the only logical continuation of this path would be to add explicit, blocking synchronization primitives for developers to deal with - which I think everyone agrees would be a terrible idea. If you're worried about developers scratching their heads about how to pass data between workers just think about happens-before relationships and multi-threaded memory models. In a hypothetical world without synchronous access to LocalStorage/cookies from workers, there is no shared memory between threads except via message passing. This can seem a bit tricky for developers but is very easy to reason about and prove correctness and the absence of deadlocks. - James -atw On Tue, Sep 15, 2009 at 8:27 PM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Sep 15, 2009 at 6:56 PM, Jeremy Orlow jor...@chromium.org wrote: One possible solution is to add an asynchronous callback interface for LocalStorage into workers. For example: function myCallback(localStorage) { localStorage.accountBalance = localStorage.accountBalance + 100; } executeLocalStorageCallback(myCallback); // TODO: Make this name better :-) The interface is simple. You can only access localStorage via a callback. Any use outside of the callback is illegal and would raise an exception. The callback would acquire the storage mutex during execution, but the worker's execution would not block during this time. Of course, it's still possible for a poorly behaving worker to do large amounts of computation in the callback, but hopefully the fact they're executing in a callback makes the developer more aware of the problem. First off, I agree that not having localStorage in workers is a big problem that we need to address. If I were designing the localStorage interface today I would use the above interface that you suggest. Grabbing localStorage can only be done asynchronously, and while you're using it, no one else can get a reference to it. 
This way there are no race conditions, but also no way for anyone to have to lock. So one solution is to do that in parallel to the current localStorage interface. Let's say we introduce a 'clientStorage' object. You can only get a reference to it using a 'getClientStorage' function. This function is available both to workers and windows. The storage is separate from localStorage so no need to worry about the 'storage mutex'. There is of course a risk that a worker grabs on to the clientStorage and holds it indefinitely. This would result in the main window (or another worker) never getting a reference to it. However it doesn't affect responsiveness of that window, it's just that the callback will never happen. While that's not ideal, it seems like a smaller problem than any other solution that I can think of. And the WebDatabase interfaces are suffering from the same problem if I understand things correctly. There's a couple of other interesting things we could expose on top of this: First, a synchronous API
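The callback-gated storage access Jeremy sketches above could behave roughly like this; a plain object stands in for the real backing store, and the queueing simulates the UA holding the storage mutex for the duration of each callback. The StorageGate name and the revocable proxy are inventions of this sketch:

```javascript
// Simulation of executeLocalStorageCallback(): storage is only usable
// inside a callback, callbacks never overlap, and the handle stops
// working once the callback returns, so no reference can escape the
// critical section.
function StorageGate(backing) {
  this._backing = backing;
  this._queue = [];
  this._busy = false;
}

StorageGate.prototype.execute = function (callback) {
  this._queue.push(callback);
  this._drain();
};

StorageGate.prototype._drain = function () {
  if (this._busy || this._queue.length === 0) return;
  var self = this;
  this._busy = true;
  var callback = this._queue.shift();
  setTimeout(function () {
    var revoked = false;
    var proxy = {
      getItem: function (k) {
        if (revoked) throw new Error('storage used outside callback');
        return self._backing[k];
      },
      setItem: function (k, v) {
        if (revoked) throw new Error('storage used outside callback');
        self._backing[k] = v;
      }
    };
    try {
      callback(proxy);
    } finally {
      revoked = true;        // handle is dead once the callback returns
      self._busy = false;
      self._drain();         // run the next queued callback, if any
    }
  }, 0);
};
```

A poorly behaved callback can still hog the store, as Jeremy notes, but it can never deadlock another context that merely queues its own callback.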
Re: [whatwg] LocalStorage in workers
On Wed, Sep 16, 2009 at 11:34 AM, Michael Nordman micha...@google.com wrote: On Wed, Sep 16, 2009 at 11:24 AM, James Robinson jam...@google.com wrote: On Wed, Sep 16, 2009 at 10:53 AM, Michael Nordman micha...@google.com wrote: On Wed, Sep 16, 2009 at 9:58 AM, Drew Wilson atwil...@google.com wrote: Jeremy, what's the use case here - do developers want workers to have access to shared local storage with pages? Or do they just want workers to have access to their own non-shared local storage? Because we could just give workers their own separate WorkerLocalStorage and let them have at it. A worker could block all the other accesses to WorkerLocalStorage within that domain, but so be it - it wouldn't affect page access, and we already had that issue with the (now removed?) synchronous SQL API. I think a much better case can be made for WorkerLocalStorage than for give workers access to page LocalStorage, and the design issues are much simpler. Putting workers in their own storage silo doesn't really make much sense? Sure it may be simpler for browser vendors, but does that make life simpler for app developers, or just have them scratching their heads about how to read/write the same data set from either flavor of context in their application? I see no rhyme or reason for the arbitrary barrier except for browser vendors to work around the awkward implicit locks on LocalStorage (the source of much grief). Consider this... would it make sense to cordon off the databases workers vs pages can see? I would think not, and I would hope others agree. The difference is that the database interface is purely asynchronous whereas storage is synchronous. Sure... we're talking about adding an async api that allows a worker to access a local storage repository... should such a thing exist, why should it not provide access to the same repository as seen by pages? 
Not quite - Jeremy proposed giving workers access to a synchronous API (localStorage.*) but to only allow it to be called within the context of a callback that the UA can run when it chooses. It's another way to approach the implicit locking since a UA would have to, in effect, hold the storage mutex for the duration of the callback. The page's context could still be blocked for an indefinite amount of time by a worker thread. Drew suggested isolating the worker's access to a separate storage 'arena' so that there wouldn't be shared, synchronous access between the page context and a worker context. This way the synchronous Storage API can be used essentially unchanged without having to deal with the more nasty parts of synchronization. - James If multiple threads have synchronous access to the same shared resource then there has to be a consistency model. ECMAScript does not provide for one so it has to be done at a higher level. Since there was not a solution in the first versions that shipped, the awkward implicit locks you mention were suggested as a workaround. However it's far from clear that these solve the problem and are implementable. It seems like the only logical continuation of this path would be to add explicit, blocking synchronization primitives for developers to deal with - which I think everyone agrees would be a terrible idea. If you're worried about developers scratching their heads about how to pass data between workers just think about happens-before relationships and multi-threaded memory models. In a hypothetical world without synchronous access to LocalStorage/cookies from workers, there is no shared memory between threads except via message passing. This can seem a bit tricky for developers but is very easy to reason about and prove correctness and the absence of deadlocks. 
- James -atw On Tue, Sep 15, 2009 at 8:27 PM, Jonas Sicking jo...@sicking.cc wrote: On Tue, Sep 15, 2009 at 6:56 PM, Jeremy Orlow jor...@chromium.org wrote: One possible solution is to add an asynchronous callback interface for LocalStorage into workers. For example: function myCallback(localStorage) { localStorage.accountBalance = localStorage.accountBalance + 100; } executeLocalStorageCallback(myCallback); // TODO: Make this name better :-) The interface is simple. You can only access localStorage via a callback. Any use outside of the callback is illegal and would raise an exception. The callback would acquire the storage mutex during execution, but the worker's execution would not block during this time. Of course, it's still possible for a poorly behaving worker to do large amounts of computation in the callback, but hopefully the fact they're executing in a callback makes the developer more aware of the problem. First off, I agree that not having localStorage in workers is a big problem that we need to address. If I were designing the localStorage interface today I would use the above interface that you suggest. Grabbing localStorage
Re: [whatwg] Application defined locks
On Thu, Sep 10, 2009 at 1:55 PM, Darin Fisher da...@chromium.org wrote: On Thu, Sep 10, 2009 at 1:08 PM, Oliver Hunt oli...@apple.com wrote: On Sep 10, 2009, at 12:55 PM, Darin Fisher wrote: On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote: On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote: On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman micha...@google.com wrote: If this feature existed, we likely would have used it for offline Gmail to coordinate which instance of the app (page with gmail in it) should be responsible for sync'ing the local database with the mail service. In the absence of a feature like this, instead we used the local database itself to register which page was the 'syncagent'. This involved periodically updating the db by the syncagent, and periodic polling by the would be syncagents waiting to possibly take over. Much ugliness. var isSyncAgent = false; window.acquireFlag(syncAgency, function() { isSyncAgent = true; }); Much nicer. How do you deal with the user closing the syncagent while other app instances remain open? In our db polling world... that was why the syncagent periodically updated the db... to say still alive... on close it would say i'm gone and on ugly exit, the others would notice the lack of still alives and fight about who was it next. A silly bunch of complexity for something so simple. In the acquireFlag world... wouldn't the page going away simply relinquish the flag? How would the pages that failed to acquire it before know that they should try to acquire it again? Presumably they would still have to poll (assuming the tryLock model). Regards, Maciej In my proposed interace, you can wait asynchronously for the lock. Just call acquireLock with a second parameter, a closure that runs once you get the lock. What if you don't want to wait asynchronously? 
My reading of this is that you need two copies of the code, one that works synchronously, but you still need to keep the asynchronous model to deal with an inability to synchronously acquire the lock. What am I missing? Sounds like a problem that can be solved with a function. The reason for the trylock support is to allow a programmer to easily do nothing if they can't acquire the lock. If you want to wait if you can't acquire the lock, then using the second form of acquireLock, which takes a function, is a good solution. I don't think there is much value in the first form of acquireLock() - only the second form really makes sense. I also strongly feel that giving web developers access to locking mechanisms is a bad idea - it hasn't been a spectacular success in any other language. I think the useful semantics are equivalent to the following (being careful to avoid mentioning 'locks' or 'mutexes' explicitly): A script passes in a callback and a token. The UA invokes the callback at some point in the future and provides the guarantee that no other callback with that token will be invoked in any context within the origin until the invoked callback returns. Here's what I mean with an intentionally horrible name: window.runMeExclusively(callback, arbitrary string token); An application developer could then put all of their logic that touches a particular shared resource behind a token. It's also deadlock free so long as each callback terminates. Would this be sufficient? If so it is almost possible to implement it correctly in a JavaScript library using a shared worker per origin and postMessage, except that it is not currently possible to detect when a context goes away. - James -Darin
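A single-context toy of the runMeExclusively(callback, token) semantics described above might look like this; callbacks sharing a token never overlap, and the scheme is deadlock-free as long as each callback returns. A real version would need UA support (or a shared worker per origin) to span pages, and the queue-per-token machinery here is invented for illustration:

```javascript
// One FIFO queue per token; the head of a queue "owns" the token until
// its callback returns, at which point the next waiter is scheduled.
var tokenQueues = new Map();

function runMeExclusively(callback, token) {
  var queue = tokenQueues.get(token);
  if (!queue) {
    queue = [];
    tokenQueues.set(token, queue);
  }
  queue.push(callback);
  if (queue.length === 1) drainToken(token); // nobody holds it; start now
}

function drainToken(token) {
  var queue = tokenQueues.get(token);
  setTimeout(function () {
    try {
      queue[0](); // runs with exclusive "ownership" of the token
    } finally {
      queue.shift(); // release, even if the callback threw
      if (queue.length > 0) drainToken(token);
    }
  }, 0);
}
```

Note there is no way for a callback to hold the token across a turn of the event loop, which is exactly what makes the guarantee deadlock-free.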
Re: [whatwg] Application defined locks
On Thu, Sep 10, 2009 at 6:11 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Sep 11, 2009 at 9:28 AM, Darin Fisher da...@chromium.org wrote: On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher da...@chromium.org wrote: I think there are good applications for setting a long-lived lock. We can try to make it hard for people to create those locks, but then the end result will be suboptimal. They'll still find a way to build them. One use case is selecting a master instance of an app. I haven't really been following the global script thread, but doesn't that address this use case in a more direct way? No it doesn't. The global script would only be reachable by related browsing contexts (similar to how window.open w/ a name works). In a multi-process browser, you don't want to _require_ script bindings to span processes. That's why I mentioned shared workers. Because they are isolated and communication is via string passing, it is possible for processes in unrelated browsing contexts to communicate with the same shared workers. What other use-cases for long-lived locks are there? This is a good question. Most of the use cases I can imagine boil down to a master/slave division of labor. For example, if I write an app that does some batch asynchronous processing (many setTimeout calls worth), then I can imagine setting a flag across the entire job, so that other instances of my app know not to start another such overlapping job until I'm finished. In this example, I'm supposing that storage is modified at each step such that guaranteeing storage consistency within the scope of script evaluation is not enough. What if instead of adding locking, we added a master election mechanism? I haven't thought it out super well, but it could be something like this: You'd call some function like |window.electMaster(name, newMasterCallback, messageHandler)|. 
The name would allow multiple groups of master/slaves to exist. The newMasterCallback would be called any time that the master changes. It would be passed a message port if we're a slave or null if we're the master. messageHandler would be called for any messages. When we're the master, it'll be passed a message port of the slave so that responses can be sent if desired. In the gmail example: when all the windows start up, they call window.electMaster. If they're given a message port, then they'll send all messages to that master. The master would handle the request and possibly send a response. If a window is closed, then the UA will pick one of the slaves to become the master. The master would handle all the state and the slaves would be lighter weight. -- There are a couple open questions for something like this. First of all, we might want to let windows provide a hint that they'd be a bad master. For example, if they expected to be closed fairly soon. (In the gmail example, a compose mail window.) We might also want to consider allowing windows to opt out of masterhood with something like |window.yieldMasterhood()|. This would allow people to build locks upon this interface which could be good and bad. Next, we could consider adding a mechanism for the master to pickle up some amount of state and pass it on to another master. For example, maybe the |window.yieldMasterhood()| function could take a single state param that would be passed into the master via the newMasterCallback function. Lastly and most importantly, we need to decide if we think shared workers are the way all of this should be done. If so, it seems like none of this complexity is necessary. That said, until shared workers are first class citizens in terms of what APIs they can access (cookies, LocalStorage, etc), I don't think shared workers are practical for many developers and use cases. 
What about eliminating shared memory (only one context would be allowed access to cookies, localStorage, etc)? It seems to be working out fine for DOM access and is much, much easier to reason about. - James
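Jeremy's electMaster(name, ...) idea could be simulated in a single process roughly as follows. Windows are plain objects here and "message ports" are direct function references; the groups registry, promote(), and closeWindow() are all inventions of this sketch, standing in for what the UA would do across real windows:

```javascript
// One master/slave group per name. The UA would track real windows;
// here a "window" is any object with newMasterCallback and messageHandler.
var groups = new Map();

function electMaster(name, win) {
  var group = groups.get(name);
  if (!group) {
    group = { master: null, members: [] };
    groups.set(name, group);
  }
  group.members.push(win);
  if (!group.master) {
    promote(group, win); // first window in becomes the master
  } else {
    // Slaves get the master's "port" to send requests through.
    win.newMasterCallback(group.master.messageHandler);
  }
}

function promote(group, win) {
  group.master = win;
  win.newMasterCallback(null); // null port means "you are the master"
}

// Stands in for the UA noticing a window closing: if the master goes
// away, one of the surviving slaves is promoted.
function closeWindow(name, win) {
  var group = groups.get(name);
  group.members = group.members.filter(function (w) { return w !== win; });
  if (group.master === win) {
    group.master = null;
    if (group.members.length > 0) promote(group, group.members[0]);
  }
}
```

This captures the appeal of the proposal: no polling and no lock objects, just a callback when mastership changes hands.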
Re: [whatwg] Application defined locks
On Thu, Sep 10, 2009 at 7:59 PM, Darin Fisher da...@chromium.org wrote: On Thu, Sep 10, 2009 at 6:35 PM, James Robinson jam...@google.com wrote: On Thu, Sep 10, 2009 at 6:11 PM, Jeremy Orlow jor...@chromium.org wrote: On Fri, Sep 11, 2009 at 9:28 AM, Darin Fisher da...@chromium.org wrote: On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher da...@chromium.org wrote: I think there are good applications for setting a long-lived lock. We can try to make it hard for people to create those locks, but then the end result will be suboptimal. They'll still find a way to build them. One use case is selecting a master instance of an app. I haven't really been following the global script thread, but doesn't that address this use case in a more direct way? No it doesn't. The global script would only be reachable by related browsing contexts (similar to how window.open w/ a name works). In a multi-process browser, you don't want to _require_ script bindings to span processes. That's why I mentioned shared workers. Because they are isolated and communication is via string passing, it is possible for processes in unrelated browsing contexts to communicate with the same shared workers. What other use-cases for long-lived locks are there? This is a good question. Most of the use cases I can imagine boil down to a master/slave division of labor. For example, if I write an app that does some batch asynchronous processing (many setTimeout calls worth), then I can imagine setting a flag across the entire job, so that other instances of my app know not to start another such overlapping job until I'm finished. In this example, I'm supposing that storage is modified at each step such that guaranteeing storage consistency within the scope of script evaluation is not enough. What if instead of adding locking, we added a master election mechanism? 
I haven't thought it out super well, but it could be something like this: You'd call some function like |window.electMaster(name, newMasterCallback, messageHandler)|. The name would allow multiple groups of master/slaves to exist. The newMasterCallback would be called any time that the master changes. It would be passed a message port if we're a slave or null if we're the master. messageHandler would be called for any messages. When we're the master, it'll be passed a message port of the slave so that responses can be sent if desired. In the gmail example: when all the windows start up, they call window.electMaster. If they're given a message port, then they'll send all messages to that master. The master would handle the request and possibly send a response. If a window is closed, then the UA will pick one of the slaves to become the master. The master would handle all the state and the slaves would be lighter weight. -- There are a couple open questions for something like this. First of all, we might want to let windows provide a hint that they'd be a bad master. For example, if they expected to be closed fairly soon. (In the gmail example, a compose mail window.) We might also want to consider allowing windows to opt out of masterhood with something like |window.yieldMasterhood()|. This would allow people to build locks upon this interface which could be good and bad. Next, we could consider adding a mechanism for the master to pickle up some amount of state and pass it on to another master. For example, maybe the |window.yieldMasterhood()| function could take a single state param that would be passed into the master via the newMasterCallback function. Lastly and most importantly, we need to decide if we think shared workers are the way all of this should be done. If so, it seems like none of this complexity is necessary. 
That said, until shared workers are first class citizens in terms of what APIs they can access (cookies, LocalStorage, etc), I don't think shared workers are practical for many developers and use cases. What about eliminating shared memory (only one context would be allowed access to cookies, localStorage, etc)? It seems to be working out fine for DOM access and is much, much easier to reason about. - James It is a good idea. If we were to start fresh, it'd probably be the ideal answer. We could say that each SharedWorker gets its own slice of persistent storage independent from the rest. But this ship has sailed for cookies at least; document.cookies is problematic, but considering the many other issues with this API it's probably not going to be the end of the world to have it be a touch pricklier, and database and localStorage are already shipping in UAs. Is it really too late for DB and localStorage? I'm still trying to get used to the standards process used here but I thought the idea with UAs implementing draft specs is that the feedback
[whatwg] Issues with Web Sockets API
Hello, I'm very excited about the concept of web sockets and look forward to building apps with it but the web sockets API at http://dev.w3.org/html5/websockets/ has some issues. Many issues seem to be inherited from the original XMLHttpRequest specification, which was extremely useful but not a very good spec. I'm sure I'm not the only one who has spent far too many hours dealing with underspecified or poorly implemented XHR flavors and would love to avoid doing that in the future. I know several vendors have started work on implementations already but I hope that this feedback is still useful. 0) postMessage() looks as if it is intended to mimic MessagePort.postMessage(), but the arguments and error conditions are different. While it would be conceptually nice to treat a web socket in the same way as a message port, it's not possible to treat the two postMessage() functions in the same way. I'd recommend the WebSocket version be renamed to something like send() to avoid confusion and false expectations. There's similar oddness with receiving events that satisfy the MessageEvent interface - since all fields except 'data' will necessarily be invalid I don't see the value in receiving something more complex. 1) The 'readyState' attribute can never actually be used by an application and is redundant. Initially, the 'readyState' attribute is set to CONNECTING, but while the object is in this state the user is not permitted to interact with the WebSocket in any way. The only useful thing that a user could do is set event handlers and wait for the 'open' event to fire. When the WebSocket becomes connected, the readyState becomes 1 and the 'open' event is fired. Once the WebSocket is open, the spec states that whenever the connection is closed the readyState changes to CLOSED and a 'close' event is enqueued. 
However, users can't usefully check the readyState to see if the WebSocket is still open, because there are not and cannot be any synchronization guarantees about when the WebSocket may close. A user will have to wrap all calls to postMessage() (or send(), if the function is renamed) in a try/catch block in order to handle INVALID_STATE_ERRs. Once the 'close' event has been received, the readyState attribute is useless since the state of the WebSocket is known and can never change. I think 'readyState' should just go away, since an application will have to keep track of state updates through the fired events and use try/catch blocks around all API calls anyway. - James
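The point about readyState being inherently racy can be made concrete with a small sketch. `safeSend` is a made-up helper name, and the example assumes the send-side API throws INVALID_STATE_ERR on a closed connection, as the draft describes:

```javascript
// Illustrative only: checking socket.readyState before sending would be
// racy, because the connection can close between the check and the call.
// The try/catch is what actually protects the caller.
function safeSend(socket, data) {
  try {
    socket.send(data); // the draft spec calls this postMessage()
    return true;
  } catch (e) {
    // INVALID_STATE_ERR: the connection was already closed
    return false;
  }
}
```

Since every call needs this wrapper anyway, the readyState attribute adds nothing the application couldn't already derive from the 'open' and 'close' events.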
Re: [whatwg] Issues with Web Sockets API
, it generates a close event which marks the WebSocket as closed. It means that you could have a situation where you post messages to a WebSocket which aren't received by the server because the connection is closed, but that's true regardless due to the asynchronous nature of the networking protocol. -atw On Fri, Jun 26, 2009 at 9:52 AM, Darin Fisher da...@chromium.orgwrote: On Fri, Jun 26, 2009 at 9:46 AM, Drew Wilson atwil...@google.comwrote: On Fri, Jun 26, 2009 at 9:18 AM, James Robinson jam...@google.comwrote: However, users can't usefully check the readyState to see if the WebSocket is still open because there are not and cannot be any synchronization guarantees about when the WebSocket may close. Is this true? Based on our prior discussion surrounding cookies, it seems like as a general rule we try to keep state from changing dynamically while JS code is executing for exactly these reasons. I think this is a very different beast. The state of a network connection may change asynchronously whether we like it or not. Unlike controlling who may access cookies or local storage, the state of the network connection is not something we solely control. -Darin -- If you received this communication by mistake, you are entitled to one free ice cream cone on me. Simply print out this email including all relevant SMTP headers and present them at my desk to claim your creamy treat. We'll have a laugh at my emailing incompetence, and play a game of ping pong. (offer may not be valid in all States).
Re: [whatwg] Issues with Web Sockets API
On Fri, Jun 26, 2009 at 5:01 PM, Drew Wilson atwil...@google.com wrote: On Fri, Jun 26, 2009 at 1:14 PM, Kelly Norton knor...@google.com wrote: One thing about postMessage that I'm curious about. Since it has to report failure synchronously by throwing an INVALID_STATE_ERR, that seems to imply that all data must be written to a socket before returning and cannot be asynchronously delivered to an I/O thread without adding some risk of silently dropping messages. I don't think that's the intent of the spec - the intent is that INVALID_STATE_ERR is thrown if the port is in a closed state, not if there's an I/O error after send. But Michael's right, I don't think there's any way to determine that the server received the message - I guess the intent is that applications will build their own send/ack protocol on top of postMessage(), as you note. -atw The concept of a port being in a closed state is not very well defined - if the state means only the readyState status, then when can the state legally be updated? If it has some meaning closer to the state of the underlying connection, then it can't be queried synchronously without very expensive syncing to the I/O thread or process. Forcing applications to build their own send/ack functionality would be pretty tragic considering that WebSockets are built on top of TCP. - James
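For readers wondering what "build their own send/ack protocol" would look like in practice, here is a rough sketch. The message ids, the JSON framing, and the `{ack: id}` reply shape are all invented for illustration; the only point from the thread it encodes is that TCP guarantees ordered delivery to the peer, but not that the server application processed the message, so an explicit application-level ack is needed:

```javascript
// Hypothetical application-level send/ack layer over a WebSocket-like
// object (anything with a send(string) method). Each outgoing message
// gets an id and stays in 'pending' until the server acknowledges it.
function makeAckSender(socket) {
  let nextId = 0;
  const pending = new Map(); // id -> payload awaiting acknowledgment
  return {
    send(payload) {
      const id = nextId++;
      pending.set(id, payload);
      socket.send(JSON.stringify({ id, payload }));
      return id;
    },
    onAck(id) { pending.delete(id); },       // call when {ack: id} arrives
    unacked() { return [...pending.keys()]; } // ids with no ack yet
  };
}
```

Anything left in `unacked()` when the 'close' event fires is a message the application cannot prove was processed, which is exactly the bookkeeping James argues the protocol layer ought to have spared applications.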