Re: [whatwg] framesets

2009-10-17 Thread Ian Hickson
On Tue, 13 Oct 2009, Peter Brawley wrote:
  
  Your requirements aren't met by framesets
 
 Eh? Our solution meets the requirement and uses framesets.

As others have discussed, you explained that you wanted framesets because 
they prevented bookmarking, but they don't.


  iframes have been demonstrated to work as well as framesets

 No-one's been able to point to a working non-frameset solution that 
 meets this requirement.

I provided a sample myself:

   http://damowmow.com/playground/demos/framesets-with-iframes/001.html


 , and, well, framesets suck.
 
 Unargued, subjective.

Agreed. Here's a summary of the problems with framesets:

 * Poor accessibility. Frames are inherently a visual concept, and do not 
   map well to screen readers, speech browsers, or braille browsers.

 * Poor bookmarking story, which is difficult to work around even with
   pushState(). Users cannot reconstruct a frameset that they wish to 
   bookmark. Scripted support for this requires extensive server-side 
   effort to get right.

 * Poor searchability. Search engines cannot reconstruct the frameset that 
   represents the data that they have found. Working around this with one 
   frameset page per frame combination is a maintenance nightmare.

 * Poor printing story. There's really no good way to print a frameset 
   page. Browsers have tried various techniques, but they all 
   fundamentally lead to a poor experience.

 * Poor usability. Multiple scrollbars can lead to users being unclear as 
   to what to do when looking for content.

 * Slowness and high latency. Multiple HTML files means the total time 
   from going to a page to the page being loaded is automatically forced 
   to be greater than with ordinary pages, since there's more files to 
   fetch and thus a higher minimum round-trip time.

(Not all these problems are fixed by iframes, and I wouldn't really 
recommend using iframes instead of framesets in general.)


 I agree that there's lots of legacy content using framesets; that's why 
 HTML5 defines how they should work (in more detail than any previous 
 spec!).
 
 ?! According to 
 http://www.html5code.com/index.php/html-5-tags/html-5-frameset-tag/, 
 "The frameset tag is not supported in HTML 5."

It's not allowed in HTML5 documents, but HTML5 does define how it works 
for the purpose of handling legacy (HTML4) documents.


 The only thing that is easier with framesets than otherwise appears to 
 be resizing, which I agree is not well-handled currently.
 
 Unsubstantiated claim absent a working example of the spec implemented 
 without framesets.

I provided a sample showing frameset (without resizing) here:

   http://damowmow.com/playground/demos/framesets-with-iframes/001.html


 As noted before, though, that's an issue for more than just frames; we 
 need a good solution for this in general, whether we have framesets or 
 not. Furthermore, that's a styling issue, not an HTML issue.
 
 For those who have to write or generate HTML, that's a distinction 
 without a difference.

Possibly, but it's an important difference for spec design. :-)


On Wed, 14 Oct 2009, Peter Brawley wrote:
 
 Of course the frameset /by itself/ doesn't satisfy that requirement. It 
 permits us to meet the requirement with little code.

How does it help you do it better than iframes?


 It's a nice, partial demo---side-by-side scrolling & no node bookmarking. But
 no borders

I've added borders for you.


 no resizing,

Granted. This needs fixing in general, though, not just for frames.


 no horizontal scrolling within frames,

I've added a wide page to demonstrate that this is also supported.


 it requires a separate html page for each node, &c.

It requires exactly as many HTML pages as framesets.


 If that blocks a use case, by all means don't use a frameset for it. For 
 this use, the above poses no problem at all. And if CSS were actually as 
 efficient for this spec as framesets, surely some developers would have 
 taken advantage of that by now.

If that mindset was used for everything, we'd never invent anything. :-)


On Wed, 14 Oct 2009, Peter Brawley wrote:
 
 The full use case is treeview database maintenance. Tree logic has been 
 slow to mature in SQL, is non-trivial in HTML as we see, and is hard to 
 generate from PHP/Ruby/whatever.

I agree that we need to address the treeview control use case. However, 
with all due respect, there are many better ways to provide a tree view 
than framesets, even without an explicit tree-view control. An AJAX 
application can have a much better interface. Yes, it requires more 
scripting, but the cost to you translates into value for your customers.


 Correct, but excluding frameset from HTML5 increases the likelihood that 
 browsers will drop support for the feature.

This is incorrect. HTML5 requires that browsers support framesets.


On Wed, 14 Oct 2009, Aryeh Gregor wrote:

 The *only* effect on you if you use frames is that your pages will not 
 validate as HTML5.  

Re: [whatwg] object behavior

2009-10-17 Thread Ben Laurie
On Fri, Oct 16, 2009 at 9:55 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/16/09 8:21 PM, Ben Laurie wrote:

 The point is that if I think I'm sourcing something safe but it can be
 overridden by the MIME type, then I have a problem.

 Perhaps we need an attribute on object that says to only render the data
 if the server provided type and @type match?  That way you can address your
 use case by setting that attribute and we don't enable attacks on random
 servers by allowing @type to override the server-provided type?

That would work.


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-17 Thread Robert O'Callahan
On Sat, Oct 17, 2009 at 5:37 PM, Oliver Hunt oli...@apple.com wrote:


 On Oct 16, 2009, at 8:10 PM, Robert O'Callahan wrote:

 I think there is a reasonable argument that the spec should be changed so
 that compositing happens only within the shape. (In cairo terminology, all
 operators should be bounded.) Perhaps that's what Safari and Chrome
 developers want.


 This is the behaviour of the original canvas implementation (and it makes a
 degree of sense -- it is possible to fake composition implying an infinite
 0-alpha surrounding when the default composite operator does not do this,
 but vice versa is not possible).


Can't you just clip to the shape?

That said I suspect we are unable to do anything about this anymore :-/


If Safari and Chrome currently do bound composition to the shape, and always
have, then surely it's not too late to spec that?

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] a onlyreplace

2009-10-17 Thread Gregory Maxwell
On Fri, Oct 16, 2009 at 4:43 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
[snip]
 Isn't it inefficient to request the whole page and then throw most of
 it out?  With proper AJAX you can just request the bits you want.
 ==
 This is a valid complaint, but one which I don't think is much of a
 problem for several reasons.
[snip]
 3. Because this is a declarative mechanism (specifying WHAT you want,
 not HOW to get it), it has great potential for transparent
 optimizations behind the scenes.
[snip]

Yes: an HTTP request header that gives the only-replace IDs requested,
and the server is free to pare the page down to that (or not). I hit
reply to point out this possibility but then saw you already basically
thought of it, but a query parameter is not a good method: it'll
break bookmarking ... and preserving bookmarking is one of the most
attractive aspects of this proposal.
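Gregory's header idea can be sketched concretely. Everything below is hypothetical (no such header was ever specified): the header name "X-Only-Replace", the whitespace-separated ID list, and the paring logic are invented for illustration. A server receiving such a header could pare its response down to just the requested fragments, e.g. in Python with only the stdlib parser:

```python
# Hypothetical sketch of the server-side optimization described above:
# the client sends something like "X-Only-Replace: main sidebar" (the
# header name is invented here) and the server returns only the elements
# with those ids. Nothing below is specced anywhere.
from html.parser import HTMLParser

VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "param", "source", "track", "wbr"}

class FragmentExtractor(HTMLParser):
    """Collects the raw markup of every element whose id is in `wanted`."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = set(wanted)
        self.depth = 0          # > 0 while inside a wanted element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        entering = not self.depth and dict(attrs).get("id") in self.wanted
        if self.depth or entering:
            self.chunks.append(self.get_starttag_text())
            if tag not in VOID:     # void elements never get an end tag
                self.depth += 1

    def handle_startendtag(self, tag, attrs):
        if self.depth:              # self-closed tags don't change depth
            self.chunks.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.depth and tag not in VOID:
            self.chunks.append("</%s>" % tag)
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

def pare(html, only_replace_header):
    """Return just the fragments named by the (hypothetical) request header."""
    extractor = FragmentExtractor(only_replace_header.split())
    extractor.feed(html)
    extractor.close()
    return "".join(extractor.chunks)

page = ('<html><body><div id="nav">menu</div>'
        '<div id="main"><p>new content</p></div></body></html>')
print(pare(page, "main"))   # → <div id="main"><p>new content</p></div>
```

A real server would fall back to sending the whole page when the header is absent, which is why the server must remain "free to pare the page down to that (or not)".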

[snip]
 What about document.write()?  What if the important fragment of the
 page is produced by document.write()?
 
 Then you're screwed.  document.write()s contained in script blocks
 inside the target fragment will run when they get inserted into the
 page, but document.write()s outside of that won't.  Producing the
 target fragment with document.write() is a no-go from the start.
 Don't do that anyway; it's a bad idea.

I'm guessing that the rare case where you need to write into a
replaced ID you can simply have a JS hook that fires on the load and
fixes up the replaced sections as needed.


Re: [whatwg] The new content model for details breaks rendering in MSIE5-7

2009-10-17 Thread Ian Hickson
On Wed, 14 Oct 2009, Michael(tm) Smith wrote:
 Ian Hickson i...@hixie.ch, 2009-10-14 03:41 +:
 
  As far as I can see the options are as follows:
  
   1. Drop support for details and figure for now, revisit it later.
  
   2. Use legend, and don't expect to be able to use it in any browsers 
  sanely for a few years.
  
   3. Use dt/dd, and don't expect to be able to use it in old versions 
  of IE without rather complicated and elaborate hacks for a few years.
  
   4. Invent a new element with a weird name (since all the good names are 
  taken already), and don't expect to be able to use it in IE without 
  hacks for a few years.
  
  I am not convinced of the wisdom of #4. I prefer #2 long term, but I see 
  the argument for #3.
 
 In terms of the Priority of Constituencies principle, it'd seem to me 
 that between #2 and #3, #2 will -- in the long term -- ultimately have 
 lower costs and difficulties for authors, though higher costs and 
 difficulties for implementors in the short term.
 
 I would think a big red flag ought to go up for any proposed solution 
 that we know will lead to introducing complicated and elaborate hacks 
 into new content. For one thing, we know from experience that due to 
 cargo-cult copy-and-paste authoring, such hacks have a tendency to live 
 on in content for years after the need for them in widely used UAs has 
 disappeared.

We tried legend, and the complaints were more concrete than those for 
dt/dd. Unless there's a really compelling reason, I don't really want 
to flip-flop back to the legend idea.


On Wed, 14 Oct 2009, Dean Edwards wrote:
 On 14/10/2009 04:41, Ian Hickson wrote:
  On Tue, 29 Sep 2009, Dean Edwards wrote:
   
   It's going to take a while for IE7 to go away. In the meantime it 
   becomes an education issue -- You can start using HTML5 except 
   details which will look OK in some browsers but completely break 
   others.
  
  ...and except for canvas which will be slow or not work in IE for the 
  foreseeable future, and the drag and drop model's draggable attribute 
  which will only work in new browsers, or the new controls which will 
  look like text fields in legacy UAs, or... how is details different?
 
 The other things you mentioned don't work but don't break anything. 
 Using details can potentially break entire pages.

style scoped can break entire pages. Using any of the new APIs can too. 
Using draggable can make something work in one browser but do something 
quite different in another, potentially with poor behaviour.

There are workarounds for all these issues, some are more painful or risky 
than others. I think it's not significantly more difficult to educate 
authors about details/dt hacks than about the other hacks.


   Can't we just invent some new elements? We've already created 20 new 
   ones. Two more won't hurt. :)
  
  We have more than a dozen elements whose names would be appropriate 
  here. Inventing entirely new elements to do the same thing again just 
  to work around a very short-term problem is just silly.
 
 I don't think it is silly to prevent severe rendering problems in 30% of 
 installed browsers.

Consider what this argument would sound like 30 years from now.


On Wed, 14 Oct 2009, Remco wrote:
 
 So what you'd expect is that #3 would take about 4 years to completely 
 fix itself, and #2 would take about 5 years. With such a small 
 difference, I'd just choose the best option in the long term.

I think the numbers are more like 10 years for #2 and 5 years for #3.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Transparent Content

2009-10-17 Thread Ian Hickson
On Wed, 14 Oct 2009, Yuvalik Webdesign wrote:

 I'll give it one more go. ;-)
 
 Perhaps you could leave the existing sentence, but add:
 
 In short; a transparent element must have the same content model as its 
 parent.
 
 Or something to that effect?

On Wed, 14 Oct 2009, Tab Atkins Jr. wrote:
 
 That's still not accurate, though.  ^_^ I mean, it's *correct*, but it's 
 not a summarization of the existing sentence (which is implied by in 
 short).  Ian pointed out how a transparent element can have children 
 that would match the content model of the parent, but that wouldn't be 
 correct if simply inserted into the parent (the example with unique).

On Wed, 14 Oct 2009, Yuvalik Webdesign wrote:
 
 Hmm, yes.  Oh well. I give up.

You see why I ended up with the complicated text that's in the spec now. :-)


 It's not that important anyway. And with the added example I am sure it 
 will be ok.

Ok.

Cheers,


Re: [whatwg] object behavior

2009-10-17 Thread Michael A. Puls II

On Fri, 16 Oct 2009 05:28:46 -0400, Ian Hickson i...@hixie.ch wrote:

On Sun, 20 Sep 2009, Michael A. Puls II wrote:


O.K., so put simply, HTML5 should explicitly mention that the css
display property for object, embed (and applet in the handling
section) has absolutely no effect on plug-in instantiation and
destroying and has absolutely no effect on @src and @data resource
fetching.

HTML5 could also be extra clear by example that display: none doesn't
destroy, or prevent the creation of, the plug-in instance and that
changing the display value doesn't destroy the instance.

Lastly, HTML5 could briefly mention that what the plug-in does when its
window/area is not displayed because of display: none, is plug-in and
plug-in API dependent.


I've added a note to this effect.


Thanks

I see the note in the object element section, but don't see it in the  
embed element section and the applet element section.


--
Michael


Re: [whatwg] a onlyreplace

2009-10-17 Thread Nelson Menezes
2009/10/17 Jonas Sicking jo...@sicking.cc:
 On Fri, Oct 16, 2009 at 11:06 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Promoting this reply to top-level because I think it's crazy good.

 On Fri, Oct 16, 2009 at 11:09 AM, Aryeh Gregor simetrical+...@gmail.com 
 wrote:
 On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 As well, this still doesn't answer the question of what to do with
 script links between the static content and the original page, like
 event listeners placed on content within the static.  Do they get
 preserved?  How would that work?  If they don't, then some of the
 benefit of 'static' content is lost, since it will be inoperable for a
 moment after each pageload while the JS reinitializes.

 Script links should be preserved somehow, ideally.  I would like to
 see this be along the lines of AJAX reload of some page content,
 without JavaScript and with automatically working URLs.
 [snip]
 I'm drawn back to my original proposal.  The idea would be as follows:
 instead of loading the new page in place of the new one, just parse
 it, extract the bit you want, plug that into the existing DOM, and
 throw away the rest.  More specifically, suppose we mark the dynamic
 content instead of the static.

 Let's say we add a new attribute to a, like <a onlyreplace="foo">,
 where foo is the id of an element on the page.  Or better, a
 space-separated list of elements.  When the user clicks such a link,
 the browser should do something like this: change the URL in the
 navigation bar to the indicated URL, and retrieve the indicated
 resource and begin to parse it.  Every time an element is encountered
 that has an id in the onlyreplace list, if there is an element on the
 current page with that id, remove the existing element and then add
 the element from the new page.  I guess this should be done in the
 usual fashion, first appending the element itself and then its
 children recursively, leaf-first.

 This. Is. BRILLIANT.

 [snip]

 Thoughts?

 We actually have a similar technology in XUL called overlays [1],
 though we use that for a wholly different purpose.

 Anyhow, this is certainly an interesting suggestion. You can actually
 mostly implement it using the primitives in HTML5 already. By using
 pushState and XMLHttpRequest you can download the page and change the
 current page's URI, and then use the DOM to replace the needed parts.
 The only thing that you can't do is stream in the new content since
 mutations aren't dispatched during parsing.

 For some reason I'm still a bit uneasy about this feature. It feels a
 bit fragile for some reason. One thing I can think of is what happens
 if the load stalls or fails halfway through the load. Then you could
 end up with a page that contains half of the old page and half the
 new. Also, what should happen if the user presses the 'back' button?
 Don't know how big of a problem these issues are, and they are quite
 possibly fixable. I'm definitely curious to hear what developers that
 would actually use this think of the idea.

I have spent most of last night trying to figure out what's wrong with
this proposal. I can't think of anything important except for the back
button behaviour. The suggestions I had have already been mentioned:
the base tag extension and the marker HTTP headers. You'd obviously
also need a JS hook to be able to invoke this functionality
programmatically (location.onlyreplace...?)

Another plus point for this idea is that it is implementable on
existing browsers with some JS (I'm trying something simple at the
moment and it works, albeit only for XML documents).

As for the back button, there are a few possibilities:

- Reload the full page
- Load & process the document using the same onlyreplace behaviour
as explained in the original email
- Allow a response header that specifies which of the above the
browser should do on clicking the back button
(backwards-navigation-safe: True?)
- The browser remembers the state of the document as it was prior to
each history point and resets it to that state before applying the
point in history we are jumping to (yikes!)

Any concerns about caching that aren't covered by the above?

Nelson Menezes
http://fittopage.org


Re: [whatwg] fieldset (was: The legend element)

2009-10-17 Thread Ian Hickson
On Wed, 14 Oct 2009, Jeremy Keith wrote:
 Hixie wrote:
   Then it might be nice to clarify this with a few words in the spec, 
   as The fieldset element represents a set of form controls 
   optionally grouped under a common name can be read as implying 
   structuring and thus accessibility matters.
  
  The element does add structure and help with accessibility, but that 
  doesn't mean it's always necessary.
 
 I just had a thought (that I sanity-checked in IRC)...
 
 Perhaps fieldset should be a sectioning root?
 
 It feels like it's a similar kind of grouping element to blockquote 
 and td in that, while it might well contain headings, you probably 
 wouldn't want those headings to contribute to the overall outline of the 
 document.
 
 What do you think?

On Wed, 14 Oct 2009, Tab Atkins Jr. wrote:
 
 Since I was the one it was sanity-checked against, +1 to this 
 suggestion.  I sometimes use headings to label individual inputs in a 
 form, and I don't particularly want to expose these in the page outline.

Done. (Also for details.)


 A rider suggestion: expose legend in the page outline as the heading 
 for the fieldset.

I considered this, but I don't really want to make the algorithm any more 
complicated, and I'm not really sure we'd want that exposed, any more than 
you want the headings for individual inputs exposed, in the page outline.



Re: [whatwg] fieldset

2009-10-17 Thread Keryx Web

2009-10-17 12:04, Ian Hickson wrote:


A rider suggestion: expose legend in the page outline as the heading
for the fieldset.


I considered this, but I don't really want to make the algorithm any more
complicated, and I'm not really sure we'd want that exposed, any more than
you want the headings for individual inputs exposed, in the page outline.



And such a move might also have unintended consequences for 
accessibility. AT do give legend in fieldsets a special treatment that 
headings do not get. Without careful dialogue and user testing, we 
should *not* change the meaning of legend.


However, does not this open up the possibility of getting rid of dt/dd 
in details and use an hx element, which is a much better solution from a 
semantic POV?



--
Keryx Web (Lars Gunther)
http://keryx.se/
http://twitter.com/itpastorn/
http://itpastorn.blogspot.com/


Re: [whatwg] Issues with Web Sockets API

2009-10-17 Thread Ian Hickson
On Wed, 14 Oct 2009, Alexey Proskuryakov wrote:
 On 13.10.2009, at 4:11, Ian Hickson wrote:
   
   Is this meant to mimic some behavior that existing clients have for 
   HTTP already?
  
  Yes, as it says, the idea is for UAs to send the same headers they 
  would send if the protocol had been HTTP.
 
 For HTTP, this depends on authentication scheme in use. For Basic and 
 Digest authentication in particular, clients are allowed to make certain 
 assumptions about protection spaces: "A client MAY preemptively send the 
 corresponding Authorization header with requests for resources in that 
 space without receipt of another challenge from the server."
 
 I don't think the Web Sockets protocol is sufficiently similar to HTTP 
 to defer to RFC 2617 or other HTTP specs here. Also, implementing just 
 "support the same authentication mechanisms you do for HTTP" is a tall 
 order, since HTTP back-ends don't (always?) expose the necessary APIs 
 for encryption.

I'm not really sure what else to say to be honest. Should I just leave it 
at cookies and nothing else? Really I just want to support Basic (and I 
guess Digest) authentication (primarily for over-TLS connections), so that 
sites that use Basic auth, like, say, porn sites, or the W3C, can also use 
it for their Web Socket connections. I could just limit it that way; would 
that work?
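For reference, the preemptive Basic credential discussed above is nothing more than the base64 encoding of "user:password" (RFC 2617), which is why sending it only over TLS matters for anything resembling a porn site's login. A minimal sketch, using the username/password pair from the RFC's own example:

```python
# Building a preemptive Basic Authorization header (RFC 2617).
# "Aladdin" / "open sesame" is the example credential pair from the RFC.
import base64

def basic_auth_header(user, password):
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return "Authorization: Basic " + token.decode("ascii")

print(basic_auth_header("Aladdin", "open sesame"))
# → Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Since the credential is only encoded, not encrypted, a client "preemptively" sending it on a Web Socket handshake would leak it to any on-path observer unless the connection runs over TLS.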


If /code/, interpreted as ASCII, is 401, then let /mode/ be 
_authenticate_. Otherwise, fail the Web Socket connection and 
abort these steps.
  
   407 (proxy authenticate) also likely needs to be supported.
  
  Proxies wouldn't work with WebSockets in general.
 
 Could you please elaborate? I thought there was a setup that could work 
 with most deployed HTTPS proxies - one could run WebSockets server on 
 port 443.

Oh, I see what you're saying. Proxy authentication of this nature is 
covered by step 2 of the handshake algorithm, as part of "connect to that 
proxy and ask it to open a TCP/IP connection to the host given by /host/ 
and the port given by /port/". There's even an example showing auth 
headers being sent to the proxy. By the time we get down to parsing the 
response, we're long past the point where we might be authenticating to a 
proxy. Is that a problem? I could add support for 407 here and just say 
that you jump back to step 2 and include the authentication this time, 
would that work?


   Some authentication schemes (e.g. NTLM) work on connection basis, so 
   I don't think that closing the connection right after receiving a 
   challenge can work with them.
  
  Yeah, that's quite possible.
 
 Is this something you plan to correct in the spec?

Is there much to correct? I don't understand what would need to change 
here. Does NTLM not work with HTTP without pipelining? Or do you mean that 
you would rather have authentication be a first-class primitive operation 
in Web Socket, instead of relying on the HTTP features? We could do that:
instead of faking an HTTP communication, we could have a header in the 
handshake that means after this, the client must send one more handshake 
consisting of an authentication token, and if the UA fails to send the 
right extra bit, then fail. I think if we did this, we'd want to punt 
until version 2, though.


Re: [whatwg] fieldset

2009-10-17 Thread Ian Hickson
On Sat, 17 Oct 2009, Keryx Web wrote:
 
 However, does not this open up the possibility of getting rid of dt/dd 
 in details and use an hx element, which is a much better solution from a 
 semantic POV?

The hx elements are valid flow content, so there'd be no way to 
distinguish the hx element from the main contents of the element.



Re: [whatwg] a onlyreplace

2009-10-17 Thread Tab Atkins Jr.
On Sat, Oct 17, 2009 at 12:22 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Oct 16, 2009 at 11:06 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Promoting this reply to top-level because I think it's crazy good.

 On Fri, Oct 16, 2009 at 11:09 AM, Aryeh Gregor simetrical+...@gmail.com 
 wrote:
 On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. jackalm...@gmail.com 
 wrote:
 As well, this still doesn't answer the question of what to do with
 script links between the static content and the original page, like
 event listeners placed on content within the static.  Do they get
 preserved?  How would that work?  If they don't, then some of the
 benefit of 'static' content is lost, since it will be inoperable for a
 moment after each pageload while the JS reinitializes.

 Script links should be preserved somehow, ideally.  I would like to
 see this be along the lines of AJAX reload of some page content,
 without JavaScript and with automatically working URLs.
 [snip]
 I'm drawn back to my original proposal.  The idea would be as follows:
 instead of loading the new page in place of the new one, just parse
 it, extract the bit you want, plug that into the existing DOM, and
 throw away the rest.  More specifically, suppose we mark the dynamic
 content instead of the static.

 Let's say we add a new attribute to a, like <a onlyreplace="foo">,
 where foo is the id of an element on the page.  Or better, a
 space-separated list of elements.  When the user clicks such a link,
 the browser should do something like this: change the URL in the
 navigation bar to the indicated URL, and retrieve the indicated
 resource and begin to parse it.  Every time an element is encountered
 that has an id in the onlyreplace list, if there is an element on the
 current page with that id, remove the existing element and then add
 the element from the new page.  I guess this should be done in the
 usual fashion, first appending the element itself and then its
 children recursively, leaf-first.

 This. Is. BRILLIANT.

 [snip]

 Thoughts?

 We actually have a similar technology in XUL called overlays [1],
 though we use that for a wholly different purpose.

Interesting.  It does seem like nearly the same idea in practice.

 Anyhow, this is certainly an interesting suggestion. You can actually
 mostly implement it using the primitives in HTML5 already. By using
 pushState and XMLHttpRequest you can download the page and change the
 current page's URI, and then use the DOM to replace the needed parts.
 The only thing that you can't do is stream in the new content since
 mutations aren't dispatched during parsing.

Yup, it's basic AJAX (or AHAH if you want to be specific, I guess)
with some history hacking.  Doing this in JS is completely possible
(it's done today), it's just non-trivial and, duh, requires js.

(However, I'm still not sure what the accessibility/searchability
story is for AJAXy single-page apps that operate similarly.  I'm
concerned that it's suboptimal, but I'm pretty sure that @onlyreplace
solves those issues by allowing such clients to just perform ordinary
navigation if they don't want to/don't know how to deal with
@onlyreplace properly.  The fact that @onlyreplace is a simple
declarative mechanism, however, may allow such clients to finally
handle this sort of interaction properly, as they can predict in
advance which parts of the page will change; something that is
difficult/impossible to do in AJAX apps without actually running the
JS and watching for mutation.)

 For some reason I'm still a bit uneasy about this feature. It feels a
 bit fragile for some reason. One thing I can think of is what happens
 if the load stalls or fails halfway through the load. Then you could
 end up with a page that contains half of the old page and half the
 new.

I imagine a browser could stall on swapping in the new bits until it's
found all of them?  I'm not sure if that would give too much of a
penalty in common situations or not.

This same sort of situation can occur with an AJAX load too.  There
you have error callbacks, though.  Does it feel like something similar
might be necessary?  I'd prefer to keep this as transparent as
possible.

Heck, though, this situation can occur with an *ordinary* load.  When
that happens you just get a partial page.  We don't have special
handling for that; we just expect the user to hit Refresh and carry
on.  @onlyreplace is supposed to be robust against refreshing, so do
we expect partial-page loads to be any worse under it than under
ordinary navigation?

 Also, what should happen if the user presses the 'back' button?

If the browser can remember what the page state was previously, just
swap in the old parts.  If not, but it at least remembers what parts
were replaced, make a fresh request for the previous page and
substitute in just those bits.  If it can't remember anything, just do
an ordinary navigation with a full page swap.

It should act as exactly like current Back behavior as possible.

Re: [whatwg] a onlyreplace

2009-10-17 Thread Markus Ernst

Tab Atkins Jr. schrieb:

On Sat, Oct 17, 2009 at 12:22 AM, Jonas Sicking jo...@sicking.cc wrote:

Also, what should happen if the user presses the 'back' button?


If the browser can remember what the page state was previously, just
swap in the old parts.  If not, but it at least remembers what parts
were replaced, make a fresh request for the previous page and
substitute in just those bits.  If it can't remember anything, just do
an ordinary navigation with a full page swap.

It should act as exactly like current Back behavior as possible.
We're not really playing with the semantics of navigation, so that
shouldn't be difficult.


I agree with that. I click a link on a page with a URI, and after 
clicking I get a new page with another URI - so if I hit the back 
button, I expect to get back the page I saw before clicking the 
link. (Of course with browser-specific peculiarities - Firefox e.g. 
remembers the scroll position, others may not...) The user experience 
when using the back button should not differ depending on whether a 
browser supports @onlyreplace or not.



On Sat, Oct 17, 2009 at 3:53 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

I'm guessing that the rare case where you need to write into a
replaced ID you can simply have a JS hook that fires on the load and
fixes up the replaced sections as needed.


The functioning of load events here confuses me a bit, because I've
never done any hacking in that area and so don't understand the
mechanics well enough to know what's reasonable.  Should the new page
still fire a load event once its replaceable content has loaded?  I'm
guessing that the old page never fires an unload event?  I really
don't know what should happen in this area.

(After giving it a little thought, though, I think we shouldn't change
the semantics of load/unload/etc.  These are still useful, after all,
for when the page *is* completely unloaded or loaded, such as on first
visit or when the user hits Refresh.  We'd probably want a new set of
events that fire at the elements being swapped out, and then at the
new elements once they've been pushed in.)
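A sketch of the new event pair Tab suggests. The names beforereplace/afterreplace are invented here, and elements are stubbed as plain objects with a dispatch() method so that only the ordering is shown:

```javascript
// Illustrative ordering for hypothetical swap events: old elements are
// notified before the swap, new elements after it; load/unload untouched.
function swapWithEvents(oldEls, newEls, doSwap) {
  oldEls.forEach(function (el) { el.dispatch('beforereplace'); });
  doSwap(); // the actual id-based subtree replacement happens here
  newEls.forEach(function (el) { el.dispatch('afterreplace'); });
}
```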


I admit I don't fully understand load events either. If I get it 
correctly, this is about functions called on load that access elements 
in the replaceable parts of the page. A common use case for this is 
setting the focus on the first input element of a form. I don't think 
that this can be solved on the UA side; some authoring will be 
necessary. Some possible workarounds are:
- Put page-specific scripts into a separate <script> element with an 
id, and include it in the @onlyreplace list;
- make one script that fits all pages, by checking whether an element 
exists before doing actions on it;
- instead of using <body onload="foo()">, put the function call into a 
<script> element at the bottom of the replaceable element.
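The second workaround - a single shared script that probes for an element before acting on it - might look like this sketch (the selector and function name are illustrative):

```javascript
// One script shared by all pages: focus the first form field if, and
// only if, the freshly swapped-in content actually contains one.
function focusFirstInput(doc) {
  const first = doc.querySelector('form input, form textarea');
  if (first) first.focus(); // silently a no-op on pages without a form
  return first !== null;
}
```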


Anyway, such things would be much easier (with or without @onlyreplace) 
if the onload event handler were allowed on every HTML element 
rather than on window and body only:

<input type="text" name="Name" onload="this.focus()">

But this looks so trivial that element.onload must surely have been 
suggested long ago and declined for good reasons, I assume?


Re: [whatwg] a onlyreplace

2009-10-17 Thread Dion Almaer
This feels like really nice sugar, but maybe the first step should be to get
a shim out that gets it working using JS now, and then see how it works
in practice. I totally understand why this looks exciting, but I have the
same uneasiness as Jonas.  It feels like a LOT of magic to go grab a page
and grab out the id and so on, and I am sure there are edge cases. Cool idea
for sure! It also feels like this should work nicely with the history/state
work that already exists.


Re: [whatwg] a onlyreplace

2009-10-17 Thread Jonas Sicking
On Sat, Oct 17, 2009 at 11:16 AM, Dion Almaer d...@almaer.com wrote:
 This feels like really nice sugar, but maybe the first step should be to get
 a shim out that gets it working using JS now, and then see how it works
 in practice. I totally understand why this looks exciting, but I have the
 same uneasiness as Jonas.  It feels like a LOT of magic to go grab a page
 and grab out the id and so on, and I am sure there are edge cases. Cool idea
 for sure! It also feels like this should work nicely with the history/state
 work that already exists.

Yeah, I think this puts the finger on my uneasiness nicely. There's
simply a lot of stuff going on with very little control for the
author. I'd love to see a JS library developed on top of
pushState/XMLHttpRequest that implements this functionality, and then
see that JS library deployed on websites, and see what the experiences
from that are.

If it turns out that this works well then that would be a strong case
for adding this to browsers natively.

In fact, you don't even need to use pushState. For now this can be
faked using onhashchange and fragment identifier tricks. It's
certainly not as elegant as pushState (that is, after all, why
pushState was added), but it's something that can be tried today.

/ Jonas
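The hashchange/fragment trick Jonas mentions could be roughed out as below. The "#!" prefix convention and both helper names are invented for illustration:

```javascript
// Carry the real target URL in the fragment (e.g. "#!/articles/2") and
// fake the navigation from an onhashchange handler.
function targetFromHash(hash) {
  // "#!/articles/2" -> "/articles/2"; anything else is a normal fragment
  return hash && hash.indexOf('#!') === 0 ? hash.slice(2) : null;
}

function installFakeNavigation(win, swap) {
  win.onhashchange = function () {
    const url = targetFromHash(win.location.hash);
    if (url) swap(url); // fetch `url` and replace the @onlyreplace ids
  };
}
```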


Re: [whatwg] a onlyreplace

2009-10-17 Thread Nelson Menezes
2009/10/17 Jonas Sicking jo...@sicking.cc:
 In fact, you don't even need to use pushState. For now this can be
 faked using onhashchange and fragment identifier tricks. It's
 certainly not as elegant as pushState (that is, after all, why
 pushState was added), but it's something that can be tried today.


Well, here's a badly-hacked-together solution that emulates this behaviour...

I think it'll be helpful even if it only gets used in a JS library as
you mention (change the attribute to a classname then). Still, it can
be made to work with today's browsers:

http://test.fittopage.org/page1.php


Nelson Menezes
http://fittopage.org


Re: [whatwg] a onlyreplace

2009-10-17 Thread Schuyler Duveen
If you'd like to see what this looks like in Javascript, I implemented
this technique several years ago.  One place you can see it publicly is
the swapFromHttp() function at:
http://havel.columbia.edu/media_panels/js/MochiPlus.js

You can see it in action on some pages like:
http://havel.columbia.edu/media_panels/video_window/?material=abrams
where it adds in the page on the left from this file
http://havel.columbia.edu/media_panels/materials/abrams.html

One of the big issues we found using it on some other sites is that
JavaScript listeners (rather than onclick= attributes) and other DOM
pointers in the system became stale. Thus, only half the problem was
solved.

Also, the problem (as I implemented it) is that XMLHttpRequest.responseXML
has been very finicky in past (and current) browsers.  My comments in the
code reflect some of the things you need to make sure you're doing to
make it work across browsers (at least if you want a DOM vs. regex
implementation):
* IE 6 needed the Content-Type: text/xml header
* Firefox (?2.x) wants xmlns="http://www.w3.org/1999/xhtml" on the html tag
* IE and Safari don't handle named entities like &nbsp; well in this
context; they should be numeric (e.g. &#160;)

Vendors might better serve us by reducing these hoops to jump through so
a javascript library could do the job reliably.
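The third hoop - downgrading named entities to numeric references before XML parsing - might be handled by a preprocessing pass like this sketch (the entity table here is a tiny illustrative subset):

```javascript
// Rewrite named character entities to numeric ones so that (per the mail)
// IE's and Safari's XML handling of the fetched page doesn't choke.
// Only a handful of entities are mapped, for illustration.
const NAMED_TO_NUMERIC = { nbsp: '#160', copy: '#169', amp: '#38' };

function numericEntities(markup) {
  return markup.replace(/&([a-zA-Z]+);/g, function (match, name) {
    return name in NAMED_TO_NUMERIC ? '&' + NAMED_TO_NUMERIC[name] + ';' : match;
  });
}
```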

This method did make it much easier to leverage server template code.
But since it largely just simplifies server-side template code, why not
stick with server-side solutions like Ian Bicking's:
http://blog.ianbicking.org/2008/09/08/inverted-partials/

It's still a bit weird that this proposal, instead of allowing
every element to be a link (like XHTML2), would allow every element to
be something like an IFRAME (all while a thread remembering how evil
framesets are continues).

cheers,
sky




Re: [whatwg] Storage events

2009-10-17 Thread Ian Hickson
On Thu, 15 Oct 2009, Jeremy Orlow wrote:

 I'd like to propose we remove the source attribute from storage 
 events.  ( http://dev.w3.org/html5/webstorage/#the-storage-event) In 
 Chrome, we cannot provide access to a window object unless it's in the 
 same process.  Since there's no way to guarantee that two windows in the 
 same origin are in the same process, Chrome would need to always set it 
 to null in order to avoid confusing developers (since what process a 
 page is in is really an implementation detail).

 But, as far as I can tell, Safari is the only browser that currently 
 provides this.  I suspect that as other multi-process implementations 
 are developed, they'll run into the same issue.  And, even if they can 
 technically provide synchronous access to another process's Window 
 object, there are _very_ strong arguments against it.  So, can we please 
 remove the source attribute from storage events?

On Thu, 15 Oct 2009, João Eiras wrote:
 
 The specification says source is a WindowProxy, so if the underlying 
 window is deleted or inaccessible, accessing any member of source could 
 just throw an INVALID_STATE_ERR. The problem is also there if a storage 
 event is queued and the originating window is deleted meanwhile, or if 
 the document with the listener keeps a reference to the originating 
 window for a long time and that window is closed - unless the user agent 
 keeps the originating window alive while its WindowProxy is not garbage 
 collected, which is not desirable.

I've removed the 'source' attribute.


On Thu, 15 Oct 2009, Maciej Stachowiak wrote:
 
 I would guess the main use case for this is to distinguish changes from 
 *this* window (the one receiving the event) and changes from other 
 windows. Perhaps a boolean flag to that effect could replace source.

I haven't added this currently, but we may add this in the future if it 
turns out to be useful.


On Thu, 15 Oct 2009, Jeremy Orlow wrote:
 
 One other question: is the URL attribute supposed to be the same as 
 documentURI or location.href?  I ask because WebKit currently uses the 
 documentURI but if this were the correct behavior, I would have expected 
 the spec to make that more clear.

The spec is completely explicit about what it should be set to -- it says:

# the event must have its url attribute set to the address of the document 
# whose Storage object was affected
 -- 
http://www.whatwg.org/specs/web-apps/current-work/complete.html#event-storage

...where the address of the document is defined as being the same as the 
value returned by document.URL (which is different from the value returned 
by location.href -- that's the document's current address). If you 
follow the hyperlinks from the link above, and click on the bold dfn 
text to find where the terms are used, you should find it to be 
unambiguous.


On Thu, 15 Oct 2009, Darin Fisher wrote:
 
 This is interesting since documentURI is a read/write property: 
 http://www.w3.org/TR/DOM-Level-3-Core/core.html#Document3-documentURI

I assume that is a mistake. Does anyone support documentURI? It seems 
completely redundant with document.URL.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] window.setInterval if visible.

2009-10-17 Thread Ian Hickson
On Thu, 15 Oct 2009, Boris Zbarsky wrote:
 On 10/15/09 3:35 PM, Gregg Tavares wrote:
  I was wondering if there has been a proposal for either an optional 
  argument to setInterval that makes it only call back if the window is 
  visible OR maybe a window.setRenderInterval.
 
 You might be interested in 
 http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/527d0cedb9b0df7f/57625c94cdf493bf
  
 for some more discussion about approaches to this problem.  In 
 particular, that proposal tries to address overeager animations in 
 visible windows as well.
 
 Note, by the way, that testing whether a window is visible is not 
 cheap; testing whether an element is visible is even less cheap

On Thu, 15 Oct 2009, Jeremy Orlow wrote:
 
 I'd imagine that UAs could use an overly conservative metric of when 
 things are visible to make things cheaper if/when this is a concern.  
 All that really matters is that the UA never say it isn't visible when 
 any part of the window is visible.
 
 I agree that some mechanism to know when things aren't visible would be 
 very useful.

On Thu, 15 Oct 2009, Boris Zbarsky wrote:
 
 It's a concern any time someone checks it on every 10 ms interval 
 invocation. For example, I'm right now looking at a browser window where 
 the check would probably take longer than that (ping time from the X 
 client to the X server is 50ms in this case).
 
 What are the use cases?  Are they addressed by roc's proposal?  If not, 
 is an explicit script-triggered visibility check the only way to address 
 them?

On Thu, 15 Oct 2009, João Eiras wrote:
 
 You're trying to solve a real problem with a very specific API. You 
 might use setInterval, but someone else might use a worker or 
 setTimeout.
 
 The best way would be an attribute on the window, like window.isVisible, 
 returning either true or false: true if the document is partially or 
 totally visible. This way, all other possible use cases that need to 
 postpone animations or other complex and heavy DOM/layout operations 
 could be handled just by checking that value.
 
 I personally think it's a good idea to have that info available.
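Consumed from a polling loop, the proposed window.isVisible attribute (purely hypothetical; nothing like it is specced) might be used like this:

```javascript
// Wrap a heavy periodic update so it is skipped whenever the window
// reports itself as not visible at all. `isVisible` is the hypothetical
// attribute proposed above; everything else is illustrative.
function makeThrottledTick(win, heavyUpdate) {
  return function tick() {
    if (win.isVisible === false) return false; // window fully hidden: skip
    heavyUpdate(); // expensive DOM/layout work runs only when visible
    return true;
  };
}
```

An author would pass the real window object and schedule tick with setInterval as usual.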

On Thu, 15 Oct 2009, Markus Ernst wrote:
 
 From a performance point of view it might even be worth thinking about 
 the opposite: allow UAs to stop the execution of scripts on non-visible 
 windows or elements by default, and provide a method to explicitly 
 specify that the execution of a script must not be stopped.
 
 If you provide methods to check the visibility of a window or element, 
 you leave it up to the author to use them or not. I think performance 
 issues should rather be up to the UA.

On Fri, 16 Oct 2009, Gregg Tavares wrote:
 
 I agree that would be ideal. Unfortunately, current web pages already 
 expect setInterval to function even when they are not visible - web 
 based chat and mail clients come to mind as examples. So, unfortunately, 
 it doesn't seem like a problem a UA can solve on its own.
 
 On the other hand, if the solution is as simple as adding a flag to 
 setInterval, then it's at least a very simple change for those apps 
 that want to not hog the CPU when not visible.

I haven't added this feature to HTML5, as it seems more of a 
presentational thing and would be best addressed in a spec like CSSOM. I 
would recommend taking this up in the webapps group.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Storage events

2009-10-17 Thread Darin Fisher
On Sat, Oct 17, 2009 at 8:20 PM, Ian Hickson i...@hixie.ch wrote:
...

 On Thu, 15 Oct 2009, Darin Fisher wrote:
 
  This is interesting since documentURI is a read/write property:
  http://www.w3.org/TR/DOM-Level-3-Core/core.html#Document3-documentURI

 I assume that is a mistake. Does anyone support documentURI? It seems
 completely redundant with document.URL.


Gecko and WebKit appear to both support documentURI.  Only WebKit allows it
to be modified.
-Darin


Re: [whatwg] Canvas Proposal: aliasClipping property

2009-10-17 Thread Ian Hickson
On Thu, 15 Oct 2009, Charles Pritchard wrote:
 
  Turning off anti-aliasing just trades one problem for another.

 I'm not sure I understand what that trade is -- isn't that something 
 that the individual user of Canvas would take into account when flipping 
 the switch?

Sure, but you're still just trading one problem for another. What if you 
want neither aliasing (i.e. you don't want 1-bit clipping paths) nor the 
problem to which you allude (i.e. painting can't be done incrementally)?
Surely it's best to solve both problems rather than force authors to pick 
which problem they want.


 The setTimeout/setInterval loop (intrinsic to Canvas, via Window... and 
 intrinsic to its double-buffering properties) appropriately segments one 
 set of primitive drawings from another set (drawing them all at once). 
 That particular loop is already set up; browser vendors could, within 
 the current standard, make appropriate adjustments (turning off 
 coverage-based anti-aliasing for adjacent lines).

You can't know, in the current setup, which lines are supposed to be 
adjacent and which lines are supposed to be superimposed.


  In either case, it seems like something best handled in a future 
  version.

 It seems like something that won't be handled in this version.

Right.


 As far as I can tell, the area (width and height, i.e. extent) of source 
 image A [4.8.11.13 Compositing], when source image A is a shape, is not 
 defined by the spec.
 
 And so in Chrome, when compositing with a shape, the extent of image A is 
 only that width and height the shape covers, whereas in Firefox, the 
 extent of image A is equivalent to the extent of image B (the current 
 bitmap). This led to an incompatibility between the two browsers.
 
 Best as I can tell, Chrome takes the most efficient approach.
 
 For a very visible example, see the Moz page below in both browsers: 
 https://developer.mozilla.org/en/Canvas_tutorial/Compositing

On Fri, 16 Oct 2009, Philip Taylor wrote:
 
 I think the spec is clear on this (at least when I last looked; not sure 
 if it's changed since then). Image A is infinite and filled with 
 transparent black, then you draw the shape onto it (with no compositing 
 yet), and then you composite the whole of image A (using 
 globalCompositeOperation) on top of the current canvas bitmap. With some 
 composite operations that's a different result than if you only 
 composited pixels within the extent of the shapes you drew onto image A.
 
 (With most composite operations it makes no visible difference, because 
 compositing transparent black onto a bitmap has no effect, so this only 
 affects a few unusual modes.)
 
 There is currently no definition of what the extent of a shape is 
 (does it include transparent pixels? shadows? what about text with a 
 bitmap font? etc), and it sounds like a complicated thing to define and 
 to implement interoperably, and I don't see obvious benefits to users, 
 so the current specced behaviour (using infinite bitmaps, not extents) 
 seems to me like the best approach (and we just need everyone to 
 implement it).
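The "few unusual modes" point can be made concrete with a toy single-alpha model; this is ordinary Porter-Duff arithmetic, not text from the spec:

```javascript
// Composite one source pixel over one destination pixel, tracking alpha
// only. Outside the drawn shape, the infinite image A is transparent
// black (alpha 0): source-over leaves the destination alone there, but
// 'copy' clears it - which is exactly where implementations disagreed.
function compositePixel(op, srcAlpha, dstAlpha) {
  if (op === 'copy') return srcAlpha; // source replaces dest, even if transparent
  if (op === 'source-over') return srcAlpha + dstAlpha * (1 - srcAlpha);
  throw new Error('operator not modeled: ' + op);
}
```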

On Sat, 17 Oct 2009, Robert O'Callahan wrote:

 Ah, so you mean Firefox is right in this case?

On Fri, 16 Oct 2009, Philip Taylor wrote:
 
 Yes, mostly. 
 http://philip.html5.org/tests/canvas/suite/tests/index.2d.composite.uncovered.html
  
 has relevant tests, matching what I believed the spec said - on Windows, 
 Opera 10 passes them all, Firefox 3.5 passes all except 'copy' 
 (https://bugzilla.mozilla.org/show_bug.cgi?id=366283), Safari 4 and 
 Chrome 3 fail them all.
 
 (Looking at the spec quickly now, I don't see anything that actually 
 states this explicitly - the only reference to infinite transparent 
 black bitmaps is when drawing shadows. But 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-model
  
 is phrased in terms of rendering shapes onto an image, then compositing 
 the image within the clipping region, so I believe it is meant to work 
 as I said (and definitely not by compositing only within the extent of 
 the shape drawn onto the image).)

On Fri, 16 Oct 2009, Charles Pritchard wrote:

 Then, should we explicitly state it, so that the next versions of Chrome 
 and Safari are pressured to follow?
 
 I agree, that the spec has an infinite bitmap for filters: shadows are a 
 unique step in the rendering pipeline.
 
 ...
 
 In regard to this: 'There is currently no definition of what the 
 extent of a shape is'
 
 While I want a common standard - and I think we are in agreement here 
 that we'll be defining image A as an infinite bitmap - I believe that 
 this statement should be addressed.

On Sat, 17 Oct 2009, Robert O'Callahan wrote:
 
 That shouldn't be necessary. If the composition operation was limited to 
 the extents of the source shape, the spec would have to say this 
 explicitly and define what those extents are. I don't see how you can 
 argue from silence that the composition operation