Re: [AC] Helping server admins not making mistakes

2008-06-12 Thread Jonas Sicking


Hi Thomas and everyone,

So I realize that I'm not quite understanding your previous mail. It 
sounds like you have some alternative proposal in mind which I'm not 
following.


So let me start by stating my concerns:

My concern with the current spec is that once a server has opted in to the 
Access-Control spec via the pre-flight request, it is not going to be 
able to correctly handle all the possible requests that the opt-in 
enables. "Correctly" here means whatever the server operator 
had in mind when opting in.


I have this concern since currently opting in means that you have to 
deal with all possible combinations of all valid http headers and http 
methods.


There is currently no way for the server operator to opt in without also 
having to deal with this.


In the initial mail in this thread I had a proposal to address this 
concern. At the cost of some complexity in the client.
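For concreteness, the idea behind that proposal was to let the server enumerate exactly which methods and request headers it is prepared to handle, rather than implicitly accepting the full matrix. A minimal server-side sketch of that idea, where the concrete method and header lists are illustrative assumptions rather than anything from a spec:

```python
# Sketch: the operator explicitly lists the methods and request headers
# they intend to handle, so the opt-in never covers the full matrix.
# The concrete lists below are illustrative assumptions.
ALLOWED_METHODS = {"GET", "POST"}
ALLOWED_HEADERS = {"content-type", "x-requested-with"}

def preflight_permits(method: str, request_headers: list) -> bool:
    """Allow a cross-site request only if it stays within the opt-in."""
    if method not in ALLOWED_METHODS:
        return False
    return all(h.lower() in ALLOWED_HEADERS for h in request_headers)

print(preflight_permits("POST", ["Content-Type"]))  # True
print(preflight_permits("DELETE", []))              # False
```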



It sounds like you have a counter proposal. Before you describe this 
proposal, I have four questions:


What is the purpose of the proposal?
Does this proposal still address all or part of my above concern?
Is it simpler than my proposal?
Is it simpler than the current spec?

And then finally I'm of course interested to hear what your proposal 
actually is :)


Best Regards,
/ Jonas



Re: [XHR] SVG WG LC comments

2008-06-13 Thread Jonas Sicking


Anne van Kesteren wrote:


On Thu, 12 Jun 2008 23:01:10 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

This is the Document pointer.

If 'pointer' or 'Document pointer' is a term (which the styling 
seems to indicate) then please add it to section 2.2.
 The specification defines various terms throughout. Only terms that 
didn't really fit anywhere else are in section 2.2.


Still sounds like Document is the correct term here, rather than 
Document pointer.


Still? What do you mean?


As in I still think he has a point.

Using the term 'pointer' seems bad since it's very C/C++ specific. I 
don't think it's used in either Java or ECMAScript, where the term 
'reference' is more common. Additionally, it is used very inconsistently 
throughout the spec.


I would suggest using simply the term Document or Document member 
instead.



Isn't the URI resolved using the document?


Yes, the active document within the window.


And what are the security dependencies? Security in browsers is 
different enough that I don't think we can nail it down too hard.


I think it has to do with two documents that can exchange information 
due to document.domain, and then determining the origin of the 
XMLHttpRequest object.


However, that's not too important, determining the base URI seems 
important enough to justify this.


But you are not using the Window object to determine the base URI. The 
AbstractView interface is enough to determine the base URI.


It is quite clear that the Window and HTML5 specs are not going to be 
finished recommendations by the time we want to put the XMLHttpRequest 
into Rec, so at that point we'll have to remove normative references to 
them one way or another.


So the fewer references you have to them now, the less work you'll have 
to do to modify the spec later.


/ Jonas



Re: Seeking earlier feedback from MS [Was: IE Team's Proposal for Cross Site Requests]

2008-06-13 Thread Jonas Sicking


Sunava Dutta wrote:

Woo hooo, my first mail to the new webapps alias! -:)

Thanks for waiting for us to get feedback in from people across MSFT. As 
promised, here is the whitepaper on client side cross domain security 
articulating the security principles and challenges (high level and specifics ) 
of the current CS-XHR draft.
I've also addressed the questions members raised in the FAQ.


Thanks Sunava, I look forward to reading this once it is available under 
an acceptable license.


However, I would further hope that you are able to discuss the feedback 
that is sure to be raised. As with your initial feedback, much of the 
results of these discussions will also require research, so it is 
good if we can get as much done before the face-to-face as possible.



As Jonas and Art mention, in order to provide the opportunity for members to 
research and usefully discuss the contents and other issues, let's talk about 
our concerns, among other items, F2F in the first week of July.


Yes, though I do want to point out that there are many other issues 
to discuss at the F2F besides Microsoft's feedback.


Speaking of which, do we have an agenda yet for the F2F meeting?


Look forward to hosting the members here in Redmond.


Looking forward to seeing you there!

Best Regards,
Jonas Sicking



Re: [AC] Helping server admins not making mistakes

2008-06-14 Thread Jonas Sicking


Thomas Roessler wrote:

I think we've both been arguing this all over the place, and the
thread might be getting a bit incoherent.

So let's try to start over...

The question here is whether it makes sense to add fine-grained
controls to the authorization mechanisms to control -- in addition
to whether or not cross-site requests are permitted at all --:

  (a) whether or not cookies are sent
  (b) what HTTP methods can be used in cross-site requests.

I have two basic points:

1. *If* we have to have that kind of fine-grained controls, let's
please do them coherently, and within the same framework.  The
argument here is simply consistency.


Am I understanding you right that this is just an argument about what 
syntax to use? Syntax is certainly important as a tool to reduce 
human-factor errors, so I'm not saying syntax doesn't matter.



2. We shouldn't do (a) above, for several reasons:

 - it adds complexity


For who? It seems to me that it makes it *a lot* simpler for server 
operators that want to create mashups with public data.


For private data we are already relying on server operators to be 
clueful enough to ask the user first, so asking them to add an 
additional header (or tweak their syntax) isn't much to ask at all.



 - it adds confusion (witness this thread)
 - it's pointless

I don't think I articulated the thinking behind the third of these
reasons very clearly.  The whole point of the access-control model
(with pre-flight check and all that) is that requests that can be
caused to come from the user's browser are more dangerous than
requests that a third party can make itself.

Consider a.example.com and b.example.com.  Alice has an account with
a.example.com and can wreak some havoc there through requests that
have the right authentication headers.

The purpose of having the access-control mechanism is:

- to prevent b.example.com from reading information at a.example.com
  *using* *Alice's* *credentials* (because b.example.com can also
  just send HTTP requests from its own server), unless specifically
  authorized

- to prevent b.example.com from causing non-GET requests to occur at
  a.example.com *using* *Alice's* *credentials* (because
  b.example.com can also just send HTTP requests from its own
  server), unless specifically authorized

So, if there is an additional way to authorize third-party requests,
but without Alice's credentials, we're effectively introducing an
authorization regime for the same requests that our attacker could
send through the network anyway, by using their own server -- modulo
source IP address, that is.


And modulo the fact that the user might be able to connect to 
a.example.com, whereas b.example.com might not be able to. This is the 
case if a.example.com and the user are both sitting behind the same 
firewall.


These are some pretty important modulos.


Is that really worth the extra
complexity, both spec, implementation, and deployment wise?  I don't
think so.


Content and servers behind firewalls means that we have no choice but to 
authorize even requests that don't include the user credentials.



(Oh, and what does a no cookies primitive mean in the presence of
VPNs or TLS client certificates?)


That is a good question, one that we should address.


About the methods point, my concern is that the same people who are
clueless about methods when writing web applications will be
clueless about the policies.


I don't agree. I think it requires more knowledge to know how your 
server reacts to the full matrix of methods and headers, than to opt in 
to the headers that you are planning on handling in your CGI script.


Of course, it is very hard to get data on this. I do have some ideas 
for how to get experienced input here, so hopefully I will have more data 
in a few days.


/ Jonas



Re: XHR2 Feedback As Tracker Issues

2008-06-16 Thread Jonas Sicking


It would be great if Microsoft could do this.

Sunava Dutta wrote:

I LOVE the idea!


-Original Message-
From: Doug Schepers [mailto:[EMAIL PROTECTED]
Sent: Monday, June 16, 2008 12:39 PM
To: Chris Wilson
Cc: Ian Hickson; Sunava Dutta; Arthur Barstow; Marc Silbey; public-
webapps; Eric Lawrence; David Ross; Mark Shlimovich (SWI); Doug
Stamper; Zhenbin Xu; Michael Champion
Subject: XHR2 Feedback As Tracker Issues (was: [NOT] Microsoft's
feedback on XHR2)

Hi, Folks-

It might be useful if specific points were raised as issues in the
WebApps Tracker [1], rather than just floating around in email (be it
PDF, HTML, or plaintext).  That way, they could be addressed in a
concise and systematic manner.

Do people (specifically, the chairs, the editor, and the contributors)
think this would be useful?

[1] http://www.w3.org/2008/webapps/track/products/4

Regards-
-Doug Schepers
W3C Team Contact, WebApps, SVG, and CDF








Re: Need PDF of MS' input [Was Re: Seeking earlier feedback from MS]

2008-06-16 Thread Jonas Sicking
 what
types of problems you ran into at that time.


Recommendation

· Do not allow verbs other than GET and POST. This is in line with the 
capabilities of HTML forms today as specified by HTML 4.


· If verbs are sent cross domain, pin the OPTIONS request for 
non-GET verbs to the IP address of subsequent requests. This will be a 
first step toward mitigating DNS Rebinding and TOCTOU attacks.


As mentioned before, even with DNS Rebinding attacks Access-Control is
designed in such a way that it doesn't allow any types of requests to be
sent that can't already be sent by the current web platform.

However the pinning is an interesting idea here. One we should discuss
further.
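The pinning idea can be sketched roughly as follows: resolve the host once at pre-flight time and require later requests to hit the same address, so a DNS answer that changes between the OPTIONS check and the actual request is detected. This is only an illustration of the concept, not the actual mechanism proposed in the whitepaper:

```python
import socket

class PinnedHost:
    """Resolve a host once at pre-flight time and remember the address.

    Illustrative sketch of DNS pinning: a client holding a PinnedHost
    would refuse the follow-up request if the resolution has changed.
    """

    def __init__(self, host: str):
        self.host = host
        self.ip = socket.gethostbyname(host)  # resolved during OPTIONS

    def still_pinned(self) -> bool:
        """True if the host still resolves to the pre-flight address."""
        return socket.gethostbyname(self.host) == self.ip
```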

· Using XMLHttpRequest to do this is inherently more complicated 
as XHR has its own rules for blocking verbs.


You mean that this is more complicated implementation-wise? As stated 
before, implementation complexity is certainly important to take into 
consideration; I'm just trying to understand your concern.



Looking forward to continued discussion on these topics. There is 
definitely some interesting stuff in here so I'm glad we got this feedback!


Best Regards,
Jonas Sicking



Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Julian Reschke wrote:


timeless wrote:
On Thu, Jun 19, 2008 at 1:09 PM, Julian Reschke 
[EMAIL PROTECTED] wrote:

Can you provide an example where providing *XML* parse error information
within *XHR* would be problematic?


i really shouldn't have to. imagine a document that is not CSS and is 
not XML.


now imagine an api that lets you try to load it as css. imagine that
this api exposes a dom object that describes *any* information from
that document in the case that it fails to parse as css.

basically it meant that you can interrogate pages that you weren't
supposed to be able to look at to get information you weren't supposed
to have.

now replace 'css' with 'xml'. The logic still applies.

And yes, I understand you'll wave hands about this is a trusted
application. I don't care. If it's a trusted application, then I
trust it not to make mistakes and to have ways to verify the
information server side before it's ever sent on any wires.


But you already can read the unparsed content using responseText, no? 
Where's the leakage then?


Exactly. The problem with bug 35618 is that CSS can be loaded from a 3rd 
party site without that site opting in. In this case providing parsing 
error information could result in leakage of private information.


Though really, just the loading of the CSS is an information leak. 
However, it has been deemed unlikely that someone will put private 
information into CSS rules.


There would be no risk of information leakage to provide parse error 
information for same-site CSS loads. Likewise, there is no risk of 
information leakage to provide parse errors for same site XML loads, or 
cross-site XML loads where the 3rd party site has opted in.
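For reference, the kind of parse error information at stake here is position data like the following, shown with Python's standard XML parser purely as an analogy for what a browser could expose when the load is same-site or opted in:

```python
import xml.etree.ElementTree as ET

# A deliberately malformed document: <item> is never closed.
broken = "<root><item>unclosed</root>"
try:
    ET.fromstring(broken)
except ET.ParseError as err:
    line, column = err.position  # the "rich" detail: where parsing failed
    print(f"parse error at line {line}, column {column}: {err}")
```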


/ Jonas




Re: Opting in to cookies - proposal

2008-06-19 Thread Jonas Sicking


Maciej Stachowiak wrote:



On Jun 14, 2008, at 4:23 AM, Jonas Sicking wrote:



I must say though, this is starting to sound complex and I am not 
totally convinced of the need to make servers opt in to getting 
cookies. Is it really a likely mistake that someone would take 
affirmative steps to enable cross-site access to a per-user resource, 
but then neglect to check whether requests are cross-site and act 
appropriately?


I do think there is a big risk of that, yes. I do think that many sites 
that today serve public data have a few pages in there which 
contain forms or other pages that serve user-specific data.


Even something as simple as a news site that largely serves public 
news articles might have a cookie where the user has chosen a home 
location for local news. This is the case on the news site I use, for 
example: it usually just serves a standard list of news, but if I give 
it my home zip code, it will additionally serve a section of local news.


I guess I don't see that as a huge risk. In the case of the hypothetical 
news site, if it is handing out news in some kind of data feed format, 
wouldn't the point of access-control be to give the personalized feed? 


If the site gives out a personalized feed it has to be *a lot* more 
careful who it gives it out to. It must first ask the user's permission 
for every site it gives the feed to.


If it just gives out non-personalized feeds it can safely share the feed 
with any site.


After all, you could otherwise use server-to-server communication to get 
the public data.


Yes, but server-to-server communication has a lot of downsides. First of 
all it greatly increases the latency since the information is 
transferred over two connections rather than one. Second, it means that 
you have to have a server that has enough capacity to tunnel all the 
mashup data for all the users of your site. This can be a significant 
hurdle.


This is something that could very easily be overlooked by an 
administrator that just configures his server to add a 
Access-Control: allow* header using a site-wide configuration 
file, without going through all CGI scripts on the server and teaching 
the ones that honor cookies to ignore the cookies for cross-site 
requests.


No site should add Access-Control: allow* site-wide!


Experiences with Adobes crossdomain.xml show that they do.

I mean, I guess 
it's possible people will do this, but people could add 
Access-Control-Allow-Credentials site-wide too. And if we add 
Access-Control-Allow-Credentials-I-Really-Mean-It, they'll add even more.


Yes, this is certainly a possibility. But my hope is that this will 
happen to a smaller extent.


Basically the part of this that worries me is that it requires agreement 
between the client and the server, but those are expected to be 
controlled by different parties in many cases. If our favorite news site 
starts out only offering unpersonalized news feeds to other sites, but 
then wants to use cookies to personalize, then all existing clients of 
its cross-site data must change. That seems like a huge burden. Worse 
yet, if the site changes its mind in the other direction, existing 
clients break completely.


If you start sharing a new set of data it makes a lot of sense to me 
that people reading that data need to change. This seems no different 
than if you change from sharing an address book to sharing emails. 
Usually someone reading the data would need to both change the URI they 
are reading, as well as how they display the data.


And if the administrator of such a server thoughtlessly enabled 
cross-site access without thinking about the consequences, would they 
not be equally likely to enable cross-site cookies without thinking 
about the consequences?


Not more likely than someone adding any other header without knowing 
what it does. This is why I designed my proposal such that opting in 
to cookies is a separate step.


But you are expecting people to add Access-Control without knowing what 
it does.


It's not a matter of knowing what the spec says; it's a matter of knowing 
with 100% certainty that you are not sending any private data on any of 
the URIs you are sharing.


It seems like we are adding a lot of complexity (and therefore more 
opportunity for implementation mistakes) for a marginal reduction in 
the likelihood of server configuration errors.


I think the ability to separate sharing of private data from sharing 
of public data is a huge help for server operators. So I think this is 
much more than a marginal reduction of configuration errors.


The possibility of making this separation is there - simply ignore 
Cookie and/or Authorization headers when Access-Control-Origin is 
present. So this opt-in doesn't offer a new capability, just changes the 
defaults.


In many server technologies it's not trivial to ignore cookies and/or 
auth headers. Session handling tends to happen before the CGI 
script starts, which means that you have to weed
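Maciej's suggestion above (ignore Cookie and Authorization when Access-Control-Origin is present) can be sketched as a wrapper that runs before any session handling ever sees the request; the WSGI framing is illustrative, and the Access-Control-Origin header name is the draft's:

```python
def strip_cross_site_credentials(app):
    """Wrap a WSGI app so credentials never reach it on cross-site requests.

    Illustrative sketch: drops Cookie/Authorization whenever the request
    carries the draft's Access-Control-Origin header, before any session
    machinery can act on them.
    """
    def wrapped(environ, start_response):
        if "HTTP_ACCESS_CONTROL_ORIGIN" in environ:
            environ.pop("HTTP_COOKIE", None)
            environ.pop("HTTP_AUTHORIZATION", None)
        return app(environ, start_response)
    return wrapped
```

Note this only works if the wrapper genuinely runs before session handling, which is exactly the deployment difficulty described above.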

Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Zhenbin Xu wrote:


[Zhenbin Xu] Regardless of what different browsers do today, rich parsing 
error information is an important feature for developers. I have found it can 
pinpoint the exact problem that otherwise would have been difficult to 
identify when I sent an incorrectly constructed XML file.

And given that the goal is to define a useful spec for future
XHR implementations, we should define how rich parser errors are
surfaced instead of omitting it because nobody but IE supported it.

It is even more important that we define it in the XHR spec because it is
not part of DOM Core. Otherwise such a key piece would be lost and we
will have diverging implementations.

The goal of XHR Level 1 was to get interoperability on the feature set
that exists across the major implementations of XHR today, so I don't
think parse error information fits the bill there, but it sounds like a
great feature to look into for XHR Level 2.


[Zhenbin Xu]


Did something happen to your reply here?


[Zhenbin Xu] The way it was designed is that responseXML is always non-null 
once it is in the OPENED state. I don't think IE currently gives out rich 
error information about mimetype mismatch, but the design allows it to be 
exposed on responseXML if necessary.


Ah, good to know. I'm not particularly a big fan of this design; it feels 
more logical to me to not return a document object if no document was 
sent. But I guess it depends on how you look at it.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
And it's useful for pages that contain private information only when 
cookies are sent, but when no cookies are sent they only provide public 
information. I've given two examples of this in other threads:


1. A news site serving articles in different categories. When the user
   is logged in and has configured a home zipcode includes a category
   of local news.

   Example: news.yahoo.com

2. A discussion board that allows comments to be marked private. Only
   when a user is logged in and has access to private comments are the
   private comments included, otherwise only the public comments are
   shown.

   Example: bugzilla.mozilla.org


For these, how would the site initiating the connection to the data 
provider server know whether or not to include the load-private-data flag?


The same way that it knows which URI to load. I expect that sites will 
document what resources can be loaded at what URIs, and with which query 
parameters as part of the API documentation. Whether private data is 
served can be documented at the same place. Along with information on 
what to do if access is denied for that private information.


Surely if the server does anything with the load-private-data flag, then 
it is fundamentally as vulnerable as if we didn't do any of this.


Yes, this is about reducing the likelihood that things go wrong, not 
eliminating it, as that seems impossible.


This 
only helps with servers that have same-domain pages that accept cookies, 
but have no cross-domain pages that accept cookies, ever (since if any of 
the cross-domain pages accept cookies, then our initial assumption -- that 
the site author makes a mistake and his site reacts to cookies in 
third-party requests by doing bad things -- means that he's lost).


How so? Sites that have a combination of private and public data can, 
and hopefully will, only set the Access-Control-With-Credentials header 
for the parts that serve private data. It needs to apply different 
opt-in policies here anyway since it needs to ask the user before 
sharing any of his/her data.
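The separation described here might look like the following on the server side; the header values follow the draft-era proposal and the path inventory is a made-up example:

```python
# Hypothetical per-path opt-in: only URIs known to serve private data
# ever emit the credentials opt-in header, and only once the user has
# granted the requesting site access.
PRIVATE_PATHS = {"/feed/private"}  # illustrative inventory

def response_headers(path: str, user_granted_access: bool) -> dict:
    headers = {"Access-Control": "allow <*>"}  # draft-era opt-in syntax
    if path in PRIVATE_PATHS and user_granted_access:
        headers["Access-Control-With-Credentials"] = "true"
    return headers
```

Public URIs thus stay shareable with any site, while cookie-bearing cross-site requests are only honored where the opt-in header is explicitly set.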


/ Jonas



Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Zhenbin Xu wrote:

Sorry I accidently deleted part of reply. Inline...


-Original Message-
From: Jonas Sicking [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 19, 2008 2:17 PM
To: Zhenbin Xu
Cc: Anne van Kesteren; Sunava Dutta; IE8 Core AJAX SWAT Team; public-
[EMAIL PROTECTED]
Subject: Re: responseXML/responseText exceptions and parseError

Zhenbin Xu wrote:


[Zhenbin Xu] Regardless of what different browsers do today, rich parsing 
error information is an important feature for developers. I have found it can 
pinpoint the exact problem that otherwise would have been difficult to 
identify when I sent an incorrectly constructed XML file.

And given that the goal is to define a useful spec for future
XHR implementations, we should define how rich parser errors are
surfaced instead of omitting it because nobody but IE supported it.

It is even more important that we define it in the XHR spec because it is
not part of DOM Core. Otherwise such a key piece would be lost and we
will have diverging implementations.

The goal of XHR Level 1 was to get interoperability on the feature set
that exists across the major implementations of XHR today, so I don't
think parse error information fits the bill there, but it sounds like a
great feature to look into for XHR Level 2.

[Zhenbin Xu]

Did something happen to your reply here?



[Zhenbin Xu] Indeed it would already be very useful if XHR1 were to summarize
the behaviors of all major implementations today, and document the common 
behaviors. In that case the spec should try to accommodate all major 
browsers and leave controversial parts to XHR2. This is why we suggested 
the "null or exception" language.


So the strategy that the spec has been following is to use the feature 
set that is common between browsers, but for those features to try to 
define a good, useful spec that achieves interoperability and a solid API 
for developers to develop against.


The reason we didn't want to write a spec that accommodates all major 
browsers is that such a spec would be largely useless. When you get into the 
details, browsers behave differently enough that the spec would have to 
be so fuzzy that it would essentially just be a tutorial, not a 
specification.


Wording like "null or exception" is a pain for developers. It's also 
something that is hard to build future specifications on, especially 
when applied to the whole XHR spec.


Hope that explains why the spec looks the way it does?

/ Jonas





Re: Opting in to cookies - proposal

2008-06-19 Thread Jonas Sicking


Maciej Stachowiak wrote:


On Jun 19, 2008, at 1:48 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:


After reviewing your comments, I am much more inclined to favor 
Microsoft's proposal on this: rename the relevant headers. I think 
you argued that this doesn't scale, but I think only two headers 
have to be renamed, Cookie and Authorization. Note that other 
authentication-related headers, if any, do not need to be renamed, 
because without the Authorization header being present, no other 
authentication processing will take place. If the headers have 
different names, it's very hard to reveal private data accidentally. 
No site-wide blanket addition with headers will cause it, you have to 
go out of your way to process an extra header and treat it the same 
as Cookie. It would also allow servers to choose whether to offer 
personalized or public data and change their mind at any time, 
without having to change clients of the data. It would also work for 
inclusion of cross-site data via mechanisms that don't have a 
convenient way to add an out of band flag.
The only downside I can think of to this approach is that it may 
break load balancers that look at cookies, unless they are changed to 
also consider the new header (Cross-Site-Cookie?).


Using different header names would certainly address the concern I 
have regarding reducing the risk that private data is inadvertently 
leaked.


However I think the downsides are pretty big. The load balancer issue 
you bring up is just one example. Another is that I think caching 
proxies today avoid caching data that is requested using 
Authentication headers. Probably the same is true for the cookie 
header in some configurations.


I think going against the HTTP spec carries big unknown risks. I'm 
sure others with more HTTP experience than me can chime in here better.


I don't see how this would go against HTTP. It's perfectly valid HTTP to 
not send Cookie or Authorization headers, and also valid to send 
whatever custom headers you want if agreed upon with the server.


You are sending data that the HTTP spec specifies should be sent over an 
Authorization header over another header instead.


In any case, whether you are breaking the letter and/or spirit of the spec is 
of little matter. What matters is what the actual effects of sending this 
information over new header names are.


The cost and risk of adding an extra boolean to XMLHttpRequest seems 
much lower.


The cost of tying server-side changes to client-side changes for a 
cross-site communication technology seems like a very high cost to me. I 
don't buy the argument that it's normal to change when you change what 
data you are reading - the data being personalized (or not) is a 
different kind of change from changing the type of data you read. To 
compare to the user-level example, GMail and GCal are different URLs, 
but nytimes.com is the same URL whether I'm logged in or not.


First of all I'm unconvinced that this will happen in reality. Can you 
explain a use case where a site would want to change from serving public 
data to private data without changing anything else? And is this common 
enough that we need to cater to it?


When you are sending personalized data you pretty much have to have a 
pretty different API anyway. At the very least the mashup sites will 
have to deal with access being denied and display some UI to the user, 
or redirect the user to the content site to log in and/or authorize the 
mashup site to read the data.


I would think that in most cases you would keep the old URI serving the 
public data, and set up a new URI where you can fetch the private data.


It might be a small change implementation-wise, but semantically and 
security-wise there is a big difference between serving public and private 
data.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
This only helps with servers that have same-domain pages that accept 
cookies, but have no cross-domain pages that accept cookies, ever 
(since if any of the cross-domain pages accept cookies, then our 
initial assumption -- that the site author makes a mistake and his 
site reacts to cookies in third-party requests by doing bad things -- 
means that he's lost).
How so? Sites that have a combination of private and public data can, 
and hopefully will, only set the Access-Control-With-Credentials header 
for the parts that serve private data. It needs to apply different 
opt-in policies here anyway since it needs to ask the user before 
sharing any of his/her data.


The scenario we are trying to address is the scenario where an author has 
accidentally allowed cross-site access to a part of the site that gives 
users abilities if they provide valid credentials, to prevent other sites 
from pretending to be the user and acting in a user-hostile way as if on 
the user's behalf.


Thus we are assuming that if a cookie is sent to the server with a 
cross-site request, the server will be vulnerable. That is the fundamental 
assumption.


Now, we can work around that by making it that authors don't accept 
cookies for cross-site requests, but only accept them from same-site 
requests. That works, because our assumption only relates to cross-site 
requests that _do_ include cookies.


If the server then opts-in to receiving cookies, then the server will 
receive cookies. Our assumption is that if a cookie is sent to the server 
with a cross-site request, the server will be vulnerable. Thus the server 
is now again vulnerable.


We can't pretend that the author will make a mistake if they always 
receive cookies but then assume that the author will suddenly stop making 
mistakes when we provide them with a way to opt-in to cookies. Either the 
author is going to make mistakes, or he isn't. We have to be consistent in 
our threat assessment.


Yes, if they only do that then they will be vulnerable.

The site is as always responsible for asking the user before allowing 
third-party access to private data, and yes, if they fail to do so 
properly they will be vulnerable.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
The site is as always responsible for asking the user before allowing 
third-party access to private data, and yes, if they fail to do so 
properly they will be vulnerable.


So I guess I don't really understand what your proposal solves, then. It 
seems like a lot of complexity for only a very minimal gain in only one 
very specific scenario (the site doesn't ever return cookie-based data 
cross-site). We're still relying on the author not making mistakes, 
despite the author will make a mistake being our underlying assumption. 
If the site has to know to not include the cookie opt-in header, why not 
just have the site ignore the cookies? (It also introduces the problems 
that Maciej mentioned, which I think are valid problems.)


Well, we are talking about two very different types of mistakes, which 
I think have very different likelihoods of happening, if I understand 
you correctly.


One mistake is having URIs in the URI space where you opt in to 
Access-Control which serve private data without you realizing it.


The other mistake is intentionally publishing private data but 
forgetting to ask your users first before doing so.


Seems to me that the former is a lot more likely than the latter.

/ Jonas

Btw, I just realized that this thread says version 3, not sure why I 
made that mistake, this is obviously version 2 :)




Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
So I guess I don't really understand what your proposal solves, then. 
It seems like a lot of complexity for only a very minimal gain in only 
one very specific scenario (the site doesn't ever return cookie-based 
data cross-site). We're still relying on the author not making 
mistakes, despite "the author will make a mistake" being our 
underlying assumption. If the site has to know to not include the 
cookie opt-in header, why not just have the site ignore the cookies? 
(It also introduces the problems that Maciej mentioned, which I think 
are valid problems.)
Well, we are talking about two very different types of mistakes, which 
I think have very different likelihoods of happening. If I understand 
you correctly.


One mistake is having URIs in the URI space where you opt in to 
Access-Control which serve private data without you realizing it.


The other mistake is intentionally publishing private data but 
forgetting to ask your users first before doing so.


Seems to me that the former is a lot more likely than the latter.


Right but the mistake that we're not doing anything about and which seems 
likely to be far more common than either of those is:


Having URIs in the URI space where you opt in to Access-Control _and_ opt 
in to cookies which serve or affect private data without you realizing it.


Yes, the best way I can think of to reduce this risk is to reduce the
number of URIs where you opt in to cookies. That is what my proposal
tries to accomplish.

That is, your solution only works so long as the site doesn't ever opt in 
to cookies. Which seems uncommon.


This is not true. You can opt in to cookies on just a subset of the
URIs where you opt in to Access-Control with my proposal.

Additionally, this way you can make sure to ask the user always before 
sending the Access-Control-With-Credentials header. This way the risk of 
leaking private data without the user realizing is further reduced.


If cookies are always turned on for Access-Control there will be many 
situations where you will want to enable Access-Control without asking a 
user since you are not sending the users private data.


(I'm assuming that the case of providing data cross-domain for simple GET 
requests is most easily handled just by having that script send back the 
right magic, in which case none of this applies as the URI space is one 
URI and there are no preflights at all. For this use case we don't have 
to worry about cookies at all as the server just wouldn't look at them.)


I'm not following what you are saying here. What script is "that 
script"? And what is "the right magic"?


I am just as concerned about GET requests as any other. In fact, all the 
private data leaks I've heard about with crossdomain.xml have been 
related to GET requests.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-20 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:

That is, your solution only works so long as the site doesn't ever opt in to
cookies. Which seems uncommon.
This is not true. You can opt in to cookies on just a subset of the URIs 
where you opt in to Access-Control with my proposal.


But the _entire assumption_ here is that the author is unable to correctly 
apply these features to the right subset of his site. If the author was 
able to correctly apply these features to the appropriate subset, then we 
wouldn't need your feature in the first place.


No, that is not the assumption. I'll try to rephrase. In the example
below I'll use PHP as the server-side technology:

My concern is for a site operator who understands the spec and wants 
to share public data being offered from all or part of the URI space for 
the site.


Under the current spec the operator must check each individual PHP
script in the part of the site that is shared to make sure that none of
them use $_SESSION, $_COOKIE, $HTTP_SESSION_VARS, $_ENV['HTTP_COOKIE'], 
HttpRequest::getCookies(), any of the session_* functions, 
$_ENV['REMOTE_USER'], $_ENV['REMOTE_IDENT'], $_ENV['HTTP_AUTHORIZATION'], 
any of the kadm5_* functions, any of the radius_* functions, or anything 
else that I'm missing that does session management based on user 
credentials.


If any of these things are used then the PHP script is likely mixing 
private data into the public data and so the script needs to be modified 
to not use any of the above features when the 'Origin' header is present 
and has a value different from the current domain.
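The kind of modification described above, skipping all credential
handling when the 'Origin' header names a different domain, can be
sketched as follows. This is a minimal illustration in Python rather
than PHP; the helper name and host values are hypothetical, not taken
from any real deployment:

```python
def guard_cross_site(app_host, environ):
    """Return True when credentials may be used: either no Origin
    header is present, or it matches this server's own host.
    Hypothetical helper illustrating the audit described above."""
    origin = environ.get("HTTP_ORIGIN")
    if origin is None:
        return True  # same-site request: normal session handling is fine
    # Cross-site request: only use cookies/session state if the
    # Origin's host part is this server itself.
    return origin.rsplit("//", 1)[-1] == app_host

# A cross-site request carrying a foreign Origin header must not
# trigger session/cookie processing:
env = {"HTTP_ORIGIN": "http://attacker.example"}
assert guard_cross_site("public.example.org", env) is False
assert guard_cross_site("public.example.org", {}) is True
```

The point of the sketch is only that every credential-bearing code path
needs such a guard, which is exactly the audit burden being described.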


While this is certainly doable, I feel that there is a risk that the 
site administrator will make a mistake and miss some of the above listed 
features and cause private data to be leaked.


So again, the issue isn't in understanding the spec. The issue is 
securing your site for the security model that the spec requires.


Additionally, this way you can make sure to ask the user always before 
sending the Access-Control-With-Credentials header. This way the risk of 
leaking private data without the user realizing is further reduced.


But we both know browsers aren't going to do this, or will offer a 
"never ask me again" checkbox. 


I'm talking about the site asking the user this question. The site
should always check with the user before sharing any of the users
private data with any third party.

If we separate out opting in to cookies from opting in to Access-Control 
then sites can more easily ensure that any time they opt in to cookies, 
they only do so after having asked the user.


(I'm assuming that the case of providing data cross-domain for simple 
GET requests is most easily handled just by having that script send 
back the right magic, in which case none of this applies as the URI 
space is one URI and there are no preflights at all. For this use case 
we don't have to worry about cookies at all as the server just 
wouldn't look at them.)
I'm not following what you are saying here. What script is "that 
script"? And what is "the right magic"?


The script that provides the data, and "the right magic" is the 
Access-Control header.


I'm sorry, I'm totally at a loss about what you are saying here; I 
suspect I'm missing some context. Could you explain from the beginning?


/ Jonas



Re: IRC logging

2008-06-20 Thread Jonas Sicking


Can anyone really ever mention things that are member confidential on 
IRC? We have no control over who else is in the room and possibly logging.


/ Jonas

Charles McCathieNevile wrote:


Hi Anne,

this raises a couple of issues - the obvious one being how we deal with 
meetings which include information that is member-only, and also whether 
the logger has some facility for saying something that doesn't go into 
the record, as the W3C log bots do.


Hopefully we will clear this up in the next couple of days - the chairs 
have been discussing this and agree that we want it to happen, but want 
to sort the issues. I'll follow up in private - with luck we can resolve 
this all by Monday, but please ask Krijn not to run a logger until we 
have sorted this out.


cheers

Chaals

On Wed, 18 Jun 2008 12:27:55 +0300, Anne van Kesteren [EMAIL PROTECTED] 
wrote:



Hi,

Krijn Hoetmer volunteered for logging our IRC channel (#webapps on 
irc.w3.org:80) similarly to how he logs for the HTML WG, CSS WG, and 
WHATWG. (Also the public ARIA discussion channel I believe.) If you 
have any objections to this please say so before the weekend.


If people find it more appropriate to decide this using a survey that 
would be fine with me as well, but since I don't expect opposition 
that seems like quite a bit of overhead.


Kind regards,











Re: Opting in to cookies - proposal version 3

2008-06-20 Thread Jonas Sicking


Ian Hickson wrote:

On Fri, 20 Jun 2008, Jonas Sicking wrote:

Under the current spec the operator must check each individual PHP
script in the part of the site that is shared to make sure that none of
them use $_SESSION, $_COOKIE, $HTTP_SESSION_VARS, $_ENV['HTTP_COOKIE'],
HttpRequest::getCookies(), any of the session_* functions,
$_ENV['REMOTE_USER'], $_ENV['REMOTE_IDENT'], $_ENV['HTTP_AUTHORIZATION'] any
of the kadm5_* functions, any of the radius_* functions or anything else that
I'm missing that does session management based on user credentials.

If any of these things are used then the PHP script is likely mixing private
data into the public data and so the script needs to be modified to not use
any of the above features when the 'Origin' header is present and has a value
different from the current domain.

While this is certainly doable, I feel that there is a risk that the site
administrator will make a mistake and miss some of the above listed features
and cause private data to be leaked.

So again, the issue isn't in understanding the spec. The issue is securing
your site for the security model that the spec requires.


That's all well and good, but what if the site author wants to send back 
some data that _is_ cookie aware? Now he has to go through and do the 
check anyway. So what's the win?


I think it's safe to assume that if the site uses cookies at all, that 
it'll eventually want to provide cross-site access to user data in some 
way.


Ah, sorry, I think I missed your point here.

I don't think that is necessarily true at all. I think one sticking 
point is that I suspect sites will opt in to Access-Control on pages 
they are already serving to their users. So I would not be surprised if 
Yahoo opts in on the news.yahoo.com URI, or Craigslist opts in for 
their full URI space.


In such cases I think it's very possible that sites will opt in on URIs 
that receive and process cookies, but would leak private data if they 
did so with cookies enabled.


/ Jonas



Re: ISSUE-4 (SpecContent): Should specifications decide what counts as content for transfer? [Progress Events]

2008-06-22 Thread Jonas Sicking


Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:
It makes no sense to me for HTTP to say that the total number of bytes 
should include HTTP headers. It would be similar to including the TCP 
headers in the IP packets IMHO.


There is a big difference here, an application might not have meaningful
access to the latter, but would necessarily have meaningful access to
the former (even if only to the next hop).


I don't see how ability to get hold of one or another makes a difference 
in determining whether it's the right or the wrong data. There is lots 
of data available that would be the wrong data, that doesn't change the 
fact that it's the wrong data.



The consequence of not in-
cluding headers would be, e.g., that on HEAD requests you would seem to
never make any progress.


Yes, I agree that this is somewhat unfortunate. Though in reality I 
doubt that it will matter much since headers are usually small enough 
that you don't really need progress reports on them.


/ Jonas



Re: Opting in to cookies - proposal

2008-06-22 Thread Jonas Sicking


Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:

First off, as before, when I talk about cookies in this mail I really
mean cookies + digest auth headers + any other headers that carry the
users credentials to a site.


I don't quite see why you would mix these. Is there anywhere where I can
read up on the use cases for an extra feature to enable the transmission
of cookies if not included by default? Especially for users credentials
in cookies it is difficult to imagine real world applications that would
depend on or at least greatly benefit from such a feature.


I'm not quite following what you are asking here. My proposal is about 
giving a site the ability to enable two modes of Access-Control:


1. Allow a third-party site to read the data on this resource, and/or
   perform unsafe methods in HTTP requests to this resource. When
   these requests are sent, any cookie and/or auth headers (for the
   resource) are included in the request, just as if it had been a
   same-site XHR request.
2. Same as above, but cookies and auth headers are never included
   in the requests.

In the spec currently only mode 1 is possible. I suggest that we make 
mode 2 possible as well. I guess you can call it opting out of cookies 
as well...
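For concreteness, a server could express the two modes roughly as
follows. This is only a sketch: the Access-Control-With-Credentials
header name and the "allow <...>" syntax are assumptions drawn from
this proposal-era discussion, not from a finished specification:

```python
def access_control_headers(allow_origin, with_credentials):
    """Sketch of the two opt-in modes discussed above.
    Header names/syntax are proposal-era assumptions."""
    headers = [("Access-Control", "allow <%s>" % allow_origin)]
    if with_credentials:
        # Mode 1: the UA may include cookies/auth headers
        # with the cross-site request.
        headers.append(("Access-Control-With-Credentials", "true"))
    # Mode 2: the credentials header is simply absent, and the UA
    # sends the request without cookies or auth headers.
    return headers

assert ("Access-Control-With-Credentials", "true") in \
    access_control_headers("example.org", True)
```

The key design point is that mode 2 is the safe default: a site must
emit an extra header before any credentials flow at all.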


/ Jonas



Re: ElementTraversal progress?

2008-06-23 Thread Jonas Sicking


Charles McCathieNevile wrote:


Followup to webapps group please (reply-to set for this mail)

On Mon, 02 Jun 2008 23:56:22 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:


Charles McCathieNevile wrote:

 On Sat, 31 May 2008 01:05:44 +0200, Jonas Sicking [EMAIL PROTECTED]


I wanted to implement the ElementTraversal spec for the next release 
of Firefox (after FF3). However, last I heard there was still an 
outstanding issue of whether we wanted to have .childElementCount as an 
unsigned long or whether we wanted a .childElements NodeList.
 I guess Doug will pipe up soon, but as I understand things from him 
he thinks it makes sense to leave the spec as is. Opera, Ikivo and 
BitFlash are known to have implementations that are believed to be 
conformant to the current spec.

...
In mozilla we would actually even implement the .childElementCount 
property by keeping a hidden childNodes list internally. But that 
might be specific to the mozilla implementation.


Indeed, it seems from discussing it that it would. Checking back with 
the implementor at Opera, we would prefer to leave the spec as it is for 
now, and if necessary write another, even smaller spec that offered the 
node list functionality if you really want it.


What about the issue I raised here:

http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0214.html

Which no one replied to.

If you implement the HTML DOM you should already have code that not only 
filters out elements, but even filters out elements of a specific name. 
Seems like that code should be reusable?


/ Jonas



Re: New: Tracking Issues in XHR that we raised

2008-06-23 Thread Jonas Sicking


Zhenbin Xu wrote:

It should be revised to:


responseText:
If the state is not DONE, raise an INVALID_STATE_ERR exception and terminate 
these steps.


This doesn't seem very consistent with the other response properties 
though. Seems like .getResponseHeader and .getAllResponseHeaders work 
fine (without throwing) even in state HEADERS_RECEIVED and LOADING, so 
it seems better to let .responseText work fine there too.


Additionally, our API has for a long time been defined such that you can 
read .responseText all through the LOADING state in order to read 
streaming data. This has been advertised to authors, so I would expect 
them to depend on it at this point, and if we started throwing here 
I would expect websites to stop working.


This makes even more sense in XHR2 when we have progress events and so 
the site gets notified as data comes in. (In fact, this is already the 
case in firefox where you get onreadystatechange notifications for the 
LOADING state every time data is received. We hope to change this to 
reflect the specification and use progress events as appropriate instead 
in FF3.1)


However, throwing in the UNSENT and OPENED states is fine with me.


responseXML:
If the state is not at least OPENED, raise an INVALID_STATE_ERR exception and
terminate these steps.


I think we additionally need to throw in the OPENED state. Until all 
headers are received there is no way to know what document, if any, 
should be created so we need to either return null or throw until we're 
in state HEADERS_RECEIVED.


Though it does seem scary to start throwing in more states for this 
property as throwing more tends to break sites. So possibly we would 
have to go with returning null in the OPENED state even though that 
would be inconsistent with the other properties.



On a related note:
Can we specify exactly when .status and .statusText should throw? 
Currently the spec says to throw "if not available", which seems very 
implementation specific. If we say that it should throw unless the state 
is at least HEADERS_RECEIVED that should make things consistent.


Note that this would be unlikely to break sites due to more throwing. As 
things stand now the property is likely to throw during the start of the 
OPENED state, but at some point during that state it stops throwing and 
returns a real result. So sites can't count on that happening at any 
predictable time before we're in the HEADERS_RECEIVED state anyway.


/ Jonas




Re: ISSUE-4 (SpecContent): Should specifications decide what counts as content for transfer? [Progress Events]

2008-06-23 Thread Jonas Sicking


Charles McCathieNevile wrote:

On Sun, 22 Jun 2008 10:32:50 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:


Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:
It makes no sense to me for HTTP to say that the total number of 
bytes should include HTTP headers. It would be similar to including 
the TCP headers in the IP packets IMHO.
 There is a big difference here, an application might not have 
meaningful access to the latter, but would necessarily have meaningful 
access to the former (even if only to the next hop).


I don't see how ability to get hold of one or another makes a 
difference in determining whether it's the right or the wrong data. 
There is lots of data available that would be the wrong data, that 
doesn't change the fact that it's the wrong data.



The consequence of not in-
cluding headers would be, e.g., that on HEAD requests you would seem to
never make any progress.


Yes, I agree that this is somewhat unfortunate. Though in reality I 
doubt that it will matter much since headers are usually small enough 
that you don't really need progress reports on them.


Well, until you get some mobile network crawling along sending an accept 
header ...


Seriously, this is the sort of problem that makes me want to define this 
as application-specific. Because it makes no sense to me that a GET 
request counts the returned header as content, but it makes sense to me 
that a HEAD request does, and I am not sure what makes sense for PUT. 
Using it for mail, where you are transferring an entire mailbox as a 
single object it seems natural to count the mail headers, but shifting 
an individual message I am not so sure...


Is anyone ever going to be able to get any useful size data for the 
headers anyway though? I.e. if we allow headers to be counted as part of 
the size, is anyone ever going to be able to do that?


To be able to do that you'd have to have some sort of out-of-band 
metadata that travels parallel to the TCP/IP connection, since the 
TCP/IP connection is busy transferring HTTP, which doesn't contain this 
information.


I guess you could use just the 'bytes downloaded so far' part to 
show that progress is made, but that would mix very strangely if you 
then received a body which you did have a size for.


Anyhow, I guess I don't care that much about this as it won't affect any 
actual implementations for the web, at least for now. And I think in 
reality most people will not include the headers due to all the 
strangeness mentioned above if you did include the headers. So feel free 
to go ahead either way on this.


/ Jonas



Re: ISSUE-10 (client-server): Client and Server model [Access Control]

2008-06-23 Thread Jonas Sicking


I don't think we have seen any alternative proposals for putting the 
policy *enforcement* on the server. It also seems very hard to me to 
rely on the server enforcing the policy, while still protecting legacy 
servers, since they currently do not perform any such enforcement.


What I have seen suggestions for though is a simpler policy language 
that doesn't send a full white-list to the client, but rather just a 
yes/no decision to the client.


/ Jonas



Re: Opting in to cookies - proposal

2008-06-23 Thread Jonas Sicking


Bjoern Hoehrmann wrote:

* Jonas Sicking wrote:
I'm not quite following what you are asking here. My proposal is about 
giving a site the ability to enable two modes of Access-Control:


1. Allow a third-party site to read the data on this resource, and/or
   perform unsafe methods in HTTP requests to this resource. When
   these requests are sent, any cookie and/or auth headers (for the
   resource) are included in the request, just as if it had been a
   same-site XHR request.
2. Same as above, but cookies and auth headers are never included
   in the requests.

In the spec currently only mode 1 is possible. I suggest that we make 
mode 2 possible as well. I guess you can call it opting out of cookies 
as well...


I am proposing that there be only a single mode unless it can clearly
be demonstrated that having two modes would be a substantial net gain.
As far as I am aware, this has not been established for a with-cookie
mode if the no-cookie mode is the default, and my questions focus on
learning more about the with-cookie mode.


Not sure if there is much of a 'default' mode no matter what, since it's 
the website that chooses if it wants to receive cookies or not.


But anyway...

Allowing with cookies has obvious benefits. It allows the target site to 
know which user the data is being requested for. So if the requesting 
site is trusted by the target site, the target site can send 
user-private data (note, this trust has to be established out-of-band 
from the spec, usually by the target site asking the user) and know 
which user's data to send. Additionally, this is done without needing to 
entrust the requesting site with any user credentials. It also neatly 
integrates with the security solutions inside ASP, PHP, etc. without 
having to deploy new infrastructure to handle these things.


Allowing without cookies gives websites that want to publish public data 
a way to opt in to Access-Control significantly more safely, without 
having to worry about accidentally leaking users' private data. 
Accidentally leaking private data can happen in two instances that I can 
think of:

  * The operator opts in to AC on a subsection of the site forgetting
that somewhere deeply nested is a form with CSRF protection
which isn't supposed to be readable cross-site. Or there is a
resource that serves user-private data which isn't supposed
to be readable cross-site.
  * The operator opts in to AC on a URI that mostly serves public
data, but forgets that if the user is logged in the URI also
serves some user-private data.


I do admit that I do not yet know how likely it is that operators will 
make any of the mistakes listed above, so I can't give an exact 
cost/benefit analysis.


/ Jonas



Re: Element Nodelist - ISSUE-6

2008-06-23 Thread Jonas Sicking


Sounds good to me.

/ Jonas

Doug Schepers wrote:

Hi, Jonas, Daniel-

Jonas Sicking wrote (on 6/23/08 2:03 PM):


What about the issue I raised here:

http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0214.html

Which no one replied to.

If you implement the HTML DOM you should already have code that not 
only filters out elements, but even filters out elements of a specific 
name. Seems like that code should be reusable?


For an HTML UA, yes, that makes perfect sense.  But there is no concept 
of that in SVG, for example, so for an SVG-only UA that would still be 
an additional implementation (and memory) cost.


I intend to make a separate spec that also provides a nodelist 
for Element nodes, so we won't be losing the nodelist feature, just 
deferring it (and not for long, at that).  Those UAs which want to 
implement both Element Traversal and Element Nodelist can do so; those 
that don't yet aren't burdened with implementing Element Nodelist 
(though as devices mature, I'm sure they'll want to do both).


The other issue at stake here is the coordination between W3C and JSRs. 
 While this doesn't have a direct impact on desktop browser vendors, it 
does affect the current mobile Web sphere, where Java is widely 
deployed.  The better aligned the JSRs can be to core W3C technologies, 
the more robust the entire Open Web Stack is for content developers and 
users.  This is important enough that it is worth a small amount of 
extra standardization effort to facilitate that.


I will create an Element Nodelist specification right away, and if it is 
approved to go forward (and I don't see why it wouldn't be, since there 
is considerable support), I am confident that this would not slow down 
deployment in desktop browsers, and so authors should be able to use it 
in the same timeframe as Element Traversal.  I hope this resolves your 
issue satisfactorily.


Regards-
-Doug Schepers
W3C Team Contact, WebApps, SVG, and CDF





Re: Agenda and logistics...

2008-06-24 Thread Jonas Sicking


Maciej Stachowiak wrote:

Hi folks,

the agenda and logistics page for the meeting will be shortly 
available to working group members (Sunava, can you please ask your AC 
rep to ensure that you guys have joined by the time we have the 
meeting?).


I would like to request an additional agenda item. There is already a 
block of time for discussing Microsoft's feedback on XHR2+AC. I would 
like to request a separate block of time for discussion among those 
looking to implement or use XHR2+AC, so that we can come to rough 
consensus on the key remaining open issues.


Since Microsoft has announced that they plan to stick with XDR 
(http://blogs.msdn.com/ie/archive/2008/06/23/securing-cross-site-xmlhttprequest.aspx), 
I assume they are not interested in being part of this consensus. But I 
still think we need to have this item on the agenda, and of course 
representatives from Microsoft are welcome to observe this discussion.


I would actually hope that Microsoft wants to take part in the consensus 
building too, as I hope that future versions of IE will implement 
Access-Control, or at least parts of it.


Though that discussion would probably be better done in the context of 
the general discussion on Microsoft's feedback.


Perhaps we could devote a day or half day to this topic, rather than 
devoting 2 days to XHR1 issues. Perhaps (given Microsoft's request) we 
can discuss Microsoft's feedback on Tuesday, XHR2+AC issues on 
Wednesday, and XHR1 issues on Thursday.


This agenda sounds great. Arun and I are leaving on a 3:27 flight out 
of Seattle on Thursday, so having a full day to devote to Access-Control 
would be really great.


Getting consensus on Access-Control is of utmost importance to us and we 
have a really tight schedule for it.


/ Jonas



Re: Agenda and logistics...

2008-06-24 Thread Jonas Sicking


Arun Ranganathan wrote:


chaals et al.,


Yep. I am waiting for people to comment on whether they are happy to 
roll up about 11 on Tuesday


I am happy enough with an 11a.m. start time; I suspect, however, that 
our discussions will run over a bit, and thus, I think we ought to be 
prepared for that.


Same here

/ Jonas



Re: Worker Threads and Site Security Policy | Two Possible New Items for Standardization

2008-06-25 Thread Jonas Sicking


Maciej Stachowiak wrote:



On Jun 25, 2008, at 1:09 PM, Arun Ranganathan wrote:


Doug Schepers, Charles McCathieNevile (Chairs), Members of the WG,

On behalf of Mozilla, I'd like to introduce the possibility of two new 
work items for this group to consider.  Neither of these is presented 
as a fait accompli, although we would like to consider both of these 
for inclusion in Firefox 3.Next if that is possible.


1. Worker Threads in Script.  The idea is to offer developers the 
ability to spawn threads from within web content, as well as 
cross-thread communication mechanisms such as postMessage.  Mozilla 
presents preliminary thought on the subject [1], and notes similar 
straw persons proposed by WHATWG [2] and by Google Gears [3].  Also 
for reference see worker threads in C# [4].  The Web Apps working 
group seems like a logical home for this work.  Will other members of 
the WG engage with Mozilla on this, via additional work items covered 
by the charter of this WG?


Apple is interested in a worker API. The key issues for workers, in my 
opinion, are security, messaging, and which of the normal APIs are 
available. Right now, these things are covered in HTML5, so I think that 
may be a better place to add a Worker API.


We would certainly like to coordinate our work in this area with the 
proposed APIs cited.


I'd really rather not add more stuff to HTML5; it's too big as it is. 
Ideally, worker threads are something that we can nail down in a pretty 
short period of time, before HTML5 is out (which is targeted a few years 
into the future, IIRC).


Like you say, some features from HTML5 should be exposed in the context 
of worker threads. I don't really know how to handle that, but I can see 
two ways:


1. Make an informative note stating a list of features that we expect 
will be made available once there is a finished spec for them, but leave 
it up to the HTML5 spec to actually explicitly make this requirement.


2. Have a normative requirement that implementations that also support 
feature X from HTML5, makes that implementation available to the worker 
thread as well.


/ Jonas



Re: F2F Agenda Updated

2008-06-25 Thread Jonas Sicking


Doug Schepers wrote:


Hi, WebApps WG-

The WebApps F2F meeting page has been updated to reflect the current 
agenda:


  http://www.w3.org/2008/webapps/Group/f2f0807.html (member-only)


I do notice that some of the times are in what you Americans call 
'military time', and some are in am/pm time but lacking the actual 
am/pm notations.


/ Jonas



Re: [AC] Hardening against DNS rebinding attacks - proposal

2008-06-28 Thread Jonas Sicking


Maciej Stachowiak wrote:



On Jun 28, 2008, at 2:33 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:

On Jun 27, 2008, at 2:18 PM, Jonas Sicking wrote:
What is the threat model this defends against? Since any server using 
Access-Control that does not check HOST is vulnerable to a 
conventional XHR DNS rebinding attack. If browsers provide defense 
against DNS rebinding through some form of DNS pinning they can apply 
it to Access-Control too, but I don't understand the benefit of 
pinning only for Access-Control.


To some extent I agree.

It does provide protection for Access-Control implementations outside of
the web platform. And for vendors that have expressed concern about
deploying the spec without DNS protection (such as Microsoft) this could
be an alternative.


Vendors that wanted to do an IP-based restriction already can (unless 
the spec language somehow makes using the cached OPTIONS result 
mandatory, which it should not).


Reading the letter of the spec I'm not sure that it's currently allowed. 
However we should loosen this in the conformance criteria section. We 
should additionally allow for things like refusing to let internet sites 
connect to private IP addresses (something we hope to implement in 
future firefox releases), or other general security policies that UAs have.


I'm not sure what you mean by Access-Control implementations outside of 
the web-platform. Can you describe a specific scenario where this 
mechanism would actually be an effective defense?


In a browser that doesn't support the features that are currently prone 
to DNS rebinding attacks. This includes things like same-site XHR and 
iframes (and arguably script).


Also note that a server can (and should for reasons other than 
Access-Control) protect itself from DNS rebinding attacks by 
checking the 'Host' header.
Given this very simple total server-side defense, I am leery of 
adding a complex (and ultimately ineffective) client-side mitigation.


This unfortunately only works for servers that are accessed through a
DNS name, which is commonly not the case for, for example, personal
routers with built-in firewalls.


How are such servers accessed instead? By IP address?


Yes.

(If so, wouldn't 
the IP address be in the Host header?)


Looking at RFC 2616, it looks like the Host header can be empty when a 
connection is made using an IP address. Though I realized that in the 
DNS-rebinding scenario the Host header should contain the DNS name of 
the attacking site, which the firewall could use to detect the attack. 
Unfortunately it seems that in reality many don't.


/ Jonas



Re: Element Nodelist - ISSUE-6 (was: ElementTraversal progress?)

2008-07-06 Thread Jonas Sicking


Isn't .children more like document.all in that you can dig out elements 
with a specific id and/or specific name? I.e. isn't it more than just a 
plain NodeList of all child elements?


/ Jonas

John Resig wrote:

I just want to note that most browsers implement the .children child element 
NodeList (all except for Mozilla-based browsers, at least). I suspect that 
building upon this existing work would lead to especially-fast adoption.

--John

- Original Message -
From: Doug Schepers [EMAIL PROTECTED]
To: Jonas Sicking [EMAIL PROTECTED]
Cc: Webapps public-webapps@w3.org, Web APIs WG [EMAIL PROTECTED], Daniel 
Glazman [EMAIL PROTECTED]
Sent: Monday, June 23, 2008 7:23:47 PM GMT -05:00 US/Canada Eastern
Subject: Element Nodelist  - ISSUE-6 (was: ElementTraversal progress?)


Hi, Jonas, Daniel-

Jonas Sicking wrote (on 6/23/08 2:03 PM):

What about the issue I raised here:

http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0214.html

Which no one replied to.

If you implement the HTML DOM you should already have code that not only 
filters out elements, but even filters out elements of a specific name. 
Seems like that code should be reusable?


For an HTML UA, yes, that makes perfect sense.  But there is no concept 
of that in SVG, for example, so for an SVG-only UA that would still be an 
additional implementation (and memory) cost.


I intend to make a separate spec that also provides a nodelist 
for Element nodes, so we won't be losing the nodelist feature, just 
deferring it (and not for long, at that).  Those UAs which want to 
implement both Element Traversal and Element Nodelist can do so; those 
that don't yet aren't burdened with implementing Element Nodelist 
(though as devices mature, I'm sure they'll want to do both).


The other issue at stake here is the coordination between W3C and JSRs. 
  While this doesn't have a direct impact on desktop browser vendors, it 
does affect the current mobile Web sphere, where Java is widely 
deployed.  The better aligned the JSRs can be to core W3C technologies, 
the more robust the entire Open Web Stack is for content developers and 
users.  This is important enough that it is worth a small amount of 
extra standardization effort to facilitate that.


I will create an Element Nodelist specification right away, and if it is 
approved to go forward (and I don't see why it wouldn't be, since there 
is considerable support), I am confident that this would not slow down 
deployment in desktop browsers, and so authors should be able to use it 
in the same timeframe as Element Traversal.  I hope this resolves your 
issue satisfactorily.


Regards-
-Doug Schepers
W3C Team Contact, WebApps, SVG, and CDF







Re: [access-control] Update

2008-07-09 Thread Jonas Sicking


Maciej Stachowiak wrote:


On Jul 9, 2008, at 3:17 PM, Anne van Kesteren wrote:



On Wed, 09 Jul 2008 23:54:17 +0200, Sunava Dutta 
[EMAIL PROTECTED] wrote:

I prefer
Access-control: *
Access-control: URL


I suppose it would be slightly shorter, but it's also less clear.


I would be in favor of Access-Control or Access-Control-Allow, I think 
Access-Control-Origin and Origin are confusing in combination. It seems 
unclear from the names which is a request header and which is a response 
header.


Agreed.

I also think that using a somewhat more verbose syntax will give us a 
better forwards compat story. For example


Access-Control: allow-without-query-parameters *
or
Access-Control: allow-only-tuesdays *

I have a hard time believing that we would never find it useful to 
extend the syntax in future versions of the spec. I also, as an 
implementor, don't find it hard to strip out "allow " before the origin.


I also find it very useful that you can just look at the header in order 
to realize that it is granting some sort of access, which putting the 
word "allow" in the syntax does.


So either
Access-control: allow *
or
Access-control-Allow: *
fulfills that.

That said, I would be ok with simply
Access-Control: *
as well. If we need degradation in the future we can always invent new 
headers...


/ Jonas



Re: [access-control] Update

2008-07-09 Thread Jonas Sicking


Anne van Kesteren wrote:


On Wed, 09 Jul 2008 22:22:52 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

The name Access-Control-Origin is IMHO confusing.


It's more or less identical to how it works for Web sockets. (Called 
Websocket-Origin there.)


If only we had the editor of that spec around... ;)

Lastly, the 'URL' token http://dev.w3.org/2006/waf/access-control/#url 
should not be a full URL, and I don't think we want to depend on HTML5 
for it either. Currently we seem to be allowing the syntax


Access-Control-Origin: http://foo.com/bar/bin/baz.html

which I think is very bad as it seems to indicate that only that page 
would be allowed to POST, which of course isn't something that we can 
enforce.


This is exactly how postMessage() works and it seems nice to align with 
that.


I am very strongly against this syntax as it gives a false sense of 
security. To the point where I don't think I'd be willing to implement 
it in firefox. The fact that postMessage allows this sounds very 
unfortunate and something that I will look into fixing in that spec.


I don't want to carry this mistake forward into Access-Control.

Additionally, the way the spec was written before, we could create a 
conformant implementation now without having to worry about HTML5 
changing things under us.


Well, in the end we want all those concepts implemented in the same way 
everywhere, right? So I'm not sure how this matters.


So why not let HTML5 refer to Access-Control?

/ Jonas



[AC] Preflight-less POST

2008-07-09 Thread Jonas Sicking


Hi All,

During the F2F we talked about doing preflight-less POSTs in order to be 
compatible with microsofts security model and allow them follow the AC 
spec for their feature set.


Unfortunately when I brought this up at Mozilla there was concern about 
doing cross-site POSTing with content types other than what forms 
already allow. The concern was that it could make servers exploitable, 
which weren't today.


So I see a few ways forward:

1. Build more confidence that this would not in fact break servers.

I'm working on this method. I've contacted Adobe since I think Flash 
currently allows cross-site POSTing with arbitrary Content-Types. I've 
also contacted Microsoft to see if they have gotten any feedback on IE8 
Beta 1, where XDR allows arbitrary content types. Silverlight also 
supports this feature.


I'd also like to make a general shout-out here to see how people feel 
about this, or if they know of any other protocols that send arbitrary 
Content-Types with cross-site POSTs that we could use to gather data 
about whether this makes sites exploitable.


Pointers to any research that has been done on Flash in general, or its 
cross-site posting mechanism in particular, would be great, even if it 
doesn't mention this specific issue.



2. Don't require pre-flight for POSTs with Content-Type 'text/plain', 
but require it otherwise.


The downside of this solution is that it encourages people to use 
'text/plain' as the Content-Type for everything they send, which has 
its own downsides.


The upshot is that this would still allow compat with XDR.


3. Always pre-flight POSTs

This would abandon any hope of allowing XDR to use Access-Control as a 
security protocol.


Unless Microsoft were able to implement preflights in IE8, but it seems 
like it's really late in their release schedule for such a large change.



One thing that I really like about proposal 1 is the simplicity. We 
would say "POST can be done cross origin without any checking, so you 
need to protect yourself against that". Any other proposal is basically 
"POST can be done cross origin without any checking, but only for these 
here values of the 'Content-Type' header. Except that it looks like in 
Access-Control you can rely on those requests not coming in. Oh, and if 
you are concerned about users of Flash and Silverlight being exploitable 
you do need to worry about all values for 'Content-Type'".


/ Jonas



Re: [access-control] Update

2008-07-10 Thread Jonas Sicking


Anne van Kesteren wrote:


On Thu, 10 Jul 2008 01:13:52 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

Anne van Kesteren wrote:
 This is exactly how postMessage() works and it seems nice to align 
with that.


I am very strongly against this syntax as it gives a false sense of 
security. To the point where I don't think I'd be willing to implement 
it in firefox. The fact that postMessage allows this sounds very 
unfortunate and something that I will look into fixing in that spec.


Let me know how that works out. postMessage() is shipping already in 
various implementations...


I will keep you updated.

Until then I very strongly feel we need to change the parsing rules to 
refer to RFCs 3986 and 3490 the way the previous draft did.


Additionally, the way the spec was written before, we could create a 
conformant implementation now without having to worry about HTML5 
changing things under us.


Well, in the end we want all those concepts implemented in the same 
way everywhere, right? So I'm not sure how this matters.


So why not let HTML5 refer to Access-Control?


I don't really see how that would work.


Access-Control can define how to parse the 'origin' part of the URI and 
HTML5 can refer to that. Or they can both refer to the same RFCs.
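
A sketch of what parsing out the 'origin' part of a URI could look like, using the WHATWG URL parser available in modern browsers and Node.js (an illustration only, not what either spec normatively requires):

```javascript
// Reduce a full URL to just its origin (scheme://host[:port]) so that a
// grant never appears to be scoped to a single page.
function originOf(url) {
  return new URL(url).origin;
}
// originOf("http://foo.com/bar/bin/baz.html") gives "http://foo.com"
```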


/ Jonas



Re: [access-control] Update

2008-07-10 Thread Jonas Sicking


Anne van Kesteren wrote:
 * Access-Control is now Access-Control-Origin which takes * or a URL. 
In other words, whether or not a site grants access is simplified *a 
lot*. Implementors who told me this was the most complex part to 
implement can rejoice. This also makes this specification consistent 
with Web Sockets and postMessage(), both defined in HTML5. 
(Access-Control-Origin is not to be confused with the old 
Access-Control-Origin, which is now Origin.)


 * Access-Control-Credentials provides an opt in mechanism for 
credentials. Whether or not credentials are included in the request 
depends on the credentials flag, which is set by a hosting 
specification. Preflight requests are always without credentials.


An alternative syntax I've been thinking about for opting in to cookies is:

Access-Control: allow-with-credentials http://foobar.com

There are a couple of advantages to this syntax. First of all it keeps 
down the number of headers. Second, and more importantly, it cleanly 
disallows opting in to cookies while wildcarding. We'd simply make the 
syntax for the header


Access-Control: "Access-Control" ":" (allow-rule | allow-with-cred-rule)
allow-rule: "allow" (URL | "*")
allow-with-cred-rule: "allow-with-credentials" URL
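
A toy parser for this proposed header value, just to illustrate how the grammar cleanly rejects the wildcard-with-credentials combination (the function name and return shape are invented):

```javascript
// Parse the proposed Access-Control value into {credentials, origin},
// or null if the value doesn't match the grammar. The grammar has no
// production for "allow-with-credentials *", so that is rejected.
function parseAccessControl(value) {
  const m = value.trim().match(/^(allow|allow-with-credentials)\s+(\S+)$/);
  if (!m) return null;
  const [, rule, origin] = m;
  if (rule === "allow-with-credentials" && origin === "*") {
    return null; // wildcard plus credentials is unparseable by design
  }
  return { credentials: rule === "allow-with-credentials", origin };
}
```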

One, albeit not big, issue with the current proposal is that it allows 
someone to say:


Access-Control-Origin: *
Access-Control-Allow-Credentials: true

which is somewhat unfortunate. While this can be defined to be rejected 
by an implementation that supports the Access-Control-Allow-Credentials 
header, an implementation like XDR which doesn't support it will still 
allow the syntax.


/ Jonas



Re: [AC] Preflight-less POST

2008-07-16 Thread Jonas Sicking


Anne van Kesteren wrote:


On Thu, 10 Jul 2008 13:21:33 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:
Yes, I had gotten the impression that Flash would allow POSTs even if 
there was no /crossdomain.xml file. I.e. that it would allow the 
actual POST even if the preflight failed, it just wouldn't let you 
read the data.


If I'm wrong that definitely changes things and makes option 1 much 
less viable.


It seems Björn has some other data than I have. I used the following 
simple page together with request sniffing


  http://blog.monstuff.com/Flash4AJAX/static/Xdomain.html

to figure out if everything had a preflight /crossdomain.xml GET 
request. Using Flash 9 on Ubuntu this appeared to be the case.



Just allowing cross-site POST when Content-Type is 
application/x-www-form-urlencoded or text/plain seems bad as it a) 
encourages bad design to avoid a preflight and b) makes whitelisting 
even more fine-grained. Initially the distinction was just on 
methods, then it became headers; going further down to header values 
seems like a bad idea to me. I'd much rather go back to just GET 
versus everything else (i.e., methods).


I agree it's bad, the question is if it's worse than option 3, which 
is to not have IE compatibility.


True. Another point to consider here is whether we want compatibility 
with HTML forms (Web Forms 2), as using Access Control would enable more 
functionality for ordinary forms as well, such as exposing cross-site 
return data and allowing the CHICKEN method.


Indeed. Though option 1 would also allow us to do that.

/ Jonas



Re: [D3E] Possible Changes to Mutation Events

2008-07-17 Thread Jonas Sicking


Kartikaya Gupta wrote:

On Thu, 17 Jul 2008 11:48:52 -0400, Boris Zbarsky [EMAIL PROTECTED] wrote:

There are countless other
implementations of MutationEvents out in the world
(http://google.com/codesearch?hl=en&lr=&q=DOMNodeRemoved+-mozilla+-webcore&sbtn=Search).
They exist in more languages and are used in more contexts than I
care to enumerate

That's fine.  How many of those contexts have to assume that all DOM
access is malicious?


More than zero, I think. There's at least one gtk implementation that (at a 
quick glance) would have to deal with potentially malicious users.


And how well is gtk dealing with this? Has anyone done any extensive 
testing, such as fuzzing, to try to do evil things inside these mutation 
listeners?


/ Jonas



Re: [D3E] Possible Changes to Mutation Events

2008-07-17 Thread Jonas Sicking


Kartikaya Gupta wrote:

On Wed, 16 Jul 2008 16:18:39 -0500, Jonas Sicking [EMAIL PROTECTED] wrote:

Laurens Holst wrote:
I see, so the motivation for the change request to DOMNodeRemoved is 
that the second change request (throwing events at the end, after all 
operations) would be impossible if events are not always thrown at the 
end. And the motivation for throwing events at the end seems to be a 
specific kind of optimisation called ‘queuing of events’. I would 
appreciate it if someone could describe this optimisation.

The problem we are struggling with is that the current design is very
complex to implement. Any code of ours that somewhere inside requires
the DOM to be mutated means that we have to recheck all invariants
after each mutation. This is because during the mutation a mutation
event could have fired which has completely changed the world under us.
These changes can be as severe as totally changing the structure of the
whole DOM, navigating away to another webpage altogether and/or closing
the current window.



I understand your concerns, and while your proposed solution would
solve your problem, it pushes this exact same burden onto web authors.
Say we go ahead change the spec so that all the events are queued up
and fired at the end of a compound operation. Now listeners that
receive these events cannot be sure the DOM hasn't changed out from
under *them* as part of a compound operation.


In the case where there are multiple parties listening to mutation 
events for a DOM, and occasionally mutating the DOM during those 
listeners, mutation events are already useless.


There is no way you can know that the event, by the time you get it, 
represents reality at all. The node you just got a remove event for 
might already be inserted in exactly the same place again.


If this isn't the case, i.e. where one person writes all listeners, or 
listeners don't mutate the DOM, then I don't see that we are pushing the 
problem onto authors. Yes, the DOM will look different by the time the 
handler fires, but I don't see that it should be significantly harder to 
deal with.



Consider the following
example:

<html><body>
 <style>.lastLink { color: red }</style>
 <a href="http://example.org">i want to be the last link</a>
 <div id="emptyMe">
  <a href="http://example.org">example two</a>
  <a class="lastLink" href="http://example.org">example three</a>
 </div>
 <script type="text/javascript">
var numLinks = document.links.length;
document.addEventListener( "DOMNodeRemovedFromDocument", function(e) {
if (e.target.nodeName == 'A') { // or e.relatedNode.nodeName as the case may be
if (--numLinks > 0) {
document.links[ numLinks - 1 ].className = 'lastLink';
}
}
}, true );
 </script>
</body></html>


The above would be trivial to rewrite to use

document.links[document.links.length - 1].className = 'lastLink';


If you did something like document.getElementById('emptyMe').innerHTML
= '' and considered it a compound operation, the code above, which
works with current implementations, will die because numLinks will be
out of sync with document.links.length, and the array indexing will
fail. To avoid this scenario, the code has to be rewritten to re-query
document.links.length instead of assuming numLinks will always be
valid. This is exactly the same problem you're currently having - the
DOM is changing under the code unexpectedly, forcing it to recheck
assumptions.


Note that there is nothing in the spec that says that this isn't already 
the case. For example, an implementation would be totally allowed to in 
the case of


document.getElementById('emptyMe').innerHTML = '';

fire all DOMNodeRemovedFromDocument events before doing any mutations to 
the DOM. It could then do all removals while firing no events. This 
seems like it would break your code.


The fact is, it would be extremely hard to define how implementations 
should behave in all the various specs that cause mutations to occur, if 
you also have to define exactly how to behave if mutation listeners 
mutate the DOM.


If we instead allowed those specs to state what constitutes a compound 
operation, the specced behavior can happen inside a compound operation, 
and mutation listeners are dealt with afterwards.


/ Jonas




Re: [D3E] Possible Changes to Mutation Events

2008-07-17 Thread Jonas Sicking


Doug Schepers wrote:

Jonas proposes two substantive changes to this:

* DOMNodeRemoved and DOMNodeRemovedFromDocument would be fired after the 
mutation rather than before
* DOM operations that perform multiple sub-operations (such as moving an 
element) would be dispatched (in order of operation) after all the 
sub-operations are complete.


So based on the feedback so far in this thread, here is a revised 
proposal from me, with the added feature that it's backwards compatible 
with DOM Events Level 2:

* Add a |readonly attribute long relatedIndex;| property to
  the MutationEvent interface.
* Add a DOMChildRemoved event which is fired on a node when one
  of its children is removed. The relatedNode property contains the
  removed child, and relatedIndex contains the index the child had
  immediately before the removal. The event is fired after the removal
  takes place.
* Add a DOMDescendantRemovedFromDocument event which is fired on a node
  when the node is in a document, but any of the node's descendants is
  removed from the document. The event is fired after the removal takes
  place.
  The relatedNode property contains the removed descendant. The
  relatedIndex property contains the index the child had
  immediately before the removal. (Should relatedIndex be -1 when
  the node wasn't removed from its parent, but rather an ancestor was?)
* Specify *when* the events fire (see details below).
* Deprecate the DOMNodeRemoved and DOMNodeRemovedFromDocument events.
  If this means making them optional or just discouraged I don't really
  care. I'd even be ok with simply leaving them in as is. Mozilla will
  simply remove our implementation of the DOMNodeRemoved event. We've
  never supported the DOMNodeRemovedFromDocument event.
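
A toy model of the proposed DOMChildRemoved semantics (the tree objects and dispatch here are invented for illustration; only the target/relatedNode/relatedIndex rules come from the proposal above):

```javascript
// Toy dispatch of the proposed DOMChildRemoved event: fired on the parent,
// *after* the removal, carrying the removed child and its old index.
function removeChild(parent, child, listeners) {
  const index = parent.children.indexOf(child);
  parent.children.splice(index, 1); // mutate first...
  for (const fn of listeners) {     // ...then notify
    fn({ type: "DOMChildRemoved", target: parent,
         relatedNode: child, relatedIndex: index });
  }
}

const parent = { children: [] };
const child = {};
parent.children.push(child);

const seen = [];
removeChild(parent, child, [e => seen.push(e)]);
// By the time the listener runs, parent.children is already empty,
// but relatedIndex still records where the child used to be.
```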


As for when the events fire (note that this is just clarifications of 
the spec, not changes to it):
For events that fire after the mutation takes place I propose that we 
add a concept of a compound operation and state that while compound 
operations are in progress no mutation events are fired. Instead the 
events are queued up. After the outermost compound operation finishes, 
but before it returns control to the caller, the queue of mutation 
events is processed.


A compound operation may itself contain several compound operations. For 
example parent.replaceChild(newChild, oldChild) can consist of a removal 
(removing newChild from its old parent) a second removal (removing 
oldChild from parent) and an insertion (inserting newChild into its new 
parent). Processing of the queue wouldn't start until after the 
outermost compound operation finishes.


One important detail here is that processing of the queue starts *after* 
the compound operation finishes. This means that if a mutation listener 
causes another mutation to happen, that is considered a new compound 
operation, and the mutation events queued up during it fire before that 
compound operation returns.


There are two ways to implement this:
Either you move the currently queued events off of the queue before 
starting to process them.


Or, when starting an outermost compound operation, remember what the 
current length of the queue is, and when finishing the operation, only 
process queued events added after that index.


The whole point of this inner queuing is to allow mutations inside 
mutation listeners to behave like mutations outside them. So if code 
inside a mutation listener calls .replaceChild, it can count on the 
mutation listeners for that mutation having fired by the time 
replaceChild returns.
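
A toy model of this queueing behaviour (all names are invented; a real implementation would track listeners and event objects rather than strings):

```javascript
const queue = [];   // events queued during the current compound operation
const fired = [];   // events actually delivered
let depth = 0;      // nesting depth of compound operations

function compound(body) {
  depth++;
  const mark = queue.length; // remember where this operation's events start
  body();
  depth--;
  if (depth === 0) {
    // Outermost operation finished: deliver everything queued since mark.
    for (const e of queue.splice(mark)) fired.push(e);
  }
}

function notify(event) { queue.push(event); }

// A replaceChild-style operation built from nested sub-operations:
compound(() => {
  compound(() => notify("removed"));   // remove oldChild
  compound(() => notify("inserted"));  // insert newChild
});
// Nothing fired until the outermost operation completed;
// fired is now ["removed", "inserted"].
```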



What exactly constitutes a compound operation is left up to the specs 
that describe the operations. For example setting .innerHTML should 
likely constitute a compound operation. For DOM Core, every function 
that mutates the DOM should be considered a compound operation.



This all does sound a bit complicated. However the same would be true 
for any sufficiently detailed specification of how mutation events 
behave. As stated, the current wording of the spec allows for wildly 
different and useless firing logic.


The only thing that we could somewhat simplify would be to not use 
separate queues for mutations inside mutation listeners. However the 
simplification would be pretty marginal, and I think it would be 
confusing that mutations inside mutation listeners wouldn't cause events 
to fire until after all other pending events had fired.


/ Jonas



Re: [D3E] Possible Changes to Mutation Events

2008-07-18 Thread Jonas Sicking


Doug Schepers wrote:


Hi, Jonas-

Thanks for this modified proposal.  I want to hear back from those 
who've already commented as to their disposition, and to solicit 
comments from other known implementors (e.g., gtk, BitFlash, Opera, 
JSR), but I think your proposal is reasonable, and well detailed.


A few comments inline...

Jonas Sicking wrote (on 7/17/08 8:51 PM):


* Add a |readonly attribute long relatedIndex;| property to
  the MutationEvent interface.
* Add a DOMChildRemoved event which is fired on a node when one
  of its children is removed. The relatedNode property contains the
  removed child, and relatedIndex contains the index the child had
  immediately before the removal. The event is fired after the removal
  takes place.
* Add a DOMDescendantRemovedFromDocument event which is fired on a node
  when the node is in a document, but any of the node's descendants is
  removed from the document. The event is fired after the removal takes
  place.
  The relatedNode property contains the removed descendant. The
  relatedIndex property contains the index the child had
  immediately before the removal. (Should relatedIndex be -1 when
  the node wasn't removed from its parent, but rather an ancestor was?)


What is the rationale for having both 'DOMChildRemoved' and 
'DOMDescendantRemovedFromDocument'?  Wouldn't a single one, 
'DOMDescendantRemovedFromDocument' (or, preferably, 
'DOMDescendantRemoved'), work about as well?  You already give a way to 
detect if it was a child or a descendant.


Well, I'd start by asking what the rationale is for mutation events at
all :) They seem to only solve the very simple cases where all parties
that mutate the page cooperate nicely with each other and with the
parties that listen to mutation events. But I would have expected that
in those cases the parties that mutate the DOM could instead have
notified the listening parties directly.

But I digress :)

Specifically the two new events correspond to the current DOMNodeRemoved
and DOMNodeRemovedFromDocument, so the rationale is the same as for
those old events. However I can only guess at what that rationale is.

DOMNodeRemoved is useful when you want to know when a node is removed
from its parent.

DOMNodeRemovedFromDocument is useful when you want to know when a node
is no longer part of a document.

The latter seems in general more useful, for example for keeping a TOC
of headings in a page, or a list of clickable links. However the latter
seems very complicated to implement without severely regressing
performance any time there is a listener for the event. Whenever a node
is removed you have to fire an event for each and every node in the
removed subtree, presumably including attribute nodes.

Also DOMNodeRemoved can mostly be used to emulate
DOMNodeRemovedFromDocument by checking if the node you are interested in
(i.e. the link or the heading) has the removed node in its parent chain.
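
That emulation amounts to a parent-chain walk, sketched here with invented toy node objects:

```javascript
// Walk the parent chain of the node we care about; if the node removed by
// DOMNodeRemoved is that node itself or one of its ancestors, the node
// just left the document. Toy nodes only need a parentNode link.
function isInRemovedSubtree(nodeOfInterest, removedNode) {
  for (let n = nodeOfInterest; n; n = n.parentNode) {
    if (n === removedNode) return true;
  }
  return false;
}

// Toy tree: doc > div > link
const doc = { parentNode: null };
const div = { parentNode: doc };
const link = { parentNode: div };
```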

In Mozilla we have never implemented DOMNodeRemovedFromDocument or
DOMNodeInsertedIntoDocument due to its high cost. Likewise I doubt that
we'll implement DOMDescendantRemovedFromDocument. I'm not sure what
other vendors have done about the old event or feel about the new.

I understand that having the distinction means that you could filter on 
the level of depth to fire events on, but I'm asking if this is useful 
and necessary.


I take it you are asking under the general assumption that mutation
events are useful at all? :)

I generally think that DOMDescendantRemovedFromDocument is likely easier
to use, but seems prohibitively expensive to implement correctly,
whereas DOMChildRemoved seems to cover most use cases, though with a bit 
more effort on the side of the listener.


Also note that all these events have a capture phase, so you can always 
attach your listener on an ancestor of the nodes you are interested in, 
such as the document node.



* Specify *when* the events fire (see details below).


We should do this regardless, since it is tightening up the spec, not 
changing it (though admittedly, it may force some implementations to 
change anyway... but that means more interop).




* Deprecate the DOMNodeRemoved and DOMNodeRemovedFromDocument events.
  If this means making them optional or just discouraged I don't really
  care. I'd even be ok with simply leaving them in as is. Mozilla will
  simply remove our implementation of the DOMNodeRemoved event. We've
  never supported the DOMNodeRemovedFromDocument event.


If Mozilla is determined to remove them regardless of what the spec 
says, then I would rather leave them in as specced, but deprecate them. 


Yes, given that someone pointed out that we can't really fire 
DOMNodeRemoved after the removal takes place, as that means that the 
parent chain is broken and so the event wouldn't reach any ancestors, I 
see no other alternative than dropping the event entirely.


Do note that there is still nothing that defines when these events 
should fire. I.e. if you do

Re: [D3E] Possible Changes to Mutation Events

2008-07-18 Thread Jonas Sicking


Kartikaya Gupta wrote:

On Thu, 17 Jul 2008 17:51:42 -0700, Jonas Sicking [EMAIL PROTECTED] wrote:

* Add a DOMDescendantRemovedFromDocument event which is fired on a node
   when the node is in a document, but any of the node's descendants is
   removed from the document. The event is fired after the removal takes
   place.
   The relatedNode property contains the removed descendant. The
   relatedIndex property contains the index the child had
   immediately before the removal. (Should relatedIndex be -1 when
   the node wasn't removed from its parent, but rather an ancestor was?)



From this description it seems a bit ambiguous as to exactly which node is the 
target of the event, since there could be multiple nodes that satisfy the 
conditions (a) being attached to the document and (b) ancestor of the removed 
node. I assume you mean the node that satisfies the conditions that is deepest 
in the tree? And yes, I think relatedIndex should be -1 if an ancestor was the 
one that was removed, so that it's easier to distinguish between the roots of 
the removed subtrees and the non-roots.


Good point. Yes, I meant the one deepest in the tree. Another way to put 
it is that the target is always the old parent of the root of the 
subtree that was removed.




/ Jonas



Re: XDomainRequest Integration with AC

2008-07-19 Thread Jonas Sicking


Maciej Stachowiak wrote:


On Jul 18, 2008, at 4:20 PM, Sunava Dutta wrote:

I’m under time pressure to lock down the header names for Beta 2 to 
integrate XDR with AC. It seems nobody has objected to Jonas’s 
proposal. http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0175.html

Please let me know if this discussion is closed so we can make the change.


I think Anne's email represents the most recent agreement and I don't 
think anyone has 
objected: http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0142.html


The change would be: 

Instead of checking for "XDomainRequestAllowed: 1", check for 
"Access-Control-Allow-Origin: *" or "Access-Control-Allow-Origin: url" 
where url matches what was sent in the Origin header.
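
The check described here amounts to a simple comparison, sketched below (illustrative only; function and parameter names are invented):

```javascript
// A response grants access when Access-Control-Allow-Origin is "*" or is
// exactly the value that was sent in the request's Origin header.
function accessGranted(allowOriginHeader, requestOrigin) {
  if (allowOriginHeader == null) return false;
  const value = allowOriginHeader.trim();
  return value === "*" || value === requestOrigin;
}
```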


So I have one final request for a change to the above syntax.

How would people feel about the syntax

Access-Control-Allow-Origin: url

This would give us at least something for a forwards compatibility story 
if we wanted to add to the syntax in future versions of the spec. I 
really think we are being overly optimistic if we think that the current 
syntax is the be-all end-all syntax that we'll ever want.


For example during the meeting we talked about how banks might want to 
enforce that the requesting site uses a certain level of encryption, or 
even a certain certificate. A syntax for that might be:


Access-Control-Allow-Origin: origin https://foo.com encryption sha1

Or that the site in question uses some opt-in XSS mitigation technology 
(such as the one drafted by Brandon Sterns in a previous thread in this 
WG). This could be done as


Access-Control-Allow-Origin: origin https://foo.com require-xss-protection

So the formal syntax would be

Access-Control-Allow-Origin: "origin" ("*" | url)

/ Jonas



Re: XDomainRequest Integration with AC

2008-07-19 Thread Jonas Sicking


Jonas Sicking wrote:


Maciej Stachowiak wrote:


On Jul 18, 2008, at 4:20 PM, Sunava Dutta wrote:

I’m under time pressure to lock down the header names for Beta 2 to 
integrate XDR with AC. It seems nobody has objected to Jonas’s 
proposal. 
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0175.html
Please let me know if this discussion is closed so we can make the 
change.


I think Anne's email represents the most recent agreement and I don't 
think anyone has objected: 
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0142.html


The change would be:
Instead of checking for "XDomainRequestAllowed: 1", check for 
"Access-Control-Allow-Origin: *" or "Access-Control-Allow-Origin: url" 
where url matches what was sent in the Origin header.


So I have one final request for a change to the above syntax.

How would people feel about the syntax

Access-Control-Allow-Origin: <url>

This would give us at least something for a forwards compatibility story 
if we wanted to add to the syntax in future versions of the spec. I 
really think we are being overly optimistic if we think that the current 
syntax is the be-all end-all syntax that we'll ever want.


For example, during the meeting we talked about how banks might want to 
enforce that the requesting site uses a certain level of encryption, or 
even a certain certificate. A syntax for that might be:


Access-Control-Allow-Origin: origin <https://foo.com> encryption sha1

Or that the site in question uses some opt-in XSS mitigation technology 
(such as the one drafted by Brandon Sterns in a previous thread in this 
WG). This could be done as


Access-Control-Allow-Origin: origin <https://foo.com> require-xss-protection


So the formal syntax would be

Access-Control-Allow-Origin: ("*" | "<" url ">")


We might also want to consider simply calling the header

Access-Control-Allow

Since the above future expansions would make the header not just contain 
the origin, but also further restrictions on the origin.
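For illustration only (this sketch is not from the thread): a client-side parser for the grammar Jonas proposes, assuming the angle-bracket-delimited form under discussion; the helper name and the flag tokens are hypothetical.

```javascript
// Hypothetical parser for the proposed header grammar, where angle
// brackets delimit the URL so that future tokens such as
// "encryption sha1" or "require-xss-protection" can follow it:
//   Access-Control-Allow-Origin: ("*" | "<" url ">") token*
function parseAllowOrigin(value) {
  const v = value.trim();
  if (v === "*") return { wildcard: true, flags: [] };
  const m = v.match(/^<([^>\s]+)>(?:\s+(.+))?$/);
  if (!m) return null; // not valid under the proposed grammar
  return { wildcard: false, url: m[1], flags: m[2] ? m[2].split(/\s+/) : [] };
}
```

Under this shape a consumer that only understands the first version can still extract the URL and ignore trailing tokens it does not recognize, which is the forward-compatibility story being argued for.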


/ Jonas



Can HTTP headers encode enough URLs? (was: XDomainRequest Integration with AC)

2008-07-21 Thread Jonas Sicking


Julian Reschke wrote:


Ian Hickson wrote:

On Mon, 21 Jul 2008, Julian Reschke wrote:

Ian Hickson wrote:

...
...which basically just says it's a valid URL if it's a valid URI or 
IRI
(with some caveats in the case of IRIs to prevent legacy encoding 
behaviour

from handling valid URLs in a way that contradicts the IRI spec). This
doesn't allow spaces.
...
Correct. But it does allow non-ASCII characters. How do you put them 
into an HTTP header value?


Presumably HTTP defines how to handle non-ASCII characters in HTTP as 
part of its error handling rules, no?


Non-ASCII characters in header values are by definition ISO-8859-1. Yes, 
that sucks. It's not sufficient to encode all IRIs, thus you need to map 
IRIs to something you can use.


And no, that has nothing to do with error handling.


It sounds like what you are asking is whether HTTP headers can encode all 
the values for 'url' that we need. This is different from my original 
concern, but is certainly a valid question.


Given that we don't need to encode all possible paths, since all 
paths are disallowed, is there still a concern? People would have to use 
Punycode to encode non-ASCII characters if they are part of the domain 
name, which is unfortunate, but hopefully tooling will help here.
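As an aside (example added for illustration, not part of the thread): modern WHATWG URL parsers apply the Punycode (IDNA to-ASCII) conversion automatically, e.g. in Node.js:

```javascript
// Non-ASCII characters in a domain name must be Punycode-encoded
// before the origin can appear in an ASCII-only HTTP header value.
// Node's WHATWG URL implementation performs the IDNA to-ASCII step:
const url = new URL("http://bücher.example/");
console.log(url.hostname); // → "xn--bcher-kva.example"
console.log(url.origin);   // → "http://xn--bcher-kva.example"
```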


/ Jonas



Re: [D3E] Possible Changes to Mutation Events

2008-07-21 Thread Jonas Sicking


Stewart Brodie wrote:

Maciej Stachowiak [EMAIL PROTECTED] wrote:


On Jul 16, 2008, at 5:00 PM, Stewart Brodie wrote:


Maciej Stachowiak [EMAIL PROTECTED] wrote:

On Jul 16, 2008, at 2:03 PM, Stewart Brodie wrote:


I agree with all that, but it's not the whole story, because making
this change has potentially severe consequences for memory usage if
you start moving large subtrees around within a document.  Just how
long is the event queue allowed to get?

It will only grow without bound if every mutation event handler in
turn modifies the DOM itself, or if a compound DOM operation is
unbounded.

Unbounded queueing is a real concern.  Having said that, the current
situation is that we have unbounded recursion, which is equally
unacceptable, although the recursion depth is at least defined by the
number of recursive calls made by the event listeners, whereas the queue
length is dependent on the number of nodes in the affected subtree,
which is likely to be substantially greater.

Can you describe a plausible scenario where growth of the mutation event
queue would be a significant issue? You've said repeatedly you are worried
about it but have not given an example of when this might happen. Are you
talking about the kind of programmer error that results in an infinite
loop?


[Many apologies for the delayed reply]

The worst case I can think of is Node.replaceChild() where the removed child
is a large subtree and the new child is a large subtree that's attached
somewhere else in the document.  This sort of content is easily feasible in
the sort of UI applications that we have to support (for example, a TV
electronic programme guide), and yet the memory we have available is
severely constrained.

If each subtree contains just 100 nodes, that's 303 events to be queued.
Even if the new subtree is not attached, that's still 203 events.  What
happens if we have no memory left in which to queue these events?


So how much memory are these 303 events compared to the 200 nodes that 
are involved in the mutation? I would expect it to be pretty small.


My point is that it's easy to create a scenario where most algorithms 
consume a lot of data in absolute terms. But if it requires that large 
amounts of data are already being consumed, then this doesn't seem like a 
big practical problem.


I would actually argue that the problem here is that the 
DOMNodeRemovedFromDocument event is pretty insanely defined. The rest is 
just side effects from that. This is why we haven't implemented it in 
Mozilla as of yet.



Concentrating on the removal side, currently, I have to queue 0 events, and
I can optimise away DOMNodeRemovedFromDocument completely, usually. However,
I have to ensure that the detached node and its parent are still valid after
the dispatch of DOMNodeRemoved.  With these changes, I could only optimise
away the events if *neither* DOMNodeRemoved or DOMNodeRemovedFromDocument
have any listeners.  Sure, the DNRFD events would get optimised out at the
point of dispatch, but I've had to store a whole load of information in the
meantime.


Hmm.. this is indeed a problem. I definitely want to be able to optimize 
away most of the mutation code if there isn't someone actually listening 
to these events.


One way to fix it would be to state that if no listeners are registered 
for an event by the time it is scheduled, an implementation is allowed 
to not fire the event at all, even if a listener got registered before 
the event would have actually fired.
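A minimal sketch (hypothetical, in the spirit of this proposal rather than spec text) of a queue that drops events with no listener registered at schedule time:

```javascript
// Sketch of the proposed rule: an event queued while no listener is
// registered for its type may be dropped, even if a listener is added
// before the queue is flushed. Skipping at schedule time means no
// per-node state is retained for event types nobody is observing.
class MutationEventQueue {
  constructor() { this.listeners = new Map(); this.queue = []; }
  addListener(type, fn) {
    if (!this.listeners.has(type)) this.listeners.set(type, []);
    this.listeners.get(type).push(fn);
  }
  schedule(type, detail) {
    if (!this.listeners.has(type)) return; // no listener yet: drop
    this.queue.push({ type, detail });
  }
  flush() {
    for (const ev of this.queue.splice(0)) {
      for (const fn of this.listeners.get(ev.type) || []) fn(ev);
    }
  }
}
```

The odd edge case mentioned below is visible here: a listener registered between `schedule()` and `flush()` never sees the already-dropped event.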


It seems like it's a very odd edge case where anyone would notice a 
difference at all. It requires that one mutation event listener 
registers another listener for another event.


It'd be ugly, but I can't think of anything better really.


The alternative would be to queue a promise to generate the event for every
node in the subtree, but that promise would end up being fulfilled based on
the state of the tree after DOMNodeRemoved listeners had been executed.
However, it would reduce the likelihood of memory problems dramatically.


Yes, no one is saying that you need to create the actual event 
objects until they're needed.


/ Jonas



Re: ISSUE-42 (simpler custom events): Should we simplify custom events? [DOM3 Events]

2008-07-23 Thread Jonas Sicking


Cameron McCormack wrote:

XBL2 currently says:

  The action taken (retarget vs. stop) is specific to the event type. In
  general, UI events must be retargeted and mutation events must be
  stopped. Exceptions to the rule are noted below. The goal of this
  retargeting or stopping is to stop outer shadow scopes from being
  exposed to nodes from inner shadow scopes, and to stop outer shadow
  scopes from getting apparently meaningless events that only make sense
  in the context of inner shadow scopes.
   — http://www.w3.org/TR/xbl/#event2

A definitive list of which event types should be retargetted and which
should be stopped doesn’t seem to be given, though.  It’s unclear
whether a CustomEvent (or a plain Event) should be retargetted or
stopped.


I think this was a feature request to make it possible for the code 
firing the custom event to specify whether the event should be 
retargeted or stopped. The XBL spec can't specify all possible custom 
events, of course, so it does make sense; however, I would ask what the 
use case is.


/ Jonas



Re: XDomainRequest Integration with AC

2008-07-30 Thread Jonas Sicking


Please note that

Access-Control-Allow-Origin: url

is also allowed syntax. Where the url must contain only scheme, domain 
and host.


So the following syntax is allowed:
Access-Control-Allow-Origin: http://example.com

It is somewhat unclear if the following syntaxes are allowed:

Access-Control-Allow-Origin: http://example.com/
Access-Control-Allow-Origin: http://example.com/?
Access-Control-Allow-Origin: http://example.com/#
Access-Control-Allow-Origin: http://example.com/;


I think the first one should be ok, but not the other three.

/ Jonas



Sunava Dutta wrote:

"Access-Control-Allow-Origin: *" seems to be the consensus for the public 
scenario; please confirm.
On a less urgent note, did we get any further traction on the discussion on 
angle brackets for the URL-specified scenario? The last mail here seems to be 
on 7/21.



-Original Message-
From: Maciej Stachowiak [mailto:[EMAIL PROTECTED]
Sent: Saturday, July 19, 2008 9:32 PM
To: Jonas Sicking
Cc: Sunava Dutta; [EMAIL PROTECTED]; Sharath Udupa; Zhenbin Xu; Gideon
Cohn; public-webapps@w3.org; IE8 Core AJAX SWAT Team
Subject: Re: XDomainRequest Integration with AC


On Jul 18, 2008, at 11:15 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:

On Jul 18, 2008, at 4:20 PM, Sunava Dutta wrote:

I'm under time pressure to lock down the header names for Beta 2 to
integrate XDR with AC. It seems nobody has objected to Jonas's
proposal. http://lists.w3.org/Archives/Public/public-

webapps/2008JulSep/0175.html

Please let me know if this discussion is closed so we can make the
change.

I think Anne's email represents the most recent agreement and I
don't think anyone has objected:

http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0142.html

The change would be: Instead of checking for
"XDomainRequestAllowed: 1" check for "Access-Control-Allow-Origin:
*" or "Access-Control-Allow-Origin: url" where url matches what was
sent in the Origin header.

So I have one final request for a change to the above syntax.

How would people feel about the syntax

Access-Control-Allow-Origin: <url>

I don't think the angle brackets are necessary for forward compat,
since we can just disallow spaces from the URL.

  - Maciej



This would give us at least something for a forwards compatibility
story if we wanted to add to the syntax in future versions of the
spec. I really think we are being overly optimistic if we think that
the current syntax is the be-all end-all syntax that we'll ever want.

For example, during the meeting we talked about how banks might want
to enforce that the requesting site uses a certain level of
encryption, or even a certain certificate. A syntax for that might

be:

Access-Control-Allow-Origin: origin <https://foo.com> encryption sha1

Or that the site in question uses some opt-in XSS mitigation
technology (such as the one drafted by Brandon Sterns in a previous
thread in this WG). This could be done as

Access-Control-Allow-Origin: origin <https://foo.com> require-xss-protection

So the formal syntax would be

Access-Control-Allow-Origin: ("*" | "<" url ">")

/ Jonas

/ Jonas








Re: XDomainRequest Integration with AC

2008-07-30 Thread Jonas Sicking


And note that this syntax should be supported even in the public data 
scenario.


/ Jonas

Jonas Sicking wrote:


Please note that

Access-Control-Allow-Origin: url

is also allowed syntax. Where the url must contain only scheme, domain 
and host.


So the following syntax is allowed:
Access-Control-Allow-Origin: http://example.com

It is somewhat unclear if the following syntaxes are allowed:

Access-Control-Allow-Origin: http://example.com/
Access-Control-Allow-Origin: http://example.com/?
Access-Control-Allow-Origin: http://example.com/#
Access-Control-Allow-Origin: http://example.com/;


I think the first one should be ok, but not the other three.

/ Jonas



Sunava Dutta wrote:
"Access-Control-Allow-Origin: *" seems to be the consensus for the 
public scenario; please confirm.
On a less urgent note, did we get any further traction on the 
discussion on angle brackets for the URL-specified scenario? The last 
mail here seems to be on 7/21.




-Original Message-
From: Maciej Stachowiak [mailto:[EMAIL PROTECTED]
Sent: Saturday, July 19, 2008 9:32 PM
To: Jonas Sicking
Cc: Sunava Dutta; [EMAIL PROTECTED]; Sharath Udupa; Zhenbin Xu; Gideon
Cohn; public-webapps@w3.org; IE8 Core AJAX SWAT Team
Subject: Re: XDomainRequest Integration with AC


On Jul 18, 2008, at 11:15 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:

On Jul 18, 2008, at 4:20 PM, Sunava Dutta wrote:

I'm under time pressure to lock down the header names for Beta 2 to
integrate XDR with AC. It seems nobody has objected to Jonas's
proposal. http://lists.w3.org/Archives/Public/public-

webapps/2008JulSep/0175.html

Please let me know if this discussion is closed so we can make the
change.

I think Anne's email represents the most recent agreement and I
don't think anyone has objected:

http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0142.html

The change would be: Instead of checking for
"XDomainRequestAllowed: 1" check for "Access-Control-Allow-Origin:
*" or "Access-Control-Allow-Origin: url" where url matches what was
sent in the Origin header.

So I have one final request for a change to the above syntax.

How would people feel about the syntax

Access-Control-Allow-Origin: <url>

I don't think the angle brackets are necessary for forward compat,
since we can just disallow spaces from the URL.

  - Maciej



This would give us at least something for a forwards compatibility
story if we wanted to add to the syntax in future versions of the
spec. I really think we are being overly optimistic if we think that
the current syntax is the be-all end-all syntax that we'll ever want.

For example, during the meeting we talked about how banks might want
to enforce that the requesting site uses a certain level of
encryption, or even a certain certificate. A syntax for that might

be:

Access-Control-Allow-Origin: origin <https://foo.com> encryption sha1

Or that the site in question uses some opt-in XSS mitigation
technology (such as the one drafted by Brandon Sterns in a previous
thread in this WG). This could be done as

Access-Control-Allow-Origin: origin <https://foo.com> require-xss-protection

So the formal syntax would be

Access-Control-Allow-Origin: ("*" | "<" url ">")

/ Jonas

/ Jonas











Re: XDomainRequest Integration with AC

2008-07-30 Thread Jonas Sicking


Adam Roben wrote:


On Jul 30, 2008, at 12:19 PM, Jonas Sicking wrote:



Please note that

Access-Control-Allow-Origin: url

is also allowed syntax. Where the url must contain only scheme, domain 
and host.


Do you mean scheme, host, and port?


Yes :)

/ Jonas



Re: XDomainRequest Integration with AC

2008-08-01 Thread Jonas Sicking


Sunava Dutta wrote:

In offline conversations with Jonas on the topic of supporting the url
syntax, I think Jonas mentioned a good point regarding supporting URL
for the private scenario. Namely, in caching scenarios, allowing the
URL to be sent in the response header ensures that if mistakes happen
(for example, the Vary-by-Origin header is not sent or is ignored) it
fails securely. I'm not so sure about the value for the public
data scenario (other than consistency). Here's what I came up with,
feel free to add on or elaborate.


Just to clarify, your proposal is that when
Access-Control-Allow-Credentials *is not* set to true we should only
allow the value * for Access-Control-Allow-Origin? Whereas when
Access-Control-Allow-Credentials *is* set to true we already only allow
the URL syntax for Access-Control-Allow-Origin.



Pros (of supporting URL syntax in public scenarios)
*   Supporting URL allows for a site to return data that's related
to a particular site, but is non user specific (no creds)


This can be accomplished either way since the server can use the Origin
header if it wants to send different data for different requesting servers.

I'm not really sure that that is a usecase that we have designed much
around though.


Cons
*   A better architecture here is that a site will rely on the
Access-Control-Origin header to determine the site and then decide
whether to send the data or not. Along those lines, as a few teammates
said, it seems wasteful to support URL syntax for the public scenario
as we don't want the data to be sent to the client and dumped. The
server should simply not send the content if the Origin is not what is
desired.


Please note that the header is simply called Origin now, and has been
for quite some time.


*   The second problem that comes to mind is that clients cannot
be trusted. The resource server essentially is relying on the  client
to enforce the domain check.  However since this is anonymous access,
the client could well be evil.com's server, which would simply ignore
the URL and grab data.   The resource server has no way of telling
who the request party really is -- it is just an anonymous HTTP
client. This may instill a false sense of security for server
operators.
*   The third challenge here is that the access-control-origin
header may be spoofable therefore this scenario is not reliably
solved.


Like you point out, if you can't trust the client then your initial
proposal for the server to look at the Origin header does not work either.

However this seems to be the case no matter if we support the URL syntax
for public data or not. So I don't see how this is a pro or con one way
or another. All three cons you are listing seems to come down to that in
the public data scenario the client can't be trusted, which does seem
partially true.


The way I see it is this:

Pros of supporting the URL syntax for public data:
* Simpler and more consistent specification.
  I.e. the URL syntax is always allowed and we are only forbidding the
  combination of wildcarding and sending cookies at the same time.
* Allows a server which serves private and public data. When the server
  receives cookies it can customize the result for the user, when no
  cookies are sent it just sends back a generic response.
  In both cases it echoes back in the Access-Control-Allow-Origin
  response header the URL it received in the Origin request header.
* Allows mashup sites inside a corporate firewall. These servers might
  serve company private data and wants to use the Access-Control spec
  to allow the data to be mashed up. However it does not want external
  websites to load such data. It does this by only echoing back in the
  Access-Control-Allow-Origin header the URL from the Origin header
  if the Origin is an intranet server.
  In this case the client can be trusted even in the public data case
  since only browsers installed on company client desktops can issue
  requests to the site, evil.com is blocked by the firewall.

Cons of supporting the URL syntax for public data:
* There is a risk of a false sense of security. I.e. a site might send
  private data from a URI and protect it only by sending
  Access-Control-Allow-Origin: trusted.com. The spec does state that
  such a response should not be exposed to evil.com. However if evil.com
  made such a request server-to-server it can of course ignore this and
  still read the data.
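For illustration (a hypothetical server-side sketch, not from the thread), the "echo back the Origin header" pattern described in the pros above amounts to:

```javascript
// Hypothetical sketch: the server compares the request's Origin header
// against an allowlist and, on a match, echoes it back verbatim in
// Access-Control-Allow-Origin; otherwise it sends no such header.
const intranetOrigins = new Set([        // assumed allowlist
  "http://wiki.corp.example",
  "http://dashboard.corp.example",
]);
function allowOriginFor(requestOrigin) {
  return intranetOrigins.has(requestOrigin) ? requestOrigin : null;
}
```

Omitting the header entirely on a mismatch is what makes a caching mistake fail securely: a cached response granted to one origin carries no grant usable by another.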

A couple of notes:
The last 'con' might happen anyway if the server just looks at the
Origin request header and decides whether to send the data or not based
on the value of that header. So for the false sense of security argument
to hold we would also be required not to send the 'Origin' header with
public data requests. Similarly, the 'Referer' header should not be sent
since it carries the same risk.

The second pro, 'serves private and public data', was something that Hixie
mentioned that Google might want to do in the future. If a request 

Re: XDomainRequest Integration with AC

2008-08-08 Thread Jonas Sicking


Anne van Kesteren wrote:

On Wed, 30 Jul 2008 18:19:20 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

Please note that

Access-Control-Allow-Origin: url

is also allowed syntax. Where the url must contain only scheme, [host, 
and port].


So the following syntax is allowed:
Access-Control-Allow-Origin: http://example.com

It is somewhat unclear if the following syntaxes are allowed:

Access-Control-Allow-Origin: http://example.com/
Access-Control-Allow-Origin: http://example.com/?
Access-Control-Allow-Origin: http://example.com/#
Access-Control-Allow-Origin: http://example.com/;

I think the first one should be ok, but not the other three.


I think all of these should be disallowed.

My plan is to simply require Access-Control-Allow-Origin to hold the 
ASCII serialization of an origin (see HTML5) and have a literal 
comparison of that with the value of Origin. This would be quite strict, 
but should be fine I think.


That is fine, though I'm inclined to think that the trailing slash 
should be allowed in the HTML5 syntax for an origin.
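A sketch of that plan (illustrative only; it relies on the WHATWG URL origin serialization rather than on spec text from this thread):

```javascript
// Serialize an origin to its ASCII form and compare header values by
// literal string equality. Note that a standards origin serialization
// already collapses default ports, which addresses the
// "http://www.foo.com" vs "http://www.foo.com:80" equivalence concern
// raised in this thread.
function serializeOrigin(href) {
  return new URL(href).origin; // scheme://host[:non-default-port]
}
function originMatches(allowHeaderValue, requestOrigin) {
  return allowHeaderValue === requestOrigin; // strict literal comparison
}
```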


/ Jonas



Re: ISSUE-44 (EventsAndWindow): Should DOM3 Events cover the interaction of events and the Window object? [DOM3 Events]

2008-08-08 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 7 Aug 2008, Jonas Sicking wrote:

Ian Hickson wrote:

On Thu, 7 Aug 2008, Olli Pettay wrote:
Could we actually just say that if document implements DocumentView 
interface and .defaultView isn't null and implements EventTarget, 
the event propagates to .defaultView. So in that case defaultView 
becomes the root of the event target chain (if the target of the 
event is bound to document). No need to mention Window, since 
AbstractView is already defined in DOM2 Views[1]. HTML5 defines that 
AbstractView must also implement Window and EventTarget interfaces. 
[2]

Fine by me, so long as the result is compatible with most UAs.
One very unfortunate thing is that if I recall correctly the Window 
object never takes part in the EventTarget chain for the 'load' event. 
But does for all other events. This was because a lot of sites broke 
when we tried to bubble 'load' to the Window.


Is that any load event, or only specific load events? (i.e. is it a 
characteristic of the bubbling/capture process, or the events that are 
fired by certain circumstances like the end of parsing or an image being 
fetched from the network?) If the latter, it would be useful if the DOM3 
Events spec could provde a hook for the HTML5 spec to annotate certain 
events as being affected by this exception.


I think it is all 'load' events except the one fired for the finished 
load of the actual page.


I.e. loads for images, stylesheets, plugins, etc had to not reach the 
Window object.


I'm uncertain if iframe loads reach the Window or not.

/ Jonas



Re: File Upload Status ?

2008-08-08 Thread Jonas Sicking


Garrett Smith wrote:

The File object is useful for uploading files via XHR. It provides
functionality for data to be retrieved from a file submitted to a
form using the input type file.

It is currently a Working Draft:
 http://www.w3.org/TR/file-upload/
 http://dev.w3.org/2006/webapi/FileUpload/publish/FileUpload.html

Implemented differently in Firefox 3.
 http://developer.mozilla.org/en/docs/nsIDOMFile
 https://bugzilla.mozilla.org/show_bug.cgi?id=371432

An example in Firefox 3:
http://dhtmlkitchen.com/ape/example/form/Form.html

It is a useful feature for in-page file upload, without resorting
to IFRAME hackery.

What is the status of File Upload?

Firefox 3's implementation is different from the W3C working draft. The
spec author seems to have abandoned that, so now there's a working
draft which seems to have been collecting dust for a couple of years.

What is going on with the File Upload specification? It would be a useful
feature, but with only a half-legged attempt at a spec that the author
abandoned, and a different implementation in Firefox 3, other browsers
probably won't implement this functionality any time soon. It's useful
in Firefox 3, and would be even better if there were some mime-type
sniffing (mediaType).

There seems to be a need for failing test cases, so implementations
can fill in the ???'s. Any other suggestions for getting this thing
done?


The spec only really supplies one feature over what Firefox 3 has: the 
ability to open a file dialog strictly from JavaScript without any UI 
objects involved.


I'm not sure if this is a super desirable feature from a security point 
of view. Technically speaking, a site could take a user's browser hostage 
unless the user agrees to give up a sensitive file:


function checkForFile(e) {
  if (!e || !fileIsPasswordFile(e.fileList[0])) {
    alert("Give me your passw0rd file!");
    var fd = new FileDialog();
    fd.addEventListenerNS(
      "http://www.w3.org/ns/fs-event#", "files-selected", checkForFile,
      false);
    fd.open();
  }
  else {
    xhr = new XMLHttpRequest();
    xhr.open("GET", "http://evil.com/passwordsaver.cgi", false);
    xhr.send(e.fileList[0]);
  }
}
checkForFile();

Granted, there are certainly many ways to DoS a browser already 
(while(1) alert('ha');) but the above is somewhat more sinister.


/ Jonas



Re: XDomainRequest Integration with AC

2008-08-08 Thread Jonas Sicking


Jonas Sicking wrote:


Anne van Kesteren wrote:
On Fri, 08 Aug 2008 11:38:55 +0200, Jonas Sicking [EMAIL PROTECTED] 
wrote:
String comparison is not going to be ok either way. The following two 
origins are equivalent:


http://www.foo.com
http://www.foo.com:80


My proposal was to treat those as non-equivalent. Basically, to 
require Access-Control-Allow-Origin to have the same value as Origin.


The downside with doing that is that we can't use the same syntax for 
Access-Control as for postMessage. (Yes, I'm still intending to get 
postMessage fixed, haven't had time yet though).


Not sure how big the value is in that though...


The big worry I have, though, is whether there is any possibility to 
Punycode-encode the same origin in multiple ways (other than with or 
without the default port). This could lead to different UAs encoding the same origin 
in different ways, which could lead to interoperability issues if sites 
rather than echoing the 'Origin' header always send out a static value 
for the Access-Control-Allow-Origin header.


In general, I don't think it's a lot of work to require a strict 
same-origin check. All browsers should have such an algorithm 
implemented anyway.


/ Jonas



Re: [selectors-api] Investigating NSResolver Alternatives

2008-08-15 Thread Jonas Sicking


João Eiras wrote:


Hi !

I vote for having a new lightweight object to completely replace the 
current NSResolver, and then apply it to other DOM specs, namely the 
XPath DOM.


I had some of the problems we're discussing with the XPath DOM API, and 
obviously the same applies here.
I exposed my problems on the DOM mailing list, but my comments were 
dismissed completely:

http://lists.w3.org/Archives/Public/www-dom/2007OctDec/0002.html

The problems I outlined with NSResolver are summarized as the following:
 - an element with no prefix is assumed to be
   in the namespace with null uri. You can't
   change this behavior.


This is not true. We can just define that unprefixed names call into the 
NSResolver with an empty string, and whatever the NSResolver returns 
will be used as the namespace.



 - an element with a prefix MUST be in a
   namespace with non-null namespace uri, else
   returning empty string from lookupNamespaceURI
   results in NAMESPACE_ERR being thrown


Again, this is not true. We can just define that returning the empty 
string means to use the null namespace.


null is the value that's supposed to be returned when no namespace is 
found and NAMESPACE_ERR should be thrown.


Even the resolver returned from createNSResolver is horribly 
underdefined and so we could make it follow the above rules without 
breaking any specs.
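A sketch (hypothetical helper, not spec text) of the relaxed rules being discussed above:

```javascript
// Hypothetical sketch of the relaxed NSResolver rules: the prefix ""
// asks the resolver for the default namespace, and a null or empty
// return value means "the null namespace" rather than NAMESPACE_ERR.
function resolveNamespace(nsResolver, prefix /* "" for no prefix */) {
  const uri = nsResolver(prefix);
  return uri ? uri : null; // "" and null both mean the null namespace
}
```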



I proposed the following changes:
 - an element with no prefix would result in
   lookupNamespaceURI being called with the empty
   string, and the resolver could return a default
   namespace, or return null (or empty string) to
   imply the null namespace
 - an element with a prefix would result in
   lookupNamespaceURI being called with the prefix
   and the function could either return null
   (or empty string) to imply the null namespace,
   or it could return a fully qualified uri.


Having to create an element whenever you want to resolve some custom 
namespaces seems like a horrible hack, and awfully complicated to use.


Say that I wanted to find all elements in the myNS namespace, using a 
NSResolver I would do:


doc.querySelectorAll("*", function (pre) {
    return pre == "" ? "myNS" : null;
  });

With your proposed solution I'd have to do

doc.querySelectorAll("*",
  document.createElementNS("myNS", "dummy"));

This looks sort of OK, but it is still very strange to have to use the dummy 
name. However, if I'm using two namespaces in an expression I'm in a 
world of pain. Compare


doc.querySelectorAll("a:*, b:*", function (pre) {
    return pre == "a" ? "myNS" :
           pre == "b" ? "myNS2" : null;
  });

vs

e = document.createElement("dummy");
e.setAttributeNS("http://www.w3.org/2000/xmlns/",
                 "xmlns:a", "myNS")
e.setAttributeNS("http://www.w3.org/2000/xmlns/",
                 "xmlns:b", "myNS2")
doc.querySelectorAll("a:*, b:*", e);

How many people even know the proper namespace for the xmlns attribute? 
Did you?


On top of that, we can't really change how DOM-XPath works given that 
there are implementations of the spec with pages out there depending on it.


/ Jonas



Re: [selectors-api] Investigating NSResolver Alternatives

2008-08-15 Thread Jonas Sicking


João Eiras wrote:


On , Jonas Sicking [EMAIL PROTECTED] wrote:


João Eiras wrote:

 Hi !
 I vote for having a new light weight object to completely replace 
the current NSResolver, and then apply it to other DOM specs namely 
the XPath DOM.
 I had some of the problems we're discussing with the XPath DOM API, 
and obviously the same applies here.
I exposed my problems at the dom mailing list, but my comments were 
dismissed completely

http://lists.w3.org/Archives/Public/www-dom/2007OctDec/0002.html
The problems I outlined with NSResolver are summarized as the 
following:

 - an element with no prefix is assumed to be
   in the namespace with null uri. You can't
   change this behavior.


This is not true. We can just define that unprefixed names call into 
the NSResolver with an empty string, and whatever the NSResolver 
returns will be used as the namespace.



 - an element with a prefix MUST be in a
   namespace with non-null namespace uri, else
   returning empty string from lookupNamespaceURI
   results in NAMESPACE_ERR being thrown


Again, this is not true. We can just define that returning the empty 
string means to use the null namespace.


null is the value that's supposed to be returned when no namespace is 
found and NAMESPACE_ERR should be thrown.


Even the resolver returned from createNSResolver is horribly 
underdefined and so we could make it follow the above rules without 
breaking any specs.


You misread. That was the list of issues I outlined on the dom mailing 
list, back in 2007.

Of course we can workaround them, and we should.




I proposed the following changes:
 - an element with no prefix would result in
   lookupNamespaceURI being called with the empty
   string, and the resolver could return a default
   namespace, or return null (or empty string) to
   imply the null namespace
 - an element with a prefix would result in
   lookupNamespaceURI being called with the prefix
   and the function could either return null
   (or empty string) to imply the null namespace,
   or it could return a fully qualified uri.


Having to create an element whenever you want to resolve some custom 
namespaces seems like a horrible hack, and awfully complicated to use.




Having what? Since when did I suggest creating elements?
That list of suggestions was about changing NSResolver behavior.


Doh! I got as far as the "a element" part and then misread the rest with 
faulty assumptions. My bad. I agree with the above.


You may go to 
http://lists.w3.org/Archives/Public/www-dom/2007OctDec/0002.html for 
code samples.
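For the archive, the proposed contract amounts to treating an NSResolver as a plain prefix-to-URI function. A sketch under the rules proposed above; the helper name and namespace choices are purely illustrative, not from any spec:

```javascript
// Illustrative NSResolver following the proposed rules:
// - called with "" for unprefixed names; may return a default
//   namespace URI, or null to imply the null namespace
// - called with the prefix otherwise; returning null implies the
//   null namespace instead of triggering NAMESPACE_ERR
function makeResolver(prefixMap, defaultNS) {
  return function lookupNamespaceURI(prefix) {
    if (prefix === "") return defaultNS || null;
    return prefixMap.hasOwnProperty(prefix) ? prefixMap[prefix] : null;
  };
}

var resolve = makeResolver({ svg: "http://www.w3.org/2000/svg" },
                           "http://www.w3.org/1999/xhtml");
```

Here resolve("") yields the default namespace, resolve("svg") the mapped URI, and resolve("foo") null, meaning the null namespace rather than an error.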


Unfortunately I don't think we can change how XPath parses things since 
there is already code out there that might rely on the current behavior. 
Might be worth looking into though.


/ Jonas



Re: [selectors-api] Investigating NSResolver Alternatives

2008-08-16 Thread Jonas Sicking


João Eiras wrote:


Unfortunately I don't think we can change how XPath parses things 
since there is already code out there that might rely on the current 
behavior. Might be worth looking into though.




I don't want to worry about xpath, although that misfeature bit me hard :)
Changing the behavior as I suggested would be harmless because 
currently, the use cases I wanted to fix either had the behavior 
unspecified or exceptions were expected to be thrown.


My concern currently is with the selectors-api.
I don't want to see the same errors repeated.


Agreed. I think the right solution is to make the changes you have 
proposed to NSResolver, and then make use of the NSResolver interface 
for the querySelector(All) APIs.


/ Jonas



Re: [selectors-api] Investigating NSResolver Alternatives

2008-08-20 Thread Jonas Sicking


Boris Zbarsky wrote:


Lachlan Hunt wrote:
The developers implementing this in Opera have given me feedback saying 
that this shouldn't throw an exception because JS allows additional 
arguments to be passed to any method, including other DOM APIs, like 
getElementById(), etc.  Is there a good reason why this should behave 
differently?


Well, it depends on whether you plan to add arguments in the future.

If you don't then it doesn't matter that extra arguments don't throw.

But if you do add arguments later, and pages are passing extra arguments 
and expecting that to not throw, then that either constrains what you 
can do with the arguments you add (e.g. they must never throw, no matter 
what) or you force UAs to break pages.


So the suggestion that extra arguments should throw in this case is 
based on the assumption that you plan to add some sort of argument for 
namespaces later and the assumption that throwing on extra arguments now 
is better than the two alternatives in the previous paragraph.


Of course not throwing on extra arguments is indeed the easy path to 
implementation (and is what Gecko will be shipping, it looks like), so 
as long as you can live with the results as a spec writer it's all good 
by me.


I think we should follow suit with all other functions in the DOM, 
which means we should not throw for extra arguments. I see no reason to 
treat this function differently than everything else.


/ Jonas



Re: [whatwg] WebIDL and HTML5

2008-08-25 Thread Jonas Sicking


Garrett Smith wrote:

There are probably others but I can't think of them. I think the
majority of the time that strings will want to go to ToString,
booleans will want to go to ToBoolean.

That can be the default, perhaps.  But I suspect usually null should become
"", not "null".


Why?



Note that 'null' is generally a valid value for DOMString. This doesn't 
seem to be explicitly called out in the definition of DOMString. 
However there are lots of functions that take a DOMString and describe 
what to do when the argument is null (as opposed to "null").


So for a function like

  bool doStuff(in DOMString arg);

if null is passed there should be no need to call .toString() or any 
other type of conversion at all. However most functions in the DOM 
spec do not define behavior for the null value, so we have chosen 
to treat it as the empty string, as that seems like the most sensible 
behavior.


/ Jonas



Re: [whatwg] WebIDL and HTML5

2008-08-26 Thread Jonas Sicking


Garrett Smith wrote:

On Mon, Aug 25, 2008 at 6:07 PM, Jonas Sicking [EMAIL PROTECTED] wrote:

Garrett Smith wrote:

There are probably others but I can't think of them. I think the
majority of the time that strings will want to go to ToString,
booleans will want to go to ToBoolean.

That can be the default, perhaps.  But I suspect usually null should
become
"", not "null".

Why?


Note that 'null' is generally a valid value for DOMString. This doesn't seem
to be explicitly called out in the definition of DOMString. However there
are lots of functions that take a DOMString and describe what to do when
the argument is null (as opposed to "null").

So for a function like

 bool doStuff(in DOMString arg);



There is no DOM method called doStuff. Can you provide a concrete
example? Boris couldn't think of one either. Let us first investigate
some implementations.


A quick search through the DOM Level 3 Core gives:

The generic statement
# Applications should use the value null as the namespaceURI  parameter
# for methods if they wish to have no namespace. In programming
# languages where empty strings can be differentiated from null, empty
# strings, when given as a namespace URI, are converted to null

So here "" is clearly treated as null, while "null" is distinct from the 
two of them.


Same thing in the hasFeature function which states

# a DOM Level 3 Core implementation who returns true for Core with the
# version number 3.0 must also return true for this feature when the
# version number is "2.0", "", or null

The function DOMStringList.item returns a DOMString with the value null 
when the index is >= the length. It seems unexpected to return "null" 
here, though it's not explicitly clear.


NameList similarly returns the DOMString null from getName and 
getNamespaceURI under some conditions.


The function DOMImplementation.createDocument accepts null as value for 
namespaceURI and qualifiedName to indicate that no documentElement 
should be created. I would expect that passing "null" for those 
parameters would create an element with localName "null" in the 
namespace "null". Effectively <null xmlns="null">, but without the xmlns 
attribute. It further states that an exception should be thrown if 
qualifiedName has a prefix but namespaceURI is null, but seems to allow 
qualifiedName having a prefix and namespaceURI being "null".


The attribute Document.documentURI returns the DOMString null under 
certain conditions. It also states

# No lexical checking is performed when setting this attribute; this
# could result in a null value returned when using Node.baseURI
Which seems to indicate that you can set it to null.

The attribute Document.inputEncoding is of type DOMString and returns 
null under certain conditions.


The attribute Document.xmlEncoding is of type DOMString and returns null 
under certain conditions.


The attribute Document.xmlVersion is of type DOMString and returns null 
under certain conditions.


A lot of functions on the Document interface for creating nodes state 
that nodes are created with localName, prefix, and namespaceURI set to 
null, such as createElement. All these properties are DOMStrings. 
Further, the namespace-aware functions, such as createElementNS, say to 
throw an exception if the qualifiedName parameter has a prefix but 
namespaceURI is null. It makes no such statement if the namespaceURI is 
"null".



I'm actually going to stop here. There are plenty of references in the 
spec to DOMStrings having the value null. So while it's not specifically 
clear in the definition of DOMString it seems clear to me that null is a 
valid value for DOMString. So no conversion using any conversion 
operator is needed. Thus I don't think talking about what ToString does 
is relevant to the discussion.


Further, many of the functions in the DOM Level 3 Core spec treat null as 
"", not as "null". All the namespace-aware factory functions on the 
Document interface do so, DOMImplementation.hasFeature does so, and 
Element.getAttributeNS does so.
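The DOM 3 Core rule quoted earlier ("empty strings, when given as a namespace URI, are converted to null") boils down to a one-line normalization. A sketch; the function name is mine, not the spec's:

```javascript
// DOM 3 Core-style namespace argument normalization: "" and null
// both mean "no namespace"; any other string is an ordinary URI.
// Note that "null" (the four-character string) passes through
// untouched; it is a legal, if unwise, namespace URI.
function normalizeNamespaceURI(ns) {
  return ns === "" || ns === null ? null : ns;
}
```

This is exactly why no ToString step can be involved: String(null) would produce "null", a different (and valid) value.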



type of conversion at all. However most functions in the DOM spec
do not define behavior for the null value, so we have chosen to treat it
as the empty string, as that seems like the most sensible behavior.


It is not something that can or should be generally relied upon
because it is not standardized and indeed works differently in
implementations. Please also review the example I provided earlier,
assigning an object as a css property value.


In your example, if I understand it correctly, you are passing it 
something that is not a valid DOMString. In that case I absolutely agree 
that using ToString to convert it to a string is the right thing to do.


However when null is passed that is not the case.

/ Jonas



Re: [whatwg] WebIDL and HTML5

2008-08-26 Thread Jonas Sicking


Garrett Smith wrote:

In some UAs that alerts null, in some it throws an exception because
createElement() is not valid.  Just try it.



Using ToString, the following result would be obtained:

// ERROR
document.createElement();

// Creates a "null" element.
document.createElement(null);

// Creates an "a" element.
document.createElement( { toString: function(){ return "a"; } } );

document.createElement( new String("div") );

To simulate the ToString effect, use String( arg ); (do not use new
String(arg)).
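The distinction is easy to check in any ECMAScript engine; this is plain language behavior, no DOM involved:

```javascript
// ToString-style conversion via the String function (no "new"):
var a = String(null);                                      // "null"
var b = String({ toString: function () { return "a"; } }); // "a"

// String() yields a primitive; new String() yields a wrapper object:
var c = typeof String("div");       // "string"
var d = typeof new String("div");   // "object"
```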


Exactly, this shows that converting null to "null" is undesirable for 
some functions.



No, but there should be a need to call ToString.

The thing is, ToString is lossy.  Once called, there is no way to tell apart
null and "null".  However, there are DOM methods which require different
behavior for the two (e.g. createElementNS and so forth, which I did
explicitly mention earlier, contrary to your "couldn't think of one either"
business).


After the method createElement(null) is called, there will be no need
to tell apart null and "null". I see that. Why is that a problem?


Not sure what you are saying here.


You said that null -> "null" is lossy. I don't think that's a valid
argument. null -> "" is even more lossy.


Both conversions are lossy yes. The correct solution is to do no 
conversion. As I've stated many times, null is a valid value for 
DOMString and so no conversion is needed in the general case.


However for some function *implementations*, giving null and "" as an 
input argument will yield the same result, such as createElement.



Similarly, there are DOM methods that require different behavior for null
and "".  Specific examples include the proposed DOM style set APIs and
DOMImplementation.createDocument. There are also cases where returning null
and returning "" are not equivalent.  inputEncoding is an example.


null and "" are not equivalent. Setting a style to have the value null
should probably convert the value to "null".


What do you base that on? Can you point to any specs that state that 
that is the correct thing, or any webpages that expect that behavior?



Internet Explorer is more strict in that it requires a string value,
and won't call ToString. It would be useful to have the method call
ToString, because that way an object with a toString method could be
used for setting values:

<script>
var colorObject = {
  r : 16, g: 255, b: 16,
  toString : function() {
    return "rgb(" + this.r + "," + this.g + "," + this.b + ")";
  }
}

document.body.style.backgroundColor = colorObject;
</script>

It would seem somewhat expected that the colorObject's toString would
be called. The object's toString method would be called from the
internal ToString method.


I agree it seems useful to use ToString as a conversion method when an 
argument of type DOMString is passed something that isn't a valid 
DOMString value.




Null is not the same as no value. Null is a type, having one value:-
null. The value null is a primitive value in EcmaScript. It is the
only member of the Null type.


As far as the DOM spec goes the value null is a bit special. It isn't 
simply a value valid for the Null type. null is also a valid value for 
many other types. For example Node.insertBefore takes as a second 
argument a Node, however null is a valid value to pass. So null 
is a valid value for the type Node.


I argue that there is very strong indication in the spec that null is 
also a valid value for the type DOMString. Cameron's mail also supports this.


/ Jonas




Re: [whatwg] WebIDL and HTML5

2008-08-26 Thread Jonas Sicking



One last time, the facts:

1)  There are DOM methods that accept DOMString arguments or return
   DOMString values.


Fact.


Sounds like you agree here.


2)  In general, such methods need to be able to tell apart null and all
   string values (including "" and "null").


They need to determine value type so that they can either throw or
handle. null, 0, undefined, new String('x'), are all not strings.


Sounds like you agree here too?


3)  The behavior of null is almost always either that of "" or that of
    "null".  Which one depends on the exact DOM method.


That is false. null has no behavior. Methods have behavior. For
example, EcmaScript's internal ToString has a behavior of converting
null to "null". That is a behavior.


You aren't actually answering Boris' question, but rather pointing out a
grammatical error in the question. So let me repeat the question with
the grammatical error fixed. Please do excuse any other grammar errors I
introduce, as English is a second language to me.

3)  The behavior of the function when null is passed as the value for an
argument is almost always either that of "" or that of "null".
Which one depends on the exact DOM method.

Do you agree with this?

/ Jonas



Re: [whatwg] WebIDL and HTML5

2008-08-27 Thread Jonas Sicking


Garrett Smith wrote:

I don't agree that that is a good way to handle null, but it is clear
that these two:

document.body.textContent=null;
document.body.textContent='';

Are specified as being different from each other. They are different
because '' is the empty string and null is null. Agreed?

No, the spec requires the identical behavior for both. Please read again.



I don't see it. Where does it say null is converted to the empty string?


It does not say to convert null to the empty string no, you are entirely 
correct there. What it does do though is describe what happens when you 
set textContent to null:


On setting, any possible children this node may have are removed and, 
if the new string is not empty or null, replaced by a single Text 
node containing the string this attribute is set to.


So it says to first remove all children, and then do nothing more. Do 
you share this interpretation for this one attribute?



So at this point I want to ask though: What is your proposal?

What do you propose should happen when an attribute like

  attribute DOMString nodeValue;

is set using the following ECMAScript code:

  myNode.nodeValue = "hello";
  myNode.nodeValue = "";
  myNode.nodeValue = 0;
  myNode.nodeValue = 10.2;
  myNode.nodeValue = { toString: function() { return "str"; } };
  myNode.nodeValue = new Date();
  myNode.nodeValue = null;
  myNode.nodeValue = undefined;
  myNode.nodeValue = myNode;

where myNode points to a textnode. Note that 'throw an exception' is a 
valid answer. In that case ideally also specify the exception to be 
thrown if you have an opinion.


Note that the lines should be considered separate from each other. So if 
for example the second line throws I am still interested to hear what 
the third line does etc.


Extra interesting would be to hear motivation on why you think the 
behavior you describe is the appropriate one.
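For what it's worth, the behavior described above as Gecko's choice (null treated as the empty string, everything else through ToString) can be sketched as a single coercion helper. The name is mine, not anything from WebIDL:

```javascript
// Hypothetical DOMString coercion: null maps to "", every other
// value goes through ToString (which the String function performs).
function toDOMStringTreatNullAsEmpty(value) {
  if (value === null) return "";
  return String(value);
}
```

Under this sketch the list above yields "hello", "", "0", "10.2", "str", the date's toString, "", "undefined", and the node's toString, with no exceptions thrown.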


Best Regards,
Jonas



Re: [XMLHttpRequest2] comments

2008-09-04 Thread Jonas Sicking


Anne van Kesteren wrote:


For when I next edit this draft (hopefully soon):

 * Origin header shouldn't point to the origin definition.

 * Exception codes need to be changed. (See XMLHttpRequest Level 1.)

 * Upload notifications should work non same origin as well.

 * Download notifications should work non same origin as well most 
likely, even readyState == 2 can work now the processing instruction is 
gone as far as I can tell.


There are two further features I'd like to see added to XHR, hopefully 
in the XHR2 timeframe:


1. A timeout property like the one on Microsoft's XDR. I haven't looked
   into the specifics of XDR's property, but I would think that an
   'abort' event should fire as well as readyState transitioning to
   something if the timeout is reached.

2. A .responseJSON property. This would return the same thing as the
   following code:

   if (xhr.readyState != 4) {
     return null;
   }
   return JSON.parse(xhr.responseText);

   However there are a few details that can be quibbled about:
   a) Should a partial result be returned during readystate 3
   b) If the property is gotten multiple times, should that return the
  same or a new object every time.

/ Jonas



Re: XDomainRequest Integration with AC

2008-09-05 Thread Jonas Sicking


Anne van Kesteren wrote:


On Fri, 08 Aug 2008 20:44:04 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:
The big worry I have though is if there is any possibility to 
punycode-encode the same origin in multiple ways (other than with or 
without the default port). This could lead to different UAs encoding the same 
origin in different ways, which could lead to interoperability issues 
if sites rather than echoing the 'Origin' header always send out a 
static value for the Access-Control-Allow-Origin header.


Is that possible? I don't think it is. Domain names follow a strict set 
of normalization rules. (That would also mean the Origin header could 
contain different values depending on the implementation, which is not 
the case.)


The only thing that i _know_ of is that:

http://foo.com
and
http://foo.com:80

are the same origin but have different string representations. I have 
also heard that some UAs are able to handle non-ASCII characters in 
header values by somehow specifying an encoding. I don't really know how 
that works, but for those UAs the following two origins would be equivalent:


http://www.xn--jrnspikar-v2a.com
and
http://www.järnspikar.com
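Both ambiguities are resolved by a canonicalizing parser. Today's WHATWG URL class (which postdates this 2008 thread, so this is hindsight rather than something available to the participants) normalizes exactly these two cases:

```javascript
// Default ports are dropped from the serialized origin:
var a = new URL("http://foo.com:80/").origin;  // "http://foo.com"
var b = new URL("http://foo.com/").origin;     // "http://foo.com"

// Non-ASCII hostnames are IDNA/punycode-encoded on parsing:
var c = new URL("http://www.järnspikar.com/").hostname;
// "www.xn--jrnspikar-v2a.com"
```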

/ Jonas



Re: FileUpload Editor | Re: File Upload Status ?

2008-09-06 Thread Jonas Sicking


Garrett Smith wrote:

On Fri, Sep 5, 2008 at 12:28 PM, Arun Ranganathan [EMAIL PROTECTED] wrote:

All,



Hi Arun,


On behalf of Mozilla, I'd like to take over as editor of the FileUpload
spec., which was actually once assigned to me quite some time ago :)

Of course, I think that

form.getDataAsString();


form serialization is out of scope of this particular specification, but I


I agree, but it seems that File Serialization is a prereq to Form
Serialization.


I don't agree. The two seem totally unrelated. Sure, if you have file 
serialization a JS library could implement form serialization. However 
we're talking about specs that UAs would implement, which can be done 
totally without public APIs.



But then, is there a strong need for File
Serialization? Am I suffering from 'analysis paralysis'?


Getting to file data is useful for many other things than sending 
that data to a server. For example asking the user for a file containing 
data to process, or a Word document to open and edit in Google Docs.



think other things like Blobs (as per the Google Gears proposal -- maybe a
File interface can have a getBlobs on it) are probably in, as well as some
other interface proposals by Opera, etc.  Security makes or breaks all of
this, so I'll start with the restrictions we've put on this as a
strawperson.  After talking to Jonas, I also think File Download should be
in scope, again modulo security which, following a straw person, is a
welcome group discussion.



How does File Download work?


File download is when a page has some data that it wants to make 
available to the user to save as a local file. For example a result from 
a computation, or a document produced in Google Docs.



Would it be better to provide only the ability to access files'
mediaType, and fileSize, and let XHR accept a File to send()?


In the above example you are not interested in sending the data to the 
server, but rather providing to the user to save locally.



Reading a file locally would allow for populating a Rich Text Editor
without a trip to the server. There may be other cases for needing the
data inside a file on the client, but I can't think of them.


That seems like a fine example :)


I've gotten CVS access from Mike.  I'd like till the end of the week (next
week) to turn around a first revision.



Writing documentation and then writing code has led to API problems that
could have been avoided.

To avoid this problem, before writing documentation, it would be a
good idea to figure out what the goal of the API is. Informal
discussion could accomplish that. Next, I would like to see a test
case. The test case would accomplish figuring out what the API will
do, plus reveal edge cases and problems.

I know this is not the way things usually get done.

My reason for emailing to ask for feedback from Jonas, Maciej, and
Oliver, and the public-webapps list, was to get the ball rolling by
this less formal process. I would like to know what the API should do
and then write a test case that expects such functionality.


I agree we should first discuss use cases and requirements, before 
coming up with APIs.


/ Jonas



Re: [ProgressEvents]

2008-09-08 Thread Jonas Sicking


Garrett Smith wrote:

On Sun, Sep 7, 2008 at 8:47 AM, Erik Dahlström [EMAIL PROTECTED] wrote:

Hello webapps wg,

On behalf of the SVG WG I'd like to propose adding to the ProgressEvents 
spec[1] an event equivalent to the 'loadend' (previously known as 
'SVGPostLoad') event currently defined in SVG Tiny 1.2 [2].

The 'loadend' event is dispatched by completion of a load, no matter if it was 
successful or not. In terms of the ProgressEvents spec the 'loadend' event 
would be dispatched following either of 'abort', 'load' or 'error', and there 
must be exactly one 'loadend' event dispatched. In the Event definitions table 
it would look like this:

Name: loadend
Description: The operation completed
How often?: once
When?:  Must be dispatched last



If the event were dispatched last, and there was a progress bar, plus
an overlay, then the success handler would fire before the progress
bar + overlay were hidden/removed.

Please see also:
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0537.html


I would be in support of adding such an event. And I agree with Garrett 
that it makes more sense to dispatch it before the load/abort/error 
event is dispatched. In fact, we could even make the default behavior of 
the loadend event be dispatching one of the above three, thus allowing 
them to be canceled by calling .preventDefault on the loadend event.


Would be interested to hear Olli's feedback given that he recently 
implemented progress events for XHR in Firefox.


/ Jonas



Re: [xmlhttprequest2] timeout and JSON

2008-09-09 Thread Jonas Sicking


Anne van Kesteren wrote:

On Fri, 05 Sep 2008 02:36:58 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

1. A timeout property like the one on microsofts XDR. I haven't looked
into the specifics of XDRs property, but I would think that an
'abort' event should fire as well as readystate transitioning to
something if the timeout is reached.


What's wrong with using setTimeout?


Doesn't work with synchronous requests. Or at least sort of. It's 
somewhat undefined if timers and UI events should fire during 
synchronous XHR loads, but it seems like a lot of people expect at least 
timers not to. So for this I'm assuming that that is the case.


(If anyone is interested, FF2 did not fire timers, FF3 does)

Timeouts are especially important during synchronous requests since they 
block the UI, so you don't want to block it for long periods of time.



2. A .responseJSON property. This would return the same thing as the
following code:

if (xhr.readyState != 4) {
  return null;
}
return JSON.parse(xhr.responseText);

However there are a few details that can be quibbled about:
a) Should a partial result be returned during readystate 3
b) If the property is gotten multiple times, should that return the
   same or a new object every time.


What's wrong with using JSON.parse?


Partially convenience.
Partially for completeness with .responseXML.
Partially to discourage people from sending data using XML just because 
the object is called XMLHttpRequest.



(I'm not necessarily opposed, but I'd like to only add features that are
necessary.)


Agreed, my bad for not including use cases initially.

/ Jonas



Re: Support for compression in XHR?

2008-09-09 Thread Jonas Sicking


Geoffrey Sneddon wrote:



On 9 Sep 2008, at 14:58, Dominique Hazael-Massieux wrote:



On Tuesday 9 September 2008 at 09:02 -0400, Boris Zbarsky wrote:

HTTP has Content-Encoding and Transfer-Encoding, no?  No special effort
on the part of XMLHttpRequest is needed to make use of those, as long as
the underlying HTTP implementation supports them.


Well, at least when an outgoing XMLHttpRequest goes with a body, the
spec could require that upon setting the Content-Encoding header to
gzip or deflate, the body be adequately transformed. Or is
there another way, e.g. to POST a gzipped request with Content-Encoding?


Why can it not just be added transparently by the XHR implementation?


I doubt that it could. A UA implementation won't know which encodings 
the server supports.


I suspect compression from the UA to the server will need support on the 
XHR object in order to work. I don't think the right way to do it is 
through setRequestHeader though, that seems like a hack at best.


/ Jonas



Re: Support for compression in XHR?

2008-09-09 Thread Jonas Sicking


Kris Zyp wrote:

Well, at least when an outgoing XmlHttpRequest goes with a body, the
spec could require that upon setting the Content-Encoding header to
gzip or deflate, that the body be adequately transformed. Or is
there another e.g. to POST a gzip request with Content-Encoding?


Why can it not just be added transparently by the XHR implementation?


I doubt that it could. A UA implementation won't know which encodings 
the server supports.


I suspect compression from the UA to the server will need support on 
the XHR object in order to work. I don't think the right way to do it 
is through setRequestHeader though, that seems like a hack at best.


I would have thought this would be negotiated by the server sending an 
Accept-Encoding header to indicate what forms of encoding it could 
handle for request entities. XHR requests are almost always preceded by 
a separate response from a server (the web page) that can indicate the 
server's ability to decode request entities.


I think that this would go against the spirit of HTTP. The idea of HTTP 
is that it is state-less, so you should not carry state from one request 
to the next.


/ Jonas



Re: Support for compression in XHR?

2008-09-10 Thread Jonas Sicking


Kris Zyp wrote:
I suspect compression from the UA to the server will need support on 
the XHR object in order to work. I don't think the right way to do 
it is through setRequestHeader though, that seems like a hack at best.


I would have thought this would be negotiated by the server sending an 
Accept-Encoding header to indicate what forms of encoding it could 
handle for request entities. XHR requests are almost always preceded 
by a separate response from a server (the web page) that can indicate 
the server's ability to decode request entities.


I think that this would go against the spirit of HTTP. The idea of 
HTTP is that it is state-less, so you should not carry state from one 
request to the next.


Encoding capability isn't really a state in the HTTP sense, since it is 
presumably an immutable characteristic of the server, rather than a 
mutable state of an application (the latter being what HTTP abhors). It 
seems completely analogous to Accept-Ranges which works exactly the same 
(communicates the server's ability to handle Range requests and what 
range units are acceptable).


You might be right, I'd have to defer to people that know HTTP better 
than me.


I'm not sure it's a capability of the server, but rather a capability of 
that particular URI. For example example.com/foo.cgi might be 
implemented using entirely different code from example.com/bar.php.


/ Jonas



Re: [xmlhttprequest2] timeout and JSON

2008-09-11 Thread Jonas Sicking


Jonas Sicking wrote:

Sunava Dutta wrote:
XDR timeout doesn’t work with sync requests as there is no sync 
support in the object.
I'd be thrilled if IE9's timeout property were adopted for XHR. (OK, 
thrilled would be an understatement!)

http://msdn.microsoft.com/en-us/library/cc304105(VS.85).aspx

We fire an ontimeout and named it similar to other defined XHR2 events 
like onload etc to ease potential integration  with XHR2! It works for 
sync and async calls. Of course, if needed we are amenable to making 
tweaks down the road to the property/event behavior if necessary, but 
ideally it would be picked up 'as is'.


How do other properties, like .readyState, .responseXML, .responseText, 
and .status, interact with timing out? I.e. say that we have an XHR object 
currently loading in the following state:


xhr.readyState = 3
xhr.status = 200
xhr.responseText = "<result><el>hello worl"
xhr.responseXML = #document
                    result
                      el
                        #text: "hello worl"

(sorry, trying to draw a DOM, not very obvious what it is)
What happens if there is a timeout in that state?

1) .readyState is set to 0
   .status is set to 0
   .responseXML is set to null
   .responseText is set to ""
2) All properties are left as is.
3) Something else  (Profit?)

If the answer is 1, does that happen before or after ontimeout is fired? 
And does that mean that onreadystatechange is called?


If the answer is 2, does this mean that no action at all is taken other 
than ontimeout getting called? onreadystatechange is not fired any more 
unless someone explicitly calls .abort() and then .open()?


To make it clear. I absolutely agree that it would rock if we could use 
the MS implementation. However it needs to be specified in more detail 
than the MSDN page includes, such as answers to the questions above.


/ Jonas



Re: Support for compression in XHR?

2008-09-11 Thread Jonas Sicking


Stewart Brodie wrote:

Jonas Sicking [EMAIL PROTECTED] wrote:


Stewart Brodie wrote:



I disagree with any proposal that allows the application layer to
forcibly interfere with the transports layer's internal workings.  Use
of encodings, persistent connections, on-the-fly compression are
entirely internal to the transport mechanism.
This is fine for transport mechanisms where capability negotiation is 
two-way.


However a relatively common use case for the web is transferring things 
through HTTP, where the HTTP client has no way of getting capability 
information about the HTTP server until the full request has already 
been made.


I don't believe that this is the case.  Even in the case where you are using
a cross-domain request to a server that the client has not talked to before,
the client always has the option of attempting an OPTIONS request to see
what the server offers.  It might choose to do that if it's going to have to
send a large entity-body in the request; it might not bother if the entity
is small.


Sure, we could make XHR always do an OPTIONS request before sending 
anything to the server. However it seems like this might affect 
performance somewhat.



The application cannot possibly know that the request isn't going to get
routed through some sort of transparent proxy - or normal proxy for that
matter - where your forcing of encoding will fail.  That is why it is a
transport-level option, not an application-level option.


A transparent proxy shouldn't look at the actual data being transferred, no?

/ Jonas



Re: Support for compression in XHR?

2008-09-11 Thread Jonas Sicking


mike amundsen wrote:

I think a reasonable approach would be to offer an optional flag to
*attempt* compression on the upload.
When set to false (the default), no OPTIONS call would be made and
the content would be sent w/o any compression.
When set to true the component can make an OPTIONS call, inspect the
result and - if the proper HTTP Header is present in the reply, send
using the appropriate compression format.

Components could choose to not implement this feature, implement it
using only one type of compression or implement it w/ the ability to
support multiple types of compression.


Wouldn't a better solution then be to always compress when the flag is 
set? And leave it up to the application to ensure that it doesn't 
enable capabilities that the server doesn't support. After all, it's the 
application's responsibility to know many other aspects of server 
capabilities, such as whether GET/POST/DELETE is supported for a given URI.


Or is there a concern that there is a transparent proxy sitting between 
the browser and the server that isn't able to deal with the compression? 
If so, will the OPTIONS proposal really help? If it will, how?


/ Jonas



Re: Support for compression in XHR?

2008-09-11 Thread Jonas Sicking


mike amundsen wrote:

If i understand you, you're saying the coding of the page (HTML/JS)
should 'know' that the target server does/does-not support compression
for uploads and handle it accordingly.  I assume you mean, for
example, as a coder, I know serverX only allows posting an XML
document, while serverY only allows posting urlencoded pairs;
therefore I write my HTML page accordingly to prevent errors. This
works fine and is the way most pages are built today.


Yup.


I was thinking that compression on upload could be seen more like
compression on download. This is a decision that HTML/JS coders do not
have to make as it is handled 'under the covers' between client and
server. When clients make a request, they tell the server that they
can accept compressed bodies. If the server is properly
config'ed/coded it can then compress the response.  and if the server
does not support compressed downloads, it just sends the uncompressed
version instead. no handshakes. no extra traffic.


This would be ideal. I definitely agree.


My initial suggestion was an attempt to describe a way that the
client could act the same way for up-stream bodies. Maybe we don't
need any flags at all. If the environment (app code, browser,
component?) 'knows' the server supports compressed uploads, it sends
the body that way.  I just am not clear on the details of how to
support this cleanly without causing extra traffic or breaking
existing implementations.


Your proposal with the flag seems like it's reverting to the "having to 
know" case, since you'd want to set the flag if and only if the server 
supports compression to avoid extra overhead, while still taking 
advantage of compression when the server supports it.



Ideally, I think something like this is transport-level and should be
'serendipitous' if at all possible instead of requiring special coding
on the client.


Agreed. I personally just can't think of such a solution. However there 
are people far smarter than me and with far more knowledge of HTTP on 
this list, so hopefully we can figure something out.


/ Jonas



Re: Support for compression in XHR?

2008-09-11 Thread Jonas Sicking


mike amundsen wrote:

Jonas:

snip

Your proposal with the flag seems like it's reverting to the having to
know case, since you'd want to set the flag if and only if the server
supports compression to avoid extra overheads, while still taking advantage
of compression when the server supports it.

/snip

Well, seems I've muddled things a bit by mixing concerns of
cross-domain, compression support, and security in an overly simple
suggestion.

My intention was to allow clients to decide if they want to *attempt*
compression, not *force* it. Thus, it could be exposed as a flag in
the component in an effort to cut down on extra traffic between client
and server. If the flag is set to false, don't try to query the server
to see if it supports compression. This is on the assumption that a
client must execute an OPTIONS call in order to discover if the server
supports compression on uploads.  This model would make the whole
conversation transparent to the application developer, but not the
component developer.


What do "client", "component" and "application" mean in the above 
paragraph?


/ Jonas



Re: Support for compression in XHR?

2008-09-11 Thread Jonas Sicking


mike amundsen wrote:

Jonas:

In the text below I meant:

client = browser (or other hosting environment)
component = xmlHttpRequest (or XHR, etc.) object available as part of
the client/environment
application = code running within the client/environment (i.e. script
that can get an instance of the component, etc.)

Mike A

On Thu, Sep 11, 2008 at 11:01 PM, Jonas Sicking [EMAIL PROTECTED] wrote:

mike amundsen wrote:

Jonas:

snip

Your proposal with the flag seems like it's reverting to the having to
know case, since you'd want to set the flag if and only if the server
supports compression to avoid extra overheads, while still taking
advantage
of compression when the server supports it.

/snip

Well, seems I've muddled things a bit by mixing concerns of
cross-domain, compression support, and security in an overly simple
suggestion.

My intention was to allow clients to decide if they want to *attempt*
compression, not *force* it. Thus, it could be exposed as a flag in
the component in an effort to cut down on extra traffic between client
and server.


Wouldn't that mean that it's the application (i.e. web page) and not the 
client (i.e. browser) that decides to attempt compression or not? I.e. 
the browser wouldn't try unless the web page had told it to do so.



If the flag is set to false, don't try query the server
to see if it supports compression. This is on the assumption that a
client must execute an OPTIONS call in order to discover if the server
supports compression on uploads.  This model would make the whole
conversation transparent to the application developer, but not the
component developer.


Not really though, since it's the application (i.e. web page) that needs 
to set the flag in order to get compression.


It seems like the difference between having the web page fully decide 
and what you are proposing is basically the ability to fall back 
gracefully if the web page thought that compression is available when it 
is not.


This at the cost of the overhead of always making an OPTIONS request 
before attempting compression.


(not saying if this is good or not, just trying to make sure I 
understand the proposal).


/ Jonas



[access-control] Implementation comments

-- Thread Jonas Sicking
"-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">



new-zealand-swingers-










new-zealand-swingers- 





Thread


Date






Refine search









  
  new-zealand-swingers-
  

  

  
  
List Id:
New-Zealand-Swingers-.googlegroups.com
  

  

  
  
List Help:
mailto:New-Zealand-Swingers-+help@googlegroups.com
  

  

  
  
List Subscribe:
-
  

  

  
  
Posting Address:
mailto:New-Zealand-Swingers-@googlegroups.com
  

  

  
  
List Owner:
-
  

  

  

  

 

  
  
Mailing List System:
GoogleGroups
  

  

  

  
  
Archive Localization:




 Catalan  catal 
 Chinese   
 Czech  esky 
 Danish  Dansk 
 Dutch  Nederlands 
 English 
 French  Franais 
 German  Deutsch 
 Greek   
 Hebrew 
 Hungarian  magyar 
 Indonesian  Bahasa Indonesia 
 Italian  Italiano 
 Japanese   
 Korean   
 Lithuanian  lietuvi kalba 
 Norwegian  Norsk 
 Polish  polski 
 Portuguese  Portugus 
 Romanian  Romn 
 Russian   
 Serbian  srpski jezik 
 Spanish  Espaol 
 Swedish  Svenska 
 Turkish  Trke 
 Ukranian   




  

  

  
  
Custom Logo:








  

  

  
















  
  Article Writing amp;amp; Submission Service To 5,700+ Sites!
  
  
  
  
  
  








	

	robert-allen-home-business 

	
		
			-- Thread --
			-- Date --
			





			
		
	



	
	
	





		
…ounded set?

What would be the downside of always remembering that a server has
granted a specific header for a specific amount of time, and only
deleting that grant if the header expires, is updated by a later request,
or is removed due to a failed security check?

 3.
 I think we should look into putting more headers on the whitelist
 that don't require preflights. Specifically, the following headers
 seem like they should be safe

 Range
 If-Unmodified-Since
 Cache-Control: no-cache

 since they can already happen in an HTML implementation that has a
 local cache (which I think is pretty much every implementation that I
 know of).

 I'm happy to add whatever headers people are ok with. I don't really feel
 knowledgeable enough to make security judgments about them though. Having
 said that, I would rather avoid adding header names of which only certain
 values are ok.

Maciej Stachowiak wrote:
 Do caching HTML implementations normally send Range without If-Range?

 In general I would be wary of extending the set of headers allowed without
 preflight. Are there specific common use cases for these?

I have heard Cache-Control: no-cache is fairly commonly used, though
the only use cases that I can think of are for servers that don't
properly set cache related headers, so it's possibly not something we
should optimize for.

The one header that I can give a semi good reason to add is
Content-Language, which seems to make sense if we think
Accept-Language is common enough to be put on the whitelist.

However I'd be fine with leaving the header list as is for now and
possibly whitelist more headers in a future version of the spec.

/ Jonas
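A minimal sketch of the whitelist idea under discussion, assuming the always-safe list that eventually shipped (Accept, Accept-Language, Content-Language) plus the value-restricted `Cache-Control: no-cache` entry floated above — the latter is the debated extension, not spec text:

```javascript
// Illustrative preflight check in the spirit of the thread above.
// SIMPLE_HEADERS mirrors the always-safe request headers; the
// Cache-Control branch models the "only certain values are ok" case
// that the thread is wary of. Not normative.
const SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language'];

function needsPreflight(name, value) {
  const lower = name.toLowerCase();
  if (SIMPLE_HEADERS.includes(lower)) return false;
  // Hypothetical value-restricted entry: allow only "no-cache".
  if (lower === 'cache-control' && value.trim().toLowerCase() === 'no-cache') {
    return false;
  }
  return true;
}
```

The branch on the header *value* is exactly what makes value-restricted entries harder to specify than plain name whitelisting.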



Re: [xmlhttprequest2] timeout and JSON

2008-09-18 Thread Jonas Sicking

On Wed, Sep 17, 2008 at 3:03 AM, Sunava Dutta
[EMAIL PROTECTED] wrote:
 Jonas said
 I guess IE doesn't have an abort event on the XHR object (is this
 correct?) so the relation between ontimeout and onabort is undefined as
 far as the IE implementation goes.

 Correct. We do have abort and timeout, and adding onabort in the future IE 
 release will have to be considered so we should define the relationship. As 
 you mentioned, a possible good definition of timeouts is that a 'timeout' 
 event should fire (which will trigger ontimeout) and then abort() should be 
 called, which will result in an 'abort' event (which will trigger onabort).

Sounds good to me. Would be great to hear what other people think on
having timeout in general and the specifics regarding what should
happen when the timeout fires.

/ Jonas
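The ordering agreed on above — fire 'timeout', then run the abort() path, which fires 'abort' — can be sketched with a toy dispatcher (not the real XMLHttpRequest interface):

```javascript
// Toy request object demonstrating the event ordering only.
function makeTimedRequest() {
  const fired = []; // records dispatched event types in order
  return {
    fired,
    inFlight: true,
    dispatch(type) { fired.push(type); },
    abort() {
      if (!this.inFlight) return; // nothing to abort
      this.inFlight = false;
      this.dispatch('abort'); // triggers onabort
    },
    onTimerExpired() {
      this.dispatch('timeout'); // triggers ontimeout first
      this.abort();             // then the normal abort path
    },
  };
}
```

Calling onTimerExpired() yields the sequence ['timeout', 'abort'], and a later abort() is a no-op.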



Re: [AC] Access-Control-Allow-Origin header syntax

2008-09-29 Thread Jonas Sicking

On Thu, Sep 25, 2008 at 12:35 PM, Anne van Kesteren [EMAIL PROTECTED] wrote:
 On Thu, 25 Sep 2008 21:26:05 +0200, Jonas Sicking [EMAIL PROTECTED] wrote:

 However I think that if we are using URI syntax for these headers we
 should treat them as URIs and not as opaque strings. Everywhere else
 in the product URIs are parsed and canonicalized before being
 compared. We specifically do not use string comparisons. I think for
 the sake of consistency with the rest of the web platform we should do
 the same here, anything else is just unexpected behavior.

 The point is actually that the header does _not_ take a URI. It did at some
 point, but you didn't like that. It takes the ASCII form of an origin
 instead, identically to The Web Socket Protocol in HTML 5.

What says that an origin is not a URI? Sure, many URIs would be denied
access, but it looks to me like origins are still a subset of URIs. If we
say that they are not URIs, why not go all out and invent a new syntax, such as

http.org.example.www:80

to allow the site http://www.example.org? This would reduce confusion
around them being URIs.

However I think it would be better to keep them as URIs, while saying
that if there is a path, or if the URI is not same-origin as the
Origin header then deny access.

/ Jonas



Re: [access-control] Implementation comments

2008-09-29 Thread Jonas Sicking


Anne van Kesteren wrote:

On Mon, 29 Sep 2008 18:03:43 -0400, Jonas Sicking [EMAIL PROTECTED] wrote:

Anne van Kesteren wrote:
Then I'll specify the former, as special-casing those methods here is 
something I'd rather not do. I'd much rather have addEventListener, 
addEventListenerNS, onprogress, etc. work consistently.
 I've done it this way. The 'progress' and 'load' events are only 
dispatched if a preflight request has been made.


Why just limit to those events? Seems simpler and more future proof to 
not fire any events on the upload object. That would also cover future 
events like 'redirect' and 'stall'.


I don't see any reason to prevent synthesized events from firing. If we 
add more events we have to define when they dispatch anyway so that's 
not a problem. (This is different from whether registered events force a 
preflight or not, where it does make sense to have a catch-all.)


I agree we shouldn't prevent synthesized events. But why not say that no 
ProgressEvents are dispatched at all? Seems like you at least have to 
prevent 'abort' as well, so why not also 'loadstart' and 'error'.


/ Jonas



Re: [access-control] Implementation comments

2008-09-29 Thread Jonas Sicking


Anne van Kesteren wrote:

On Mon, 29 Sep 2008 23:53:32 -0400, Jonas Sicking [EMAIL PROTECTED] wrote:
I agree we shouldn't prevent synthesized events. But why not say that 
no ProgressEvents are dispatched at all?


That would prevent synthesized ProgressEvent events.


I mean that the implementation should not dispatch any ProgressEvents. I 
don't see a reason that synthesized 'load' or 'progress' events should 
be prevented, and it doesn't look like those are prevented now.



Seems like you at least have to prevent 'abort' as well,


Why is that?


Otherwise you could tell 'abort' apart from 'error' and use that to do server detection.


so why not also 'loadstart' and 'error'.


We could do that I suppose. It would require doing an origin check 
before returning on send() in the asynchronous case, but that shouldn't 
be much of an issue.


Yes, I don't see a reason to do the origin checks after



Re: [widgets] Preferences API

2008-09-30 Thread Jonas Sicking


Arve Bersvendsen wrote:


On Tue, 30 Sep 2008 01:35:42 +0200, Marcos Caceres 
[EMAIL PROTECTED] wrote:




Hi All,
I think we should dump the Widgets preferences API in favor of HTML5
DOM's storage API. Basically, preferences API basically replicates
what DOM Storage already defines. Also, DOM storage is already
implemented across three or four browsers and we can assume the
specification to be fairly stable (or hopefully it will be by the time
we get to CR). Dumping the preferences API will also avoid problems in
the future as HTML5 becomes prevalent in the market.


While I, in principle, agree that not replicating existing storage APIs 
is a good thing, are we sure that all widget implementations will 
implement HTML5?  Also, are we sure that a preference storage may not 
have additional requirements that make them a bad match (such as 
encryption of stored data)?


There is nothing that prevents requiring that that specific part of the 
HTML5 spec be implemented. This wouldn't require the whole of HTML5 to 
be implemented.


Also, if there are additional requirements, it might be a good idea to 
raise them with the HTML5 WG to ensure that the storage API can be 
reused for widgets.


/ Jonas



Re: [access-control] non same-origin to same-origin redirect

2008-10-06 Thread Jonas Sicking


Anne van Kesteren wrote:


On Fri, 03 Oct 2008 14:10:43 +0200, Anne van Kesteren [EMAIL PROTECTED] 
wrote:
Since Jonas didn't e-mail about this I thought I would. Say 
http://x.example/x does a request to http://y.example/y. 
http://y.example/y redirects to http://x.example/y. If this request 
were to use the Access Control specification the algorithm would have 
a status return flag set to same-origin and a url return flag set to 
http://x.example/y. XMLHttpRequest Level 2 would then attempt a same 
origin request to http://x.example/y.


For simplicity and to err on the side of security it has been 
suggested to remove the status return flag same-origin and simply 
keep following the normal rules. This would mean that if that request 
were to be successful http://x.example/y would need to include 
Access-Control-Allow-Origin: http://x.example (or a value * would also 
be ok if the credentials flag is false). I'm planning on making this 
change in the next few days.


I updated both Access Control and XMLHttpRequest Level 2 to no longer 
special case the scenario where during a non same origin request you're 
redirected to a same origin URL. Both specifications use the status 
flag (previously known as the status return flag) and the url return 
flag is gone.


I think this is the better of the two alternatives.

The scenario that I am worried about is a page on server sensitive.com 
reading public data from evil.com. However, if evil.com redirects back to a 
private resource on sensitive.com, sensitive.com might be dealing with 
sensitive user-private data without being aware of it. This seems scary 
and could lead to the data being stored or published somewhere unsafe.


Things still aren't perfect, since it's strange that a site has to trust 
itself, and if it does it'd be back in the somewhat scary situation 
described above. But it's at least somewhat better IMHO.


/ Jonas
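Under the changed rule described above, every response in the redirect chain — even one landing back on the requester's own origin — must grant access explicitly. A sketch of that per-response check (function and parameter names are illustrative, not spec text):

```javascript
// Per-response access decision: a response is shared with the
// requesting origin only if it names that origin, or uses "*" and
// the request carried no credentials. Applied at every redirect hop,
// including a hop back to the requester's own origin.
function accessAllowed(requestOrigin, allowOriginHeader, withCredentials) {
  if (allowOriginHeader === '*') return !withCredentials;
  return allowOriginHeader === requestOrigin;
}
```

The "site has to trust itself" oddity Jonas mentions is visible here: http://x.example/y must send `Access-Control-Allow-Origin: http://x.example` to be readable after the redirect.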



Re: [access-control] Seeking Clarification and Status of Issues #25, #26, #29, #30, #31 and #32

2008-10-09 Thread Jonas Sicking


Arthur Barstow wrote:


The following issues were created during the July 1-2 f2f meeting 
(minutes at [1], [2], respectively).


Would someone that attended that meeting please elaborate these issues?

In particular, has the Issue been addressed and thus can be proposed to 
be Closed?


-Regards, Art Barstow

[1] http://www.w3.org/2008/07/01-wam-minutes.html
[2] http://www.w3.org/2008/07/02-wam-minutes.html


* ISSUE-25 - Revocation of cached access grants
http://www.w3.org/2008/webapps/track/issues/25


The issue was the ability for a server to revoke a cached preflight 
result, or ensuring that if you are MITMed in a cafe or some such, the 
cached result doesn't linger too long.


I *think* this one is resolved since the cache is cleared if access is 
ever denied (haven't implemented this part of spec yet so not 100% sure).


We decided that implementations should be allowed to clear the cache at 
any point if they so desire, which means that implementations are 
allowed to limit the cache time to 24 hours or some such (something that 
firefox will do).


Hmm.. though looking at the spec I can't find where it says that items 
can be cleared out of the cache at any point.


* ISSUE-26 Wildcarding is currently possible together with cookies which 
could result in exploitable servers.

http://www.w3.org/2008/webapps/track/issues/26


This is about not allowing the '*' syntax when fetching private data.

This has been addressed, as this is no longer allowed per spec.


* ISSUE-29 Should Access-control allow DNS binding defense?
http://www.w3.org/2008/webapps/track/issues/29


This should again be addressed by the spec allowing the implementation 
to clear the cache at any point.


* ISSUE-30 Should spec have wording to recognise that User Agents may 
implement further security beyond the spec?

http://www.w3.org/2008/webapps/track/issues/30


I guess this is the part that needs to be addressed by adding wording to 
the spec to say that the cache can be cleared/ignored at any point.


I wrote up a list at some point and I think sent it to the public list 
about security measures that Firefox was going to take beyond the spec.



* ISSUE-31 Allow POST without a preflight with headers in a whitelist
http://www.w3.org/2008/webapps/track/issues/31


This is addressed in the spec. POST is now allowed when the content-type 
is text/plain.


* ISSUE-32 Each redirect step needs to opt in to AC in order to avoid 
data leaking

http://www.w3.org/2008/webapps/track/issues/32


I think this is addressed in the spec.


So all in all it seems like once ISSUE-30 is fixed all of the above can 
be closed.


/ Jonas



Re: [access-control] Seeking Clarification and Status of Issues #25, #26, #29, #30, #31 and #32

2008-10-10 Thread Jonas Sicking


Jonas Sicking wrote:
* ISSUE-30 Should spec have wording to recognise that User Agents may 
implement further security beyond the spec?

http://www.w3.org/2008/webapps/track/issues/30


I guess this is the part that needs to be addressed by adding wording to 
the spec to say that the cache can be cleared/ignored at any point.


I wrote up a list at some point and I think sent it to the public list 
about security measures that Firefox was going to take beyond the spec.


The list is here:
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0048.html

/ Jonas



Re: FileUpload Spec | Editor's Draft | Re: Call for Consensus: a new WD of the File Upload spec

2008-10-16 Thread Jonas Sicking



Why did you ignore Apple's proposal to start with a minimal common
interface (which most people seemed to like) and instead wrote a 
draft that
is the union of all things in Robin's original spec, all things that 
Mozilla

happened to implement, and a bunch of the things that Google propose?


[1] 
http://lists.w3.org/Archives/Member/member-webapps/2008OctDec/0010.html
[2] 
http://lists.w3.org/Archives/Public/public-webapps/2008OctDec/0047.html
[3] 
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0186.html
[4] 
http://lists.w3.org/Archives/Public/public-webapps/2008JulSep/0387.html


Were you referring to [3] above? I didn't actually realize that Apple
was proposing that as a v1 for the FileUpload spec. Apologies for
that, it was certainly not intended to be ignored.


Yes, [3] was our intended proposal for v1 of the file upload spec. I 
don't recall hearing any objection to publishing that as v1.


Arun did not ever respond to that email thread, and your only comment 
was "This sounds like a great idea to me."


Nowhere in [3] did it mention that this was a proposal for a v1 of the 
FileUpload spec. In fact, it did not mention at all what to do with the 
proposal, i.e. publish as a Note, add to a new spec, add to an existing 
spec such as XHR Level 2, etc.


Hence the confusion on my part. My apologies.


I do agree that that API is good and should become part of the web
platform, however I'm not sure that it solves enough use cases that it
deserves a spec on its own. Basically it only provides a 'cleaner' API
for what you can already do by setting target="a-hidden-iframe" on a
form element and calling .submit().


Not true. It lets you upload files with explicit in-page progress UI, 
which form submission cannot do. It lets you perform the upload (and 
show the feedback) from a different frame or window than where the user 
chose the file. It lets you upload multiple files selected from a single 
file control but one at a time and with separate progress feedback for 
each.


These are all specific requests that we have heard from Web developers, 
who are more interested in these features than direct access to file 
bytes without doing an upload.


We added the .files/File API as part of the effort to support offline 
apps. In such a case you need access to the data so that you can store 
it in localStorage, or you need to extend localStorage to be able to 
store File objects rather than just strings.


There are for sure very good use cases for both accessing data as well 
as sending it to the server using XHR.



I think at the very least we should provide the ability to get access
to the data from within javascript so that you don't have to upload
the data to the server and pull it back down again. Be that through
the mozilla API or the google Blob API (seems like most people are
pushing for the google Blob API so I suspect we'll land on something
like it). That I think is a much bigger enabler for web developers and
a higher priority for at least me to get specified.


I don't like either the Mozilla API or the Google Blob API. I think it 
will probably take some time to agree on a good API - I don't think the 
union of everyone's proposals is a good way to do that. I think it will 
take time to come to a consensus on the right API for direct access to 
the file contents - it is a good idea, but there are divergent 
approaches, all with their flaws.


I guess I'm fine with doing a v1 spec that just contains the parts in 
[3] as long as we don't hold off on a spec for accessing data at the 
same time, be that a FileUpload v2 spec or something completely separate.


That does seem like more work editor-wise though, so I'll leave that 
decision to the editor.



I'm less convinced that we need the FileDialog interface from Robin's
original spec as it's basically again just a cleaner API for
something that is already possible.


Instead of "cleaner" I would say it arguably has more security risk, 
since <input type="file"> binds things to an unforgeable user action.


From a UI point of view the FileDialog brings up the same UI, no? You 
still get the same filepicker when FileDialog.open is called. And you 
can similarly prevent an <input type="file"> from being rendered using a 
plethora of CSS tricks.


/ Jonas



[XHR] blocking httpOnly cookies

2008-10-20 Thread Jonas Sicking


In bug 380418 [1] we have decided to completely block access to the 
Set-Cookie header through XHR. This seems like the safest way to prevent 
httpOnly cookies from leaking in to javascript.


In addition, it seems good to block access to parts of the raw network 
protocol that are used for security and can contain user credentials.


There is a risk that this will break sites, since we are blocking things 
that used to work. However, the number of legitimate uses seems pretty 
small (I can't think of any) and the win is high (blocking httpOnly 
cookies reliably, as well as possible future cookie expansions).


The way the blocking works is that the getResponseHeader and 
getAllResponseHeaders functions behave as if Set-Cookie and Set-Cookie2 
was not sent by the server.


/ Jonas

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=380418
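The blocking behavior described above — the two getters act as if Set-Cookie and Set-Cookie2 were never sent — can be sketched as a filter. The storage shape here (a plain lowercase name → value map) is an assumption for illustration:

```javascript
// Sketch of blocking Set-Cookie/Set-Cookie2 from script, as described
// above. Toy storage model, not the real XHR internals.
const FORBIDDEN_RESPONSE_HEADERS = ['set-cookie', 'set-cookie2'];

function getResponseHeader(headers, name) {
  const lower = name.toLowerCase();
  // Behave as if the header was never sent by the server.
  if (FORBIDDEN_RESPONSE_HEADERS.includes(lower)) return null;
  return headers[lower] !== undefined ? headers[lower] : null;
}

function getAllResponseHeaders(headers) {
  return Object.keys(headers)
    .filter((h) => !FORBIDDEN_RESPONSE_HEADERS.includes(h))
    .map((h) => h + ': ' + headers[h])
    .join('\r\n');
}
```

Filtering in both getters is what makes the block reliable: neither a targeted lookup nor the full dump can leak an httpOnly cookie.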



Re: New Progress Events draft

2008-10-20 Thread Jonas Sicking


Charles McCathieNevile wrote:


Hi folks,

at 
http://dev.w3.org/cvsweb/~checkout~/2006/webapi/progress/Progress.html?rev=1.24 
you will find a new draft of the progress events spec, for your 
delectation...


So the spec says that for HEAD requests the size should include the size 
of headers. I just realized that this might be a security issue.


The headers can include the user's password, many times in clear text. If 
a site knows the size of the default headers for a given implementation, 
it can figure out the size of the user's password by subtracting the 
default size from the size reported by the 'load' event from a HEAD 
request.


/ Jonas



Re: New Progress draft (1.25)...

2008-10-23 Thread Jonas Sicking


Garrett Smith wrote:

On Tue, Oct 21, 2008 at 5:32 PM, Garrett Smith [EMAIL PROTECTED] wrote:

On Tue, Oct 21, 2008 at 3:27 AM, Charles McCathieNevile
[EMAIL PROTECTED] wrote:

http://dev.w3.org/cvsweb/~checkout~/2006/webapi/progress/Progress.html?rev=1.24

Hopefully this draft is ready for last call. So please have a look through

It was agreed that loadend should fire prior to abort | error | load.


I do remember that we talked about it that way, and also talked about 
having the default action of the loadend event be to fire the 
appropriate abort/error/load event.


However I'm not sure why that way is better? I.e. why would you want to 
prevent abort/error/load from firing?


I do like the symmetry in the current proposal where loadstart is the 
first thing that fires, and loadend is the last thing. Seems very intuitive.


/ Jonas
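The symmetry described above can be stated as an invariant over the event sequence: 'loadstart' is always first, 'loadend' always last, with exactly one of load/error/abort in between. A toy encoding of that invariant:

```javascript
// Toy model of the progress-event sequence for one fetch; 'outcome'
// is one of 'load', 'error' or 'abort'. Illustrative only.
function eventSequence(outcome) {
  return ['loadstart', 'progress', outcome, 'loadend'];
}
```

Under the alternative discussed (loadend first, with the outcome event as its default action), cancelling loadend would suppress the outcome event and break this symmetry.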



Re: New Progress draft (1.25)...

2008-10-25 Thread Jonas Sicking


Garrett Smith wrote:

On Fri, Oct 24, 2008 at 5:51 AM, Jonas Sicking [EMAIL PROTECTED] wrote:

Garrett Smith wrote:




I agree. Not sure if that is what you want to do before or after getting the
load/error/abort event though?

I should mention that I'm not particularly married to having things one way
or another. But I think we should have reasons for choosing.



Agree. Anyone who has another use case for loadend, please post up.


I was also wondering why in your use case it made sense to fire loadend 
before load/error/abort? I.e. what would you be doing in those events 
such that you want the progress bar hidden at that point.


Though I do agree that it makes sense to say "I'm done" before "here's 
the data" (or "it failed").


/ Jonas



Re: Call for Consensus - Selectors Last Call

2008-10-31 Thread Jonas Sicking


Hear hear!

Charles McCathieNevile wrote:


Hi,

Lachy thinks the latest editor's draft[1] is ready for Last Call, after 
responding to all the comments from last time (and removing the 
NSResolver). The disposition of comments[2] explains what happened to 
those comments.


So this is a call for Consensus to publish the Editor's Draft [1] of the 
Selectors API spec as a Last Call. Please respond before Monday November 
10. As always, silence is taken as assent but an explicit response is 
preferred.


Opera supports publication of this draft as a Last Call.

[1] http://dev.w3.org/2006/webapi/selectors-api/
[2] 
http://dev.w3.org/2006/webapi/selectors-api/disposition-of-comments.html


cheers

Chaals






Re: [WebIDL] Treatment of getters and setters

2008-11-17 Thread Jonas Sicking


Cameron McCormack wrote:

Hello WGs.

I’ve just committed changes to Web IDL on how it treats index and name
getters and setters.  Browers don’t agree on some specifics of how the
index and name properties on HTMLCollections are handled, so I’ve tried
to make some sensible decisions in these cases.  Comments welcome!

Here’s a summary of changes to Web IDL:

  * Removed [IndexGetter], [IndexSetter], [NamedGetter] and
[NamedSetter].

  * Removed the custom [[Get]] behaviour on host objects.

  * Added [NumberedIndex] and [NamedIndex], which can appear on
interfaces.  Both of these can take identifier arguments Creatable,
Modifiable and/or Deletable, which control how the properties that
are placed on host objects can be interacted with.  Properties for
[NumberedIndex] are enumerable, and those for [NamedIndex] aren’t.

  * Changed the custom [[Put]] behaviour on host objects so that it
only is there to support [PutForwards].

For details see:

  http://dev.w3.org/2006/webapi/WebIDL/#NamedIndex
  http://dev.w3.org/2006/webapi/WebIDL/#NumberedIndex
  http://dev.w3.org/2006/webapi/WebIDL/#index-properties


It seems very unfortunate that we now have to use prose to describe 
which functions the getters/setters map to. Why was that part of these 
changes needed?


/ Jonas



[XHR2] abort event when calling abort()

2008-11-17 Thread Jonas Sicking


Currently the spec says to fire an 'abort' event every time abort() is 
called, no matter what the current state is. This means that if you call 
abort() back to back five times you'll get five abort events.


This does match what Firefox does, however it seems to go against both 
logic and the ProgressEvent spec.


Is there a reason to not just fire the abort event when we are in fact 
aborting a network transfer?


Same thing applies partially to the abort event fired on the upload object.

/ Jonas
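The behavior argued for above — fire 'abort' only when a network transfer is actually being torn down — amounts to a one-line guard, so back-to-back abort() calls produce a single event. A toy sketch:

```javascript
// Toy abortable object: the guard ensures at most one 'abort' event
// per network transfer, matching the behavior argued for above.
function makeAbortable() {
  const fired = []; // dispatched event types
  return {
    fired,
    transferring: true,
    abort() {
      if (!this.transferring) return; // no active transfer: no event
      this.transferring = false;
      fired.push('abort');
    },
  };
}
```

Five back-to-back abort() calls then dispatch one 'abort', not five.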



Re: [Bindings] extended attribute for callback function interfaces?

2008-11-23 Thread Jonas Sicking


Cameron McCormack wrote:

Hi David.

Cameron McCormack:

[re allowing a Function to implement a callback interface]
I believe this is already handled for all such interfaces, in the last
paragraph of section 4.4:


L. David Baron:

I'm not sure if you want it to be handled for all such interfaces.
You often want this behavior for interfaces that you expect will
always have a single method, but you may not if they currently have
one method but you expect more methods to be added via derived
interfaces (either now or potentially later).


You can now specify [Callback], to allow either a function or a property
on a native object to be the implementation, [Callback=FunctionOnly]
to require it to be the function that is the implementation, and
[Callback=PropertyOnly] to require the property on the object to be the
implementation.

  http://dev.w3.org/2006/webapi/WebIDL/#Callback


Why do we need the FunctionOnly/PropertyOnly feature? In gecko we don't 
have that functionality and it hasn't caused any problems that I can 
think of.


What could make sense to do is to say that if the [Callback] interface 
has any attributes or more than one function you can't pass a function.


But why would we ever want an interface that only had one function that 
we didn't want to be implementable as a function?
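As a sketch of the pattern under discussion (invokeListener is a hypothetical name, not a real DOM API): a single-operation callback interface such as EventListener can be satisfied either by a bare function or by an object carrying the named property, which is what [Callback] without a modifier would permit:

```javascript
// Hypothetical dispatcher for a single-operation callback interface
// (modeled on EventListener's handleEvent). A script may implement the
// interface either as a plain function or as an object with a
// handleEvent property.
function invokeListener(listener, event) {
  if (typeof listener === 'function') {
    return listener(event);            // the function IS the implementation
  }
  return listener.handleEvent(event);  // a property on the object implements it
}

console.log(invokeListener(e => 'fn:' + e.type, { type: 'click' }));
// fn:click
console.log(invokeListener({ handleEvent: e => 'obj:' + e.type },
                           { type: 'click' }));
// obj:click
```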


/ Jonas



Re: [Bindings] extended attribute for callback function interfaces?

2008-11-24 Thread Jonas Sicking


Cameron McCormack wrote:

Jonas Sicking:
Why do we need the FunctionOnly/PropertyOnly feature? In gecko we don't  
have that functionality and it hasn't caused any problems that I can  
think of.


I took David’s feedback to mean that sometimes you want to state that
a single-function interface can’t be implemented by a function (and
added PropertyOnly for that).


Ah, after rereading David's feedback that does make sense to me. What we 
do in gecko is that we have a property that indicates 'this interface 
can be implemented as a function'. See for example:


http://mxr.mozilla.org/mozilla-central/source/dom/public/idl/events/nsIDOMEventListener.idl

(search for 'function').

I don't really care if it's an opt-in or opt-out though. And actually, 
simply having the [Callback] flag might be enough since as soon as you 
have said that something is a callback-interface it's going to be really 
hard to add anything to the interface.


/ Jonas



Re: [selectors-api] SVG WG Review of Selectors API

2008-12-08 Thread Jonas Sicking

 == Section 6. The NodeSelector Interface

  The caller must pass a valid group of selectors.

 That's an authoring requirement, explain how that is applicable?

  The group of selectors must not use namespace prefixes that need to be
  resolved.

 That also sounds like an authoring requirement. If it's an authoring
 requirement please mark it as informative, or as only applying to
 conforming applications.

Aren't most W3C Recommendations intended as specifications for both
implementations and authors?

 Since NSResolver was taken out, please consider adding hardcoded namespace
 prefixes for svg and xhtml similar to how the empty and any namespaces are
 handled by this draft.

 Or alternatively forward the reader to DOM 3 XPath for the cases where the
 Selectors API falls short. Even if hardcoded namespace prefixes are added
 it'd still be a good idea to link to DOM 3 XPath, since it looks like
 Selectors API is unable to deal with arbitrary xml markup.

Do note that XPath can't deal with 'arbitrary xml markup' either, as it
is not Turing complete (it only is when combined with XSLT).

However I do agree that having informative links to DOM 3 XPath is a
very good idea as it is a very similar technology.

/ Jonas



Re: Use cases for Selectors-API

2008-12-26 Thread Jonas Sicking

On Fri, Dec 26, 2008 at 10:26 PM, Giovanni Campagna
scampa.giova...@gmail.com wrote:
 As for the subject, I was wondering what are the use cases for Selectors
 API. Yes, authors desperately need a way to get elements that match some
 constraint inside a document (i.e. all tables or all p > span.urgent
 elements).
 But they already have a language to implement this: XML Path Language +
 DOM3 XPath.

 Advantages of XMLPath vs Selectors:
 - you can use namespaces (you can in selectors too, but not in selectors
 api)
 - you can return non-node values (ie. the element.matchesSelector
 equivalent), either ordered or unordered (possibly faster) node iterators
 - you can extend syntax by adding new functions in different namespace (this
 is required to support UI pseudo-classes, and actually there are no
 facilities to do this)
 - once XMLPath model is implemented, also XQuery is easily implemented and
 XQuery features are incredible: what about reordering a table without using
 HTML5 datalist or java applet? or finding a row in that table based on user
 input, without using ctrl+f?
 - XMLPath and DOM3 XPath actually are implemented: even FF2 supports
 document.evaluate, whereas the Selectors API is included only in the very
 latest browsers (Opera 10 and FF3.1 AFAIK)

 Disadvantages of XMLPath vs Selectors:
 - authors must learn a new language
 - authors must use XML

 For the first issue, I simply answer that many authors don't know the latest
 Selectors lev 3 features either and anyway they have to learn the
 Selectors-API (yes, they're not difficult, but until they're widely
 deployed, the only means to learn about them are W3C specs, and authors
 sometimes don't even know about W3C)
 For the second, it would be useful if HTML5 included a way to build an XML
 Information Set from a text/html resource. DOM3XPath for example normatively
 defines bindings between Information Set elements and DOM interfaces, so
 that from any successfully built DOM it is possible to have an XML Infoset.

A few comments:

* XPath works mostly fine on HTML already, firefox supports it. The
only things that weren't defined by specs were a few things related to
case sensitivity.
* The fact that XPath can return non-nodeset values is totally
unrelated to implementing a element.matchesSelector type API. What it
means is that you can return things like numbers and strings, i.e. you
can return the number of p children inside the body multiplied by
the value in the 'hello' attribute of the first meta element.
* Related, XPath *can't* in fact be used to implement
element.matchesSelector. At least not without incredible performance
costs. The problem is that the full XPath language is actually too
powerful. Something like myElement.matchesXPath("id(node())") would
require walking the entire DOM and checking if the XPath value of the
node is the same as the ID of myElement. This can get arbitrarily
complicated such as myElement.matchesXPath("id(concat(., @*[3], ..))")
which would essentially require evaluating the full expression with
each node (including each namespace and attribute node) in the
document as the context node. When XPath was developed, which happened
alongside XSLT, this exact problem was actually run into. The
solution was that XSLT defines a similar, but distinct, language
called 'patterns'[1] used in situations when you need to match a node
against an expression. (In firefox the code for patterns is largely
separate from the code for XPath)
* Minor nit: It's called XPath, not XMLPath.

[1] http://www.w3.org/TR/xslt#patterns
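The cost contrast Jonas describes can be shown with a toy tree (not a real XPath or selector engine; node, walk, matchesPattern, and matchesViaQuery are all invented names). A pattern can be tested against one candidate node locally, whereas a general query has to be evaluated over the whole document and its result set searched for the candidate:

```javascript
// Toy document: nodes with a tag name and children.
function node(tag, ...children) { return { tag, children }; }
const doc = node('html',
  node('body', node('div'), node('p', node('span')), node('div')));

function* walk(n) { yield n; for (const c of n.children) yield* walk(c); }

// Strategy 1 (XSLT-style "pattern"): test the candidate node directly.
function matchesPattern(n, tag) { return n.tag === tag; }

// Strategy 2 (naive matching via a full query): evaluate "all nodes with
// this tag" over the entire document, then check membership.
function matchesViaQuery(n, tag, counter) {
  const result = [];
  for (const m of walk(doc)) {
    counter.visits++;
    if (m.tag === tag) result.push(m);
  }
  return result.includes(n);
}

const target = doc.children[0].children[0];   // the first div
console.log(matchesPattern(target, 'div'));   // true — no traversal needed

const counter = { visits: 0 };
console.log(matchesViaQuery(target, 'div', counter), counter.visits);
// true 6 — the whole 6-node tree was walked just to answer one match
```

With richer expressions (like the id() examples above) the query strategy gets worse still, since every node may have to serve as a context node; the pattern strategy stays local to the candidate.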

/ Jonas


