Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Julian Reschke


Ian Hickson wrote:

On Wed, 18 Jun 2008, Zhenbin Xu wrote:
[Zhenbin Xu] Regardless of what different browsers do today, rich parsing 
error information is an important feature for developers. I have found it can 
pinpoint the exact problem that otherwise would have been difficult to 
identify when I sent an incorrectly constructed XML file.


Mozilla shows the XML error in its error console, which seems more useful 
than exposing the error to script, really. (I expect other browsers do the 
same but I haven't checked recently.)


That's useful, but IMHO not nearly as useful as giving the script code 
the ability to access the information. Sometimes errors happen in the 
absence of the developer, and it's useful to have an easy and 
automatable way to get the diagnostics.


BR, Julian




Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Julian Reschke


timeless wrote:

generally what i've seen is that exposing some information about a
parse error to a script is a great way to enable data leaks to a
malicious application.


On Thu, Jun 19, 2008 at 11:19 AM, Julian Reschke [EMAIL PROTECTED] wrote:

Could you please provide some more information or give an example about when
this would be the case?


this is the replacement spec which doesn't support this feature (which
also implies that people might have had a reason):
http://dev.w3.org/csswg/cssom/#the-cssstylesheet
Statements that were dropped during parsing can not be found using these APIs.
...


Can you provide an example where providing *XML* parse error information 
within *XHR* would be problematic?


BR, Julian



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Anne van Kesteren


On Thu, 19 Jun 2008 11:42:57 +0200, Ian Hickson [EMAIL PROTECTED] wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:

This has one side-effect, which is that it doesn't work well with XBL
or VBWG in environments where the XBL file (or VXML file) is
customised to the user but accessed cross-site. Is that ok?


It doesn't work well in the sense that they don't work out-of-the-box.
It would be trivial to add a load-private-data pseudo-attribute to the
<?xbl?> PI that sets the with credentials flag to true.

However I can't think of a situation where someone wants to load private
XBL bindings so I'm totally ok with it being a bit more hassle. It might
be a bigger deal for VXML, I don't know since I've not looked at that
spec.


Sounds fair to me. I'll add the attribute to XBL2 when it goes back to LC
once implementations start, assuming we adopt this.


Ian, it seemed to me you were talking about the server-side problem  
because <?access-control?> alone would not be enough. The XBL would  
need to be served with the appropriate HTTP headers set. (Also, not  
just <?xbl?> would need to be changed but also the other APIs for  
attaching XBL would require changing, I presume.)
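For concreteness, the pseudo-attribute idea might be rendered like this (the attribute name comes from this thread; the href value and exact PI syntax are illustrative, not taken from the XBL2 spec):

```xml
<?xbl href="bindings.xml#widget" load-private-data="true"?>
```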



--
Anne van Kesteren
http://annevankesteren.nl/
http://www.opera.com/



Re: Wheel events (ISSUE-9)

2008-06-19 Thread Markus Stange


Doug Schepers wrote:
And that causes problems like 
http://mozilla.pettay.fi/moztests/pixelscrolling.mov


Can you provide some context for what is going on in that video?  What 
is the problem that it illustrates?  Does it relate to the scrolling vs. 
zooming of the map?


The problem is that calling .preventDefault() on line scroll events 
doesn't have any effect on the pixel scroll events. So we need to 
dispatch pixel scroll events, too, in order to give web applications a 
reliable way to prevent scrolling.
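A minimal sketch of what that implies for page script, assuming Gecko's event names of the time (DOMMouseScroll for line scrolls, MozMousePixelScroll for the pixel scrolls that follow them; both names are Gecko-specific assumptions):

```javascript
// Cancel scrolling reliably by registering the same handler for both
// the line-scroll event and the pixel-scroll events that follow it.
// Cancelling only the line event would still let the pixel events
// scroll the page.
function blockScroll(evt) {
  evt.preventDefault();
}

// In a browser one would register the handler for both event types:
// window.addEventListener("DOMMouseScroll", blockScroll, false);
// window.addEventListener("MozMousePixelScroll", blockScroll, false);
```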


Markus





Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Julian Reschke wrote:


timeless wrote:
On Thu, Jun 19, 2008 at 1:09 PM, Julian Reschke 
[EMAIL PROTECTED] wrote:

Can you provide an example where providing *XML* parse error information
within *XHR* would be problematic?


i really shouldn't have to. imagine a document that is not CSS and is 
not XML.


now imagine an api that lets you try to load it as css. imagine that
this api exposes a dom object that describes *any* information from
that document in the case that it fails to parse as css.

basically it meant that you can interrogate pages that you weren't
supposed to be able to look at to get information you weren't supposed
to have.

now replace 'css' with 'xml'. The logic still applies.

And yes, I understand you'll wave hands about this being a trusted
application. I don't care. If it's a trusted application, then I
trust it not to make mistakes and to have ways to verify the
information server side before it's ever sent on any wires.


But you already can read the unparsed content using responseText, no? 
Where's the leakage then?


Exactly. The problem with bug 35618 is that CSS can be loaded from a 3rd 
party site without that site opting in. In this case providing parsing 
error information could result in leakage of private information.


Though really, just loading the CSS is an information leak. 
However, it has been deemed unlikely that someone will put private 
information into CSS rules.


There would be no risk of information leakage in providing parse error 
information for same-site CSS loads. Likewise, there is no risk of 
information leakage in providing parse errors for same-site XML loads, or 
cross-site XML loads where the 3rd party site has opted in.


/ Jonas




Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Boris Zbarsky


Julian Reschke wrote:
Could you please provide some more information or give an example about 
when this would be the case?


Here's a simple past example, if I understand your question correctly.

One can set an onerror handler on Window that will trigger if an exception is 
thrown and not caught, and will also trigger on script parsing/compilation 
errors.  For the latter case, the offending line of script is included in the 
exception object.


Now consider the following HTML page:

  <script src="target.html"></script>

Since most likely target.html is not actually valid JS, there will be a parse 
error, and the error object will contain the text on the line in question.


For what it's worth, Gecko will now only include the text if the script the 
error is in and the onerror handler are same-origin.  Until we started doing 
that, there was a cross-origin information leak.
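The leak Boris describes can be sketched like this (hypothetical helper name; window.onerror and its message argument are the only real API assumed — modern browsers sanitize the message for cross-origin scripts):

```javascript
// Sketch of the historical leak: an attacker page installs an onerror
// handler, then loads a cross-site HTML document via <script src=...>.
// Before browsers restricted it, the error message could quote the
// offending source line of the cross-site document.
function installErrorCollector(win, collected) {
  win.onerror = function (message) {
    collected.push(message);  // message could contain cross-site text
    return true;              // suppress default error reporting
  };
}

// Attacker page (conceptually):
//   installErrorCollector(window, leaked);
//   <script src="http://victim.example/target.html"></script>
```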


-Boris



RE: responseXML/responseText exceptions and parseError

2008-06-19 Thread Zhenbin Xu

 -Original Message-
 From: Julian Reschke [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 19, 2008 12:13 AM
 To: Ian Hickson
 Cc: Zhenbin Xu; Jonas Sicking; Anne van Kesteren; Sunava Dutta; IE8
 Core AJAX SWAT Team; public-webapps@w3.org
 Subject: Re: responseXML/responseText exceptions and parseError

 Ian Hickson wrote:
  On Wed, 18 Jun 2008, Zhenbin Xu wrote:
  [Zhenbin Xu] Regardless of what different browsers do today, rich parsing
  error is an important feature for developers. I have found it can
  pinpoint the exact problem that otherwise would have been difficult
  to identify when I sent an incorrectly constructed XML file.
 
  Mozilla shows the XML error in its error console, which seems more
  useful than exposing the error to script, really. (I expect other
  browsers do the same but I haven't checked recently.)

  That's useful, but IMHO not nearly as useful as giving the script code
  the ability to access the information. Sometimes errors happen in the
  absence of the developer, and it's useful to have an easy and
  automatable way to get the diagnostics.

 BR, Julian


[Zhenbin Xu] Agree :-)  One less dependency.



Re: Opting in to cookies - proposal

2008-06-19 Thread Jonas Sicking


Maciej Stachowiak wrote:



On Jun 14, 2008, at 4:23 AM, Jonas Sicking wrote:



I must say though, this is starting to sound complex and I am not 
totally convinced of the need to make servers opt in to getting 
cookies. Is it really a likely mistake that someone would take 
affirmative steps to enable cross-site access to a per-user resource, 
but then neglect to check whether requests are cross-site and act 
appropriately?


I do think there is a big risk of that yes. I do think that many sites 
that today serve public data do have a few pages in there which 
contain forms or other pages that serve user specific data.


Even something as simple as a news site that largely serves public 
news articles might have a cookie where the user has chosen a home 
location for local news. This is the case on the news site I use, for 
example: it usually just serves a standard list of news, but if I give 
it my home zip code, it will additionally serve a section of local news.


I guess I don't see that as a huge risk. In the case of the hypothetical 
news site, if it is handing out news in some kind of data feed format, 
wouldn't the point of access-control be to give the personalized feed? 


If the site gives out a personalized feed it has to be *a lot* more 
careful who it gives it out to. It must first ask the user for every 
site it gives the feed to.


If it just gives out non-personalized feeds it can safely share the feed 
with any site.


After all, you could otherwise use server-to-server communication to get 
the public data.


Yes, but server-to-server communication has a lot of downsides. First of 
all it greatly increases the latency since the information is 
transferred over two connections rather than one. Second, it means that 
you have to have a server that has enough capacity to tunnel all the 
mashup data for all the users of your site. This can be a significant 
hurdle.


This is something that could very easily be overlooked by an 
administrator that just configures his server to add an 
"Access-Control: allow <*>" header using a site-wide configuration 
file, without going through all CGI scripts on the server and teaching 
the ones that honor cookies to ignore the cookies for cross-site 
requests.


No site should add "Access-Control: allow <*>" site-wide!


Experience with Adobe's crossdomain.xml shows that they do.

I mean, I guess 
it's possible people will do this, but people could add 
Access-Control-Allow-Credentials site-wide too. And if we add 
Access-Control-Allow-Credentials-I-Really-Mean-It, they'll add even more.


Yes, this is certainly a possibility. But my hope is that this will 
happen to a smaller extent.
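The scoping Jonas is hoping for might look like this in an Apache configuration (a sketch only; the header name and "allow <*>" value follow the Access-Control draft under discussion, and the path is made up):

```apache
# Opt in only the deliberately public feed path, never the whole site.
<Location "/public/feeds">
    Header set Access-Control "allow <*>"
</Location>
```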


Basically the part of this that worries me is that it requires agreement 
between the client and the server, but those are expected to be 
controlled by different parties in many cases. If our favorite news site 
starts out only offering unpersonalized news feeds to other sites, but 
then wants to use cookies to personalize, then all existing clients of 
its cross-site data must change. That seems like a huge burden. Worse 
yet, if the site changes its mind in the other direction, existing 
clients break completely.


If you start sharing a new set of data it makes a lot of sense to me 
that people reading that data need to change. This seems no different 
than if you change from sharing an address book to sharing emails. 
Usually someone reading the data would need to both change the URI they 
are reading, as well as how they display the data.


And if the administrator of such a server thoughtlessly enabled 
cross-site access without thinking about the consequences, would they 
not be equally likely to enable cross-site cookies without thinking 
about the consequences?


Not more likely than someone adding any other header without knowing 
what it does. This is why I designed my proposal such that opting in 
to cookies is a separate step.


But you are expecting people to add Access-Control without knowing what 
it does.


It's not a matter of knowing what the spec says, it's a matter of knowing 
with 100% certainty that you are not sending any private data on any of 
the URIs you are sharing.


It seems like we are adding a lot of complexity (and therefore more 
opportunity for implementation mistakes) for a marginal reduction in 
the likelihood of server configuration errors.


I think the ability to separate sharing of private data from sharing 
of public data is a huge help for server operators. So I think this is 
much more than a marginal reduction of configuration errors.


The possibility of making this separation is there - simply ignore 
Cookie and/or Authorization headers when Access-Control-Origin is 
present. So this opt-in doesn't offer a new capability, just changes the 
defaults.


In many server technologies it's not trivial to ignore cookies and/or 
auth headers. Session handling tends to be handled before the CGI 
script starts which means that you have to weed 

RE: responseXML/responseText exceptions and parseError

2008-06-19 Thread Zhenbin Xu

 -Original Message-
 From: Jonas Sicking [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 19, 2008 1:24 AM
 To: Zhenbin Xu
 Cc: Anne van Kesteren; Sunava Dutta; IE8 Core AJAX SWAT Team; public-
 [EMAIL PROTECTED]
 Subject: Re: responseXML/responseText exceptions and parseError

 Zhenbin Xu wrote:
  Inline...
 
  -Original Message-
  From: Jonas Sicking [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, June 17, 2008 3:37 PM
  To: Anne van Kesteren
  Cc: Zhenbin Xu; Sunava Dutta; IE8 Core AJAX SWAT Team; public-
  [EMAIL PROTECTED]
  Subject: Re: responseXML/responseText exceptions and parseError
 
  Anne van Kesteren wrote:
  On Tue, 17 Jun 2008 10:29:12 +0200, Zhenbin Xu
  [EMAIL PROTECTED] wrote:
  Technically, because all other XHR methods/properties throw exceptions
  in case of state violation, an exception for responseXML/responseText
  is better.
  The reason these don't throw an exception anymore is actually documented
  on the public-webapi mailing list. Nobody else provided additional
  information at the time:
 
  http://lists.w3.org/Archives/Public/public-webapi/2008Feb/thread.html#msg94
  Regarding parseError, since the parseError object is not part of DOM
  Core and nobody but Internet Explorer supported it, it's not part of
  XMLHttpRequest.
  Agreed.
 
 
  [Zhenbin Xu] Regardless of what different browsers do today, rich parsing
  error information is an important feature for developers. I have found it
  can pinpoint the exact problem that otherwise would have been difficult
  to identify when I sent an incorrectly constructed XML file.
 
  And given that the goal is to define a useful spec for future
  XHR implementations, we should define how rich parser errors are
  surfaced instead of omitting them because nobody but IE supported it.
 
  It is even more important that we define it in the XHR spec because it
  is not part of DOM Core. Otherwise such a key piece would be lost and
  we will have diverging implementations.

 The goal of XHR Level 1 was to get interoperability on the feature set
 that exists across the major implementations of XHR today, so I don't
 think parse error information fits the bill there, but it sounds like a
 great feature to look into for XHR Level 2.



[Zhenbin Xu]




   If we change DOM Core to say that documents with a namespace
   well-formedness violation are represented by an empty Document
   object with an associated parseError object, I suppose we could
   update XMLHttpRequest to that effect.
   If we return null now, people will use that to check for
   well-formedness. If we in the next version of the spec then said
   that an empty document with .parseError set should be returned,
   those pages would break.
  
   So if we are planning on doing the parse error thing then I think we
   should define that an empty document is returned.
  
   Though I think it's more friendly to JS developers to return null.
   Otherwise they have to null-check both .responseXML as well as
   .responseXML.documentElement in order to check that they have a
   valid document.
  
   And if I understand it right, IE would have to be changed to be
   compliant with the spec no matter what, since they currently return
   a non-empty document.
  
   / Jonas
 
  [Zhenbin Xu] IE does return an empty document. responseXML.documentElement
  is null but responseXML is not null.
 
  A typical readyState handler can be written this way:
 
  xhr.onreadystatechange = function()
  {
      if (this.readyState == 4 && this.status == 200)
      {
          var x = this.responseXML;
          if (x.parseError.errorCode != 0)
          {
              alert(x.parseError.reason);
              return;
          }
          alert(x.documentElement);
      }
  }
 
  I don't see why this is not friendly.  It is more comprehensive and
  gives more information than a simple null check, which contains no
  information about what exactly the parsing error is (e.g. which open
  tag doesn't match which end tag, etc.).

 Won't that throw when XHR.responseXML is null, such as when the
 mime-type is something completely different from an XML document?

 But I absolutely agree that a null check does not give enough error
 information to usefully do debugging. I was merely saying that a null
 check should be enough to check for if parsing succeeded. The above is
 probably not what you would want to do on an end user site since a user
 won't care which end tag was not properly nested.

 / Jonas


[Zhenbin Xu] The way it was designed is that responseXML is always non-null
once it is in the OPENED state.  I don't think IE currently gives out rich
error information about mimetype mismatch, but the design allows it to be
exposed on responseXML, if necessary.

It is still prudent to do the above check on an end user site because it is more
robust 
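For what it's worth, the two checks being debated can be combined defensively (a sketch only; parseError is the IE-only object discussed in this thread, and the helper name is made up):

```javascript
// Null-check responseXML first (covers non-XML mime types and engines
// that return null), then consult IE's parseError object if present.
function checkResponse(xhr) {
  var doc = xhr.responseXML;
  if (doc === null) {
    return { ok: false, reason: "no XML document" };
  }
  if (doc.parseError && doc.parseError.errorCode !== 0) {
    return { ok: false, reason: doc.parseError.reason };
  }
  return { ok: true, root: doc.documentElement };
}
```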

Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Zhenbin Xu wrote:


[Zhenbin Xu] Regardless of what different browsers do today, rich parsing
error information is an important feature for developers. I have found it
can pinpoint the exact problem that otherwise would have been difficult
to identify when I sent an incorrectly constructed XML file.

And given that the goal is to define a useful spec for future
XHR implementations, we should define how rich parser errors are
surfaced instead of omitting them because nobody but IE supported it.

It is even more important that we define it in the XHR spec because it
is not part of DOM Core. Otherwise such a key piece would be lost and
we will have diverging implementations.

The goal of XHR Level 1 was to get interoperability on the feature set
that exists across the major implementations of XHR today, so I don't
think parse error information fits the bill there, but it sounds like a
great feature to look into for XHR Level 2.


[Zhenbin Xu]


Did something happen to your reply here?


[Zhenbin Xu] The way it was designed is that responseXML is always non-null
once it is in the OPENED state.  I don't think IE currently gives out rich
error information about mimetype mismatch, but the design allows it to be
exposed on responseXML, if necessary.


Ah, good to know. I'm not a big fan of this design; it feels more 
logical to me not to return a document object if no document was sent. 
But I guess it depends on how you look at it.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
And it's useful for pages that contain private information only when 
cookies are sent, but when no cookies are sent they only provide public 
information. I've given two examples of this in other threads:


1. A news site serving articles in different categories. When the user
   is logged in and has configured a home zipcode includes a category
   of local news.

   Example: news.yahoo.com

2. A discussion board that allows comments to be marked private. Only
   when a user is logged in and has access to private comments are the
   private comments included, otherwise only the public comments are
   shown.

   Example: bugzilla.mozilla.org


For these, how would the site initiating the connection to the data 
provider server know whether or not to include the load-private-data flag?


The same way that it knows which URI to load. I expect that sites will 
document what resources can be loaded at what URIs, and with which query 
parameters as part of the API documentation. Whether private data is 
served can be documented at the same place. Along with information on 
what to do if access is denied for that private information.


Surely if the server does anything with the load-private-data flag, then 
it is fundamentally as vulnerable as if we didn't do any of this.


Yes, this is about reducing the likelihood that things go wrong, not 
eliminating it, as that seems impossible.


This 
only helps with servers that have same-domain pages that accept cookies, 
but have no cross-domain pages that accept cookies, ever (since if any of 
the cross-domain pages accept cookies, then our initial assumption -- that 
the site author makes a mistake and his site reacts to cookies in 
third-party requests by doing bad things -- means that he's lost).


How so? Sites that have a combination of private and public data can, 
and hopefully will, only set the Access-Control-With-Credentials header 
for the parts that serve private data. They need to apply different 
opt-in policies here anyway, since they need to ask the user before 
sharing any of his/her data.


/ Jonas



RE: Further LC Followup from IE RE: Potential bugs identified in XHR LC Test Suite

2008-06-19 Thread Zhenbin Xu

I think we are now off track.

Nonetheless we should realize that a customer cannot write an interoperable
page against my fictional home-grown browser if it doesn't exist or doesn't
have the needed feature when the page was written.  I doubt customers would
write against a particular browser if equal effort could result in
interoperable solutions running on top of multiple browsers.  Now that the
solutions are in place, they deserve our consideration.

I would argue it is important to think about the customer's migration path
when we design new features or standardize existing ones.  Otherwise we
would be painting customers into a corner and blaming them for their dilemma.



 -Original Message-
 From: Ian Hickson [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 18, 2008 9:48 PM
 To: Zhenbin Xu
 Cc: Sunava Dutta; Web API public; IE8 Core AJAX SWAT Team; public-
 [EMAIL PROTECTED]
 Subject: RE: Further LC Followup from IE RE: Potential bugs
 identified in XHR LC Test Suite

 On Wed, 18 Jun 2008, Zhenbin Xu wrote:
 
  In the case there aren't clear technical differences, I don't think we
  should pick the right solution based on implementer's cost. Rather we
  should base it on customer impact. A bank with 6000 applications built
  on top of IE's current APIs simply would not be happy if some
  applications cannot run due to changes in some underlying object model.
  And this is not IE's problem alone, since the bank simply cannot
  upgrade or change the browser if all other browsers result in the same
  breakage.

 For non-Web HTML pages like in this example, solutions like IE's IE7
 mode are fine. IMHO we should be concentrating on pages on the Web,
 not
 on browser-specific pages -- interoperability isn't relevant when the
 page isn't intended to run on multiple browsers.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: Opting in to cookies - proposal

2008-06-19 Thread Jon Ferraiolo




 Maciej Stachowiak wrote:
 
 
  On Jun 14, 2008, at 4:23 AM, Jonas Sicking wrote:

...snip...


  I mean, I guess
  it's possible people will do this, but people could add
  Access-Control-Allow-Credentials site-wide too. And if we add
  Access-Control-Allow-Credentials-I-Really-Mean-It, they'll add even
more.

 Yes, this is certainly a possibility. But my hope is that this will
 happen to a smaller extent.


I share the "smaller extent" hope with Jonas, and his latest proposals
look good to me.

My assumption is that 99% of all cross-site XHR usage will not require
credentials/cookies. Therefore, what makes sense is a simple way that
server developers can opt-in to credential-free cross-site XHR which tells
the browser to allow cross-site credential-free XHR to their site. Then, in
an advanced section of the AC spec, talk about how some workflows might
want credentials to be sent, and here is the extra header to enable
credentials (Access-Control-Allow-Credentials), but this section of the
spec should include SHOUTING TEXT about potential dangers and instruct the
developer that he should not enable transmission of credentials unless he
is sure that he needs it and he is sure that he knows what he is doing
(such as understanding what a CSRF attack is). I realize that some
developers won't read the spec carefully or notice the shouting text, but I
expect most tutorials and examples on the Web will follow the lead from the
spec and help teach people to steer clear of the
Access-Control-Allow-Credentials header unless they know what they are
doing.

Jon

Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Ian Hickson

On Thu, 19 Jun 2008, Jonas Sicking wrote:
  
  This only helps with servers that have same-domain pages that accept 
  cookies, but have no cross-domain pages that accept cookies, ever 
  (since if any of the cross-domain pages accept cookies, then our 
  initial assumption -- that the site author makes a mistake and his 
  site reacts to cookies in third-party requests by doing bad things -- 
  means that he's lost).
 
 How so? Sites that have a combination of private and public data can, 
 and hopefully will, only set the Access-Control-With-Credentials header 
 for the parts that serve private data. They need to apply different 
 opt-in policies here anyway, since they need to ask the user before 
 sharing any of his/her data.

The scenario we are trying to address is the scenario where an author has 
accidentally allowed cross-site access to a part of the site that gives 
users abilities if they provide valid credentials, to prevent other sites 
from pretending to be the user and acting in a user-hostile way as if on 
the user's behalf.

Thus we are assuming that if a cookie is sent to the server with a 
cross-site request, the server will be vulnerable. That is the fundamental 
assumption.

Now, we can work around that by making it that authors don't accept 
cookies for cross-site requests, but only accept them from same-site 
requests. That works, because our assumption only relates to cross-site 
requests that _do_ include cookies.

If the server then opts-in to receiving cookies, then the server will 
receive cookies. Our assumption is that if a cookie is sent to the server 
with a cross-site request, the server will be vulnerable. Thus the server 
is now again vulnerable.

We can't pretend that the author will make a mistake if they always 
receive cookies but then assume that the author will suddenly stop making 
mistakes when we provide them with a way to opt-in to cookies. Either the 
author is going to make mistakes, or he isn't. We have to be consistent in 
our threat assessment.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Opting in to cookies - proposal

2008-06-19 Thread Maciej Stachowiak



On Jun 19, 2008, at 1:48 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:


After reviewing your comments, I am much more inclined to favor  
Microsoft's proposal on this: rename the relevant headers. I think  
you argued that this doesn't scale, but I think only two headers  
have to be renamed, Cookie and Authorization. Note that other  
authentication-related headers, if any, do not need to be renamed,  
because without the Authorization header being present, no other  
authentication processing will take place. If the headers have  
different names, it's very hard to reveal private data  
accidentally. No site-wide blanket addition with headers will cause  
it, you have to go out of your way to process an extra header and  
treat it the same as Cookie. It would allow servers to  
choose whether to offer personalized or public data and change  
their mind at any time, without having to change clients of the  
data. It would also work for inclusion of cross-site data via  
mechanisms that don't have a convenient way to add an out of band  
flag.
The only downside I can think of to this approach is that it may  
break load balancers that look at cookies, unless they are changed  
to also consider the new header (Cross-Site-Cookie?).


Using different header names would certainly address the concern I  
have regarding reducing the risk that private data is inadvertently  
leaked.


However I think the downsides are pretty big. The load balancer  
issue you bring up is just one example. Another is that I think  
caching proxies today avoid caching data that is requested using  
Authentication headers. Probably the same is true for the cookie  
header in some configurations.


I think going against the HTTP spec carries big unknown risks. I'm  
sure others with more HTTP experience than me can chime in here  
better.


I don't see how this would go against HTTP. It's perfectly valid HTTP  
to not send Cookie or Authorization headers, and also valid to  
send whatever custom headers you want if agreed upon with the server.
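Server-side, the renamed-header idea might be sketched like this ("Cross-Site-Cookie" is the placeholder name floated above; the helper function is hypothetical):

```javascript
// Decide which credentials, if any, the application should honor.
// Same-site requests carry the normal Cookie header; cross-site
// requests carry the renamed header, which is ignored unless the
// application explicitly opts in.
function effectiveCookie(headers, optIn) {
  if ("Cookie" in headers) {
    return headers["Cookie"];            // same-site: normal behavior
  }
  if (optIn && "Cross-Site-Cookie" in headers) {
    return headers["Cross-Site-Cookie"]; // deliberate cross-site opt-in
  }
  return null;                           // default: no credentials honored
}
```

The point of the design is visible in the last branch: a site-wide blanket configuration cannot accidentally turn the renamed header into a cookie; application code has to process it on purpose.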


The cost and risk of adding an extra boolean to XMLHttpRequest seems  
much lower.


The cost of tying server-side changes to client-side changes for a  
cross-site communication technology seems like a very high cost to me.  
I don't buy the argument that it's normal to change when you change  
what data you are reading - the data being personalized (or not) is a  
different kind of change from changing the type of data you read. To  
compare to the user-level example, GMail and GCal are different URLs,  
but nytimes.com is the same URL whether I'm logged in or not.


Regards,
Maciej




Re: responseXML/responseText exceptions and parseError

2008-06-19 Thread Jonas Sicking


Zhenbin Xu wrote:

Sorry, I accidentally deleted part of my reply. Inline...


-Original Message-
From: Jonas Sicking [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 19, 2008 2:17 PM
To: Zhenbin Xu
Cc: Anne van Kesteren; Sunava Dutta; IE8 Core AJAX SWAT Team; public-
[EMAIL PROTECTED]
Subject: Re: responseXML/responseText exceptions and parseError

Zhenbin Xu wrote:

[Zhenbin Xu] Regardless of what different browsers do today, rich parsing
error information is an important feature for developers. I have found it
can pinpoint the exact problem that otherwise would have been difficult
to identify when I sent an incorrectly constructed XML file.

And given that the goal is to define a useful spec for future
XHR implementations, we should define how rich parser errors are
surfaced instead of omitting them because nobody but IE supported it.

It is even more important that we define it in the XHR spec because it
is not part of DOM Core. Otherwise such a key piece would be lost and
we will have diverging implementations.

The goal of XHR Level 1 was to get interoperability on the feature set
that exists across the major implementations of XHR today, so I don't
think parse error information fits the bill there, but it sounds like a
great feature to look into for XHR Level 2.

[Zhenbin Xu]

Did something happen to your reply here?



[Zhenbin Xu] Indeed it would already be very useful if XHR1 is to summarize
the behaviors of all major implementations today and document the common
behaviors. In that case the spec should try to accommodate all major
browsers and leave controversial parts to XHR2. This is why we suggested
the "null or exception" language.


So the strategy the spec has been following has been to take the feature 
set that is common between browsers, but for those features try to define 
a good, useful spec that achieves interoperability and a solid API for 
developers to develop against.


The reason we didn't want to write a spec that accommodates all major 
browsers is that such a spec would be largely useless. When you get into the 
details browsers behave differently enough that the spec would have to 
be so fuzzy that it would essentially just be a tutorial, not a 
specification.


Wording like "null or exception" is a pain for developers. It's also 
something that is hard to build future specifications on, especially 
when taken to the whole XHR spec.


Hope that explains why the spec looks the way it does?

/ Jonas
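
The two parse-error surfaces being compared in this thread could be sketched
roughly as below. This is only an illustrative sketch: the property names
(MSXML's `document.parseError` with `errorCode`, `line`, and `reason`, and
Mozilla's replacement `<parsererror>` document) are assumptions drawn from
the discussion, not from the XHR Level 1 spec, which defines neither.

```javascript
// Hedged sketch: neither branch is specified by XHR Level 1; the property
// names below are assumptions for illustration only.
function describeParseError(xhr) {
  const doc = xhr.responseXML;
  if (doc && doc.parseError && doc.parseError.errorCode !== 0) {
    // IE/MSXML style: structured diagnostics hang off the document itself.
    return "line " + doc.parseError.line + ": " + doc.parseError.reason;
  }
  if (doc && doc.documentElement &&
      doc.documentElement.localName === "parsererror") {
    // Mozilla style: the response document is replaced by an error document.
    return doc.documentElement.textContent;
  }
  return null; // parsed fine, or no script-visible diagnostics
}
```

Julian's point above is that only the first style makes the diagnostics
automatable; the error-console approach leaves this function with nothing
to return.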





Re: Opting in to cookies - proposal

2008-06-19 Thread Jonas Sicking


Maciej Stachowiak wrote:


On Jun 19, 2008, at 1:48 PM, Jonas Sicking wrote:


Maciej Stachowiak wrote:


After reviewing your comments, I am much more inclined to favor 
Microsoft's proposal on this: rename the relevant headers. I think 
you argued that this doesn't scale, but I think only two headers 
have to be renamed, Cookie and Authorization. Note that other 
authentication-related headers, if any, do not need to be renamed, 
because without the Authorization header being present, no other 
authentication processing will take place. If the headers have 
different names, it's very hard to reveal private data accidentally. 
No site-wide blanket addition with headers will cause it; you have to 
go out of your way to process an extra header and treat it the same 
as Cookie. It would also allow servers to choose whether to offer 
personalized or public data and change their mind at any time, 
without having to change clients of the data. It would also work for 
inclusion of cross-site data via mechanisms that don't have a 
convenient way to add an out of band flag.
The only downside I can think of to this approach is that it may 
break load balancers that look at cookies, unless they are changed to 
also consider the new header (Cross-Site-Cookie?).


Using different header names would certainly address the concern I 
have regarding reducing the risk that private data is inadvertently 
leaked.


However I think the downsides are pretty big. The load balancer issue 
you bring up is just one example. Another is that I think caching 
proxies today avoid caching data that is requested using 
Authentication headers. Probably the same is true for the cookie 
header in some configurations.


I think going against the HTTP spec carries big unknown risks. I'm 
sure others with more HTTP experience than me can chime in here better.


I don't see how this would go against HTTP. It's perfectly valid HTTP to 
not send Cookie or Authorization headers, and also valid to send 
whatever custom headers you want if agreed upon with the server.


You are sending data that the HTTP spec specifies should be sent over an 
Authorization header over another header instead.


In any case, whether you are breaking the letter and/or spirit of the spec is 
of little matter. What matters is what the actual effects of sending this 
information over new header names are.


The cost and risk of adding an extra boolean to XMLHttpRequest seems 
much lower.


The cost of tying server-side changes to client-side changes for a 
cross-site communication technology seems like a very high cost to me. I 
don't buy the argument that it's normal to change when you change what 
data you are reading - the data being personalized (or not) is a 
different kind of change from changing the type of data you read. To 
compare to the user-level example, GMail and GCal are different URLs, 
but nytimes.com is the same URL whether I'm logged in or not.


First of all, I'm unconvinced that this will happen in reality. Can you 
explain a use case where a site would want to change from serving public 
data to serving private data without changing anything else? And is this 
common enough that we need to cater to it?


When you are sending personalized data you pretty much have to have a 
different API anyway. At the very least the mashup sites will 
have to deal with access being denied and display some UI to the user, 
or redirect the user to the content site to log in and/or authorize the 
mashup site to read the data.


I would think that in most cases you would keep the old URI serving the 
public data, and set up a new URI where you can fetch the private data.


It might be a small change implementation-wise, but semantically and 
security-wise there is a big difference between serving public and private data.


/ Jonas
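
The renamed-header idea Maciej summarizes above could be sketched like this.
Everything here is an assumption for illustration: "Cross-Site-Cookie" is the
tentative name floated in the thread, not a standardized header, and the
opt-in flag stands in for whatever server-side configuration would be used.

```javascript
// Sketch of the renamed-header proposal: the browser strips Cookie from
// cross-site requests and carries the value under a new, hypothetical name.
function effectiveCookies(headers, serverOptsIn) {
  if ("Cookie" in headers) {
    return headers["Cookie"]; // same-site request: nothing changes
  }
  if (serverOptsIn && "Cross-Site-Cookie" in headers) {
    // A server only sees cross-site cookies if it goes out of its way to
    // read the renamed header and treat it like Cookie.
    return headers["Cross-Site-Cookie"];
  }
  return ""; // cross-site request to a server that never opted in
}
```

This illustrates why accidental leakage is hard under that proposal, and also
why load balancers and caching proxies keyed on the Cookie header would need
updating, which is Jonas's objection.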



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
This only helps with servers that have same-domain pages that accept 
cookies, but have no cross-domain pages that accept cookies, ever 
(since if any of the cross-domain pages accept cookies, then our 
initial assumption -- that the site author makes a mistake and his 
site reacts to cookies in third-party requests by doing bad things -- 
means that he's lost).
How so? Sites that have a combination of private and public data can, 
and hopefully will, only set the Access-Control-With-Credentials header 
for the parts that serve private data. It needs to apply different 
opt-in policies here anyway since it needs to ask the user before 
sharing any of his/her data.


The scenario we are trying to address is the scenario where an author has 
accidentally allowed cross-site access to a part of the site that gives 
users abilities if they provide valid credentials, to prevent other sites 
from pretending to be the user and acting in a user-hostile way as if on 
the user's behalf.


Thus we are assuming that if a cookie is sent to the server with a 
cross-site request, the server will be vulnerable. That is the fundamental 
assumption.


Now, we can work around that by making it that authors don't accept 
cookies for cross-site requests, but only accept them from same-site 
requests. That works, because our assumption only relates to cross-site 
requests that _do_ include cookies.


If the server then opts-in to receiving cookies, then the server will 
receive cookies. Our assumption is that if a cookie is sent to the server 
with a cross-site request, the server will be vulnerable. Thus the server 
is now again vulnerable.


We can't pretend that the author will make a mistake if they always 
receive cookies but then assume that the author will suddenly stop making 
mistakes when we provide them with a way to opt-in to cookies. Either the 
author is going to make mistakes, or he isn't. We have to be consistent in 
our threat assessment.


Yes, if they only do that then they will be vulnerable.

The site is as always responsible for asking the user before allowing 
third-party access to private data, and yes, if they fail to do so 
properly they will be vulnerable.


/ Jonas



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Ian Hickson

On Thu, 19 Jun 2008, Jonas Sicking wrote:
 
 The site is as always responsible for asking the user before allowing 
 third-party access to private data, and yes, if they fail to do so 
 properly they will be vulnerable.

So I guess I don't really understand what your proposal solves, then. It 
seems like a lot of complexity for only a very minimal gain in only one 
very specific scenario (the site doesn't ever return cookie-based data 
cross-site). We're still relying on the author not making mistakes, 
despite "the author will make a mistake" being our underlying assumption. 
If the site has to know to not include the cookie opt-in header, why not 
just have the site ignore the cookies? (It also introduces the problems 
that Maciej mentioned, which I think are valid problems.)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
The site is as always responsible for asking the user before allowing 
third-party access to private data, and yes, if they fail to do so 
properly they will be vulnerable.


So I guess I don't really understand what your proposal solves, then. It 
seems like a lot of complexity for only a very minimal gain in only one 
very specific scenario (the site doesn't ever return cookie-based data 
cross-site). We're still relying on the author not making mistakes, 
despite "the author will make a mistake" being our underlying assumption. 
If the site has to know to not include the cookie opt-in header, why not 
just have the site ignore the cookies? (It also introduces the problems 
that Maciej mentioned, which I think are valid problems.)


Well, we are talking about two very different types of mistakes, which 
I think have very different likelihoods of happening, if I understand 
you correctly.


One mistake is having URIs in the URI space where you opt in to 
Access-Control which serve private data without you realizing it.


The other mistake is intentionally publishing private data but 
forgetting to ask your users first before doing so.


Seems to me that the former is a lot more likely than the latter.

/ Jonas

Btw, I just realized that this thread says version 3, not sure why I 
made that mistake, this is obviously version 2 :)




Re: Opting in to cookies - proposal version 3

2008-06-19 Thread Jonas Sicking


Ian Hickson wrote:

On Thu, 19 Jun 2008, Jonas Sicking wrote:
So I guess I don't really understand what your proposal solves, then. 
It seems like a lot of complexity for only a very minimal gain in only 
one very specific scenario (the site doesn't ever return cookie-based 
data cross-site). We're still relying on the author not making 
mistakes, despite "the author will make a mistake" being our 
underlying assumption. If the site has to know to not include the 
cookie opt-in header, why not just have the site ignore the cookies? 
(It also introduces the problems that Maciej mentioned, which I think 
are valid problems.)
Well, we are talking about two very different types of mistakes, which 
I think have very different likelihoods of happening, if I understand 
you correctly.


One mistake is having URIs in the URI space where you opt in to 
Access-Control which serve private data without you realizing it.


The other mistake is intentionally publishing private data but 
forgetting to ask your users first before doing so.


Seems to me that the former is a lot more likely than the latter.


Right, but the mistake that we're not doing anything about, and which seems 
likely to be far more common than either of those, is:


Having URIs in the URI space where you opt in to Access-Control _and_ opt 
in to cookies which serve or affect private data without you realizing it.


Yes, the best way I can think of to reduce this risk is to reduce the
number of URIs where you opt in to cookies. That is what my proposal
tries to accomplish.

That is, your solution only works so long as the site doesn't ever opt in 
to cookies. Which seems uncommon.


This is not true. You can opt in to cookies on just a subset of the
URIs where you opt in to Access-Control with my proposal.

Additionally, this way you can make sure to ask the user always before 
sending the Access-Control-With-Credentials header. This way the risk of 
leaking private data without the user realizing is further reduced.


If cookies are always turned on for Access-Control there will be many 
situations where you will want to enable Access-Control without asking a 
user, since you are not sending the user's private data.


(I'm assuming that the case of providing data cross-domain for simple GET 
requests is most easily handled just by having that script send back the 
right magic, in which case none of this applies as the URI space is one 
URI and there are no preflights at all. For this use case we don't have 
to worry about cookies at all as the server just wouldn't look at them.)


I'm not following what you are saying here. What script is "that 
script"? And what is "the right magic"?


I am just as concerned about GET requests as any other. In fact, all the 
private data leaks I've heard about with crossdomain.xml have been 
related to GET requests.


/ Jonas
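
Jonas's per-URI opt-in could be sketched server-side as below. The header
names follow the 2008-era Access-Control drafts and this thread's proposed
Access-Control-With-Credentials; both the exact header values and the URI
layout are illustrative assumptions, not spec citations.

```javascript
// Minimal sketch: opt in to Access-Control broadly, but emit the proposed
// credentials opt-in header only on URIs serving private, per-user data.
const PRIVATE_PREFIXES = ["/api/private/"]; // hypothetical URI layout

function accessControlHeaders(path) {
  const headers = { "Access-Control": "allow <*>" }; // illustrative syntax
  if (PRIVATE_PREFIXES.some(function (p) { return path.indexOf(p) === 0; })) {
    // Only here may the browser include cookies/auth on the request, and
    // only here need the site ask the user before sharing data.
    headers["Access-Control-With-Credentials"] = "true";
  }
  return headers;
}
```

The point of the design is visible in the sketch: the set of URIs exposed to
cookie-carrying cross-site requests is an explicit, small allowlist rather
than everything that opts in to Access-Control.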



RE: New: Tracking Issues in XHR that we raised RE: Was: Further LC Followup from IE RE: Potential bugs identified in XHR LC Test Suite

2008-06-19 Thread Zhenbin Xu



 -Original Message-
 From: Jonas Sicking [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 19, 2008 7:22 PM
 To: Zhenbin Xu
 Cc: Sunava Dutta; Ian Hickson; public-webapps@w3.org; IE8 Core AJAX
 SWAT Team
 Subject: Re: New: Tracking Issues in XHR that we raised RE: Was:
 Further LC Followup from IE RE: Potential bugs identified in XHR LC
 Test Suite

 Zhenbin Xu wrote:
 
  -Original Message-
  From: Jonas Sicking [mailto:[EMAIL PROTECTED]
  Sent: Thursday, June 19, 2008 2:38 PM
  To: Sunava Dutta
  Cc: Ian Hickson; Zhenbin Xu; public-webapps@w3.org; IE8 Core AJAX
 SWAT
  Team
  Subject: Re: New: Tracking Issues in XHR that we raised RE: Was:
  Further LC Followup from IE RE: Potential bugs identified in XHR LC
  Test Suite
 
  Sunava Dutta wrote:
  Thanks Ian, Zhenbin for clarifying the issues and a continuing very
  productive discussion.
  Meanwhile, I'm summarizing some of our requests for the editor based
  on issues we've had an opportunity to clarify... There are many
  conversations going on and I'd hate to see points getting lost and
  would like the specs/test cases updated for issues where discussions
  are not ongoing.


  - Ongoing discussion: Specify the parseError attributes for Document
  Objects or specify this can be sent out of band. This could be
  something we don't have to hold the XHR spec back for as long as we
  make a note in the specification that this is pending. There are
  people currently talking for and/or against it. Zhenbin is
  articulating IE's point.
  Sounds good to me. We have an informative "Not in this specification"
  section already; sounds like a good idea to add there.

  - Throwing exceptions on state violations is easier to understand and
  we should change the spec to reflect this (for the sake of a
  consistent programming model). The spec should have an INVALID_STATE_ERR
  exception (the exact language can be worked out) if a site is
  expecting an exception and gets a null, as this would not work if the
  developer is trying to write a wrapper object around XHR. I haven't
  heard any strong objection here or a compelling argument against it
  that's been sustained.
  I do think there has been some disagreement here. Anne has commented on
  reasons for returning null rather than throwing an exception, and I
  think I agree with him. I think the correct course of action here is to
  raise an issue in the issue tracker.
 
 
  [Zhenbin Xu] I am not familiar with the process here. How does an issue
  get resolved when it is in the issue tracker?

 Having it in the issue tracker just makes it easier to track since the
 issue tracker keeps track of all emails on the subject (if you put
 "ISSUE XX" in the subject).

  We have enough debate already
  establishing that there is technical merit to throwing an exception
  rather than returning null. It is
  not going to be productive for us to keep spending time on it.

 I do agree we have debate; I don't agree we have agreement that
 throwing an exception is the right thing to do.

 The argument for returning null is that it makes for a cleaner API:
 exceptions should only be thrown in exceptional circumstances. And based
 on available data it doesn't seem like sites currently care one way or
 another, so I think we should go for the cleaner API.

 What is the argument for throwing an exception?

 / Jonas


[Zhenbin Xu] State violations. XHR is designed as a state machine. The whole
spec is written centered around states (OPENED, SENT, etc.). Remember we
are not designing a new object. It is an object invented a long time ago,
and its inventor decided on the exception model.

Yes, there are complexities surrounding a state-machine approach. This
is why, in our new XDR model, we no longer use the readyState model.
However, we cannot retrofit an object that was created almost ten years ago.
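
The two designs being debated can be contrasted with stand-in classes; neither
is a real XMLHttpRequest, and the readyState value 4 ("loaded"/done) and the
INVALID_STATE_ERR name are taken from the discussion above, not from running
implementations.

```javascript
// Stand-ins for the two error-handling styles under debate.
class NullStyleXHR {
  constructor() { this.readyState = 0; this._xml = null; }
  get responseXML() {
    // "null" style: out-of-state access quietly yields null.
    return this.readyState === 4 ? this._xml : null;
  }
}

class ExceptionStyleXHR {
  constructor() { this.readyState = 0; this._xml = null; }
  get responseXML() {
    // "exception" style: out-of-state access raises INVALID_STATE_ERR.
    if (this.readyState !== 4) throw new Error("INVALID_STATE_ERR");
    return this._xml;
  }
}
```

The trade-off in the thread is visible here: the null style is a cleaner API
for callers that just check the return value, while the exception style makes
state violations loud, which matters to wrapper authors relying on the
original state-machine contract.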


Re: Improving Communication and Expectations

2008-06-19 Thread Marcos Caceres

Hi Marc,
On Thu, Jun 19, 2008 at 6:05 AM, Marc Silbey
[EMAIL PROTECTED] wrote:
 Hey Marcos,

 I totally understand why you would be frustrated by our behavior here.

 I owe you, Anne, Art and the rest of the WAF group an apology for falling off 
 the radar without telling you where I was going. I am definitely sorry for 
 that.

I'm sorry I didn't raise this issue earlier (and more politely).

 I remember having good conversations with you and others at the Boston f2f on 
 access control and then in email shortly after. I stopped attending WAF calls 
 and we stopped giving feedback on access control for a long while so I can 
 understand why you think we vanished to do our own thing. This was mostly due 
 to the fact that my role in IE changed a little over a year ago and I started 
 working more on accessibility (PF WG) and then at a different capacity 
 altogether.

 When I was active in WAF, it wasn't at all clear to me that members of the 
 Web API WG intended to apply the WAF's access control model directly to XHR. 
 As a result, I didn't make our XDR team aware of Web API's work. We want to 
 avoid this in the future by having more IE folks participate in the various 
 WGs. It also helps that Web API and WAF are merged now too.

 I want to step back for a moment; I joined the IE team during our "rebirth", 
 if you will. We largely have a new team of people working on IE now and 
 you're starting to see some of us be a part of the W3C. I'll be the first to 
 admit that we're making mistakes during our reentry into the standards 
 conversation. We care deeply about our common web developer and we really 
 want to work with you and others in the working groups to improve standards.


I think everyone in the WG shares those goals.

 We're always open to constructive feedback on how we can better engage. I'm 
 hopeful that we can work through the group's climate and technical issues 
 together quickly. It goes without saying that we have a lot of respect for 
 the folks in the group and so I'm also hopeful that our feedback will be 
 taken seriously.


I guess the simplest thing is to communicate. That does not mean
anyone expects Microsoft to disclose product information. If you guys
are busy and need to drop off for a while, let us know. We all rely on
Microsoft, who has the largest market share in this space, to be
engaged so we don't end up with you guys dropping an XDR-bomb on the
group and more fragmentation on the Web. I say this because all other
desktop browser vendors actively participated in the design of
Access-Control and chose not to run off and do their own thing (but
they could have). When you guys run off and do your own thing (as was
the case with XDR), people may start coming up with all sorts of
ridiculous conspiracy theories [1] as to why you did that.

Kind regards,
Marcos

[1] http://datadriven.com.au/2008/06/18/ie8-xdomainrequest-conspiracy-theory/
-- 
Marcos Caceres
http://datadriven.com.au
http://standardssuck.org