Zhenbin Xu wrote:
I want to re-emphasize that XDR is targeting cross-domain access of
public data only. One can already access that public data on the
server anonymously. XDR allows this to be done from within the
browser rather than through a server-side proxy or custom
applications. The custom header is simply an additional measure to
allow the server to explicitly opt in.

What do you mean by "additional" here? In addition to what?

CS-XHR, on the other hand, appears to be trying to handle cross-domain
access of private data. I don't know if the private data is meant to
be something similar to a personal photo album or someone's private
bank account information. I would assume they have different security
requirements. I don't have a clear picture of how banks could utilize
CS-XHR to handle their private data. Trying to provide a general
solution here is bound to have a lot of pitfalls.

I think some people are as concerned about their personal photo album
as they are about their bank account, so I'm not sure there is a big
difference between the two. But I do agree that some personal data is
likely to have different security requirements than other data.

I don't know how the banking people will feel about CS-XHR. It should be as safe as any other HTTP/HTTPS transaction and banks seem happy to send banking data using those protocols.

I know this may sound a bit vague and doesn't address the questions
below. But this is a long conversation and I am not sure if we can
sort it all out without face-to-face discussions. So I wanted to get
the meta points across first, as they may be of interest.

We do have very limited time at the F2F, so the more we can resolve
beforehand, the further along we will be afterward. But of course I do
realize that everyone here has other obligations.

Still looking forward to hearing back regarding the specifics of my feedback.

/ Jonas

-----Original Message-----
From: Jonas Sicking [mailto:[EMAIL PROTECTED]
Sent: Monday, June 16, 2008 10:00 PM
To: Sunava Dutta
Cc: Arthur Barstow; Marc Silbey; public-webapps; Eric Lawrence; Chris
Wilson; David Ross; Mark Shlimovich (SWI); Doug Stamper; Zhenbin Xu;
Michael Champion
Subject: Re: Need PDF of MS' input [Was Re: Seeking earlier feedback
from MS]

Hi Sunava et al.,

Thanks for the feedback!

This is a great start for a discussion. I hope we can get to more
concrete discussions about the various issues that Microsoft is seeing
and try to figure out ways to address them.

There is a lot of experience at Microsoft on these issues, especially
as the first deployer of the XMLHttpRequest API, so I'm greatly
looking forward to using that experience to improve the
Access-Control spec.

Hopefully we can get to those meaty parts in the discussion that
follows from your mail.


I'll start with a mini FAQ to avoid repeating myself below:

Why is the PEP in the client rather than the server?

   In order to protect legacy servers, some of the enforcement will
   have to live in the client. We can't expect existing legacy servers
   to all of a sudden enforce something that they haven't before.

   In fact, even XDR uses client-side PEP. It's the client that looks
   for the XDomainRequest header and denies the webpage access to the
   data if the header is not there.

   Moreover, Access-Control does allow full PEP on the server, if the
   server so chooses, by providing an "Origin" header.
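
   (To make that concrete, here is a minimal sketch, as generic
   server-side JavaScript, of what full server-side PEP could look
   like. The handler and data-source names are hypothetical and not
   from any particular framework; the only spec-provided pieces are
   the "Origin" request header and the Access-Control response header
   syntax.)

     // Hypothetical server-side handler; request/response API names
     // are illustrative.
     function handlePartnerData(request, response) {
       var origin = request.getHeader("Origin");
       if (origin != "http://partner.example.com") {
         // Deny before any private data is generated or sent;
         // the PEP is entirely on the server here.
         response.setStatus(403);
         return;
       }
       response.setHeader("Access-Control",
                          "allow <partner.example.com>");
       response.write(lookUpPrivateData(request)); // hypothetical
     }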

Is Access-Control designed with "security by design"?

   Yes, in many ways. For example, Access-Control does not allow any
   requests to be sent to the server that aren't already possible
   today, unless the server explicitly asks to receive them.

   Additionally, Access-Control sets up a safe way to transfer private
   data. This prevents sites from having to invent their own
   mechanisms, which risks them inventing something less safe.

   Thirdly, Access-Control integrates well with the existing HTTP
   architecture of the web by supporting REST APIs and the
   Content-Type header. This allows existing security infrastructure
   to inspect and understand Access-Control requests properly.

What about DNS rebinding attacks?

   Even with DNS rebinding attacks, Access-Control is designed not to
   allow any requests which are not already possible in today's web
   platform as implemented in all major browsers.


The last point especially is something that seems to have been
misunderstood at Microsoft. It is not the case that DNS rebinding
attacks affect Access-Control any differently than they affect the
rest of the web platform. Any server that wants to protect itself
against DNS rebinding attacks in the current web platform will
automatically be protected against Access-Control. And any site that
does not protect itself is already vulnerable to exactly the same
attacks with Access-Control as it is on the current web platform. In
fact, Access-Control is less vulnerable than XMLHttpRequest on its own
is. So a server doesn't need to deploy anything extra to "defend"
itself against Access-Control.

  Section 4: Secure Design Principles

    Why Secure Design Principles Are Important

*/"Secure by design/*/, in /software engineering
<http://en.wikipedia.org/wiki/Software_engineering>/, means that the
software has been designed from the ground up to be secure. Malicious
practices are assumed, and care is taken to minimize impact when a
security vulnerability is discovered. For instance, when dealing with
/user <http://en.wikipedia.org/wiki/User_%28computing%29>/ input,
when
the user has to type his or her name, and that name is then used
elsewhere in the /program
<http://en.wikipedia.org/wiki/Computer_program>/, care must be taken
that when a user enters a blank name, the program does not break." -
/Secure by Design, Wikipedia
<http://en.wikipedia.org/wiki/Secure_by_design>
Secure design principles are key to ensuring that users, whether the
end-user or the service provider, are protected. The increasingly
hostile Web and ever more clever attackers have led to the
proliferation of new vectors like XSS and CSRF. On the Web of today,
it is critical that solutions be secure by design prior to release.
This does not guarantee that there will be no exploits; however, it
does ensure that the bug trail is significantly smaller, and it goes a
long way toward protecting the user. For more details on this, please
read our MSDN article on The Trustworthy Computing Security
Development Lifecycle
<http://msdn.microsoft.com/en-us/library/ms995349.aspx>.
This sounds great. We've been using these types of principles when
designing Access-Control too.

    Background of Client Side Cross-Domain Proposals

Cross-site XMLHttpRequest is essentially a combination of a
cross-domain access mechanism, Access Control
<http://dev.w3.org/2006/waf/access-control/> (AC), and an object to
enable this mechanism, in this case a versioned XMLHttpRequest object
called XMLHttpRequest Level 2
<http://dev.w3.org/2006/webapi/XMLHttpRequest-2/> (XHR). This
cross-domain implementation will be referred to as CS-XHR.

*NOTE: This paper is based on the AC and XHR Level 2 drafts of 3 June
2008.*

This is not entirely true. There is nothing that prevents
Access-Control from being applied to XMLHttpRequest Level 1, just as
Access-Control can be applied to XInclude, XQuery, XSLT, XPath,
<video>, CSS fonts, SVG, XBL, etc.

XDomainRequest (XDR) is the new object that we designed for
cross-domain access using a "clean room" approach: one where we start
with strict security principles and a "clean slate", and add
functionality only if it meets those principles.
We've used the same approach when designing Access-Control as well.

/"To me, it boils down to three issues: security, simplicity, and
architecture. I believe security concerns trump all others, and my
analysis is that Microsoft's security team made the right calls with
the XDR proposal, taking the conservative approach where no headers,
cookies or other credentials are transmitted to other domains, and
the
policy enforcement point (PEP) is assumed to be on the server. This
aligns with the de facto security model for today's Web where a user
establishes trust with the single domain, where the user and that
domain share secret information only between themselves, such as the
information stored in cookies. At OpenAjax Alliance, we have a
Security Task Force which contains some industry experts on web
security issues and the strong consensus (different than unanimity)
was a preference for XDR, mainly for security reasons. On the
simplicity side, XDR is appropriately simple (roughly as simple as
JSON Request), whereas Access Control has incrementally added
complexity (syntax rules for allowing/denying domains, two-step dance
for POST requests, detailed lists of headers that are transmitted) to
the point that it is now a small beast. On the architecture side,
Access Control is just plain wrong, with the PEP on the client
instead
of the server, which requires data to be sent along the pipe to the
client, where the client is trusted to discard the data if the user
isn't allowed to see the data; it is just plain architecturally wrong
to transmit data that is not meant to be seen. Regarding the
criticism
of XDR with more complex workflows where two sites need to work in
coordination with each other, possibly including the use of cookies
from the two sites, there are lots of ways to skin that cat and for
security reasons (such as CSRF concerns) should not be done within
the
context of the cross-domain request mechanism. For example, HTML5
allows postMessage(), so you can set up a web page with two IFRAMES,
each talking to a different server, and have them do client-side
communications via postMessage(); also, there are various server-side
alternatives to address these scenarios." - Jon Ferraiolo, Web
Architect, IBM & Open AJAX Alliance/
This is addressed by the FAQ above.

  Section 5: Security Concerns with Web API WG Proposal on
  Cross-Domain XMLHttpRequest

In this section, I'll demonstrate a few of these concerns that could
be critical blockers to implementation by browsers and security-minded
developers. Mozilla echoed our sentiments here by removing CS-XHR
support from the Beta
<https://bugzilla.mozilla.org/show_bug.cgi?id=424923#c14> until the
specification addressed further security concerns.
This is wholly false. The reasons we dropped support for
Access-Control in FF3 were very different from the concerns that
Microsoft has expressed. In fact, IE8 is vulnerable to the concerns
that we had. The debate we have had (and are still having) is whether
those concerns can be addressed without sacrificing security too much
elsewhere.

    Extending XHR for Cross-Domain Access

*XHR has a history of bugs, and extending it for cross-domain access
does not build confidence.*

        Recommendation

Rather than working backwards to secure an object with a poor
security record, it makes more sense to start from a basic
architecture and add functionality incrementally, securely, and only
as necessary.

        Discussion

XHR has a poor security record across all the major browsers, ranging
from header-spoofing attacks to redirection attacks. Header-spoofing
attacks are now even scarier given that CS-XHR uses headers to
determine which sites can access resources as well as what actions
they can take (HTTP verbs and headers).
So I want to try to understand this comment. Is the concern here over
the specific API that is used, or about what features that API allows?

It would be trivial to restrict XHR when used cross-site such that the
security model is exactly that of XDR. By disallowing headers from
being set or retrieved, and by restricting the method to only "GET"
and "POST", cross-site XHR would have exactly the same feature set as
XDR.
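
(As a sketch of what I mean, here is a hypothetical restricted
wrapper; nothing like this is in the spec, it just illustrates that
the XDR feature set is a strict subset of what XHR exposes:)

  // Hypothetical illustration: an XHR-like object whose cross-site
  // feature set matches XDR's: GET/POST only, no header access.
  function RestrictedXHR() {
    var xhr = new XMLHttpRequest();
    var self = this;
    this.open = function (method, url) {
      if (method != "GET" && method != "POST")
        throw new Error("Only GET and POST allowed cross-site");
      xhr.open(method, url, true);
    };
    this.send = function (body) { xhr.send(body); };
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && self.onload)
        self.onload(xhr.responseText); // body only, never headers
    };
    // Note: no setRequestHeader or getAllResponseHeaders exposed.
  }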

The discussion about which API to use is orthogonal to the discussion
about what features to allow for cross-site requests.

So is the concern here somehow related to security, or is it related
to concern about the confusion of reusing the same object for both
same-site requests and cross-site requests (which might have different
security restrictions applied to them)?

I'm not saying that API discussions aren't important. They most
certainly are. And it is an interesting discussion whether having the
same API for same-site and cross-site use (whatever that API is) is a
good idea or not. I'm just trying to understand what the exact concern
is here.

    XHR Behaves Differently in Cross-Domain Mode and Same-Site Mode.

*XHR behaves differently in cross-domain mode and same-site mode,
leading to unnecessary confusion for the web developer by being the
same API in name only.*


        Recommendation

XHR is a widely used object. Consequently, it is difficult to
re-engineer without breaking existing deployments, adding complexity,
and confusing developers. In the process this may introduce new holes
that require further patching. This different cross-domain behavior
means that it has all the disadvantages of XMLHttpRequest, like its
security flaws, without any clear benefit. Having a new object here,
without these redundant cross-domain properties like
getAllResponseHeaders, will mitigate a number of these worries.
Again, I'm confused about what the exact concern is here.

The only way I could see reusing the same API for cross-site requests
breaking existing deployments would be if such deployments rely on
calls to XHR.open throwing when the requested URI is a cross-site one;
now it will result in an error event being fired instead.

Is that the concern?
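
(To make the compatibility question concrete, a sketch; the fallback
function is hypothetical:)

  var xhr = new XMLHttpRequest();

  // Today, a legacy page might rely on open() throwing for
  // cross-site URIs:
  try {
    xhr.open("GET", "http://other.example.com/data", true);
  } catch (e) {
    useServerSideProxy(); // hypothetical legacy fallback path
  }

  // Under the proposal, open() succeeds and a denied request is
  // instead reported asynchronously via an error event:
  xhr.onerror = function () { useServerSideProxy(); };
  xhr.send(null);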

Regarding complexity, that sounds like complexity for implementors?
Same thing with "may introduce new holes that require further
patching"; that sounds like a concern about implementation bugs? Is
that correct?

Again, all of these things are important; I'm just trying to
understand what the exact concern is here. Complexity for implementors
is definitely important in order to reduce bugs in the implementation.

    Access-Control Rules that Allow Wildcards

*Requiring implementers to maintain access control rules that allow
wildcards can lead to deployment errors.*
Isn't XDomainRequest:1 the same thing as a wildcard? Doesn't that mean
that anyone can load the resource at that URI?

        Recommendation

*         For access where AC is important, other architectures, like
Server-Side Proxying
<http://developer.yahoo.com/javascript/howto-proxy.html> for service
providers who are interested in maintaining access control rules, and
the HTML 5 WG's Cross-Document Messaging, are recommended.

*         If you are going to use CS-XHR, we recommend avoiding
wildcards, auditing access control rules regularly, and avoiding
hosting sensitive data on domains that expose data to CS-XHR.
The first seems very drastic, and isn't really a viable replacement in
many cases. And the second doesn't seem like a recommendation for the
spec, but rather for someone deploying the spec?

Wouldn't it be better to recommend that the spec disallow wildcarding
in combination with transferring cookies? That is the kind of input I
was hoping for from Microsoft, and something that sounds like we
should take a serious look at.


        Future Work

Permitting the end user to decide whether the web application they're
using should be able to make a cross-domain request may be worth
investigating. There are significant user-experience challenges,
because the user may not understand the implications of such access.
Exactly. This hasn't worked very well in the past, which is why we've
chosen to avoid that path.

        Discussion

The service provider who sets the access permissions and returns the
requested content is another key player here. Providing a simple,
scalable solution will ensure that mistakes in permissions don't
unravel as services are deployed and maintained. For example, Flash
has an access control mechanism similar to the one in CS-XHR, and it
has been vulnerable to wildcarding attacks. Wildcarding attacks occur
when access controls are set in error (a distinct possibility as the
number of rules to filter cross-domain requestors increases and
becomes complex) and allow unintended access. This is especially scary
given that AC can send cookies and credentials in requests. This also
violates the AC draft's requirement
<http://www.w3.org/TR/access-control/#requirements> that it "should
reduce the risk of inadvertently allowing access when it is not
intended. That is, it should be clear to the content provider when
access is granted and when it is not."
I'm not sure I agree here. Saying "Access-Control: allow <*>" makes it
pretty clear that everyone can read this resource, so we seem to pass
the spec requirement fine.
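
For comparison, the two opt-in mechanisms look like this on the wire,
and both amount to "anyone may read this resource":

  XDomainRequest: 1

  Access-Control: allow <*>

The first is XDR's opt-in response header; the second is the
Access-Control draft syntax quoted above.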

        Community Comments
...
"Flickr was vulnerable to this exploit, because it hosted an "allow
all" policy file on its main domains: flickr.com and www.flickr.com
<http://www.flickr.com>. We notified Flickr and they fixed the hole
promptly by moving their APIs to a separate domain
<http://api.flickr.com/crossdomain.xml> and removing the
crossdomain.xml file on their main domain
<http://www.flickr.com/crossdomain.xml> (now 404)." - Julien Couvreur,
http://blog.monstuff.com/archives/000302.html
(Just commenting on the Flickr one here, as that's the one I know of.)

This wasn't technically a problem due to wildcarding. The solution
they use now still uses wildcards. This was a problem of them applying
the policy too broadly across their URI space.

    Access-Control Rules Visible on the Client

*Allowing access control rules to be visible on the client leads to
information disclosure.*
Only if the server so desires. The server has all the information
needed to make this a pure server-side policy, so this doesn't appear
to be an issue.

        Recommendation

*         XDR ensures that servers regulate access to individual
requests and that rules are not available to the client.
Actually, XDR doesn't allow either server-side or client-side rules;
it's purely all or nothing. XDR currently relies on the 'Referer'
header, which due to firewall filtering is unreliable to the point
that it's not useful for security checks.

        Discussion

The access control rules need not be exposed to the world, as this
information could potentially be sensitive. For example, your bank may
maintain a list of allowed partners based on your other frequently
accessed bank accounts. Making these rules available on the client can
lead to profiling attacks if the data is intercepted. While AC and XDR
allow servers to use the Access-Control-Origin header to make
access-control decisions, preventing the rules from being viewed on
the client, the reality is that in practice web developers are likely
to opt for what's easiest and will not leverage this, given the
alternative available in AC.
You think that sites will knowingly broadcast their rules to the
world, while being concerned that the world will read them? That seems
like a far stretch to me.

    Access-Control Rules in Headers

*Sending access control rules in headers can lead to inadvertent
access.*

        Recommendation

*         Enable users to restrict site-to-site access. This has its
own set of challenges that need to be investigated, like UI.

*         If you are using CS-XHR, we recommend not using it to send
sensitive data, so that if Access Control (AC) rules are compromised,
the impact of the data disclosed is minimal. Even when AC rules are
audited and maintained, if the rules are spoofed (a possibility
because XHR has been subject to header-spoofing attacks and AC rules
are maintained in headers), the data may be compromised.


        Discussion

*         The Web API cross-site XMLHttpRequest plan allows access
control rules to be in headers. This is especially dangerous given
that XMLHttpRequest has had header-spoofing attacks in the past on
multiple browsers. This could enable cross-domain access to legacy
sites not opted in to cross-domain use, or change access control rules
for existing sites using CS-XHR.

*         To make things even more confusing, both an XML file and
headers can be used to control access in cross-site XMLHttpRequest.

        Community Comments

/"(Description Provided by CVE) : Firefox before 1.0.7 and Mozilla
Suite
before 1.7.12 allows remote attackers to modify HTTP headers of XML
HTTP
requests via XMLHttpRequest, and possibly use the client to exploit
vulnerabilities in servers or proxies, including HTTP request
smuggling
and HTTP request splitting." /http://osvdb.org/osvdb/show/19645//

/"//That the XDR proposal enables cross-domain requests with minimal
complexity and in a way which is unlikely to cause IT administrators to
disable the feature, is, in my opinion, reason enough to be
enthusiastic. The XDR proposal seems like something that could be a
stable platform on which to start building new kinds of applications./

I'm not following this section at all. The first part talks about
allowing inadvertent access; my first guess was that that meant
inadvertent access to content. However, the Discussion section talks
about reading headers, so is that the concern?

Then the first community comment talks about inserting custom request
headers, which seems to be something different? And the second comment
talks about XDR being good, which seems totally unrelated to security
comments about the Access-Control spec.

Can you rephrase the concern here? I'm just not understanding it.


    Maintaining Access Control Based on a Header

*Maintaining access control based on a header that instructs the
client to serve the response to a particular domain/path, rather than
on an individual request, leads to the potential for inadvertent
access.*
Isn't XDR also header-based? Or is this a general concern about
header-based solutions, Access-Control and XDR alike?

        Recommendation

*         Ensure proper and complete URL canonicalization if
Access-Control is ever granted by path.
Hmm, this seems like a switch of topic. But yes, URL canonicalization
(or, as some people have preferred to call it, the process of mapping
a URI to a file path) is a problem.

*         Enforce access control on a per-request basis. Do not permit
policy from one URL to regulate access to another URL.
Hmm, again a topic switch. Can you please expand on this? It seems
unrelated to the header concern in the original title and to the URL
canonicalization concern in the previous comment.

        Discussion

This can lead to vulnerabilities that occur when the path of the
request can be modified by an attacker using special characters, a
flaw that we pointed out to Mozilla on a teleconference on
cross-origin requests. A solution here is currently being discussed by
the Web API WG (see right).
I assume that "This" in the first sentence refers to the "can be
granted by path" issue?

If so, yes. Based on experience from Adobe's crossdomain.xml
deployment, this seems to cover existing attacks.

Note that the AC draft can be demonstrated to require access control
implementers to take additional security measures
<http://lists.w3.org/Archives/Public/public-webapi/2008May/0435.html>,
although this is against the draft's own requirements
<http://www.w3.org/TR/access-control/#requirements> that it "must not
require content authors or site maintainers to implement new or
additional security protections to preserve their existing level of
security protection" and "must not introduce attack vectors to servers
that are protected only by a firewall."
How so? Access-Control is secure by design by requiring servers to opt
in. So if a server does nothing different from what it does today,
Access-Control will always bail early and apply the existing
same-origin policy that UAs apply today. That holds always, even for
content that is currently only protected by a firewall.

That seems to cover both of the requirements that you list above?

    Sending Cookies and Credentials Cross Domain

*Access Control sends cookies and credentials cross-domain in a way
that increases the possibilities of information disclosure and
unauthorized actions on the user's behalf.*
It is not the sending of the cookies that is the concern here; it's
the sending of the user's private data as a reply to the request that
is the concern, no?

Web developers want to transfer user-private data. They are going to
do so whether UAs provide official APIs for it or not.

So I do think that we need to provide a safe solution for transmitting
private data; simply saying "never send cookies" is not a viable
answer unless an alternative is presented.

So far, the solution that sites seem to use is to ask the user for the
username/password for the third-party site, and then employ a
server-to-server connection. This is a really bad design
security-wise, since it teaches users to give out their credentials.
This is especially bad if users use the same username/password on
multiple sites, something that is very common.

        Recommendation

*         Preventing cookies and other credentials from being sent
cross-domain will help ensure that private data is not inadvertently
leaked across domains.

*         The HTML 5 feature called Cross-Document Messaging, combined
with same-origin XMLHttpRequest, enables regulated cross-domain access
on the client without requiring potentially dangerous functionality
(e.g., cross-domain submission of headers).
This is definitely a decent alternative. The concern is that sites
will want to communicate directly with other servers rather than proxy
everything through javascript and iframes. This can be demonstrated by
the number of such solutions that are deployed today, despite the fact
that iframe communication (albeit cumbersome) has been available for
some time.
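
(For reference, the iframe/postMessage pattern being discussed looks
roughly like this; the frame id, origin, and consumer function are
hypothetical:)

  // Embedding page: an iframe served from partner.example.com does
  // ordinary same-origin XHR to its own server, then relays the
  // result back to us via HTML5 postMessage.
  var frame = document.getElementById("partnerFrame").contentWindow;
  frame.postMessage("getAccountSummary", "http://partner.example.com");
  window.addEventListener("message", function (event) {
    if (event.origin != "http://partner.example.com")
      return;                 // ignore messages from other origins
    showSummary(event.data);  // hypothetical consumer function
  }, false);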

Additionally, there is a risk that sites will stick with current
<script src=> solutions, which have a lot of security concerns apart
from the ones discussed here.

        Future Work

Future designs may include:

*         The user could enter credentials while making a proper trust
decision about who ultimately gets the credentials and to whom this
grants access. Any user trust decision needs to be properly
understood, as there is the possibility that poor UI design or
spoofing may lead to the user making the wrong decision. If done
correctly, this does provide the benefit of having the user's explicit
assent, and a number of existing software dialog warnings are
currently based on this mechanism.

*         The browser could send an XDomainRequestCookie header
<http://lists.w3.org/Archives/Public/public-webapi/2008May/0284.html>.
This would allow cookies to be sent in a header with a new name, so
that existing sites would not inadvertently receive a cookie without
realizing that the request is cross-domain. Sites could then ignore
this header and not take action based on the user's session
identifier. Aware servers, on the other hand, could read the new
header and provide useful, user-specific services based on its
contents. This of course requires server frameworks to be updated to
look for such cookies and parse them properly. In addition, any
intermediary proxy that behaves differently based on cookies would
break, but these are issues that are definitely worth a further look.
This doesn't scale very well, though. You'd also need to introduce an
XDomainRequestAuthorization header, an XDomainRequestFutureAuth
header, and so on, any time a new way of transmitting authorization
data is invented.

This is extra bad considering that you are violating the HTTP spec, so
any existing security infrastructure that deals with auth and cookie
headers will not recognize the new headers. So you are in fact
reducing certain aspects of security by going against the HTTP spec.
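
(Concretely, the proposal above means a cross-domain request would
carry something like the following, with illustrative values:

  GET /private/data HTTP/1.1
  Host: bank.example.com
  XDomainRequestCookie: sessionid=abc123

instead of the standard "Cookie: sessionid=abc123", which is exactly
why existing cookie-aware infrastructure would not recognize it.)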

        Discussion

The way AC does this increases the potential for cross-site request
forgeries, as requests will be automatically authenticated and may
contain headers otherwise impossible to send via script. For example,
a user may be authenticated via CS-XHR to his or her bank from their
online tax preparation site. If they subsequently visit an evil site,
it could craft CS-XHR requests to the bank site and send a token to
authorize actions. Even though CS-XHR requires an opt-in model from
the server (this is good), if there is an XSS vulnerability, an AC
header spoof, or a wildcard accidentally set, this opens up another
channel for unwanted authenticated actions.

In addition, a number of sites may assume and rely on cookies being
sent with cross-site requests, and this could become a third-party
problem if cookies are sent by default. As the Web API WG members
note, a large number of sites will not understand cookie authorization
and will wind up susceptible to CSRF.
(I assume there is a "not" missing in the first sentence of the above
paragraph?)

I don't understand the CSRF risks at all. CSRF is a problem with sites
thinking they are getting same-site requests, but forgetting that HTML
allows cross-site <form>s. CSRF forces sites to explicitly opt out of
getting certain requests.

Access-Control, on the other hand, is opt-in. I would think that
anyone who opts in to Access-Control realizes that they are going to
get cross-site requests; that is the only reason you would opt in,
that you want cross-site requests.

As far as XSS concerns go, those would seem to be, almost by design, a
problem with any solution deployed. For example, postMessage has
similar XSS concerns. If a bank site has allowed a tax preparation
site to perform postMessage, then if the tax preparation site gets XSS
attacked, the attacker could also attack the bank site.

The wildcard problem you raised above, so let's discuss it there. Or
is this one different in some way?

The header spoof I think you raised above too, but as you could see
there, I didn't really follow you :) But please do elaborate there if
it's the same issue.

Privacy: including the cookies lets sites more easily track users
across domains.
Access-Control follows the UA's privacy settings. So there should be
no more concern here than with a cross-site <img>.

        Community Comments

*         "Sending cookies, by default, with 'non-safe' requests.

o   Many of the risks that are associated with allowing cross-site
XHR, e.g. cross-site request forgery, can be mitigated by not sending
cookies with these requests.

*         Jonas is concerned that sites will assume and come to rely
upon browsers not sending cookies with cross-site requests, which
could lead to problems if we ever decide to start sending 3rd-party
cookies by default.

This is a concern if we *don't* send cookies in the initial
implementation.

*         We should not send cookies and auth headers."
http://wiki.mozilla.org/User:Sicking/Cross_Site_XHR_Review#Discussion_.26_Implications

For the most part, these concerns would be addressed by my proposals
to the mailing list. The remaining part is harder to solve, and IE8
suffers from this problem too.

# <http://krijnhoetmer.nl/irc-logs/whatwg/20080221#l-85> [00:04]
<Hixie> the reasons to include cookies are simple -- if we don't have
them, we (Google) basically can't use xhr.

. . .

# <http://krijnhoetmer.nl/irc-logs/whatwg/20080221#l-85> [00:19]
<sicking> so the thing is that CSRF today is kind of a catastrophe.
There are lots and lots and lots of sites that are susceptible to it.
If we had a world where cookies weren't sent for third-party requests
we'd be in a much safer web
Yup. Fortunately, Access-Control doesn't suffer from this problem,
since it requires opt-in. So it's not susceptible to CSRF in the same
way.

    Sending Arbitrary Headers Cross Domain

*Sending arbitrary headers cross-domain breaks a lot of assumptions
that sites today may make, opening them up to exploits. Creating
complex rules to limit the headers sent cross-domain makes the spec
even more difficult to deploy reliably.*
Headers are only sent if the server opts in under the current spec. So
I don't understand how this could break any assumptions that sites
make today.

        Recommendation

Do not allow arbitrary headers to be sent cross-domain. Avoid any
design where the list of blocked and allowed headers is likely to be
confusing and under constant revision as new attacks and interactions
arise.

If you are implementing CS-XHR, we advise you to take extreme caution
with what headers you allow in the OPTIONS request, in addition to
testing the allow list when opening up your service cross-domain.
Furthermore, we recommend taking extra caution by ensuring that the
headers do not specify actions that are dangerous if the request is
compromised by a DNS-rebinding attack.
*No* headers are allowed to be set by the website in the OPTIONS
request. Only once a site has explicitly opted in can custom headers
be set.

For GET requests there is a very short whitelist (currently only two
entries) of headers that can be set without the site opting in.
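
(Roughly, the pre-flight exchange for a custom header looks like this
under the current draft; the host names and the 200 response are
illustrative, and note that no author-supplied headers appear in the
OPTIONS request itself:

  OPTIONS /resource HTTP/1.1
  Host: service.example.com
  Access-Control-Origin: http://requester.example.com

  HTTP/1.1 200 OK
  Access-Control: allow <requester.example.com>

Only after this opt-in response does the actual request, including any
custom headers, go out.)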

        Discussion

In general, browsers today cannot send cross-domain GET/HEAD requests
with arbitrary headers. With AC, this now becomes possible, breaking
many previous assumptions. Microsoft is aware of sites dependent on
the expectation that arbitrary headers cannot be sent cross-domain,
and this is in accordance with HTML 4.0. This is not a good security
practice by any means, but enabling this functionality in a way that
compromises our users is not an option. As an example, UPnP allows GET
requests with a SOAPAction header to perform actions on a device. If
the SOAPAction header is not actively blocked by a cross-site
XMLHttpRequest client, attackers will be able to perform this attack
against routers and other UPnP devices. Contrast this with XDR, where
arbitrary headers cannot be supplied by default.
It seems like you are misunderstanding the spec. Only once a site has
opted in can arbitrary headers be set. So any currently existing site,
such as a UPnP device, is unaffected.

One option here is to create a block list of bad headers. However,
this quickly adds to the complexity of this already complex proposal,
and, to make things worse, it will need continual updates to the spec
once implementations have shipped and more blacklisted headers are
discovered. This will presumably prevent the spec from stabilizing,
and browsers will have to issue patches to secure their
implementations.
We all agree that block lists are unacceptable. The block list
currently in the spec really should be moved to the XMLHttpRequest
Level 1 spec, as that is where the issue lies, not with the
Access-Control spec.

A lower concern, but having an allow list would be another option.
That said, since web sites today do rely on arbitrary headers not
being allowed across domains, it is difficult to prove that the
headers on the allow list are not being used by sites for same-origin
requests.
Agreed 100%. The spec follows this design.

To make things even more complicated, the AC spec specifies a
complicated mix of allow lists, black lists, and other headers. For
example, if a header is not in an allow list, it needs a pre-flight
check. (The spec already requires pre-flight checks for non-GET HTTP
verbs.) This of course is another addition to the multi-part request
that AC allows, and if the server agrees, there's still a blacklist to
filter out headers that should not be allowed. The convoluted approach
continues with XMLHttpRequest Level 2 having its own set of blacklists
that filter headers out prior to cross-domain requests. Moving on,
this blacklist in XMLHttpRequest has both SHOULD NOT and MUST NOT
requirements for blocked headers, leaving the door open for different
behaviors across browsers.
Even when taking DNS rebinding attacks into account (at least as you
have defined them at the top of this email), Access-Control doesn't
allow any requests other than the ones already possible, so it's
secure by design.

The blacklist of headers in the Access-Control spec really is just the
same one as in the XHR spec, and really should just be covered there.

Header spoofing in XMLHttpRequest is a common vulnerability from the
past. Sending headers cross-domain may allow access control rules to
be changed, enabling legacy services not opted in to cross-site
XMLHttpRequest to become vulnerable.
I do agree there is some concern here. However, do note that this only
applies to servers that opt in. But I do personally have some concern
for the servers that do opt in.


    Allowing Arbitrary HTTP Verbs
For the sake of avoiding confusion: HTTP "verbs" are often referred to
as HTTP "methods", so you might see me and others use that term.

*Allowing arbitrary HTTP verbs to be sent cross-domain may allow
unauthorized actions on the server. Creating complex rules to secure
this opens up the possibility of other types of attacks.*
It would be great to have more detail here. I'm especially interested
in feedback from Microsoft, since I'm sure you have implementation
experience from deploying XMLHttpRequest. It would be great to hear
what types of problems you ran into at that time.

        Recommendation

*         Do not allow verbs other than GET and POST. This is in line
with the capabilities of HTML forms today and is specified by HTML 4.

*         If verbs are sent cross-domain, pin the OPTIONS request for
non-GET verbs to the IP address of subsequent requests. This is a
first step toward mitigating DNS rebinding and TOCTOU attacks.
As mentioned before, even with DNS rebinding attacks, Access-Control
is designed in such a way that it doesn't allow any types of requests
to be sent that can't already be sent on the current web platform.

However, the pinning is an interesting idea, and one we should discuss
further.
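
(A sketch of the pinning idea as I understand it, written as
hypothetical UA-internal pseudocode in JavaScript; none of these
functions exist anywhere, this is just the shape of the mitigation:)

  // Resolve the target host once, then use that single IP for both
  // the OPTIONS pre-flight and the actual request, so that a DNS
  // re-resolution between the two can't retarget the request.
  function pinnedCrossSiteRequest(host, preflight, actualRequest) {
    var pinnedIP = resolveHost(host); // hypothetical resolver
    if (!preflight(pinnedIP))
      return null;                    // server did not opt in
    return actualRequest(pinnedIP);   // reuse the pinned address
  }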

*         Using XMLHttpRequest to do this is inherently more
complicated, as XHR has its own rules for blocking verbs.
You mean that this is more complicated implementation-wise? As stated
before, implementation complexity is certainly important to take into
consideration; I'm just trying to understand your concern.


Looking forward to continued discussion on these topics. There is
definitely some interesting stuff in here, so I'm glad we got this
feedback!

Best Regards,
Jonas Sicking



