Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 5:38 AM, Ben Laurie b...@google.com wrote:
 I don't see why - if www.us.example.com chooses to delegate to
 www.hq.example.com, that is its affair, not ours, surely?

Following redirects is insecure for sites that let users configure redirects.

Every time you trade away security like this, you make it more likely
that host-meta will be unusable for secure metadata.  If host-meta is
unsuitable for secure metadata, folks that require security will just
work around host-meta by creating a secure-meta.  I can't tell you
which of the security compromises will cause this to happen.  Security
is often death by a thousand paper cuts that eventually add up to
you being owned.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 9:44 AM, Breno de Medeiros br...@google.com wrote:
 On Mon, Feb 23, 2009 at 9:32 AM, Adam Barth w...@adambarth.com wrote:
 Security is often death by a thousand paper cuts that eventually add up
 to you being owned.

 I don't understand this reasoning.

 1. The host-meta spec allows delegation to other domains/hosts

 2. Secure app does not allow redirection to other domains/hosts

 3. Secure app does not use host-meta and instead uses secure-meta, as opposed
 to, say, using host-meta and not following redirects to other sites?

What's the point of standardizing host-meta if every application will
require different processing rules to suit its own needs?
Applications will interoperate better by simply ignoring host-meta and
inventing their own metadata repository.

 For the secure app to be secure re: the no-redirect rule, it must anyway
 perform the check that the redirection is to another realm, surely?

To be secure, a user agent should not follow redirects to obtain
host-meta, regardless of where those redirects lead.
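
To make that rule concrete, here's a minimal sketch in Python using the
requests library (the function name and failure behavior are mine, not
anything the draft specifies):

import requests

def fetch_host_meta(host):
    # Fetch /host-meta directly and treat any redirect as a failure
    # rather than following it.
    response = requests.get("http://%s/host-meta" % host,
                            allow_redirects=False)
    if 300 <= response.status_code < 400:
        # The redirect may be attacker-controlled (e.g., configured
        # by an ordinary user of the site), so refuse to follow it.
        return None
    if response.status_code != 200:
        return None
    return response.text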

 There is enormous value in allowing redirects for host-meta. Applications
 with higher levels of security should implement their own security policies.

If you follow your current trajectory and continue to compromise away
security, applications that require security will implement their own
version of host-meta that is designed to be secure from the ground up
instead of trying to track down and patch all the gotchas in
host-meta.  Sadly, this will defeat the goal of having a central
metadata repository.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 10:13 AM, Breno de Medeiros br...@google.com wrote:
 Every application _will_ need to use different processing rules, because,
 well, they are interested in different things.

Applications will be interested in different facts in the host-meta
store, but why should they use different procedures for obtaining the
host-meta store?  They might as well use different stores entirely.

 What is the attack model here? I assume it is the following: The attacker
 compromises the server to serve a redirect when there should be a file
 served (or a 404). Well, the attacker can't upload a host-meta with what it
 wants in it? Why?

Often users can add redirects to a server without the ability to
upload content to that server.  For example, tinyurl.com/host-meta now
redirects to a URL I control even though I have no ability to upload
content to tinyurl.com.  Why should I be able to set host-meta
properties for tinyurl.com?

 Perhaps that argument would be more convincing when you provide an example
 of an attack made possible by the introduction of a redirect that would not
 be possible by, say, adding a line to the host-meta file.

Ok.  I own tinyurl.com's host-meta store because of redirects.
Without redirects, I don't know how to own their host-meta store.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 10:26 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 It is pretty irresponsible to talk about 'security' as if there is a
 well-established standard applicable to the web as a whole.

These security issues are real in the sense that there are actual
servers in the world which will or will not be hackable based on the
decisions we make.

 HTTP, as in RFC 2616, isn't secure at all. Even 2617 doesn't make things
 significantly better. Your entire approach is based on a very narrow
 viewpoint, biased by worries about known exploits specific to browsers.

I disagree.  I can use redirects to own tinyurl.com's host-meta store
regardless of the existence of any Web browsers.

 None of my use cases for host-meta even remotely care about browsers. Are you
 suggesting we revise HTTP to make it secure?

I'm suggesting that the world is full of legacy servers.  If we fail
to consider how these legacy servers interact with new proposals, we
will introduce new vulnerabilities into those servers.

 /host-meta offers a simple mechanism to register metadata links. If you have
 specific application security needs, you need to address them at the
 appropriate level, that is, the application. If more than one application has
 the same needs, they can come together and propose a security extension of
 the /host-meta spec. Not supporting redirects is one such idea (though I find
 it utterly useless for security).

I think it's more likely that folks that require security will ignore
host-meta and invent their own metadata store.

 But just for fun, how is a redirect any less secure than changing the
 content of the /host-meta document at its original URI?

I don't have the ability to change the host-meta document at
tinyurl.com.  I do have the ability to add a redirect from /host-meta
to a URL I control.  Prior to host-meta, this was not a vulnerability
in tinyurl.com.

 Either you know the host-meta file you found is what the host-owner intended
 or you don't. HTTP (which is really the only tool we are using here) doesn't
 offer you any such assurances.

Reality is not as binary as you imply.  There is a spectrum of threat
models corresponding to different attacker abilities.  Following
redirects lets weaker attackers compromise host-meta, adding yet
another paper cut to the insecurity of host-meta.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 12:04 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 And I am already aware of one effort looking to add a trust layer to
 host-meta.

This seems heavy-handed for dealing with very simple threats.

 Your suggestion of competing solutions fails a simple test. It is
 easier to make the use of host-meta more restrictive (perhaps as you
 suggested) than to invent a completely new one.

It's easier to build a secure metadata store by designing the store
with security in mind instead of trying to patch an existing insecure
store.

 Nothing in host-meta prevents you from implementing these restrictions
 (content type, redirections).

Let's imagine I have the following API for interacting with the host-meta store:

String getHostMetaValue(URL resource_url, String host_meta_key)

As an application programmer, it's my job to use this API (i.e.,
host-meta) to implement, say, default charsets.  I make the following
API call:

var default_charset =
    getHostMetaValue("http://tinyurl.com/preview.php", "Default-Charset");

Sadly, I can't use this API for this application because I'll get
hacked by redirects.

 By itself, host-meta includes no sensitive
 information or anything that can pose a threat. That will come from
 applications using it as a facility, just like they use HTTP.

So host-meta can wash its hands of all security concerns?

 We view standards architecture in a very different way. I want to create
 building blocks and only standardize where there is an overwhelming value in
 imposing restrictions.

The end result will be that people who care about security won't use
host-meta.  We'll invent our own secure-meta that makes it easy to
store metadata securely.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 No, it does not. It does introduce vulnerabilities to clients that visit
 tinyurl.com with the expectation that they will interpret some metadata at
 tinyurl.com to achieve specific aims.

You're right: someone has to use host-meta for something for this
attack to work.

 Simply substituting tinyurl.com's
 host-meta affects no one until tinyurl.com starts exposing some type of
 service or application that client apps might want to configure/discover
 using host-meta.

By owning their host-meta, I can opt them into whatever services use
host-meta for discovery.

Are you really saying that you don't care that I own their host-meta file?

 As for your example of default charsets, where you are using a browser to
 define a generic interpretation of how to use host-meta to discover default
 charsets, it sounds like such API would need to be designed as:

 getHostMetaValue(URL resource_url, String host_meta_key, boolean
 isAllowedToFollowRedirects)

 which hardly sounds to me like a burden.

Don't forget MIME types!

String getHostMetaValue(URL resource_url, String host_meta_key,
                        Boolean is_allowed_to_follow_redirects,
                        Boolean require_strict_mime_type_processing)

What about paper cut #37?

String getHostMetaValue(URL resource_url, String host_meta_key,
                        Boolean is_allowed_to_follow_redirects,
                        Boolean require_strict_mime_type_processing,
                        Boolean opt_out_of_paper_cut_37)

That's the path to madness.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 1:48 PM, Breno de Medeiros br...@google.com wrote:
 An application would have to use host-meta for a particular aim (e.g., a
 browser discovering default charsets) and implement the spec blindly without
 regard to security considerations.

Just because we can pass the buck to application-land doesn't mean we
should write a spec full of security land mines.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 3:05 PM, Breno de Medeiros br...@google.com wrote:
 crossdomain.xml was introduced to support a few specific applications
 (notably Flash), and it did not take into account the security requirements
 of the application context. Tough.

I'm suggesting we learn from their mistakes instead of making the same
mistakes ourselves.

 Because at this point there is no consensus what a general delegation
 mechanism would look like. Quite possibly, this might be
 application-specific.

Why not handle delegation at the application layer instead of using
HTTP redirects for delegation?

 The alternative is to write a spec that
 introduces complexity to solve problems that we conjecture might exist in
 yet-to-be-developed applications. The risk then is that the spec will not
 see adoption, or that implementors will deploy partial spec compliance in
 ad-hoc fashion, which is also a danger to interoperability.

Great.  Let's remove the complexity of following redirects.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-22 Thread Adam Barth
On Sun, Feb 22, 2009 at 6:14 PM, Mark Nottingham m...@mnot.net wrote:
 A common use case (we think) will be to have
 http://www.us.example.com/host-meta HTTP redirect to
 http://www.hq.example.com/host-meta, or some other URI that's not on the
 same origin (as you defined it).

What behavior do you think is desirable here?  From a security point
of view, I would expect the host-meta from http://www.hq.example.com
to apply to http://www.hq.example.com (and not to
http://www.us.example.com).

 I think that the disconnect here is that your use case for 'origin' and this
 one -- while similar in many ways -- differ in this one, for good reasons.

I don't understand this comment.  In draft-abarth-origin, we need to
compute the origin of an HTTP request.  In this draft, we're interested
in computing the origin of an HTTP response.

 As such, I'm wondering whether or not it's useful to use the term 'origin'
 in this draft -- potentially going as far as renaming it (again!) to
 /origin-meta, although Eran is a bit concerned about confusing early
 adopters (with good cause, I think).

I don't have strong feelings about naming, but I wouldn't call it
origin-meta because different applications of the file might have
different (i.e., non-origin) scopes.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Adam Barth

On Thu, Feb 12, 2009 at 3:13 AM, Mark Nottingham m...@mnot.net wrote:
 My inclination, then, would be to note DNS rebinding as a risk in Security
 Considerations that prudent clients can protect themselves against, if
 necessary.

That sounds reasonable.

On Thu, Feb 12, 2009 at 3:22 AM, Mark Nottingham m...@mnot.net wrote:
 From that document:

 Valid content-type values are:

• text/* (any text type)
• application/xml
• application/xhtml+xml

 That's hardly an explicit Content-Type; it would be the default for a file
 with that name on the majority of servers on the planet; the only thing it's
 likely to affect is application/octet-stream, for those servers that don't
 have a clue about what XML is.

Interesting.  I wonder how they came up with this list.  The text/*
value is particularly unsettling.  /me should go hack them.

By the way, Breno asked for examples of sites where users can control
content at arbitrary paths.  Two extremely popular ones are MySpace
and Twitter.  For example, I signed up for a MySpace account at
http://www.myspace.com/hostmeta and I could do the same for Twitter.
As it happens, these two services don't let you pick URLs with a -
character in them, but I wouldn't hang my hat on that for security.

 Adam, my experience with security work is that there always needs to be a
 trade-off with usability (both implementer and end-user). While DNS
 rebinding is a concerning attack for *some* use cases, it doesn't affect all
 uses of this proposal; making such a requirement would needlessly burden
 implementers (as you point out). It's a bad trade-off.

I agree.  Certainly not every use case will care about DNS Rebinding.
Unfortunately, it will bite some application of host-meta at some
point.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Adam Barth

On Thu, Feb 12, 2009 at 11:38 AM, Breno de Medeiros br...@google.com wrote:
 Adam, did you try to create myspace.com/favicon.ico ?

I didn't try, but they already have their favicon there, so I suspect
it wouldn't work.

 You may not consider that a threat, but companies do. If they were caught
 distributing illegal images to every browser that navigates to the root of
 their domain, they might be liable to crippling prosecution.

I mean, considering I can upload arbitrary images to my MySpace
profile, I doubt this would cause them much consternation.

 This is a common problem with all well-known-locations. That is why
 host-meta was written in a generic format so that it can be the _last_
 well-known-location. WKLs are evil, but also necessary.

Yeah, as I said in my first email, I think host-meta will be super useful.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 4:31 PM, Mark Nottingham m...@yahoo-inc.com wrote:
 Well, the authority is host + port; common sense tells us that it's unlikely
 that the same (host, port) tuple that we speak HTTP on is also going to
 support SMTP or XMPP. I'm not saying that common sense is universal,
 however.

These assumptions are often violated in attack scenarios, especially
by active network attackers who are very capable of hiding the honest
https://example.com server behind a spoofed http://example.com:443
server.

Adam




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 In particular, you should require that
 the host-meta file be served with a specific MIME type (ignore
 the response if the MIME type is wrong).  This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is
 worth excluding many hosting solutions where people do not always have
 access to setting content-type headers. After all, focusing on an HTTP GET
 based solution was based on getting the most accessible approach.

Adobe found the security case compelling enough to break backwards
compatibility in their crossdomain.xml policy file system to enforce
this requirement.  Most serious Web sites opt-in to requiring an
explicit Content-Type.  For example,

$ wget http://mail.google.com/crossdomain.xml --save-headers
$ cat crossdomain.xml
HTTP/1.0 200 OK
Content-Type: text/x-cross-domain-policy
Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
Set-Cookie: ***REDACTED***
Date: Wed, 11 Feb 2009 18:07:40 GMT
Server: gws
Cache-Control: private, x-gzip-ok=""
Expires: Wed, 11 Feb 2009 18:07:40 GMT

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <site-control permitted-cross-domain-policies="by-content-type" />
</cross-domain-policy>

Google Gears has also recently issued a security patch enforcing the
same Content-Type checks to protect their users from similar attacks.

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but is
 very much implied. I'll leave it up to Mark to address wordings. As for the
 term 'origin', I'd rather do anything but get involved with another term at
 this point.

I'd greatly prefer that this were stated explicitly.  Why leave such
a critical security requirement implied?

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 11:37 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 First, scheme is incorrect here as the scheme does not always determine a
 specific protocol (see the 'http is not just for HTTP' saga).

I don't understand this level of pedantry, but if you want host-meta
to be usable by Web browsers, you should use the algorithm in
draft-abarth-origin to compute its scope from its URL.  Any deviations
from this algorithm will introduce cracks in the browser's security
policy.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 10:14 AM, Adam Barth w...@adambarth.com wrote:
 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt in to requiring an
 explicit Content-Type.

By the way, here's the chart of the various security protections Adobe
added to crossdomain.xml and which version they first appeared in:

http://www.adobe.com/devnet/flashplayer/articles/fplayer9-10_security.html

There is another one I forgot:

You need to restrict the scope of a host-meta file to a specific IP
address.  For example, suppose you retrieve
http://example.com/host-meta from 123.123.123.123.  Now, you shouldn't
apply the information you get from that host-meta file to content
retrieved from 34.34.34.34.  You need to fetch another host-meta file
from that IP address.  If you don't do that, the host-meta file will
be vulnerable to DNS Rebinding.  For an explanation of how this caused
problems for crossdomain.xml, see:

http://www.adambarth.com/papers/2007/jackson-barth-bortz-shao-boneh.pdf
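
As a rough sketch of that pinning (the cache layout and function names
are mine, purely illustrative; a production implementation would pin
the actual connection rather than re-resolving DNS as done here):

import socket

host_meta_cache = {}  # (host, ip) -> host-meta document

def fetch_and_pin_host_meta(host):
    # Remember which address served the host-meta file.
    ip = socket.gethostbyname(host)
    host_meta_cache[(host, ip)] = fetch_host_meta(host)  # sketched earlier

def applicable_host_meta(host, content_ip):
    # Only honor the metadata for content retrieved from the same
    # address the host-meta file came from; for any other address,
    # a fresh host-meta file must be fetched from that address.
    return host_meta_cache.get((host, content_ip))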

Sadly, this makes life much more complicated for implementers.  (Maybe
now you begin to see why this draft scares me.)

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

That would cause interoperability problems where user agents that care
about security would be incompatible with sites implemented with
insecure user agents in mind.  Based on past history, this leads to a
race to the bottom where no user agent can be both popular and
secure.


On Wed, Feb 11, 2009 at 11:46 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 How about clearly identifying the threat in the spec instead of making this
 a requirement?

 EHL


 On 2/11/09 10:14 AM, Adam Barth w...@adambarth.com wrote:

 On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:
 In particular, you should require that
 the host-meta file be served with a specific MIME type (ignore
 the response if the MIME type is wrong).  This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is
 worth excluding many hosting solutions where people do not always have
 access to setting content-type headers. After all, focusing on an HTTP GET
 based solution was based on getting the most accessible approach.

 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt in to requiring an
 explicit Content-Type.  For example,

 $ wget http://mail.google.com/crossdomain.xml --save-headers
 $ cat crossdomain.xml
 HTTP/1.0 200 OK
 Content-Type: text/x-cross-domain-policy
 Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
 Set-Cookie: ***REDACTED***
 Date: Wed, 11 Feb 2009 18:07:40 GMT
 Server: gws
 Cache-Control: private, x-gzip-ok=""
 Expires: Wed, 11 Feb 2009 18:07:40 GMT

 <?xml version="1.0"?>
 <!DOCTYPE cross-domain-policy SYSTEM
 "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
 <cross-domain-policy>
   <site-control permitted-cross-domain-policies="by-content-type" />
 </cross-domain-policy>

 Google Gears has also recently issued a security patch enforcing the
 same Content-Type checks to protect their users from similar attacks.

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but
 is very much implied. I'll leave it up to Mark to address wordings. As for
 the term 'origin', I'd rather do anything but get involved with another
 term at this point.

 I'd greatly prefer that this were stated explicitly.  Why leave such
 a critical security requirement implied?

 Adam





Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 11:52 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 Your approach is wrong. Host-meta should not be trying to address such
 security concerns.

Ignoring security problems doesn't make them go away.  It just means
you'll have to pay the piper more later.

 Applications making use of it should. There are plenty of
 applications where no one cares about security. Obviously, crossdomain.xml
 needs to be secure, since, well, it is all about that.

What's the point of a central metadata repository that can't handle
the most popular use case of metadata?

 An application with strict security requirements should pay attention to the
 experience you are referring to. We certainly agree on that. But that is
 application-specific.

Here's what I recommend:

1) Change the scope of the host-meta to default to the origin of the
URL from which it was retrieved (as computed by the algorithm in
draft-abarth-origin).

2) Let particular applications narrow this scope if they require
additional granularity.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 11:55 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 There is nothing incorrect about: GET mailto:j...@example.com HTTP/1.1

I don't know how to get a Web browser to generate such a request, so I
am unable to assess its security implications.

 It might look funny to most people but it is perfectly valid. The protocol
 is HTTP, the scheme is mailto. HTTP can talk about any URI, not just http
 URIs. Since this is about *how* /host-meta is obtained, it should talk about
 protocol, not scheme.

Here's my understanding of how this should work (ignoring redirects
for the moment).  Please correct me if my understanding is incorrect
or incomplete:

1) The user agent retrieves the host-meta file by requesting a certain
URL from the network layer.

2) The network layer does some magic involving protocols and
electrical signals on wires and returns a sequence of bytes.

3) The user agent now must compute a scope for the retrieved host-meta file.

I recommend that the scope for the host-meta file be determined from
the URL irrespective of whatever magic goes on in step 2, because this
is the way all other security scopes are computed in Web browsers.
For example, if I view an HTML document located at
http://example.com/index.html, its security origin is (http,
example.com, 80) regardless of whether the HTML document was actually
retrieved by carrier pigeon or SMTP.

(To handle redirects, by the way, you have to use the last URL in the
redirect chain.)
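
A minimal sketch of that computation in Python (the default-port table
and function name are mine; the authoritative algorithm is the one in
draft-abarth-origin):

from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def host_meta_scope(url):
    # Derive (scheme, host, port) from the URL alone, regardless of
    # how the bytes actually arrived.  With redirects, pass the last
    # URL in the redirect chain.
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname,
            parts.port or DEFAULT_PORTS.get(parts.scheme))

# host_meta_scope("http://example.com/index.html")
#   -> ("http", "example.com", 80)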

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 I have to say that the current known use-cases for site-meta are:

 1. Security critical ones, but for server-to-server discovery uses (not
 browser mediated)

 2. Semantic ones, for user consumption, of an informative rather than
 security-critical nature. These use cases may be handled by browsers.

Why not address security metadata for user-agents?  For example, it
would be eminently useful to be able to express X-Content-Type-Options
[1] and X-Frame-Options [2] in a centralized metadata store instead of
wasting bandwidth on every HTTP response (as Google does for
X-Content-Type-Options).  I don't think anyone doubts that we're going
to see a proliferation of this kind of security metadata, e.g., along
the lines of [3].  I don't see the point of making a central metadata
store that ignores these important use cases.
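
As a sketch of the kind of lookup I mean (the key/value rendering below
is made up for illustration; the draft defines its own format), one
metadata fetch per host would replace a header on every response:

def frame_options_for(host_meta):
    # host_meta: hypothetical parsed metadata for a host.  A single
    # per-host lookup replaces an X-Frame-Options header on every
    # individual HTTP response.
    return host_meta.get("X-Frame-Options", "allow")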

Adam

[1] 
http://blogs.msdn.com/ie/archive/2008/09/02/ie8-security-part-vi-beta-2-update.aspx
[2] 
https://blogs.msdn.com/ie/archive/2009/01/27/ie8-security-part-vii-clickjacking-defenses.aspx
[3] http://people.mozilla.org/~bsterne/content-security-policy/



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 1:46 PM, Breno de Medeiros br...@google.com wrote:
 The current proposal for host-meta addresses some use cases that today
 simply _cannot_ be addressed without it.

I'm not familiar with our process for adopting new use cases, but let's
think more carefully about one of the listed use cases:

On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 1. Security critical ones, but for server-to-server discovery uses (not
 browser mediated)

To serve this use case, we should require that the host-meta file be
served with a specific, novel content type.  Without this requirement,
servers that try to use the host-meta file for security-critical
server-to-server discovery will be tricked by attackers who upload
fake host-meta files to unknowing servers.
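
A sketch of the check I have in mind, assuming a requests-style
response object (the media type name below is invented for
illustration; the draft registers no such type):

EXPECTED_TYPE = "application/host-meta"

def parse_host_meta_response(response):
    # Ignore the response entirely unless the server explicitly
    # declared the expected type; attacker-uploaded attachments on
    # most servers cannot carry this header value.
    content_type = response.headers.get("Content-Type", "")
    if content_type.split(";")[0].strip().lower() != EXPECTED_TYPE:
        return None
    return response.text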

 Your proposal restricts the
 discovery process in ways that may have unintended consequences in terms of
 prohibiting future uses.

How does requiring a specific Content-Type prohibit future uses?

 This is so that browsers can avoid implementing
 same-domain policy checks at the application layer?

No, this is to protect servers that let attackers upload previously
benign content to now-magical paths.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:15 PM, Breno de Medeiros br...@google.com wrote:
 1. The mechanism is not sufficient strong to prevent against defacing
 attacks.

We're not worried about defacement attacks.  We're worried about Web
servers that explicitly allow their users to upload content.  For
example:

1) Webmail providers (e.g., Gmail) let users upload attachments.
2) Forums let users upload avatar images.
3) Wikipedia lets users upload various types of content.

 An attacker that can upload a file and choose how to set the
 content-type would be able to implement the attack. If servers are willing
 to let users upload files willy-nilly, and do not worry about magical paths,
 will they worry about magical content types?

In fact, none of these servers let users specify arbitrary content
types.  They restrict the content type of resources to protect
themselves from XSS attacks and to ensure that they function
properly.

 2. This technique may prevent legitimate uses of the spec by developers who
 do not have the ability to set the appropriate header.

Many developers can control Content-Type headers using .htaccess files
(and their ilk).
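
For instance, something like this hypothetical .htaccess snippet (the
directives are standard Apache; the media type is invented for
illustration, not anything the draft defines):

<Files "host-meta">
    ForceType application/host-meta
</Files>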

 Is this more likely to prevent legitimate developers from getting things
 done than to prevent attacks from spoofing said magical paths? I would say
 yes.

What is your evidence for this claim?  My evidence for this being a
serious security issue is the experience of Adobe with their
crossdomain.xml file.  They started out with the same design you
currently use and were forced to add strict Content-Type handling to
protect Web sites from this very attack.  What is different about your
policy file system that will prevent you from falling into the same
trap?

 Defacing attacks are a threat to applications relying on this spec,

We're not talking about defacement attacks.

 and they
 should be explicitly aware of it rather than have a false sense of security
 based on ad-hoc mitigation techniques.

This mechanism does not provide a false sense of security.  In fact,
it provides real security today for Adobe's crossdomain.xml policy
file and for a similar Gears feature.  (Gears also started with your
design and was forced to patch their users.)

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:26 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 But you are missing the entire application layer here! A browser will not
 use host-meta. It will use an application spec that will use host-meta and
 that application, it security is a concern, will specify such requirements
 to ensure interoperability. It is not the job of host-meta to tell
 applications what is good for them.

In that case, the draft should not define a default scope for
host-meta files at all.  Each application that uses the host-meta file
should define the scope that it finds most useful.

As currently written, the draft is downright dangerous because it
defines a scope that is almost (but not quite!) right for Web
browsers.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:44 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 You got this backwards.

Ah.  Thanks for this response.  I understand the situation much better now.

Let me see if I understand this correctly for the case of the https scheme.

1. You want to find out more about example.com on port 443 speaking
HTTP-over-TLS.
2. You want to find out more about https://example.com/resource/1 (and
care about the HTTP-over-TLS representation).

In both cases, you will do (wrapped in a TLS session):

GET /host-meta HTTP/1.1
Host: example.com:443

Your point is that a Web browser would never want to find out more
about https://example.com/resource/1 and care about the HTTP
representation (it would always be interested in the HTTP-over-TLS
representation).

Thanks,
Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:46 PM, Breno de Medeiros br...@google.com wrote:
 However, such applications could handle
 specifying content-type as a requirement, as Eran rightly pointed out.

Why force everyone interested in use case (1) to add this requirement?
This will have two results:

1) Some application will forget to add this requirement and be
vulnerable to attack.

2) A service that requires the well-known Content-Type will not be
able to interoperate with a server that takes advantage of the laxity
of this spec.

 What is different about your
 policy file system that will prevent you from falling into the same
 trap?

 The difference being that crossdomain.xml is intended primarily for browser
 use and therefore optimization for that case sounds legitimate. This is not
 the case here.

We're discussing security-critical server-to-server discovery, which
is the first use case you listed.

 Again, again, there is an application layer where browsers can implement
 such policies.

Sure, you can punt all security problems to the application layer
because I can't construct attacks without a complete system.

It sounds like there are three resolutions to this issue:

1) Require host-meta to be served with a particular, novel Content-Type.

2) Add a section to Security Considerations that explains that
applications using host-meta should consider adding requirement (1).

3) Ignore these attacks.

My opinion is that (3) will cause users of this spec a great deal of
pain.  I also think that (2) will cause users of this spec pain
because they'll ignore the warning and construct insecure systems.

By the way, there is a fourth solution, which I suspect you'll find
unappealing for the same reason you find (1) unappealing: use a method
other than GET to retrieve host-meta.  For example, CORS uses OPTIONS
for a similar purpose.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 3:32 PM, Breno de Medeiros br...@google.com wrote:
 In that case, content-type is a mild defense. Can you give me an example
 where a web-site administrator will allow files to be hosted at '/'?

There are enough of these sites to force Adobe to break backwards
compatibility in a Flash security release.

 I can find some fairly interesting names to host at '/'

 E.g.: favicon.ico, .htaccess, robots.txt, ...

OMG, you changed my favicon!  .htaccess only matters if Apache
interprets it (e.g., uploading an .htaccess file to Gmail doesn't do
anything interesting).

 Trying to secure such environments seems to me a waste of time, quite
 frankly.

Clearly, Adobe doesn't share your opinion.

 The most interesting threat of files uploaded to root is via defacement.
 This solution does nothing against that threat.

If you can deface my server, then I've got big problems already (e.g.,
my Web site is totally hacked).  Not addressing this issue creates a
security problem where none currently exists.

 1) Require host-meta to be served with a particular, novel Content-Type.

 Not feasible, because of limitations on developers that implement these
 server-to-server techniques.

That's an opinion.  We'll see if you're forced to patch the spec when
you're confronted with a horde of Web servers that you've just made
vulnerable to attack.

 2) Add a section to Security Considerations that explains that
 applications using host-meta should consider adding requirement (1).

 No. I would suggest adding a Security Considerations section that says that
 host-meta SHOULD NOT be relied upon for ANY security-sensitive purposes
 _of_its_own_,

Then how are we to address use case (1)?

 and that applications that require levels of integrity against defacement
 attacks, etc., should implement real security techniques. Frankly, I think
 content-type does very little for security of such applications.

Your argument against strict Content-Type handling is that
a more powerful attacker can win anyway.  My argument is that we have
implementation experience that we need to defend against these
threats.

I did a little more digging, and it looks like Silverlight's
clientaccesspolicy.xml also requires strict Content-Type processing:

http://msdn.microsoft.com/en-us/library/cc645032(VS.95).aspx

That makes 3 out of 3 systems that use strict Content-Type processing.

Microsoft's solution to the limited hosting environment problem
appears to be quite clever, actually.  I couldn't find documentation
(and haven't put in the effort to reverse engineer the behavior), but
it looks like they require a content type of application/xml, which
they get for free from limited hosting providers by naming their file
with a .xml extension.  This is clever because it protects all the
sites I listed earlier: those sites would have XSS if they let
an attacker control an application/xml resource on their server.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 4:00 PM, Breno de Medeiros br...@google.com wrote:
 All of the above systems target browsers and none have the usage
 requirements of the proposed spec.

The point is there are enough HTTP servers on the Internet that let
users upload content in this way that these vendors have added strict
Content-Type processing to their metadata mechanisms.  If you don't
even warn consumers of your spec about these threats, those folks will
build applications on top of host-meta that make these servers
vulnerable to attack.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 4:40 PM, Breno de Medeiros br...@google.com wrote:
 Yes, but your solution prevents legitimate use cases that are a higher value
 proposition.

How does:

On Wed, Feb 11, 2009 at 3:22 PM, Adam Barth w...@adambarth.com wrote:
 2) Add a section to Security Considerations that explains that
 applications using host-meta should consider adding requirement (1)
 [strict Content-Type processing].

prevent legitimate use cases?

It's not the ideal solution because it passes the buck to
application-land, but it's orders of magnitude better than laying a
subtle trap for those folks.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 6:04 PM, Breno de Medeiros br...@google.com wrote:
 So the proposal is for a security considerations section that describes
 the attendant threats and strongly hints that applications will be
 vulnerable if they do not adopt techniques to validate the results. It
 would suggest the use of content-type headers and explain what types of
 threats it protects against, provided that it includes caveats that this
 technique may not be sufficient for some applications, as well as not
 necessary for others that use higher-assurance approaches to directly
 validate the results discovered through host-meta.

Sounds good to me.  I'm not that familiar with IETF process.  Should I
draft this section and email it to someone?

 I still do not think this is necessary because the threat model attending
 this is much broader than crossdomain.xml and applications that rely on this
 will have to understand their own security needs or be necessarily
 vulnerable. On the other hand, I will not argue against it either.

For my part, I'd rather we go further and require strict Content-Type
processing.  :)

Adam