Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-24 Thread Ben Laurie
On Mon, Feb 23, 2009 at 5:32 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Feb 23, 2009 at 5:38 AM, Ben Laurie b...@google.com wrote:
 I don't see why - if www.us.example.com chooses to delegate to
 www.hq.example.com, then that is its affair, not ours, surely?

 Following redirects is insecure for sites that let users configure redirects.

 Every time you trade away security like this, you make it more likely
 that host-meta will be unusable for secure metadata.  If host-meta is
 unsuitable for secure metadata, folks that require security will just
 work around host-meta by creating a secure-meta.  I can't tell you
 which of the security compromises will cause this to happen.  Security
 is often a death of a thousand paper cuts that eventually add up to
 you being owned.

I thought signing was supposed to deal with the issues around redirects?



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-24 Thread Breno de Medeiros
Since XRD is maybe the first security-sensitive application to depend on
this proposed spec, I think it is appropriate that it work as a laboratory
for the signature-based approach.

On Tue, Feb 24, 2009 at 8:23 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:

 It will, if extended to host-meta (it is currently discussed for XRD
 documents), but either way will not be part of the host-meta spec.

 EHL

  -Original Message-
  From: Ben Laurie [mailto:b...@google.com]
  Sent: Tuesday, February 24, 2009 1:55 AM
  To: Adam Barth
  Cc: Mark Nottingham; Eran Hammer-Lahav; www-talk@w3.org
  Subject: Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-
  meta-01)
 
  On Mon, Feb 23, 2009 at 5:32 PM, Adam Barth w...@adambarth.com wrote:
   On Mon, Feb 23, 2009 at 5:38 AM, Ben Laurie b...@google.com wrote:
   I don't see why - if www.us.example.com chooses to delegate to
   www.hq.example.com, then that is its affair, not ours, surely?
  
   Following redirects is insecure for sites that let users configure
  redirects.
  
   Every time you trade away security like this, you make it more likely
   that host-meta will be unusable for secure metadata.  If host-meta is
   unsuitable for secure metadata, folks that require security will just
   work around host-meta by creating a secure-meta.  I can't tell you
   which of the security compromises will cause this to happen.
   Security
   is often a death of a thousand paper cuts that eventually add up to
   you being owned.
 
  I thought signing was supposed to deal with the issues around
  redirects?




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Ben Laurie
On Mon, Feb 23, 2009 at 7:10 AM, Adam Barth w...@adambarth.com wrote:
 On Sun, Feb 22, 2009 at 6:14 PM, Mark Nottingham m...@mnot.net wrote:
 A common use case (we think) will be to have
 http://www.us.example.com/host-meta HTTP redirect to
 http://www.hq.example.com/host-meta, or some other URI that's not on the
 same origin (as you defined it).

 What behavior do you think is desirable here?  From a security point
 of view, I would expect the host-meta from http://www.hq.example.com
 to apply to http://www.hq.example.com (and not to
 http://www.us.example.com).

I don't see why - if www.us.example.com chooses to delegate to
www.hq.example.com, then that is its affair, not ours, surely?

It does complicate matters if you are expecting host-meta to be signed, though.


 I think that the disconnect here is that your use case for 'origin' and this
 one -- while similar in many ways -- differ in this one, for good reasons.

 I don't understand this comment.  In draft-abarth-origin, we need to
 compute the origin of an HTTP request.  In this draft, we're interested
 in computing the origin of an HTTP response.

 As such, I'm wondering whether or not it's useful to use the term 'origin'
 in this draft -- potentially going as far as renaming it (again!) to
 /origin-meta, although Eran is a bit concerned about confusing early
 adopters (with good cause, I think).

 I don't have strong feelings about naming, but I wouldn't call it
 origin-meta because different applications of the file might have
 different (i.e., non-origin) scopes.

 Adam





Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 5:38 AM, Ben Laurie b...@google.com wrote:
 I don't see why - if www.us.example.com chooses to delegate to
 www.hq.example.com, then that is its affair, not ours, surely?

Following redirects is insecure for sites that let users configure redirects.

Every time you trade away security like this, you make it more likely
that host-meta will be unusable for secure metadata.  If host-meta is
unsuitable for secure metadata, folks that require security will just
work around host-meta by creating a secure-meta.  I can't tell you
which of the security compromises will cause this to happen.  Security
is often a death of a thousand paper cuts that eventually add up to
you being owned.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 9:32 AM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 5:38 AM, Ben Laurie b...@google.com wrote:
  I don't see why - if www.us.example.com chooses to delegate to
  www.hq.example.com, then that is its affair, not ours, surely?

 Following redirects is insecure for sites that let users configure
 redirects.

 Every time you trade away security like this, you make it more likely
 that host-meta will be unusable for secure metadata.  If host-meta is
 unsuitable for secure metadata, folks that require security will just
 work around host-meta by creating a secure-meta.  I can't tell you
 which of the security compromises will cause this to happen.  Security
 is often a death of a thousand paper cuts that eventually add up to
 you being owned.


I don't understand this reasoning.

1. The host-meta spec allows delegation to other domains/hosts

2. Secure app does not allow redirection to other domains/hosts

3. Secure app does not use host-meta and instead uses secure-meta, as opposed to,
say, using host-meta and not following redirects to other sites?
For a secure app to be secure with respect to the no-redirect rule, it must in
any case check whether the redirection points to another realm, surely?

There is enormous value in allowing redirects for host-meta. Applications
with higher levels of security should implement their own security policies.




 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 9:44 AM, Breno de Medeiros br...@google.com wrote:
 On Mon, Feb 23, 2009 at 9:32 AM, Adam Barth w...@adambarth.com wrote:
 Security is often a death of a thousand paper cuts that eventually add up 
 to
 you being owned.

 I don't understand this reasoning.

 1. The host-meta spec allows delegation to other domains/hosts

 2. Secure app does not allow redirection to other domains/hosts

 3. Secure app does not use host-meta and instead uses secure-meta, as opposed to,
 say, using host-meta and not following redirects to other sites?

What's the point of standardizing host-meta if every application will
require different processing rules to suit its own needs?
Applications will interoperate better by simply ignoring host-meta and
inventing their own metadata repository.

 For a secure app to be secure with respect to the no-redirect rule, it must in
 any case check whether the redirection points to another realm, surely?

To be secure, a user agent should not follow redirects to obtain
host-meta, regardless of where those redirects lead.
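
To make that concrete, here is a minimal sketch (Python, purely
illustrative -- the no-redirect policy is the point, not the library)
of a fetch that treats any 3xx on /host-meta as a hard failure:

import urllib.error
import urllib.request

class RefuseRedirects(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Returning None makes urllib raise HTTPError for the 3xx
        # instead of silently following the Location header.
        return None

def fetch_host_meta(host):
    opener = urllib.request.build_opener(RefuseRedirects)
    try:
        return opener.open("http://%s/host-meta" % host).read()
    except urllib.error.HTTPError:
        return None  # redirect (or any error): treat as "no metadata"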

 There is enormous value in allowing redirects for host-meta. Applications
 with higher levels of security should implement their own security policies.

If you follow your current trajectory and continue to compromise away
security, applications that require security will implement their own
version of host-meta that is designed to be secure from the ground up
instead of trying to track down and patch all the gotchas in
host-meta.  Sadly, this will defeat the goal of having a central
metadata repository.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 9:57 AM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 9:44 AM, Breno de Medeiros br...@google.com
 wrote:
  On Mon, Feb 23, 2009 at 9:32 AM, Adam Barth w...@adambarth.com wrote:
  Security is often a death of a thousand paper cuts that eventually add
 up to
  you being owned.
 
  I don't understand this reasoning.
 
  1. The host-meta spec allows delegation to other domains/hosts
 
  2. Secure app does not allow redirection to other domains/hosts
 
  3. Secure app does not use host-meta and instead uses secure-meta, as opposed to,
  say, using host-meta and not following redirects to other sites?

 What's the point of standardizing host-meta if every application will
 require different processing rules to suit its own needs?
 Applications will interoperate better by simply ignoring host-meta and
 inventing their own metadata repository.


Every application _will_ need to use different processing rules, because,
well, they are interested in different things.




  For a secure app to be secure with respect to the no-redirect rule, it must
  in any case check whether the redirection points to another realm, surely?

 To be secure, a user agent should not follow redirects to obtain
 host-meta, regardless of where those redirects lead.


What is the attack model here? I assume it is the following: the attacker
compromises the server to serve a redirect where a file (or a 404) should be
served. But then, couldn't the attacker just upload a host-meta file with
whatever it wants in it? Why not?




  There is enormous value in allowing redirects for host-meta. Applications
  with higher levels of security should implement their own security
 policies.

 If you follow your current trajectory and continue to compromise away
 security, applications that require security will implement their own
 version of host-meta that is designed to be secure from the ground up
 instead of trying to track down and patch all the gotchas in
 host-meta.  Sadly, this will defeat the goal of having a central
 metadata repository.


Perhaps that argument would be more convincing if you provided an example
of an attack made possible by the introduction of a redirect that would not be
possible by, say, adding a line to the host-meta file.




 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


RE: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Eran Hammer-Lahav
It is pretty irresponsible to talk about 'security' as if there were a well-established
standard applicable to the web as a whole. HTTP, as in RFC 2616,
isn't secure at all. Even 2617 doesn't make things significantly better. Your 
entire approach is based on a very narrow viewpoint, biased by worries about 
known exploits specific to browsers. None of my use cases for host-meta even 
remotely care about browsers. Are you suggesting we revise HTTP to make it 
secure?

/host-meta offers a simple mechanism to register metadata links. If you have 
specific application security needs, you need to address them at the 
appropriate level, that is, the application. If more than one application has 
the same needs, they can come together and propose a security extension of the 
/host-meta spec. Not supporting redirects is one such idea (though I find it 
utterly useless for security).

But just for fun, how is a redirect any less secure than changing the content 
of the /host-meta document at its original URI? Either you know the host-meta 
file you found is what the host-owner intended or you don't. HTTP (which is 
really the only tool we are using here) doesn't offer you any such assurances.

EHL


 -Original Message-
 From: a...@adambarth.com [mailto:a...@adambarth.com] On Behalf Of Adam
 Barth
 Sent: Monday, February 23, 2009 9:57 AM
 To: Breno de Medeiros
 Cc: Ben Laurie; Mark Nottingham; Eran Hammer-Lahav; www-talk@w3.org
 Subject: Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-
 meta-01)
 
 On Mon, Feb 23, 2009 at 9:44 AM, Breno de Medeiros br...@google.com
 wrote:
  On Mon, Feb 23, 2009 at 9:32 AM, Adam Barth w...@adambarth.com
 wrote:
  Security is often a death of a thousand paper cuts that eventually
 add up to
  you being owned.
 
  I don't understand this reasoning.
 
  1. The host-meta spec allows delegation to other domains/hosts
 
  2. Secure app does not allow redirection to other domains/hosts
 
  3. Secure app does not use host-meta and instead uses secure-meta, as opposed to,
  say, using host-meta and not following redirects to other sites?
 
 What's the point of standardizing host-meta if every application will
 require different processing rules to suit its own needs?
 Applications will interoperate better by simply ignoring host-meta and
 inventing their own metadata repository.
 
  For a secure app to be secure with respect to the no-redirect rule, it must
  in any case check whether the redirection points to another realm, surely?
 
 To be secure, a user agent should not follow redirects to obtain
 host-meta, regardless of where those redirects lead.
 
  There is enormous value in allowing redirects for host-meta.
 Applications
  with higher levels of security should implement their own security
 policies.
 
 If you follow your current trajectory and continue to compromise away
 security, applications that require security will implement their own
 version of host-meta that is designed to be secure from the ground up
 instead of trying to track down and patch all the gotchas in
 host-meta.  Sadly, this will defeat the goal of having a central
 metadata repository.
 
 Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 10:13 AM, Breno de Medeiros br...@google.com wrote:
 Every application _will_ need to use different processing rules, because,
 well, they are interested in different things.

Applications will be interested in different facts in the host-meta
store, but why should they use different procedures for obtaining the
host-meta store?  They might as well use different stores entirely.

 What is the attack model here? I assume it is the following: the attacker
 compromises the server to serve a redirect where a file (or a 404) should be
 served. But then, couldn't the attacker just upload a host-meta file with
 whatever it wants in it? Why not?

Often users can add redirects to a server without the ability to
upload content to that server.  For example, tinyurl.com/host-meta now
redirects to a URL I control even though I have no ability to upload
content to tinyurl.com.  Why should I be able to set host-meta
properties for tinyurl.com?
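
To spell out what a naive client does with that (a sketch, assuming
Python's urllib, which follows redirects by default):

import urllib.request

# urllib walks the 3xx chain silently, so a naive consumer ends up
# attributing whatever my server returns to tinyurl.com.
resp = urllib.request.urlopen("http://tinyurl.com/host-meta")
print(resp.geturl())            # final URL: one that I control
tinyurl_metadata = resp.read()  # attacker-supplied bytes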

 Perhaps that argument would be more convincing when you provide an example
 of an attack made possibly by introduction of a redirect that would not be
 possible by, say, adding a line to the host-meta file.

Ok.  I own tinyurl.com's host-meta store because of redirects.
Without redirects, I don't know how to own their host-meta store.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 10:26 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 It is pretty irresponsible to talk about 'security' as if there were a well-established
 standard applicable to the web as a whole.

These security issues are real in the sense that there are actual
servers in the world which will or will not be hackable based on the
decisions we make.

 HTTP, as in RFC 2616, isn't secure at all. Even 2617 doesn't make things 
 significantly
 better. Your entire approach is based on a very narrow viewpoint, biased by 
 worries about
 known exploits specific to browsers.

I disagree.  I can use redirects to own tinyurl.com's host-meta store
regardless of the existence of any Web browsers.

 None of my use cases for host-meta even remotely care about browsers. Are you
 suggesting we revise HTTP to make it secure?

I'm suggesting that the world is full of legacy servers.  If we fail
to consider how these legacy servers interact with new proposals, we
will introduce new vulnerabilities into those servers.

 /host-meta offers a simple mechanism to register metadata links. If you have 
 specific
 application security needs, you need to address them at the appropriate 
 level, that is,
 the application. If more than one application has the same needs, they can 
 come
 together and propose a security extension of the /host-meta spec. Not 
 supporting redirects
 is one such idea (though I find it utterly useless for security).

I think it's more likely that folks who require security will ignore
host-meta and invent their own metadata store.

 But just for fun, how is a redirect any less secure than changing the content 
 of the
 /host-meta document at its original URI?

I don't have the ability to change the host-meta document at
tinyurl.com.  I do have the ability to add a redirect from /host-meta
to a URL I control.  Prior to host-meta, this is not a vulnerability
in tinyurl.

 Either you know the host-meta file you found is what the host-owner intended 
 or you
 don't. HTTP (which is really the only tool we are using here) doesn't offer 
 you any such
 assurances.

Reality is not as binary as you imply.  There is a spectrum of threat
models corresponding to different attacker abilities.  Following
redirects lets weaker attackers compromise host-meta, adding yet
another paper cut to the insecurity of host-meta.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Eran Hammer-Lahav

On 2/23/09 11:46 AM, Adam Barth w...@adambarth.com wrote:

 Reality is not as binary as you imply.  There is a spectrum of threat
 models corresponding to different attacker abilities.

Exactly!

And I am already aware of one effort looking to add a trust layer to
host-meta. Your suggestion of competing solutions fails a simple test: it is
easier to make the use of host-meta more restrictive (perhaps as you
suggested) than to invent a completely new mechanism.

Nothing in host-meta prevents you from implementing these restrictions
(content type, redirections). By itself, host-meta includes no sensitive
information or anything that can pose a threat. That will come from
applications using it as a facility, just like they use HTTP.

We view standards architecture in a very different way. I want to create
building blocks and only standardize where there is an overwhelming value in
imposing restrictions.

EHL




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 12:04 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 And I am already aware of one effort looking to add a trust layer to
 host-meta.

This seems heavy-handed for dealing with very simple threats.

 Your suggestion of competing solutions fails simple test. It is
 easier to make the use of host-meta more restrictive (perhaps as you
 suggested) than invent a completely new one.

It's easier to build a secure metadata store by designing the store
with security in mind instead of trying to patch an existing insecure
store.

 Nothing in host-meta prevents you from implementing these restrictions
 (content type, redirections).

Let's imagine I have the following API for interacting with the host-meta store:

String getHostMetaValue(URL resource_url, String host_meta_key)

As an application programmer, it's my job to use this API (i.e.,
host-meta) to implement, say, default charsets.  I make the following
API call:

var default_charset =
getHostMetaValue("http://tinyurl.com/preview.php", "Default-Charset");

Sadly, I can't use this API for this application because I'll get
hacked by redirects.
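
A toy simulation of the failure mode (all hosts, keys, and values here
are hypothetical):

# Toy model: tinyurl lets any user map /host-meta to a URL they like.
redirects = {"http://tinyurl.com/host-meta": "http://evil.example/host-meta"}
documents = {"http://evil.example/host-meta": {"Default-Charset": "UTF-7"}}

def get_host_meta_value(resource_url, key, follow_redirects=True):
    url = "http://tinyurl.com/host-meta"  # derived from resource_url
    if follow_redirects:
        url = redirects.get(url, url)     # the attacker's hop
    return documents.get(url, {}).get(key)

# The application now decodes tinyurl.com content with an
# attacker-chosen charset; UTF-7 is the classic filter-evasion pick.
print(get_host_meta_value("http://tinyurl.com/preview.php", "Default-Charset"))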

 By itself, host-meta includes no sensitive
 information or anything that can pose a threat. That will come from
 applications using it as a facility, just like they use HTTP.

So host-meta can wash its hands of all security concerns?

 We view standards architecture in a very different way. I want to create
 building blocks and only standardize where there is an overwhelming value in
 imposing restrictions.

The end result will be that people who care about security won't use
host-meta.  We'll invent our own secure-meta that makes it easy to
store meta-data securely.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 12:16 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 11:47 AM, Breno de Medeiros br...@google.com
 wrote:
  Or they may have to do it because host-meta does not allow redirects and
  they need it. I wonder what is more likely.

 One solution is to add content to a host-meta file that says where to
 find the host-meta file:

 My-Host-Meta-Is-Located-At: http://www.example.com/my-favorite-host-meta

 This has the advantage of not introducing vulnerabilities into existing
 servers.

  Because tinyurl.com allows you to do this.

 Yes.  Precisely.  Following redirects introduces a vulnerability into
 tinyurl.com.  That is why I recommend not following redirects.


No, it does not. It does introduce vulnerabilities to clients that visit
tinyurl.com with the expectation that they will interpret some metadata at
tinyurl.com to achieve specific aims. Simply substituting tinyurl.com's
host-meta affects no one until tinyurl.com starts exposing some type of
service or application that client apps might want to configure/discover
using host-meta.

As for your example of default charsets, where you are using a browser to
define a generic interpretation of how to use host-meta to discover default
charsets, it sounds like such an API would need to be designed as:

getHostMetaValue(URL resource_url, String host_meta_key, boolean
isAllowedToFollowRedirects)

which hardly sounds to me like a burden.
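
In Python terms, roughly (a sketch; parse_host_meta() is a hypothetical
parser, but the security-sensitive caller simply keeps the default of
False):

import urllib.request
from urllib.parse import urlsplit

def get_host_meta_value(resource_url, host_meta_key,
                        is_allowed_to_follow_redirects=False):
    url = "http://%s/host-meta" % urlsplit(resource_url).hostname
    handlers = []
    if not is_allowed_to_follow_redirects:
        class Refuse(urllib.request.HTTPRedirectHandler):
            def redirect_request(self, *args, **kwargs):
                return None  # surface any 3xx as an HTTPError
        handlers.append(Refuse())
    opener = urllib.request.build_opener(*handlers)
    doc = parse_host_meta(opener.open(url).read())  # hypothetical parser
    return doc.get(host_meta_key)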





 I don't know how to make a more compelling case for security than
 supplying a working proof-of-concept exploit that required all of five
 seconds to create on one of the world's most popular sites.

  I am more imaginative: I could do DNS spoofing,

 DNS spoofing requires a lot more work (i.e., a more powerful attacker)
 than abusing redirects.

  or I could choose another
  site to hack that is actually more interesting that tinyurl.

 So we shouldn't care about introducing vulnerabilities into tinyurl
 because we don't think they are important enough?

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 No, it does not. It does introduce vulnerabilities to clients that visit
 tinyurl.com with the expectation that they will interpret some metadata at
 tinyurl.com to achieve specific aims.

You're right: someone has to use host-meta for something for this
attack to work.

 Simply substituting tinyurl.com's
 host-meta affects no one until tinyurl.com starts exposing some type of
 service or application that client apps might want to configure/discover
 using host-meta.

By owning their host-meta, I can opt them into whatever services use
host-meta for discovery.

Are you really saying that you don't care that I own their host-meta file?

 As for your example of default charsets, where you are using a browser to
 define a generic interpretation of how to use host-meta to discover default
 charsets, it sounds like such API would need to be designed as:

 getHostMetaValue(URL resource_url, String host_meta_key, boolean
 isAllowedToFollowRedirects)

 which hardly sounds to me like a burden.

Don't forget mime types!

String getHostMetaValue(URL resource_url, String host_meta_key,
Boolean is_allowed_to_follow_redirects, Boolean
require_strict_mime_type_processing)

What about paper cut #37?

String getHostMetaValue(URL resource_url, String host_meta_key,
Boolean is_allowed_to_follow_redirects, Boolean
require_strict_mime_type_processing, Boolean opt_out_of_paper_cut_37)

That's the path to madness.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 1:21 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 1:04 PM, Breno de Medeiros br...@google.com
 wrote:
  No, it does not. It does introduce vulnerabilities to clients that visit
  tinyurl.com with the expectation that they will interpret some metadata
 at
  tinyurl.com to achieve specific aims.

 You're right: someone has to use host-meta for something for this
 attack to work.

  Simply substituting tinyurl.com's
  host-meta affects no one until tinyurl.com starts exposing some type of
  service or application that client apps might want to configure/discover
  using host-meta.

 By owning their host-meta, I can opt them into whatever services use
 host-meta for discovery.

 Are you really saying that you don't care that I own their host-meta file?

  As for your example of default charsets, where you are using a browser to
  define a generic interpretation of how to use host-meta to discover
 default
  charsets, it sounds like such an API would need to be designed as:
 
  getHostMetaValue(URL resource_url, String host_meta_key, boolean
  isAllowedToFollowRedirects)
 
  which hardly sounds to me like a burden.

 Don't forget mime types!

 String getHostMetaValue(URL resource_url, String host_meta_key,
 Boolean is_allowed_to_follow_redirects, Boolean
 require_strict_mime_type_processing)

 What about paper cut #37?

 String getHostMetaValue(URL resource_url, String host_meta_key,
 Boolean is_allowed_to_follow_redirects, Boolean
 require_strict_mime_type_processing, Boolean opt_out_of_paper_cut_37)

 That's the path to madness.


Another path to madness is to write opt_out_of_paper_cut_37 into a
generic spec when the vulnerability affects only a special class of
applications.  Unless it is thought out and written directly into the spec,
or (as others including myself prefer) enforced by the application, it
certainly cannot just go away.




 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 1:21 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 1:04 PM, Breno de Medeiros br...@google.com
 wrote:
  No, it does not. It does introduce vulnerabilities to clients that visit
  tinyurl.com with the expectation that they will interpret some metadata
 at
  tinyurl.com to achieve specific aims.

 You're right: someone has to use host-meta for something for this
 attack to work.


An application would have to use host-meta for a particular aim (e.g., a
browser discovering default charsets) and implement the spec blindly without
regard to security considerations.





  Simply substituting tinyurl.com's
  host-meta affects no one until tinyurl.com starts exposing some type of
  service or application that client apps might want to configure/discover
  using host-meta.

 By owning their host-meta, I can opt them into whatever services use
 host-meta for discovery.

 Are you really saying that you don't care that I own their host-meta file?

  As for your example of default charsets, where you are using a browser to
  define a generic interpretation of how to use host-meta to discover
 default
  charsets, it sounds like such an API would need to be designed as:
 
  getHostMetaValue(URL resource_url, String host_meta_key, boolean
  isAllowedToFollowRedirects)
 
  which hardly sounds to me like a burden.

 Don't forget mime types!

 String getHostMetaValue(URL resource_url, String host_meta_key,
 Boolean is_allowed_to_follow_redirects, Boolean
 require_strict_mime_type_processing)

 What about paper cut #37?

 String getHostMetaValue(URL resource_url, String host_meta_key,
 Boolean is_allowed_to_follow_redirects, Boolean
 require_strict_mime_type_processing, Boolean opt_out_of_paper_cut_37)

 That's the path to madness.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 1:48 PM, Breno de Medeiros br...@google.com wrote:
 An application would have to use host-meta for a particular aim (e.g., a
 browser discovering default charsets) and implement the spec blindly without
 regard to security considerations.

Just because we can pass the buck to application-land doesn't mean we
should write a spec full of security land mines.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Mark Nottingham

Adam,

To me, what's interesting here is that the problems you're  
illustrating have never been an issue AFAIK with robots.txt, and they  
didn't even come up as a concern during the discussions of P3P. I  
wasn't there for sitemaps, but AFAICT they've been deployed without  
the risk of unauthorised control of URIs being mentioned.


I think the reason for this is that once the mechanism gets
deployment, site operators are aware of the import of allowing control
of this URL, and take steps to ensure that it isn't allowed if it's
going to cause a problem. They haven't done that yet in this case (and
thus you were able to get /host-meta) because this isn't deployed --
or even useful -- yet.


I would agree that this is not a perfectly secure solution, but I do  
think it's good enough.


Of course, a mention in security considerations is worthwhile.

Cheers,



On 24/02/2009, at 8:21 AM, Adam Barth wrote:

On Mon, Feb 23, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 No, it does not. It does introduce vulnerabilities to clients that visit
 tinyurl.com with the expectation that they will interpret some metadata at
 tinyurl.com to achieve specific aims.

You're right: someone has to use host-meta for something for this
attack to work.

 Simply substituting tinyurl.com's
 host-meta affects no one until tinyurl.com starts exposing some type of
 service or application that client apps might want to configure/discover
 using host-meta.

By owning their host-meta, I can opt them into whatever services use
host-meta for discovery.

Are you really saying that you don't care that I own their host-meta file?

 As for your example of default charsets, where you are using a browser to
 define a generic interpretation of how to use host-meta to discover default
 charsets, it sounds like such an API would need to be designed as:

 getHostMetaValue(URL resource_url, String host_meta_key, boolean
 isAllowedToFollowRedirects)

 which hardly sounds to me like a burden.

Don't forget mime types!

String getHostMetaValue(URL resource_url, String host_meta_key,
Boolean is_allowed_to_follow_redirects, Boolean
require_strict_mime_type_processing)

What about paper cut #37?

String getHostMetaValue(URL resource_url, String host_meta_key,
Boolean is_allowed_to_follow_redirects, Boolean
require_strict_mime_type_processing, Boolean opt_out_of_paper_cut_37)

That's the path to madness.

Adam



--
Mark Nottingham http://www.mnot.net/




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 2:23 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 2:07 PM, Mark Nottingham m...@mnot.net wrote:
  To me, what's interesting here is that the problems you're illustrating
 have
  never been an issue AFAIK with robots.txt,

 I recently reviewed a security paper that measured whether consumers
 of robots.txt follow redirects.  I'm not sure if their results are
 public yet, but some consumers follow redirects while others don't,
 causing interoperability problems.

  and they didn't even come up as a
  concern during the discussions of P3P. I wasn't there for sitemaps, but
  AFAICT they've been deployed without the risk of unauthorised control of
  URIs being mentioned.

 That just means they aren't interesting enough targets for attackers.
 For high-stakes metadata repositories, like crossdomain.xml, you find
 that people don't follow redirects.  If I recall correctly,
 crossdomain.xml started off allowing redirects but had to break
 backwards compatibility to stop sites from getting hacked.


crossdomain.xml was introduced to support a few specific applications
(notably Flash), and it did not take into account the security requirements
of the application context. Tough.




  I think the reason for this is that once the mechanism gets deployment,
 site
  operators are aware of the import of allowing control of this URL, and
 take
  steps to assure that it isn't allowed if it's going to cause a problem.

 This is a terrible approach to security.  We shouldn't make it even
 harder to deploy a secure Web server by introducing more landmines
 that you have to avoid stepping on.

  They haven't done that yet in this case (and thus you were able to get
  /host-meta) because this isn't deployed -- or even useful -- yet.

 TinyURL doesn't appear to let me create a redirect with a . in the
 name, stopping me from creating a fake robots.txt or crossdomain.xml
 metadata store.  But just as MySpace and Twitter didn't let me make a
 profile with a - in the name, I wouldn't hang my hat on this for
 security.

  I would agree that this is not a perfectly secure solution, but I do
 think
  it's good enough.

 The net result is that most people aren't going to use host-meta for
 security-sensitive metadata.  The interoperability cost will be too
 high.

 Why not introduce a proper delegation mechanism instead of re-using
 HTTP redirects?  That would let you address the delegation use case
 without the security issue.


Because at this point there is no consensus on what a general delegation
mechanism would look like. Quite possibly, this might be
application-specific. It is probably a better idea to see how this plays
out, how useful people find it to be, and if there are generic concerns that
can be addressed in a spec. The alternative is to write a spec that
introduces complexity to solve problems that we conjecture might exist in
yet-to-be-developed applications. The risk then is that the spec will not
see adoption, or that implementors will deploy partial spec compliance in
ad-hoc fashion, which is also a danger to interoperability.




  Of course, a mention in security considerations is worthwhile.

 Indeed.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Adam Barth
On Mon, Feb 23, 2009 at 3:05 PM, Breno de Medeiros br...@google.com wrote:
 crossdomain.xml was introduced to support a few specific applications
 (notably Flash), and it did not take into account the security requirements
 of the application context. Tough.

I'm suggesting we learn from their mistakes instead of making the same
mistakes ourselves.

 Because at this point there is no consensus on what a general delegation
 mechanism would look like. Quite possibly, this might be
 application-specific.

Why not handle delegation at the application layer instead of using
HTTP redirects for delegation?
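
For example, the in-file pointer floated earlier in this thread keeps
delegation in the hands of whoever can already write /host-meta (a
sketch; the field name and the injected fetch_document() are
assumptions, and fetch_document() must not follow HTTP redirects):

def resolve_host_meta(fetch_document, host, max_hops=3):
    # Delegation happens only through the file's own contents, never
    # through transport-level redirects.
    doc = fetch_document("http://%s/host-meta" % host)
    for _ in range(max_hops):
        target = doc.get("My-Host-Meta-Is-Located-At")
        if target is None:
            return doc
        doc = fetch_document(target)
    raise RuntimeError("host-meta delegation chain too long")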

 The alternative is to write a spec that
 introduces complexity to solve problems that we conjecture might exist in
 yet-to-be-developed applications. The risk then is that the spec will not
 see adoption, or that implementors will deploy partial spec compliance in
 ad-hoc fashion, which is also a danger to interoperability.

Great.  Let's remove the complexity of following redirects.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-23 Thread Breno de Medeiros
On Mon, Feb 23, 2009 at 3:48 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Feb 23, 2009 at 3:05 PM, Breno de Medeiros br...@google.com
 wrote:
  crossdomain.xml was introduced to support a few specific applications
  (notably Flash), and it did not take into account the security
 requirements
  of the application context. Tough.

 I'm suggesting we learn from their mistakes instead of making the same
 mistakes ourselves.


I am saying that we do not have the application context here because this
spec is generic.




  Because at this point there is no consensus on what a general delegation
  mechanism would look like. Quite possibly, this might be
  application-specific.

 Why not handle delegation at the application layer instead of using
 HTTP redirects for delegation?


I am not saying that HTTP redirects are the same as delegation. I think to
treat them on the same level is a mistake. An application can decide whether
to follow a redirect or not based on its security model. For applications
that expect signed content, the only delegation happens via signatures, and
following HTTP redirects is a transport event that has nothing to do with
delegation.
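
A sketch of that signed-content view (assuming an RSA public key
object with a verify() method, e.g. from the Python cryptography
package; key distribution is a separate problem):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def accept_host_meta(doc_bytes, signature, trusted_key):
    # trusted_key is the public key already associated with the host
    # we asked about.  If the signature verifies, it does not matter
    # which URL the transport ultimately fetched the bytes from.
    try:
        trusted_key.verify(signature, doc_bytes,
                           padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False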




  The alternative is to write a spec that
  introduces complexity to solve problems that we conjecture might exist in
  yet-to-be-developed applications. The risk then is that the spec will not
  see adoption, or that implementors will deploy partial spec compliance in
  ad-hoc fashion, which is also a danger to interoperability.

 Great.  Let's remove the complexity of following redirects.


Or, from another point-of-view: Let's introduce restrictions on the spec
based on anticipated threats against non-existing applications.

However, I am tired of this argument. You haven't produced anything that
convinces me there is a need to be addressed here, and I have not managed to
convince you that this should be left to be specified when applications are
developed that show clear usage patterns to justify what is, and what is not,
an acceptable restriction to be placed on the spec at a generic level.

My vote is that I think the spec is better left as is. Your vote is also
understood. See you in a future thread ...





 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-22 Thread Mark Nottingham

Hi Adam,

I'm collecting changes for the next rev of the draft, and found this  
dangling:


On 12/02/2009, at 7:31 AM, Adam Barth wrote:

 Here's what I recommend:

 1) Change the scope of the host-meta to default to the origin of the
 URL from which it was retrieved (as computed by the algorithm in
 draft-abarth-origin).

A common use case (we think) will be to have
http://www.us.example.com/host-meta HTTP redirect to
http://www.hq.example.com/host-meta, or some other URI that's not on
the same origin (as you defined it).


I think that the disconnect here is that your use case for 'origin'  
and this one -- while similar in many ways -- differ in this one, for  
good reasons.


We still intend, BTW, to re-introduce the protocol used into the mix, to
disambiguate that point. It's just the redirect handling that's  
different.


As such, I'm wondering whether or not it's useful to use the term  
'origin' in this draft -- potentially going as far as renaming it  
(again!) to /origin-meta, although Eran is a bit concerned about  
confusing early adopters (with good cause, I think).


What are your thoughts?

Cheers,


--
Mark Nottingham http://www.mnot.net/




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-22 Thread Adam Barth
On Sun, Feb 22, 2009 at 6:14 PM, Mark Nottingham m...@mnot.net wrote:
 A common use case (we think) will be to have
 http://www.us.example.com/host-meta HTTP redirect to
 http://www.hq.example.com/host-meta, or some other URI that's not on the
 same origin (as you defined it).

What behavior do you think is desirable here?  From a security point
of view, I would expect the host-meta from http://www.hq.example.com
to apply to http://www.hq.example.com (and not to
http://www.us.example.com).

 I think that the disconnect here is that your use case for 'origin' and this
 one -- while similar in many ways -- differ in this one, for good reasons.

I don't understand this comment.  In draft-abarth-origin, we need to
compute the origin of an HTTP request.  In this draft, we're interested
in computing the origin of an HTTP response.

 As such, I'm wondering whether or not it's useful to use the term 'origin'
 in this draft -- potentially going as far as renaming it (again!) to
 /origin-meta, although Eran is a bit concerned about confusing early
 adopters (with good cause, I think).

I don't have strong feelings about naming, but I wouldn't call it
origin-meta because different applications of the file might have
different (i.e., non-origin) scopes.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Mark Nottingham


WRT DNS rebinding - my initial reaction is that this isn't the proper  
place to fix this problem; it's not unique by any means to this  
proposal.


My inclination, then, would be to note DNS rebinding as a risk in  
Security Considerations that prudent clients can protect themselves  
against, if necessary.


Luckily, the IETF has mechanisms in place to get security reviews of  
proposals, so we can avail ourselves of that to get more definitive  
advice.


Cheers,



On 12/02/2009, at 7:31 AM, Adam Barth wrote:

 On Wed, Feb 11, 2009 at 11:52 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
  Your approach is wrong. Host-meta should not be trying to address such
  security concerns.

 Ignoring security problems doesn't make them go away.  It just means
 you'll have to pay the piper more later.

  Applications making use of it should. There are plenty of
  applications where no one cares about security. Obviously, crossdomain.xml
  needs to be secure, since, well, it is all about that.

 What's the point of a central metadata repository that can't handle
 the most popular use case of metadata?

  An application with strict security requirements should pay attention to the
  experience you are referring to. We certainly agree on that. But that is
  application-specific.

 Here's what I recommend:

 1) Change the scope of the host-meta to default to the origin of the
 URL from which it was retrieved (as computed by the algorithm in
 draft-abarth-origin).

 2) Let particular applications narrow this scope if they require
 additional granularity.

 Adam



--
Mark Nottingham http://www.mnot.net/




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Adam Barth

On Thu, Feb 12, 2009 at 3:13 AM, Mark Nottingham m...@mnot.net wrote:
 My inclination, then, would be to note DNS rebinding as a risk in Security
 Considerations that prudent clients can protect themselves against, if
 necessary.

That sounds reasonable.

On Thu, Feb 12, 2009 at 3:22 AM, Mark Nottingham m...@mnot.net wrote:
 From that document:

 Valid content-type values are:

• text/* (any text type)
• application/xml
• application/xhtml+xml

 That's hardly an explicit Content-Type; it would be the default for a file
 with that name on the majority of servers on the planet; the only thing it's
 likely to affect is application/octet-stream, for those servers that don't
 have a clue about what XML is.

Interesting.  I wonder how they came up with this list.  The text/*
value is particularly unsettling.  /me should go hack them.

By the way, Breno asked for examples of sites where users can control
content at arbitrary paths.  Two extremely popular ones are MySpace
and Twitter.  For example, I signed up for a MySpace account at
http://www.myspace.com/hostmeta and I could do the same for Twitter.
As it happens, these two services don't let you pick URLs with a -
character in them, but I wouldn't hang my hat on that for security.

 Adam, my experience with security work is that there always needs to be a
 trade-off with usability (both implementer and end-user). While DNS
 rebinding is a concerning attack for *some* use cases, it doesn't affect all
 uses of this proposal; making such a requirement would needlessly burden
 implementers (as you point out). It's a bad trade-off.

I agree.  Certainly not every use case will care about DNS Rebinding.
Unfortunately, it will bite some application of host-meta at some
point.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Breno de Medeiros
Adam, did you try to create myspace.com/favicon.ico ?

You may not consider that a threat, but companies do. If they were caught
distributing illegal images to every browser that navigates to the root of
their domain, they might be liable to crippling prosecution.

This is a common problem with all well-known-locations. That is why
host-meta was written in a generic format so that it can be the _last_
well-known-location. WKLs are evil, but also necessary.


On Thu, Feb 12, 2009 at 10:10 AM, Adam Barth w...@adambarth.com wrote:

 On Thu, Feb 12, 2009 at 3:13 AM, Mark Nottingham m...@mnot.net wrote:
  My inclination, then, would be to note DNS rebinding as a risk in
 Security
  Considerations that prudent clients can protect themselves against, if
  necessary.

 That sounds reasonable.

 On Thu, Feb 12, 2009 at 3:22 AM, Mark Nottingham m...@mnot.net wrote:
  From that document:
 
  Valid content-type values are:
 
 • text/* (any text type)
 • application/xml
 • application/xhtml+xml
 
  That's hardly an explicit Content-Type; it would be the default for a
 file
  with that name on the majority of servers on the planet; the only thing
 it's
  likely to affect is application/octet-stream, for those servers that
 don't
  have a clue about what XML is.

 Interesting.  I wonder how they came up with this list.  The text/*
 value is particularly unsettling.  /me should go hack them.

  By the way, Breno asked for examples of sites where users can control
 content at arbitrary paths.  Two extremely popular ones are MySpace
 and Twitter.  For example, I signed up for a MySpace account at
 http://www.myspace.com/hostmeta and I could do the same for Twitter.
 As it happens, these two services don't let you pick URLs with a -
 character in them, but I wouldn't hang my hat on that for security.

  Adam, my experience with security work is that there always needs to be a
  trade-off with usability (both implementer and end-user). While DNS
  rebinding is a concerning attack for *some* use cases, it doesn't affect
 all
  uses of this proposal; making such a requirement would needlessly burden
  implementers (as you point out). It's a bad trade-off.

 I agree.  Certainly not every use case will care about DNS Rebinding.
 Unfortunately, it will bite some application of host-meta at some
 point.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-12 Thread Adam Barth

On Thu, Feb 12, 2009 at 11:38 AM, Breno de Medeiros br...@google.com wrote:
 Adam, did you try to create myspace.com/favicon.ico ?

I didn't try, but they already have their favicon there, so I suspect
it wouldn't work.

 You may not consider that a threat by companies do. If they were caught
 distributing illegal images to every browser that navigates to the root of
 their domain, they might be liable to crippling prosecution.

I mean, considering I can upload arbitrary images to my MySpace
profile, I doubt this would cause them much consternation.

 This is a common problem with all well-known-locations. That is why
 host-meta was written in a generic format so that it can be the _last_
 well-known-location. WKLs are evil, but also necessary.

Yeah, as I said in my first email, I think host-meta will be super useful.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 4:31 PM, Mark Nottingham m...@yahoo-inc.com wrote:
 Well, the authority is host + port; common sense tells us that it's unlikely
 that the same (host, port) tuple that we speak HTTP on is also going to
 support SMTP or XMPP. I'm not saying that common sense is universal,
 however.

These assumptions are often violated in attack scenarios, especially
by active network attackers who are very capable of hiding the honest
https://example.com server behind a spoofed http://example.com:443
server.

Adam




Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 In particular, you should require that
 the host-meta file be served with a specific mime type (ignore
 the response if the mime type is wrong).  This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is worth
 excluding many hosting solutions where people do not always have access to
 setting content-type headers. After all, focusing on an HTTP GET based
 solution was based on getting the most accessible approach.

Adobe found the security case compelling enough to break backwards
compatibility in their crossdomain.xml policy file system to enforce
this requirement.  Most serious Web sites opt-in to requiring an
explicit Content-Type.  For example,

$ wget http://mail.google.com/crossdomain.xml --save-headers
$ cat crossdomain.xml
HTTP/1.0 200 OK
Content-Type: text/x-cross-domain-policy
Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
Set-Cookie: ***REDACTED***
Date: Wed, 11 Feb 2009 18:07:40 GMT
Server: gws
Cache-Control: private, x-gzip-ok=""
Expires: Wed, 11 Feb 2009 18:07:40 GMT

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <site-control permitted-cross-domain-policies="by-content-type" />
</cross-domain-policy>

Google Gears has also recently issued a security patch enforcing the
same Content-Type checks to protect their users from similar attacks.
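
A sketch of the client-side check (Python; the media type here is a
placeholder, since the draft has not registered one):

import urllib.request

EXPECTED_TYPE = "application/host-meta"  # placeholder, not registered

def fetch_host_meta(host):
    resp = urllib.request.urlopen("http://%s/host-meta" % host)
    ctype = resp.headers.get("Content-Type", "")
    if ctype.split(";")[0].strip().lower() != EXPECTED_TYPE:
        # Likely user-uploaded content on a shared host; ignore it,
        # just as Flash now ignores mistyped policy files.
        return None
    return resp.read()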

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but is
 very much implied. I'll leave it up to Mark to address the wording. As for
 the term 'origin', I'd rather do anything but get involved with another
 term at this point.

I'd greatly prefer it if this were stated explicitly.  Why leave such
a critical security requirement implied?

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Tue, Feb 10, 2009 at 11:37 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 First, scheme is incorrect here as the scheme does not always determine a
 specific protocol (see the 'http is not just for HTTP' saga).

I don't understand this level of pedantry, but if you want host-meta
to be usable by Web browsers, you should use the algorithm in
draft-abarth-origin to compute its scope from its URL.  Any deviations
from this algorithm will introduce cracks in the browser's security
policy.
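
The computation itself is small -- roughly the (scheme, host, port)
triple with the port defaulted per scheme (a simplified Python sketch
of draft-abarth-origin that ignores its special cases):

from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    parts = urlsplit(url)
    scheme = (parts.scheme or "").lower()
    port = parts.port or DEFAULT_PORTS.get(scheme)
    # Two URLs may share host-meta only if all three components match.
    return (scheme, parts.hostname, port)

assert origin("http://example.com/host-meta") == ("http", "example.com", 80)
assert origin("https://example.com/") != origin("http://example.com/")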

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 10:14 AM, Adam Barth w...@adambarth.com wrote:
 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt-in to requiring an
 explicit Content-Type.

By the way, here's the chart of the various security protections Adobe
added to crossdomain.xml and which version they first appeared in:

http://www.adobe.com/devnet/flashplayer/articles/fplayer9-10_security.html

There is another one I forgot:

You need to restrict the scope of a host-meta file to a specific IP
address.  For example, suppose you retrieve
http://example.com/host-meta from 123.123.123.123.  Now, you shouldn't
apply the information you get from that host-meta file to content
retrieved from 34.34.34.34.  You need to fetch another host-meta file
from that IP address.  If you don't do that, the host-meta file will
be vulnerable to DNS Rebinding.  For an explanation of how this caused
problems for crossdomain.xml, see:

http://www.adambarth.com/papers/2007/jackson-barth-bortz-shao-boneh.pdf

Sadly, this makes life much more complicated for implementers.  (Maybe
now you begin to see why this draft scares me.)
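
A sketch of the bookkeeping this implies (fetch_host_meta_from_ip() is
a hypothetical helper that connects to one fixed address):

_cache = {}

def host_meta_for(host, connected_ip):
    # Key the cache on (host, IP): a host-meta file fetched from one
    # address is never applied to content served from another, which
    # is exactly what DNS rebinding would otherwise exploit.
    key = (host, connected_ip)
    if key not in _cache:
        _cache[key] = fetch_host_meta_from_ip(host, connected_ip)
    return _cache[key]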

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Eran Hammer-Lahav
How about clearly identifying the threat in the spec instead of making this a 
requirement?

EHL


On 2/11/09 10:14 AM, Adam Barth w...@adambarth.com wrote:

On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 In particular, you should require that
 the host-meta file be served with a specific mime type (ignore
 the response if the mime type is wrong).  This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is worth
 excluding many hosting solutions where people do not always have access to
 setting content-type headers. After all, focusing on an HTTP GET based
 solution was based on getting the most accessible approach.

Adobe found the security case compelling enough to break backwards
compatibility in their crossdomain.xml policy file system to enforce
this requirement.  Most serious Web sites opt-in to requiring an
explicit Content-Type.  For example,

$ wget http://mail.google.com/crossdomain.xml --save-headers
$ cat crossdomain.xml
HTTP/1.0 200 OK
Content-Type: text/x-cross-domain-policy
Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
Set-Cookie: ***REDACTED***
Date: Wed, 11 Feb 2009 18:07:40 GMT
Server: gws
Cache-Control: private, x-gzip-ok=""
Expires: Wed, 11 Feb 2009 18:07:40 GMT

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <site-control permitted-cross-domain-policies="by-content-type" />
</cross-domain-policy>

Google Gears has also recently issued a security patch enforcing the
same Content-Type checks to protect their users from similar attacks.

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but is
 very much implied. I'll leave it up to Mark to address the wording. As for
 the term 'origin', I'd rather do anything but get involved with another
 term at this point.

I'd greatly prefer it if this were stated explicitly.  Why leave such
a critical security requirement implied?

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Eran Hammer-Lahav
Your approach is wrong. Host-meta should not be trying to address such security
concerns. Applications making use of it should. There are plenty of
applications where no one cares about security. Obviously, crossdomain.xml needs
to be secure, since, well, it is all about that. But copyright information
about a site, or a summary of its content, or a list of social networks with
related features, are all examples where abuse is not likely to be part of the
threat model.

An application with strict security requirements should pay attention to the
experience you are referring to. We certainly agree on that. But that is
application-specific.

EHL


On 2/11/09 10:26 AM, Adam Barth w...@adambarth.com wrote:

On Wed, Feb 11, 2009 at 10:14 AM, Adam Barth w...@adambarth.com wrote:
 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt-in to requiring an
 explicit Content-Type.

By the way, here's the chart of the various security protections Adobe
added to crossdomain.xml and which version they first appeared in:

http://www.adobe.com/devnet/flashplayer/articles/fplayer9-10_security.html

There is another one I forgot:

You need to restrict the scope of a host-meta file to a specific IP
address.  For example, if suppose you retrieve
http://example.com/host-meta from 123.123.123.123.  Now, you shouldn't
apply the information you get from that host-meta file to content
retrieved from 34.34.34.34.  You need to fetch another host-meta file
from that IP address.  If you don't do that, the host-meta file will
be vulnerable to DNS Rebinding.  For an explanation of how this caused
problems for crossdomain.xml, see:

http://www.adambarth.com/papers/2007/jackson-barth-bortz-shao-boneh.pdf
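
Roughly, for a non-browser client, the pinning looks like this (a Python
sketch; a real implementation would pin the exact address the connection
actually used rather than re-resolving):

import socket

def scope_key(host):
    # Tie the host-meta policy to (host, resolved IP) so that a later
    # DNS rebinding of the same name to a different address does not
    # inherit the policy fetched from the first one.
    return (host, socket.gethostbyname(host))

# A policy fetched under scope_key("example.com") is applied only to
# content whose scope_key matches exactly.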

Sadly, this makes life much more complicated for implementers.  (Maybe
now you begin to see why this draft scares me.)

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Eran Hammer-Lahav
I don't care for this level of pedantry, which is why I don't want to use terms 
whose meaning people have a problem agreeing on.

There is nothing incorrect about: GET mailto:j...@example.com HTTP/1.1

It might look funny to most people but it is perfectly valid. The protocol is 
HTTP, the scheme is mailto. HTTP can talk about any URI, not just http URIs. 
Since this is about *how* /host-meta is obtained, it should talk about 
protocol, not scheme.

EHL




On 2/11/09 10:18 AM, Adam Barth w...@adambarth.com wrote:

On Tue, Feb 10, 2009 at 11:37 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 First, scheme is incorrect here as the scheme does not always determine a 
 specific protocol
 (see 'http' is not just for HTTP saga).

I don't understand this level of pedantry, but if you want host-meta
to be usable by Web browsers, you should use the algorithm in
draft-abarth-origin to compute its scope from its URL.  Any deviations
from this algorithm will introduce cracks in the browser's security
policy.
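
For concreteness, the computation amounts to something like the following
sketch (Python; the default-port table covers only http and https, which is
all the discussion here needs):

from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    # Reduce a URL to the (scheme, host, port) triple that scopes any
    # metadata derived from it, in the style of draft-abarth-origin.
    parts = urlsplit(url)
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

# origin("http://Example.COM/index.html") == ("http", "example.com", 80)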

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

That would cause interoperability problems where user agents that care
about security would be incompatible with sites implemented with
insecure user agents in mind.  Based on past history, this leads to a
race to the bottom where no user agents can be both popular and
secure.


On Wed, Feb 11, 2009 at 11:46 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 How about clearly identifying the threat in the spec instead of making this
 a requirement?

 EHL


 On 2/11/09 10:14 AM, Adam Barth w...@adambarth.com wrote:

 On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:
 In particular, you should require that
 the host-meta file should be served with a specific mime type (ignore
 the response if the mime type is wrong). This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is
 worth excluding many
 hosting solutions where people do not always have access to setting
 content-type headers.
 After all, focusing on an HTTP GET based solution was based on getting the
 most
 accessible approach.

 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt in to requiring an
 explicit Content-Type.  For example,

 $ wget http://mail.google.com/crossdomain.xml --save-headers
 $ cat crossdomain.xml
 HTTP/1.0 200 OK
 Content-Type: text/x-cross-domain-policy
 Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
 Set-Cookie: ***REDACTED***
 Date: Wed, 11 Feb 2009 18:07:40 GMT
 Server: gws
 Cache-Control: private, x-gzip-ok=""
 Expires: Wed, 11 Feb 2009 18:07:40 GMT

 <?xml version="1.0"?>
 <!DOCTYPE cross-domain-policy SYSTEM
 "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
 <cross-domain-policy>
   <site-control permitted-cross-domain-policies="by-content-type" />
 </cross-domain-policy>

 Google Gears has also recently issued a security patch enforcing the
 same Content-Type checks to protect their users from similar attacks.

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but
 is very much implied.
 I'll leave it up to Mark to address wordings. As for the term 'origin', I'd
 rather do anything but
 get involved with another term at this point.

 I'd greatly prefer that this was stated explicitly.  Why leave such
 a critical security requirement implied?

 Adam





Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 11:52 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 Your approach is wrong. Host-meta should not be trying to address such
 security concerns.

Ignoring security problems doesn't make them go away.  It just means
you'll have to pay the piper more later.

 Applications making use of it should. There are plenty of
 applications where no one cares about security. Obviously, crossdomain.xml
 needs to be secure, since, well, it is all about that.

What's the point of a central metadata repository that can't handle
the most popular use case of metadata?

 An application with strict security requirements should pay attention to the
 experience you are referring to. We certainly agree on that. But that is
 application-specific.

Here's what I recommend:

1) Change the scope of the host-meta to default to the origin of the
URL from which it was retrieved (as computed by the algorithm in
draft-abarth-origin).

2) Let particular applications narrow this scope if they require
additional granularity.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 11:55 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 There is nothing incorrect about: GET mailto:j...@example.com HTTP/1.1

I don't know how to get a Web browser to generate such a request, so I
am unable to assess its security implications.

 It might look funny to most people but it is perfectly valid. The protocol
 is HTTP, the scheme is mailto. HTTP can talk about any URI, not just http
 URIs. Since this is about *how* /host-meta is obtained, it should talk about
 protocol, not scheme.

Here's my understanding of how this should work (ignoring redirects
for the moment).  Please correct me if my understanding is incorrect
or incomplete:

1) The user agent retrieves the host-meta file by requesting a certain
URL from the network layer.

2) The network layer does some magic involving protocols and
electrical signals on wires and returns a sequence of bytes.

3) The user agent now must compute a scope for the retrieved host-meta file.

I recommend that the scope for the host-meta file be determined from
the URL irrespective of whatever magic goes on in step 2, because this
is the way all other security scopes are computed in Web browsers.
For example, if I view an HTML document located at
http://example.com/index.html, its security origin is (http,
example.com, 80) regardless of whether the HTML document was actually
retrieved by carrier pigeon or SMTP.

(To handle redirects, by the way, you have to use the last URL in the
redirect chain.)
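
In code, the redirect rule looks something like this (a Python sketch;
urlopen follows redirects by default, and geturl() reports the final URL):

import urllib.request
from urllib.parse import urlsplit

def fetch_and_scope(url):
    # The scope is computed from the last URL in the redirect chain,
    # not from the URL the client started with.
    resp = urllib.request.urlopen(url)
    final = urlsplit(resp.geturl())
    return resp.read(), (final.scheme, final.hostname, final.port)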

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
I have to say that the current known use-cases for site-meta are:

1. Security critical ones, but for server-to-server discovery uses (not
browser mediated)

2. Semantic ones, for user consumption, of an informative rather than
security-critical nature. These use cases may be handled by browsers.

I agree that it is worth looking at the security consequences, but at least
to me at this point, it is not clear that the traditional same-origin
policy paradigm used by browsers is relevant here.


On Wed, Feb 11, 2009 at 12:38 PM, Adam Barth w...@adambarth.com wrote:


 On Wed, Feb 11, 2009 at 11:55 AM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:
  There is nothing incorrect about: GET mailto:j...@example.com HTTP/1.1

 I don't know how to get a Web browser to generate such a request, so I
 am unable to assess its security implications.

  It might look funny to most people but it is perfectly valid. The
 protocol
  is HTTP, the scheme is mailto. HTTP can talk about any URI, not just http
  URIs. Since this is about *how* /host-meta is obtained, it should talk
 about
  protocol, not scheme.

 Here's my understanding of how this should work (ignoring redirects
 for the moment).  Please correct me if my understanding is incorrect
 or incomplete:

 1) The user agent retrieves the host-meta file by requesting a certain
 URL from the network layer.

 2) The network layer does some magic involving protocols and
 electrical signals on wires and returns a sequence of bytes.

 3) The user agent now must compute a scope for the retrieved host-meta
 file.

 I recommend that the scope for the host-meta file be determined from
 the URL irrespective of whatever magic goes on in step 2, because this
 is the way all other security scopes are computed in Web browsers.
 For example, if I view an HTML document located at
 http://example.com/index.html, its security origin is (http,
 example.com, 80) regardless of whether the HTML document was actually
 retrieved by carrier pigeon or SMTP.

 (To handle redirects, by the way, you have to use the last URL in the
 redirect chain.)

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 I have to say that the current known use-cases for site-meta are:

 1. Security critical ones, but for server-to-server discovery uses (not
 browser mediated)

 2. Semantic ones, for user consumption, of an informative rather than
 security-critical nature. These use cases may be handled by browsers.

Why not address security metadata for user-agents?  For example, it
would be eminently useful to be able to express X-Content-Type-Options
[1] and X-Frame-Options [2] in a centralized metadata store instead of
wasting bandwidth on every HTTP response (as Google does for
X-Content-Type-Options).  I don't think anyone doubts that we're going
to see a proliferation of this kind of security metadata, e.g., along
the lines of [3].  I don't see the point of making a central metadata
store that ignores these important use cases.
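
To illustrate, a host-meta file might carry entries of roughly this shape
(syntax invented for the sake of argument; the draft defines no such fields):

# Hypothetical host-meta entries carrying per-host security metadata:
Content-Type-Options: nosniff
Frame-Options: deny

Serving two short lines once per host, instead of two extra headers on every
response, is the bandwidth argument in a nutshell.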

Adam

[1] 
http://blogs.msdn.com/ie/archive/2008/09/02/ie8-security-part-vi-beta-2-update.aspx
[2] 
https://blogs.msdn.com/ie/archive/2009/01/27/ie8-security-part-vii-clickjacking-defenses.aspx
[3] http://people.mozilla.org/~bsterne/content-security-policy/



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 1:26 PM, Adam Barth w...@adambarth.com wrote:


 On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com
 wrote:
  I have to say that the current known use-cases for site-meta are:
 
  1. Security critical ones, but for server-to-server discovery uses (not
  browser mediated)
 
  2. Semantic ones, for user consumption, of an informative rather than
  security-critical nature. These use cases may be handled by browsers.

 Why not address security metadata for user-agents?  For example, it
 would be eminently useful to be able to express X-Content-Type-Options
 [1] and X-Frame-Options [2] in a centralized metadata store instead of
 wasting bandwidth on every HTTP response (as Google does for
 X-Content-Type-Options).  I don't think anyone doubts that we're going
 to see a proliferation of this kind of security metadata, e.g., along
 the lines of [3].  I don't see the point of making a central metadata
 store that ignores these important use cases.


The current proposal for host-meta addresses some use cases that today
simply _cannot_ be addressed without it. Your proposal restricts the
discovery process in ways that may have unintended consequences in terms of
prohibiting future uses. This is so that browsers can avoid implementing
same-domain policy checks at the application layer?





 Adam

 [1]
 http://blogs.msdn.com/ie/archive/2008/09/02/ie8-security-part-vi-beta-2-update.aspx
 [2]
 https://blogs.msdn.com/ie/archive/2009/01/27/ie8-security-part-vii-clickjacking-defenses.aspx
 [3] 
 http://people.mozilla.org/~bsterne/content-security-policy/




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 1:46 PM, Breno de Medeiros br...@google.com wrote:
 The current proposal for host-meta addresses some use cases that today
 simply _cannot_ be addressed without it.

I'm not familiar with our process for adopting new use cases, but let's
think more carefully about one of the listed use cases:

On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com wrote:
 1. Security critical ones, but for server-to-server discovery uses (not
 browser mediated)

To serve this use case, we should require that the host-meta file be
served with a specific, novel content type.  Without this requirement,
servers that try to use the host-meta file for security-critical
server-to-server discovery will be tricked by attackers who upload
fake host-meta files to unknowing servers.

 Your proposal restricts the
 discovery process in ways that may have unintended consequences in terms of
 prohibiting future uses.

How does requiring a specific Content-Type prohibit future uses?

 This is so that browsers can avoid implementing
 same-domain policy checks at the application layer?

No, this is to protect servers that let attackers upload previously
benign content to now-magical paths.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 2:01 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 1:46 PM, Breno de Medeiros br...@google.com
 wrote:
  The current proposal for host-meta addresses some use cases that today
  simply _cannot_ be addressed without it.

 I'm not familiar with our process for adopting new use cases, but let's
 think more carefully about one of the listed use cases:

 On Wed, Feb 11, 2009 at 1:04 PM, Breno de Medeiros br...@google.com
 wrote:
  1. Security critical ones, but for server-to-server discovery uses (not
  browser mediated)

 To serve this use case, we should require that the host-meta file be
 served with a specific, novel content type.  Without this requirement,
 servers that try to use the host-meta file for security-critical
 server-to-server discovery will be tricked by attackers who upload
 fake host-meta files to unknowing servers.

  Your proposal restricts the
  discovery process in ways that may have unintended consequences in terms
 of
  prohibiting future uses.

 How does requiring a specific Content-Type prohibit future uses?

  This is so that browsers can avoid implementing
  same-domain policy checks at the application layer?

 No, this is to protect servers that let attackers upload previously
 benign content to now-magical paths.


1. The mechanism is not sufficiently strong to prevent defacement
attacks. An attacker that can upload a file and choose how to set the
content-type would be able to implement the attack. If servers are willing
to let users upload files willy-nilly, and do not worry about magical paths,
will they worry about magical content types?

2. This technique may prevent legitimate uses of the spec by developers who
do not have the ability to set the appropriate header.

Is this more likely to prevent legitimate developers from getting things
done than to prevent attacks from spoofing said magical paths? I would say
yes.

Defacement attacks are a threat to applications relying on this spec, and they
should be explicitly aware of it rather than have a false sense of security
based on ad-hoc mitigation techniques. For instance, for XRD discovery there
is work on a trust profile using signatures that operates on the basic
principle that 'the way to get the resource is fundamentally untrustworthy,
let the resources be self-validating.'
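
The shape of that principle, sketched with the third-party Python
cryptography package (illustrative only, not the XRD trust profile itself;
it assumes an RSA verification key obtained out of band):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def is_self_validating(document, signature, publisher_pem):
    # A detached signature made with an offline key is checked against a
    # key distributed out of band, so even a fully compromised server
    # cannot forge acceptable metadata.
    key = load_pem_public_key(publisher_pem)
    try:
        key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False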





 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Eran Hammer-Lahav
But you are missing the entire application layer here! A browser will not use 
host-meta. It will use an application spec that will use host-meta and that 
application, if security is a concern, will specify such requirements to ensure 
interoperability. It is not the job of host-meta to tell applications what is 
good for them.

EHL


On 2/11/09 12:27 PM, Adam Barth w...@adambarth.com wrote:



That would cause interoperability problems where user agents that care
about security would be incompatible with sites implemented with
insecure user agents in mind.  Based on past history, this leads to a
race to the bottom where no user agents can be both popular and
secure.


On Wed, Feb 11, 2009 at 11:46 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 How about clearly identifying the threat in the spec instead of making this
 a requirement?

 EHL


 On 2/11/09 10:14 AM, Adam Barth w...@adambarth.com wrote:

 On Tue, Feb 10, 2009 at 11:51 PM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:
 In particular, you should require that
 the host-meta file should be served with a specific mime type (ignore
 the response if the mime type is wrong). This protects servers that
 let users upload content from having attackers upload a bogus
 host-meta file.

 I am not sure the value added in security (which I find hard to buy) is
 worth excluding many
 hosting solutions where people do not always have access to setting
 content-type headers.
 After all, focusing on an HTTP GET based solution was based on getting the
 most
 accessible approach.

 Adobe found the security case compelling enough to break backwards
 compatibility in their crossdomain.xml policy file system to enforce
 this requirement.  Most serious Web sites opt in to requiring an
 explicit Content-Type.  For example,

 $ wget http://mail.google.com/crossdomain.xml --save-headers
 $ cat crossdomain.xml
 HTTP/1.0 200 OK
 Content-Type: text/x-cross-domain-policy
 Last-Modified: Tue, 04 Mar 2008 21:38:05 GMT
 Set-Cookie: ***REDACTED***
 Date: Wed, 11 Feb 2009 18:07:40 GMT
 Server: gws
 Cache-Control: private, x-gzip-ok=""
 Expires: Wed, 11 Feb 2009 18:07:40 GMT

 <?xml version="1.0"?>
 <!DOCTYPE cross-domain-policy SYSTEM
 "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
 <cross-domain-policy>
   <site-control permitted-cross-domain-policies="by-content-type" />
 </cross-domain-policy>

 Google Gears has also recently issued a security patch enforcing the
 same Content-Type checks to protect their users from similar attacks.

 Also, if you want this feature to be useful for Web browsers, you
 should align the scope of the host-meta file with the notion of origin
 (not authority).

 The scope is host/port/protocol. The protocol is not said explicitly but
 is very much implied.
 I'll leave it up to Mark to address wordings. As for the term 'origin', I'd
 rather do anything but
 get involved with another term at this point.

 I'd greatly prefer that this was stated explicitly.  Why leave such
 a critical security requirement implied?

 Adam






Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:15 PM, Breno de Medeiros br...@google.com wrote:
 1. The mechanism is not sufficiently strong to prevent defacement
 attacks.

We're not worried about defacement attacks.  We're worried about Web
servers that explicitly allow their users to upload content.  For
example:

1) Webmail providers (e.g., Gmail) let users upload attachments.
2) Forums let users upload avatar images.
3) Wikipedia lets users upload various types of content.

 An attacker that can upload a file and choose how to set the
 content-type would be able to implement the attack. If servers are willing
 to let users upload files willy-nilly, and do not worry about magical paths,
 will they worry about magical content types?

In fact, none of these servers let users specify arbitrary content
types.  They restrict the content type of resources to protect
themselves from XSS attacks and to ensure that they function
properly.

 2. This technique may prevent legitimate uses of the spec by developers who
 do not have the ability to set the appropriate header.

Many developers can control Content-Type headers using .htaccess files
(and their ilk).
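
For example, something along these lines in an .htaccess file would do it
(the media type shown is an assumption, since none is registered for
host-meta):

# Force a host-meta-specific media type for the well-known file
<Files "host-meta">
    ForceType application/host-meta
</Files>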

 Is this more likely to prevent legitimate developers from getting things
 done than to prevent attacks from spoofing said magical paths? I would say
 yes.

What is your evidence for this claim?  My evidence for this being a
serious security issue is the experience of Adobe with their
crossdomain.xml file.  They started out with the same design you
currently use and were forced to add strict Content-Type handling to
protect Web sites from this very attack.  What is different about your
policy file system that will prevent you from falling into the same
trap?

 Defacement attacks are a threat to applications relying on this spec,

We're not talking about defacement attacks.

 and they
 should be explicitly aware of it rather than have a false sense of security
 based on ad-hoc mitigation techniques.

This mechanism does not provide a false sense of security.  In fact,
it provides real security today for Adobe's crossdomain.xml policy
file and for a similar Gears feature.  (Gears also started with your
design and was forced to patch their users.)

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:26 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 But you are missing the entire application layer here! A browser will not
 use host-meta. It will use an application spec that will use host-meta and
 that application, if security is a concern, will specify such requirements
 to ensure interoperability. It is not the job of host-meta to tell
 applications what is good for them.

In that case, the draft should not define a default scope for
host-meta files at all.  Each application that uses the host-meta file
should define the scope that it finds most useful.

As currently written, the draft is downright dangerous because it
defines a scope that is almost (but not quite!) right for Web
browsers.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Eran Hammer-Lahav


On 2/11/09 12:38 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 11:55 AM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:
 There is nothing incorrect about: GET mailto:j...@example.com HTTP/1.1

 I don't know how to get a Web browser to generate such a request, so I
 am unable to assess its security implications.

It really does not matter what a browser will do with such a URI. This example
is not really of interest to a browser because that would imply that the
mailto URI has some visual representation the user wants to see. No such
representation exists today for the user to ask for and the browser to show.

But an application with full access to the HTTP protocol can use it to
obtain some representation of the URI that will mean something to it.

 It might look funny to most people but it is perfectly valid. The protocol
 is HTTP, the scheme is mailto. HTTP can talk about any URI, not just http
 URIs. Since this is about *how* /host-meta is obtained, it should talk about
 protocol, not scheme.

 Here's my understanding of how this should work (ignoring redirects
 for the moment).  Please correct me if my understanding is incorrect
 or incomplete:

 1) The user agent retrieves the host-meta file by requesting a certain
 URL from the network layer.

 2) The network layer does some magic involving protocols and
 electrical signals on wires and returns a sequence of bytes.

 3) The user agent now must compute a scope for the retrieved host-meta file.

 I recommend that the scope for the host-meta file be determined from
 the URL irrespective of whatever magic goes on in step 2, because this
 is the way all other security scopes are computed in Web browsers.
 For example, if I view an HTML document located at
 http://example.com/index.html, its security origin is (http,
 example.com, 80) regardless of whether the HTML document was actually
 retrieved by carrier pigeon or SMTP.

You got this backwards. You decide what the scope is, you get the document
for that scope, you use it.

1. You want to find out more about example.com on port 80 speaking HTTP.
2. You want to find out more about http://example.com/resource/1 (and care
about the HTTP representation).

In both cases, you will do:

GET /host-meta HTTP/1.1
Host: example.com:80

While this document can be identified with http://example.com/host-meta,
that URI alone is not enough to declare its scope. The fact you used HTTP to
obtain a representation of this URI is also needed. Again, protocol and
scheme are not the same thing.

Now, how do you know that what you got is authorized/trusted/unspoofed/etc?
You don't based on what the host-meta spec is offering. If you need that,
specify it in your application. If a bunch of people agree on how to add
security to this, we can address it separately.

Your argument is that without such security, this whole thing is useless. We
obviously disagree.

EHL








Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 2:36 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 2:15 PM, Breno de Medeiros br...@google.com
 wrote:
  1. The mechanism is not sufficiently strong to prevent defacement
  attacks.

 We're not worried about defacement attacks.  We're worried about Web
 servers that explicitly allow their users to upload content.  For
 example:

 1) Webmail providers (e.g., Gmail) let users upload attachments.
 2) Forums let users upload avatar images.
 3) Wikipedia lets users upload various types of content.

  An attacker that can upload a file and choose how to set the
  content-type would be able to implement the attack. If servers are
 willing
  to let users upload files willy-nilly, and do not worry about magical
 paths,
  will they worry about magical content types?

 In fact, none of these servers let users specify arbitrary content
 types.  They restrict the content type of resources to protect
 themselves from XSS attacks and to ensure that they function
 properly.


For some purposes, such as the one you described, putting a host-meta file almost
anywhere in a site could expose a browser to attacks similar to the ones
that crossdomain.xml presented. However, such applications could handle
specifying content-type as a requirement, as Eran rightly pointed out.





  2. This technique may prevent legitimate uses of the spec by developers
 who
  do not have the ability to set the appropriate header.

 Many developers can control Content-Type headers using .htaccess files
 (and their ilk).


And many others cannot. This is particularly irksome in outsourcing
situations where you have only partial control of the hosting environment or
depend on non-technical users to perform administrative tasks.




  Is this more likely to prevent legitimate developers from getting things
  done than to prevent attacks from spoofing said magical paths? I would
 say
  yes.

 What is your evidence for this claim?  My evidence for this being a
 serious security issue is the experience of Adobe with their
 crossdomain.xml file.  They started out with the same design you
 currently use and were forced to add strict Content-Type handling to
 protect Web sites from this very attack.  What is different about your
 policy file system that will prevent you from falling into the same
 trap?


The difference being that crossdomain.xml is intended primarily for browser
use and therefore optimization for that case sounds legitimate. This is not
the case here.

Again, again, there is an application layer where browsers can implement
such policies.




  Defacement attacks are a threat to applications relying on this spec,

 We're not talking about defacement attacks.

  and they
  should be explicitly aware of it rather than have a false sense of
 security
  based on ad-hoc mitigation techniques.

 This mechanism does not provide a false sense of security.  In fact,
 it provides real security today for Adobe's crossdomain.xml policy
 file and for a similar Gears feature.  (Gears also started with your
 design and was forced to patch their users.)

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:44 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:
 You got this backwards.

Ah.  Thanks for this response.  I understand the situation much better now.

Let me see if I understand this correctly for the case of the https scheme.

1. You want to find out more about example.com on port 443 speaking
HTTP-over-TLS.
2. You want to find out more about https://example.com/resource/1 (and
care about the HTTP-over-TLS representation).

In both cases, you will do (wrapped in a TLS session):

GET /host-meta HTTP/1.1
Host: example.com:443

Your point is that a Web browser would never want to find out more
about https://example.com/resource/1 and care about the HTTP
representation (it would always be interested in the HTTP-over-TLS
representation).

Thanks,
Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 2:46 PM, Breno de Medeiros br...@google.com wrote:
 However, such applications could handle
 specifying content-type as a requirement, as Eran rightly pointed out.

Why force everyone interested in use case (1) to add this requirement?
 This will have two results:

1) Some application will forget to add this requirement and be
vulnerable to attack.

2) A service that requires the well-known Content-Type will not be
able to interoperate with a server that takes advantage of the laxity
of this spec.

 What is different about your
 policy file system that will prevent you from falling into the same
 trap?

 The difference being that cross-domain.xml is intended primarily for browser
 use and therefore optimization for that case sounds legitimate. This is not
 the case here.

We're discussing security-critical server-to-server discovery, which
is the first use-case you listed.

 Again, again, there is an application layer where browsers can implement
 such policies.

Sure, you can punt all security problems to the application layer
because I can't construct attacks without a complete system.

It sounds like there are three resolutions to this issue:

1) Require host-meta to be served with a particular, novel Content-Type.

2) Add a section to Security Considerations that explains that
applications using host-meta should consider adding requirement (1).

3) Ignore these attacks.

My opinion is that (3) will cause users of this spec a great deal of
pain.  I also think that (2) will cause users of this spec pain
because they'll ignore the warning and construct insecure systems.

By the way, there is a fourth solution, which I suspect you'll find
unappealing for the same reason you find (1) unappealing: use a method
other than GET to retrieve host-meta.  For example, CORS uses OPTIONS
for a similar purpose.
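
That is, retrieval would look something like this on the wire (illustrative
only; no draft specifies it):

OPTIONS /host-meta HTTP/1.1
Host: example.com

The point being that the upload features on typical servers only hand back
attacker-supplied bytes in response to GET, so a method the upload path
never answers sidesteps the planted-file problem.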

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 3:22 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 2:46 PM, Breno de Medeiros br...@google.com
 wrote:
  However, such applications could handle
  specifying content-type as a requirement, as Eran rightly pointed out.

 Why force everyone interested in use case (1) to add this requirement?
  This will have two results:

 1) Some application will forget to add this requirement and be
 vulnerable to attack.

 2) A service that requires the well-known Content-Type will not be
 able to interoperate with a server that takes advantage of the laxity
 of this spec.

  What is different about your
  policy file system that will prevent you from falling into the same
  trap?
 
  The difference being that cross-domain.xml is intended primarily for
 browser
  use and therefore optimization for that case sounds legitimate. This is
 not
  the case here.

 We're discussing security-critical server-to-server discovery, which
 is the first use-case you listed.


In that case, content-type is a mild defense. Can you give me an example
where a web-site administrator will allow files to be hosted at '/'? I can
find some fairly interesting names to host at '/'

E.g.: favicon.ico, .htaccess, robots.txt, ...

Trying to secure such environments seems to me a waste of time, quite
frankly.

The most interesting threat of files uploaded to root is via defacement.
This solution does nothing against that threat.




  Again, again, there is an application layer where browsers can implement
  such policies.

 Sure, you can punt all security problems to the application layer
 because I can't construct attacks without a complete system.

 It sounds like there are three resolutions to this issue:

 1) Require host-meta to be served with a particular, novel Content-Type.


Not feasible, because of limitations on developers that implement these
server-to-server techniques.




 2) Add a section to Security Considerations that explains that
 applications using host-meta should consider adding requirement (1).


No. I would suggest adding a Security Considerations section that says that host-meta
SHOULD NOT be relied upon for ANY security-sensitive purposes _of_its_own_,
and that applications that require levels of integrity against defacement
attacks, etc., should implement real security techniques. Frankly, I think
content-type does very little for security of such applications.




 3) Ignore these attacks.

 My opinion is that (3) will cause users of this spec a great deal of
 pain.  I also think that (2) will cause users of this spec pain
 because they'll ignore the warning and construct insecure systems.

 By the way, there is a fourth solution, which I suspect you'll find
 unappealing for the same reason you find (1) unappealing: use a method
 other than GET to retrieve host-meta.  For example, CORS uses OPTIONS
 for a similar purpose.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 3:32 PM, Breno de Medeiros br...@google.com wrote:
 In that case, content-type is a mild defense. Can you give me an example
 where a web-site administrator will allow files to be hosted at '/'?

There are enough of these sites to force Adobe to break backwards
compatibility in a Flash security release.

 I can find some fairly interesting names to host at '/'

 E.g.: favicon.ico, .htaccess, robots.txt, ...

OMG, you changed my favicon!  .htaccess only matters if Apache
interprets it (e.g., uploading an .htaccess file to Gmail doesn't do
anything interesting).

 Trying to secure such environments seems to me a waste of time, quite
 frankly.

Clearly, Adobe doesn't share your opinion.

 The most interesting threat of files uploaded to root is via defacement.
 This solution does nothing against that threat.

If you can deface my server, then I've got big problems already (e.g.,
my Web site is totally hacked).  Not addressing this issue creates a
security problem where none currently exists.

 1) Require host-meta to be served with a particular, novel Content-Type.

 Not feasible, because of limitations on developers that implement these
 server-to-server techniques.

That's an opinion.  We'll see if you're forced to patch the spec when
you're confronted with a horde of Web servers that you've just made
vulnerable to attack.

 2) Add a section to Security Considerations that explains that
 applications using host-meta should consider adding requirement (1).

 No. I would suggest adding a Security Considerations section that says that host-meta
 SHOULD NOT be relied upon for ANY security-sensitive purposes _of_its_own_,

Then how are we to address use case (1)?

 and that applications that require levels of integrity against defacement
 attacks, etc., should implement real security techniques. Frankly, I think
 content-type does very little for security of such applications.

Your argument for why strict Content-Type handling is insecure is that
a more powerful attacker can win anyway.  My argument is that we have
implementation experience that we need to defend against these
threats.

I did a little more digging, and it looks like Silverlight's
clientaccesspolicy.xml also requires strict Content-Type processing:

http://msdn.microsoft.com/en-us/library/cc645032(VS.95).aspx

That makes 3 out of 3 systems that use strict Content-Type processing.

Microsoft's solution to the limited hosting environment problem
appears to be quite clever, actually.  I couldn't find documentation
(and haven't put in the effort to reverse engineer the behavior), but
it looks like they require a content type of application/xml, which
they get for free from limiting hosting providers by naming their file
with a .xml extension.  This is clever, because it protects all the
sites I listed earlier because those sites would have XSS if they let
an attacker control an application/xml resource on their server.
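
In other words, they lean on the sort of extension-to-type mapping that
stock server configurations usually ship with, e.g. (Apache syntax):

# Files ending in .xml are served as application/xml by default, so
# clientaccesspolicy.xml gets the required type for free.
AddType application/xml .xml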

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 3:57 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 3:32 PM, Breno de Medeiros br...@google.com
 wrote:
  In that case, content-type is a mild defense. Can you give me an example
  where a web-site administrator will allow files to be hosted at '/'?

 There are enough of these sites to force Adobe to break backwards
 compatibility in a Flash security release.

  I can find some fairly interesting names to host at '/'
 
  E.g.: favicon.ico, .htaccess, robots.txt, ...

 OMG, you changed my favicon!  .htaccess only matters if Apache
 interprets it (e.g., uploading an .htaccess file to Gmail doesn't do
 anything interesting).

  Trying to secure such environments seems to me a waste of time, quite
  frankly.

 Clearly, Adobe doesn't share your opinion.

  The most interesting threat of files uploaded to root is via defacement.
  This solution does nothing against that threat.

 If you can deface my server, then I've got big problems already (e.g.,
 my Web site is totally hacked).  Not addressing this issue creates a
 security problem where none currently exists.

  1) Require host-meta to be served with a particular, novel Content-Type.
 
  Not feasible, because of limitations on developers that implement these
  server-to-server techniques.

 That's an opinion.  We'll see if you're forced to patch the spec when
 you're confronted with a horde of Web servers that you've just made
 vulnerable to attack.

  2) Add a section to Security Considerations that explains that
  applications using host-meta should consider adding requirement (1).
 
  No. I would suggest adding a Security Considerations section that says that
 host-meta
  SHOULD NOT be relied upon for ANY security-sensitive purposes
 _of_its_own_,

 Then how are we to address use case (1)?

  and that applications that require levels of integrity against defacement
  attacks, etc., should implement real security techniques. Frankly, I
 think
  content-type does very little for security of such applications.

 Your argument for why strict Content-Type handling is insecure is that
 a more powerful attacker can win anyway.  My argument is that we have
 implementation experience that we need to defend against these
 threats.

 I did a little more digging, and it looks like Silverlight's
 clientaccesspolicy.xml also requires strict Content-Type processing:

 http://msdn.microsoft.com/en-us/library/cc645032(VS.95).aspx

 That makes 3 out of 3 systems that use strict Content-Type processing.


All of the above systems target browsers and none have the usage
requirements of the proposed spec.





 Microsoft's solution to the limited hosting environment problem
 appears to be quite clever, actually.  I couldn't find documentation
 (and haven't put in the effort to reverse engineer the behavior), but
  it looks like they require a content type of application/xml, which
 they get for free from limiting hosting providers by naming their file
 with a .xml extension.  This is clever, because it protects all the
 sites I listed earlier because those sites would have XSS if they let
 an attacker control an application/xml resource on their server.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 3:57 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 3:32 PM, Breno de Medeiros br...@google.com
 wrote:
  In that case, content-type is a mild defense. Can you give me an example
  where a web-site administrator will allow files to be hosted at '/'?

 There are enough of these sites to force Adobe to break backwards
 compatibility in a Flash security release.

  I can find some fairly interesting names to host at '/'
 
  E.g.: favicon.ico, .htaccess, robots.txt, ...

 OMG, you changed my favicon!  .htaccess only matters if Apache
 interprets it (e.g., uploading an .htaccess file to Gmail doesn't do
 anything interesting).

  Trying to secure such environments seems to me a waste of time, quite
  frankly.

 Clearly, Adobe doesn't share your opinion.

  The most interesting threat of files uploaded to root is via defacement.
  This solution does nothing against that threat.

  If you can deface my server, then I've got big problems already (e.g.,
 my Web site is totally hacked).  Not addressing this issue creates a
 security problem where none currently exists.


The web site of your corporation being totally hacked and the identity
system for users of your corporation's servers being totally hacked are
problems of a completely different order of magnitude.

The fact is that site-meta will be used for purposes that introduce
attending threats even more significant than XSS (an example of which you
still have to provide in this thread), and the security measures needed to
mitigate these threats are application specific.





  1) Require host-meta to be served with a particular, novel Content-Type.
 
  Not feasible, because of limitations on developers that implement these
  server-to-server techniques.

 That's an opinion.  We'll see if you're forced to patch the spec when
 you're confronted with a horde of Web servers that you've just made
 vulnerable to attack.

  2) Add a section to Security Considerations that explains that
  applications using host-meta should consider adding requirement (1).
 
   No. I would suggest adding a Security Considerations section that says that
 host-meta
  SHOULD NOT be relied upon for ANY security-sensitive purposes
 _of_its_own_,

 Then how are we to address use case (1)?

  and that applications that require levels of integrity against defacement
  attacks, etc., should implement real security techniques. Frankly, I
 think
  content-type does very little for security of such applications.

 Your argument for why strict Content-Type handling is insecure is that
 a more powerful attacker can win anyway.  My argument is that we have
 implementation experience that we need to defend against these
 threats.

 I did a little more digging, and it looks like Silverlight's
 clientaccesspolicy.xml also requires strict Content-Type processing:

  http://msdn.microsoft.com/en-us/library/cc645032(VS.95).aspx

 That makes 3 out of 3 systems that use strict Content-Type processing.

 Microsoft's solution to the limited hosting environment problem
 appears to be quite clever, actually.  I couldn't find documentation
 (and haven't put in the effort to reverse engineer the behavior), but
  it looks like they require a content type of application/xml, which
 they get for free from limiting hosting providers by naming their file
 with a .xml extension.  This is clever, because it protects all the
 sites I listed earlier because those sites would have XSS if they let
 an attacker control an application/xml resource on their server.

 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 4:00 PM, Breno de Medeiros br...@google.com wrote:
 All of the above systems target browsers and none have the usage
 requirements of the proposed spec.

The point is there are enough HTTP servers on the Internet that let
users upload content in this way that these vendors have added strict
Content-Type processing to their metadata mechanisms.  If you don't
even warn consumers of your spec about these threats, those folks will
build applications on top of host-meta that make these servers
vulnerable to attack.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 4:38 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 11, 2009 at 4:00 PM, Breno de Medeiros br...@google.com
 wrote:
  All of the above systems target browsers and none have the usage
  requirements of the proposed spec.

 The point is there are enough HTTP servers on the Internet that let
  users upload content in this way that these vendors have added strict
 Content-Type processing to their metadata mechanisms.  If you don't
 even warn consumers of your spec about these threats, those folks will
 build applications on top of host-meta that make these servers
 vulnerable to attack.


Yes, but your solution prevents legitimate use cases that are a higher value
proposition.




 Adam




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 4:40 PM, Breno de Medeiros br...@google.com wrote:
 Yes, but your solution prevents legitimate use cases that are a higher value
 proposition.

How does:

On Wed, Feb 11, 2009 at 3:22 PM, Adam Barth w...@adambarth.com wrote:
 2) Add a section to Security Considerations that explains that
 applications using host-meta should consider adding requirement (1) [strict 
 Content-Type processing].

prevent legitimate use cases?

It's not the ideal solution because it passes the buck to
application-land, but it's orders of magnitude better than laying a
subtle trap for those folks.

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Ian Hickson

On Wed, 11 Feb 2009, Breno de Medeiros wrote:
 
   2. This technique may prevent legitimate uses of the spec by 
   developers who do not have the ability to set the appropriate 
   header.
 
  Many developers can control Content-Type headers using .htaccess files 
  (and their ilk).
 
 And many others cannot. This is particularly irksome in outsourcing 
 situations where you have only partial control of the hosting 
 environment or depend on non-technical users to perform administrative 
 tasks.

Note that if the spec says that UAs are to ignore the Content-Type header, 
this is a violation of the HTTP and MIME specifications. If this is 
intentional, then the HTTP or MIME specs should be changed.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 5:00 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 11 Feb 2009, Breno de Medeiros wrote:
  
2. This technique may prevent legitimate uses of the spec by
developers who do not have the ability to set the appropriate
header.
  
   Many developers can control Content-Type headers using .htaccess files
   (and their ilk).
 
  And many others cannot. This is particularly irksome in outsourcing
  situations where you have only partial control of the hosting
  environment or depend on non-technical users to perform administrative
  tasks.

 Note that if the spec says that UAs are to ignore the Content-Type header,
 this is a violation of the HTTP and MIME specifications. If this is
 intentional, then the HTTP or MIME specs should be changed.


The spec is letting applications decide what to do. It is not mandating
anything.




 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Ian Hickson

On Wed, 11 Feb 2009, Breno de Medeiros wrote:
 On Wed, Feb 11, 2009 at 5:00 PM, Ian Hickson i...@hixie.ch wrote:
  On Wed, 11 Feb 2009, Breno de Medeiros wrote:
   
 2. This technique may prevent legitimate uses of the spec by 
 developers who do not have the ability to set the appropriate 
 header.
   
Many developers can control Content-Type headers using .htaccess 
files (and their ilk).
  
   And many others cannot. This is particularly irksome in outsourcing 
   situations where you have only partial control of the hosting 
   environment or depend on non-technical users to perform 
   administrative tasks.
 
  Note that if the spec says that UAs are to ignore the Content-Type 
  header, this is a violation of the HTTP and MIME specifications. If 
  this is intentional, then the HTTP or MIME specs should be changed.
 
 The spec is letting applications decide what to do. It is not mandating 
 anything.

Well then what Adam is suggesting isn't controversial, and in fact it's 
already required (by HTTP/MIME). So adding a note to the site-meta spec 
reminding implementors of this doesn't seem like a bad idea.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
On Wed, Feb 11, 2009 at 5:25 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 11 Feb 2009, Breno de Medeiros wrote:
  On Wed, Feb 11, 2009 at 5:00 PM, Ian Hickson i...@hixie.ch wrote:
   On Wed, 11 Feb 2009, Breno de Medeiros wrote:

  2. This technique may prevent legitimate uses of the spec by
  developers who do not have the ability to set the appropriate
  header.

 Many developers can control Content-Type headers using .htaccess
 files (and their ilk).
   
And many others cannot. This is particularly irksome in outsourcing
situations where you have only partial control of the hosting
environment or depend on non-technical users to perform
administrative tasks.
  
   Note that if the spec says that UAs are to ignore the Content-Type
   header, this is a violation of the HTTP and MIME specifications. If
   this is intentional, then the HTTP or MIME specs should be changed.
 
  The spec is letting applications decide what to do. It is not mandating
  anything.

 Well then what Adam is suggesting isn't controversial, and in fact it's
 already required (by HTTP/MIME). So adding a note to the site-meta spec
 reminding implementors of this doesn't seem like a bad idea.


My only concern is that the requirement will be construed as reasonably
sufficient for security (which is indeed the case of crossdomain.xml, but
not for many intended applications). The example Adam just gave, i.e.,
server-to-server authentication metadata being subverted by uploading a
file, is the type of application that I believe should ideally resist full
compromise of the server (e.g., by using metadata signed with offline keys).
So I am not necessarily opposed to it, but the language needs to make it
clear that this strategy serves to mitigate a very specific class of
threats.




 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




-- 
--Breno

+1 (650) 214-1007 desk
+1 (408) 212-0135 (Grand Central)
MTV-41-3 : 383-A
PST (GMT-8) / PDT(GMT-7)


Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Ian Hickson

On Wed, 11 Feb 2009, Breno de Medeiros wrote:
 
 My only concern is that the requirement will be construed as reasonably 
 sufficient for security (which is indeed the case of crossdomain.xml, 
 but not for many intended applications). The example Adam just gave, 
 i.e., server-to-server authentication metadata being subverted by 
 uploading a file, is the type of application that I believe should 
 ideally resist full compromise of the server (e.g., by using metadata 
 signed with offline keys). So I am not necessarily opposed to it, but 
 the language needs to make it clear that this strategy serves to 
 mitigate a very specific class of threats.

Agreed. I don't think anyone is saying this is the be-all and end-all of 
security, only that it is one step of many needed to have defence in 
depth.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Breno de Medeiros
So the proposal is for a security considerations section that describes
attending threats and strongly hints that applications will be vulnerable if
they do not adopt techniques to validate the results. It would suggest the
use of content-type headers and explain what types of threats it protects
against, provided that it includes caveats that this technique may not be
sufficient for some applications, as well as not necessary for others
that use higher-assurance approaches to directly validate the results
discovered through host-meta.

I still do not think this is necessary because the threat model attending
this is much broader than crossdomain.xml and applications that rely on this
will have to understand their own security needs or be necessarily
vulnerable. On the other hand, I will not argue against it either.

On Wed, Feb 11, 2009 at 5:50 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 11 Feb 2009, Breno de Medeiros wrote:
 
  My only concern is that the requirement will be construed as reasonably
  sufficient for security (which is indeed the case of crossdomain.xml,
  but not for many intended applications). The example Adam just gave,
  i.e., server-to-server authentication metadata being subverted by
  uploading a file, is the type of application that I believe should
  ideally resist full compromise of the server (e.g., by using metadata
  signed with offline keys). So I am not necessarily opposed to it, but
  the language needs to make it clear that this strategy serves to
  mitigate a very specific class of threats.

 Agreed. I don't think anyone is saying this is the be-all and end-all of
 security, only that it is one step of many needed to have defence in
 depth.



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-11 Thread Adam Barth

On Wed, Feb 11, 2009 at 6:04 PM, Breno de Medeiros br...@google.com wrote:
 So the proposal is for a security considerations section that describes
 the attendant threats and strongly hints that applications will be
 vulnerable if they do not adopt techniques to validate the results. It
 would suggest the use of content-type headers and explain what types of
 threats that technique protects against, provided that it includes
 caveats that the technique may not be sufficient for some applications,
 as well as not necessary for others that use higher-assurance approaches
 to directly validate the results discovered through host-meta.

Sounds good to me.  I'm not that familiar with IETF process.  Should I
draft this section and email it to someone?

 I still do not think this is necessary, because the threat model here
 is much broader than crossdomain.xml's, and applications that rely on
 host-meta will have to understand their own security needs or be
 necessarily vulnerable. On the other hand, I will not argue against it
 either.

For my part, I'd rather we go further and require strict Content-Type
processing.  :)

Adam



Re: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-10 Thread Thomas Roessler





On 11 Feb 2009, at 01:31, Mark Nottingham wrote:

Gentle reminder; the draft asks for discussion on www-talk. Sending  
followups there (I should have mentioned this in the announcement,  
sorry)...


(and I should read instructions... apologies)

The obvious solution to that part of the puzzle is to let the  
mechanism default to the same URI scheme, unless there is a  
specific convention to the contrary.  That should cover any URI  
schemes for which a safe retrieval operation is defined (HTTP,  
HTTPS, FTP come to mind).


I'm happy to clarify this by either adding scheme/protocol to the
(host, port) tuple (although we'll probably have to come up with a
different term than "authority"; PLEASE don't say "endpoint" ;),
which will affect both the default scoping of application as well as
the discovery mechanism, or just limiting it to discovery.


I'd use the (scheme, host, port) triple to identify the endpoints that  
we're dealing with here, both for scope and discovery. Adam Barth's  
draft-abarth-origin gives a canonicalization procedure for these  
tuples.  That will be useful when the tuples derived from different  
URIs need to be compared, to determine whether one is in the same site  
metadata scope as the other.
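
(A rough illustration of that comparison. The canonicalization here is a
considerable simplification of draft-abarth-origin: IDNA, non-hierarchical
schemes, and other rules are omitted, and the default-port table and
helper names are assumptions.)

    # Sketch: reduce a URI to a canonical (scheme, host, port) triple and
    # compare triples to decide whether two URIs share a metadata scope.
    from urllib.parse import urlsplit

    DEFAULT_PORTS = {"http": 80, "https": 443, "ftp": 21}

    def origin(uri):
        parts = urlsplit(uri)
        scheme = parts.scheme.lower()
        host = (parts.hostname or "").lower()
        port = parts.port or DEFAULT_PORTS.get(scheme)
        return (scheme, host, port)

    def same_metadata_scope(uri_a, uri_b):
        return origin(uri_a) == origin(uri_b)

    # same_metadata_scope("http://Example.com/a", "http://example.com:80/b")
    # -> True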


Calling that kind of triple an "origin" seems fine, and is consistent
with the usage of that word in draft-abarth-origin and elsewhere.


The benefit of using the triple for both discovery and scope is that  
you don't acquire yet another possible cross-origin channel in the  
browser.



For other URI schemes, one could either punt on this issue  
completely, define a default fall-back to HTTP (or HTTPS, depending  
on which of the two better matches the security properties of the  
protocol in question), or actually say explicitly what's the  
correct scheme.


I'm inclined to punt on it. Default fall-back to HTTP makes too many  
assumptions.


Same inclination here, actually.




RE: Origin vs Authority; use of HTTPS (draft-nottingham-site-meta-01)

2009-02-10 Thread Eran Hammer-Lahav


 -Original Message-
 From: Mark Nottingham [mailto:m...@yahoo-inc.com]
 Sent: Tuesday, February 10, 2009 4:31 PM

 My understanding of the discussion's resolution was that this is not a
 goal for this spec any more; i.e., if there's any boundary-hopping, it
 will be defined by the protocol or application in use.

The only use case for finding out information about email addresses through 
host-meta is no longer under consideration. It was dropped mostly because 
mailto URIs do not have an authority, which means that in order to go from a 
mailto URI to a host-meta authority, one has to write special handling specific 
to that URI scheme. This is not something we wanted to do in either host-meta 
or the discovery spec [1].

If the OpenID community wants to support email identifiers, they should find a 
way to address that at the application level, including dealing with all the 
authority and security issues it raises.

 I'm happy to clarify this by either adding scheme/protocol to the
 (host, port) tuple (although we'll probably have to come up with a
 different term than "authority"; PLEASE don't say "endpoint" ;), which
 will affect both the default scoping of application as well as the
 discovery mechanism, or just limiting it to discovery.

First, scheme is incorrect here, as the scheme does not always determine a 
specific protocol (see the "'http' is not just for HTTP" saga). There are two 
ways in which a host-meta file can be obtained:

1. Given a host/port/protocol, the client can connect to the host/port and 
speak the protocol to obtain the resource /host-meta.

2. Given a URI, the client can connect to the host/port of the URI authority, 
speak the protocol implied by the URI scheme, and ask for the /host-meta 
resource. The resulting document is scoped to the host/port/protocol used.
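
(A minimal sketch of case 2 above. Only http and https are handled, the
function name is made up, and /host-meta is the well-known path from the
draft; other schemes are punted on, as discussed earlier in the thread.)

    # Sketch: derive the host-meta URI for case 2, i.e. request /host-meta
    # from the URI's authority over the protocol implied by the scheme.
    from urllib.parse import urlsplit, urlunsplit

    def host_meta_url(uri):
        parts = urlsplit(uri)
        if parts.scheme not in ("http", "https"):
            raise ValueError("no host-meta convention for scheme %r"
                             % parts.scheme)
        return urlunsplit((parts.scheme, parts.netloc, "/host-meta", "", ""))

    # host_meta_url("https://www.example.com:8443/widgets/1")
    # -> "https://www.example.com:8443/host-meta"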

Now, if someone had a mailto: URI, they could decide that for that application 
(which is likely to be an HTTP application) they are going to use the HTTP 
protocol with the domain name (and default port 80) of the email address. But 
again, that is outside the scope of our effort.

EHL