Re: Why the restriction on unauthenticated GET in CORS?

2012-07-21 Thread Eric Rescorla
Henry,

In my opinion as Chair, there has been broad consensus in the
WebAppSec WG that one of the basic design constraints of
CORS is that introducing CORS features into browsers not create
new security vulnerabilities for existing network deployments.
What you are proposing would have that result.

You are of course free to believe that that consensus is wrong,
but I do not believe that discussing this further serves any purpose.

Please take this discussion elsewhere.

-Ekr


On Fri, Jul 20, 2012 at 9:41 PM, Henry Story henry.st...@bblfish.net wrote:

 On 21 Jul 2012, at 05:39, Jonas Sicking wrote:

 On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net 
 wrote:

 On 20 Jul 2012, at 18:59, Adam Barth wrote:

 On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com 
 wrote:
 So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.

 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.

 Welcome to the web.  We support legacy systems.  If you don't want to
 support legacy systems, you might not enjoy working on improving the
 web platform.

 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile inquiry
 to find out how many systems there are for which this is a problem, if any.
 That is:

  a) systems that use non-standard internal ip addresses
  b) systems that use ip-address provenance for access control
  c) ? potentially other issues that we have not covered

 One important group to consider is home routers. Routers are often
 secured only by checking that requests are coming through an internal
 connection, i.e. either through wifi or through the ethernet port. If
 web pages can make arbitrary requests to such routers it would mean
 that they can redirect traffic arbitrarily and transparently.

 The proposal is that requests to machines on private ip-ranges - i.e. machines
 on 192.168.x.x and 10.x.x.x addresses in IPv4, or in IPv6 coming from
 the unique local unicast address space [1] - would still require the full CORS
 handshake as described currently. The proposal only affects GET requests
 requiring no authentication, to machines with public ip addresses: the
 responses to these requests would be allowed through to a CORS javascript
 request without requiring the server to add the Access-Control-Allow-Origin
 header to its response. Furthermore it was added that the browser should
 still add the Origin: header.

 The argument is that machines on such public IP addresses that would
 respond to such GET requests would be accessible via the public internet
 and so would be in any case accessible via a CORS proxy.

 This proposal would clearly not affect home routers as currently deployed. The
 dangerous access to those is always via the 192.168.x.x ip address range
 ( or the 10.x.x.x one ). If a router were insecure when reached via its public
 name space and ip address, then it would simply be an insecure router.

 I agree that there is some element of risk being taken in making this
 decision; the above does not quite follow analytically from first principles.
 It is possible that internal networks use public ip addresses for their own
 machines - they would need to do this because the 10.x.x.x address space, or
 its IPv6 equivalent, was too small. Doing this, they would make access to
 public sites in those ip ranges impossible (since traffic would be redirected
 to the internal machines). My guess is that networks with this type of setup
 don't allow just anybody to open a connection on them. That at least seems
 very likely to be so for IPv4. I am not sure what the situation with IPv6 is,
 or what it should be ( I am reasoning by analogy there ). Machines on IPv6
 addresses would be machines deployed by experienced people, who would probably
 be able to change their software to respond differently to GET requests on
 internal networks when the Origin: header's value was not an internal machine.

 Henry

 [1] http://www.simpledns.com/private-ipv6.aspx



 / Jonas

 Social Web Architect
 http://bblfish.net/





Re: Why the restriction on unauthenticated GET in CORS?

2012-07-21 Thread Henry Story

On 21 Jul 2012, at 15:02, Eric Rescorla wrote:

 Henry,
 
 In my opinion as Chair, there has been broad consensus in the
 WebAppSec WG that one of the basic design constraints of
 CORS is that introducing CORS features into browsers not create
 new security vulnerabilities for existing network deployments.

I understand that concern completely. 

 What you are proposing would have that result.

Well, that was what was in question. For example, Jonas Sicking
clearly misunderstood the proposal, since he believed it would
affect the security of home routers. Other responses seemed to
believe that security via ip-address selection would be affected
- not so for internal ip-addresses, as argued below.
 
 You are of course free to believe that that consensus is wrong,

I understand the consensus, and I think as a general policy it is a
good one. I assume policies are general guides that have to be wielded
with care and not used just to shut down interesting improvements
that may look like they are close to the borderline. Often the interesting
ideas are the ones that look, weirdly, like they break and contradict
a number of deeply held beliefs.

 but I do not believe that discussing this further serves any purpose.

I was not going to add anything myself after my previous e-mail, frankly.
I was just responding to what I thought were misunderstandings of a
possibility I had seen. If you look carefully at this thread, I was initially
satisfied with the first answer to the problem. Then a new possibility,
proposed by another member of this group, Cameron Jones, came up, which
we were considering.
 
 Please take this discussion elsewhere.

I have other things to do than to discuss CORS. I have built a proxy to bypass
the limitations, and have some other ideas on how to get things done better. 
I was just sending some feedback to this group, at the cost of my own time, as I
thought it could be of interest.

All the best with getting through to final recommendation,

Henry

 
 -Ekr
 
 
 On Fri, Jul 20, 2012 at 9:41 PM, Henry Story henry.st...@bblfish.net wrote:
 
 On 21 Jul 2012, at 05:39, Jonas Sicking wrote:
 
 On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net 
 wrote:
 
 On 20 Jul 2012, at 18:59, Adam Barth wrote:
 
 On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com 
 wrote:
 So, this is a non-starter. Thanks for all the fish.
 
 That's why we have the current design.
 
 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.
 
 Welcome to the web.  We support legacy systems.  If you don't want to
 support legacy systems, you might not enjoy working on improving the
 web platform.
 
 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile
 inquiry to find out how many systems there are for which this is a
 problem, if any. That is:
 
 a) systems that use non-standard internal ip addresses
 b) systems that use ip-address provenance for access control
 c) ? potentially other issues that we have not covered
 
 One important group to consider is home routers. Routers are often
 secured only by checking that requests are coming through an internal
 connection, i.e. either through wifi or through the ethernet port. If
 web pages can make arbitrary requests to such routers it would mean
 that they can redirect traffic arbitrarily and transparently.
 
 The proposal is that requests to machines on private ip-ranges - i.e. machines
 on 192.168.x.x and 10.x.x.x addresses in IPv4, or in IPv6 coming from
 the unique local unicast address space [1] - would still require the full CORS
 handshake as described currently. The proposal only affects GET requests
 requiring no authentication, to machines with public ip addresses: the
 responses to these requests would be allowed through to a CORS javascript
 request without requiring the server to add the Access-Control-Allow-Origin
 header to its response. Furthermore it was added that the browser should
 still add the Origin: header.
 
 The argument is that machines on such public IP addresses that would
 respond to such GET requests would be accessible via the public internet
 and so would be in any case accessible via a CORS proxy.
 
 This 

Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.

 Could you expand on this response, please?

 My understanding is that requests generated from XHR will have Origin
 applied. This can be used to reject requests from 3rd party websites
 within browsers. Therefore, intranets have the potential to restrict
 access arising from internal users' browsing habits.

They have the potential, but existing networks don't do that.  We need
to protect legacy systems that don't understand the Origin header.

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.

 Yes, but if no user private data is being exposed then there is a cost
 being paid for no benefit.

I think it's difficult to discuss ethics without agreeing on an
ethical theory.  Let's stick to technical, rather than ethical,
discussions.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Cameron Jones
On Fri, Jul 20, 2012 at 8:29 AM, Adam Barth w...@adambarth.com wrote:
 On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.

 Could you expand on this response, please?

 My understanding is that requests generated from XHR will have Origin
 applied. This can be used to reject requests from 3rd party websites
 within browsers. Therefore, intranets have the potential to restrict
 access arising from internal users' browsing habits.

 They have the potential, but existing networks don't do that.  We need
 to protect legacy systems that don't understand the Origin header.


Yes, I understand that. When new features are introduced, someone's
security policy is impacted; in this case (and by policy, always the
case) it is those who provide public services whose security policy is
broken.

It just depends on whose perspective you look at it from.

The costs of private security *are* being paid by the public, although
it seems the public has to pay a high price for everything nowadays.

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.

 Yes, but if no user private data is being exposed then there is a cost
 being paid for no benefit.

 I think it's difficult to discuss ethics without agreeing on an
 ethical theory.  Let's stick to technical, rather than ethical,
 discussions.


Yes, but as custodians of a public space there is an ethical duty and
responsibility to represent the interests of all users of that space.
This is why the concerns deserve attention even if they may have been
visited before.

Given that the level of impact affects the entire corpus of global public
data, it is valuable to do an impact and risk assessment to gauge whether
the costs are significantly outweighed by the benefits for either party.

With some further consideration, I can't see any other way to protect
IP-authenticated systems against targeted attacks without the mandatory
upgrade of these systems to IP + Origin authentication.

So, this is a non-starter. Thanks for all the fish.

 Adam

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 8:29 AM, Adam Barth w...@adambarth.com wrote:
 On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.

 Could you expand on this response, please?

 My understanding is that requests generated from XHR will have Origin
 applied. This can be used to reject requests from 3rd party websites
 within browsers. Therefore, intranets have the potential to restrict
 access arising from internal users' browsing habits.

 They have the potential, but existing networks don't do that.  We need
 to protect legacy systems that don't understand the Origin header.


 Yes, I understand that. When new features are introduced, someone's
 security policy is impacted; in this case (and by policy, always the
 case) it is those who provide public services whose security policy is
 broken.

 It just depends on whose perspective you look at it from.

 The costs of private security *are* being paid by the public, although
 it seems the public has to pay a high price for everything nowadays.

I'm not sure I understand the point you're making, but it doesn't
really matter.  We're not going to introduce vulnerabilities into
legacy systems.

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.

 Yes, but if no user private data is being exposed then there is a cost
 being paid for no benefit.

 I think it's difficult to discuss ethics without agreeing on an
 ethical theory.  Let's stick to technical, rather than ethical,
 discussions.

 Yes, but as custodians of a public space there is an ethical duty and
 responsibility to represent the interests of all users of that space.
 This is why the concerns deserve attention even if they may have been
 visited before.

I'm sorry, but I'm unable to respond to any ethical arguments.  I can
only respond to technical arguments.

 Given that the level of impact affects the entire corpus of global public
 data, it is valuable to do an impact and risk assessment to gauge whether
 the costs are significantly outweighed by the benefits for either party.

 With some further consideration, I can't see any other way to protect
 IP-authenticated systems against targeted attacks without the mandatory
 upgrade of these systems to IP + Origin authentication.

 So, this is a non-starter. Thanks for all the fish.

That's why we have the current design.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Cameron Jones
On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.

Yes, I note the use of the word current and not final.

Ethics are a starting point for designing technology responsibly. If
the goals cannot be met for valid technological reasons then that is
an unfortunate outcome, and one that should be avoided at all costs.

The costs of supporting legacy systems have real financial implications
notwithstanding any ethical ideology. If those costs become too great,
legacy systems lose their impenetrable pedestal.

The architectural impact of supporting non-maintained legacy
systems is that web proxy intermediaries are something we will all have
to live with.

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.

 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.

Welcome to the web.  We support legacy systems.  If you don't want to
support legacy systems, you might not enjoy working on improving the
web platform.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 20 Jul 2012, at 18:59, Adam Barth wrote:

 On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.
 
 That's why we have the current design.
 
 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.
 
 Welcome to the web.  We support legacy systems.  If you don't want to
 support legacy systems, you might not enjoy working on improving the
 web platform.

Of course, but you seem to want to support hidden legacy systems, that is,
systems none of us know about or can see. It is still a worthwhile inquiry to
find out how many systems there are for which this is a problem, if any. That
is:

  a) systems that use non-standard internal ip addresses
  b) systems that use ip-address provenance for access control
  c) ? potentially other issues that we have not covered

Systems with a) are going to be very rare, it seems to me, and the question
would be whether they can't really move over to standard internal ip addresses.
Perhaps IPv6 makes that easy.

It is not clear that anyone should bother with designs such as b) - that's bad
practice anyway, I would guess.

  Anything else?

Henry

 
 Adam

Social Web Architect
http://bblfish.net/




Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Tab Atkins Jr.
On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net wrote:
 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile inquiry
 to find out how many systems there are for which this is a problem, if any.
 That is:

   a) systems that use non-standard internal ip addresses
   b) systems that use ip-address provenance for access control
   c) ? potentially other issues that we have not covered

 Systems with a) are going to be very rare, it seems to me, and the question
 would be whether they can't really move over to standard internal ip
 addresses. Perhaps IPv6 makes that easy.

 It is not clear that anyone should bother with designs such as b) - that's
 bad practice anyway, I would guess.

We know that systems which base their security at least in part on
network topology (are you on a computer inside the DMZ?) are common
(because it's easy).

~TJ



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 20 Jul 2012, at 21:02, Tab Atkins Jr. wrote:

 On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net wrote:
 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile inquiry
 to find out how many systems there are for which this is a problem, if any.
 That is:
 
  a) systems that use non-standard internal ip addresses
  b) systems that use ip-address provenance for access control
  c) ? potentially other issues that we have not covered
 
 Systems with a) are going to be very rare, it seems to me, and the question
 would be whether they can't really move over to standard internal ip
 addresses. Perhaps IPv6 makes that easy.

 It is not clear that anyone should bother with designs such as b) - that's
 bad practice anyway, I would guess.
 
 We know that systems which base their security at least in part on
 network topology (are you on a computer inside the DMZ?) are common
 (because it's easy).

How many of those would use ip addresses that are not standard private ip
addresses? ( Because those that use standard private addresses would not be
affected. ) Of those that do not, would IPv6 offer them a scheme where they
could easily use standard private ip addresses?

 
 ~TJ

Social Web Architect
http://bblfish.net/




Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Ian Hickson
On Fri, 20 Jul 2012, Henry Story wrote:
 
 How many of those would use ip addresses that are not standard private
 ip addresses? (Because those that use standard private addresses would not
 be affected). Of those that do not, would IPv6 offer them a scheme where
 they could easily use standard private ip addresses?

I think you're missing the point, which is that Web browser implementors 
are not willing to risk breaking any such deployments, however convoluted 
that makes the resulting technology. If you want a technology to be 
implemented, you have to consider implementers' constraints as hard
constraints on your designs. In this case, the constraint is that they
will not implement anything that increases the potential attack surface
area, whether or not the potentially vulnerable deployed services are
designed sanely. Once you realise that this is a hard constraint,
questions such as yours above are obviously moot.

HTH,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Jonas Sicking
On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net wrote:

 On 20 Jul 2012, at 18:59, Adam Barth wrote:

 On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.

 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.

 Welcome to the web.  We support legacy systems.  If you don't want to
 support legacy systems, you might not enjoy working on improving the
 web platform.

 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile inquiry
 to find out how many systems there are for which this is a problem, if any.
 That is:

   a) systems that use non-standard internal ip addresses
   b) systems that use ip-address provenance for access control
   c) ? potentially other issues that we have not covered

One important group to consider is home routers. Routers are often
secured only by checking that requests are coming through an internal
connection, i.e. either through wifi or through the ethernet port. If
web pages can make arbitrary requests to such routers it would mean
that they can redirect traffic arbitrarily and transparently.

/ Jonas



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 21 Jul 2012, at 05:39, Jonas Sicking wrote:

 On Fri, Jul 20, 2012 at 11:58 AM, Henry Story henry.st...@bblfish.net wrote:
 
 On 20 Jul 2012, at 18:59, Adam Barth wrote:
 
 On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.
 
 That's why we have the current design.
 
 Yes, I note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is
 an unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding any ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediaries are something we will all have
 to live with.
 
 Welcome to the web.  We support legacy systems.  If you don't want to
 support legacy systems, you might not enjoy working on improving the
 web platform.
 
 Of course, but you seem to want to support hidden legacy systems, that is,
 systems none of us know about or can see. It is still a worthwhile inquiry
 to find out how many systems there are for which this is a problem, if any.
 That is:
 
  a) systems that use non-standard internal ip addresses
  b) systems that use ip-address provenance for access control
  c) ? potentially other issues that we have not covered
 
 One important group to consider is home routers. Routers are often
 secured only by checking that requests are coming through an internal
 connection, i.e. either through wifi or through the ethernet port. If
 web pages can make arbitrary requests to such routers it would mean
 that they can redirect traffic arbitrarily and transparently.

The proposal is that requests to machines on private ip-ranges - i.e. machines
on 192.168.x.x and 10.x.x.x addresses in IPv4, or in IPv6 coming from
the unique local unicast address space [1] - would still require the full CORS
handshake as described currently. The proposal only affects GET requests
requiring no authentication, to machines with public ip addresses: the
responses to these requests would be allowed through to a CORS javascript
request without requiring the server to add the Access-Control-Allow-Origin
header to its response. Furthermore it was added that the browser should
still add the Origin: header.

The argument is that machines on such public IP addresses that would 
respond to such GET requests would be accessible via the public internet 
and so would be in any case accessible via a CORS proxy.
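
To make the intended split concrete, here is a rough sketch of the kind of
address classification a browser would have to perform. This is illustrative
only: the helper name is mine, and it covers the 10.x.x.x and 192.168.x.x
ranges mentioned above plus the remaining RFC 1918 block (172.16/12) and the
IPv6 unique local space (fc00::/7).

    // Illustrative sketch: classify a literal IP address as "private"
    // (full CORS handshake stays required) or "public" (the relaxed
    // unauthenticated-GET rule could apply).
    function isPrivateAddress(ip) {
      if (ip.indexOf(':') !== -1) {                      // IPv6 literal
        var firstGroup = parseInt(ip.split(':')[0], 16);
        return (firstGroup & 0xfe00) === 0xfc00;         // fc00::/7, unique local
      }
      var o = ip.split('.').map(Number);                 // IPv4 dotted quad
      return o[0] === 10 ||                              // 10.0.0.0/8
             (o[0] === 192 && o[1] === 168) ||           // 192.168.0.0/16
             (o[0] === 172 && o[1] >= 16 && o[1] <= 31); // 172.16.0.0/12
    }

So a target such as 10.0.0.5 or fd00::1 would keep the full handshake, while
anything classified as public could fall under the relaxed GET rule.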

This proposal would clearly not affect home routers as currently deployed. The
dangerous access to those is always via the 192.168.x.x ip address range
( or the 10.x.x.x one ). If a router were insecure when reached via its public
name space and ip address, then it would simply be an insecure router.

I agree that there is some element of risk being taken in making this
decision; the above does not quite follow analytically from first principles.
It is possible that internal networks use public ip addresses for their own
machines - they would need to do this because the 10.x.x.x address space, or
its IPv6 equivalent, was too small. Doing this, they would make access to
public sites in those ip ranges impossible (since traffic would be redirected
to the internal machines). My guess is that networks with this type of setup
don't allow just anybody to open a connection on them. That at least seems
very likely to be so for IPv4. I am not sure what the situation with IPv6 is,
or what it should be ( I am reasoning by analogy there ). Machines on IPv6
addresses would be machines deployed by experienced people, who would probably
be able to change their software to respond differently to GET requests on
internal networks when the Origin: header's value was not an internal machine.

Henry

[1] http://www.simpledns.com/private-ipv6.aspx


 
 / Jonas

Social Web Architect
http://bblfish.net/




Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:
 And it is the experience of this being required that led me to build a CORS 
 proxy [1] - (I am not the first to write one, I add quickly)

Yes, the Origin and unauthenticated CORS restrictions are trivially
circumvented by a simple proxy.
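
For illustration, such a proxy takes only a few lines of Node.js. Everything
below is a sketch, not a description of Henry's actual proxy: the port, the
query parameter name and the absence of any allow-list are my own choices, and
it handles plain http URLs only.

    // Minimal anonymizing GET proxy: fetch the requested URL with no cookies
    // or auth, and return the body with a permissive CORS header added.
    var http = require('http');
    var url = require('url');

    http.createServer(function (clientReq, clientRes) {
      var target = url.parse(clientReq.url, true).query.uri;
      if (!target) { clientRes.writeHead(400); clientRes.end(); return; }
      http.get(target, function (upstream) {
        var headers = upstream.headers;
        headers['access-control-allow-origin'] = '*';
        clientRes.writeHead(upstream.statusCode, headers);
        upstream.pipe(clientRes);
      }).on('error', function () {
        clientRes.writeHead(502);
        clientRes.end();
      });
    }).listen(8080);

Any page can then read http://localhost:8080/?uri=http://example.org/data
cross-origin, because the proxy answers with a wildcard header - which is
exactly why the restriction merely inconveniences, rather than protects,
public unauthenticated resources.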


 So my argument is that this restriction could be lifted since

  1. GET is idempotent - and should not affect the resource fetched

HTTP method semantics are an obligation for conformance and not
guaranteed technically. Any method can be misused for any purpose
from a security point of view.

The people at risk from the different method semantics are those who
use them incorrectly, for example a bank which issues transactions
using GET over a URI:
http://dontbankonus.com/transfer?to=xyz&amount=100
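
To spell out why that is dangerous: the CORS rules do not protect such an
endpoint at all, because any page the user visits can already fire the request
with the user's ambient cookies attached (using the hypothetical URL above):

    // Hypothetical attack page: the browser sends the bank's cookies with
    // this GET even though the page can never read the response.
    var img = document.createElement('img');
    img.src = 'http://dontbankonus.com/transfer?to=xyz&amount=100';
    document.body.appendChild(img);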

  2. If there is no authentication, then the JS Agent could make the request 
 via a CORS praxy of its choosing, and so get the content of the resource 
 anyhow.

Yes, the restriction on performing an unauthenticated GET only serves
to promote the implementation of 3rd party proxy intermediaries and,
if they become established, will introduce new security issues by way
of indirection.

The pertinent question for cross-origin requests here is - who is
authoring the link and therefore in control of the request? The reason
that cross-origin js which executes 3rd party non-origin code within a
page is not a problem for web security is that the author of the page
must explicitly include such a link. The control is within the
author's domain to apply prudence on what they link to and include
from. Honorable sites with integrity seek to protect their integrity
by maintaining bona-fide links to trusted and reputable 3rd parties.

  3. One could still pass the Origin: header as a warning to sites who may be 
 tracking people in unusual ways.

This is what concerns people about implementing a proxy - essentially
you are circumventing a recommended security practice whereby sites
use this header as a means of attempting to protect themselves from
CSRF attacks. This is futile, and these sites would do better to
implement CSRF tokens, which is the method used by organizations that
must protect against online fraud with direct financial implications,
i.e. your bank.
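
A rough sketch of the token pattern meant here, framework-free and with
illustrative names only: the server ties a random token to the session, embeds
it in its own forms, and rejects state-changing requests that do not echo it
back - something a cross-site page cannot do, since it cannot read the token.

    // Illustrative only: issue a per-session token and verify it on writes.
    var crypto = require('crypto');

    function issueCsrfToken(session) {
      session.csrfToken = crypto.randomBytes(16).toString('hex');
      return session.csrfToken;        // embedded in the site's own forms
    }

    function isCsrfTokenValid(session, submittedToken) {
      return typeof submittedToken === 'string' &&
             submittedToken === session.csrfToken;
    }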

There are too many recommendations for protecting against CSRF and the
message is being lost. Conversely, the poor uptake of CORS is
because people do not understand it and are wary of implementing
anything which they regard as a potential risk if they get it
wrong.

   Lifting this restriction would make a lot of public data available on the 
 web for use by JS agents cleanly. Where requests require authentication or 
 are non-idempotent CORS makes a lot of sense, and those are areas where data 
 publishers would need to be aware of CORS anyway, and should implement it as 
 part of a security review. But for people publishing open data, CORS should 
 not be something they need to consider.


The restriction is in place because the default methods of cross-origin
requests prior to XHR applied HTTP auth and cookies without
restriction. If this were extended in the same manner to XHR, it would
allow any page to issue scripted authenticated requests to any site
you have visited within the lifetime of your browsing application
session. This would allow seemingly innocuous sites to do complex
multi-request CSRF attacks as background processes and against as many
targets as they can find while you're on the page.

The more sensible option is to make all XHR requests unauthenticated
unless explicitly scripted for such operation. A request to a public
IP address which carries no user-identifiable information is
completely harmless by definition.
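
For what it's worth, cross-origin XHR already behaves this way on the
credentials side: no cookies or HTTP auth are sent unless the script opts in.
A quick sketch (the URL is illustrative):

    // Cross-origin XHR is anonymous by default.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://data.example.org/public.json');
    // Credentials are only added with an explicit opt-in, and even then the
    // server must also answer with Access-Control-Allow-Credentials: true.
    // xhr.withCredentials = true;
    xhr.onload = function () {
      // Readable only if the server sent Access-Control-Allow-Origin -
      // which is the requirement being questioned in this thread.
      console.log(xhr.responseText);
    };
    xhr.send();

The open question in this thread is only whether the response to such an
anonymous GET should be readable without the server's opt-in header.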

On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson i...@hixie.ch wrote:
 No, such a proxy can't get to intranet pages.

 Authentication on the Internet can include many things, e.g. IP
 addresses or mere connectivity, that are not actually included in the body
 of an HTTP GET request. It's more than just cookies and HTTP auth headers.

The vulnerability of unsecured intranets can be eliminated by applying
the restriction to private IP ranges, which are the source of this
attack vector. It is unsound (and potentially legally disputable) for
public access resources to be restricted and for public access
providers to pay the costs for the protection of private resources. It
is the responsibility of the resource's owner to pay the costs of
enforcing their chosen security policies.

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Henry Story

On 19 Jul 2012, at 14:07, Cameron Jones wrote:

 On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:
 And it is the experience of this being required that led me to build a CORS 
 proxy [1] - (I am not the first to write one, I add quickly)
 
 Yes, the Origin and unauthenticated CORS restrictions are trivially
 circumvented by a simple proxy.
 
 
 So my argument is that this restriction could be lifted since
 
 1. GET is idempotent - and should not affect the resource fetched

I have to correct myself here: GET and HEAD are nullipotent (they have no
side effects, and the result is the same whether they are executed 0 or more
times), whereas PUT and DELETE (along with GET and HEAD) are idempotent (they
have the same result when executed 1 or more times).

 
 HTTP method semantics are an obligation for conformance and not
 guaranteed technically. Any method can be misused for any purpose
 from a security point of view.
 
 The people at risk from the different method semantics are those who
 use them incorrectly, for example a bank which issues transactions
 using GET over a URI:
 http://dontbankonus.com/transfer?to=xyz&amount=100

Yes, that is of course their problem, and one should not design to help people
who do silly things like that.

 
 2. If there is no authentication, then the JS Agent could make the request 
 via a CORS proxy of its choosing, and so get the content of the resource 
 anyhow.
 
 Yes, the restriction on performing an unauthenticated GET only serves
 to promote the implementation of 3rd party proxy intermediaries and,
 if they become established, will introduce new security issues by way
 of indirection.
 
 The pertinent question for cross-origin requests here is - who is
 authoring the link and therefore in control of the request? The reason
 that cross-origin js which executes 3rd party non-origin code within a
 page is not a problem for web security is that the author of the page
 must explicitly include such a link. The control is within the
 author's domain to apply prudence on what they link to and include
 from. Honorable sites with integrity seek to protect their integrity
 by maintaining bona-fide links to trusted and reputable 3rd parties.

Yes, though in the case of a JS-based linked data application, like the
semi-functioning one I wrote and described earlier
  http://bblfish.github.com/rdflib.js/example/people/social_book.html
( not all links work, but you can click on Tim Berners-Lee, and a few others ),
the original javascript is not fetching more javascript, but fetching more data
from the web.
Still, your point remains valid. That address book needs to find ways to help
show who says what, and of course not just upload any JS it finds on the web, or
else its reputation will suffer. My CORS proxy
only uploads RDFizable data.

 
 3. One could still pass the Origin: header as a warning to sites who may be 
 tracking people in unusual ways.
 
 This is what concerns people about implementing a proxy - essentially
 you are circumventing a recommended security practice whereby sites
 use this header as a means of attempting to protect themselves from
 CSRF attacks. This is futile, and these sites would do better to
 implement CSRF tokens, which is the method used by organizations that
 must protect against online fraud with direct financial implications,
 i.e. your bank.

I was suggesting the browser still pass the Origin: header even on a
request to a non-authenticated page, for informational reasons.

 
 There are too many recommendations for protecting against CSRF and the
 message is being lost. Conversely, the poor uptake of CORS is
 because people do not understand it and are wary of implementing
 anything which they regard as a potential risk if they get it
 wrong.
 
  Lifting this restriction would make a lot of public data available on the 
 web for use by JS agents cleanly. Where requests require authentication or 
 are non-nullipotent CORS makes a lot of sense, and those are areas where 
 data publishers would need to be aware of CORS anyway, and should implement 
 it as part of a security review. But for people publishing open data, CORS 
 should not be something they need to consider.
 
 
 The restriction is in place because the default methods of cross-origin
 requests prior to XHR applied HTTP auth and cookies without
 restriction. If this were extended in the same manner to XHR, it would
 allow any page to issue scripted authenticated requests to any site
 you have visited within the lifetime of your browsing application
 session. This would allow seemingly innocuous sites to do complex
 multi-request CSRF attacks as background processes and against as many
 targets as they can find while you're on the page.

Indeed. Hence my suggestion that this restriction only be lifted for
nullipotent and non-authenticated requests.

 The more sensible option is to make all XHR requests unauthenticated
 unless explicitly scripted for such operation. A 

Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
 On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:

 2. If there is no authentication, then the JS Agent could make the request 
 via a CORS proxy of its choosing, and so get the content of the resource 
 anyhow.

 Yes, the restriction on performing an unauthenticated GET only serves
 to promote the implementation of 3rd party proxy intermediaries and,
 if they become established, will introduce new security issues by way
 of indirection.

 The pertinent question for cross-origin requests here is - who is
 authoring the link and therefore in control of the request? The reason
 that cross-origin js which executes 3rd party non-origin code within a
 page is not a problem for web security is that the author of the page
 must explicitly include such a link. The control is within the
 author's domain to apply prudence on what they link to and include
 from. Honorable sites with integrity seek to protect their integrity
 by maintaining bona-fide links to trusted and reputable 3rd parties.

 yes, though in the case of a JS based linked data application, like the 
 semi-functioning one I wrote and described earlier
   http://bblfish.github.com/rdflib.js/example/people/social_book.html
 ( not all links work, you can click on Tim Berners Lee, and a few others )
 the original javascript is not fetching more javascript, but fetching more 
 data from the web.
 Still your point remains valid. That address book needs to find ways to help 
 show who says what, and of course not just upload any JS it finds on the web 
 or else its reputation will suffer. My CORS proxy
 only uploads RDFizable data.


Yes, I think you have run into a fundamental problem which must be
addressed in order for linked data to exist. Dismissal of early
implementation experience is unhelpful at best.

I find myself in a similar situation whereby I have to write, maintain
and pay for the bandwidth of providing an intermediary proxy just to
service public requests. This has real financial consequences and is
unacceptable when there is no technical grounding for the
restrictions. As stated before, it could even be regarded as a form
of censorship of freedom of expression, for both the author publishing
their work freely and the consumer expressing new ideas.


 On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson i...@hixie.ch wrote:
 No, such a proxy can't get to intranet pages.

 Authentication on the Internet can include many things, e.g. IP
 addresses or mere connectivity, that are not actually included in the body
 of an HTTP GET request. It's more than just cookies and HTTP auth headers.

 The vulnerability of unsecured intranets can be eliminated by applying
 the restriction to private IP ranges, which are the source of this
 attack vector. It is unsound (and potentially legally disputable) for
 public access resources to be restricted and for public access
 providers to pay the costs for the protection of private resources. It
 is the responsibility of the resource's owner to pay the costs of
 enforcing their chosen security policies.

 Thanks a lot for this suggestion. Ian Hickson's argument had convinced me, 
 but you have just provided a clean answer to it.

 If a mechanism can be found to apply restrictions for private IP ranges then 
 that should be used in preference to forcing the rest of the web to implement 
 CORS restrictions on public data. And indeed servers behind firewalls use
 private ip ranges, which do in fact make a good distinguisher between public
 and non-public space.

 So the proposal is still alive it seems :-)


+1

I have complete support for the proposal.



 Social Web Architect
 http://bblfish.net/


Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Anne van Kesteren
On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges then 
 that
 should be used in preference to forcing the rest of the web to implement CORS
 restrictions on public data. And indeed servers behind firewalls use private
 ip ranges, which do in fact make a good distinguisher between public and
 non-public space.

It's not just private servers (there's no guarantee those only use
private IP ranges either). It's also IP-based authentication to
private resources as e.g. W3C has used for some time.


-- 
http://annevankesteren.nl/



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 2:54 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges then 
 that
 should be used in preference to forcing the rest of the web to implement CORS
 restrictions on public data. And indeed servers behind firewalls use private
 ip ranges, which do in fact make a good distinguisher between public and
 non-public space.

 It's not just private servers (there's no guarantee those only use
 private IP ranges either). It's also IP-based authentication to
 private resources as e.g. W3C has used for some time.



Isn't this mitigated by the Origin header?

Also, what about the point that this is unethically pushing the costs
of securing private resources onto public access providers?

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Anne van Kesteren
On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

No.


 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

It is far more unethical to expose a user's private data.


-- 
http://annevankesteren.nl/



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 3:06 PM, Eric Rescorla e...@rtfm.com wrote:
 On Thu, Jul 19, 2012 at 6:54 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges 
 then that
 should be used in preference to forcing the rest of the web to implement 
 CORS
 restrictions on public data. And indeed servers behind firewalls use private
 ip ranges, which do in fact make a good distinguisher between public and
 non-public space.

 It's not just private servers (there's no guarantee those only use
 private IP ranges either). It's also IP-based authentication to
 private resources as e.g. W3C has used for some time.

 Moreover, some companies have public IP ranges that are
 firewall blocked. It's not in general possible for the browser
 to distinguish publicly accessible IP addresses from non-publicly
 accessible IP addresses.

Yes, it is impossible for a browser to detect intranet configurations.

The problem I have is that public providers are being forced to
change their configurations rather than internal company networks changing
theirs.

Company IT departments have far more technical skills, and the ability
to perform the changes, than public publishers, who may not even be
able to add CORS headers if they wanted to.


 More generally, CORS is designed to replicate the restrictions that non-CORS
 already imposes on browsers. Currently, browsers prevent JS from obtaining
 the result of this kind of cross-origin GET, thus CORS retains this 
 restriction.
 This is consistent with the general policy of not adding new features to
 browsers that would break people's existing security models, no matter
 how broken one might regard those models as being.


Aside from the intranet public IP concern, isn't this due to the
ambient authority applied to cross-origin GET requests? This turns
otherwise public information into a potentially private resource.

Removing all user-identifiable information from a request would
remove the need for this restriction and not break anyone's
security policy (leaving aside the public-IP-behind-a-firewall
scenario).


 I believe the WG already has consensus on this point.

 -Ekr

Thank you for the response in light of the existing consensus.
Having potential new information addressed, even if it turns out not to be
new, aids understanding and can assist in further adoption and advocacy.

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.



Could you expand on this response, please?

My understanding is that requests generated from XHR will have Origin
applied. This can be used to reject requests from 3rd party websites
within browsers. Therefore, intranets have the potential to restrict
access arising from internal users' browsing habits.
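
A sketch of the kind of Origin check meant here (Node.js; the allow-list,
hostname and port are illustrative) - whether deployed intranet services
actually do this is of course the point in dispute:

    // Illustrative only: refuse browser-issued cross-site requests by Origin.
    var http = require('http');
    var allowedOrigins = ['http://intranet.example.corp'];

    http.createServer(function (req, res) {
      var origin = req.headers.origin;   // may be absent for ordinary navigation
      if (origin && allowedOrigins.indexOf(origin) === -1) {
        res.writeHead(403, { 'Content-Type': 'text/plain' });
        res.end('Cross-site request refused');
        return;
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('internal data');
    }).listen(8081);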


 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.



Yes, but if no user private data is being exposed then there is a cost
being paid for no benefit.

 --
 http://annevankesteren.nl/

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-17 Thread Ian Hickson
On Wed, 18 Jul 2012, Henry Story wrote:
 
 So my argument is that this restriction could be lifted since 
 
  1. GET is idempotent - and should not affect the resource fetched

  2. If there is no authentication, then the JS Agent could make the 
 request via a CORS proxy of its choosing, and so get the content of the 
 resource anyhow.

No, such a proxy can't get to intranet pages.

Authentication on the Internet can include many things, e.g. IP 
addresses or mere connectivity, that are not actually included in the body 
of an HTTP GET request. It's more than just cookies and HTTP auth headers.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-17 Thread Henry Story

On 18 Jul 2012, at 05:47, Ian Hickson wrote:

 On Wed, 18 Jul 2012, Henry Story wrote:
 
 So my argument is that this restriction could be lifted since 
 
 1. GET is idempotent - and should not affect the resource fetched
 
 2. If there is no authentication, then the JS Agent could make the 
 request via a CORS proxy of its choosing, and so get the content of the 
 resource anyhow.
 
 No, such a proxy can't get to intranet pages.
 
 Authentication on the Internet can include many things, e.g. IP 
 addresses or mere connectivity, that are not actually included in the body 
 of an HTTP GET request. It's more than just cookies and HTTP auth headers.

Ah yes, quite right.  Tricky space...

Perhaps my question can be useful for your CORS design-decisions FAQ.

Thanks,

Henry


 
 -- 
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Social Web Architect
http://bblfish.net/