Re: CORS performance

2015-02-17 Thread Devdatta Akhawe
+1 to Anne's suggestion. The current design is pretty terrible for API
performance. I think an OPTIONS request to / (or something similar), with a
response that requires some server-side logic (like returning the random
number the UA just sent), is pretty darn secure.
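
A rough sketch of the kind of handshake I have in mind (Node-style JavaScript;
the challenge header names are made up for illustration, not from any spec):

var http = require("http");

http.createServer(function (req, res) {
  if (req.method === "OPTIONS" && req.url === "/") {
    // Echo back the browser-supplied random number; a static file server or
    // a misconfigured proxy could not produce this response by accident.
    var challenge = req.headers["x-cors-challenge"] || "";
    res.writeHead(200, {
      "X-CORS-Challenge-Echo": challenge,
      "Access-Control-Allow-Origin": "*"
    });
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);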

cheers
dev


On 17 February 2015 at 11:24, Anne van Kesteren ann...@annevk.nl wrote:
 On Tue, Feb 17, 2015 at 8:18 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 Individual resources should not be able to declare policy for the whole
 server, ...

 With HSTS we gave up on that.


 HTTP/1.1 rather has `OPTIONS *` for that, which would require a
 new kind of preflight request. And if the whole server is fine with
 cross-origin requests, I am not sure there is much of a point trying to
 lock it down by restricting request headers or methods.

 Yeah, I wasn't sure whether those should all be listed. Maybe simply
 declaring you're fluent in CORS in a unique way is sufficient.


 --
 https://annevankesteren.nl/




Re: Security use cases for packaging

2015-01-29 Thread Devdatta Akhawe
 Maybe the code from the downloaded package has to be run from a local origin 
 like chrome://*.

Doesn't the same issue that Chris raised still exist? You need a unit
of isolation that says "only code signed with this public key runs in
this isolation compartment". Chrome extensions have that model.
Whether we achieve this via origins, COWL labels, or origin+key as the
identifier is a separate question, but Chris' high-level point remains true.

cheers
dev



Re: A URL API

2010-09-24 Thread Devdatta Akhawe
 If you really don't want to care what happened before, either do a
 clearParameter every time first, or define your own setParameter that
 just clears then appends.  Append/clear is a cleaner API design in
 general imo, precisely because you don't have to worry about colliding
 with previous activity by default.  A set/clear pair means that you
 have to explicitly check for existing data and handle it in a way that
 isn't completely trivial.

I am not saying remove append - I am saying we should also have set,
with the semantics that using set is equivalent to clear followed by append.
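
A minimal sketch of what I mean (assuming the proposed URL object with the
appendParameter/clearParameter methods from this thread; setParameter here is
the addition I am arguing for):

function setParameter(url, name, value) {
  url.clearParameter(name);           // drop whatever was there before
  url.appendParameter(name, value);   // then behave exactly like append
}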

 Attempting to relegate same-name params to second-tier status isn't a
 good idea.  It's very useful for far more than the old services that
 are also accessed via basic HTML forms that you stated earlier.


I am not sure about that - I think a modern API design would be to
just send multiple values as an array (maybe CSV or TSV). Consider how
JSON values are encoded - you don't repeat a key to denote an array,
and neither is this the case in XML (afaik). This semantics of
multiple yet different values for the same parameter is just
confusing and, as you said, a mess on the server side. I am less
optimistic than you that it will be fixed.
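
For example (illustrative only; the URL and parameter name are made up), a
new-style service could take one parameter whose value is a JSON-encoded
array instead of repeating the parameter name:

var url = "https://example.com/search?ids=" +
          encodeURIComponent(JSON.stringify(["a", "b", "c"]));
// versus the repeated-name form: https://example.com/search?ids=a&ids=b&ids=c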

cheers
devdatta


 ~TJ




Re: A URL API

2010-09-22 Thread Devdatta Akhawe
 2) I've added two flavors of appendParameter.  The first flavor takes
 a DOMString for a value and appends a single parameter.  The second
 flavor takes an array of DOMStrings and appends one parameter for each
 element of the array.  This seemed better than using a variable number of arguments.

-1

I really want the setParameter method - appendParameter now requires
the developer to know what someone might have done in the past with
the URL object. This can be a cause of trouble, as the web application
might do something that the developer doesn't expect, so I
specifically want the developer to opt in to using appendParameter.

I know clearParameter is a method - but this is not the clear
separation between the two APIs that we talked about earlier in the
thread.

I remember reading about how some web application frameworks combine
?q=a&q=b into q=ab on the server side, whereas some will only consider
q=a and some will only consider q=b. This is such a mess - the
developer should have to specifically opt in to this.
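
A hypothetical example with the draft API (the URL constructor and the query
string are assumed here purely for illustration):

var url = new URL("https://example.com/search?q=a");
url.appendParameter("q", "b");
// The URL now serializes with ?q=a&q=b, which different server stacks will
// read back as "a", "b", or "ab".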

cheers
devdatta


 3) I've added a clearParameter method.

 Defining these methods required some low-level URL manipulation that's
 not actually defined anywhere (AFAIK), so I've added a reference to my
 work-in-progress draft about parsing and canonicalizing URLs.

 Adam


 On Tue, Sep 21, 2010 at 3:40 PM, Ojan Vafai o...@chromium.org wrote:
 appendParameter/clearParameter seems fine to me.
 On Wed, Sep 22, 2010 at 2:53 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

 On Mon, Sep 20, 2010 at 11:56 PM, Adam Barth w...@adambarth.com wrote:
  Ok.  I'm sold on having an API for constructing query parameters.
  Thoughts on what it should look like?  Here's what jQuery does:
 
  http://api.jquery.com/jQuery.get/
 
  Essentially, you supply a JSON object containing the parameters.  They
  also have some magical syntax for specifying multiple instances of the
  same parameter name.  I like the ease of supplying a JSON object, but
  I'm not in love with the magical syntax.  An alternative is to use two
  APIs, like we currently have for reading the parameter values.

 jQuery's syntax isn't magical - the example they give using the query
 param name of 'choices[]' is doing that because PHP requires a [] at
 the end of the query param name to signal it that you want multiple
 values.  It's opaque, though - you could just as easily have left off
 the '[]' and it would have worked the same.

 The switch is just whether you pass an array or a string (maybe they
 support numbers too?).

 I recommend the method be called append*, so you can use it both for
 first sets and later additions (this is particularly useful if you're
 just looping through some data).  This obviously would then need a
 clear functionality as well.

 ~TJ







Re: A URL API

2010-09-21 Thread Devdatta Akhawe
or any web service that likes to have lots of query parameters - Google
Search, for example.

In general, why would you not want a robust way to make complicated
queries? Those who are making simple queries and prefer simple
one-liners can continue using them.


On 20 September 2010 23:42, Darin Fisher da...@chromium.org wrote:
 On Mon, Sep 20, 2010 at 11:02 AM, Garrett Smith dhtmlkitc...@gmail.com
 wrote:

 On 9/20/10, Julian Reschke julian.resc...@gmx.de wrote:
  On 20.09.2010 18:56, Garrett Smith wrote:
 [...]
  Requests that don't have a lot of parameters are often simple one-liners:
 
  url = "/getShipping/?zip=" + zip + "&pid=" + pid;
 
  That's exactly the kind of code that will fail once pid and zip
  contain things you don't expect.
 
  What XHRs have complicated URLs with a lot of query parameters?
 
  What XHRs?
 
 IOW, what are the cases where an XHR instance wants to use a lot of query
 params?


 Probably when speaking to an HTTP server designed to take input from an HTML
 form.
 -Darin




Re: A URL API

2010-09-21 Thread Devdatta Akhawe
+1 for 2 APIs - this whole multiple-parameters-with-the-same-name business is
too annoying imho and unnecessary for new web services. It should be
there only for old services that are also accessed via basic HTML
forms.

cheers
devdatta

On 20 September 2010 23:56, Adam Barth w...@adambarth.com wrote:
 Ok.  I'm sold on having an API for constructing query parameters.
 Thoughts on what it should look like?  Here's what jQuery does:

 http://api.jquery.com/jQuery.get/

 Essentially, you supply a JSON object containing the parameters.  They
 also have some magical syntax for specifying multiple instances of the
 same parameter name.  I like the ease of supplying a JSON object, but
 I'm not in love with the magical syntax.  An alternative is to use two
 APIs, like we currently have for reading the parameter values.

 Adam


 On Mon, Sep 20, 2010 at 11:47 PM, Devdatta Akhawe dev.akh...@gmail.com 
 wrote:
 or any web service that likes to have lots of query parameters - Google
 Search, for example.

 In general, why would you not want a robust way to make complicated
 queries? Those who are making simple queries and prefer simple
 one-liners can continue using them.


 On 20 September 2010 23:42, Darin Fisher da...@chromium.org wrote:
 On Mon, Sep 20, 2010 at 11:02 AM, Garrett Smith dhtmlkitc...@gmail.com
 wrote:

 On 9/20/10, Julian Reschke julian.resc...@gmx.de wrote:
  On 20.09.2010 18:56, Garrett Smith wrote:
 [...]
  Requests that don't have a lot of parameters are often simple one-liners:
 
  url = "/getShipping/?zip=" + zip + "&pid=" + pid;
 
  That's exactly the kind of code that will fail once pid and zip
  contain things you don't expect.
 
  What XHRs have complicated URLs with a lot of query parameters?
 
  What XHRs?
 
 IOW, what are the cases where an XHR instance wants to use a lot of query
 params?


 Probably when speaking to an HTTP server designed to take input from an HTML
 form.
 -Darin






Re: A URL API

2010-09-21 Thread Devdatta Akhawe
On 21 September 2010 00:47, Ojan Vafai o...@chromium.org wrote:
 How about setParameter(name, value...) that takes var_args number of values?
 Alternately, it could take either a DOMString or an Array<DOMString> for the
 value. I prefer the var_args.

What happens when I do
setParameter('x','a','b','c');

and now want to add another value - I will have to do weird things like
getting the current values via getAllParametersByName, appending to the
array, and calling setParameter via apply or something like that.

That doesn't look very nice to me - I like the separation into two
APIs because I think it makes the common case of single-valued
parameters clean and robust.
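
Something like the following sketch (method names from this thread; the url
object and its existing values are assumed for illustration):

var values = url.getAllParametersByName("x");       // ["a", "b", "c"]
values.push("d");
url.setParameter.apply(url, ["x"].concat(values));  // setParameter("x", "a", "b", "c", "d")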


cheers
devdatta


 Also, getParameterByName and getAllParametersByName seem unnecessarily
 wordy. How about getParameter/getParameterAll to match
 querySelector/querySelectorAll? Putting All at the end is admittedly
 awkward, but this is the uncommon case, so I'm OK with it for making the
 common case less wordy.
 Ojan
 On Tue, Sep 21, 2010 at 4:56 PM, Adam Barth w...@adambarth.com wrote:

 Ok.  I'm sold on having an API for constructing query parameters.
 Thoughts on what it should look like?  Here's what jQuery does:

 http://api.jquery.com/jQuery.get/

 Essentially, you supply a JSON object containing the parameters.  They
 also have some magical syntax for specifying multiple instances of the
  same parameter name.  I like the ease of supplying a JSON object, but
  I'm not in love with the magical syntax.  An alternative is to use two
  APIs, like we currently have for reading the parameter values.

 Adam


 On Mon, Sep 20, 2010 at 11:47 PM, Devdatta Akhawe dev.akh...@gmail.com
 wrote:
   or any web service that likes to have lots of query parameters - Google
   Search, for example.
  
   In general, why would you not want a robust way to make complicated
   queries? Those who are making simple queries and prefer simple
   one-liners can continue using them.
 
 
  On 20 September 2010 23:42, Darin Fisher da...@chromium.org wrote:
  On Mon, Sep 20, 2010 at 11:02 AM, Garrett Smith
  dhtmlkitc...@gmail.com
  wrote:
 
  On 9/20/10, Julian Reschke julian.resc...@gmx.de wrote:
   On 20.09.2010 18:56, Garrett Smith wrote:
  [...]
   Requests that don't have a lot of parameters are often simple
   one-liners:
  
   url = "/getShipping/?zip=" + zip + "&pid=" + pid;
  
   That's exactly the kind of code that will fail once pid and zip
   contain things you don't expect.
  
   What XHRs have complicated URLs with a lot of query parameters?
  
   What XHRs?
  
  IOW, what are the cases where an XHR instance wants to use a lot of
  query
  params?
 
 
  Probably when speaking to an HTTP server designed to take input from an
  HTML
  form.
  -Darin
 
 






Re: A URL API

2010-09-21 Thread Devdatta Akhawe

 Perhaps appendParameter(x, a, b, c) ?


where appendParameter is the second API - separate from setParameter?

so appendParameter('x','a','b','c'); setParameter('x','a')
would result in ?x=a

and without the second function call it would be
?x=a&x=b&x=c

I am fine with this.

cheers
devdatta

 Adam




Re: A URL API

2010-09-19 Thread Devdatta Akhawe
hi

Is the word 'hash' for fragment identifiers common? I personally
prefer the attribute being called 'fragment' or 'fragmentID' over
'hash' - 'fragment' is the standard term afaik in all the RFCs.

regards
devdatta

On 19 September 2010 15:47, João Eiras joao.ei...@gmail.com wrote:

 That would be different behavior than what Location and HTMLAnchorElement
 do; they unescape various components. Is the benefit worth the divergence?

 As a side note, an out-of-document HTMLAnchorElement already provides most
 of the functionality of this interface. Things it can't do:
 - Resolve a relative URL against no base at all (probably not very useful
 since the interface can only represent an absolute URL).
 - Resolve against an arbitrary base (maybe you could do it awkwardly using
 base tag tricks).
 - Read or write the lastPathComponent or origin without further parsing
 (should origin really be writable? That's kind of weird...)
 - Read search parameters conveniently without parsing.

 It might be nice to provide the parts of this that make sense on
 HTMLAnchorElement and Location, then see if a new interface really pulls its
 weight.


 I like the idea too. I would rather extend Location to include these features, so
 they would be immediately available in links and the location object. And
 make Location into a constructor: new Location(url, base).





Re: A URL API

2010-09-19 Thread Devdatta Akhawe

 1) There are now two methods for getting at the URL parameters.  The

and none for setting them?


cheers
devdatta


 2) The origin attribute is now readonly.  Once I wired up the origin
 attribute to the actual definition of how to compute the origin of a
 URL, it became obvious that we don't want to support assigning to the
 attribute.  In particular, it doesn't seem particularly meaningful to
 assign the string "null" to the attribute even though that's a
 perfectly reasonable value for the attribute to return.

 3) I've added definitions for what the interface actually does.

 In response to folks who think we should add these APIs to
 HTMLAnchorElement and Location, I agree.  Currently, the draft is
 written to refer to HTML5 for the definitions of the common elements,
 but we could easily reverse that dependency or incorporate this API
 into HTML5.

 Adam





Re: A URL API

2010-09-17 Thread Devdatta Akhawe
hi

 You mean you didn't mention that I drafted a much better one over two
 years ago?


Garrett: could you send a link to your ES4 draft/proposal? My simple
Google skills couldn't find it.

thanks
devdatta


 And you felt this API was worth mentioning?

 My criticism is spot-on and appropriate. Cursory, dismissive and
 thoughtless replies are as inappropriate and counterproductive as
 flippant messages I'm seeing in my inbox. And I'm not about to be
 pigeonholed or scapegoated into being the bad guy here. Make sense?

 That said, Garrett's right.

 My arguments are supported by the reasons that I provided; nothing
 more, nothing less.

 Garrett





[cors] Protecting benign but buggy client side code

2010-08-20 Thread Devdatta Akhawe
Hi

The CORS specification in its current form seems to be very concerned
about increasing the attack surface of benign servers (the preflight
request etc. concern). Seeing [1], I am concerned about the other case
- benign clients and malicious cross-origin servers.

For the tl;dr crowd - my (possibly wrong) summary of the attack:
facebook.com loads content using the stuff after a '#' in a URL, so
facebook.com/#profile.php loads content from facebook.com/profile.php
using XHR.
A URL like facebook.com/#evil.com/evil.php, with evil.com configured
with Access-Control-Allow-Origin: *, could result in HTML injection.

It seems that here facebook is a benign server that at some point in
the past assumed that XHR can only be same-origin, and with the
introduction of cross-origin XHR it is suddenly vulnerable to XSS. In
general, a client needs to 'add' stuff to its JS to stay safe after the
introduction of cross-origin XHR. This isn't ideal.
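
A sketch of the vulnerable client-side pattern being described (the element
id and the exact attack string are illustrative, not taken from [1]):

// The page trusts whatever follows '#' as a path to fetch and inject.
var path = window.location.hash.substring(1); // e.g. "profile.php", or an
                                              // attacker-controlled URL
var req = new XMLHttpRequest();
req.open("GET", path, true);
req.onload = function () {
  // Once cross-origin XHR exists and the attacker's server sends
  // Access-Control-Allow-Origin: *, this injects attacker-controlled HTML.
  document.getElementById("content").innerHTML = req.responseText;
};
req.send();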


Regards
devdatta

[1] http://m-austin.com/blog/?p=19



Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?

2010-07-07 Thread Devdatta Akhawe
 Because it's undesirable to prevent the browser from sending cookies on an
 img request,

Why? I can understand why you can't do it today - but why is this
undesirable even for new applications? Ad tracking?

~devdatta

On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote:


 On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote:

 On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote:
 [...]

 That's unfortunate-- at least for now, that prevents servers from echoing
 the origin in the Access-Control-Allow-Origin header, so servers cannot host
 public images that don't taint canvases.  The same problem likely exists
 for other types of requests that might adopt CORS, like fonts, etc.

 Why would public images or fonts need credentials?

 Because it's undesirable to prevent the browser from sending cookies on an
 img request, and the user might have cookies for the image's site.  It's
 typical for the browser to send cookies on such requests, and those are
 considered a type of credentials by CORS.
 Charlie




 I believe the plan is to change HTML5 once CORS is somewhat more stable
 and use it for various pieces of infrastructure there. At that point we can
 change img to transmit an Origin header with an origin. We could also
 decide to change CORS and allow the combination of * and the credentials
 flag being true. I think * is not too different from echoing back the value
 of a header.


 I would second the proposal to allow * with credentials.  It seems
 roughly equivalent to echoing back the Origin header, and it would allow
 CORS to work on images and other types of requests without changes to HTML5.
 Thanks,
 Charlie



 --
     Cheers,
     --MarkM





Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?

2010-07-07 Thread Devdatta Akhawe
hmm, I think I quoted the wrong part of your email. I wanted to ask
why it would be undesirable to make CORS GET requests cookie-less. It
seems the argument here is reducing implementation work. Is that the
only one? Note that even AnonXMLHttpRequest intends to make GET
requests cookie-less.

Regards
devdatta



 I meant undesirable in that it will require much deeper changes to
 browsers.
 I wouldn't mind making it possible to request an image or other subresource
 without cookies, but I don't think there's currently a mechanism for that,
 is there?  And if there's consensus that user agents shouldn't send cookies
 at all on third party subresources, I'm ok with that, but I imagine there
 would be pushback on that sort of proposal-- it would likely affect
 compatibility with existing web sites.  I haven't gathered any data on it,
 though.
 The benefit to allowing * with credentials is that it lets CORS work with
 the existing browser request logic for images and other subresources, where
 cookies are currently sent with the request.
 Charlie


 On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote:
 
 
  On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com
  wrote:
 
  On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org
  wrote:
  [...]
 
  That's unfortunate-- at least for now, that prevents servers from
  echoing
  the origin in the Access-Control-Allow-Origin header, so servers
  cannot host
  public images that don't taint canvases.  The same problem likely
  exists
  for other types of requests that might adopt CORS, like fonts, etc.
 
  Why would public images or fonts need credentials?
 
  Because it's undesirable to prevent the browser from sending cookies on
  an
  img request, and the user might have cookies for the image's site.
   It's
  typical for the browser to send cookies on such requests, and those are
  considered a type of credentials by CORS.
  Charlie
 
 
 
 
  I believe the plan is to change HTML5 once CORS is somewhat more
  stable
  and use it for various pieces of infrastructure there. At that point
  we can
  change img to transmit an Origin header with an origin. We could
  also
  decide to change CORS and allow the combination of * and the
  credentials
  flag being true. I think * is not too different from echoing back the
  value
  of a header.
 
 
  I would second the proposal to allow * with credentials.  It seems
  roughly equivalent to echoing back the Origin header, and it would
  allow
  CORS to work on images and other types of requests without changes to
  HTML5.
  Thanks,
  Charlie
 
 
 
  --
      Cheers,
      --MarkM
 
 





Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?

2010-07-07 Thread Devdatta Akhawe
 It's not just implementation effort-- as I mentioned, it's potentially a
 compatibility question.  If you are proposing not sending cookies on any
 cross-origin images (or other potential candidates for CORS), do you have
 any data about which sites that might affect?

It's not clear to me how it would affect sites. It would be like the
user having cleared his cookies and made the request.

regards
devdatta


 Personally, I would love to see cross-origin subresource requests change to
 not using cookies, but that could break existing web sites that include
 subresources from partner sites, etc.  Is there a proposal or discussion
 about this somewhere?
 In the mean time, the canvas tainting example in the spec seems difficult to
 achieve.
 Charlie


 On Wed, Jul 7, 2010 at 5:05 PM, Devdatta Akhawe dev.akh...@gmail.com
 wrote:

 hmm, I think I quoted the wrong part of your email. I wanted to ask
 why would it be undesirable to make CORS GET requests cookie-less. It
 seems the argument here is reduction of implementation work. Is this
 the only one? Note that even AnonXmlHttpRequest intends to make GET
 requests cookie-less.

 Regards
 devdatta


 
  I meant undesirable in that it will require much deeper changes to
  browsers.
  I wouldn't mind making it possible to request an image or other
  subresource
  without cookies, but I don't think there's currently a mechanism for
  that,
  is there?  And if there's consensus that user agents shouldn't send
  cookies
  at all on third party subresources, I'm ok with that, but I imagine
  there
  would be pushback on that sort of proposal-- it would likely affect
  compatibility with existing web sites.  I haven't gathered any data on
  it,
  though.
  The benefit to allowing * with credentials is that it lets CORS work
  with
  the existing browser request logic for images and other subresources,
  where
  cookies are currently sent with the request.
  Charlie
 
 
  On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote:
  
  
   On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com
   wrote:
  
   On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org
   wrote:
   [...]
  
   That's unfortunate-- at least for now, that prevents servers from
   echoing
   the origin in the Access-Control-Allow-Origin header, so servers
   cannot host
   public images that don't taint canvases.  The same problem likely
   exists
   for other types of requests that might adopt CORS, like fonts, etc.
  
   Why would public images or fonts need credentials?
  
   Because it's undesirable to prevent the browser from sending cookies
   on
   an
   img request, and the user might have cookies for the image's site.
    It's
   typical for the browser to send cookies on such requests, and those
   are
   considered a type of credentials by CORS.
   Charlie
  
  
  
  
   I believe the plan is to change HTML5 once CORS is somewhat more
   stable
   and use it for various pieces of infrastructure there. At that
   point
   we can
   change img to transmit an Origin header with an origin. We could
   also
   decide to change CORS and allow the combination of * and the
   credentials
   flag being true. I think * is not too different from echoing back
   the
   value
   of a header.
  
  
   I would second the proposal to allow * with credentials.  It seems
   roughly equivalent to echoing back the Origin header, and it would
   allow
   CORS to work on images and other types of requests without changes
   to
   HTML5.
   Thanks,
   Charlie
  
  
  
   --
       Cheers,
       --MarkM
  
  
 
 





Re: CORS Header Filtering?

2010-05-12 Thread Devdatta
IIRC HTTP-WG has asked this WG to change this behavior from a
whitelist to a blacklist. There was a huge discussion about this a
while back -- maybe this could be an example of why CORS should follow
the HTTP-WG's recommendations.

-devdatta

On 12 May 2010 11:50, Nathan nat...@webr3.org wrote:
 All,

 Serious concern this time, I've just noted that as per 6.1 Cross-Origin
 Request of the CORS spec, User Agents must strip all response headers other
 than:

 * Cache-Control
 * Content-Language
 * Content-Type
 * Expires
 * Last-Modified
 * Pragma

 This simply can't be, many other headers are needed

 Link header is going to be heavily used (notably for Web Access Control!)

 Allow is needed when there's a 405 response (use GET instead of POST)

 Content-Location is needed to be able to show the user the real URI and
 provide it for subsequent requests and bookmarks

 Location is needed when a new resource has been created via POST (where a
 redirect wouldn't happen).

 Retry-After & Warning are needed for rather obvious reasons.

 There are non-RFC2616 headers on which functionality is often dependent (DAV
 headers for instance) - SPARQL Update also exposes via the MS-Author-via
 header.

 In short there are a whole host of reasons why many different headers are
 needed (including many not listed here).

 Nathan





Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Devdatta
While most of the discussion in this thread just repeats previous
discussions, I think Tyler makes a good (and new) point: the current
CORS draft still has no mention of the possible security problems that
Tyler talks about. The current draft's security
section

http://dev.w3.org/2006/waf/access-control/#security

is ridiculous considering the amount of discussion that has taken
place on this issue on this mailing list.

Before going to REC, I believe Anne needs to substantially improve
this section - based maybe on stuff from Maciej's presentation, which
I found really informative. He could also cite UMP as a possible
option for those worried about security.


Cheers
devdatta



On 12 May 2010 12:26, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 11:17 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 9:01 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, May 11, 2010 at 5:15 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 11 May 2010, Tyler Close wrote:

 CORS introduces subtle but severe Confused Deputy vulnerabilities

 I don't think everyone is convinced that this is the case.

 AFAICT, there is consensus that CORS has Confused Deputy
 vulnerabilities. I can pull up email quotes from almost everyone
 involved in the conversation.

 It is also not a question of opinion, but fact. CORS uses ambient
 authority for access control in 3 party scenarios. CORS is therefore
 vulnerable to Confused Deputy.

 First I should note that I have no idea what this argument is trying
 to result in. Is this an attempt at preventing CORS from going to REC?
 Or are we just rat holing old discussions?

 That said, I feel like I don't want to let the above claim go
 unanswered. Like Ian, I think you are oversimplifying the situation. I
 would argue that UMP risks resulting in the same confused deputy
 problems as CORS in the same complex scenarios where CORS risks
 confused deputy problems.

 With an UMP based web application it seems like a big risk that people
 will create APIs like:

 function fetchResource(uri, successCallback) {
  req = new UMPOrWhateverWellCallItRequest();
  uri += "&securityToken=" + gSecurityToken;
  req.open("GET", uri);
  req.send();
  req.onload = function() { successCallback(req.responseText) };
 }

 Such code risks suffering from the exact same confused deputy problems
 as CORS.

 To paraphrase: Developers might build something that is broken in the
 following way; therefore, we should give them something that is
 already broken in that way.

 My concern with UMP is that it takes no responsibility for
 the security model and instead puts all responsibility on web sites.

 The UMP spec does go to significant lengths to show developers how to
 do things the right way and why. The Security Considerations section
 provides a straightforward model for safely using UMP. CORS has
 nothing similar.

 I'm not convinced this will result in increased security on the web,
 just the ability for UAs to hide behind arguments like it's not our
 fault that the website has a bug.

 The best we can do is provide good tools and show people how to use
 them. CORS is a tool with known problems and no instructions on safe
 use.

 I don't see why we couldn't just give websites the ability to use
 either security model and stop wasting time reiterating old
 discussions.

 I just don't understand why we want to deploy a broken security model.

 --Tyler

 --
 Waterken News: Capability security on the Web
 http://waterken.sourceforge.net/recent.html





Re: CORS Header Filtering?

2010-05-12 Thread Devdatta
Do you have real examples of someone in a later stage adding headers
but expecting them to be protected by the Same Origin Policy (i.e., they
are fine with same-origin script accessing the headers)?

Regards
devdatta

On 12 May 2010 12:51, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 12:33 PM, Nathan nat...@webr3.org wrote:
 Yes,

 The simplest argument I can give is that we (server admins) are trusted to
 set the CORS headers, but not to remove any headers we don't want an XHR
 request to see - this is frankly ridiculous.

 The problem is there might not be a single server admin but many.
 Quoting from the UMP spec:

 
 Some HTTP servers construct an HTTP response in multiple stages. In
 such a deployment, an earlier stage might produce a uniform response
 which is augmented with additional response headers by a later stage
 that does not understand a uniform response header. This later stage
 might add response headers with the expectation they will be protected
 by the Same Origin Policy. The developer of the earlier stage might be
 unable to update the program logic of the later stage. To accommodate
 this deployment scenario, user-agents can filter out response headers
 on behalf of the server before exposing a uniform response to the
 requesting content.
 

 http://dev.w3.org/2006/waf/UMP/#response-header-filtering

 I believe the design presented in UMP for response header filtering
 addresses all use-cases, including your Location header example
 below.

 --Tyler

 CORS and same origin rules have already closed off the web and made *true*
 client side applications almost impossible, in addition it's planned to
 remove headers which are vital for many applications to work. Including many
 headers that are vital to the way the web works and part of the HTTP spec
 for very good reasons.

 Can't happen, not good, no argument could ever change my opinion on this,
 and it definitely needs changed.

 http://tools.ietf.org/html/rfc5023#section-5.3

 AtomPub 5.3: Creating a Resource
 ..If the Member Resource was created successfully, the server
 responds with a status code of 201 and a *Location* header that
 contains the IRI of the newly created Entry Resource.

 You can't seriously block REST, the design of the web - this is ridiculous.

 Nathan

 Devdatta wrote:

 IIRC HTTP-WG has asked this WG to change this behavior from a
 whitelist to a blacklist. There was a huge discussion about this a
 while back -- maybe this could be an example of why CORS should follow
 the HTTP-WG's recommendations.

 -devdatta

 On 12 May 2010 11:50, Nathan nat...@webr3.org wrote:

 All,

 Serious concern this time, I've just noted that as per 6.1 Cross-Origin
 Request of the CORS spec, User Agents must strip all response headers
 other
 than:

 * Cache-Control
 * Content-Language
 * Content-Type
 * Expires
 * Last-Modified
 * Pragma

 This simply can't be, many other headers are needed

 Link header is going to be heavily used (notably for Web Access Control!)

 Allow is needed when there's a 405 response (use GET instead of POST)

 Content-Location is needed to be able to show the user the real URI and
 provide it for subsequent requests and bookmarks

 Location is needed when a new resource has been created via POST (where a
 redirect wouldn't happen).

 Retry-After & Warning are needed for rather obvious reasons.

 There are non-RFC2616 headers on which functionality is often dependent
 (DAV
 headers for instance) - SPARQL Update also exposes via the MS-Author-via
 header.

 In short there are a whole host of reasons why many different headers are
 needed (including many not listed here).

 Nathan










 --
 Waterken News: Capability security on the Web
 http://waterken.sourceforge.net/recent.html




Re: [UMP] Server opt-in

2010-01-12 Thread Devdatta
 My question, then, is how can a server enjoy the confidentiality
 benefits of UMP without paying the security costs of CORS?  As
 currently specced, a server needs to take all the CORS risks in order
 to use UMP.  That seems unnecessary.


The page at http://dev.w3.org/2006/waf/UMP/#security clearly mentions
that if you want the confidentiality benefits of UMP, you need to
ensure that resources meant to be accessed only by particular
principals use explicit permission tokens (some nonce, I presume).

I don't understand how a server that protects all its relevant
resources with a nonce/permission token can lose confidentiality or
incur any of the security costs of CORS just by sending
Access-Control-Allow-Origin: *.
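
A minimal sketch of what I understand the UMP pattern to be (the URL and the
token value are made up for illustration):

// The unguessable token in the URL is the only credential; no cookies are
// sent, so the response can safely carry Access-Control-Allow-Origin: *.
var req = new XMLHttpRequest();
req.open("GET", "https://a.example.com/feed?token=unguessable-nonce-123", true);
req.onload = function () { /* only holders of the token get this far */ };
req.send();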

Regards
Devdatta



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Devdatta

 Another example would be an XBL binding file on hixie.ch that is
 accessible only to pages on damowmow.com. With CORS I can do this with one
 line in my .htaccess file. I don't see how to do it at all with UM.


Seems to me that these examples can just as easily be done with IE's
XDomainRequest. Are there examples for CORS which can't be done by
UM/XDR?
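
Roughly, the same fetch with IE's XDomainRequest would look like this (the
URL and the useBindingFile callback are made up; real code would
feature-detect XDomainRequest first):

var xdr = new XDomainRequest();
xdr.onload = function () { useBindingFile(xdr.responseText); };
xdr.open("GET", "http://hixie.ch/resources/binding.xml");
xdr.send();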

Cheers
devdatta

2009/12/16 Ian Hickson i...@hixie.ch:
 On Wed, 16 Dec 2009, Kenton Varda wrote:

 Without the benefit of full context (I only started following this list
 recently), I'd like cautiously to suggest that the UM solution to Ian's
 challenge seems awkward because the challenge is itself a poor design,
 and UM tends to be more difficult to work with when used to implement
 designs that are poor in the first place.

 Specifically -- and note that I'm not sure I follow all the details, so
 I could be missing things -- it seems that the challenge calls for site
 B to be hard-coded to talk to site A.  In a better world, site B would
 be able to talk to any site that provides feeds in the desired format.

 A concrete example of the example I was talking about is Google's Finance
 GData API. There's a fixed URL on A (Google's site) that represents my
 finance information. There's a site B (my portal page) that is hard-coded
 to fetch that data and display it. I'm logged into A, I'm not logged into
 B, and I've told A (Google) that it's ok to give B access to my financial
 data. Today, this involves a complicated set of bouncing back and forth.
 With CORS, it could be done with zero server-side scripting -- the file
 could just be statically generated with an HTTP header that grants
 permission to my portal to read the page.

 Another example would be an XBL binding file on hixie.ch that is
 accessible only to pages on damowmow.com. With CORS I can do this with one
 line in my .htaccess file. I don't see how to do it at all with UM.


 So imagine, for example, that when the user visits site A originally,
 the site can somehow tell the browser "I would like to provide a
 capability implementing the com.example.Feed interface.  The URL for
 this capability is [something unguessable]."  Then, when the user
 visits site B, it has a socket for an object implementing
 com.example.Feed.  When the user clicks on this socket, the browser
 pops up a list of com.example.Feed implementations that it knows about,
 such as the one from site A.  The user can then click on that one and
 thus hook up the sites.

 As a user, in both the finance case and XBL case, I don't want any UI. I
 just want it to Work.

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'





Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Devdatta
hmm.. just an XDR GET on the file at hixie.ch, which allows access only
if the request is from damowmow.com?

I am not sure -- is there anything special about XBL bindings which
would result in this not working?

Cheers
devdatta

2009/12/16 Ian Hickson i...@hixie.ch:
 On Wed, 16 Dec 2009, Devdatta wrote:
 
  Another example would be an XBL binding file on hixie.ch that is
  accessible only to pages on damowmow.com. With CORS I can do this with one
  line in my .htaccess file. I don't see how to do it at all with UM.

 Seems to me that these examples can just as easily be done with IE's
 XDomainRequest.

 How?

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: CORS versus Uniform Messaging?

2009-12-14 Thread Devdatta

 I also agree with Jonas on these points.  What might make the most
 sense is to let the marketplace decide which model is most useful.
 The most likely outcome (in my mind) is that they are optimized for
 different use cases and will each find their own niche.


I am not sure this is the case. Seems to me that the CORS API is more
powerful than UM (in that stuff that can be done by UM can be done by
CORS). In the end, if the W3C pushes out both UM and CORS, why would a
developer use UM? I imagine most would end up using CORS (even if
they can achieve their goals with UM), just because it's easier and
quicker.

Cheers
devdatta



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-13 Thread Devdatta

 Some parts of the protocol are not clear to me. Can you please clarify
 the following:
 1. In msg 1, what script context is the browser running in? Site A or
 Site B? (In other words, who initiates the whole protocol?)

 Server A, or a bookmark.

Wasn't Maciej's original scenario that of a user going to Site B (an
events site) and adding stuff to his calendar at A? In such a
scenario, the complete protocol should ideally start with B.

Thanks
devdatta



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-12 Thread Devdatta
Hi Tyler,

Some parts of the protocol are not clear to me. Can you please clarify
the following:
1. In msg 1, what script context is the browser running in? Site A or
Site B? (In other words, who initiates the whole protocol?)

2. Is msg 3 a form POST or an XHR POST? If the latter, msg 5 needs to be
marked as a GuestXHR.

3. The 'secret123' token: does it expire? If yes, when/how? Also, if
it expires, will the user have to confirm the grant from A again?


Thanks
Devdatta



2009/11/10 Tyler Close tyler.cl...@gmail.com:
 I've elaborated on the example at:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I've tried to include all the information from our email exchange.
 Please let me know what parts of the description remain ambiguous.

 Just so that we're on the same page, the prior description was only
 meant to give the reader enough information to see that the scenario
 is possible to implement under Maciej's stated constraints. I expected
 the reader to fill in their favored technique where that choice could
 be done safely in many ways. Many of the particulars of the design
 (cookies vs URL arguments, 303 vs automated form post, UI for noting
 conflicts) can be done in several different ways and the choice isn't
 very relevant to the current discussion. All that said, I'm happy to
 fill out the scenario with as much detail as you'd like, if that helps
 us reach an understanding.

 --Tyler

 On Thu, Nov 5, 2009 at 8:31 PM, Adam Barth w...@adambarth.com wrote:
 You seem to be saying that your description of the protocol is not
 complete and that you've left out several security-critical steps,
 such as

 1) The user interface for confirming transactions.
 2) The information the server uses to figure out which users it is talking 
 to.

 Can you please provide a complete description of your protocol with
 all the steps required?  I don't see how we can evaluate the security
 of your protocol without such a description.

 Thanks,
 Adam


 On Thu, Nov 5, 2009 at 12:05 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Hi Adam,

 Responses inline below...

 On Thu, Nov 5, 2009 at 8:56 AM, Adam Barth w...@adambarth.com wrote:
 Hi Tyler,

 I've been trying to understand the GuestXHR protocol you propose for
 replacing CORS:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I don't understand the message in step 5.  It seems like it might have
 a CSRF vulnerability.  More specifically, what does the server do when
 it receives a GET request for https://B/got?A=secret123?

 Think of the resource at /got as like an Inbox for accepting an add
 event permission from anyone. The meta-variable A in the query
 string, along with the secret, is the URL to send events to. So a
 concrete request might look like:

 GET /got?site=https%3A%2F%2Fcalendar.example.com&s=secret123
 Host: upcoming.example.net

 When upcoming.example.net receives this request, it might:

 1) If no association for the site exists, add it
 2) If an existing association for the site exists respond with a page
 notifying the user of the collision and asking if it should overwrite
 or ignore.

 Notice that step 6 is a response from Site B back to the user's browser.

 Alternatively, the response in step 6 could always be a confirmation
 page asking the user to confirm any state change that is about to be
 made. So, the page from the upcoming event site might say:

 I just received a request to add a calendar to your profile. Did you
 initiate this request? yes no

 Note that such a page would also be a good place to ask the user for a
 petname for the new capability, if you're into such things, but I
 digress...

 The slides say Associate user,A with secret123.  That sounds like
 server B changes state to associate secret123 with the the pair (user,
 A).  What stops an attacker from forging a cross-site request of the
 form https://B/got?A=evil123?

 In the design as presented, nothing prevents this. I considered the
 mitigation presented above sufficient for Maciej's challenge. If
 desired, we could tighten things up, without resorting to an Origin
 header, but I'd have to add some more stuff to the explanation.

  Won't that overwrite the association?

 That seems like a bad idea.

 There doesn't seem to be anything in the protocol that binds the A
 in that message to server A.

 The A is just the URL for server A.

 More generally, how does B know the message https://B/got?A=secret123
 has anything to do with user?  There doesn't seem to be anything in
 the message identifying the user.  (Of course, we could use cookies to
 do that, but we're assuming the cookie header isn't present.)

 This request is just a normal page navigation, so cookies and such
 ride along with the request. In the diagrams, all requests are normal
 navigation requests unless prefixed with GXHR:.

 We used these normal navigation requests in order to keep the user
 interface and network communication diagram as similar to Maciej's
 solution as possible. If I

Re: STS and lockCA

2009-11-11 Thread Devdatta
 One idea to consider, especially for lockCA, is to somehow denote that STS 
 should expire at the same time
 as the cert, perhaps by  omitting max-age or allowing max-age=cert, etc.  
 This will prevent accidentally
 causing STS to last longer or shorter than the cert expiration, especially 
 when it's rotated out or revoked.

 Why do we need a browser mechanism for that?  It seems like the site
 can easily compute whatever max-age value it wishes to set.

I am actually afraid that the website can easily miscompute that.

In general, with STS, I am afraid of sites miscalculating a
max-age-like setting and taking themselves offline. Having browsers
automatically expire STS at the same time as the cert makes sense to
me. Sites that do their certs right do not lose any security
properties, and sites that mess up will, in the worst case, fall back
to old HTTP/HTTPS behaviour (and not take themselves offline).

You could of course argue that an STS site's admin won't be stupid.
While I wouldn't put my money on that, that's an assumption the
specification is free to make, but it should be explicit about it
(e.g. by telling the spec reader: we are assuming you are smart; if
you mess up you can easily take your site offline).

Cheers
Devdatta

2009/11/11 Adam Barth w...@adambarth.com:
 On Tue, Nov 10, 2009 at 7:40 PM, Bil Corry b...@corry.biz wrote:
 Gervase Markham wrote on 10/01/2009 5:51 PM:
 I therefore propose a simple extension to the STS standard; a single
 token to be appended to the end of the header:

 lockCA

 One idea to consider, especially for lockCA, is to somehow denote that STS 
 should expire at the same time as the cert, perhaps by omitting max-age or 
 allowing max-age=cert, etc.  This will prevent accidentally causing STS to 
 last longer or shorter than the cert expiration, especially when it's 
 rotated out or revoked.

 Why do we need a browser mechanism for that?  It seems like the site
 can easily compute whatever max-age value it wishes to set.

 Adam