Re: CORS performance proposal

2015-02-19 Thread Bjoern Hoehrmann
* Martin Thomson wrote:
>On 20 February 2015 at 11:39, Bjoern Hoehrmann  wrote:
>> The proposal is to use `OPTIONS * HTTP/1.1` not `OPTIONS /x HTTP/1.1`.
>
>I missed that.  In which case I'd point out that `OPTIONS *` is very
>poorly supported.  Some people (myself included) want it to die a
>flaming death.

Evidence for "poorly supported" would certainly be helpful (web hosting
packages without TLS support, for instance, do not count, though).
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance proposal

2015-02-19 Thread Martin Thomson
On 20 February 2015 at 11:39, Bjoern Hoehrmann  wrote:
> The proposal is to use `OPTIONS * HTTP/1.1` not `OPTIONS /x HTTP/1.1`.

I missed that.  In which case I'd point out that `OPTIONS *` is very
poorly supported.  Some people (myself included) want it to die a
flaming death.



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Thu, Feb 19, 2015 at 4:31 PM, Anne van Kesteren  wrote:
> On Thu, Feb 19, 2015 at 10:05 PM, Jeffrey Walton  wrote:
>> For what it's worth, I'm just the messenger. There are entire
>> organizations with Standard Operating Procedures (SOPs) built around
>> the stuff I'm talking about. I'm telling you what they do based on my
>> experiences.
>
> From your arguments though it sounds like they would be fine with
> buying PCs from Lenovo with installed spyware, which makes it all
> rather dubious. You can't cite the Lenovo case as a failure of
> browsers when it's a compromised client.
>

No :) The organizations I work with have SOPs in place to address
that. They would not be running an unapproved image in the first
place.

*If* the user installed a CA for interception purposes, then yes, I
would blame the platform. The user does not set organizational
policies, and it's not acceptable that the browser allows the secure
channel to be subverted by an externality.

I think the secret ingredient that is missing in the browser secret
sauce is a Key Usage of INTERCEPTION. This way, a user who installs a
certificate without INTERCEPTION won't be able to use it for
interception because the browser won't break a known-good pinset
without it. And users who install one with INTERCEPTION will know what
they are getting. I know it sounds like Steve Bellovin's Evil Bit RFC
(April Fools' Day RFC), but that's what the security model forces us
into because we can't differentiate between the "good" bad guys and
the "bad" guys.
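The INTERCEPTION idea above can be sketched as a pin-validation rule. Everything here is hypothetical: the key-usage name, the pinset representation, and the decision logic are illustrative assumptions, not any real HPKP or X.509 mechanism.

```python
# Hypothetical sketch of the proposed "INTERCEPTION" key usage.
# The flag name, pinset format, and rule are all assumptions for
# illustration; no such X.509 extension or HPKP behavior exists.

def accept_connection(chain_spki_hashes, known_good_pinset, root_key_usages):
    """Return True if the TLS connection should be accepted.

    chain_spki_hashes: SPKI hashes of the presented certificate chain
    known_good_pinset: previously pinned SPKI hashes for this host
    root_key_usages:   key-usage flags on the locally installed root
    """
    # A chain that matches the known-good pinset is always fine.
    if known_good_pinset & set(chain_spki_hashes):
        return True
    # The pinset is broken: only a root explicitly marked for
    # interception may override it, so the user knows what they got.
    return "INTERCEPTION" in root_key_usages

# A MITM root without the flag cannot silently break the pinset:
print(accept_connection({"h3"}, {"h1", "h2"}, set()))             # False
# A deliberately installed interception root can:
print(accept_connection({"h3"}, {"h1", "h2"}, {"INTERCEPTION"}))  # True
```

Under this rule a user who installs a plain CA certificate gets no interception capability against pinned hosts, which is the differentiation Walton is asking for.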

In native apps (and sometimes hybrid apps), we place a control to
ensure that does not happen. We are not encumbered by the broken
security model.

Jeff



Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 12:38 PM, Brad Hill  wrote:
> I think that POSTing JSON would probably expose to CSRF a lot of things that
> work over HTTP but don't expect to be interacted with by web browsers in
> that manner.  That's why the recent JSON encoding for forms mandates that it
> be same-origin only.

Note that you can already POST JSON cross-origin. Without any
preflight. The only thing you can't do is to set the "Content-Type"
header to the official JSON mimetype.

So the question is, does the server check that the Content-Type header
is set to "application/json" and if not abort any processing?
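Jonas's point can be sketched as a preflight predicate. The rules below follow the CORS "simple request" definition (simple methods plus three safelisted Content-Type values); the function name and shape are mine.

```python
# Sketch of the CORS "simple request" rules under discussion: a POST is
# preflight-free only if its Content-Type is one of the three safelisted
# values, which is why application/json always triggers a preflight.

SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, headers):
    """Decide whether a cross-origin request requires an OPTIONS preflight."""
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        lname = name.lower()
        if lname == "content-type":
            # Only the media type matters, not parameters like charset.
            if value.split(";")[0].strip().lower() not in SIMPLE_CONTENT_TYPES:
                return True
        elif lname not in {"accept", "accept-language", "content-language"}:
            return True  # any other author-set header is "custom"
    return False

# JSON posted as text/plain slips through without a preflight...
print(needs_preflight("POST", {"Content-Type": "text/plain"}))        # False
# ...but the official JSON media type does not.
print(needs_preflight("POST", {"Content-Type": "application/json"}))  # True
```

So a server that wants the Content-Type check as a CSRF guard must actively reject anything that is not `application/json`, because the body itself can arrive cross-origin either way.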

/ Jonas



Re: CORS performance proposal

2015-02-19 Thread Bjoern Hoehrmann
* Martin Thomson wrote:
>On 20 February 2015 at 00:29, Anne van Kesteren  wrote:
>>   Access-Control-Allow-Origin-Wide-Cache: [origin]
>
>This has some pretty serious implications for server deployments that host
>mutually distrustful applications.  Now, these servers are already
>pretty well hosed from other directions, but I don't believe that
>there is any pre-existing case where a header field set in a request
>to /x could affect future requests to /y.
>
>An alternative would be to use /.well-known for site wide policies.

The proposal is to use `OPTIONS * HTTP/1.1` not `OPTIONS /x HTTP/1.1`.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: CORS performance proposal

2015-02-19 Thread Brian Smith
On Thu, Feb 19, 2015 at 5:29 AM, Anne van Kesteren  wrote:
> When the user agent is about to make its first preflight to an origin
> (timeout up to the user agent), it first makes a preflight that looks
> like:
>
>   OPTIONS *
>   Access-Control-Request-Origin-Wide-Cache: [origin]
>   Access-Control-Request-Method: *
>   Access-Control-Request-Headers: *

This would make CORS preflight even slower for every server that
doesn't implement the new proposal (i.e. every currently-deployed
server and probably most servers deployed in the future). Perhaps the
"OPTIONS *" request could be made in parallel with the initial
preflight requests.

But, then, what happens when the information in the "OPTIONS *"
response conflicts with the information in the normal preflight
request?
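One way to resolve the conflict Brian raises, purely as an illustration: treat the origin-wide grant as a cache optimization and let a per-resource preflight, when one exists, always win. Nothing below is specified anywhere; the function and the precedence rule are assumptions.

```python
# Hypothetical precedence rule for the conflict above: a resource-specific
# preflight answer, when available, overrides the origin-wide "OPTIONS *"
# grant. This is an assumption for discussion, not specified behavior.

def allowed(method, wide_grant, resource_preflight=None):
    """wide_grant / resource_preflight: sets of allowed methods, or None."""
    if resource_preflight is not None:  # the specific answer wins
        return method in resource_preflight
    return wide_grant is not None and method in wide_grant

print(allowed("PUT", {"GET", "PUT"}))                              # True
print(allowed("DELETE", {"GET"}, resource_preflight={"DELETE"}))   # True
print(allowed("PUT", {"GET", "PUT"}, resource_preflight={"GET"}))  # False
```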

> I think this has a reasonable tradeoff between security and opening up
> all the power of the HTTP APIs on the server without the performance
> hit. It still makes the developer very conscious about the various
> features involved.

I think developer consciousness is exactly the issue here:

1. Let's say you want to add "OPTIONS *" preflight support to an
existing web application. How do you go about finding all the things
that need to change to make that safe to do? It seems very difficult
to successfully find every place the app assumes it is protected by
the fact that it doesn't do CORS.

2. Similar to #1, let's say that two teams develop two parts of a
website. One of the teams follows normal CORS rules and the other
depends on the proposed "OPTIONS *" mechanism. This would be a
disaster if/when both apps are deployed on the same origin.

3. Because of these issues, an organization forces its developers to
develop every app as though every resource is CORS-enabled, to
future-proof against the scenario where "OPTIONS *" is deployed in the
future. This makes the development of the web app more difficult and
slower.

4. In the discussion of Entry Point Regulation (EPR) on WebAppSec, the
main argument in favor of it is that it is impossible for developers
to do things like #3 correctly and it is unreasonable for us to expect
them to. I don't buy the EPR argument completely, but I do see some
merit in the underlying "secure by default" argument behind EPR.

Because of these concerns, I think that it would be worthwhile to
study a concrete example of the problem, to make sure we correctly
understand the use case we're trying to solve. As we saw yesterday
with the PouchDB/CouchDB example, it is easy to accidentally and
unnecessarily force a preflight. It may also be the case that we can
find other, safer, ways to avoid preflights and/or optimize how they
are done, such as by optimizing CORS for use with HTTP/2 server push
mechanisms. But, we need to see real instances of the problem first.

Cheers,
Brian



Re: CORS explained simply

2015-02-19 Thread Arthur Barstow

On 2/19/15 4:28 PM, henry.st...@bblfish.net wrote:

Hi,

   I find that understanding CORS is really not easy.
It seems that what is missing is a general overview document,
one that would start by explaining why the simplest possible method
won't work, in order to help the user understand why more
complex methods are needed.

For example, the first things one should start by explaining are:

  1) requests that do not require authentication
q1: why is the origin sent at all? And why are there still restrictions?
q2: why does POSTing a url-encoded form not require a pre-flight, but 
POSTing other data does?

  2) On requests that do need authentication:
q3: Why are the pre-flight requests needed at all?

I know that the answer to q1 is that some servers have access control methods
based on the IP address of the client. But it is worth stating the requirement
clearly in the specs so that this can be understood.

There is also the question as to why the server needs to make a decision as to
what the client can see. But why can't it be the client? After all, the user
could decide to give more rights to some JS apps than to others, and that
would work too.

I am not saying that these questions don't have answers. It is just that they
would help a developer understand why CORS has taken the shape it has, and, by
understanding the reasons for the decisions taken, be better able to think
about it.


Hi Henry,

I agree this type of info would be useful, so a long time ago I started 
to bookmark related resources (f.ex. see [1]) but stopped as CORS became 
deployed and sites like enable-cors.org emerged. Maciej's deck [2] is 
still a really nice overview.


(BTW, public-webappsec might be a good place to send your e-mail.)

-Thanks, AB

[1] https://delicious.com/afbarstow/CORS
[2] 
https://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0468/CORS.pdf





Re: CORS performance

2015-02-19 Thread Bjoern Hoehrmann
* Jonas Sicking wrote:
>We most likely can consider the content-type header as *not* "custom".
>I was one of the people way back when that pointed out that there's a
>theoretical chance that allowing arbitrary content-type headers could
>cause security issues. But it seems highly theoretical.
>
>I suspect that the mozilla security team would be fine with allowing
>arbitrary content-types to be POSTed though. Worth asking. I can't
>speak for other browser vendors of course.

I think the situation might well be worse now than it was when we first
started discussing what is now "CORS". In any case, this would be an
experiment that cannot easily be undone, browser vendors would not pay
the bill if there are actually large-scale security vulnerabilities
opened up by such a change, and I do not really see notable benefits in
conducting such an experiment.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: [WebCrypto.Next] Any ideas on how to proceed?

2015-02-19 Thread Harry Halpin


On 02/18/2015 08:59 AM, David Leon Gil wrote:
> W.r.t. WebCrypto-Next:
> 
> It would be wonderful to see a few useful algorithms added to the spec:
> 
> - a modern XOF (e.g., SHAKE256)
> - a fast hash function (e.g., BLAKE2b)
> - a sequential-hard KDF (e.g., scrypt)
> - some non-NSA curves

New algorithms are definitely within charter for the existing WebCrypto WG.

Note that we are aware of, and have formal dependencies on, the IETF
CFRG recommendation in terms of non-NIST curves.

For SHAKE256, BLAKE, and scrypt I suggest asking the WebCrypto WG
whether they have any opinion, and whether such code can already be
accessed via calls to the underlying platform (NSS/Windows/etc.) or new
underlying code would have to be written and distributed.
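As a data point on platform availability: all three primitives David lists already ship in, for example, Python's standard `hashlib`, so underlying implementations do exist outside the browser engines.

```python
# The three requested primitives, as exposed by Python's stdlib hashlib
# (SHAKE256 and BLAKE2b since 3.6; scrypt with OpenSSL 1.1+). Parameter
# values here are illustrative only, not recommendations.
import hashlib

# SHAKE256 is an extendable-output function: the caller picks the length.
print(hashlib.shake_256(b"abc").hexdigest(16))

# BLAKE2b with a 32-byte digest.
print(hashlib.blake2b(b"abc", digest_size=32).hexdigest())

# scrypt, the sequential-hard KDF.
key = hashlib.scrypt(b"password", salt=b"salt", n=2**14, r=8, p=1, dklen=32)
print(len(key))  # 32
```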

> 
> as well as a slightly higher-level interface that makes it less
> complicated to do things like (cryptographically sound) ECDH without
> shooting yourself in the foot repeatedly. (I tried with the current
> API, and I have fewer toes.)

Yes, this is a design goal once we have the supported browser profile
properly worked out. I'd love to hear more about your thoughts and
experiences on this.

> 
> There are some other things that would be great to see standardized in
> this area, but WebCrypto may not be the appropriate WG.

I believe there will be a survey shortly. Again, this is a discussion
for general Web security, so whether or not something goes in WebCrypto
is not the most important question; the question is whether we can get
consensus for doing it, and if so, we can then find the most appropriate
existing Working Group or create a new one.


> 
> On Tue, Feb 17, 2015 at 10:30 PM, Anders Rundgren
>  wrote:
>> As you probably noted, all proposals related to
>> http://www.w3.org/2012/webcrypto/webcrypto-next-workshop/
>> were shot down.
>>
>> Are we waiting on something, and if that is the case, exactly what?
>>
>> Is the idea of building on an already semi-established solution like Chrome
>> Native Messaging unacceptable?
>>
>> Or should this disparate community rather standardize on U2F?
>>
>> Another solution (IMO "workaround") is using local services supplying
>> "Security Services" through Redirects, XHR or WebSockets.
>>
>> Since the (in)famous plugins were simply removed without any thoughts of the
>> implications, it seems that the browser vendors currently "own" this
>> question.
>>
>> Anders
>>
> 



Re: The futile war between Native and Web

2015-02-19 Thread Anne van Kesteren
On Thu, Feb 19, 2015 at 10:05 PM, Jeffrey Walton  wrote:
> For what it's worth, I'm just the messenger. There are entire
> organizations with Standard Operating Procedures (SOPs) built around
> the stuff I'm talking about. I'm telling you what they do based on my
> experiences.

From your arguments though it sounds like they would be fine with
buying PCs from Lenovo with installed spyware, which makes it all
rather dubious. You can't cite the Lenovo case as a failure of
browsers when it's a compromised client.


-- 
https://annevankesteren.nl/



Re: CORS performance

2015-02-19 Thread henry.st...@bblfish.net

> On 19 Feb 2015, at 22:04, Martin Thomson  wrote:
> 
> On 18 February 2015 at 06:31, Brad Hill  wrote:
>> Some of the things that argue against /.well-known are:
>> 
>> 1) Added latency of fetching the resource.
> 
> It's not available everywhere yet, but you could push it, based on the below.
> 
>> 2) Clients hammering servers for non-existent /.well-known resources (the
>> favicon issue)
> 
> You could avoid that by Link:-ing to the /.well-known and only hitting
> it if the link appears.

I assume you mean the Link: header. In that case I like the idea.

Well, the client could even cache the document and only hit it once for the
whole server. Furthermore, there would then be no need for the URL to be in
a .well-known location. It could be any resource whatsoever.
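Martin's Link: idea, as Henry reads it, amounts to: only fetch the policy document when the server advertises one. A minimal sketch, with the caveat that the relation name `cors-policy` is made up for illustration (no such link relation is registered):

```python
# Sketch of "only hit the policy URL if a Link header advertises it".
# The rel name "cors-policy" is a hypothetical placeholder.
import re

def policy_url(link_header):
    """Extract the target of a Link header with the assumed rel, if any."""
    if not link_header:
        return None  # nothing advertised: skip the extra request entirely
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>\s*;\s*rel="?([^";]+)"?', part)
        if m and m.group(2) == "cors-policy":
            return m.group(1)
    return None

print(policy_url('</policy.json>; rel="cors-policy"'))  # /policy.json
print(policy_url(None))                                 # None
```

This avoids the favicon-style hammering: servers that opt out never see a request for the policy resource, and the target need not live under /.well-known at all.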

That is the way that the "Web Access Control" system functions. See the link
from this page:

http://www.w3.org/2005/Incubator/webid/spec/

Every resource has a link to those resources that require access.
Also see the curl examples:
https://github.com/read-write-web/rww-play/wiki/Curl-Interactions


Henry

Social Web Architect
http://bblfish.net/




CORS explained simply

2015-02-19 Thread henry.st...@bblfish.net
Hi,

  I find that understanding CORS is really not easy.
It seems that what is missing is a general overview document,
one that would start by explaining why the simplest possible method
won't work, in order to help the user understand why more
complex methods are needed.

For example, the first things one should start by explaining are:

 1) requests that do not require authentication
   q1: why is the origin sent at all? And why are there still restrictions?
   q2: why does POSTing a url-encoded form not require a pre-flight, but 
POSTing other data does?

 2) On requests that do need authentication:
   q3: Why are the pre-flight requests needed at all?

I know that the answer to q1 is that some servers have access control methods
based on the IP address of the client. But it is worth stating the requirement
clearly in the specs so that this can be understood.

There is also the question as to why the server needs to make a decision as to
what the client can see. But why can't it be the client? After all, the user
could decide to give more rights to some JS apps than to others, and that
would work too.

I am not saying that these questions don't have answers. It is just that they
would help a developer understand why CORS has taken the shape it has, and, by
understanding the reasons for the decisions taken, be better able to think
about it.

Henry 


Social Web Architect
http://bblfish.net/




Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
 > I am not sure about that...

Data has three states:

  (1) Data in storage
  (2) Data on display
  (3) Data in transit

Because browsers can't authenticate servers with any degree of
certainty, they have lost the "data in transit" state. That leaves a
poor choice of options, like side loading on location-limited
channels. Side loading and location-limited channels are not very
scalable.

Another option is to allow the browser to handle the lower value data
and accept the risk. That's what US financial institutions do so they
don't lose customers.

The final option is to "put your trust in the browser platform".
That's what many people are happy to do. But it's not one-size-fits-all,
and it has gaps that become pain points when data sensitivity is
above trivial or low.

> I think it is possible to make a web site safe.  In
> order to achieve that, we have to make sure that
>
> a) the (script) code doesn't misbehave (=CSP);
> b) the integrity of the (script) code is secured on the server and while
> in transit;

I think these are necessary preconditions, but not the only conditions.

For what it's worth, I'm just the messenger. There are entire
organizations with Standard Operating Procedures (SOPs) built around
the stuff I'm talking about. I'm telling you what they do based on my
experiences.

Jeff

On Thu, Feb 19, 2015 at 3:55 PM, Michaela Merz
 wrote:
>
> I am not sure about that. Based on the premise that the browser itself
> doesn't leak data, I think it is possible to make a web site safe.  In
> order to achieve that, we have to make sure that
>
> a) the (script) code doesn't misbehave (=CSP);
> b) the integrity of the (script) code is secured on the server and while
> in transit;
>
> I believe both of those imperative necessities are achievable.
>
> Michaela
>
>
> On 02/19/2015 01:43 PM, Jeffrey Walton wrote:
>> On Thu, Feb 19, 2015 at 1:44 PM, Bjoern Hoehrmann  wrote:
>>> * Jeffrey Walton wrote:
 Here's yet another failure that Public Key Pinning should have
 stopped, but the browser's rendition of HPKP could not stop because of
 the broken security model:
 http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
>>> In this story the legitimate user with full administrative access to the
>>> systems is Lenovo. I do not really see how actual user agents could have
>>> "stopped" anything here. Timbled agents that act on behalf of someone
>>> other than the user might have denied users their right to modify their
>>> system as Lenovo did here, but that is clearly out of scope of browsers.
>>> --
>> Like I said, the security model is broken and browser based apps can
>> only handle low value data.



Re: CORS performance

2015-02-19 Thread Martin Thomson
On 18 February 2015 at 06:31, Brad Hill  wrote:
> Some of the things that argue against /.well-known are:
>
> 1) Added latency of fetching the resource.

It's not available everywhere yet, but you could push it, based on the below.

> 2) Clients hammering servers for non-existent /.well-known resources (the
> favicon issue)

You could avoid that by Link:-ing to the /.well-known and only hitting
it if the link appears.



Re: CORS performance proposal

2015-02-19 Thread Martin Thomson
On 20 February 2015 at 00:29, Anne van Kesteren  wrote:
>   Access-Control-Allow-Origin-Wide-Cache: [origin]


This has some pretty serious implications for server deployments that host
mutually distrustful applications.  Now, these servers are already
pretty well hosed from other directions, but I don't believe that
there is any pre-existing case where a header field set in a request
to /x could affect future requests to /y.

An alternative would be to use /.well-known for site wide policies.



Re: The futile war between Native and Web

2015-02-19 Thread Michaela Merz

I am not sure about that. Based on the premise that the browser itself
doesn't leak data, I think it is possible to make a web site safe.  In
order to achieve that, we have to make sure that

a) the (script) code doesn't misbehave (=CSP);
b) the integrity of the (script) code is secured on the server and while
in transit;

I believe both of those imperative necessities are achievable.

Michaela


On 02/19/2015 01:43 PM, Jeffrey Walton wrote:
> On Thu, Feb 19, 2015 at 1:44 PM, Bjoern Hoehrmann  wrote:
>> * Jeffrey Walton wrote:
>>> Here's yet another failure that Public Key Pinning should have
>>> stopped, but the browser's rendition of HPKP could not stop because of
>>> the broken security model:
>>> http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
>> In this story the legitimate user with full administrative access to the
>> systems is Lenovo. I do not really see how actual user agents could have
>> "stopped" anything here. Timbled agents that act on behalf of someone
>> other than the user might have denied users their right to modify their
>> system as Lenovo did here, but that is clearly out of scope of browsers.
>> --
> Like I said, the security model is broken and browser based apps can
> only handle low value data.
>
> Jeff
>




Re: CORS performance

2015-02-19 Thread Brad Hill
I think that POSTing JSON would probably expose to CSRF a lot of things
that work over HTTP but don't expect to be interacted with by web browsers
in that manner.  That's why the recent JSON encoding for forms mandates
that it be same-origin only.

On Thu Feb 19 2015 at 12:23:48 PM Jonas Sicking  wrote:

> On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
> >> so presumably it is OK to set the Content-Type to text/plain
> >
> > That's not ok, but may explain my confusion: is Content-Type considered a
> > Custom Header that will always trigger a preflight? If so then none of the
> > caching will apply; CouchDB requires sending the appropriate content-type
>
> We most likely can consider the content-type header as *not* "custom".
> I was one of the people way back when that pointed out that there's a
> theoretical chance that allowing arbitrary content-type headers could
> cause security issues. But it seems highly theoretical.
>
> I suspect that the mozilla security team would be fine with allowing
> arbitrary content-types to be POSTed though. Worth asking. I can't
> speak for other browser vendors of course.
>
> / Jonas
>
>


Re: CORS performance proposal

2015-02-19 Thread Jonas Sicking
Would this be allowed for both requests with credentials and requests
without credentials? The security implications of the two are very
different.

/ Jonas

On Thu, Feb 19, 2015 at 5:29 AM, Anne van Kesteren  wrote:
> When the user agent is about to make its first preflight to an origin
> (timeout up to the user agent), it first makes a preflight that looks
> like:
>
>   OPTIONS *
>   Access-Control-Request-Origin-Wide-Cache: [origin]
>   Access-Control-Request-Method: *
>   Access-Control-Request-Headers: *
>
> If the response is
>
>   2xx XX
>   Access-Control-Allow-Origin-Wide-Cache: [origin]
>   Access-Control-Allow-Methods: *
>   Access-Control-Allow-Headers: *
>   Access-Control-Max-Age: [max-age]
>
> then no more preflights will be made for the duration of [max-age] (or
> shortened per user agent preference). If the response includes
>
>   Access-Control-Allow-Credentials: true
>
> the cache scope is increased to requests that include credentials.
>
> I think this has a reasonable tradeoff between security and opening up
> all the power of the HTTP APIs on the server without the performance
> hit. It still makes the developer very conscious about the various
> features involved.
>
> The cache would be on a per requesting origin basis as per the headers
> above. The Origin and Access-Control-Allow-Origin would not take part
> in this exchange, to make it very clear what this is about.
>
> (This does not affect Access-Control-Expose-Headers or any of the
> other headers required as part of non-preflight responses.)
>
>
> --
> https://annevankesteren.nl/
>
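The proposal above can be sketched as a server-side handler. The header names come straight from Anne's proposal; the function, the trusted-origins check, and the idea that a deployment would gate the wholesale grant at all are assumptions for illustration, and whether sending these headers is ever safe is exactly what this thread is debating.

```python
# Sketch of a server answering the proposed origin-wide preflight.
# Header names are from the proposal quoted above; everything else
# (function shape, origin allow-list) is an illustrative assumption.

def origin_wide_preflight_response(request_headers, trusted_origins,
                                   max_age=86400):
    """Return (status, response_headers) for an `OPTIONS *` request."""
    origin = request_headers.get("Access-Control-Request-Origin-Wide-Cache")
    if origin not in trusted_origins:
        return 403, {}  # no origin-wide grant; normal preflights still apply
    return 200, {
        "Access-Control-Allow-Origin-Wide-Cache": origin,
        "Access-Control-Allow-Methods": "*",
        "Access-Control-Allow-Headers": "*",
        "Access-Control-Max-Age": str(max_age),
    }

status, headers = origin_wide_preflight_response(
    {"Access-Control-Request-Origin-Wide-Cache": "https://app.example"},
    trusted_origins={"https://app.example"},
)
print(status)  # 200
```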



Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
>> so presumably it is OK to set the Content-Type to text/plain
>
> That's not ok, but may explain my confusion: is Content-Type considered a
> Custom Header that will always trigger a preflight? If so then none of the
> caching will apply; CouchDB requires sending the appropriate content-type

We most likely can consider the content-type header as *not* "custom".
I was one of the people way back when that pointed out that there's a
theoretical chance that allowing arbitrary content-type headers could
cause security issues. But it seems highly theoretical.

I suspect that the mozilla security team would be fine with allowing
arbitrary content-types to be POSTed though. Worth asking. I can't
speak for other browser vendors of course.

/ Jonas



Re: CORS performance

2015-02-19 Thread Jonas Sicking
On Thu, Feb 19, 2015 at 3:30 AM, Anne van Kesteren  wrote:
> On Thu, Feb 19, 2015 at 12:17 PM, Dale Harvey  wrote:
>> With Couch / PouchDB we are working with an existing REST API wherein every
>> request is to a different url (which is unlikely to change), the performance
>> impact is significant since most of the time is used up by latency, the CORS
>> preflight request essentially double the time it takes to do anything
>
> Yeah, also, it should not be up to us how people design their HTTP
> APIs. Limiting HTTP in that way because it is hard to make CORS scale
> seems bad.
>
>
> I think we've been too conservative when introducing CORS. It's
> effectively protecting content behind a firewall,

...and content that uses user credentials like cookies.

/ Jonas



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Thu, Feb 19, 2015 at 1:44 PM, Bjoern Hoehrmann  wrote:
> * Jeffrey Walton wrote:
>>Here's yet another failure that Public Key Pinning should have
>>stopped, but the browser's rendition of HPKP could not stop because of
>>the broken security model:
>>http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
>
> In this story the legitimate user with full administrative access to the
> systems is Lenovo. I do not really see how actual user agents could have
> "stopped" anything here. Timbled agents that act on behalf of someone
> other than the user might have denied users their right to modify their
> system as Lenovo did here, but that is clearly out of scope of browsers.
> --
Like I said, the security model is broken and browser based apps can
only handle low value data.

Jeff



Re: The futile war between Native and Web

2015-02-19 Thread Bjoern Hoehrmann
* Jeffrey Walton wrote:
>Here's yet another failure that Public Key Pinning should have
>stopped, but the browser's rendition of HPKP could not stop because of
>the broken security model:
>http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.

In this story the legitimate user with full administrative access to the
systems is Lenovo. I do not really see how actual user agents could have
"stopped" anything here. Timbled agents that act on behalf of someone
other than the user might have denied users their right to modify their
system as Lenovo did here, but that is clearly out of scope of browsers.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
 Available for hire in Berlin (early 2015)  · http://www.websitedev.de/ 



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Thu, Feb 19, 2015 at 12:15 PM, Anne van Kesteren  wrote:
> On Thu, Feb 19, 2015 at 6:10 PM, Jeffrey Walton  wrote:
>> On Mon, Feb 16, 2015 at 3:34 AM, Anne van Kesteren  wrote:
>>> What would you suggest instead?
>>
>> Sorry to dig up an old thread.
>>
>> Here's yet another failure that Public Key Pinning should have
>> stopped, but the browser's rendition of HPKP could not stop because of
>> the broken security model:
>> http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
>
> That does not really answer my questions though.
>
Good point.

Stop letting externalities control critical security parameters
unmolested, since an externality is neither the origin nor the user.

HPKP has a reporting mode, but a pinset broken by a locally installed
trust anchor is a MUST NOT report. Broken pinsets should be reported to
the user and the origin so the browser is no longer complicit in
covering up for the attacker.

Jeff



Re: The futile war between Native and Web

2015-02-19 Thread Anne van Kesteren
On Thu, Feb 19, 2015 at 6:10 PM, Jeffrey Walton  wrote:
> On Mon, Feb 16, 2015 at 3:34 AM, Anne van Kesteren  wrote:
>> What would you suggest instead?
>
> Sorry to dig up an old thread.
>
> Here's yet another failure that Public Key Pinning should have
> stopped, but the browser's rendition of HPKP could not stop because of
> the broken security model:
> http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.

That does not really answer my questions though.


-- 
https://annevankesteren.nl/



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Mon, Feb 16, 2015 at 3:34 AM, Anne van Kesteren  wrote:
> On Sun, Feb 15, 2015 at 10:59 PM, Jeffrey Walton  wrote:
>> For the first point, Pinning with Overrides
>> (tools.ietf.org/html/draft-ietf-websec-key-pinning) is a perfect
>> example of the wrong security model. The organizations I work with did
>> not drink the Web 2.0 Kool-Aid; it's not acceptable to them that an
>> adversary can so easily break the secure channel.
>
> What would you suggest instead?

Sorry to dig up an old thread.

Here's yet another failure that Public Key Pinning should have
stopped, but the browser's rendition of HPKP could not stop because of
the broken security model:
http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.

Jeff



Re: Web Components F2F in April 2015

2015-02-19 Thread Dimitri Glazkov
On Thu, Feb 19, 2015 at 5:28 AM, Arthur Barstow 
wrote:

>
> When will you be able to confirm the location? Regardless, I think we
> should consider the meeting as confirmed.
>

Working on it now. Will report back shortly.

:DG<


Re: Staying on Topic [Was: Re: WebPortable/PlatformProprietary - An Established Concept]

2015-02-19 Thread Arthur Barstow

On 2/19/15 9:57 AM, Anders Rundgren wrote:
Where are you supposed to propose new APIs?  Can such a proposal be made 
by non-W3C members?
This was a proposal for using Chrome Native Messaging as the 
foundation for a new standard.


Perhaps you should pursue the Community Group process 
.


-Thanks, AB





Re: Staying on Topic [Was: Re: WebPortable/PlatformProprietary - An Established Concept]

2015-02-19 Thread Anders Rundgren

On 2015-02-19 15:47, Arthur Barstow wrote:

On 2/19/15 9:35 AM, Anders Rundgren wrote:

Hi Anders,


Hi Art,



In the spirit of restricting postings on this list to the group's
chartered scope ...


http://www.w3.org/2008/webapps/

"This work will include both documenting existing APIs such as XMLHttpRequest
 and developing new APIs in order to enable richer web applications"

Where are you supposed to propose new APIs?  Can such a proposal be made by 
non-W3C members?
This was a proposal for using Chrome Native Messaging as the foundation for a 
new standard.

Cheers
Anders



I don't see a clear and direct connection between your posting [1] and
WebApps' chartered scope [2]. If I missed such a connection, please
focus your related postings to a specific spec and use the spec's
short-name as the Subject: prefix (f.ex. for the Manifest spec use
"[appmanifest] ...").

-Thanks, AB

[1]

[2] 






Staying on Topic [Was: Re: WebPortable/PlatformProprietary - An Established Concept]

2015-02-19 Thread Arthur Barstow

On 2/19/15 9:35 AM, Anders Rundgren wrote:

Hi Anders,

In the spirit of restricting postings on this list to the group's 
chartered scope ...


I don't see a clear and direct connection between your posting [1] and 
WebApps' chartered scope [2]. If I missed such a connection, please 
focus your related postings to a specific spec and use the spec's 
short-name as the Subject: prefix (f.ex. for the Manifest spec use 
"[appmanifest] ...").


-Thanks, AB

[1] 


[2] 



WebPortable/PlatformProprietary - An Established Concept

2015-02-19 Thread Anders Rundgren

HTTPS Client Certificate Authentication has been supported by all browsers for 
almost 20 years.
It exposes a fully standardized interface to Web Applications, which is simply 
a URL.
In spite of that, it is entirely proprietary with respect to integration in the 
browser platform,
with implementations based on PKCS #11, CryptoAPI, JCE, .NET, NSS, as well as 
working with a
huge range of secure key-containers like SIM, PIV, TEE, TPM, "Soft Keys".  This 
side of the
coin has not been standardized since it [provably] wasn't needed.

In: 
https://lists.w3.org/Archives/Public/public-webcrypto-comments/2015Jan/.html
Google's Ryan Sleevi writes:
   What you're looking for is
http://blog.chromium.org/2013/10/connecting-chrome-apps-and-extensions.html

This scheme could (after "polishing" + W3C standardization) without doubt 
support the same
powerful paradigm as HTTPS Client Certificate Authentication 
(WebPortable/PlatformProprietary),
for virtually any security application you could think of.

Cheers,
Anders




Re: CORS performance proposal

2015-02-19 Thread Dale Harvey
> The cache would be on a per requesting origin basis as per the headers
> above. The Origin and Access-Control-Allow-Origin would not take part
> in this exchange, to make it very clear what this is about.

I don't want to conflate what could be separate proposals, but they seem
closely related. This would improve the situation by easing the number of
preflight requests to be made; however, it still requires servers to follow
what is a fairly complicated process of setting up the appropriate headers.

What if we allowed one of the response fields to denote "this URL is on the
public internet, please don't bother with CORS restrictions"? This means the
process of setting up CORS could be to ensure a single response returns
with the appropriate headers, and servers no longer need to worry about
every possible header clients can send to each particular URL.

(Clients would have to set a custom header to ensure the preflight
optimisation was skipped I believe)

This would be very much in line with how it was implemented for flash -
http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html
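As a rough illustration only (not a spec), a server opting a whole origin out could answer the origin-wide `OPTIONS *` handshake quoted below with a response built roughly like this sketch. The header names come from Anne's proposal; the function shape and the 86400-second max-age are made-up assumptions:

```javascript
// Sketch of a server-side responder for the proposed origin-wide
// preflight. Header names are from the proposal, not any standard;
// everything else here is illustrative.
function originWidePreflightResponse(requestHeaders) {
  const origin = requestHeaders["access-control-request-origin-wide-cache"];
  if (!origin) {
    // Not the origin-wide handshake; refuse it.
    return { status: 400, headers: {} };
  }
  return {
    status: 200,
    headers: {
      "Access-Control-Allow-Origin-Wide-Cache": origin,
      "Access-Control-Allow-Methods": "*",
      "Access-Control-Allow-Headers": "*",
      "Access-Control-Max-Age": "86400", // one day, per user agent preference
    },
  };
}
```

A server that answers this way once per origin would, under the proposal, see no further preflights for a day.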


On 19 February 2015 at 13:29, Anne van Kesteren  wrote:

> When the user agent is about to make its first preflight to an origin
> (timeout up to the user agent), it first makes a preflight that looks
> like:
>
>   OPTIONS *
>   Access-Control-Request-Origin-Wide-Cache: [origin]
>   Access-Control-Request-Method: *
>   Access-Control-Request-Headers: *
>
> If the response is
>
>   2xx XX
>   Access-Control-Allow-Origin-Wide-Cache: [origin]
>   Access-Control-Allow-Methods: *
>   Access-Control-Allow-Headers: *
>   Access-Control-Max-Age: [max-age]
>
> then no more preflights will be made for the duration of [max-age] (or
> shortened per user agent preference). If the response includes
>
>   Access-Control-Allow-Credentials: true
>
> the cache scope is increased to requests that include credentials.
>
> I think this has a reasonable tradeoff between security and opening up
> all the power of the HTTP APIs on the server without the performance
> hit. It still makes the developer very conscious about the various
> features involved.
>
> The cache would be on a per requesting origin basis as per the headers
> above. The Origin and Access-Control-Allow-Origin would not take part
> in this exchange, to make it very clear what this is about.
>
> (This does not affect Access-Control-Expose-Headers or any of the
> other headers required as part of non-preflight responses.)
>
>
> --
> https://annevankesteren.nl/
>
>


Re: CORS performance

2015-02-19 Thread James M Snell
On Feb 19, 2015 3:33 AM, "Anne van Kesteren"  wrote:
>
> On Thu, Feb 19, 2015 at 12:17 PM, Dale Harvey  wrote:
> > With Couch / PouchDB we are working with an existing REST API wherein
every
> > request is to a different url (which is unlikely to change), the
performance
> > impact is significant since most of the time is used up by latency, the
CORS
> > preflight request essentially double the time it takes to do anything
>
> Yeah, also, it should not be up to us how people design their HTTP
> APIs. Limiting HTTP in that way because it is hard to make CORS scale
> seems bad.
>
>

+1. Forcing developers to change their APIs would be bad form at this
stage. Not to mention just plain silly.

Optimizing with an OPTIONS * preflight is a good option but won't be as
broadly available to developers as a response header. Perhaps another
approach would be to allow a resource to declare a CORS policy only for
subordinate resources, rather than the entire origin.

For instance, an OPTIONS sent to http://example.org/api/ can return CORS
headers that cover every URL prefixed with http://example.org/api/. That
would logically extend all the way up to OPTIONS * in order to set a policy
that covers the entire origin.

- James

> I think we've been too conservative when introducing CORS. It's
> effectively protecting content behind a firewall, but we added all
> these additional opt in mechanism beyond protecting content behind a
> firewall due to unease about the potential risks. Figuring out to what
> extent that actually serves a purpose would be good.
>
> If declaring this policy through a header is not acceptable, we could
> attempt a double preflight fetch for the very first CORS fetch against
> an origin (that requires a preflight). Try OPTIONS * before OPTIONS
> /actual-request. If that handshake succeeds (details TBD) no more
> preflights necessary for the entire origin.
>
>
> --
> https://annevankesteren.nl/
>


Re: CORS performance

2015-02-19 Thread Brian Smith
Dale Harvey  wrote:
>> I believe none of these require preflight unless a mistake is being
>> made (probably setting Content-Type on GET requests).
>
> http://www.w3.org/TR/cors/#preflight-result-cache-0
>
> If the cache is against the url, and we are sending requests to different
> urls, wont requests to different urls always trigger a preflight?

In general, if your GET requests don't set custom headers, preflight
isn't necessary, because CORS has an optimization for GET (and POST)
that avoids preflight for exactly cases like yours.

>> Also, regardless, you can use the CouchDB bulk document API to fetch
>> all these documents in one request, instead of 70,000 requests.
>
> CouchDB has no bulk document fetch api

Sorry. I was reading
http://docs.couchdb.org/en/latest/api/database/bulk-api.html#db-bulk-docs
and assumed it had been implemented already. But I see that maybe you
are trying to do something slightly different anyway with PouchDB.
Regardless, no preflight should be necessary for this and so Anne's
proposal won't help with it.

>> I agree that things can be improved here. I think the solution may be
>> better developer tools. In particular, devtools should tell you
>> exactly why a request triggered preflight.
>
> Whats wrong with 'This origin is part of the public internet and doesnt need
> any complications or restrictions due to CORS' ie Anne proposal?

I didn't say anything was wrong with Anne's proposal. What I said is
that it would help to have somebody present a concrete example of
where it would be useful.

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Dale Harvey
> If the cache is against the url, and we are sending requests to different
urls, wont
> requests to different urls always trigger a preflight?

I just realised my mistake, GETs without custom headers shouldn't need to
trigger preflight requests, sorry

On 19 February 2015 at 13:31, Dale Harvey  wrote:

> Will take a look at the content-type on GET requests, thanks
>
> > I believe none of these require preflight unless a mistake is being
> > made (probably setting Content-Type on GET requests).
>
> http://www.w3.org/TR/cors/#preflight-result-cache-0
>
> If the cache is against the url, and we are sending requests to different
> urls, wont requests to different urls always trigger a preflight?
>
> > Also, regardless, you can use the CouchDB bulk document API to fetch
> > all these documents in one request, instead of 70,000 requests.
>
> CouchDB has no bulk document fetch api, it has all_docs but that isnt
> appropriate for this case, there is a talk about introducing it
> https://issues.apache.org/jira/browse/COUCHDB-2310, however its going to
> take a while (I would personally rather we replace it with a streaming api)
>
> > I agree that things can be improved here. I think the solution may be
> > better developer tools. In particular, devtools should tell you
> > exactly why a request triggered preflight.
>
> Whats wrong with 'This origin is part of the public internet and doesnt
> need any complications or restrictions due to CORS' ie Anne proposal?
>
>
> On 19 February 2015 at 13:21, Brian Smith  wrote:
>
>> On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
>> >> so presumably it is OK to set the Content-Type to text/plain
>> >
>> > Thats not ok, but may explain my confusion, is Content-Type considered a
>> > Custom Header that will always trigger a preflight?
>>
>> To be clear, my comment was about POST requests to the bulk document
>> API, not about other requests.
>>
>> I ran your demo and observed the network traffic using Wireshark.
>> Indeed, OPTIONS requests are being sent for every GET. But, that is
>> because you are setting the Content-Type header field on your GET
>> requests. Since GET requests don't have a request body, you shouldn't
>> set the Content-Type header field on them. And, if you do, then
>> browsers will treat it as a custom header field. That is what forces
>> the preflight for those requests.
>>
>> Compare the network traffic for these two scripts:
>>
>>   <script>
>> xhr=new XMLHttpRequest();
>> xhr.open("GET",
>> "
>> http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
>> ",
>> true);
>> xhr.setRequestHeader("Accept","application/json");
>> xhr.setRequestHeader("Content-Type","application/json");
>> xhr.send();
>>   </script>
>>
>>   <script>
>> xhr=new XMLHttpRequest();
>> xhr.open("GET",
>> "
>> http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
>> ",
>> true);
>> xhr.setRequestHeader("Accept","application/json");
>> xhr.send();
>>   </script>
>>
>> They are the same, except the second one doesn't set the Content-Type
>> header, and thus it doesn't cause the preflight to be sent.
>>
>> > if so then none of the
>> > caching will apply, CouchDB requires sending the appropriate
>> content-type
>>
>> CouchDB may require sending "Accept: application/json", but that isn't
>> considered a custom header field, so it doesn't trigger preflight.
>>
>> > The /_changes requests are only part of the problem, once we receive the
>> > changes information we then have to request information about individual
>> > documents which all have a unique id
>> >
>> >   GET /registry/mypackagename
>> >
>> > We do one of those per document (70,000 npm docs), all trigger a
>> preflight
>> > (whether or not custom headers are involved)
>>
>> I believe none of these require preflight unless a mistake is being
>> made (probably setting Content-Type on GET requests).
>>
>> Also, regardless, you can use the CouchDB bulk document API to fetch
>> all these documents in one request, instead of 70,000 requests.
>>
>> > Also performance details aside every week somebody has a library or
>> proxy
>> > that sends some custom header or they just missed a step when
>> configuring
>> > CORS, its a constant source of confusion for our users. We try to get
>> around
>> > it by providing helper scripts but Anne's proposal mirroring flashes
>> cross
>> > domain.xml sounds vastly superior to the current implementation from the
>> > developers perspective.
>>
>> I agree that things can be improved here. I think the solution may be
>> better developer tools. In particular, devtools should tell you
>> exactly why a request triggered preflight.

Re: CORS performance

2015-02-19 Thread Dale Harvey
Will take a look at the content-type on GET requests, thanks

> I believe none of these require preflight unless a mistake is being
> made (probably setting Content-Type on GET requests).

http://www.w3.org/TR/cors/#preflight-result-cache-0

If the cache is against the URL, and we are sending requests to different
URLs, won't requests to different URLs always trigger a preflight?

> Also, regardless, you can use the CouchDB bulk document API to fetch
> all these documents in one request, instead of 70,000 requests.

CouchDB has no bulk document fetch API, it has all_docs but that isn't
appropriate for this case. There is talk about introducing one
(https://issues.apache.org/jira/browse/COUCHDB-2310), however it's going to
take a while (I would personally rather we replace it with a streaming API)

> I agree that things can be improved here. I think the solution may be
> better developer tools. In particular, devtools should tell you
> exactly why a request triggered preflight.

What's wrong with 'This origin is part of the public internet and doesn't
need any complications or restrictions due to CORS', i.e. Anne's proposal?


On 19 February 2015 at 13:21, Brian Smith  wrote:

> On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
> >> so presumably it is OK to set the Content-Type to text/plain
> >
> > Thats not ok, but may explain my confusion, is Content-Type considered a
> > Custom Header that will always trigger a preflight?
>
> To be clear, my comment was about POST requests to the bulk document
> API, not about other requests.
>
> I ran your demo and observed the network traffic using Wireshark.
> Indeed, OPTIONS requests are being sent for every GET. But, that is
> because you are setting the Content-Type header field on your GET
> requests. Since GET requests don't have a request body, you shouldn't
> set the Content-Type header field on them. And, if you do, then
> browsers will treat it as a custom header field. That is what forces
> the preflight for those requests.
>
> Compare the network traffic for these two scripts:
>
>   <script>
> xhr=new XMLHttpRequest();
> xhr.open("GET",
> "
> http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
> ",
> true);
> xhr.setRequestHeader("Accept","application/json");
> xhr.setRequestHeader("Content-Type","application/json");
> xhr.send();
>   </script>
>
>   <script>
> xhr=new XMLHttpRequest();
> xhr.open("GET",
> "
> http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
> ",
> true);
> xhr.setRequestHeader("Accept","application/json");
> xhr.send();
>   </script>
>
> They are the same, except the second one doesn't set the Content-Type
> header, and thus it doesn't cause the preflight to be sent.
>
> > if so then none of the
> > caching will apply, CouchDB requires sending the appropriate content-type
>
> CouchDB may require sending "Accept: application/json", but that isn't
> considered a custom header field, so it doesn't trigger preflight.
>
> > The /_changes requests are only part of the problem, once we receive the
> > changes information we then have to request information about individual
> > documents which all have a unique id
> >
> >   GET /registry/mypackagename
> >
> > We do one of those per document (70,000 npm docs), all trigger a
> preflight
> > (whether or not custom headers are involved)
>
> I believe none of these require preflight unless a mistake is being
> made (probably setting Content-Type on GET requests).
>
> Also, regardless, you can use the CouchDB bulk document API to fetch
> all these documents in one request, instead of 70,000 requests.
>
> > Also performance details aside every week somebody has a library or proxy
> > that sends some custom header or they just missed a step when configuring
> > CORS, its a constant source of confusion for our users. We try to get
> around
> > it by providing helper scripts but Anne's proposal mirroring flashes
> cross
> > domain.xml sounds vastly superior to the current implementation from the
> > developers perspective.
>
> I agree that things can be improved here. I think the solution may be
> better developer tools. In particular, devtools should tell you
> exactly why a request triggered preflight.
>
> Cheers,
> Brian
>


CORS performance proposal

2015-02-19 Thread Anne van Kesteren
When the user agent is about to make its first preflight to an origin
(timeout up to the user agent), it first makes a preflight that looks
like:

  OPTIONS *
  Access-Control-Request-Origin-Wide-Cache: [origin]
  Access-Control-Request-Method: *
  Access-Control-Request-Headers: *

If the response is

  2xx XX
  Access-Control-Allow-Origin-Wide-Cache: [origin]
  Access-Control-Allow-Methods: *
  Access-Control-Allow-Headers: *
  Access-Control-Max-Age: [max-age]

then no more preflights will be made for the duration of [max-age] (or
shortened per user agent preference). If the response includes

  Access-Control-Allow-Credentials: true

the cache scope is increased to requests that include credentials.

I think this has a reasonable tradeoff between security and opening up
all the power of the HTTP APIs on the server without the performance
hit. It still makes the developer very conscious about the various
features involved.

The cache would be on a per requesting origin basis as per the headers
above. The Origin and Access-Control-Allow-Origin would not take part
in this exchange, to make it very clear what this is about.

(This does not affect Access-Control-Expose-Headers or any of the
other headers required as part of non-preflight responses.)
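The cache behaviour this implies can be sketched on the client side roughly as follows. This is illustrative only; the function names and cache shape are made up, and the real user-agent bookkeeping is unspecified here:

```javascript
// Sketch of the origin-wide preflight cache implied by the proposal.
// After a successful `OPTIONS *` handshake, later preflights to that
// origin are skipped until max-age expires (times in milliseconds).
function needsOriginWidePreflight(cache, origin, now, withCredentials) {
  const entry = cache.get(origin);
  if (!entry) return true;                                  // no handshake yet
  if (now >= entry.expires) return true;                    // max-age elapsed
  if (withCredentials && !entry.allowCredentials) return true; // scope too narrow
  return false;                                             // covered by grant
}

function recordOriginWideGrant(cache, origin, now, maxAgeSeconds, allowCredentials) {
  cache.set(origin, { expires: now + maxAgeSeconds * 1000, allowCredentials });
}
```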


-- 
https://annevankesteren.nl/



Re: Web Components F2F in April 2015

2015-02-19 Thread Arthur Barstow

[ -  the easily recognizable p-webapps subscribers ]

On 2/18/15 12:50 PM, Dimitri Glazkov wrote:
Following Art's suggestion [1], I propose a Web Components-specific 
F2F with with the primary goal of reaching consensus on the Shadow DOM 
contentious bits [2].


When: Friday, April 24, 2015
Where: Google San Francisco or Mountain View (to be confirmed)
What: a one-day meeting


Thanks for organizing this meeting Dimitri!

When will you be able to confirm the location? Regardless, I think we 
should consider the meeting as confirmed.


-Thanks, ArtB


Tentative agenda:

1) Go over the contentious bits, discuss pros/cons
2) Brainstorm/present ideas on changes to current spec
3) Decide on changes to current spec
4) If we have time left, review custom elements bits [3]

I stubbed out the basics in 
https://www.w3.org/wiki/Webapps/WebComponentsApril2015Meeting


:DG<

[1]: 
https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0407.html
[2]: 
https://github.com/w3c/webcomponents/wiki/Shadow-DOM:-Contentious-Bits

[3]: https://wiki.whatwg.org/wiki/Custom_Elements





Re: CORS performance

2015-02-19 Thread Brian Smith
On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
>> so presumably it is OK to set the Content-Type to text/plain
>
> Thats not ok, but may explain my confusion, is Content-Type considered a
> Custom Header that will always trigger a preflight?

To be clear, my comment was about POST requests to the bulk document
API, not about other requests.

I ran your demo and observed the network traffic using Wireshark.
Indeed, OPTIONS requests are being sent for every GET. But, that is
because you are setting the Content-Type header field on your GET
requests. Since GET requests don't have a request body, you shouldn't
set the Content-Type header field on them. And, if you do, then
browsers will treat it as a custom header field. That is what forces
the preflight for those requests.

Compare the network traffic for these two scripts:

  <script>
xhr=new XMLHttpRequest();
xhr.open("GET",
"http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
true);
xhr.setRequestHeader("Accept","application/json");
xhr.setRequestHeader("Content-Type","application/json");
xhr.send();
  </script>

  <script>
xhr=new XMLHttpRequest();
xhr.open("GET",
"http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
true);
xhr.setRequestHeader("Accept","application/json");
xhr.send();
  </script>

They are the same, except the second one doesn't set the Content-Type
header, and thus it doesn't cause the preflight to be sent.

> if so then none of the
> caching will apply, CouchDB requires sending the appropriate content-type

CouchDB may require sending "Accept: application/json", but that isn't
considered a custom header field, so it doesn't trigger preflight.
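For reference, the rule being described can be approximated by a small predicate. This is a simplified sketch of the CORS "simple request" check, not the full spec (which also constrains header values and a few other cases):

```javascript
// Approximate check for whether a cross-origin request triggers a
// preflight: GET/HEAD/POST with only "simple" headers avoid it, and
// Content-Type stays simple only for three form-style values.
const SIMPLE_CONTENT_TYPES = [
  "application/x-www-form-urlencoded", "multipart/form-data", "text/plain",
];

function needsPreflight(method, headers) {
  if (!["GET", "HEAD", "POST"].includes(method)) return true;
  for (const [name, value] of Object.entries(headers)) {
    const n = name.toLowerCase();
    if (n === "accept" || n === "accept-language" || n === "content-language") continue;
    if (n === "content-type" && SIMPLE_CONTENT_TYPES.includes(value)) continue;
    return true; // any other header counts as custom and forces preflight
  }
  return false;
}
```

Under this rule, the Accept header in Dale's demo is harmless, while the Content-Type header on the GET is what forces the OPTIONS round trip.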

> The /_changes requests are only part of the problem, once we receive the
> changes information we then have to request information about individual
> documents which all have a unique id
>
>   GET /registry/mypackagename
>
> We do one of those per document (70,000 npm docs), all trigger a preflight
> (whether or not custom headers are involved)

I believe none of these require preflight unless a mistake is being
made (probably setting Content-Type on GET requests).

Also, regardless, you can use the CouchDB bulk document API to fetch
all these documents in one request, instead of 70,000 requests.

> Also performance details aside every week somebody has a library or proxy
> that sends some custom header or they just missed a step when configuring
> CORS, its a constant source of confusion for our users. We try to get around
> it by providing helper scripts but Anne's proposal mirroring flashes cross
> domain.xml sounds vastly superior to the current implementation from the
> developers perspective.

I agree that things can be improved here. I think the solution may be
better developer tools. In particular, devtools should tell you
exactly why a request triggered preflight.

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Dale Harvey
> so presumably it is OK to set the Content-Type to text/plain

That's not ok, but it may explain my confusion: is Content-Type considered a
custom header that will always trigger a preflight? If so, then none of the
caching will apply; CouchDB requires sending the appropriate content-type

I tried setting up a little demo here; it will replicate the npm registry
for 5 seconds - http://paste.pouchdb.com/paste/q8n610/#output

You can see in the network logs various OPTIONS requests for
http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=311&limit=100&_nonce=UIZRQHrUG1Gjbm6S
etc etc

The /_changes requests are only part of the problem, once we receive the
changes information we then have to request information about individual
documents which all have a unique id

  GET /registry/mypackagename

We do one of those per document (70,000 npm docs), all trigger a preflight
(whether or not custom headers are involved)

We can and are doing a lot of things to try and improve performance / reduce
the number of HTTP requests, but in our particular case we're dealing with
10 years of established server protocols. There isn't 'a server'; there are
at least 10 server implementations across all platforms by various projects /
companies that all need / try to interoperate. We can't just make ad hoc
changes to the protocol to get around CORS limitations.

Also, performance details aside, every week somebody has a library or proxy
that sends some custom header, or they just missed a step when configuring
CORS; it's a constant source of confusion for our users. We try to get
around it by providing helper scripts, but Anne's proposal mirroring Flash's
crossdomain.xml sounds vastly superior to the current implementation from
the developers' perspective.

On 19 February 2015 at 12:05, Brian Smith  wrote:

> Dale Harvey  wrote:
> > The REST api pretty much by design means a unique url per request
>
> CouchDB has http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API,
> which allows you to fetch or edit and create multiple documents at
> once, with one HTTP request. CouchDB's documentation says you're
> supposed to POST a JSON document for editing, but the example doesn't
> set the Content-Type on the request so presumably it is OK to set the
> Content-Type to text/plain. This means that you'd have ONE request and
> ZERO preflights to edit N documents.
>
> > in this case a lot of the requests look like
> >
> >   GET origin/_change?since=0
> >   GET origin/_change?since=the last id
>
> A GET like this won't require preflight unless you set custom header
> fields on the request. Are you setting custom headers? If so, which
> ones and why? I looked at the CouchDB documentation and it doesn't
> mention any custom header fields. Thus, it seems to me like none of
> the GET requests should require preflight.
>
> Also, if your server is SPDY or HTTP/2, you should be able to
> configure it so that when the server receives a request "GET
> /whatever/123", it replies with the response for that request AND
> pushes the response for the not-even-yet-sent "OPTIONS /whatever/123"
> request. In that case, even if you don't use the preflight-less bulk
> document API and insist on using PUT, there's zero added latency from
> the preflight.
>
> Cheers,
> Brian
>


Re: CORS performance

2015-02-19 Thread Brian Smith
Dale Harvey  wrote:
> The REST api pretty much by design means a unique url per request

CouchDB has http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API,
which allows you to fetch or edit and create multiple documents at
once, with one HTTP request. CouchDB's documentation says you're
supposed to POST a JSON document for editing, but the example doesn't
set the Content-Type on the request so presumably it is OK to set the
Content-Type to text/plain. This means that you'd have ONE request and
ZERO preflights to edit N documents.
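A hedged sketch of what that single request could look like. The database path and documents below are made up for illustration; the point is that POST with a `text/plain` content type is a "simple" request, so no OPTIONS round trip precedes it:

```javascript
// Build fetch() options for a CouchDB-style _bulk_docs POST that
// avoids preflight: POST is a simple method, and text/plain is a
// simple content type.
function bulkDocsInit(docs) {
  return {
    method: "POST",
    headers: { "Content-Type": "text/plain" }, // simple: no preflight
    body: JSON.stringify({ docs }),
  };
}

// Hypothetical usage (URL is illustrative):
// fetch("http://couch.example/registry/_bulk_docs", bulkDocsInit(docs));
```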

> in this case a lot of the requests look like
>
>   GET origin/_change?since=0
>   GET origin/_change?since=the last id

A GET like this won't require preflight unless you set custom header
fields on the request. Are you setting custom headers? If so, which
ones and why? I looked at the CouchDB documentation and it doesn't
mention any custom header fields. Thus, it seems to me like none of
the GET requests should require preflight.

Also, if your server is SPDY or HTTP/2, you should be able to
configure it so that when the server receives a request "GET
/whatever/123", it replies with the response for that request AND
pushes the response for the not-even-yet-sent "OPTIONS /whatever/123"
request. In that case, even if you don't use the preflight-less bulk
document API and insist on using PUT, there's zero added latency from
the preflight.

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Dale Harvey
> What is it about PouchDB and CouchDB that causes them to require
> preflight for all of these requests in the first place? What is
> difficult about changing them to not require preflight for all of
> these requests?

The REST API pretty much by design means a unique URL per request; in this
case a lot of the requests look like

  GET origin/_change?since=0
  GET origin/_change?since=the last id

It's unlikely to change since it's 10 years old and standardized across
several different products; it works well in most cases aside from just
being kinda slow when you try to use it over CORS.

> If declaring this policy through a header is not acceptable, we could
> attempt a double preflight fetch for the very first CORS fetch against
> an origin (that requires a preflight). Try OPTIONS * before OPTIONS
> /actual-request. If that handshake succeeds (details TBD) no more
> preflights necessary for the entire origin.

This is very much what I expected when I first used CORS, similar to the
Flash cross-domain.xml file: I would just like to mark an origin I control
as being accessible from any host. As the only thing CORS protects is data
behind a firewall, I think it should be a simple mechanism to say "this
domain is not behind a firewall, have at it"


On 19 February 2015 at 11:30, Brian Smith  wrote:

> Dale Harvey  wrote:
> > With Couch / PouchDB we are working with an existing REST API wherein
> every
> > request is to a different url (which is unlikely to change), the
> performance
> > impact is significant since most of the time is used up by latency, the
> CORS
> > preflight request essentially double the time it takes to do anything
>
> I understand that currently the cost of this API is 2*N and you want
> to reduce the 2 to 1 instead of reducing the N, even though N is
> usually much larger than 2.
>
> What is it about PouchDB and CouchDB that causes them to require
> preflight for all of these requests in the first place? What is
> difficult about changing them to not require preflight for all of
> these requests?
>
> Cheers,
> Brian
>


Re: CORS performance

2015-02-19 Thread Anne van Kesteren
On Thu, Feb 19, 2015 at 12:17 PM, Dale Harvey  wrote:
> With Couch / PouchDB we are working with an existing REST API wherein every
> request is to a different url (which is unlikely to change), the performance
> impact is significant since most of the time is used up by latency, the CORS
> preflight request essentially double the time it takes to do anything

Yeah, also, it should not be up to us how people design their HTTP
APIs. Limiting HTTP in that way because it is hard to make CORS scale
seems bad.


I think we've been too conservative when introducing CORS. It's
effectively protecting content behind a firewall, but we added all
these additional opt-in mechanisms beyond protecting content behind a
firewall due to unease about the potential risks. Figuring out to what
extent that actually serves a purpose would be good.

If declaring this policy through a header is not acceptable, we could
attempt a double preflight fetch for the very first CORS fetch against
an origin (that requires a preflight). Try OPTIONS * before OPTIONS
/actual-request. If that handshake succeeds (details TBD) no more
preflights necessary for the entire origin.
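Illustration only (the handshake details are TBD, as stated): the request sequences this double-preflight idea would produce for the first preflight-requiring fetch against an origin can be enumerated like so, where the function and labels are made up for clarity:

```javascript
// Request sequence for the first preflighted CORS fetch under the
// double-preflight idea: try OPTIONS * first; only fall back to the
// normal per-URL preflight if the origin-wide handshake fails.
function requestsForFirstFetch(handshakeSucceeds, url) {
  const seq = ["OPTIONS *"];
  if (!handshakeSucceeds) seq.push("OPTIONS " + url); // normal preflight
  seq.push("GET " + url);                             // the actual request
  return seq;
}
```

So the worst case costs one extra round trip once per origin, while the best case eliminates every subsequent preflight.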


-- 
https://annevankesteren.nl/



Re: CORS performance

2015-02-19 Thread Brian Smith
Dale Harvey  wrote:
> With Couch / PouchDB we are working with an existing REST API wherein every
> request is to a different url (which is unlikely to change), the performance
> impact is significant since most of the time is used up by latency, the CORS
> preflight request essentially double the time it takes to do anything

I understand that currently the cost of this API is 2*N and you want
to reduce the 2 to 1 instead of reducing the N, even though N is
usually much larger than 2.

What is it about PouchDB and CouchDB that causes them to require
preflight for all of these requests in the first place? What is
difficult about changing them to not require preflight for all of
these requests?

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Dale Harvey
With Couch / PouchDB we are working with an existing REST API wherein every
request is to a different url (which is unlikely to change), the
performance impact is significant since most of the time is used up by
latency, the CORS preflight request essentially doubles the time it takes to
do anything

On 19 February 2015 at 10:50, Brian Smith  wrote:

> On Thu, Feb 19, 2015 at 2:45 AM, Anne van Kesteren 
> wrote:
> > On Thu, Feb 19, 2015 at 11:43 AM, Brian Smith 
> wrote:
> >> 1. Preflight is only necessary for a subset of CORS requests.
> >> Preflight is never done for GET or HEAD, and you can avoid preflight
> >> for POST requests by making your API accept data in a format that
> >> matches what HTML forms post. Therefore, we're only talking about PUT,
> >> DELETE, less common forms of POST, and other less commonly-used
> >> methods.
> >
> > Euh, if you completely ignore headers, sure. But most HTTP APIs will
> > use some amount of custom headers, meaning *all* methods require a
> > preflight.
>
> Is it really true that most HTTP APIs will use some amount of custom
> headers? And is it necessary for these APIs to be designed such
> that the custom headers are required?
>
> Cheers,
> Brian
>


Re: CORS performance

2015-02-19 Thread Brian Smith
On Thu, Feb 19, 2015 at 2:45 AM, Anne van Kesteren  wrote:
> On Thu, Feb 19, 2015 at 11:43 AM, Brian Smith  wrote:
>> 1. Preflight is only necessary for a subset of CORS requests.
>> Preflight is never done for GET or HEAD, and you can avoid preflight
>> for POST requests by making your API accept data in a format that
>> matches what HTML forms post. Therefore, we're only talking about PUT,
>> DELETE, less common forms of POST, and other less commonly-used
>> methods.
>
> Euh, if you completely ignore headers, sure. But most HTTP APIs will
> use some amount of custom headers, meaning *all* methods require a
> preflight.

Is it really true that most HTTP APIs will use some amount of custom
headers? And is it necessary for these APIs to be designed such
that the custom headers are required?

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Anne van Kesteren
On Thu, Feb 19, 2015 at 11:45 AM, Anne van Kesteren  wrote:
> On Thu, Feb 19, 2015 at 11:43 AM, Brian Smith  wrote:
>> 1. Preflight is only necessary for a subset of CORS requests.
>> Preflight is never done for GET or HEAD, and you can avoid preflight
>> for POST requests by making your API accept data in a format that
>> matches what HTML forms post. Therefore, we're only talking about PUT,
>> DELETE, less common forms of POST, and other less commonly-used
>> methods.
>
> Euh, if you completely ignore headers, sure. But most HTTP APIs will
> use some amount of custom headers, meaning *all* methods require a
> preflight.

And you seem to forget that an HTTP API typically covers a large set
of URLs (e.g. if you use a remote database such as PouchDB). With CORS
that requires at least one preflight per URL.


-- 
https://annevankesteren.nl/



Re: Better focus support for Shadow DOM

2015-02-19 Thread chaals
Hi,

I noted the bugs, this thread, and the document in the HTML accessibility wiki where I am trying to collate stuff about focus navigation and similar keyboard access issues (in what might yet be a vain attempt to really improve the situation, which is overall pretty dismal still): https://www.w3.org/WAI/PF/HTML/wiki/Keyboard

19.02.2015, 04:56, "Takayoshi Kochi (河内 隆仁)" :
> [Shadow]: Shadow host with tabindex=-1, all descendent tree should be ignored for tab navigation
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=27965

I am not sure if this really is a problem (which might just be my brain going slowly). See my comment for more details, but it isn't clear to me that this needs to be fixed. On further reflection, I wonder if the point is where the author has explicitly marked tabindex="-1" - and how this relates to compound ARIA controls...

cheers

> Focus on shadow host should slide to its inner focusable node
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=28054
>
> On Wed, Jan 14, 2015 at 2:27 PM, Takayoshi Kochi (河内 隆仁)  wrote:
>> Hi,
>>
>> For shadow DOMs which has multiple focusable fields under the host,
>> the current behavior of tab navigation order gets somewhat weird
>> when you want to specify tabindex explicitly.
>>
>> This is the doc to introduce a new attribute "delegatesFocus" to
>> resolve the issue.
>> https://docs.google.com/document/d/1k93Ez6yNSyWQDtGjdJJqTBPmljk9l2WS3JTe5OHHB50/edit?usp=sharing
>>
>> Any comments are welcome!
>> --
>> Takayoshi Kochi

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com

Re: CORS performance

2015-02-19 Thread Anne van Kesteren
On Thu, Feb 19, 2015 at 11:43 AM, Brian Smith  wrote:
> 1. Preflight is only necessary for a subset of CORS requests.
> Preflight is never done for GET or HEAD, and you can avoid preflight
> for POST requests by making your API accept data in a format that
> matches what HTML forms post. Therefore, we're only talking about PUT,
> DELETE, less common forms of POST, and other less commonly-used
> methods.

Euh, if you completely ignore headers, sure. But most HTTP APIs will
use some amount of custom headers, meaning *all* methods require a
preflight.


-- 
https://annevankesteren.nl/



Re: CORS performance

2015-02-19 Thread Brian Smith
Anne van Kesteren  wrote:
> Concerns raised by Monsur
> https://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0260.html
> and others before him are still valid.
>
> When you have an HTTP API on another origin you effectively get a huge
> performance penalty. Even with caching of preflights, as each fetch is
> likely to go to a distinct URL.

Definitely there is a huge performance penalty per-request when the
preflight isn't cached.

But:

1. Preflight is only necessary for a subset of CORS requests.
Preflight is never done for GET or HEAD, and you can avoid preflight
for POST requests by making your API accept data in a format that
matches what HTML forms post. Therefore, we're only talking about PUT,
DELETE, less common forms of POST, and other less commonly-used
methods.
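
Point 1 follows from the CORS "simple request" rules: a cross-origin request skips preflight only if its method and headers are all CORS-safelisted. A rough classifier (simplified, not spec-complete) makes the boundary concrete:

```python
# Which cross-origin requests trigger a preflight? Simplified version of
# the CORS "simple request" check; real user agents apply additional
# restrictions (e.g. on header values), so treat this as a sketch.

SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",  # what HTML forms post
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, headers):
    if method.upper() not in SAFE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFE_HEADERS:
            return True  # any custom header forces a preflight
        if name.lower() == "content-type":
            if value.split(";")[0].strip() not in SAFE_CONTENT_TYPES:
                return True  # e.g. application/json is not safelisted
    return False
```

This is why a `POST` with `Content-Type: application/x-www-form-urlencoded` avoids preflight while the same `POST` with `application/json`, or any request with a custom header, does not.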

2. It seems very wasteful to design an API that requires multiple
PUT/DELETE/POST requests to complete a transaction (I'm using the word
"transaction" loosely here; I don't mean ACID, necessarily). A lot of
people think that REST means that you should only change a resource by
PUT/DELETE/POSTing to its URL, however that's a misunderstanding of
what REST is really about. Regardless of CORS, it is a good idea to
design APIs in such a way that every modification transaction can be
done in one or as few requests as possible, and this can be done in a way
that is in line with RESTful design. This type of design is
more-or-less required when you need ACID transactions anyway.
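
A minimal sketch of the kind of batch resource point 2 argues for, with invented names and shapes (CouchDB's own `_bulk_docs` endpoint is in this spirit): one POST body carries several operations, so at most one preflight covers the whole transaction instead of one per touched resource.

```python
def apply_batch(store, operations):
    """Apply a list of {"op": ..., "id": ..., "doc": ...} operations to a
    key-value store in one request-handler call. Hypothetical wire format;
    this is an illustration of the design, not any real API."""
    for op in operations:
        if op["op"] == "put":
            store[op["id"]] = op["doc"]
        elif op["op"] == "delete":
            store.pop(op["id"], None)
        else:
            raise ValueError("unknown op: %r" % op["op"])
    return store
```

Whether the handler is ACID is a separate question; the CORS win is simply that N resource modifications cost one cross-origin request.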

3. Often you can predict which resources need preflight. With HTTP/2
and SPDY, the server can use the server push mechanism to push
preflight responses to the client before the client even sends them.

Given #1, #2, and #3, I'm a little bit unsure how bad the performance
problem really is, and how bad it will be going forward. It would be
good to see a concrete example to get a better understanding of the
issue.

Cheers,
Brian



Re: CORS performance

2015-02-19 Thread Mike West
On Tue, Feb 17, 2015 at 8:43 PM, Bjoern Hoehrmann  wrote:

> * Anne van Kesteren wrote:
> >On Tue, Feb 17, 2015 at 8:18 PM, Bjoern Hoehrmann 
> wrote:
> >> Individual resources should not be able to declare policy for the whole
> >> server, ...
> >
> >With HSTS we gave up on that.
>
> Well, HSTS essentially removes communication options, while the intent
> of CORS is to add communication options. I don't think you can compare
> them like that. HSTS is more like a redirect and misconfiguration may
> result in denial of service, while CORS misconfiguration can have more
> far-reaching consequences like exposing user information.


I share this concern. Note that CSP pinning as we're discussing it is also
purely negative in nature. It can block you from loading resources you'd
otherwise have access to, but can't force your host into exposing resources
you otherwise wouldn't.

Brad's .well-known suggestion is interesting. I'm worried about the latency
impacts, but it's probably worth exploring what it would take to add this
kind of thing to the Manifest spec (or some same-origin-limited version
thereof).

-mike

--
Mike West , @mikewest

Google Germany GmbH, Dienerstrasse 12, 80331 München,
Germany, Registergericht und -nummer: Hamburg, HRB 86891, Sitz der
Gesellschaft: Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth
Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)