Fwd: Fingerprinting Guidance for Web Specification Authors

2015-12-01 Thread Arthur Barstow
Editors, All - please see "Fingerprinting Guidance for Web Specification
Authors" and reflect it in your spec accordingly.


 Forwarded Message 
Subject: Fingerprinting Guidance for Web Specification Authors
Resent-Date: Thu, 26 Nov 2015 09:15:07 +
Resent-From: public-priv...@w3.org
Date: Thu, 26 Nov 2015 09:14:30 +
From: Christine Runnegar
To: public-privacy (W3C mailing list)



PING colleagues,

The Privacy Interest Group (PING) has published a Draft Group Note of 
Fingerprinting Guidance for Web Specification Authors.

You can find it here:

http://www.w3.org/TR/2015/NOTE-fingerprinting-guidance-20151124/

We will discuss next steps on our call next Thursday (3 December 2015 at
17:00 UTC).

Thanks to Nick.

Christine and Tara






Re: Fingerprinting Guidance for Web Specification Authors

2015-12-01 Thread Jeffrey Walton
On Tue, Dec 1, 2015 at 11:52 AM, Arthur Barstow  wrote:
> Editors, All - please see "Fingerprinting Guidance for Web Specification
> Authors" 
> and reflect it in your spec, accordingly.

Tracking can be a tricky problem because it occurs at many layers of a
typical stack; it starts when the first TCP SYN is sent. If browsers
use an underlying transport like TCP/IP, should something be said
about it (i.e., is it in scope or out of scope)?

ETags were not mentioned in the document. In the context of the PING
interest group, I would expect to see a treatment of them.
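
For concreteness, here is a minimal sketch (TypeScript on a hypothetical
Node.js server) of how an ETag can double as a persistent identifier: the
server mints a unique tag, and the browser's cache echoes it back in
If-None-Match on every revalidation.

    // Sketch of ETag-based tracking (hypothetical beacon endpoint).
    import { createServer } from "node:http";
    import { randomUUID } from "node:crypto";

    createServer((req, res) => {
      const echoed = req.headers["if-none-match"];  // returning visitor's ID
      const id = echoed ?? `"${randomUUID()}"`;     // mint one for newcomers
      console.log(echoed ? `seen before: ${id}` : `new visitor: ${id}`);
      res.setHeader("ETag", id);
      res.setHeader("Cache-Control", "max-age=0, must-revalidate");
      res.end("beacon");
    }).listen(8080);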

Jeff



Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-01 Thread Brad Hill
> As far as I can see, "mixed content" contains the word "content", which is
> supposed to designate something that can be included in a web page and
> can therefore be dangerous.

"Mixed Content" (and "mixed content blocking") is a term of art that has
been in use for many years in the browser community.  As such, you are
correct that it is a bit inadequate in that its coinage predates the
widespread use of AJAX-type patterns, but we felt that it was better to
continue to use and refine a well-known term than to introduce a new term.
Apologies if this has created any confusion.

More than just "content", the question boils down to "what does seeing the
lock promise the user?"  Browsers interpret the lock as a promise to the
user that the web page / application they are currently interacting with is
safe according to the threat model of TLS.  That is to say, it is protected
end-to-end on the network path against attempts to impersonate, eavesdrop
or modify traffic.  In a modern browser, this includes not just fetches of
images and script done declaratively in HTML, but any kind of potentially
insecure network communication.
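
For concreteness, a sketch (placeholder URLs) of how that plays out for a
WebSocket opened from a secure context: browsers reject the plaintext
handshake, typically by throwing a SecurityError from the constructor.

    // Sketch: run from a page served over https:// (URLs are placeholders).
    try {
      const insecure = new WebSocket("ws://example.com/socket");
      insecure.onopen = () => console.log("reachable only from http:// pages");
    } catch (err) {
      console.error("blocked as mixed content:", err);
    }
    // The TLS-protected equivalent is allowed:
    const secure = new WebSocket("wss://example.com/socket");
    secure.onopen = () => console.log("open");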

In general, we've seen this kind of argument before for building secure
protocols at layer 7 on top of insecure transports - notably Netflix's
"Message Security Layer".  Thus far, no browser vendors have been convinced
that these are good ideas, that there is a way to safely allow their use in
TLS-protected contexts, or that building these systems with existing TLS
primitives is not an adequate solution.

Specifically, I don't think you'll ever find support for treating Tor
traffic that is subject to interception and modification after it leaves an
exit node as equivalent to HTTPS, especially since we know there are active
attacks being mounted against this traffic on a regular basis.  (This is
why I suggested .onion sites as potentially secure contexts, which do not
suffer from the same exposure outside of the Tor network.)

-Brad


Re: WS/Service Workers, TLS and future apps - [was Re: HTTP is just fine]

2015-12-01 Thread Aymeric Vitte


On 01/12/2015 05:31, Brad Hill wrote:
> Let's keep this discussion civil, please.  

Maybe some of the wording below was a little harsh; apologies for that. The
Logjam attack is difficult to swallow: something that is supposed to
protect forward secrecy can quietly do the very contrary without the keys
even being compromised. It is also difficult to understand why TLS did not
implement a mechanism to protect the DH client public key.

> 
> The reasons behind blocking of non-secure WebSocket connections from
> secure contexts are laid out in the following document:
> 
> http://www.w3.org/TR/mixed-content/
> 
> A plaintext ws:// connection does not meet the requirements of
> authentication, encryption and integrity, so far as the user agent is
> able to tell, so it cannot allow it.  

The spec just says to align the behavior of ws with xhr, fetch and
eventsource, without giving more reasons, unless I missed them.

Let's concentrate on ws here.

As far as I can see, "mixed content" contains the word "content", which is
supposed to designate something that can be included in a web page and
can therefore be dangerous.

WS cannot include anything in a page by itself; it is designed to talk
to external entities, for purposes other than fetching resources (images,
JS, etc.) from web servers that are logically tied to a domain. For that
case you can use xhr or other fetching means instead.

Therefore it is logical to envision that those external entities used
with WS are not necessarily web servers and might not have valid
certificates.

WS cannot hurt anything unless the application decides to insert the
results into the page, and that problem is not specific to WS: the
application is loaded via https, so it is supposed to be secure, but if it
is doing the wrong things nothing can save you, not even wss.
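
A sketch (placeholder URL) of what that means: the unsafe part is the
application injecting the data, which wss does nothing to prevent.

    // The danger is what the application does with the data, not the
    // transport; this injection is equally unsafe over ws or wss.
    const ws = new WebSocket("wss://peer.example/feed");
    ws.onmessage = (ev) => {
      document.body.innerHTML = String(ev.data);  // unsafe either way
    };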

Unlike the usual fetching means, WS cannot really be abused for things
like scanning URLs; it is a specific protocol that nothing else speaks.

As a result of the current policy, if we want to establish WS with
entities that cannot have a valid certificate, we must load the code via
http, which is obviously completely insecure.

So forbidding WS within https just puts users at risk and prevents any
current or future use of ws with entities that cannot have a valid
certificate, reducing the interest and potential of ws to something very
small.


> 
> If there is a plausible mechanism by which browsers could distinguish
> external communications which meet the necessary security criteria using
> protocols other than TLS or authentication other than from the Web PKI,
> there is a reasonable case to be made that such could be considered as
> potentially secure origins and URLs.  (as has been done to some extent
> for WebRTC, as you have already noted)
> 

To some extent, yes... maybe other solutions could be studied, for example
something like Let's Encrypt used to automatically obtain temporary valid
certificates (which, if feasible, might eliminate the main topic of this
discussion).


> If you want to continue this discussion here, please:
> 
> 1) State your use cases clearly for those on this list who do not
> already know them.  You want to "use the Tor protocol" over websockets? 

Note: I am not affiliated with the Tor project in any way

Yes, that's already a reality with projects such as Peersm and Flashproxy.

Peersm puts the onion-proxy function inside the browser, which establishes
Tor circuits with Tor nodes using WS.

Flashproxy connects a censored Tor user to a Tor node and relays the Tor
protocol between the two using WS.
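
A sketch of the relaying part (assuming the npm "ws" package; the bridge
URL is hypothetical, since a real flashproxy learns it from a facilitator):

    // Raw Tor cells are shovelled, unmodified, between the two WebSockets.
    import WebSocket, { WebSocketServer } from "ws";

    const BRIDGE_URL = "ws://tor-bridge.example:9901";  // hypothetical

    new WebSocketServer({ port: 9902 }).on("connection", (client) => {
      const bridge = new WebSocket(BRIDGE_URL);
      bridge.on("open", () => {
        client.on("message", (data) => bridge.send(data));  // client -> bridge
        bridge.on("message", (data) => client.send(data));  // bridge -> client
      });
      const closeBoth = () => { client.close(); bridge.close(); };
      client.on("close", closeBoth);
      bridge.on("close", closeBoth);
    });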

> To connect to what?  Why? 

The Tor protocol is just an example; think of it as just another secure
protocol.

If we go further, we can imagine what I described here:
https://mailman.stanford.edu/pipermail/liberationtech/2015-November/015680.html

Which mentions the "browsing paradox" too.

Not a usual idea, I believe (see the last paragraph: the point is not to
proxy only to URLs, as today, but to proxy to interfaces such as ws, xhr
and WebRTC), but this will happen one day (maybe I should patent this
too...).

Applications are numerous and not restricted to those examples.

> Why is it important to bootstrap an
> application like this over regular http(s) instead of, for example, as
> an extension or modified user-agent like TBB?

The Tor Browser is designed for secure browsing: solving the "browsing
paradox" and putting the onion proxy inside the browser, for example, is
not enough to ensure secure anonymous browsing, so Tor Browser's features
will still be required.

Some applications need modifications of the browser and others need
extensions, but plenty of applications could be installed directly from
web sites.

The obvious advantages are: no installation, works on any
device/platform, and no development/maintenance of the application for
different platforms, with the associated risks (software integrity) and
complexity for the users (not talking here about the problematic aspect of
code loading for a web

RE: Meeting date, January

2015-12-01 Thread Domenic Denicola
From: Chaals McCathie Nevile [mailto:cha...@yandex-team.ru]

> Yes, likewise for me. Anne, Olli specifically called you out as someone we
> should ask. I am assuming most people are OK either way, having heard no
> loud screaming except for Elliot...

I would be pretty heartbroken if we met without Elliott. So let's please do the 
25th.