Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
Hi Johann

On Thu, Mar 23, 2017 at 6:37 PM, Johann Hofmann  wrote:

> Hey,
>
> concerns about the viability of such a decentralized system aside, I
> still don't understand the advantage of blocking on an API level vs. simply
> showing the SafeBrowsing error page that we currently have in place.
>

I think I explained myself badly. It is not at the API level but at the
origin level, and it should be a P2P protocol where no single actor has
full control of that list. Entries on that list would be agreed upon at
network scale, so the peers agree that a domain is harmful.
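
To make this concrete, here is a minimal sketch of the acceptance rule each
peer could run; the names and the quorum threshold are invented, only to
illustrate the idea:

# Hypothetical sketch: a peer only accepts a "harmful" entry once a
# quorum of the peers it knows about have independently reported it.
from dataclasses import dataclass, field

QUORUM_RATIO = 0.66  # made-up threshold, just for illustration

@dataclass
class RevocationList:
    known_peers: set[str]  # ids of the peers we exchange reports with
    reports: dict[str, set[str]] = field(default_factory=dict)  # origin -> reporters
    harmful: set[str] = field(default_factory=set)

    def receive_report(self, origin: str, peer_id: str) -> None:
        if peer_id not in self.known_peers:
            return  # ignore reports from unknown peers
        self.reports.setdefault(origin, set()).add(peer_id)
        # The entry is agreed on at network scale, not by any single actor.
        if len(self.reports[origin]) >= QUORUM_RATIO * len(self.known_peers):
            self.harmful.add(origin)

rl = RevocationList(known_peers={"peer-a", "peer-b", "peer-c"})
rl.receive_report("https://evil.example", "peer-a")
rl.receive_report("https://evil.example", "peer-b")
assert "https://evil.example" in rl.harmful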


>
> Why would we continue to allow a user to visit a clearly harmful page?
>

Well, it depends. I remember a couple of occasions when Pirate Bay was
blocked. It was one of the mirrors that appeared after the original site
was closed. The SafeBrowsing UI was not informing me about the potential
damage I could suffer; it only talked about phishing. I wanted to continue
browsing.

In my opinion, the final decision should come from the user. That said, it
does not mean the UI could not discourage the user from continuing.


>
> You're saying that a user should be allowed to shoot themselves in the
> foot. How would that be different from the existing permission prompts?
> This sounds like it could be easily circumvented with some social
> engineering from the website.
>

Several times, when new functionality is proposed for the Web, spec
editors must remove features because of their potential for danger: the
legitimate usage is useful, but if abused it would be bad for the user. The
protocol would make it less risky to ship powerful APIs.


>
> Your proposal says "what happens from here is up to the browser". This
> doesn't really make a good impression on me as a browser developer, since
> it appears like important UI questions are just hand-waved away in your
> concept.
>

Then the explanation is badly worded. Sorry. What I meant is that once a
Web property is declared harmful, it is not part of the protocol to decide
what happens next. Firefox could decide to let the user decide, while
SuperSafeBrowser could decide not to.

The protocol can give recommendations about what to show, and I see this
as an opportunity to research the "important UI questions" you mention.


>
> Cheers,
>
> Johann


Thank you for your comments.



Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
Hi Jonathan

On Thu, Mar 23, 2017 at 9:09 AM, Jonathan Kingston  wrote:

> This seems a little like the idea WOT (https://www.mywot.com/) had:
> showing the user that they might be looking at a website that isn't
> considered great but isn't perhaps bad enough to be blocked.
>

Yes. I talk about it in
https://salvadelapuente.com/posts/2016/07/29/towards-the-web-of-trust/


>
> I agree that one web actor owning this power isn't a great place to be in
> and that in itself might be enough justification for at least looking
> further in this direction.
>
> If there was enough evidence to suggest we should revoke an advert
> provider's ability to track someone without breaking the web, that might
> be interesting.
> There is also some research (which I am not sure I can share publicly) to
> suggest we should limit API usage to avoid security flaws within browsers,
> based upon a strong correlation between lines of code, CVEs and the low
> number of sites that use those APIs. Perhaps there is a rationale to make
> websites earn enough trust for new features that have a high risk. For
> example, would Reddit's subresources really need WebVR or WebGL?
>

That's very interesting. Could you share those correlations with me
privately? Of course, the software reviews the protocol needs could be
supported by automatic tools. There is a value proposition here for browser
vendors, or for anyone who wants to adhere to the protocol.


> But we would also have to weigh the cost of building this against just
> making the APIs secure in the first place, and also understand that we
> would hurt web innovation with that too.
>

Ideally, the protocol would let us worry "not too much" about the misuse
of powerful APIs and thus boost Web innovation and experimentation.



Re: Revocation protocol idea

2017-03-31 Thread Eric Rescorla
On Fri, Mar 31, 2017 at 4:20 AM, Salvador de la Puente <sdelapue...@mozilla.com> wrote:

> Hi Eric
>
> On Wed, Mar 22, 2017 at 6:11 AM, Eric Rescorla  wrote:
>
>> There seem to be three basic ideas here:
>>
>> 0. Blacklisting at the level of API rather than site.
>> 1. Some centralized but democratic mechanism for building a list of
>> misbehaving sites.
>> 2. A mechanism for distributing the list of misbehaving sites to clients.
>>
>
> I think I did not explain it well. It would be a blacklist at the site
> level and it would not be centralised but distributed.
>

I had understood your point to be that it would be centrally organized but
democratically decided, and then distributed to users.


> The idea is that if a site is harmful for the user, all its permissions
> should be revoked and we should communicate to the user why the site is
> harmful. The list of misbehaving sites, the reasons why they are
> dangerous, and the evidence supporting the misbehaviour should be in a
> cross-browser distributed DB.
>

Yes, I understood that.


>> As Jonathan notes, Firefox already has a mechanism for doing #2, which is
>> to say "Safe Browsing". Now, Safe Browsing is binary (a site is either
>> good or bad) and specific APIs aren't disabled, but it's easy to see how
>> you would extend it to that if you actually wanted to provide that
>> function. I'm not sure that's actually very attractive: it's hard enough
>> for users to understand safe browsing. Safe Browsing is of course
>> centralized, but that comes with a number of advantages, and it's not
>> clear what the advantage of decentralized blacklist dissemination is,
>> given the networking realities.
>>
>> You posit a mechanism for forming the list of misbehaving sites, but
>> distributed reputation is really hard, and it's not clear that Google is
>> actually doing a bad job of running Safe Browsing. So, given that this is
>> a fairly major unsolved problem, I'd be reluctant to set out to build a
>> mechanism like this without a pretty clear design.
>>
>
> I've been looking at this paper on prediction markets based on BitCoin
> 
> for inspiration. It is true that distributed reputation is a hard problem,
> but I think we could adapt the concepts in that paper to this scenario if
> reviewers "bet" on site reputation and there is some incentive. Of course,
> further research is needed to mitigate the chance of a reviewer lying in
> its report and to prevent forms of Sybil attack, but there seem to be some
> solutions out there.
>

As I said, we have mechanisms that seem to me to be doing a fairly adequate
job at this general class of problem, with the major drawback being that
they are centralized. I'm not saying that that couldn't be addressed, but
it doesn't seem like such a solution is ready to hand. As such, this seems
like it is fairly far outside the realm of anything Firefox would do in
the short to medium term.

-Ekr



Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
Hi Eric

On Wed, Mar 22, 2017 at 6:11 AM, Eric Rescorla  wrote:

> There seem to be three basic ideas here:
>
> 0. Blacklisting at the level of API rather than site.
> 1. Some centralized but democratic mechanism for building a list of
> misbehaving sites.
> 2. A mechanism for distributing the list of misbehaving sites to clients.
>

I think I did not explain it well. It would be a blacklist at the site
level and it would not be centralised but distributed.
The idea is that if a site is harmful for the user, all its permissions
should be revoked and we should communicate to the user why the site is
harmful. The list of misbehaving sites, the reasons why they are
dangerous, and the evidence supporting the misbehaviour should be in a
cross-browser distributed DB.
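
As a rough sketch, an entry in that DB could look like this (all field
names are hypothetical, only meant to show the shape of the data):

# Hypothetical shape of one entry in the cross-browser distributed DB:
# the origin, why it is considered harmful, and the supporting evidence.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    reporter: str      # peer or reviewer who submitted the evidence
    api: str           # the API being abused, e.g. "geolocation"
    description: str   # human-readable account of the misbehaviour
    proof_url: str     # pointer to a trace, capture or code excerpt

@dataclass
class MisbehaviourEntry:
    origin: str                   # e.g. "https://evil.example"
    reasons: list[str]            # e.g. ["fingerprinting", "covert tracking"]
    evidence: list[Evidence] = field(default_factory=list)
    votes_for: int = 0            # peers voting "harmful"
    votes_against: int = 0

    def is_harmful(self) -> bool:
        # Each browser can apply its own policy; a simple majority shown here.
        return self.votes_for > self.votes_against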


>
> As Jonathan notes, Firefox already has a mechanism for doing #2, which is
> to say "Safe Browsing". Now, Safe Browsing is binary (a site is either
> good or bad) and specific APIs aren't disabled, but it's easy to see how
> you would extend it to that if you actually wanted to provide that
> function. I'm not sure that's actually very attractive: it's hard enough
> for users to understand safe browsing. Safe Browsing is of course
> centralized, but that comes with a number of advantages, and it's not
> clear what the advantage of decentralized blacklist dissemination is,
> given the networking realities.
>
> You posit a mechanism for forming the list of misbehaving sites, but
> distributed reputation is really hard, and it's not clear that Google is
> actually doing a bad job of running Safe Browsing. So, given that this is
> a fairly major unsolved problem, I'd be reluctant to set out to build a
> mechanism like this without a pretty clear design.
>

I've been looking at this paper on prediction markets based on BitCoin

for inspiration. It is true that distributed reputation is a hard problem,
but I think we could adapt the concepts in that paper to this scenario if
reviewers "bet" on site reputation and there is some incentive. Of course,
further research is needed to mitigate the chance of a reviewer lying in
its report and to prevent forms of Sybil attack, but there seem to be some
solutions out there.
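
To illustrate the incentive I have in mind, here is a toy model of the
settlement step; every name and number is invented, and it ignores Sybil
resistance entirely:

# Toy model of the "bet" (all values hypothetical): reviewers stake on a
# verdict about a site; once the network settles the outcome, losing
# stakes are redistributed to honest reviewers, creating the incentive.
def settle_bets(bets: list[tuple[str, bool, float]], outcome: bool) -> dict[str, float]:
    winners = [(who, stake) for who, verdict, stake in bets if verdict == outcome]
    if not winners:
        return {}
    losers_pot = sum(stake for _, verdict, stake in bets if verdict != outcome)
    winning_pot = sum(stake for _, stake in winners)
    # Each winner recovers their stake plus a proportional share of the rest.
    return {who: stake + losers_pot * (stake / winning_pot) for who, stake in winners}

# "alice" and "bob" bet the site is harmful; "mallory" lies to defend it.
print(settle_bets([("alice", True, 10.0), ("bob", True, 30.0),
                   ("mallory", False, 20.0)], outcome=True))
# {'alice': 15.0, 'bob': 45.0}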



Re: Revocation protocol idea

2017-03-23 Thread Johann Hofmann

Hey,

concerns about the viability of such a decentralized system aside, I
still don't understand the advantage of blocking on an API level vs. 
simply showing the SafeBrowsing error page that we currently have in place.


Why would we continue to allow a user to visit a clearly harmful page?

You're saying that a user should be allowed to shoot themselves in the
foot. How would that be different from the existing permission prompts?
This sounds like it could be easily circumvented with some social
engineering from the website.


Your proposal says "what happens from here is up to the browser". This
doesn't really make a good impression on me as a browser developer, since
it appears like important UI questions are just hand-waved away in your
concept.


Cheers,

Johann


Re: Revocation protocol idea

2017-03-22 Thread Jonathan Kingston
This seems a little like the idea WOT (https://www.mywot.com/) had:
showing the user that they might be looking at a website that isn't
considered great but isn't perhaps bad enough to be blocked.

I agree that one web actor owning this power isn't a great place to be in
and that in itself might be enough justification for at least looking
further in this direction.

If there was enough evidence to suggest we should revoke an advert
provider's ability to track someone without breaking the web, that might
be interesting.
There is also some research (which I am not sure I can share publicly) to
suggest we should limit API usage to avoid security flaws within browsers,
based upon a strong correlation between lines of code, CVEs and the low
number of sites that use those APIs. Perhaps there is a rationale to make
websites earn enough trust for new features that have a high risk. For
example, would Reddit's subresources really need WebVR or WebGL? (See the
sketch below.)
But we would also have to weigh the cost of building this against just
making the APIs secure in the first place, and also understand that we
would hurt web innovation with that too.
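
As a sketch of the trust-earning idea (entirely hypothetical thresholds
and names):

# Entirely hypothetical sketch: high-risk features are only exposed to
# origins that have earned enough trust; low-risk APIs stay available.
HIGH_RISK_APIS = {"webvr": 50, "webgl": 30}   # invented trust thresholds

def allowed_apis(origin_trust: int, requested: set[str]) -> set[str]:
    granted = set()
    for api in requested:
        threshold = HIGH_RISK_APIS.get(api, 0)  # unlisted APIs need no trust
        if origin_trust >= threshold:
            granted.add(api)
    return granted

# A barely-known subresource would not get WebVR or WebGL:
print(allowed_apis(origin_trust=10, requested={"fetch", "webgl", "webvr"}))
# {'fetch'}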


Re: Revocation protocol idea

2017-03-21 Thread Eric Rescorla
There seem to be three basic ideas here:

0. Blacklisting at the level of API rather than site.
1. Some centralized but democratic mechanism for building a list of
misbehaving sites.
2. A mechanism for distributing the list of misbehaving sites to clients.

As Jonathan notes, Firefox already has a mechanism for doing #2, which is
to say "Safe Browsing". Now, Safe Browsing is binary (a site is either
good or bad) and specific APIs aren't disabled, but it's easy to see how
you would extend it to that if you actually wanted to provide that
function. I'm not sure that's actually very attractive: it's hard enough
for users to understand safe browsing. Safe Browsing is of course
centralized, but that comes with a number of advantages, and it's not
clear what the advantage of decentralized blacklist dissemination is,
given the networking realities.
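
(To be concrete about "extend it to that": entries would carry a set of
disabled APIs instead of a single bit. A purely illustrative sketch, not a
design:)

# Illustrative only: extending a binary SafeBrowsing-style verdict to a
# per-API one. Instead of shipping {origin: bad}, the list would ship
# {origin: set of APIs to disable}; an empty set means fully blocked.
BLOCKLIST: dict[str, set[str]] = {
    "evil.example": set(),                         # binary case: block outright
    "sketchy.example": {"geolocation", "camera"},  # only these APIs disabled
}

def api_allowed(origin: str, api: str) -> bool:
    if origin not in BLOCKLIST:
        return True
    disabled = BLOCKLIST[origin]
    # An empty set means the site is simply bad (today's binary behaviour).
    return bool(disabled) and api not in disabled

print(api_allowed("sketchy.example", "geolocation"))  # False
print(api_allowed("sketchy.example", "fetch"))        # True
print(api_allowed("evil.example", "fetch"))           # False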

You posit a mechanism for forming the list of misbehaving sites, but
distributed reputation is really hard, and it's not clear that Google is
actually doing a bad job of running Safe Browsing. So, given that this is
a fairly major unsolved problem, I'd be reluctant to set out to build a
mechanism like this without a pretty clear design.

-Ekr


Re: Revocation protocol idea

2017-03-21 Thread Salvador de la Puente
Hi Jonathan

In the short and medium term, it scales better than a white list and
distributes the effort of finding API misuses. Mozilla and other browser
vendors could still review the code of a site and add their vote in favour
of or against the Web property.

In the long term, the system would help find new security threats, such as
tracking or fingerprinting algorithms, by encouraging the honest reporting
of evidence.

With this system, the threat is considered the product of both the
potential risk and the chance of actual misuse. The revocation protocol
reduces threatening situations by minimising the number of Web properties
abusing the APIs.
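
In deliberately simplified pseudo-code terms:

# Simplification: threat as the product of potential risk and the chance
# of actual misuse. The protocol attacks the second factor.
def threat(potential_risk: float, misuse_probability: float) -> float:
    return potential_risk * misuse_probability

# A powerful API that is almost never abused can pose a smaller threat
# than a weaker API that is abused widely:
print(threat(potential_risk=0.9, misuse_probability=0.05))  # 0.045
print(threat(potential_risk=0.4, misuse_probability=0.50))  # 0.2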

As a side effect, it provides the infrastructure for a truly distributed,
cross-browser database which could be useful for other, unforeseen purposes.

What do you think?




Re: Revocation protocol idea

2017-03-08 Thread Jonathan Kingston
Hey,
What would be the advantage of using this over the safesite list? Obviously
there would be fewer broken sites on the web, as we would be permitting the
site to still be viewed by the user rather than just revoking the
permission, but are there other advantages?

On Sun, Mar 5, 2017 at 4:23 PM, Salvador de la Puente <sdelapue...@mozilla.com> wrote:

> Hi, folks.
>
> Some time ago, I started to think about an idea to experiment with new
> powerful Web APIs: a sort of "deceptive site" database for harmful uses of
> browser APIs. I've been curating that idea and have come up with the
> concept of a "revocation protocol" to revoke user-granted permissions for
> origins abusing those APIs.
>
> I published the idea on GitHub [1] and I was wondering about the utility
> and feasibility of such a system, so I would appreciate any feedback you
> want to provide.
>
> I hope it will be of interest to you.
>
> [1] https://github.com/delapuente/revocation-protocol