Is Quantum DOM affecting DevTools?

2017-09-28 Thread Salvador de la Puente
Hello there!

I was testing some WebRTC demos in two separate tabs in Nightly and realized
I needed to switch from one tab to the other for them to connect faster. I
suspect this is related to Quantum DOM throttling, and I was wondering:

   - Is it possible that throttling is affecting DevTools?
   - Would it be possible to disable throttling while DevTools is open?

What do you think?
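
In case it helps frame the question, here is a minimal TypeScript sketch (my
own illustration, not code from the demos) of the kind of setTimeout-driven
signaling loop that background-tab timer throttling would slow down; the URL
and the 100 ms interval are made up:

// Hypothetical polling loop: if the tab is in the background and its timers
// are throttled, each iteration can run much later than requested, which
// would explain why switching tabs makes the peers connect faster.
async function pollSignaling(pc: RTCPeerConnection, url: string): Promise<void> {
  const response = await fetch(url); // fetch remote candidates/answer
  const candidates: RTCIceCandidateInit[] = await response.json();
  for (const candidate of candidates) {
    await pc.addIceCandidate(candidate);
  }
  if (pc.connectionState !== "connected") {
    // In a background tab this nominal 100 ms delay may be stretched to a
    // much larger value by timer throttling.
    setTimeout(() => pollSignaling(pc, url), 100);
  }
}

document.addEventListener("visibilitychange", () => {
  console.log(`signaling tab is now ${document.visibilityState}`);
});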



Re: Quantum Flow Engineering Newsletter #20

2017-08-22 Thread Salvador de la Puente
Congratulations on this 20th issue of the newsletter and on all the hard
work the Quantum team is doing!

On Fri, Aug 18, 2017 at 7:36 PM, Olli Pettay  wrote:

>
>
> On 08/18/2017 08:28 PM, Ehsan Akhgari wrote:
>
>> On Fri, Aug 18, 2017 at 9:39 AM, Ryan VanderMeulen <
>> rvandermeu...@mozilla.com > wrote:
>>
>> Is there a good way to get a sense of what the higher-impact bugs are
>> that remain for improving Speedometer? Just going through the deps is
>> difficult because it's hard to assess how much of a win some of those
>> are. Are we gated mostly on JS perf at this point? Layout? Something else?
>> :-)
>>
>>
>> That's a pretty hard question to answer since in many cases the impact of
>> each individual bug fix may fall below the measurement noise in the
>> benchmark score, and also it's pretty hard to map what you see in
>> profiles to benchmark score numbers, except for bugs that have some kind of
>> in
>> progress patch which allows us to measure the before and after state
>> without them having been fixed yet.
>>
>> In general Speedometer performance isn't generally gated on anything
>> extremely big and instead has been improved by fixing many small performance
>> issues all over the place.  That being said, there are some "high
>> profile" bugs that come to my mind.  Jan may think of some more in
>> SpiderMonkey:
>>
>>   * https://bugzilla.mozilla.org/show_bug.cgi?id=651120 I think will be
>> able to gain us a few extra points but it's a complex change with many
>> dependencies and a few people are helping out Cătălin with it.
>> (Interestingly I just realized it wasn't on the dependency list of the main
>> SM2 bug!)
>>
> /me would like to see some Speedometer numbers from a build with the
> patches for bug 651120
>
>   * https://bugzilla.mozilla.org/show_bug.cgi?id=1346723 may also help
>> some more still.  We have already done a ton of work under that bug, but
>> there's some more work to be done.  However this bug is getting closer to
>> the state where most of the remaining work involves fixing many different
>> issues, each of which is costing a bit of the overall time spent there
>> when running the benchmark.
>>   * https://bugzilla.mozilla.org/show_bug.cgi?id=1349255 is a sync IPC
>> that hurts Speedometer so fixing it may have an outsized impact relative to
>> other bug fixes.
>>
> I thought this would affect Speedometer significantly, since it shows up
> in the profiles, but I disabled the sync message altogether and rerun and
> couldn't really see difference locally. Perhaps the sync calls happen
> usually when Speedometer isn't running tests but just loading pages or so.
>
>   * https://bugzilla.mozilla.org/show_bug.cgi?id=1377131 may have a large
>> impact also, but I'm not sure exactly how much.  Olli may be able to provide
>> more information about that.
>>
> Locally my patches seem to affect quite a lot on linux, but unfortunately
> less on Windows.
>
>
>> Cheers,
>> Ehsan
>>
>>
>> Thanks!
>>
>> -Ryan
>>
>>
>> On Fri, Aug 18, 2017 at 1:26 AM, Ehsan Akhgari <
>> ehsan.akhg...@gmail.com > wrote:
>>
>> Hi everyone,
>>
>> It is hard to believe that we've gotten to the twentieth of these
>> newsletters.  That also means that we're very quickly approaching the finish
>> line for this sprint.  We only have a bit more than five more
>> weeks to go before Firefox 57 merges to beta.  It may be a good time to
>> start to
>> think more carefully about what we pay attention to in the
>> remaining time, both in terms of the risk of patches landing, and the
>> opportunity
>> cost of what we decide to put off until 58 and the releases after.
>>
>> We still have a large number of triaged bugs
>> that are available for someone to pick up
>> and work on.  If you have some spare cycles, we really would
>> appreciate if you consider picking one or two bugs from this list and
>> working on
>> them.  They span many different areas of the codebase so finding
>> something in your area of interest and expertise should hopefully be simple.
>> Quantum Flow isn't the kind of project that requires fixing every
>> single one of these bugs to be finished successfully, but at the same time
>> big performance improvements often consist of many small parts,
>> so the cumulative impact of a few additional fixes can make a big impact.
>>
>> It is worth mentioning that lately while lurking on various tech
>> news and blog sites where Nightly users comment, I have seen quite a few
>> positive comments about Nightly performance from users.  It's
>> easy to get lost in the details of the work involved in getting rid of
>> synchronous IPCs, synchronous layout/style flushes, unnecessary
>> memory 

Re: Intent to remove: sensor APIs

2017-08-02 Thread Salvador de la Puente
I strongly encourage you to take a look at the telemetry stats regarding the
usage of the deviceorientation API and the others. I don't know the
penetration of the proximity and ambient light APIs, but deviceorientation is
definitely used.

Please think twice before making a final decision.
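
For context, here is a minimal TypeScript sketch (my own illustration) of the
kind of code that relies on deviceorientation today, e.g. a panorama viewer or
a game rotating its view as the phone moves; rotateCamera is a placeholder:

// Typical consumer of the deviceorientation API.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  const alpha = event.alpha ?? 0; // rotation around the z axis, in degrees
  const beta = event.beta ?? 0;   // front-to-back tilt
  const gamma = event.gamma ?? 0; // left-to-right tilt
  rotateCamera(alpha, beta, gamma);
});

// Placeholder for whatever the page renders with the angles.
function rotateCamera(alpha: number, beta: number, gamma: number): void {
  console.log(`orientation: ${alpha.toFixed(1)}, ${beta.toFixed(1)}, ${gamma.toFixed(1)}`);
}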

On Jul 31, 2017, 3:44 PM, "Anne van Kesteren"  wrote:

> On Mon, Jul 24, 2017 at 6:11 PM, Anne van Kesteren 
> wrote:
> > Please consider the request to remove device orientation retracted for
> > now. We'll still need to figure out some kind of long term plan for
> > that API though. WebVR building on it through libraries that abstract
> > away the browser incompatibilities will just make it harder to fix the
> > underpinnings going forward. (And there's always the risk that folks
> > don't use libraries and code directly against what Chrome ships. Seems
> > likely even.)
>
> Small update: we'll start by just disabling proximity. Disabling
> ambient light will follow soon after, but is a little trickier as we
> use the web-facing API in the Firefox for Android frontend.
> (Suggestions for fixing the orientation interoperability mess are
> still welcome!)
>
>
> --
> https://annevankesteren.nl/


Re: Ambient Light Sensor API

2017-04-27 Thread Salvador de la Puente
Well, I'm not saying "don't fix it", but if we switch the API off then
legitimate, non-evil uses of the API will never happen and, as Belen said
before, it undermines confidence in the Web platform.

I cannot foresee the canonical use of the API that would justify not switching
it off, but to be honest, neither had I thought of reading an image pixel by
pixel with the ambient light sensor.
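
For reference, a rough TypeScript sketch of the pixel-by-pixel attack the
article describes, using the devicelight event Firefox currently ships; the
threshold, the timing and the showBinarizedPixelFullscreen helper are
hypothetical placeholders, not working exploit code:

const LUX_THRESHOLD = 50; // hypothetical calibrated threshold, in lux
let lastReading = 0;

window.addEventListener("devicelight", (event: any) => {
  lastReading = event.value; // ambient light level reported by the sensor
});

// Hypothetical helper: scale the embedded (cross-origin) image so that only
// pixel (x, y) fills the viewport, binarized to pure black or pure white.
function showBinarizedPixelFullscreen(x: number, y: number): void {
  // Stubbed out; in the attack this is done with CSS scaling and filters.
}

async function readPixel(x: number, y: number): Promise<boolean> {
  showBinarizedPixelFullscreen(x, y);
  await new Promise((resolve) => setTimeout(resolve, 500)); // let the sensor settle
  return lastReading > LUX_THRESHOLD; // bright screen => the pixel was light
}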

On Wed, Apr 26, 2017 at 8:27 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
wrote:

> On 04/26/2017 11:36 AM, Salvador de la Puente wrote:
>
>> Right, I did not remember that request to victim.com <http://victim.com>
>> originated in  tags inside evil.com <http://evil.com> went to the
>> network with victim.com <http://victim.com> credentials so clients can
>> reach more than servers. That's fine.
>>
>> Anyway, with only that use of the APIs, is it not a little bit early to
>> say that every possible usage will be harmful?
>>
> Nobody is saying that every usage of the API is harmful.  But the browser
> engine doesn't have a way to read the mind of the author of the page to
> figure out why a certain API is being used.  When APIs expose risks like
> this we need some kind of mitigation.  Rate limits are one approach, but
> it's hard to get right and it's hard to demonstrate its effectiveness given
> the existence of wakelock APIs as described in the article.
>
> Also please note that typically Web APIs aren't designed based on the
> assumption that most of the consumers use them in a non-malicious way,
> quite the contrary -- we usually assume an untrusted caller because we
> can't know much about the intentions of the caller, and our users can't be
> expected to make security decisions before visiting a website.
>
> Do we have any way to measure the traffic of the web properties using this
>> API?
>>
>
> That has no bearing on the security aspect of this.  And no, we don't.  At
> best we can add some telemetry for the number of sessions that have an
> event handler for this or some such which bug 1359124 was filed for.
>
>
>> On Wed, Apr 26, 2017 at 5:28 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com
>> <mailto:ehsan.akhg...@gmail.com>> wrote:
>>
>> On 04/25/2017 08:26 PM, Salvador de la Puente wrote:
>>
>> So the risk is not that high since if the image is not
>> protected I can get it and do evil things without requiring
>> the Light Sensor API. Isn't it?
>>
>>
>> No, the risk is extremely high.
>>
>> Here is a concrete example.  Some banks give their users scanned
>> copies of their cheques (including secret financial information)
>> as cookie protected images.  Browsers already have protections in
>> place that prevent cross-origin pages from reading the pixel
>> values of these images by tainting such images and remembering
>> that an image is coming from such a source and preventing the
>> contents of such a tainted image to be read out through an API
>> that gives you access to raw pixel values.  Merely uploading the
>> URL of such an image to the evil.com <http://evil.com> server
>> doesn't help the attacker since they won't have access to the
>> user's credentials on the server side.  The attack vector being
>> discussed here introduces a new vulnerability vector for websites
>> to try to steal sensitive information like this in ways that
>> currently isn't possible.
>>
>>
>> On Wed, Apr 26, 2017 at 1:30 AM, Eric Rescorla <e...@rtfm.com
>> <mailto:e...@rtfm.com> <mailto:e...@rtfm.com
>> <mailto:e...@rtfm.com>>> wrote:
>>
>>
>>
>> On Tue, Apr 25, 2017 at 3:40 PM, Salvador de la Puente
>> <sdelapue...@mozilla.com <mailto:sdelapue...@mozilla.com>
>> <mailto:sdelapue...@mozilla.com
>> <mailto:sdelapue...@mozilla.com>>> wrote:
>>
>> The article says:
>>
>> Embed an image from the attacked domain; generally
>> this will
>> be a resource
>> > which varies for different authenticated users such
>> as the
>> logged-in user’s
>> > avatar or a security code.
>> >
>>
>> And then refers all the steps to this image (binarizing,
>> expand and measure
>> per pixel) but, If I can embed that image, it is because I
>> know the UR

Re: Ambient Light Sensor API

2017-04-26 Thread Salvador de la Puente
Right, I had forgotten that requests to victim.com originating in <img> tags
inside evil.com go to the network with victim.com credentials, so clients can
reach more than servers can. That's fine.

Anyway, with only that use of the API known, isn't it a little bit early to
say that every possible usage will be harmful? Do we have any way to measure
the traffic of the web properties using this API?
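
As a side note, here is a minimal TypeScript sketch (illustrative URL, not a
real resource) of the existing protection Ehsan describes below: the
cross-origin image loads with the victim's cookies, but the canvas it is drawn
into becomes tainted, so reading the pixels back throws a SecurityError:

const img = new Image();
img.src = "https://victim.com/secret-image.png"; // illustrative cookie-protected resource

img.onload = () => {
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0); // allowed, but taints the canvas
  try {
    ctx.getImageData(0, 0, canvas.width, canvas.height); // throws: tainted canvas
  } catch (error) {
    console.log("pixel read blocked:", error);
  }
};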

On Wed, Apr 26, 2017 at 5:28 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
wrote:

> On 04/25/2017 08:26 PM, Salvador de la Puente wrote:
>
>> So the risk is not that high since if the image is not protected I can
>> get it and do evil things without requiring the Light Sensor API. Isn't it?
>>
>
> No, the risk is extremely high.
>
> Here is a concrete example.  Some banks give their users scanned copies of
> their cheques (including secret financial information) as cookie protected
> images.  Browsers already have protections in place that prevent
> cross-origin pages from reading the pixel values of these images by
> tainting such images and remembering that an image is coming from such a
> source and preventing the contents of such a tainted image to be read out
> through an API that gives you access to raw pixel values.  Merely uploading
> the URL of such an image to the evil.com server doesn't help the attacker
> since they won't have access to the user's credentials on the server side.
> The attack vector being discussed here introduces a new vulnerability
> vector for websites to try to steal sensitive information like this in ways
> that currently isn't possible.
>
>
>> On Wed, Apr 26, 2017 at 1:30 AM, Eric Rescorla <e...@rtfm.com > e...@rtfm.com>> wrote:
>>
>>
>>
>> On Tue, Apr 25, 2017 at 3:40 PM, Salvador de la Puente
>> <sdelapue...@mozilla.com <mailto:sdelapue...@mozilla.com>> wrote:
>>
>> The article says:
>>
>> Embed an image from the attacked domain; generally this will
>> be a resource
>> > which varies for different authenticated users such as the
>> logged-in user’s
>> > avatar or a security code.
>> >
>>
>> And then refers all the steps to this image (binarizing,
>> expand and measure
>> per pixel) but, If I can embed that image, it is because I
>> know the URL for
>> it and the proper auth tokens if it is protected. In that
>> case, why to not
>> simply steal the image?
>>
>>
>> The simple version of this is that the image is cookie protected.
>>
>> -Ekr
>>
>>
>> On Wed, Apr 26, 2017 at 12:23 AM, Jonathan Kingston
>> <j...@mozilla.com <mailto:j...@mozilla.com>> wrote:
>>
>> > Auth related images are the attack vector, that and history
>> attacks on
>> > same domain.
>> >
>> > On Tue, Apr 25, 2017 at 11:17 PM, Salvador de la Puente <
>> > sdelapue...@mozilla.com <mailto:sdelapue...@mozilla.com>>
>> wrote:
>> >
>> >> Sorry for my ignorance but, in the case of Stealing
>> cross-origin
>> >> resources,
>> >> I don't get the point of the attack. If have the ability to
>> embed the
>> >> image
>> >> in step 1, why to not simply send this to evil.com
>> <http://evil.com> for further
>> >> processing?
>> >> How it is possible for evil.com <http://evil.com> to get
>> access to protected resources?
>> >>
>> >> On Tue, Apr 25, 2017 at 8:04 PM, Ehsan Akhgari
>> <ehsan.akhg...@gmail.com <mailto:ehsan.akhg...@gmail.com>>
>> >> wrote:
>> >>
>> >> > On 04/25/2017 10:25 AM, Andrew Overholt wrote:
>> >> >
>> >> >> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla
>> <e...@rtfm.com <mailto:e...@rtfm.com>> wrote:
>> >> >>
>> >> >> Going back to Jonathan's (I think) question. Does anyone
>> use this at
>> >> all
>> >> >>> in
>> >> >>> the field?
>> >> >>>
>> >> >>> Chrome's usage metrics say <= 0.0001% of page loads:
>> >> >>
>> https://www.chromes

Re: Ambient Light Sensor API

2017-04-25 Thread Salvador de la Puente
So the risk is not that high, since if the image is not protected I can get
it and do evil things without needing the Light Sensor API. Isn't it?

On Wed, Apr 26, 2017 at 1:30 AM, Eric Rescorla <e...@rtfm.com> wrote:

>
>
> On Tue, Apr 25, 2017 at 3:40 PM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
>> The article says:
>>
>> Embed an image from the attacked domain; generally this will be a resource
>> > which varies for different authenticated users such as the logged-in
>> user’s
>> > avatar or a security code.
>> >
>>
>> And then refers all the steps to this image (binarizing, expand and
>> measure
>> per pixel) but, If I can embed that image, it is because I know the URL
>> for
>> it and the proper auth tokens if it is protected. In that case, why to not
>> simply steal the image?
>>
>
> The simple version of this is that the image is cookie protected.
>
> -Ekr
>
>
>> On Wed, Apr 26, 2017 at 12:23 AM, Jonathan Kingston <j...@mozilla.com>
>> wrote:
>>
>> > Auth related images are the attack vector, that and history attacks on
>> > same domain.
>> >
>> > On Tue, Apr 25, 2017 at 11:17 PM, Salvador de la Puente <
>> > sdelapue...@mozilla.com> wrote:
>> >
>> >> Sorry for my ignorance but, in the case of Stealing cross-origin
>> >> resources,
>> >> I don't get the point of the attack. If have the ability to embed the
>> >> image
>> >> in step 1, why to not simply send this to evil.com for further
>> >> processing?
>> >> How it is possible for evil.com to get access to protected resources?
>> >>
>> >> On Tue, Apr 25, 2017 at 8:04 PM, Ehsan Akhgari <
>> ehsan.akhg...@gmail.com>
>> >> wrote:
>> >>
>> >> > On 04/25/2017 10:25 AM, Andrew Overholt wrote:
>> >> >
>> >> >> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla <e...@rtfm.com>
>> wrote:
>> >> >>
>> >> >> Going back to Jonathan's (I think) question. Does anyone use this at
>> >> all
>> >> >>> in
>> >> >>> the field?
>> >> >>>
>> >> >>> Chrome's usage metrics say <= 0.0001% of page loads:
>> >> >> https://www.chromestatus.com/metrics/feature/popularity#Ambi
>> >> >> entLightSensorConstructor.
>> >> >>
>> >> >
>> >> > This is the new version of the spec which we don't ship.
>> >> >
>> >> >
>> >> > We are going to collect telemetry in
>> >> >> https://bugzilla.mozilla.org/show_bug.cgi?id=1359124.


Re: Ambient Light Sensor API

2017-04-25 Thread Salvador de la Puente
The article says:

Embed an image from the attacked domain; generally this will be a resource
> which varies for different authenticated users such as the logged-in user’s
> avatar or a security code.
>

And then it refers all the steps to this image (binarize, expand and measure
per pixel). But if I can embed that image, it is because I know its URL and
the proper auth tokens if it is protected. In that case, why not simply steal
the image?

On Wed, Apr 26, 2017 at 12:23 AM, Jonathan Kingston <j...@mozilla.com> wrote:

> Auth related images are the attack vector, that and history attacks on
> same domain.
>
> On Tue, Apr 25, 2017 at 11:17 PM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
>> Sorry for my ignorance but, in the case of Stealing cross-origin
>> resources,
>> I don't get the point of the attack. If have the ability to embed the
>> image
>> in step 1, why to not simply send this to evil.com for further
>> processing?
>> How it is possible for evil.com to get access to protected resources?
>>
>> On Tue, Apr 25, 2017 at 8:04 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com>
>> wrote:
>>
>> > On 04/25/2017 10:25 AM, Andrew Overholt wrote:
>> >
>> >> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla <e...@rtfm.com> wrote:
>> >>
>> >> Going back to Jonathan's (I think) question. Does anyone use this at
>> all
>> >>> in
>> >>> the field?
>> >>>
>> >>> Chrome's usage metrics say <= 0.0001% of page loads:
>> >> https://www.chromestatus.com/metrics/feature/popularity#Ambi
>> >> entLightSensorConstructor.
>> >>
>> >
>> > This is the new version of the spec which we don't ship.
>> >
>> >
>> > We are going to collect telemetry in
>> >> https://bugzilla.mozilla.org/show_bug.cgi?id=1359124.


Re: Ambient Light Sensor API

2017-04-25 Thread Salvador de la Puente
Sorry for my ignorance, but in the case of stealing cross-origin resources I
don't get the point of the attack. If I have the ability to embed the image in
step 1, why not simply send it to evil.com for further processing? How is it
possible for evil.com to get access to protected resources?

On Tue, Apr 25, 2017 at 8:04 PM, Ehsan Akhgari 
wrote:

> On 04/25/2017 10:25 AM, Andrew Overholt wrote:
>
>> On Tue, Apr 25, 2017 at 9:35 AM, Eric Rescorla  wrote:
>>
>> Going back to Jonathan's (I think) question. Does anyone use this at all
>>> in
>>> the field?
>>>
>>> Chrome's usage metrics say <= 0.0001% of page loads:
>> https://www.chromestatus.com/metrics/feature/popularity#Ambi
>> entLightSensorConstructor.
>>
>
> This is the new version of the spec which we don't ship.
>
>
> We are going to collect telemetry in
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1359124.


Re: e10s-multi on Aurora

2017-04-11 Thread Salvador de la Puente
How does this relate to Project Dawn and the end of the Aurora channel? Will
e10s-multi be enabled when shifting from Nightly to Beta?

On Wed, Apr 5, 2017 at 1:34 AM, Blake Kaplan  wrote:

> Hey all,
>
> We recently enabled 4 content processes by default on Aurora. We're
> still tracking several bugs that we are planning to fix in the near
> future as well as getting more memory measurements in Telemetry as we
> look towards a staged rollout in Beta and beyond.
>
> We were able to turn on in Aurora thanks to a bunch of work from
> bkelly, baku, Gabor, and a bunch of other folks.
>
> Here's looking forward to riding more trains!
> --
> Blake


Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
antage of decentralized blacklist
>>> dissemination
>>> is, given the networking realities.
>>>
>>> You posit a mechanism for forming the list of misbehaving sites, but
>>> distributed
>>> reputation is really hard, and it's not clear that Google is actually
>>> doing a bad
>>> job of running Safe Browsing, so given that this is a fairly major
>>> unsolved problem,
>>> I'd be reluctant to set out to build a mechanism like this without a
>>> pretty clear
>>> design.
>>>
>>> -Ekr
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Mar 21, 2017 at 2:40 PM, Salvador de la Puente <
>>> sdelapue...@mozilla.com> wrote:
>>>
>>> Hi Jonathan
>>>>
>>>> In the short and medium terms, it scales better than a white list and
>>>>
>>> distributes the effort of finding APIs misuses. Mozilla and other vendor
>>>
>>>> browser could still review the code of the site and add its vote in
>>>> favour
>>>> or against the Web property.
>>>>
>>>> In the long term, the system would help finding new security threats
>>>> such
>>>> a
>>>> tracking or fingerprinting algorithms by encouraging the honest report
>>>> of
>>>> evidences, somehow.
>>>>
>>>> With this system, the threat is considered the result of both potential
>>>> risk and chances of actual misuse. The revocation protocol reduces
>>>> threatening situations by minimising the number of Web properties
>>>> abusing
>>>> the APIs.
>>>>
>>>> As a side effect, it provides the infrastructure for a real distributed
>>>> and
>>>> cross browser database which can be of utility for other unforeseen
>>>> uses.
>>>>
>>>> What do you think?
>>>>
>>>>
>>>> On Mar 8, 2017, 10:54 PM, "Jonathan Kingston" <jkings...@mozilla.com>
>>>> wrote:
>>>>
>>>> Hey,
>>>> What would be the advantage of using this over the safesite list?
>>>> Obviously
>>>> there would be less broken sites on the web as we would be permitting
>>>> the
>>>> site to still be viewed by the user rather than just revoking the
>>>> permission but are there other advantages?
>>>>
>>>> On Sun, Mar 5, 2017 at 4:23 PM, Salvador de la Puente <
>>>> sdelapue...@mozilla.com> wrote:
>>>>
>>>> Hi, folks.
>>>>>
>>>>> Some time ago, I've started to think about an idea to experiment with
>>>>>
>>>> new
>>>>
>>>>> powerful Web APIs: a sort of "deceptive site" database for harmful uses
>>>>>
>>>> of
>>>>
>>>>> browsers APIs. I've been curating that idea and come up with the
>>>>>
>>>> concept of
>>>>
>>>>> a "revocation protocol" to revoke user granted permissions for origins
>>>>> abusing those APIs.
>>>>>
>>>>> I published the idea on GitHub [1] and I was wondering about the
>>>>> utility
>>>>> and feasibility of such a system so I would thank any feedback you want
>>>>>
>>>> to
>>>>
>>>>> provide.
>>>>>
>>>>> I hope it will be of interest for you.
>>>>>
>>>>> [1] https://github.com/delapuente/revocation-protocol
>>>>>


Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
Hi Jonathan

On Thu, Mar 23, 2017 at 9:09 AM, Jonathan Kingston <jkings...@mozilla.com>
wrote:

> This seems a little like the idea WOT(https://www.mywot.com/) had,
> Showing the user that they might be looking at a website that isn't
> considered great but isn't perhaps bad enough to be blocked.
>

Yes. I talk about it in
https://salvadelapuente.com/posts/2016/07/29/towards-the-web-of-trust/


>
> I agree that one web actor owning this power isn't a great place to be in
> and that in itself might be enough justification in at least looking
> further into this direction.
>
> If there was enough evidence to suggest we should revoke an advert
> providers ability to track someone without breaking the web that might be
> interesting.
> There is also some research (which I am not sure I can share publicly) to
> suggest we should limit API usage to avoid security flaws within browsers
> based upon a strong correlation of Lines of Code, CVE's and the low number
> of sites that use those APIs. Perhaps there is a rationale to make websites
> earn enough trust for new features that have a high risk. For example would
> Reddits sub resources really need WebVR or WebGL?
>

That's very interesting. Could you share those correlations with me
privately? Of course, the software reviews the protocol requires could be
supported by automated tools. That is a value proposition that browser
vendors, or anyone who wants to adhere to the protocol, could offer.


> But we would also have to counter the cost of building this over just
> making the APIs secure in the first place and also understand we would hurt
> web innovation with that too.
>

Ideally, the protocol is intended to let us worry "not too much" about misuse
of powerful APIs and so boost Web innovation and experimentation.


>
> On Tue, Mar 21, 2017 at 10:11 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>> There seem to be three basic ideas here:
>>
>> 0. Blacklisting at the level of API rather than site.
>> 1. Some centralized but democratic  mechanism for building a list of
>> misbehaving sites.
>> 2. A mechanism for distributing the list of misbehaving sites to clients.
>>
>> As Jonathan notes, Firefox already has a mechanism for doing #2, which is
>> to say
>> "Safe Browsing". Now, Safe Browsing is binary, either a site is good or
>> bad, but
>> specific APIs aren't disabled, but it's easy to see how you would extend
>> it to that
>> if you actually wanted to provide that function. I'm not sure that's
>> actually
>> very attractive--it's hard enough for users to understand safe browsing.
>> Safe
>> Browsing is of course centralized, but that comes with a number of
>> advantages
>> and it's not clear what the advantage of decentralized blacklist
>> dissemination
>> is, given the networking realities.
>>
>> You posit a mechanism for forming the list of misbehaving sites, but
>> distributed
>> reputation is really hard, and it's not clear that Google is actually
>> doing a bad
>> job of running Safe Browsing, so given that this is a fairly major
>> unsolved problem,
>> I'd be reluctant to set out to build a mechanism like this without a
>> pretty clear
>> design.
>>
>> -Ekr
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Mar 21, 2017 at 2:40 PM, Salvador de la Puente <
>> sdelapue...@mozilla.com> wrote:
>>
>>> Hi Jonathan
>>>
>>> In the short and medium terms, it scales better than a white list and
>>
>> distributes the effort of finding APIs misuses. Mozilla and other vendor
>>> browser could still review the code of the site and add its vote in
>>> favour
>>> or against the Web property.
>>>
>>> In the long term, the system would help finding new security threats
>>> such a
>>> tracking or fingerprinting algorithms by encouraging the honest report of
>>> evidences, somehow.
>>>
>>> With this system, the threat is considered the result of both potential
>>> risk and chances of actual misuse. The revocation protocol reduces
>>> threatening situations by minimising the number of Web properties abusing
>>> the APIs.
>>>
>>> As a side effect, it provides the infrastructure for a real distributed
>>> and
>>> cross browser database which can be of utility for other unforeseen uses.
>>>
>>> What do you think?
>>>
>>>
>>> On Mar 8, 2017, 10:54 PM, "Jonathan Kingston" <jkings...@mozilla.com>
>>> wrote:
>>>
>>> Hey,
>>> What would be th

Re: Revocation protocol idea

2017-03-31 Thread Salvador de la Puente
Hi Eric

On Wed, Mar 22, 2017 at 6:11 AM, Eric Rescorla <e...@rtfm.com> wrote:

> There seem to be three basic ideas here:
>
> 0. Blacklisting at the level of API rather than site.
> 1. Some centralized but democratic  mechanism for building a list of
> misbehaving sites.
> 2. A mechanism for distributing the list of misbehaving sites to clients.
>

I think I did not explain it well. It would be a blacklist at the site level,
and it would not be centralised but distributed.
The idea is that if a site is harmful for the user, all of its permissions
should be revoked and we should communicate to the user why the site is
harmful. The list of misbehaving sites, the reasons why they are dangerous and
the evidence supporting the misbehaviour should live in a cross-browser,
distributed DB.
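
To make the shape of such a DB concrete, here is a purely illustrative
TypeScript sketch; the field and function names are my own invention, not
part of the proposal:

// One entry in the hypothetical distributed, cross-browser DB.
interface RevocationEntry {
  origin: string;        // e.g. "https://evil.com"
  abusedApis: string[];  // e.g. ["devicelight", "deviceorientation"]
  reason: string;        // human-readable explanation shown to the user
  evidence: string[];    // links or hashes of the supporting reports
  reporters: string[];   // reviewers backing the claim
  reportedAt: string;    // ISO 8601 timestamp
}

// What a consuming browser might do when the user visits a listed origin.
// revokePermission stands in for whatever permission store the browser has.
function applyRevocation(
  entry: RevocationEntry,
  revokePermission: (origin: string, api: string) => void
): void {
  for (const api of entry.abusedApis) {
    revokePermission(entry.origin, api);
  }
  console.warn(`Permissions revoked for ${entry.origin}: ${entry.reason}`);
}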


>
> As Jonathan notes, Firefox already has a mechanism for doing #2, which is
> to say
> "Safe Browsing". Now, Safe Browsing is binary, either a site is good or
> bad, but
> specific APIs aren't disabled, but it's easy to see how you would extend
> it to that
> if you actually wanted to provide that function. I'm not sure that's
> actually
> very attractive--it's hard enough for users to understand safe browsing.
> Safe
> Browsing is of course centralized, but that comes with a number of
> advantages
> and it's not clear what the advantage of decentralized blacklist
> dissemination
> is, given the networking realities.
>
> You posit a mechanism for forming the list of misbehaving sites, but
> distributed
> reputation is really hard, and it's not clear that Google is actually
> doing a bad
> job of running Safe Browsing, so given that this is a fairly major
> unsolved problem,
> I'd be reluctant to set out to build a mechanism like this without a
> pretty clear
> design.
>

I've been looking at this paper on prediction markets based on Bitcoin
<http://bravenewcoin.com/assets/Whitepapers/Augur-A-Decentralized-Open-Source-Platform-for-Prediction-Markets.pdf>
for inspiration. It is true that distributed reputation is a hard problem, but
I think we could adapt the concepts in that paper to this scenario if
reviewers "bet" on site reputation and there is some incentive. Of course,
further research is needed to mitigate the chance of a reviewer lying in its
report and to prevent forms of Sybil attack, but there seem to be some
solutions out there.
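
As a back-of-the-envelope illustration of the "reviewers bet on reputation"
idea, here is a tiny TypeScript sketch; it is entirely hypothetical and not
taken from the Augur paper:

interface Report {
  reviewer: string;
  stake: number;    // what the reviewer stands to lose if proven wrong
  harmful: boolean; // the reviewer's verdict on the origin
}

// Flag an origin only when the stake behind "harmful" verdicts clearly
// outweighs the stake behind "not harmful" ones.
function isFlagged(reports: Report[], ratio = 2): boolean {
  const harmfulStake = reports.filter(r => r.harmful).reduce((sum, r) => sum + r.stake, 0);
  const benignStake = reports.filter(r => !r.harmful).reduce((sum, r) => sum + r.stake, 0);
  return harmfulStake > benignStake * ratio;
}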


>
> -Ekr
>
>
>
>
>
>
>
> On Tue, Mar 21, 2017 at 2:40 PM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
>> Hi Jonathan
>>
>> In the short and medium terms, it scales better than a white list and
>
> distributes the effort of finding APIs misuses. Mozilla and other vendor
>> browser could still review the code of the site and add its vote in favour
>> or against the Web property.
>>
>> In the long term, the system would help finding new security threats such
>> a
>> tracking or fingerprinting algorithms by encouraging the honest report of
>> evidences, somehow.
>>
>> With this system, the threat is considered the result of both potential
>> risk and chances of actual misuse. The revocation protocol reduces
>> threatening situations by minimising the number of Web properties abusing
>> the APIs.
>>
>> As a side effect, it provides the infrastructure for a real distributed
>> and
>> cross browser database which can be of utility for other unforeseen uses.
>>
>> What do you think?
>>
>>
>> On Mar 8, 2017, 10:54 PM, "Jonathan Kingston" <jkings...@mozilla.com>
>> wrote:
>>
>> Hey,
>> What would be the advantage of using this over the safesite list?
>> Obviously
>> there would be less broken sites on the web as we would be permitting the
>> site to still be viewed by the user rather than just revoking the
>> permission but are there other advantages?
>>
>> On Sun, Mar 5, 2017 at 4:23 PM, Salvador de la Puente <
>> sdelapue...@mozilla.com> wrote:
>>
>> > Hi, folks.
>> >
>> > Some time ago, I've started to think about an idea to experiment with
>> new
>> > powerful Web APIs: a sort of "deceptive site" database for harmful uses
>> of
>> > browsers APIs. I've been curating that idea and come up with the
>> concept of
>> > a "revocation protocol" to revoke user granted permissions for origins
>> > abusing those APIs.
>> >
>> > I published the idea on GitHub [1] and I was wondering about the utility
>> > and feasibility of such a system so I would thank any feedback you want
>> to
>> > provide.
>> >
>> > I hope i

Re: Revocation protocol idea

2017-03-21 Thread Salvador de la Puente
Hi Jonathan

In the short and medium term, it scales better than a whitelist and
distributes the effort of finding API misuses. Mozilla and other browser
vendors could still review a site's code and add their vote in favour of or
against the Web property.

In the long term, the system would help find new security threats such as
tracking or fingerprinting algorithms by encouraging the honest reporting of
evidence.

With this system, the threat is considered the result of both potential risk
and the chances of actual misuse. The revocation protocol reduces threatening
situations by minimising the number of Web properties abusing the APIs.

As a side effect, it provides the infrastructure for a real distributed,
cross-browser database which could be useful for other, unforeseen uses.

What do you think?


On Mar 8, 2017, 10:54 PM, "Jonathan Kingston" <jkings...@mozilla.com>
wrote:

Hey,
What would be the advantage of using this over the safesite list? Obviously
there would be less broken sites on the web as we would be permitting the
site to still be viewed by the user rather than just revoking the
permission but are there other advantages?

On Sun, Mar 5, 2017 at 4:23 PM, Salvador de la Puente <
sdelapue...@mozilla.com> wrote:

> Hi, folks.
>
> Some time ago, I've started to think about an idea to experiment with new
> powerful Web APIs: a sort of "deceptive site" database for harmful uses of
> browsers APIs. I've been curating that idea and come up with the concept of
> a "revocation protocol" to revoke user granted permissions for origins
> abusing those APIs.
>
> I published the idea on GitHub [1] and I was wondering about the utility
> and feasibility of such a system so I would thank any feedback you want to
> provide.
>
> I hope it will be of interest for you.
>
> [1] https://github.com/delapuente/revocation-protocol
>


Re: Quantum Flow Engineering Newsletter #1

2017-03-14 Thread Salvador de la Puente
Very interesting stuff. Of course, there are things escaping my
understanding, but it is good to know.

Thank you for sharing, Ehsan.

On Mar 9, 2017, 2:12 PM, "Ehsan Akhgari"  wrote:

> Hi everyone,
>
> A while ago a number of engineers including myself started to look into a
> performance project that turned into Quantum Flow.  The focus of the
> project is finding and prioritising the issues across the browser so we
> will need help from many of you to get them fixed.  I’m planning to write
> regular updates about the project and highlight the focus areas and the
> ongoing work.  In this first email I’m going to start by giving some
> background about how we started and where we are now.
>
> Quantum Flow is a performance task force focusing on eliminating
> performance cliffs in the browser that aren’t part of other Quantum
> projects.   Project Quantum’s overall focus is to deliver a high
> performance browser engine, and we are making some great progress on the
> four main sub-projects that are attacking large portions of the rendering
> pipeline, but that leaves us with various performance issues elsewhere in
> the browser which users may still hit, and we have to fix all such issues
> to ensure that the ultimate result is a next generation browser (and
> browser engine) we all can be proud of.
>
> A good way to think about how Quantum Flow fits with the rest of Quantum
> projects is to imagine it as the foundation we need for the other projects
> to build up on.  For example, if a bad bug somewhere in the browser causes
> a jank in some code for a few hundreds of milliseconds, all of the benefit
> that we obtain from cooperative scheduling of JS on the page with Quantum
> DOM, resolving the styles in parallel on all of your CPU cores with Quantum
> CSS and rasterize the page directly on the GPU with Quantum Render will
> still result in a janky experience[0].  So we want to ensure that we remove
> these types of roadblocks that would prevent the rest of Quantum to shine.
>
> The above description may feel a little big vague, and a little bit too
> broad, so let me try to explain how the need for Quantum Flow became
> apparent.  Around the beginning of this year a number of us gathered for a
> work week in Taipei with the goal of measuring and improving the
> performance of Firefox on a few large websites that we knew we had
> performance problems on.  Initially we were only focusing on Google
> Suite[1], and we started by profiling some of the test cases run by the
> Hasal framework[2].
>
> We had a bit of a difficult time finding actionable issues that we could
> improve since these websites are massive and it can be extremely difficult
> to find out why the overall time of some particular interaction is
> different when comparing different browsers head to head.  Also, we started
> seeing some performance issues on those websites that were coming from
> parts of the browser that were a bit surprising.  For example, Chris Pearce
> found out that on Google Docs, the content process can be blocked on the
> parent process for a synchronous IPC message to initialize spell
> checking[3] even though Google Docs doesn’t use the browser’s spell
> checking facilities!
>
> Following the breadcrumbs, we started to wonder what else we can learn
> about if we profile more usage scenarios in the browser.  As you may
> expect, we have found a fair amount of performance issues in various parts
> of the browser.  That’s hardly a surprise given the size and complexity of
> the code base, but we have also learned a lot about the adverse impact of
> some of these issues at play here.  These findings have uncovered larger
> problem areas that we decided we need to address as part of an initiative
> that we call Quantum Flow.
>
> I’m planning to focus on one important class of performance issues in this
> first email, mostly because it’s probably the most prevalent of the issues
> we have been looking at so far: synchronous IPC messages from the content
> process to the parent process.  We currently have a high number[4] of these
> types of messages.  But of course not every one of these messages is equal,
> we have gathered telemetry[5] on them.  We have a tracker bug[6] to track
> fixing them all.
>
> Some people here may remember the impact of synchronous I/O on the
> performance of Firefox a few years ago, or you may have had to deal with
> such performance issues in other applications.  Based on my experience
> measuring synchronous IPC, I now sometimes miss synchronous I/O.  :-)  I
> have seen synchronous IPC calls that take amount to *seconds* of pause time
> on the content process’s main thread.  To some extent, with e10s, we hide
> some of the pauses that happen on the content process.  For example, APZ[7]
> allows you to scroll even when the content process main thread is busy and
> we can force-paint on tab switch when the content process is busy running
> JS[8], but eventually 

Revocation protocol idea

2017-03-05 Thread Salvador de la Puente
Hi, folks.

Some time ago I started to think about an idea for experimenting with new,
powerful Web APIs: a sort of "deceptive site" database for harmful uses of
browser APIs. I've been refining that idea and came up with the concept of a
"revocation protocol" to revoke user-granted permissions for origins abusing
those APIs.

I published the idea on GitHub [1] and I was wondering about the utility and
feasibility of such a system, so I would appreciate any feedback you want to
provide.

I hope it will be of interest for you.

[1] https://github.com/delapuente/revocation-protocol
