Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
On Mon, Jan 4, 2016 at 12:31 PM, Bobby Holley  wrote:

> On Mon, Jan 4, 2016 at 9:11 AM, Richard Barnes 
> wrote:
>
>> Hey Daniel,
>>
>> Thanks for the heads-up.  This is a useful thing to keep in mind as we
>> work
>> through the SHA-1 deprecation.
>>
>> To be honest, this seems like a net positive to me, since it gives users a
>> clear incentive to uninstall this sort of software.
>>
>
> By "this sort of software" do you mean "Firefox"? Because that's what 95%
> of our users experiencing this are going to do absent anything clever on
> our end.
>
> We clearly need to determine the scale of the problem to determine how
> much time it's worth investing into this. But I think we should assume that
> an affected user is a lost user in this case.
>

I was being a bit glib because I think in a lot of cases, it won't be just
Firefox that's affected -- all of the user's HTTPS will quit working,
across all browsers.

I agree that it would be good to get more data here.  I think Adam is on
the right track.

--Richard


>
> bholley
>
>
>
>>
>> --Richard
>>
>> On Mon, Jan 4, 2016 at 3:19 AM, Daniel Holbert 
>> wrote:
>>
>> > Heads-up, from a user-complaint/ support / "keep an eye out for this"
>> > perspective:
>> >  * Starting January 1st 2016 (a few days ago), Firefox rejects
>> > recently-issued SSL certs that use the (obsolete) SHA1 hash
>> algorithm.[1]
>> >
>> >  * For users who unknowingly have a local SSL proxy on their machine
>> > from spyware/adware/antivirus (stuff like superfish), this may cause
>> > *all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
>> > autogenerated certificates.  (Every cert that gets sent to Firefox will
>> > use SHA1 and will have an issued date of "just now", which is after
>> > January 1 2016; hence, the cert is untrusted, even if the spyware put
>> > its root in our root store.)
>> >
>> >  * I'm not sure what action we should (or can) take about this, but for
>> > now we should be on the lookout for this, and perhaps consider writing a
>> > support article about it if we haven't already. (Not sure there's much
>> > help we can offer, since removing spyware correctly/completely can be
>> > tricky and varies on a case by case basis.)
>> >
>> > (Context: I received a family-friend-Firefox-support phone call today,
> who had this exact problem.  Every HTTPS site was broken for her in
>> > Firefox, since January 1st.  IE worked as expected (that is, it happily
>> > accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
>> > remotely figure out what the piece of spyware was or how to remove it --
>> > but the rejected certs reported their issuer as being "Digital Marketing
>> > Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
>> > turn up anything useful, unfortunately; so I suspect this is "niche"
>> > spyware, or perhaps the name is dynamically generated.)
>> >
> Anyway -- I have a feeling this will be a somewhat-widespread problem,
>> > among users who have spyware (and perhaps crufty "secure browsing"
>> > antivirus tools) installed.
>> >
>> > ~Daniel
>> >
>> > [1]
>> >
>> >
>> https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
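The date-based rejection Daniel describes can be sketched as follows. This is an illustrative model, not the actual PSM code; the function and constant names are invented:

```python
from datetime import date

# Certs issued on or after this date must not use SHA-1 (per the policy above).
SHA1_CUTOFF = date(2016, 1, 1)

def cert_acceptable(sig_algorithm: str, not_before: date) -> bool:
    """Illustrative model of the SHA-1 cutoff check (not the real PSM logic)."""
    if "sha1" in sig_algorithm.lower() and not_before >= SHA1_CUTOFF:
        # Corresponds to ERROR_CERT_SIGNATURE_ALGORITHM_DISABLED in Firefox.
        return False
    return True

# A MITM proxy mints certs on the fly, so notBefore is always "just now":
print(cert_acceptable("sha1WithRSAEncryption", date(2016, 1, 4)))    # False
print(cert_acceptable("sha256WithRSAEncryption", date(2016, 1, 4)))  # True
print(cert_acceptable("sha1WithRSAEncryption", date(2015, 6, 1)))    # True
```

This is why every site breaks at once for affected users: the proxy's root may be in the root store, but every leaf cert it generates trips the cutoff.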


nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Cameron Kaiser
What's different about nsIProtocolHandler in e10s? OverbiteFF works in 
45 aurora without e10s on, but fails to recognize the protocol it 
defines with e10s enabled. There's no explanation of this in the browser 
console and seemingly no error. Do I have to do extra work to register 
the protocol handler component, or is there some other problem? A 
cursory search of MDN was no help.


Cameron Kaiser


nsThread now leaks runnables if dispatch fails

2016-01-04 Thread Kyle Huey
(This is a continuation of the discussion in bug 1218297)

In bug 1155059 we made nsIEventTarget::Dispatch take an
already_AddRefed instead of a raw pointer.  This was done to allow the
dispatcher to transfer its reference to the runnable to the thread the
runnable will run on.  That solves a race condition we've had
historically where the destructor of a runnable can run on either the
dispatching thread or the executing thread, depending on whether the
executing thread can run the event to completion before the
dispatching thread destroys the nsCOMPtr on the stack.  So far, so
good.

In bug 1218297 we saw a case where dispatch to a thread (the socket
transport service thread in this case) fails because the thread has
already shut down.  In our brave new world, nsThread simply leaks the
runnable.  It can't release the reference it holds, because that would
reintroduce the race condition we wanted to avoid, and it can't
release the reference on the correct thread, because that thread has already gone away.
But now we have a new source of intermittent leaks.

Was this anticipated when designing bug 1155059?  I don't think
leaking is acceptable here, so we may need to back that out and return
to living with that race condition.

- Kyle
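The ownership dilemma Kyle describes can be modeled with a toy dispatcher. This sketch is purely illustrative (Python standing in for the C++ refcounting), and the names are invented:

```python
import queue

class ToyThread:
    """Toy model of nsThread's dispatch-after-shutdown dilemma."""

    def __init__(self):
        self._events = queue.Queue()
        self._shut_down = False
        self.leaked = []  # runnables we could neither run nor safely release

    def dispatch(self, runnable) -> bool:
        # Dispatch takes ownership (the already_AddRefed model): on success the
        # reference travels with the event and is released by the executing
        # thread after it runs, so the destructor thread is deterministic.
        if self._shut_down:
            # Releasing here would reintroduce the old race (destructor runs on
            # the dispatching thread), and the executing thread no longer
            # exists -- so the reference is simply dropped on the floor.
            self.leaked.append(runnable)
            return False
        self._events.put(runnable)
        return True

    def shutdown(self):
        self._shut_down = True

t = ToyThread()
t.shutdown()
print(t.dispatch(lambda: None))  # False: dispatch failed...
print(len(t.leaked))             # 1: ...and the runnable leaked
```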


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
First a bit of good news: The overall trend line for SHA-1 errors is not
spiking (yet).  Bin 6 of SSL_CERT_VERIFICATION_ERRORS corresponds to
ERROR_CERT_SIGNATURE_ALGORITHM_DISABLED, which is what you get when you
reject a bad SHA-1 cert.

https://ipv.sx/telemetry/general-v2.html?channels=beta%20release=SSL_CERT_VERIFICATION_ERRORS=6

Now for the bad news: Telemetry is actually useless for the specific case
we're talking about here.  Telemetry is submitted over HTTPS (about:config
/ toolkit.telemetry.server), so measurements from affected clients will
never reach the server.

So we can't get any measurements unless we revert the SHA-1 intolerance.
Given this, I'm sort of inclined to do that, collect some data, then maybe
re-enable it in 45 or 46.  What do others think?

--Richard
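The blind spot Richard points out is a textbook selection bias. A toy simulation (invented numbers, purely illustrative) makes it concrete:

```python
def observed_sha1_error_rate(clients):
    """Count SHA-1 cert errors as the telemetry server would see them.

    Each entry in `clients` is True if that client sits behind a SHA-1 MITM
    proxy. Affected clients fail *every* HTTPS connection -- including the
    telemetry upload itself -- so their error reports never arrive.
    """
    pings_received = 0
    errors_reported = 0
    for has_mitm_proxy in clients:
        if has_mitm_proxy:
            continue  # the ping is submitted over HTTPS and never gets through
        pings_received += 1
        # Unaffected clients essentially never hit the SHA-1 error bin.
    return errors_reported, pings_received

# 5% of this toy population is affected, yet the server-side trend line
# for the error stays perfectly flat:
clients = [i % 20 == 0 for i in range(1000)]
print(observed_sha1_error_rate(clients))  # (0, 950)
```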




Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Tanvi Vyas

On 1/4/16 11:20 AM, Richard Barnes wrote:

First a bit of good news: The overall trend line for SHA-1 errors is not
spiking (yet).  Bin 6 of SSL_CERT_VERIFICATION_ERRORS corresponds to
ERROR_CERT_SIGNATURE_ALGORITHM_DISABLED, which is what you get when you
reject a bad SHA-1 cert.

https://ipv.sx/telemetry/general-v2.html?channels=beta%20release=SSL_CERT_VERIFICATION_ERRORS=6

Now for the bad news: Telemetry is actually useless for the specific case
we're talking about here.  Telemetry is submitted over HTTPS (about:config
/ toolkit.telemetry.server), so measurements from affected clients will
never reach the server.

So we can't get any measurements unless we revert the SHA-1 intolerance.
Given this, I'm sort of inclined to do that, collect some data, then maybe
re-enable it in 45 or 46.  What do others think?


I agree that we should revert the change (assuming it's not already too 
late, given updates are over HTTPS) until we figure out how widespread 
this problem is and determine how to handle it.






Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 1:00 PM, Adam Roach wrote:
One of the points that Benjamin Smedberg has been trying to drive home 
is that data collection is everyone's job.


After sending, I realized that this is a slight misquote. It should have 
been "data is everyone's job" (i.e.: there's more to data than collection).


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 12:19 AM, Daniel Holbert wrote:
> I wasn't able to
> remotely figure out what the piece of spyware was or how to remove it --
> but the rejected certs reported their issuer as being "Digital Marketing
> Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
> turn up anything useful, unfortunately; so I suspect this is "niche"
> spyware, or perhaps the name is dynamically generated.)

UPDATE: in my family friend's case, the shoddy MITM spyware in question
was "Simmons Connect Research Application", a consumer profiling tool
that's tied to Experian, which users can voluntarily install in exchange
for points they can use to buy stuff.

She was able to fix the problem by uninstalling that program.

~Daniel


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Dave Townsend
aus5 (the server the app updater checks) is still pinned:
https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
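For context, static key pinning of the kind StaticHPKPins.h implements boils down to comparing SPKI hashes against a built-in allow list. A rough sketch; the host entry and key bytes below are made up:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """base64(SHA-256(SubjectPublicKeyInfo)) -- the HPKP pin format."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Hypothetical pin set; the real entries live in StaticHPKPins.h.
REAL_UPDATE_KEY = b"stand-in for the real server's SPKI bytes"
STATIC_PINS = {"aus5.mozilla.org": {spki_pin(REAL_UPDATE_KEY)}}

def pin_check(host: str, chain_spkis: list) -> bool:
    pins = STATIC_PINS.get(host)
    if pins is None:
        return True  # unpinned host: ordinary chain validation applies
    # A MITM proxy substitutes its own key, which won't match any pin.
    return any(spki_pin(spki) in pins for spki in chain_spkis)

print(pin_check("aus5.mozilla.org", [b"MITM proxy's SPKI bytes"]))  # False
print(pin_check("aus5.mozilla.org", [REAL_UPDATE_KEY]))             # True
```

Under this model, an update check against a pinned host would already fail for MITM'd users regardless of the SHA-1 cutoff.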

On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong  wrote:
> On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
> moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
>
>> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
>>
>>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
>>>
 Wouldn't the SSL cert failures also prevent submitting the telemetry
 payload to Mozilla's servers?

>>>
>>> Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
>>> for that matter! (I'm assuming the update-check is performed over HTTPS.)
>>>
>>
>> If I remember correctly, update checks are pinned to a specific CA, so
>> updates for users with software that MITM AUS would already be broken?
>
> That was removed awhile ago in favor of using mar signing as an exploit
> mitigation.


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 12:29 PM, Daniel Holbert wrote:

I had a similar thought, but I think it's too late for such telemetry to
be effective. The vast majority of users who are affected will have
already stopped using Firefox, or will immediately do so, as soon as
they discover that their webmail, bank, google, facebook, etc. don't work.


That's a valid point for the first batch of users that is hit with the 
issue on day one. (Aside: I wonder what the preponderant behavior will 
be when Chrome also starts choking on those sites.) It'll be interesting 
to see whether there's a detectable decline in user count that 
correlates with the beginning of the year.


At the same time, I know that Google tends to measure quite a bit about 
Chrome's behavior. Lacking our own numbers, perhaps we reach out to them 
and ask if they're willing to share what they know.


In any case, people install new things all the time. While it is too 
late to catch the large wave of users who are running into the problem 
this week, it would be nice to have data about this problem on an 
ongoing basis.



(We could have used this sort of telemetry before Jan 1 if we'd foreseen
this potential problem.  I don't blame us for not foreseeing this, though.)


You're correct: given our current habits, it's understandable that no 
one thought to measure this. I think there's an object lesson to be 
learned here.


Mozilla has a clear and stated intention to be more data driven in how 
we do things. One of the points that Benjamin Smedberg has been trying 
to drive home is that data collection is everyone's job. In the same way 
that we would never land code without thinking about how to test it, we 
need to develop a mindset in which we don't land code without 
considering whether and how to measure it. It's not a perfect analogy, 
since many things won't need specific new metrics, but it should be part 
of the mental checklist: "did I think about whether we need to measure 
anything about this feature?"


If just asking that question were part of our culture, I'm certain we 
would have thought of landing exactly this kind of telemetry as part of the 
same patch that disabled SHA-1; or, even better, in advance of it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
For reference, I've now filed a bug to cover outreach for the specific
tool that this user was using:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1236664

I'm also trying to get my hands on the software, but it's "invitation
only", so that may prove difficult.

~Daniel


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:33 AM, Josh Matthews wrote:
> Wouldn't the SSL cert failures also prevent submitting the telemetry
> payload to Mozilla's servers?

Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
for that matter! (I'm assuming the update-check is performed over HTTPS.)

So there might be literally nothing we can do to improve the situation
for these users, from a changes-to-Firefox perspective.

Even if we wanted to take the extreme measure of issuing an update to
delay our new-SHA1-certs-not-trusted-after date (extremely
hypothetical), this wouldn't help users who are affected by this
problem, because they couldn't receive the update (I think).


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Robert Strong
On Mon, Jan 4, 2016 at 10:53 AM, Chris Peterson 
wrote:

> On 1/4/16 10:45 AM, Daniel Holbert wrote:
>
>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
>>
>>> >Wouldn't the SSL cert failures also prevent submitting the telemetry
>>> >payload to Mozilla's servers?
>>>
>> Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
>> for that matter! (I'm assuming the update-check is performed over HTTPS.)
>>
>> So there might be literally nothing we can do to improve the situation
>> for these users, from a changes-to-Firefox perspective.
>>
>
> On Windows, Firefox is updated by a background service (the Mozilla
> Maintenance Service). Will the SHA-1/spyware proxy breakage affect the
> background service?

The maintenance service does not have network access; the update check
and download occur within Firefox itself. The update check and download have
already been addressed via several bugs.

Robert





Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 12:07 PM, Daniel Holbert wrote:
> UPDATE: in my family friend's case, the shoddy MITM spyware in question
> was "Simmons Connect Research Application", a consumer profiling tool
> that's tied to Experian which users can voluntarily install in exchange
> for points that you can use to buy stuff.

I reached out to Experian on Twitter:
 https://twitter.com/CodingExon/status/684105591288008704
...and also via a web form on one of their Simmons Connect pages.

I also sent the following to
http://www.digitalmarketresearchapps.com/contact.html , which seems to
be the HTTPS interception library that they're using:
==
Hi,
I'm a software engineer at Mozilla, working on the Firefox web browser,
and I'm contacting you about something extremely urgent -- I'm hoping to
reach an engineer who works on your HTTPS interception library/tool.

As of January 1st (several days ago), your tool *entirely breaks* HTTPS
connections in Firefox, due to your tool's reliance on a deprecated
security algorithm called SHA1. The importance of this is hard to
overstate -- for users who have your tool installed, their internet
access is *completely* broken, including their ability to download
browser updates.  Chrome users are (or will soon be) affected as well,
and Internet Explorer/Edge users will be affected at some point in the
next year -- all browsers are coordinating on phasing out SHA1
certificate support.

Specifically:
Based on a user report, it seems "Digital Market Research Apps" is
issuing certificates for a consumer profiling tool called "Simmons
Connect".  As of January 1st, this user was unable to visit any HTTPS
site in Firefox, because the tool was providing newly-generated
certificates using the obsolete SHA1 algorithm.  And per
https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
, such certificates are treated as untrusted.

Please contact me as soon as possible.  For users with your software
installed, it's of the utmost urgency that you issue an update, to make
your certificates use a newer algorithm than SHA1.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Chris Peterson

On 1/4/16 10:45 AM, Daniel Holbert wrote:

On 01/04/2016 10:33 AM, Josh Matthews wrote:

>Wouldn't the SSL cert failures also prevent submitting the telemetry
>payload to Mozilla's servers?

Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
for that matter! (I'm assuming the update-check is performed over HTTPS.)

So there might be literally nothing we can do to improve the situation
for these users, from a changes-to-Firefox perspective.


On Windows, Firefox is updated by a background service (the Mozilla 
Maintenance Service). Will the SHA-1/spyware proxy breakage affect the 
background service?



Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Robert Strong
On Mon, Jan 4, 2016 at 12:37 PM, Robert Strong  wrote:

> The maintenance service does not have network access and the update check
> and download occur within Firefox itself. The update check and download has
> already been addressed via several bugs.
>
I unintentionally made it seem that this would also handle the MITM case,
which is not handled by any of these bugs, since app update just uses
Firefox's built-in networking.





Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Robert Strong
I was under the impression (perhaps falsely) that the params for those
entries made it so that aus4 and aus5 don't enforce pinning.


On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend  wrote:

> aus5 (the server the app updater checks) is still pinned:
>
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Jesper Kristensen

Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:

On 01/04/2016 10:33 AM, Josh Matthews wrote:

Wouldn't the SSL cert failures also prevent submitting the telemetry
payload to Mozilla's servers?


Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
for that matter! (I'm assuming the update-check is performed over HTTPS.)


If I remember correctly, update checks are pinned to a specific CA, so 
updates for users with software that MITM AUS would already be broken?



Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Cameron Kaiser

On 1/4/16 12:09 PM, Dave Townsend wrote:

On Mon, Jan 4, 2016 at 12:03 PM, Cameron Kaiser  wrote:

What's different about nsIProtocolHandler in e10s? OverbiteFF works in 45
aurora without e10s on, but fails to recognize the protocol it defines with
e10s enabled. There's no explanation of this in the browser console and
seemingly no error. Do I have to do extra work to register the protocol
handler component, or is there some other problem? A cursory search of MDN
was no help.


Assuming you are registering the protocol handler in chrome.manifest
it will only be registered in the parent process but you will probably
need to register it in the child process too and make it do something
sensible in each case. You'll have to do that with JS in a frame or
process script.


That makes sense, except I'm not sure how to split it apart. Are there 
any examples of what such a parent-child protocol handler should look 
like in a basic sense? The p-c goop in netwerk/protocol/ is not really 
amenable to determining this, not least because it's written in C++.


Cameron Kaiser


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread jvehent
On Monday, January 4, 2016 at 2:27:57 PM UTC-5, Tanvi Vyas wrote:
> On 1/4/16 11:20 AM, Richard Barnes wrote:
> > So we can't get any measurements unless we revert the SHA-1 intolerance.
> > Given this, I'm sort of inclined to do that, collect some data, then maybe
> > re-enable it in 45 or 46.  What do others think?
> 
> I agree that we should revert the change (assuming it's not already too 
> late given updates are over HTTPS) until we figure out how widespread 
> this problem is and determine how to handle it.
> 

Updates are pulled from https://aus[2-5].mozilla.org/update/... They should be 
broken as well for those users.

- Julien


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Robert Strong
On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:

> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
>
>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
>>
>>> Wouldn't the SSL cert failures also prevent submitting the telemetry
>>> payload to Mozilla's servers?
>>>
>>
>> Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
>> for that matter! (I'm assuming the update-check is performed over HTTPS.)
>>
>
> If I remember correctly, update checks are pinned to a specific CA, so
> updates for users with software that MITM AUS would already be broken?

That was removed awhile ago in favor of using mar signing as an exploit
mitigation.



>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread David Keeler
> { "aus5.mozilla.org", true, true, true, 7, _mozilla },

Just for clarification and future reference, the second "true" means this
entry is in test mode, so it's not actually enforced by default.

On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend  wrote:

> aus5 (the server the app updater checks) is still pinned:
>
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
>
> On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong 
> wrote:
> > On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
> > moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
> >
> >> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
> >>
> >>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
> >>>
>  Wouldn't the SSL cert failures also prevent submitting the telemetry
>  payload to Mozilla's servers?
> 
> >>>
> >>> Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
> >>> for that matter! (I'm assuming the update-check is performed over
> HTTPS.)
> >>>
> >>
> >> If I remember correctly, update checks are pinned to a specific CA, so
> >> updates for users with software that MITM AUS would already be broken?
> >
> > That was removed awhile ago in favor of using mar signing as an exploit
> > mitigation.
> >
> >
> >
> >>
> >> ___
> >> dev-platform mailing list
> >> dev-platform@lists.mozilla.org
> >> https://lists.mozilla.org/listinfo/dev-platform
> >>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Robert Strong
On Mon, Jan 4, 2016 at 1:11 PM, Robert Strong  wrote:

> I was under the impression (perhaps falsely) that the params for those
> entries made it so that aus4 and aus5 don't enforce pinning.
>
and the pinning hack I added years ago was removed.


>
>
>
> On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend 
> wrote:
>
>> aus5 (the server the app updater checks) is still pinned:
>>
>> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
>>
>> On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong 
>> wrote:
>> > On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
>> > moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
>> >
>> >> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
>> >>
>> >>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
>> >>>
>>  Wouldn't the SSL cert failures also prevent submitting the telemetry
>>  payload to Mozilla's servers?
>> 
>> >>>
>> >>> Hmm... actually, I'll bet the cert errors will prevent Firefox
>> updates,
>> >>> for that matter! (I'm assuming the update-check is performed over
>> HTTPS.)
>> >>>
>> >>
>> >> If I remember correctly, update checks are pinned to a specific CA, so
>> >> updates for users with software that MITM AUS would already be broken?
>> >
>> > That was removed awhile ago in favor of using mar signing as an exploit
>> > mitigation.
>> >
>> >
>> >
>> >>
>> >> ___
>> >> dev-platform mailing list
>> >> dev-platform@lists.mozilla.org
>> >> https://lists.mozilla.org/listinfo/dev-platform
>> >>
>> > ___
>> > dev-platform mailing list
>> > dev-platform@lists.mozilla.org
>> > https://lists.mozilla.org/listinfo/dev-platform
>>
>
>


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Bobby Holley
If you're forking mozilla-central, that shouldn't be a problem, right?
On Jan 4, 2016 12:46 AM, "罗勇刚(Yonggang Luo)"  wrote:

> Well, cause that's not possible to implement APIs in webidl in the
> external dll with the xul
>
> On Mon, Jan 4, 2016 at 3:55 PM, Bobby Holley 
> wrote:
>
>> Why not use webidl to expose the apis you want in workers?
>> On Jan 3, 2016 8:08 PM, "罗勇刚(Yonggang Luo)" 
>> wrote:
>>
>>>
>>>
>>> On Mon, Jan 4, 2016 at 6:38 AM, Bobby Holley 
>>> wrote:
>>>
 As XPConnect module owner, I'd like to get it on record that we will
 almost
 certainly not take this or any support code for it (i.e. code in
 js-ctypes)
 in mozilla-central.

 Well, I already have a modified version of mozilla-central; if
>>> upstreaming is not possible, then I just need to maintain the fork and cannot
>>> upstream the code.
>>>
 On Sun, Jan 3, 2016 at 11:46 AM, Joshua Cranmer  
 wrote:

 > On 1/3/2016 10:24 AM, 罗勇刚(Yonggang Luo) wrote:
 >
 >> So that we could be able to access xpcom in worker.
 >> And we could be able  to implement thunderbird new msg protocol in
 pure
 >> javascript
 >>
 >
 > I will point out that Thunderbird developers are already looking into
 > replacing the xpcom use of message protocols, so if that is the
 primary
 > goal, then you are wasting your time, I am afraid.
 >
 > I will also point out that both JavaScript and C++ have moved on from
 the
 > time xpconnect was developed to the point that use of xpconnect
 requires
 > designing APIs that are uncomfortable to use from C++ or JavaScript
 (or
 > even both!), so it is a much better investment of time to move APIs to
 > newer paradigms than it is to try to develop a system that almost no
 one
 > really understands.
 >
 > --
 > Joshua Cranmer
 > Thunderbird and DXR developer
 > Source code archæologist
 >
 >
 > ___
 > dev-platform mailing list
 > dev-platform@lists.mozilla.org
 > https://lists.mozilla.org/listinfo/dev-platform
 >
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

>>>
>>>
>>>
>>> --
>>>  此致
>>> 礼
>>> 罗勇刚
>>> Yours
>>> sincerely,
>>> Yonggang Luo
>>>
>>
>
>
> --
>  此致
> 礼
> 罗勇刚
> Yours
> sincerely,
> Yonggang Luo
>


Re: Thanks for all the great teamwork with the Sheriffs in 2015!

2016-01-04 Thread Juan Gómez
Yeah! We do love our sheriffs! :)

On Sun, Jan 3, 2016 at 9:42 PM, Nicholas Nethercote 
wrote:

> Sheriffs make developers' lives easier. Thank you, sheriffs.
>
> Nick
>
> On Thu, Dec 31, 2015 at 1:19 AM, Carsten Book  wrote:
> > Hi,
> >
> > Sheriffing is not just about Checkins, Uplifts and Backouts - it's also a
> > lot of teamwork with different Groups and our Community like Developers,
> IT
> > Teams and Release Engineering and a lot more to keep the trees up and
> > running. And without this great teamwork our job would be nearly
> impossible!
> >
> > So far in 2015 we had around:
> >
> > 56471 changesets with 336218 changes to 70807 files in mozilla-central
> > and 4391 Bugs filed for intermittent failures (and a lot of them fixed).
> >
> > So thanks a lot for the great teamwork with YOU in 2015 - especially
> also a
> > great thanks to our Community Sheriffs like philor, nigelb and Aryx who
> > have done great work!
> >
> > I hope we can continue this great teamwork in 2016 and also the monthly
> > sheriff report with interesting news from the sheriffs and how you can
> > contribute will continue then :)
> >
> > Have a great start into 2016!
> >
> > Tomcat
> > on behalf of the Sheriffs-Team
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
On Mon, Jan 4, 2016 at 7:22 PM, Bobby Holley  wrote:

> If you're forking mozilla-central, that shouldn't be a problem, right?
>
I am asking for help with problems I ran into while trying to
implement XPConnect with js-ctypes.
So far, garbage collection is the biggest problem.
Even though WebIDL has its own advantages, XPCOM is more
language-neutral.

On Jan 4, 2016 12:46 AM, "罗勇刚(Yonggang Luo)"  wrote:
>
>> Well, cause that's not possible to implement APIs in webidl in the
>> external dll with the xul
>>
>> On Mon, Jan 4, 2016 at 3:55 PM, Bobby Holley 
>> wrote:
>>
>>> Why not use webidl to expose the apis you want in workers?
>>> On Jan 3, 2016 8:08 PM, "罗勇刚(Yonggang Luo)" 
>>> wrote:
>>>


 On Mon, Jan 4, 2016 at 6:38 AM, Bobby Holley 
 wrote:

> As XPConnect module owner, I'd like to get it on record that we will
> almost
> certainly not take this or any support code for it (i.e. code in
> js-ctypes)
> in mozilla-central.
>
> Well, I already have a modified version of mozilla-central; if
 upstreaming is not possible, then I just need to maintain the fork and cannot
 upstream the code.

> On Sun, Jan 3, 2016 at 11:46 AM, Joshua Cranmer  <
> pidgeo...@gmail.com>
> wrote:
>
> > On 1/3/2016 10:24 AM, 罗勇刚(Yonggang Luo) wrote:
> >
> >> So that we could be able to access xpcom in worker.
> >> And we could be able  to implement thunderbird new msg protocol in
> pure
> >> javascript
> >>
> >
> > I will point out that Thunderbird developers are already looking into
> > replacing the xpcom use of message protocols, so if that is the
> primary
> > goal, then you are wasting your time, I am afraid.
> >
> > I will also point out that both JavaScript and C++ have moved on
> from the
> > time xpconnect was developed to the point that use of xpconnect
> requires
> > designing APIs that are uncomfortable to use from C++ or JavaScript
> (or
> > even both!), so it is a much better investment of time to move APIs
> to
> > newer paradigms than it is to try to develop a system that almost no
> one
> > really understands.
> >
> > --
> > Joshua Cranmer
> > Thunderbird and DXR developer
> > Source code archæologist
> >
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>



 --
  此致
 礼
 罗勇刚
 Yours
 sincerely,
 Yonggang Luo

>>>
>>
>>
>> --
>>  此致
>> 礼
>> 罗勇刚
>> Yours
>> sincerely,
>> Yonggang Luo
>>
>


-- 
 此致
礼
罗勇刚
Yours
sincerely,
Yonggang Luo


Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
Heads-up, from a user-complaint/ support / "keep an eye out for this"
perspective:
 * Starting January 1st 2016 (a few days ago), Firefox rejects
recently-issued SSL certs that use the (obsolete) SHA1 hash algorithm.[1]

 * For users who unknowingly have a local SSL proxy on their machine
from spyware/adware/antivirus (stuff like superfish), this may cause
*all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
autogenerated certificates.  (Every cert that gets sent to Firefox will
use SHA1 and will have an issued date of "just now", which is after
January 1 2016; hence, the cert is untrusted, even if the spyware put
its root in our root store.)
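The failure mode described in this bullet can be sketched as a small standalone check; the types and the IsCertAcceptable helper below are illustrative, not the actual PSM code:

```cpp
#include <cassert>

// Illustrative model of the policy: SHA-1-signed certificates are
// rejected if their notBefore date falls in 2016 or later.
enum class HashAlg { Sha1, Sha256 };

struct Cert {
    HashAlg sigHash;    // hash algorithm used in the signature
    int notBeforeYear;  // year from the certificate's notBefore field
};

// Hypothetical helper; the real check lives in PSM / mozilla::pkix.
bool IsCertAcceptable(const Cert& cert) {
    if (cert.sigHash == HashAlg::Sha1 && cert.notBeforeYear >= 2016) {
        return false;  // newly issued SHA-1 cert: rejected
    }
    return true;  // SHA-256 certs, or SHA-1 certs issued earlier
}
```

Because a MITM proxy regenerates certificates on the fly, every cert it hands Firefox has a notBefore of "just now", so under this model every one of them fails the check regardless of whether its root was added to the root store.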

 * I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already. (Not sure there's much
help we can offer, since removing spyware correctly/completely can be
tricky and varies on a case by case basis.)

(Context: I received a family-friend-Firefox-support phone call today,
who had this exact problem.  Every HTTPS site was broken for her in
Firefox, since January 1st.  IE worked as expected (that is, it happily
accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
remotely figure out what the piece of spyware was or how to remove it --
but the rejected certs reported their issuer as being "Digital Marketing
Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
turn up anything useful, unfortunately; so I suspect this is "niche"
spyware, or perhaps the name is dynamically generated.)

Anyway -- I have a feeling this will be a somewhat-widespread problem
among users who have spyware (and perhaps crufty "secure browsing"
antivirus tools) installed.

~Daniel

[1]
https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
Well, because it's not possible to implement APIs in WebIDL in an external
DLL with XUL

On Mon, Jan 4, 2016 at 3:55 PM, Bobby Holley  wrote:

> Why not use webidl to expose the apis you want in workers?
> On Jan 3, 2016 8:08 PM, "罗勇刚(Yonggang Luo)"  wrote:
>
>>
>>
>> On Mon, Jan 4, 2016 at 6:38 AM, Bobby Holley 
>> wrote:
>>
>>> As XPConnect module owner, I'd like to get it on record that we will
>>> almost
>>> certainly not take this or any support code for it (i.e. code in
>>> js-ctypes)
>>> in mozilla-central.
>>>
>>> Well, I already have a modified version of mozilla-central; if
>> upstreaming is not possible, then I just need to maintain the fork and cannot
>> upstream the code.
>>
>>> On Sun, Jan 3, 2016 at 11:46 AM, Joshua Cranmer  
>>> wrote:
>>>
>>> > On 1/3/2016 10:24 AM, 罗勇刚(Yonggang Luo) wrote:
>>> >
>>> >> So that we could be able to access xpcom in worker.
>>> >> And we could be able  to implement thunderbird new msg protocol in
>>> pure
>>> >> javascript
>>> >>
>>> >
>>> > I will point out that Thunderbird developers are already looking into
>>> > replacing the xpcom use of message protocols, so if that is the primary
>>> > goal, then you are wasting your time, I am afraid.
>>> >
>>> > I will also point out that both JavaScript and C++ have moved on from
>>> the
>>> > time xpconnect was developed to the point that use of xpconnect
>>> requires
>>> > designing APIs that are uncomfortable to use from C++ or JavaScript (or
>>> > even both!), so it is a much better investment of time to move APIs to
>>> > newer paradigms than it is to try to develop a system that almost no
>>> one
>>> > really understands.
>>> >
>>> > --
>>> > Joshua Cranmer
>>> > Thunderbird and DXR developer
>>> > Source code archæologist
>>> >
>>> >
>>> > ___
>>> > dev-platform mailing list
>>> > dev-platform@lists.mozilla.org
>>> > https://lists.mozilla.org/listinfo/dev-platform
>>> >
>>> ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>
>>
>>
>> --
>>  此致
>> 礼
>> 罗勇刚
>> Yours
>> sincerely,
>> Yonggang Luo
>>
>


-- 
 此致
礼
罗勇刚
Yours
sincerely,
Yonggang Luo


Re: nsThread now leaks runnables if dispatch fails

2016-01-04 Thread Karl Tomlinson
Kyle Huey writes:

> (This is a continuation of the discussion in bug 1218297)
>
> In bug 1155059 we made nsIEventTarget::Dispatch take an
> already_AddRefed instead of a raw pointer.  This was done to allow the
> dispatcher to transfer its reference to the runnable to the thread the
> runnable will run on.  That solves a race condition we've had
> historically where the destructor of a runnable can run on either the
> dispatching thread or the executing thread, depending on whether the
> executing thread can run the event to completion before the
> dispatching thread destroys the nsCOMPtr on the stack.

IIUC solving the race condition wasn't the goal of the change in
API, but something that was done to retain existing workarounds
for leaks.

> In bug 1218297 we saw a case where dispatch to a thread (the socket
> transport service thread in this case) fails because the thread has
> already shut down.  In our brave new world, nsThread simply leaks the
> runnable.

It did previously leak in some quirky situations where objects
were intentionally created with no reference so as to leak. e.g.

  auto runnable = new FooRunnable(...);
  target->Dispatch(runnable, flags);

Since
https://hg.mozilla.org/mozilla-central/rev/2265e031ab97#l25.46
however, Dispatch() leaks even when the caller is doing
ref-counting properly.
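The ownership-transfer pattern, and the leak when dispatch fails, can be modeled in standalone C++; here std::unique_ptr stands in for already_AddRefed, and all of the types are illustrative rather than the real nsThread code:

```cpp
#include <cassert>
#include <memory>

// A runnable that records whether it ran.
struct Runnable {
    bool* ranFlag;
    void Run() { *ranFlag = true; }
};

struct EventTarget {
    bool acceptingEvents = true;

    // Takes ownership of the runnable. On success the executing side owns
    // it and destroys it after Run(); on failure the reference is
    // deliberately leaked, mirroring the post-bug-1155059 behavior,
    // because releasing it here could run the destructor on the wrong
    // thread.
    bool Dispatch(std::unique_ptr<Runnable> r) {
        if (!acceptingEvents) {
            r.release();  // leak rather than destroy on this thread
            return false;
        }
        r->Run();  // in the real code this happens on the target thread
        return true;
    }
};
```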

> It can't release the reference it holds, because that would
> reintroduce the race condition we wanted to avoid, and it can't
> release the reference on the correct thread so it's already gone away.
> But now we have a new source of intermittent leaks.
>
> Was this anticipated when designing bug 1155059?  I don't think
> leaking is acceptable here, so we may need to back that out and return
> to living with that race condition.

I agree this approach is not going to work out.  Short term, I
think the situation could be improved and most of those changes
kept by adding a DispatchOrLeak() variant that can be used by the
callers that want to leak.  Then we still have the leaks we had
prior to 2265e031ab97 but the new ones are resolved.

For the remaining (old) leaks I can think of two options:

1) Never call Dispatch() when it might fail and assert that it
   does not.

2) Keep an additional reference to the runnable in the caller if
   it doesn't want Dispatch() to release the last reference.

With the former, the caller needs to ensure the thread lives long
enough for the Dispatch() to succeed, or that clean up happens
before the thread is closed.  With the latter, the caller needs to
find another way to release when Dispatch() fails.

Making Dispatch() infallible would permit only option 1.  Keeping
Dispatch() fallible allows the caller to use either option.
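Option 2 can be sketched with a caller-side wrapper; std::shared_ptr stands in for XPCOM reference counting, and DispatchWithFallback is a hypothetical helper, not an existing API:

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct Task {
    int payload = 0;
};

struct Target {
    bool alive = true;

    // Consumes one reference on success (released later on the executing
    // thread in the real code); drops it immediately on failure.
    bool Dispatch(std::shared_ptr<Task> t) {
        if (!alive) {
            return false;  // the reference in `t` dies here
        }
        return true;
    }
};

// Option 2: the caller keeps an additional reference, so a failed
// Dispatch() never drops the last reference, and the caller can still
// clean up (or retry) on a thread where that is safe.
bool DispatchWithFallback(Target& target, std::shared_ptr<Task> task) {
    std::shared_ptr<Task> extra = task;  // the additional reference
    if (!target.Dispatch(std::move(task))) {
        // `extra` still owns the task; release it on a safe thread.
        return false;
    }
    return true;
}
```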


Re: Single- vs. double-precision FP in gfx types

2016-01-04 Thread Nicholas Nethercote
On Wed, Dec 9, 2015 at 3:49 PM, Nicholas Nethercote
 wrote:
>>
>> One interesting thing I found is that a *lot* of the functions that
>> take an nsRenderingContext or gfxContext do so because they end up
>> passing it into text run code -- gfxTextRun uses a gfxContext, via
>> gfxTextRunFactory::Parameters::mContext. However, only a few things
>> from the gfxContext are used by the text run code:

I found a good way to handle this, which let me convert 228
occurrences of gfxContext and nsRenderingContext (~13% of them) in the
tree to DrawTarget. Details in
https://bugzilla.mozilla.org/show_bug.cgi?id=1231550, the patch for
which I just landed on inbound.

Nick


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
WEBGL_dynamic_texture is due for a pruning refactor (I think I'm on
the hook for this), so don't base anything on it as-is.

IIRC, we don't believe WEBGL_security_sensitive_resources is viably
implementable in a safe way (timing attacks!), so I suggest ignoring
it.

Extending texture uploads to allow dom::Element uploads is easily done
from a technical standpoint, but doing it efficiently will take novel
non-trivial work. (not every dom::Element has a Layer/Image)

Adding picking to WebGL is a non-starter.

From an API standpoint, it could be interesting to try to use
ImageBitmaps as handles to snapshots of dom::Elements.

I think that it would be most efficient just to have a meeting about
these topics, instead of iterating slower via email.

-Jeff

On Mon, Jan 4, 2016 at 1:46 PM, Kearwood "Kip" Gilbert
 wrote:
> Hello All,
>
> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
> in front of the user.  Additionally, we often wish to show 2d graphics,
> video, and CSS animations as a texture in 3d scenes.  Creating these
> textures is something that CSS and HTML are great at.
>
> Unfortunately, I am not aware of an easy and efficient way to capture an
> animated rendering of an interactive HTML element and bring it into the WebGL
> context.  A "moz-element" -like API would be useful here.
>
> Perhaps we could solve this by implementing and extending the proposed
> WEBGL_dynamic_texture extension:
>
> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
> Essentially, we would extend the same API but allow the WDTStream
> interface to apply to more HTML elements, not just HTMLCanvasElement,
> HTMLImageElement, or HTMLVideoElement.
>
> We would also need to implement WEBGL_security_sensitive_resources to
> enforce the security model:
>
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
> Does this sound like a good idea?  I feel that this is something that
> all WebGL developers would want, as it would make building front-ends
> for games much easier.
>
> If others feel the same, I would also like to follow up with a proposal
> to make the captured HTML elements interactive through use of an
> explicit "pick buffer" added to canvases.
>
> I look forward to your feedback.
>
> Cheers,
>   - Kearwood "Kip" Gilbert
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Xidorn Quan
On Tue, Jan 5, 2016 at 8:46 AM, Kearwood "Kip" Gilbert
 wrote:
> Hello All,
>
> In WebVR, we often present UI as a Head's Up Display (HUD) that floats
> in front of the user.  Additionally, we often wish to show 2d graphics,
> video, and CSS animations as a texture in 3d scenes.  Creating these
> textures is something that CSS and HTML are great at.
>
> Unfortunately, I am not aware of an easy and efficient way to capture an
> animated rendering of an interactive HTML element and bring it into the WebGL
> context.  A "moz-element" -like API would be useful here.

Is it possible to access pixels' color inside the texture? If yes,
please mind the privacy issue around the :visited selector on links, as
users' history could be leaked by either painting the link directly, or
painting an element that uses -moz-element(#a-link-element) as its background.

More information can be found in [1] and also [2].

[1] 
https://developer.mozilla.org/en-US/docs/Web/CSS/Privacy_and_the_:visited_selector
[2] 
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Drawing_DOM_objects_into_a_canvas#Security

- Xidorn


Re: nsThread now leaks runnables if dispatch fails

2016-01-04 Thread Randell Jesup
>Kyle Huey writes:
>
>> (This is a continuation of the discussion in bug 1218297)
>>
>> In bug 1155059 we made nsIEventTarget::Dispatch take an
>> already_AddRefed instead of a raw pointer.  This was done to allow the
>> dispatcher to transfer its reference to the runnable to the thread the
>> runnable will run on.  That solves a race condition we've had
>> historically where the destructor of a runnable can run on either the
>> dispatching thread or the executing thread, depending on whether the
>> executing thread can run the event to completion before the
>> dispatching thread destroys the nsCOMPtr on the stack.
>
>IIUC solving the race condition wasn't the goal of the change in
>API, but something that was done to retain existing workarounds
>for leaks.

Solving the "who knows what thread this will be released on" problem was
a primary goal.  See comment 0, and the discussion it references.

Independently, bsmedberg wanted to make Dispatch infallible (at least
normally), thus making the pattern of "avoid leak in the case of
Dispatch failing" irrelevant (once done, which it hasn't been).

I started the conversion of Dispatch(rawptr) in the tree to
Dispatch(already_AddRefed<>); xidorn took over but hasn't finished
yet.  We should re-energize that.

There's considerable discussion in the bug of when leaks occur and also
the assumed behavior of DispatchToMainThread (which is especially
failure-prone because of how XPCOM dispatch works - even when MainThread
still exists, that can fail in shutdown).

>> In bug 1218297 we saw a case where dispatch to a thread (the socket
>> transport service thread in this case) fails because the thread has
>> already shut down.  In our brave new world, nsThread simply leaks the
>> runnable.

Yup.  In cases where we anticipate a possible Dispatch failure (which is
supposed to become impossible, but isn't currently) you can use the
(still-existing) raw ptr interface and handle Dispatch failure
explicitly to release the (leaked) reference, if it's safe.  Not all
cases are safe to release in that case (which is what drove the initial
bug filing, where it tried to release JS objects on Dispatch failure off
mainthread).  Leaking is better than crashing/sec-bugs.

If the problem is that when this happens, a leak is reported by infra,
then we could ping the leak-checker that there was a dispatch failure
and leaks are expected.  Even better, though maybe not possible, would be
to annotate the root of the leak and suppress anything tied to it.

>> It can't release the reference it holds, because that would
>> reintroduce the race condition we wanted to avoid, and it can't
>> release the reference on the correct thread so it's already gone away.
>> But now we have a new source of intermittent leaks.
>>
>> Was this anticipated when designing bug 1155059?  I don't think
>> leaking is acceptable here, so we may need to back that out and return
>> to living with that race condition.
>
>I agree this approach is not going to work out.  Short term, I
>think the situation could be improved and most of those changes
>kept by adding a DispatchOrLeak() variant that can be used by the
>callers that want to leak.  Then we still have the leaks we had
>prior to 2265e031ab97 but the new ones are resolved.
>
>For the remaining (old) leaks I can think of two options:
>
>1) Never call Dispatch() when it might fail and assert that it
>   does not.
>
>2) Keep an additional reference to the runnable in the caller if
>   it doesn't want Dispatch() to release the last reference.
>
>With the former, the caller needs to ensure the thread lives long
>enough for the Dispatch() to succeed, or that clean up happens
>before the thread is closed.  With the latter, the caller needs to
>find another way to release when Dispatch() fails.
>
>Making Dispatch() infallible would permit only option 1.  Keeping
>Dispatch() fallible allows the caller to use either option.

There's another (likely impractical) option: wrap all thread-sensitive
pointers in a ReleaseRefOnThread<> holder that refuses to release
objects on the wrong thread (RELEASE_ASSERTing instead), and integrate
that with Dispatch's failure path so that a release caused by a Dispatch
failure doesn't assert but instead leaks and marks the object somehow
for automation.
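A minimal standalone sketch of such a holder follows; the class name and methods are hypothetical, with assert standing in for a RELEASE_ASSERT:

```cpp
#include <cassert>
#include <thread>

// Refuses to drop its reference on any thread other than the one that
// created it; a Dispatch-failure path can opt into leaking instead.
template <typename T>
class ReleaseRefOnThread {
public:
    explicit ReleaseRefOnThread(T* aPtr)
        : mPtr(aPtr), mOwningThread(std::this_thread::get_id()) {}

    ~ReleaseRefOnThread() {
        if (!mPtr) {
            return;
        }
        // Stand-in for RELEASE_ASSERT: releasing on the wrong thread is fatal.
        assert(std::this_thread::get_id() == mOwningThread);
        delete mPtr;
    }

    // Escape hatch for the Dispatch-failure path: deliberately leak the
    // object (a real implementation would also mark it for automation).
    T* LeakOnDispatchFailure() {
        T* leaked = mPtr;
        mPtr = nullptr;
        return leaked;
    }

    T* get() const { return mPtr; }

private:
    ReleaseRefOnThread(const ReleaseRefOnThread&) = delete;
    ReleaseRefOnThread& operator=(const ReleaseRefOnThread&) = delete;

    T* mPtr;
    std::thread::id mOwningThread;
};
```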

Leaks are annoying, but only because of automation meant to catch
persistent/accumulating leaks - shutdown-time-only leaks are far less
concerning, and (waving hands) 99% of Dispatch()-failures that create
leaks are in shutdown.

Also - blocking event processing and DispatchToMainthread before the
final cycle-collect adds to the problem since anything released in the
final CC pass can't DispatchToMainthread; this has forced a number of
tricks or bits of code to handle Dispatch failure.

Once upon a time we didn't do very much with other threads, and also
didn't pass JS references around, etc.  We use threads a lot more and in
much more complex ways now.

As Nathan said in the bug:
> Well, the runnables themselves 

Re: nsThread now leaks runnables if dispatch fails

2016-01-04 Thread Bobby Holley
It seems like we should make the default behavior infallible + assert, and
introduce a separate dispatch method for the fallible cases.
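That split could look roughly like the following; the names are illustrative, not the real nsIEventTarget API:

```cpp
#include <cassert>

struct Event {
    const char* name;
};

struct Thread {
    bool acceptingEvents = true;

    // Fallible variant for callers that can genuinely handle failure,
    // e.g. racing against thread shutdown.
    bool DispatchFallible(const Event& e) {
        if (!acceptingEvents) {
            return false;
        }
        // ... queue `e` for the target thread ...
        (void)e;
        return true;
    }

    // Default variant: infallible from the caller's perspective; failing
    // here is a programming error caught by the assertion.
    void Dispatch(const Event& e) {
        bool ok = DispatchFallible(e);
        assert(ok && "dispatch after shutdown; use DispatchFallible");
        (void)ok;
    }
};
```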

On Mon, Jan 4, 2016 at 10:50 AM, Kyle Huey  wrote:

> (This is a continuation of the discussion in bug 1218297)
>
> In bug 1155059 we made nsIEventTarget::Dispatch take an
> already_AddRefed instead of a raw pointer.  This was done to allow the
> dispatcher to transfer its reference to the runnable to the thread the
> runnable will run on.  That solves a race condition we've had
> historically where the destructor of a runnable can run on either the
> dispatching thread or the executing thread, depending on whether the
> executing thread can run the event to completion before the
> dispatching thread destroys the nsCOMPtr on the stack.  So far, so
> good.
>
> In bug 1218297 we saw a case where dispatch to a thread (the socket
> transport service thread in this case) fails because the thread has
> already shut down.  In our brave new world, nsThread simply leaks the
> runnable.  It can't release the reference it holds, because that would
> reintroduce the race condition we wanted to avoid, and it can't
> release the reference on the correct thread so it's already gone away.
> But now we have a new source of intermittent leaks.
>
> Was this anticipated when designing bug 1155059?  I don't think
> leaking is acceptable here, so we may need to back that out and return
> to living with that race condition.
>
> - Kyle
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Richard Barnes
In any case, the pin check doesn't matter.  The certificate verification
will have failed well before the pin checks are done.

On Mon, Jan 4, 2016 at 4:14 PM, David Keeler  wrote:

> > { "aus5.mozilla.org", true, true, true, 7, _mozilla },
>
> Just for clarification and future reference, the second "true" means this
> entry is in test mode, so it's not actually enforced by default.
>
> On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend 
> wrote:
>
> > aus5 (the server the app updater checks) is still pinned:
> >
> >
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
> >
> > On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong 
> > wrote:
> > > On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
> > > moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
> > >
> > >> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
> > >>
> > >>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
> > >>>
> >  Wouldn't the SSL cert failures also prevent submitting the telemetry
> >  payload to Mozilla's servers?
> > 
> > >>>
> > >>> Hmm... actually, I'll bet the cert errors will prevent Firefox
> updates,
> > >>> for that matter! (I'm assuming the update-check is performed over
> > HTTPS.)
> > >>>
> > >>
> > >> If I remember correctly, update checks are pinned to a specific CA, so
> > >> updates for users with software that MITM AUS would already be broken?
> > >
> > > That was removed a while ago in favor of using MAR signing as an exploit
> > > mitigation.
> > >
> > >
> > >
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Kearwood "Kip" Gilbert
Thanks for your feedback, Jeff!

My comments below...

On 2016-01-04 2:50 PM, Jeff Gilbert wrote:
> WEBGL_dynamic_texture is due for a pruning refactor (I think I'm on
> the hook for this), so don't base anything on it as-is.
I'm glad I checked with you first.  I would be interested in any
alternatives that might be on the horizon.
>
> IIRC, we don't believe WEBGL_security_sensitive_resources is viably
> implementable in a safe way (timing attacks!), so I suggest ignoring
> it.
Perhaps enforcing a CORS-like model would be simpler to secure, by
limiting what content you can get handles for?
>
> Extending texture uploads to allow dom::Element uploads is easily done
> from a technical standpoint, but doing it efficiently will take novel
> non-trivial work. (not every dom::Element has a Layer/Image)
Agree.  Perhaps we could make the problem simpler by forcing an element
to have a layer when it is captured?  Perhaps similar to will-change.
>
> Adding picking to WebGL is a non-starter.
I suspect that most of the picking / interactivity could be implemented
in JS.  Some extensions may make this more efficient by enabling content
to perform picking asynchronously in a web worker with
WEBGL_shared_resources or perhaps by using glQuery and
EXT_disjoint_timer_query.

>
> From an API standpoint, it could be interesting to try to use
> ImageBitmaps as handles to snapshots of dom::Elements.
This is promising.  I'll explore this a bit.
>
> I think that it would be most efficient just to have a meeting about
> these topics, instead of iterating slower via email.

Sounds great, if you don't mind joining in.  I'll ping you and get
something set up.
>
> -Jeff
Thanks again, Jeff!

>
> On Mon, Jan 4, 2016 at 1:46 PM, Kearwood "Kip" Gilbert
>  wrote:
>> Hello All,
>>
>> In WebVR, we often present UI as a Heads-Up Display (HUD) that floats
>> in front of the user.  Additionally, we often wish to show 2d graphics,
>> video, and CSS animations as a texture in 3d scenes.  Creating these
>> textures is something that CSS and HTML are great at.
>>
>> Unfortunately, I am not aware of an easy and efficient way to capture an
>> animated, interactive HTML Element and bring it into the WebGL
>> context.  A "moz-element" -like API would be useful here.
>>
>> Perhaps we could solve this by implementing and extending the proposed
>> WEBGL_dynamic_texture extension:
>>
>> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>>
>> Essentially, we would extend the same API but allow the WDTStream
>> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> HTMLImageElement, or HTMLVideoElement.
>>
>> We would also need to implement WEBGL_security_sensitive_resources to
>> enforce the security model:
>>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>>
>> Does this sound like a good idea?  I feel that this is something that
>> all WebGL developers would want, as it would make building front-ends
>> for games much easier.
>>
>> If others feel the same, I would also like to follow up with a proposal
>> to make the captured HTML elements interactive through use of an
>> explicit "pick buffer" added to canvases.
>>
>> I look forward to your feedback.
>>
>> Cheers,
>>   - Kearwood "Kip" Gilbert
>>

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


"-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Kearwood "Kip" Gilbert
Hello All,

In WebVR, we often present UI as a Heads-Up Display (HUD) that floats
in front of the user.  Additionally, we often wish to show 2d graphics,
video, and CSS animations as a texture in 3d scenes.  Creating these
textures is something that CSS and HTML are great at.

Unfortunately, I am not aware of an easy and efficient way to capture an
animated, interactive HTML Element and bring it into the WebGL
context.  A "moz-element" -like API would be useful here.

Perhaps we could solve this by implementing and extending the proposed
WEBGL_dynamic_texture extension:

https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/

Essentially, we would extend the same API but allow the WDTStream
interface to apply to more HTML elements, not just HTMLCanvasElement,
HTMLImageElement, or HTMLVideoElement.

We would also need to implement WEBGL_security_sensitive_resources to
enforce the security model:

https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/

Does this sound like a good idea?  I feel that this is something that
all WebGL developers would want, as it would make building front-ends
for games much easier.

If others feel the same, I would also like to follow up with a proposal
to make the captured HTML elements interactive through use of an
explicit "pick buffer" added to canvases.

I look forward to your feedback.

Cheers,
  - Kearwood "Kip" Gilbert

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Robert O'Callahan
On Tue, Jan 5, 2016 at 11:50 AM, Jeff Gilbert  wrote:

> I think that it would be most efficient just to have a meeting about
> these topics, instead of iterating slower via email.
>

FWIW I feel like it's more efficient to use email, especially if not all
issues can be resolved in a single meeting.

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Jason Duell
Cameron,

The way the builtin protocols (HTTP/FTP/Websockets/etc) handle this is that
the protocol handler code checks whether we're in a child process or not
when a channel is created, and we hand out different things depending on
that.  In the parent, we hand out a "good old HTTP channel" (nsHttpChannel)
just as we've always done in single-process Firefox.  In the child we hand
out a stub channel (HttpChannelChild) that looks and smells like an
nsIHttpChannel, but actually uses IPDL (our C++ cross-platform messaging
language) to essentially shunt all the real work to the parent.  When
AsyncOpen is called on the child, the stub channel winds up telling the
parent to create a regular "real" http channel, which does the actual work
> of creating/sending an HTTP request, and as the replies come back, sending
the data to the child, which dispatches OnStart/OnData/OnStopRequest
messages as they arrive from the parent.

One key ingredient here is to make sure all cross-process communication is
asynchronous whenever possible (and that should be 95%+ of the time).  You
want to avoid blocking the child process waiting for synchronous
cross-process communication (and you're not allowed to block the parent
> waiting for the child to respond).  You also generally want to have the
parent channel send all the relevant data to the child that it will need to
service all nsI[Foo]Channel requests, as opposed to doing a "remote object"
style approach (where you'd send a message off to the parent process to ask
the "real" channel for the answer).  This is both because 1) that would be
painfully slow, and 2) the parent and child objects may not be in the same
state. For instance, if client code calls channel.isPending() on a child
channel that hasn't dispatched OnStopRequest yet, the answer should be
'true'.  But if you ask the parent channel, it may have already hit
OnStopRequest and sent the data for that to the child (where it's waiting
to be dispatched).  So for instance, HTTP channels ship the entire set of
HTTP response headers to the child as part of receiving OnStartRequest from
the parent, so that they can service any GetResponseHeader() calls without
asking the parent.
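A toy sketch of that state-caching approach (names hypothetical; the real HttpChannelChild is far more involved): every question the child-side channel can be asked is answered from data the parent already shipped, never via a synchronous round-trip, so the answer always reflects what the child has dispatched.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical child-side stub channel. All state arrives in bulk with
// the parent's OnStartRequest/OnStopRequest messages; every getter is
// then serviced from the cached copy, with no synchronous IPC and no
// risk of observing the parent's further-advanced state.
class ChildChannelStub {
public:
    // The parent packs the entire response-header set into the one
    // OnStartRequest message, as described for the real HTTP channel.
    void OnStartRequest(std::map<std::string, std::string> headers) {
        mHeaders = std::move(headers);
        mPending = true;
    }
    void OnStopRequest() { mPending = false; }

    // Reflects what the child has dispatched, not what the parent did.
    bool IsPending() const { return mPending; }

    std::string GetResponseHeader(const std::string& name) const {
        auto it = mHeaders.find(name);
        return it == mHeaders.end() ? std::string() : it->second;
    }

private:
    std::map<std::string, std::string> mHeaders;
    bool mPending = false;
};
```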

From talking to folks who know JS better than I do, it sounds like the
mechanism you'll want to use for all your cross-process communication is
the Message Manager:


https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager

https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager/Message_manager_overview

http://mxr.mozilla.org/mozilla-central/source/dom/base/nsIMessageManager.idl?force=1#15

One difference between C++/IPDL and JS/MM  is that IPDL has the builtin
concept of an IPDL "channel": it's like a pipe you set up.  Each C++ necko
channel in e10s sets up its own IPDL 'channel' (which is really just a
unique ID under the covers).  So when, for instance, an OnDataAvailable
message gets sent from the parent to the child, we automatically know which
necko channel it belongs to (from the IPDL channel it arrives on).  The
Message Manager's messages are more like DOM events--there's no notion of a
channel that they belong to, so you'll need to include as part of the
message some kind of ID that you map back to the necko channel that it's
for (I'd say use the URI, but that wouldn't work if you've got multiple
channels open to the same URI.  So you'll probably assign each channel a
GUID and keep a hashtable on both the parent and child that lets you map
from GUID->channel.)
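A minimal sketch of that routing scheme (illustrative, not the message-manager API itself): each channel registers a handler under a generated id, and every incoming message carries that id so it can be delivered to the right channel object.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Sketch of the routing described above: message-manager messages carry
// no channel identity of their own, so each message embeds a per-channel
// id that both sides use to find the right channel. Names illustrative.
class ChannelRouter {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // Assign a fresh id and remember the channel's handler under it.
    std::string Register(Handler h) {
        std::string id = "chan-" + std::to_string(mNextId++);
        mChannels[id] = std::move(h);
        return id;
    }

    void Unregister(const std::string& id) { mChannels.erase(id); }

    // Deliver a message tagged with a channel id; fails if that channel
    // has already gone away.
    bool Deliver(const std::string& id, const std::string& payload) {
        auto it = mChannels.find(id);
        if (it == mChannels.end()) return false;
        it->second(payload);
        return true;
    }

private:
    std::map<std::string, Handler> mChannels;
    int mNextId = 0;
};
```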

I'm happy to help some more off-list with examples of how our current
protocols handle various things as you have questions.  Hopefully this will
get you started, along with this inspirational gopher:// video:

   https://www.youtube.com/watch?v=WaSUyYSQie8

:)

Jason

On Mon, Jan 4, 2016 at 4:03 PM, Cameron Kaiser  wrote:

> On 1/4/16 12:09 PM, Dave Townsend wrote:
>
>> On Mon, Jan 4, 2016 at 12:03 PM, Cameron Kaiser 
>> wrote:
>>
>>> What's different about nsIProtocolHandler in e10s? OverbiteFF works in 45
>>> aurora without e10s on, but fails to recognize the protocol it defines
>>> with
>>> e10s enabled. There's no explanation of this in the browser console and
>>> seemingly no error. Do I have to do extra work to register the protocol
>>> handler component, or is there some other problem? A cursory search of
>>> MDN
>>> was no help.
>>>
>>> Assuming you are registering the protocol handler in chrome.manifest
>> it will only be registered in the parent process but you will probably
>> need to register it in the child process too and make it do something
>> sensible in each case. You'll have to do that with JS in a frame or
>> process script.
>>
>
> That makes sense, except I'm not sure how to split it apart. Are there any
> examples of what such a parent-child protocol handler should look like in a
> basic sense? The p-c goop in netwerk/protocol/ is not really amenable to
> determining this, not least of which being written in C++.

Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jeff Gilbert
On Mon, Jan 4, 2016 at 4:46 PM, Robert O'Callahan  wrote:
> On Tue, Jan 5, 2016 at 10:46 AM, Kearwood "Kip" Gilbert <
> kgilb...@mozilla.com> wrote:
>
>> In WebVR, we often present UI as a Heads-Up Display (HUD) that floats
>> in front of the user.  Additionally, we often wish to show 2d graphics,
>> video, and CSS animations as a texture in 3d scenes.  Creating these
>> textures is something that CSS and HTML are great at.
>>
>> Unfortunately, I am not aware of an easy and efficient way to capture an
>> animated, interactive HTML Element and bring it into the WebGL
>> context.  A "moz-element" -like API would be useful here.
>>
>> Perhaps we could solve this by implementing and extending the proposed
>> WEBGL_dynamic_texture extension:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
>
>
> This proposal seems unnecessarily complex. Is there a way for me to send
> feedback?

Yeah, we have a public mailing list: public_we...@khronos.org
As with anything WebGL related, you can also just talk to me about it.

>
> Essentially, we would extend the same API but allow the WDTStream
>> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> HTMLImageElement, or HTMLVideoElement.
>>
>> We would also need to implement WEBGL_security_sensitive_resources to
>> enforce the security model:
>>
>>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>
>
> I wish I'd known about this proposal earlier! This looks pretty good,
> though I'd always thought this would be too complicated to spec and
> implement to be practical. Glad to be wrong! Although I think we should get
> as much feedback as possible on this in case of hidden gotchas.

Historically in our investigations of this, it seemed very very hard
to guarantee a lack of timing vectors, even just for arithmetic.
(Particularly since we're handing the sources to the driver, which
will try to optimize away as much as it can)

>
> Does this sound like a good idea?  I feel that this is something that
>> all WebGL developers would want, as it would make building front-ends
>>
> for games much easier.
>>
>
> Yes, I think together these would be very useful.

It only helps some games. Most game development efforts will likely be
unaffected, since it's generally easiest to use an existing engine,
which will already have such things handled.

Most games will either just use overlaid HTML, or have an in-engine
solution for UI. The desire to embed web elements in a scene itself is
relatively rare. (Certainly few existing games do this)

>
> If others feel the same, I would also like to follow up with a proposal
>> to make the captured HTML elements interactive through use of an
>> explicit "pick buffer" added to canvases.
>>
>
> How would that work? Being able to synthesize mouse (touch?) events in HTML
> elements would add another set of issues.
>
> I assume the idea of mixing CSS 3D-transformed elements into a WebGL scene
> has been rejected for some reason?

This can't reasonably be done given the level of abstraction provided
by GL APIs.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Cameron Kaiser

On 1/4/16 6:29 PM, Kris Maglione wrote:

On Mon, Jan 04, 2016 at 08:27:59PM -0500, Jason Duell wrote:

In the child we hand out a stub channel (HttpChannelChild) that looks
and smells like an nsIHttpChannel, but actually uses IPDL (our C++
cross-platform messaging language) to essentially shunt all the real
work to the parent.  When AsyncOpen is called on the child, the stub
channel winds up telling the parent to create a regular "real" http
channel, which does the actual work of creating/sending an HTTP
request, and as the replies come back, sending the data to the child,
which dispatches OnStart/OnData/OnStopRequest messages as they arrive
from the parent.

...

From talking to folks who know JS better than I do, it sounds like the
mechanism you'll want to use for all your cross-process communication
is the Message Manager


I think that doing this in an add-on is asking for trouble. I don't
think I've ever seen nsIChannel implemented in JavaScript. It might
work, but I'd be surprised if it didn't break in very strange ways. Even
if it works perfectly, though, it's not going to be fun to maintain.


OverbiteFF implements a Gopher channel object which is, indeed, an 
nsIChannel. It does have proxy support. Worked pretty well up until now 
... ;)



If you can implement the protocol handler purely in the content process,
you're going to have a much better time.

If you can't, I think your best bet would probably be to use an input
stream channel in the child, and only proxy the nsIStreamListener
callbacks from the parent to the child (ideally by sending messages
containing ArrayBuffers, using readArrayBuffer and
nsIArrayBufferInputStream). That's pretty close to how most protocols in
add-ons are implemented anyway.


This is what I'm considering, though the channel implementation is not 
fully clean and does some front-end manipulation as well (for example, 
it adds a notification box to the tab with the current gopher URL, so it 
needs to iterate over open tabs to find the right one). It also has 
getters and setters for the load group and notification callbacks, and 
uses the thread manager to get the current thread to use as an event 
sink for the socket. Would all this work in the child process?


Cameron Kaiser

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Kris Maglione

On Mon, Jan 04, 2016 at 06:29:26PM -0800, Kris Maglione wrote:
I think that doing this in an add-on is asking for trouble. I don't 
think I've ever seen nsIChannel implemented in JavaScript.


I take that back. I see Overbite is indeed implemented this way. 
It still seems like asking for trouble, though.


--
Kris Maglione
Firefox Add-ons Engineer
Mozilla Corporation

Memory is like an orgasm.  It's a lot better if you don't have to fake
it.
--Seymour Cray

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Robert O'Callahan
On Tue, Jan 5, 2016 at 1:18 PM, Jonas Sicking  wrote:

> A big problem with sticking HTML/CSS content into WebGL is that WebGL
> effectively enables reading pixel data through custom shaders and
> timing attacks.
>

If you read
https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
carefully I think it's designed to prevent timing attacks by forbidding
shader control flow from depending on security-sensitive texture data.

It's hard for me to judge how implementable it is, but in principle it
should be doable. It requires analysis of shader code.
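A toy model of such an analysis (purely illustrative; a real checker would operate on the parsed GLSL AST): a taint bit marks expressions that read security-sensitive textures, taint propagates through operands, and any control-flow condition that transitively depends on tainted data is rejected, while tainted data may still flow through arithmetic.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy shader-expression node. "tainted" means this node directly reads
// a security-sensitive texture; operands model sub-expressions.
struct Expr {
    bool tainted;
    std::vector<std::shared_ptr<Expr>> operands;
};

// Taint propagates transitively through operands.
bool IsTainted(const Expr& e) {
    if (e.tainted) return true;
    for (const auto& op : e.operands) {
        if (IsTainted(*op)) return true;
    }
    return false;
}

// A branch or loop condition is legal only if it is untainted: sensitive
// data may feed arithmetic, but never control flow (the timing channel).
bool BranchIsLegal(const Expr& condition) { return !IsTainted(condition); }
```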

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jonas Sicking
On Mon, Jan 4, 2016 at 5:46 PM, Robert O'Callahan  wrote:
> On Tue, Jan 5, 2016 at 2:38 PM, Jeff Gilbert  wrote:
>
>> > Essentially, we would extend the same API but allow the WDTStream
>> >> interface to apply to more HTML elements, not just HTMLCanvasElement,
>> >> HTMLImageElement, or HTMLVideoElement.
>> >>
>> >> We would also need to implement WEBGL_security_sensitive_resources to
>> >> enforce the security model:
>> >>
>> >>
>> >>
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
>> >
>> >
>> > I wish I'd known about this proposal earlier! This looks pretty good,
>> > though I'd always thought this would be too complicated to spec and
>> > implement to be practical. Glad to be wrong! Although I think we should
>> get
>> > as much feedback as possible on this in case of hidden gotchas.
>>
>> Historically in our investigations of this, it seemed very very hard
>> to guarantee a lack of timing vectors, even just for arithmetic.
>> (Particularly since we're handing the sources to the driver, which
>> will try to optimize away as much as it can)
>>
>
> Feedback from GPU vendors would be key here, I think. I'd like to hear that
> before declaring it dead. If they already did --- RIP :-).

What if, rather than using a black-list approach indicating which
operations are forbidden, we use a white-list approach indicating a
small number of ways that security-sensitive texture data may be used?

For example only allow it to be scaled, added to and clamped to a
range. Or some such.

/ Jonas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Cameron Kaiser

On 1/4/16 5:27 PM, Jason Duell wrote:

That makes sense, except I'm not sure how to split it apart. Are there any
examples of what such a parent-child protocol handler should look like in a
basic sense? The p-c goop in netwerk/protocol/ is not really amenable to
determining this, not least of which being written in C++.


The way the builtin protocols (HTTP/FTP/Websockets/etc) handle this is that
the protocol handler code checks whether we're in a child process or not
when a channel is created, and we hand out different things depending on
that.  In the parent, we hand out a "good old HTTP channel" (nsHttpChannel)
just as we've always done in single-process Firefox.  In the child we hand
out a stub channel (HttpChannelChild) that looks and smells like an
nsIHttpChannel, but actually uses IPDL (our C++ cross-platform messaging
language) to essentially shunt all the real work to the parent.  When
AsyncOpen is called on the child, the stub channel winds up telling the
parent to create a regular "real" http channel, which does the actual work
of creating/sending an HTTP request, and as the replies come back, sending
the data to the child, which dispatches OnStart/OnData/OnStopRequest
messages as they arrive from the parent.


This seems rather complex (I'm reading the HttpChannelChild code right 
now). Was there a particular reason for doing it that way?


If the child did all the channel work, etc., would that be fine too? 
When the child finishes transferring data, does it or the parent have to 
do anything else special?



Hopefully this will
get you started, along with this inspirational gopher:// video:

https://www.youtube.com/watch?v=WaSUyYSQie8

:)


It's in the hole!

Cameron Kaiser
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Robert O'Callahan
On Tue, Jan 5, 2016 at 2:38 PM, Jeff Gilbert  wrote:

> Yeah, we have a public mailing list: public_we...@khronos.org
> As with anything WebGL related, you can also just talk to me about it.
>

I don't want to step on whatever you were thinking of changing, but I'm
happy to go to the list if you prefer that.

My initial thought is that it would be much simpler to leave all timing
issues up to the browser. If you draw one of these dynamic textures in a
requestAnimationFrame callback (either on the main thread or in a Worker),
the browser can guess the frame presentation time and bind the correct
source frame to the dynamic texture automatically. Latching would be
automatically scoped to the duration of the rAF callback.

My other comment is that the API currently doesn't support Workers but it
should. Perhaps the best way to do that would be an OffscreenVideo proxy
object similar to OffscreenCanvas (but without the constructor).

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving blame in Mercurial

2016-01-04 Thread Ehsan Akhgari
On Mon, Jan 4, 2016 at 9:33 PM, Ehsan Akhgari 
wrote:

> On 2015-12-11 6:28 PM, Boris Zbarsky wrote:
>
>> On 12/11/15 6:17 PM, Gregory Szorc wrote:
>>
>>> If you have ideas for making the blame/annotate functionality better,
>>> please capture them at https://www.mercurial-scm.org/wiki/BlamePlan
>>>
>>
> This is a great plan!  I'm excited to use this, I have never seen a good
> web-based blame UI.  :-)
>
> In case you haven't seen it yet, please check out :Gblame from
> fugitive.vim: I find its interface quite usable, by letting you select a
> commit and display it in a bottom pane.  This plus going to the blame on
> the parent commit (using a keyboard shortcut) actually makes blaming fun!
> I think we can get some good ideas from it for the advanced UI mentioned in
> the wiki.
>

I meant to include a screenshot: <
http://f.cl.ly/items/070G0J0P3T0O3G2u2f0i/Screen%20Shot%202014-02-07%20at%203.38.20%20PM.png
>

-- 
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Jonas Sicking
On Mon, Jan 4, 2016 at 4:54 PM, Robert O'Callahan  wrote:
> On Tue, Jan 5, 2016 at 1:18 PM, Jonas Sicking  wrote:
>>
>> A big problem with sticking HTML/CSS content into WebGL is that WebGL
>> effectively enables reading pixel data through custom shaders and
>> timing attacks.
>
>
> If you read
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
> carefully I think it's designed to prevent timing attacks by forbidding
> shader control flow from depending on security-sensitive texture data.
>
> It's hard for me to judge how implementable it is, but in principle it
> should be doable. It requires analysis of shader code.

That's really cool if it's possible to do!

We'd have to be careful to make sure that security-sensitive texture
data can't affect paint performance in other ways.

For example if such data can be used (directly or indirectly) as alpha
channel of a surface, you could probably stick a slow-rendering
texture behind it and that way check for certain values.

Also, if any of the GLSL functions have performance which depends on
the values passed to it, then we'd have to forbid passing
security-sensitive texture data to such a function. So for example if
abs(x) is faster for positive numbers than negative numbers, if sin(0)
is faster than sin(x!=0), or if all([f, f, f, f]) is faster than
all([t, t, t, t]) we have to forbid passing security-sensitive texture
data to those functions.

/ Jonas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving blame in Mercurial

2016-01-04 Thread Ehsan Akhgari

On 2015-12-11 6:28 PM, Boris Zbarsky wrote:

On 12/11/15 6:17 PM, Gregory Szorc wrote:

If you have ideas for making the blame/annotate functionality better,
please capture them at https://www.mercurial-scm.org/wiki/BlamePlan


This is a great plan!  I'm excited to use this, I have never seen a good 
web-based blame UI.  :-)


In case you haven't seen it yet, please check out :Gblame from
fugitive.vim: I find its interface quite usable, by letting you select a 
commit and display it in a bottom pane.  This plus going to the blame on 
the parent commit (using a keyboard shortcut) actually makes blaming 
fun!  I think we can get some good ideas from it for the advanced UI 
mentioned in the wiki.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Robert O'Callahan
On Tue, Jan 5, 2016 at 10:46 AM, Kearwood "Kip" Gilbert <
kgilb...@mozilla.com> wrote:

> In WebVR, we often present UI as a Heads-Up Display (HUD) that floats
> in front of the user.  Additionally, we often wish to show 2d graphics,
> video, and CSS animations as a texture in 3d scenes.  Creating these
> textures is something that CSS and HTML are great at.
>
> Unfortunately, I am not aware of an easy and efficient way to capture an
> animated, interactive HTML Element and bring it into the WebGL
> context.  A "moz-element" -like API would be useful here.
>
> Perhaps we could solve this by implementing and extending the proposed
> WEBGL_dynamic_texture extension:
>
>
> https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/


This proposal seems unnecessarily complex. Is there a way for me to send
feedback?

Essentially, we would extend the same API but allow the WDTStream
> interface to apply to more HTML elements, not just HTMLCanvasElement,
> HTMLImageElement, or HTMLVideoElement.
>
> We would also need to implement WEBGL_security_sensitive_resources to
> enforce the security model:
>
>
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/


I wish I'd known about this proposal earlier! This looks pretty good,
though I'd always thought this would be too complicated to spec and
implement to be practical. Glad to be wrong! Although I think we should get
as much feedback as possible on this in case of hidden gotchas.

Does this sound like a good idea?  I feel that this is something that
> all WebGL developers would want, as it would make building front-ends
>
for games much easier.
>

Yes, I think together these would be very useful.

If others feel the same, I would also like to follow up with a proposal
> to make the captured HTML elements interactive through use of an
> explicit "pick buffer" added to canvases.
>

How would that work? Being able to synthesize mouse (touch?) events in HTML
elements would add another set of issues.

I assume the idea of mixing CSS 3D-transformed elements into a WebGL scene
has been rejected for some reason?

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Kris Maglione

On Mon, Jan 04, 2016 at 06:47:10PM -0800, Cameron Kaiser wrote:
This is what I'm considering, though the channel implementation is not 
fully clean and does some front-end manipulation as well (for example, 
it adds a notification box to the tab with the current gopher URL, so 
it needs to iterate over open tabs to find the right one). It also has 
getters and setters for the load group and notification callbacks, and 
uses the thread manager to get the current thread to use as an event 
sink for the socket. Would all this work in the child process?


Many of those things will *only* work in the child process. You 
won't be able to iterate over tabs, but you'll only be handling 
streams that are in the same content process as your protocol 
handler. You certainly won't be able to get at the docshell of 
the window loading your channel from the parent process. You 
could technically still open a modal from the parent process, 
but probably not without running into problems.


--
Kris Maglione
Firefox Add-ons Engineer
Mozilla Corporation

Lisp has jokingly been called "the most intelligent way to misuse a
computer".  I think that description is a great compliment because it
transmits the full flavor of liberation: it has assisted a number of
our most gifted fellow humans in thinking previously impossible
thoughts.
--Edsger W. Dijkstra, CACM, 15:10



Re: "-Moz-Element" -like API by extending WEBGL_dynamic_texture

2016-01-04 Thread Robert O'Callahan
On Tue, Jan 5, 2016 at 2:38 PM, Jeff Gilbert  wrote:

> > Essentially, we would extend the same API but allow the WDTStream
> >> interface to apply to more HTML elements, not just HTMLCanvasElement,
> >> HTMLImageElement, or HTMLVideoElement.
> >>
> >> We would also need to implement WEBGL_security_sensitive_resources to
> >> enforce the security model:
> >>
> >>
> >>
> https://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/
> >
> >
> > I wish I'd known about this proposal earlier! This looks pretty good,
> > though I'd always thought this would be too complicated to spec and
> > implement to be practical. Glad to be wrong! Although I think we should
> get
> > as much feedback as possible on this in case of hidden gotchas.
>
> Historically in our investigations of this, it seemed very very hard
> to guarantee a lack of timing vectors, even just for arithmetic.
> (Particularly since we're handing the sources to the driver, which
> will try to optimize away as much as it can)
>

Feedback from GPU vendors would be key here, I think. I'd like to hear that
before declaring it dead. If they already did --- RIP :-).

Rob


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Kris Maglione

On Mon, Jan 04, 2016 at 08:27:59PM -0500, Jason Duell wrote:
In the child we hand out a stub channel (HttpChannelChild) that looks 
and smells like an nsIHttpChannel, but actually uses IPDL (our C++ 
cross-process messaging language) to essentially shunt all the real 
work to the parent.  When AsyncOpen is called on the child, the stub 
channel winds up telling the parent to create a regular "real" http 
channel, which does the actual work of creating/sending an HTTP 
request, and as the replies come back, sending the data to the child, 
which dispatches OnStart/OnData/OnStopRequest messages as they arrive 
from the parent.


...

From talking to folks who know JS better than I do, it sounds like the 
mechanism you'll want to use for all your cross-process communication 
is the Message Manager


I think that doing this in an add-on is asking for trouble. I don't 
think I've ever seen nsIChannel implemented in JavaScript. It might 
work, but I'd be surprised if it didn't break in very strange ways. Even 
if it works perfectly, though, it's not going to be fun to maintain.


The actual proxying is going to be tricky, too, for that matter.

If you can implement the protocol handler purely in the content process, 
you're going to have a much better time.


If you can't, I think your best bet would probably be to use an input 
stream channel in the child, and only proxy the nsIStreamListener 
callbacks from the parent to the child (ideally by sending messages 
containing ArrayBuffers, using readArrayBuffer and 
nsIArrayBufferInputStream). That's pretty close to how most protocols in 
add-ons are implemented anyway.
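
The shape of that proxying, with the Gecko specifics stripped out, is roughly:
the parent drives the OnStartRequest/OnDataAvailable/OnStopRequest callbacks,
serializes each one into a message, and the child replays the messages to its
local listener. A toy, Gecko-free sketch (message names and shapes invented;
the real thing would use the message manager and nsIArrayBufferInputStream):

```javascript
// Toy model of proxying nsIStreamListener callbacks across processes.
// "send" stands in for the message manager; both ends live in one
// process here purely so the sketch is self-contained.

function makeParentSide(send) {
  // The parent forwards each listener callback as a message.
  return {
    onStartRequest() { send({ type: "start" }); },
    onDataAvailable(chunk) { send({ type: "data", chunk }); },
    onStopRequest(status) { send({ type: "stop", status }); },
  };
}

function makeChildSide(listener) {
  // The child replays messages to its local nsIStreamListener-alike.
  return function onMessage(msg) {
    switch (msg.type) {
      case "start": listener.onStartRequest(); break;
      case "data":  listener.onDataAvailable(msg.chunk); break;
      case "stop":  listener.onStopRequest(msg.status); break;
    }
  };
}
```

The ordering guarantee the child needs (start, then data chunks, then stop)
falls out of the in-order message delivery, which is the same property the
real message manager provides.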


--
Most of the great triumphs and tragedies of history are caused not by
people being fundamentally good or fundamentally evil, but by people
being fundamentally people.
--Terry Pratchett



Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread David Rajchenbach-Teller
Yes, I mostly meant XPConnect.

On 04/01/16 14:36, smaug wrote:
> On 01/03/2016 06:35 PM, David Rajchenbach-Teller wrote:
>> Accessing XPCOM in a worker will most likely break the garbage-collector
>> in novel and interesting ways,
> 
> Why would it? Our workers are full of xpcom stuff.
> One needs to be careful about what to touch, sure, and deal with CC and GC
> handling as on the main thread.
> 
> But perhaps you meant xpconnect, not xpcom.



Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread smaug

On 01/03/2016 06:55 PM, David Rajchenbach-Teller wrote:

Well, XPConnect is designed for the main thread, and many of the things
it does assume that everything takes place on the main thread.

An example, off the top of my head: whenever objects cross between
JavaScript and C++ through XPConnect, they need to be retained in
hashtables to preserve unicity, etc. For performance reasons, these
hashtables are not protected against concurrent access.


Indeed, which is why he is thinking of re-implementing XPConnect for workers 
using js-ctypes, no?
(Personally, I'd try to avoid XPConnect as much as possible and instead try to 
figure out how to make
 the WebIDL bindings usable for whatever use case we're talking about here.)



Another example: JavaScript objects can point towards C++ objects and
vice-versa. The garbage-collector (well, the cycle-collector) needs to
walk the graph in specific ways to get rid of possible cycles that
involve both C++ and JS.

This happens in workers too.



The worker's implementation of the
cycle-collector is much simpler (and quite different) than the main
thread's,

Not really. The cycle collector is exactly the same (CycleCollectorRuntime is a 
bit more complicated on the main thread)...



since it doesn't need to handle XPConnect.

...XPConnect just adds a couple of extra JS value containers to trace. Not at 
all that different from the JSHolder hashtable, which is also used in workers.
XPCWrappedJS is possibly the weirdest part of XPConnect from a CC/GC point of 
view, given that it also implements weak-reference handling.


But anyhow, I would certainly try to avoid any new XPConnect usage and try to 
figure out how to utilize WebIDL. WebIDL can express APIs in a more JS-friendly 
way, and in Gecko's case the C++ side is also far simpler (and faster) than 
implementing IDL interfaces in C++.


-Olli



Mixing both accidentally can lead to unpredictable results.

Oh, and XPConnect pointers simply cannot be dereferenced from worker
threads. Attempting to do so will crash libxul by design, to avoid accidents.

etc.

Cheers,
  David

On 03/01/16 17:45, 罗勇刚(Yonggang Luo)  wrote:



On Mon, Jan 4, 2016 at 12:35 AM, David Rajchenbach-Teller
> wrote:

 Accessing XPCOM in a worker will most likely break the garbage-collector
 in novel and interesting ways, so I don't suggest heading in that
 direction.

I'd like to hear more information about that.
For example, if I set a finalizer for each XPCOM instance
object (JavaScript), and release the XPCOM instance when the object is
GCed, would that still break the garbage collector?
Or do we have other garbage-collector problems? I am interested in that.



 Cheers,
  David

 On 03/01/16 17:24, 罗勇刚(Yonggang Luo)  wrote:
 > So that we could be able to access xpcom in worker.
 > And we could be able  to implement thunderbird new msg protocol in
 pure
 > javascript
 >
 > On Sun, Jan 3, 2016 at 11:09 PM, Josh Matthews
 >
 > wrote:
 >
 >> What is the motivation for this work?

 >>
 >> ___
 >> dev-platform mailing list
 >> dev-platform@lists.mozilla.org
 
 >> https://lists.mozilla.org/listinfo/dev-platform
 >>
 >
 >
 >




--
  此致
礼
罗勇刚
Yours
 sincerely,
Yonggang Luo




Platform and UX

2016-01-04 Thread Anne van Kesteren
At Mozlando we discussed how we could better coordinate between
platform and UX. Put simply, platform's problem is the lack of UX
input on API design, and UX's problem is not hearing about new platform
features soon enough to influence the API design to better suit the
needs of users.

As a small step towards improving this Ehsan created

  https://lists.mozilla.org/listinfo/dev-platform-ux
  https://groups.google.com/forum/#!forum/mozilla.dev.platform.ux

where ideally we discuss newly considered platform features early
enough that UX input still has value. Please subscribe if this aligns
with your interests.


-- 
https://annevankesteren.nl/


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread smaug

On 01/03/2016 06:35 PM, David Rajchenbach-Teller wrote:

Accessing XPCOM in a worker will most likely break the garbage-collector
in novel and interesting ways,


Why would it? Our workers are full of xpcom stuff.
One needs to be careful about what to touch, sure, and deal with CC and GC 
handling as on the main thread.

But perhaps you meant xpconnect, not xpcom.



so I don't suggest heading in that direction.

Cheers,
  David

On 03/01/16 17:24, 罗勇刚(Yonggang Luo)  wrote:

So that we could be able to access xpcom in worker.
And we could be able  to implement thunderbird new msg protocol in pure
javascript

On Sun, Jan 3, 2016 at 11:09 PM, Josh Matthews 
wrote:


What is the motivation for this work?










[Firefox Desktop] Issues found: December 28th to January 1st

2016-01-04 Thread Andrei Vaida

Hi everyone and Happy New Year!

Here's the list of new issues found and filed by the Desktop Manual QA 
team last week, December 28 - January 1 (Week 53).


Additional details on the team's priorities last week, as well as the 
plans for the current week are available at:


   https://public.etherpad-mozilla.org/p/DesktopManualQAWeeklyStatus


*RELEASE CHANNEL*
none

*BETA CHANNEL*
none

*AURORA CHANNEL*
none

*NIGHTLY CHANNEL*
Severity: NORMAL
ID: 1236025
Summary: Call broke when one of the machines was turned back on after lockscreen
Status: NEW
Resolution: (none)
Is a regression: TBD
Assigned to: NOBODY


*ESR CHANNEL*
none


Regards,
Andrei
Andrei Vaida| QC Team Lead

SOFTVISION | 57 Republicii Street, 400489 Cluj-Napoca, Romania
Email: andrei.va...@softvision.ro  | 
Web: www.softvision.ro 


The content of this communication is classified as SOFTVISION 
Confidential and Proprietary Information.





Re: nsThread now leaks runnables if dispatch fails

2016-01-04 Thread Ting-Yu Chou
> In bug 1218297 we saw a case where dispatch to a thread (the socket
> transport service thread in this case) fails because the thread has
> already shut down.  In our brave new world, nsThread simply leaks the
> runnable.  It can't release the reference it holds, because that would
> reintroduce the race condition we wanted to avoid, and it can't
> release the reference on the correct thread so it's already gone away.
>

I am a bit confused by this: if the executing thread has already shut
down, why would releasing the reference the dispatching thread holds
reintroduce the race condition? Who is racing with the dispatching
thread?

Ting


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
On Mon, Jan 4, 2016 at 9:53 PM, smaug  wrote:

> On 01/03/2016 06:55 PM, David Rajchenbach-Teller wrote:
>
>> Well, XPConnect is designed for the main thread, and many of the things
>> it does assume that everything takes place on the main thread.
>>
>> An example, from the topic of my head: whenever objects cross between
>> JavaScript and C++ through XPConnect, they need to be retained in
>> hashtables to preserve unicity, etc. For performance reasons, these
>> hashtables are not protected against concurrent access.
>>
>> Indeed. Which is why he is thinking to re-implement xpconnect for workers
> using js-ctypes, no?
> (personally I'd try to avoid xpconnect as much as possible and rather try
> to figure out how to make
>  the webidl bindings usable for whatever use case we're talking about
> here.)
>
>
> Another example: JavaScript objects can point towards C++ objects and
>> vice-versa. The garbage-collector (well, the cycle-collector) needs to
>> walk the graph in specific ways to get rid of possible cycles that
>> involve both C++ and JS.
>>
> This happens in workers too.
>
>
> The worker's implementation of the
>> cycle-collector is much simpler (and quite different) than the main
>> thread's,
>>
> Not really. Cycle collector is exactly the same (CycleCollectorRuntime is
> a bit more complicated in main thread)...

So the "a bit more complicated" part is what deals with the XPConnect
things? I am trying to figure out a way to support that.

>
>
>
> since it doesn't need to handle XPConnect.
>>
> ...XPConnect just adds couple of extra js value containers to trace. Not
> at all that different
> to JSHolder hashtable which is used also in workers.
> XPCWrappedJS is possibly the weirdest part of xpconnect from CC/GC point
> of view, given that it
> implements also weakreference handling.
>
>
> But anyhow, I would certainly try to avoid any new xpconnect usage and try
> to figure out how to utilize
> webidl. Webidl can express APIs in more JS-friendly way and in Gecko case
> also the C++ side is way simpler
> (and faster) than implementing idl interfaces in C++.
>
1. I was not trying to implement new things in XPCOM. Our company (Kingsoft)
maintains a fork of Thunderbird, and at the moment we have to re-use XPCOM
components that already exist in the Thunderbird/Gecko world. Beyond the pure
HTML things, there is too much we have to re-use (XPCOM things), and we are
facing performance problems: the Mork DB and the MIME parser both work
synchronously, so I have to find a way to call these components in a worker
directly, so that they do not cause UI lag on the main thread. That is the
whole reason I was trying to re-implement XPConnect with js-ctypes: so that I
can call the existing components in a worker and free the main thread.

2. Besides that, another important reason: I didn't see any way (or example)
to use WebIDL without building xul.dll, and maintaining xul.dll is a big
burden for a small team (for us, about 8 developers). So even if we tried to
implement things with WebIDL, we would need a way to build WebIDL components
independently, and at the moment I don't see any possibility of that.

3. There is also an advantage to XPCOM: WebIDL seems to be merely for
JavaScript, but XPCOM is more language-neutral. We can use XPCOM from
Java/Python and other languages, which looks like an advantage of XPCOM.

-Olli
>
>
>
> > Mixing both
>
>> accidentally can lead to unpredictable results.
>>
>> Oh, and XPConnect pointers can simply not be dereferenced from worker
>> threads. Attempting to do so will crash libxul by design to avoid
>> accidents.
>>
>> etc.
>>
>> Cheers,
>>   David
>>
>> On 03/01/16 17:45, 罗勇刚(Yonggang Luo)  wrote:
>>
>>>
>>>
>>> On Mon, Jan 4, 2016 at 12:35 AM, David Rajchenbach-Teller
>>> > wrote:
>>>
>>>  Accessing XPCOM in a worker will most likely break the
>>> garbage-collector
>>>  in novel and interesting ways, so I don't suggest heading in that
>>>  direction.
>>>
>>> I'd like to hear more information about that,
>>> For example, if I setting a finalize for  each XPCOM instance
>>> Object(javascript), when the Object's is GCed, then I release
>>> the xpcom instance, is that would not break the garbage-collector?
>>> Or we have other problems about garbage-collector, I am interested in
>>> that.
>>>
>>>
>>>
>>>  Cheers,
>>>   David
>>>
>>>  On 03/01/16 17:24, 罗勇刚(Yonggang Luo)  wrote:
>>>  > So that we could be able to access xpcom in worker.
>>>  > And we could be able  to implement thunderbird new msg protocol in
>>>  pure
>>>  > javascript
>>>  >
>>>  > On Sun, Jan 3, 2016 at 11:09 PM, Josh Matthews
>>>  >
>>>  > wrote:
>>>  >
>>>  >> What is the motivation for this work?
>>>
>>>  >>
>>>  >> ___
>>>  >> 

Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Joshua Cranmer 

On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:

1、I was not trying implement new things in xpcom, our company(Kingsoft) are
maintaining a fork of thunderbird, and at the current time
We have to re-use existing XPCOM components that already exists in  the
thunderbid gecko world, beyond pure html
things, there is too much things we have to re-use(xpcom things), and we
are facing  performance problems,
the mork-db and the mime-parse, they are all working in synchronous way, so
I have to figure out a way to calling these components
in a worker directly, so that they would not cause UI-lag in main-thread.
That's all the reason why I was trying to re-implement XPConnect with
js-ctypes. So  that I can calling
the exist components in the worker. And free the main-thread.


Mork, by design, can't be used off the main thread. So even if you try to 
subvert that with js-ctypes etc., it's not going to work very well, to say 
nothing of the problems of maintaining a pseudo-reimplementation of XPConnect.

3、 There is an advantage of XPCOM, WebIDL seems merely for Javascript, but
XPCOM seems more language-neutral, we could be able to
use xpcom in Java/Python and other languages, that's looks like a advantage
of XPCOM.
XPIDL is effectively a fork of an old version of IDL. Its interfaces cannot 
cleanly represent union types or array types, something that WebIDL does far 
better, as WebIDL is partly a fork of a newer version of IDL. I believe there 
already exist WebIDL bindings for C++, JS, and Rust, and extending them to 
Java or Python would not be a challenging task. The only complexity is that 
the WebIDL bindings do not use a grand-central dispatch mechanism like 
XPConnect; that merely means that adding new bindings requires writing a code 
generator and feeding all the interfaces through it, instead of implementing 
several customized dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have 
worked for the past several years.
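
To make the distinction concrete: XPConnect-style bindings route every call
through one generic dispatcher that consults interface metadata at run time,
whereas WebIDL-style bindings generate a dedicated wrapper per interface ahead
of time. A toy illustration, with the interface-description format and all
names invented for the sketch:

```javascript
// Toy contrast: generic run-time dispatch vs. generated per-interface stubs.

// Interface metadata, as an XPConnect-like dispatcher would consume it.
const nsIFoo = { name: "nsIFoo", methods: ["bar", "baz"] };

// "Grand-central" dispatch: one function handles every interface,
// looking each method up in metadata on every call.
function genericInvoke(iface, impl, method, args) {
  if (!iface.methods.includes(method)) {
    throw new Error(`${iface.name} has no method ${method}`);
  }
  return impl[method](...args);
}

// Code-generator approach: emit a dedicated wrapper per interface,
// so calls are direct property accesses with no per-call metadata lookup.
function generateBinding(iface) {
  return function wrap(impl) {
    const wrapper = {};
    for (const m of iface.methods) {
      wrapper[m] = (...args) => impl[m](...args);
    }
    return wrapper;
  };
}
```

The trade-off is exactly the one described above: the generic dispatcher needs
no per-interface code, while the generator produces faster, more specialized
stubs but must be re-run for every interface.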


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist



Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
On Mon, Jan 4, 2016 at 11:37 PM, Joshua Cranmer  
wrote:

> On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:
>
>> 1、I was not trying implement new things in xpcom, our company(Kingsoft)
>> are
>> maintaining a fork of thunderbird, and at the current time
>> We have to re-use existing XPCOM components that already exists in  the
>> thunderbid gecko world, beyond pure html
>> things, there is too much things we have to re-use(xpcom things), and we
>> are facing  performance problems,
>> the mork-db and the mime-parse, they are all working in synchronous way,
>> so
>> I have to figure out a way to calling these components
>> in a worker directly, so that they would not cause UI-lag in main-thread.
>> That's all the reason why I was trying to re-implement XPConnect with
>> js-ctypes. So  that I can calling
>> the exist components in the worker. And free the main-thread.
>>
>
> Mork, by design, can't be used off main-thread. So even if you're trying
> to subvert it by using JS-ctypes and etc., it's not going to work very
> well, let alone the problems you have with trying to maintain a
> pseudo-reimplementation of xpconnect.
>
>> 3、 There is an advantage of XPCOM, WebIDL seems merely for Javascript, but
>> XPCOM seems more language-neutral, we could be able to
>> use xpcom in Java/Python and other languages, that's looks like a
>> advantage
>> of XPCOM.
>>
> XPIDL is effectively a fork of an old version of IDL. Its interfaces are
> incapable of cleanly representing union types or array types very well,
> something that WebIDL does far better, as WebIDL is partly a fork of a
> newer version of IDL. I believe there already exists WebIDL bindings for
> C++, JS, and Rust, and extending it to Java or Python would not be a
> challenging task. The only complexity is that the WebIDL bindings does not
> use a grand-central dispatch mechanism like XPConnect, but that merely
> means that adding new bindings requires writing a code generator and
> feeding all the interfaces through it instead of implementing several
> customized dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have worked
> for the past several years.

The core issue is that WebIDL is not usable in non-Gecko code yet. I didn't
see any example in the Thunderbird source tree, but I do see lots of XPCOM
code, so I have no clue how to use WebIDL in the Thunderbird world.

>
>
> --
> Joshua Cranmer
> Thunderbird and DXR developer
> Source code archæologist
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>



-- 
 此致
礼
罗勇刚
Yours
sincerely,
Yonggang Luo


Re: iframe overflow:hidden from build29 FF and XUL?

2016-01-04 Thread rvj

PS

Assuming that the current behaviour is correct per the W3C spec, is it 
not possible to retain the old -moz-scroll-bars-none property, so that 
content can still be scrolled (especially with touch-screen devices) 
without showing the unwanted scrollbars?

"rvj"  wrote in message 
news:qowdnspkipsfhhflnz2dnuu7-k2dn...@mozilla.org...


It appears that from build 29 (the introduction of touch-screen gesture 
scrolling), an iframe with overflow:hidden no longer works, and displays 
annoying scrollbars.

...and adding scrolling="no" simply prevents any kind of touch-screen 
scrolling.

Is there a reason why this has not been fixed?

Is there a workaround?

PS If this is the wrong newsgroup, please suggest an appropriate one.



---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus







Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Bobby Holley
On Mon, Jan 4, 2016 at 9:11 AM, Richard Barnes  wrote:

> Hey Daniel,
>
> Thanks for the heads-up.  This is a useful thing to keep in mind as we work
> through the SHA-1 deprecation.
>
> To be honest, this seems like a net positive to me, since it gives users a
> clear incentive to uninstall this sort of software.
>

By "this sort of software" do you mean "Firefox"? Because that's what 95%
of our users experiencing this are going to do absent anything clever on
our end.

We clearly need to determine the scale of the problem to determine how much
time it's worth investing into this. But I think we should assume that an
affected user is a lost user in this case.

bholley



>
> --Richard
>
> On Mon, Jan 4, 2016 at 3:19 AM, Daniel Holbert 
> wrote:
>
> > Heads-up, from a user-complaint/ support / "keep an eye out for this"
> > perspective:
> >  * Starting January 1st 2016 (a few days ago), Firefox rejects
> > recently-issued SSL certs that use the (obsolete) SHA1 hash algorithm.[1]
> >
> >  * For users who unknowingly have a local SSL proxy on their machine
> > from spyware/adware/antivirus (stuff like superfish), this may cause
> > *all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
> > autogenerated certificates.  (Every cert that gets sent to Firefox will
> > use SHA1 and will have an issued date of "just now", which is after
> > January 1 2016; hence, the cert is untrusted, even if the spyware put
> > its root in our root store.)
> >
> >  * I'm not sure what action we should (or can) take about this, but for
> > now we should be on the lookout for this, and perhaps consider writing a
> > support article about it if we haven't already. (Not sure there's much
> > help we can offer, since removing spyware correctly/completely can be
> > tricky and varies on a case by case basis.)
> >
> > (Context: I received a family-friend-Firefox-support phone call today,
> > who this had this exact problem.  Every HTTPS site was broken for her in
> > Firefox, since January 1st.  IE worked as expected (that is, it happily
> > accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
> > remotely figure out what the piece of spyware was or how to remove it --
> > but the rejected certs reported their issuer as being "Digital Marketing
> > Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
> > turn up anything useful, unfortunately; so I suspect this is "niche"
> > spyware, or perhaps the name is dynamically generated.)
> >
> > Anyway -- I have a feeling this will be somewhat-widespread problem,
> > among users who have spyware (and perhaps crufty "secure browsing"
> > antivirus tools) installed.
> >
> > ~Daniel
> >
> > [1]
> >
> >
> https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Improving blame in Mercurial

2016-01-04 Thread Robert Kaiser

Gregory Szorc schrieb:

If you have ideas for making the blame/annotate functionality better,
please capture them at https://www.mercurial-scm.org/wiki/BlamePlan or let
me know by replying to this message. Your feedback will be used to drive
what improvements Mercurial makes.


I really liked a lot about the blame implementation that Bonsai had; 
unfortunately, we don't have a running copy any more AFAIK, so it's hard 
to take a look at it.

For one thing, instead of showing the author and changeset (or version, 
in CVS) for every line, it grouped consecutive lines changed in the same 
checkin and showed that information only once for the group.
For another, when you moved your mouse over that author/changeset 
"blame" info, it didn't just show a tooltip but a full-fledged HTML box 
in which e.g. bug numbers in the checkin comments were linked, so you 
could open that link (in a new tab, probably) right away and go directly 
to the bug that changed the code.

Those two features would be great to see in hg blame/annotate as well.
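
The grouping behaviour described above is easy to state precisely: collapse
consecutive lines whose (author, changeset) annotation is identical, and show
the annotation once per run. A sketch, with the annotation shape invented for
illustration:

```javascript
// Group consecutive blame lines that share the same (author, changeset),
// as Bonsai did, so the annotation is shown once per run of lines.
function groupBlame(lines) {
  // lines: [{ author, changeset, text }, ...] in file order.
  const groups = [];
  for (const line of lines) {
    const last = groups[groups.length - 1];
    if (last && last.author === line.author &&
        last.changeset === line.changeset) {
      last.lines.push(line.text); // extend the current run
    } else {
      groups.push({
        author: line.author,
        changeset: line.changeset,
        lines: [line.text],
      });
    }
  }
  return groups;
}
```

The rich hover box would then be rendered once per group, from the group's
changeset metadata.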

KaiRo



Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Eric Rescorla
On Mon, Jan 4, 2016 at 9:31 AM, Bobby Holley  wrote:

> On Mon, Jan 4, 2016 at 9:11 AM, Richard Barnes 
> wrote:
>
> > Hey Daniel,
> >
> > Thanks for the heads-up.  This is a useful thing to keep in mind as we
> work
> > through the SHA-1 deprecation.
> >
> > To be honest, this seems like a net positive to me, since it gives users
> a
> > clear incentive to uninstall this sort of software.
> >
>
> By "this sort of software" do you mean "Firefox"? Because that's what 95%
> of our users experiencing this are going to do absent anything clever on
> our end.
>
> We clearly need to determine the scale of the problem to determine how much
> time it's worth investing into this. But I think we should assume that an
> affected user is a lost use in this case.
>

I believe that Chrome will be making a similar change at a similar time

-Ekr


> bholley
>
>
>
> >
> > --Richard
> >
> > On Mon, Jan 4, 2016 at 3:19 AM, Daniel Holbert 
> > wrote:
> >
> > > Heads-up, from a user-complaint/ support / "keep an eye out for this"
> > > perspective:
> > >  * Starting January 1st 2016 (a few days ago), Firefox rejects
> > > recently-issued SSL certs that use the (obsolete) SHA1 hash
> algorithm.[1]
> > >
> > >  * For users who unknowingly have a local SSL proxy on their machine
> > > from spyware/adware/antivirus (stuff like superfish), this may cause
> > > *all* HTTPS pages to fail in Firefox, if their spyware uses SHA1 in its
> > > autogenerated certificates.  (Every cert that gets sent to Firefox will
> > > use SHA1 and will have an issued date of "just now", which is after
> > > January 1 2016; hence, the cert is untrusted, even if the spyware put
> > > its root in our root store.)
> > >
> > >  * I'm not sure what action we should (or can) take about this, but for
> > > now we should be on the lookout for this, and perhaps consider writing
> a
> > > support article about it if we haven't already. (Not sure there's much
> > > help we can offer, since removing spyware correctly/completely can be
> > > tricky and varies on a case by case basis.)
> > >
> > > (Context: I received a family-friend-Firefox-support phone call today,
> > > who this had this exact problem.  Every HTTPS site was broken for her
> in
> > > Firefox, since January 1st.  IE worked as expected (that is, it happily
> > > accepts the spyware's SHA1 certs, for now at least).  I wasn't able to
> > > remotely figure out what the piece of spyware was or how to remove it
> --
> > > but the rejected certs reported their issuer as being "Digital
> Marketing
> > > Research App" (instead of e.g. Digicert or Verisign).  Googling didn't
> > > turn up anything useful, unfortunately; so I suspect this is "niche"
> > > spyware, or perhaps the name is dynamically generated.)
> > >
> > > Anyway -- I have a feeling this will be somewhat-widespread problem,
> > > among users who have spyware (and perhaps crufty "secure browsing"
> > > antivirus tools) installed.
> > >
> > > ~Daniel
> > >
> > > [1]
> > >
> > >
> >
> https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
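The failure mode described above follows directly from the policy. A minimal sketch of the rejection rule (hypothetical function and field names; the real check lives in Gecko's certificate verifier and operates on full chains, not single certs):

```python
from datetime import datetime

# Firefox's policy: reject SHA-1 certs whose validity starts on or after
# 2016-01-01. An MITM proxy mints a fresh cert for every site "just now",
# so a SHA-1-signing proxy trips the rule on all HTTPS traffic at once.
SHA1_CUTOFF = datetime(2016, 1, 1)
SHA1_ALGS = {"sha1WithRSAEncryption", "ecdsa-with-SHA1"}

def rejects_cert(signature_algorithm, not_before):
    """True if the cert would be rejected under the new SHA-1 policy."""
    return signature_algorithm in SHA1_ALGS and not_before >= SHA1_CUTOFF

print(rejects_cert("sha1WithRSAEncryption", datetime(2016, 1, 2)))    # True
print(rejects_cert("sha1WithRSAEncryption", datetime(2015, 6, 1)))    # False
print(rejects_cert("sha256WithRSAEncryption", datetime(2016, 1, 2)))  # False
```

Note how a cert issued before the cutoff still passes, which is why ordinary sites were unaffected while proxy-generated certs all failed at once on January 1st.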


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Eric Rescorla
On Mon, Jan 4, 2016 at 9:47 AM, Mike Hoye  wrote:

> On 2016-01-04 12:31 PM, Bobby Holley wrote:
>
>> By "this sort of software" do you mean "Firefox"? Because that's what 95%
>> of our users experiencing this are going to do absent anything clever on
>> our end. We clearly need to determine the scale of the problem to determine
>> how much time it's worth investing into this. But I think we should assume
>> that an affected user is a lost user in this case.
>>
> Is consumer-grade home networking gear on our radar here?  Many, many home
> APs will self-generate SHA-1 certificates on their first boot after a reset.
>

The certificates from those devices aren't valid in any case, because
they do not chain to a trust anchor.
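A toy model of the distinction (illustrative names only, nothing like Gecko's actual verifier): a router's self-signed cert fails chain building long before the signature hash is considered, whereas a spyware proxy that installed its own root *does* chain, and only then hits the SHA-1 date check:

```python
# Toy model: chain validation succeeds only if the chain terminates at a
# root the client trusts. Root names here are illustrative placeholders.
BUILTIN_ROOTS = {"Example Public Root CA"}
USER_ADDED_ROOTS = {"Digital Marketing Research App"}  # spyware-installed

def chains_to_anchor(chain):
    """chain: issuer names, leaf first, root last."""
    root = chain[-1]
    return root in BUILTIN_ROOTS or root in USER_ADDED_ROOTS

# Home AP: self-signed, chains to nothing the client trusts -> always invalid.
print(chains_to_anchor(["Home Router Self-Signed"]))  # False
# Spyware proxy: chains to its own installed root -> trusted, until the
# SHA-1 issuance-date check rejects its freshly minted certs.
print(chains_to_anchor(["mail.example.com", "Digital Marketing Research App"]))  # True
```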

-Ekr


>
> - mhoye
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Bobby Holley
On Mon, Jan 4, 2016 at 10:01 AM, 罗勇刚(Yonggang Luo) 
wrote:

>
>
> On Tue, Jan 5, 2016 at 1:33 AM, Bobby Holley 
> wrote:
>
>> On Mon, Jan 4, 2016 at 7:57 AM, 罗勇刚(Yonggang Luo) 
>> wrote:
>>
>>> On Mon, Jan 4, 2016 at 11:37 PM, Joshua Cranmer  
>>> wrote:
>>>
>>> > On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:
>>> >
>>> >> 1、I was not trying implement new things in xpcom, our
>>> company(Kingsoft)
>>> >> are
>>> >> maintaining a fork of thunderbird, and at the current time
>>> >> We have to re-use existing XPCOM components that already exists in
>>> the
>>> >> thunderbid gecko world, beyond pure html
>>> >> things, there is too much things we have to re-use(xpcom things), and
>>> we
>>> >> are facing  performance problems,
>>> >> the mork-db and the mime-parse, they are all working in synchronous
>>> way,
>>> >> so
>>> >> I have to figure out a way to calling these components
>>> >> in a worker directly, so that they would not cause UI-lag in
>>> main-thread.
>>> >> That's all the reason why I was trying to re-implement XPConnect with
>>> >> js-ctypes. So  that I can calling
>>> >> the exist components in the worker. And free the main-thread.
>>> >>
>>> >
>>> > Mork, by design, can't be used off main-thread. So even if you're
>>> trying
>>> > to subvert it by using JS-ctypes and etc., it's not going to work very
>>> > well, let alone the problems you have with trying to maintain a
>>> > pseudo-reimplementation of xpconnect.
>>> >
>>> >> 3、 There is an advantage of XPCOM, WebIDL seems merely for
>>> Javascript, but
>>> >> XPCOM seems more language-neutral, we could be able to
>>> >> use xpcom in Java/Python and other languages, that's looks like a
>>> >> advantage
>>> >> of XPCOM.
>>> >>
>>> > XPIDL is effectively a fork of an old version of IDL. Its interfaces
>>> are
>>> > incapable of cleanly representing union types or array types very well,
>>> > something that WebIDL does far better, as WebIDL is partly a fork of a
>>> > newer version of IDL. I believe there already exists WebIDL bindings
>>> for
>>> > C++, JS, and Rust, and extending it to Java or Python would not be a
>>> > challenging task. The only complexity is that the WebIDL bindings does
>>> not
>>> > use a grand-central dispatch mechanism like XPConnect, but that merely
>>> > means that adding new bindings requires writing a code generator and
>>> > feeding all the interfaces through it instead of implementing several
>>> > customized dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have
>>> worked
>>> > for the past several years.
>>>
>>> The core issue is WebIDL is not usable in non-gecko code yet, I didn't
>>> see
>>> any example in thunderbird source tree, but I see lots of XPCOM codes.
>>> So I
>>> do not have any clue how to using WebIDL in thunderbird world
>>>
>>
>> You would certainly need to blaze this trail. But it's certainly possible
>> (given that you're willing to fork mozilla-central), and I guarantee it
>> will be more tractable than rewriting XPConnect with JS-CTypes.
>>
>>
> In fact, the only reason to use WebIDL would be to let JavaScript call
> some native code, but I have no need to do that: all the new code we are
> writing is pure JavaScript. If we wanted to implement something like the
> IMAP protocol, we would choose pure JavaScript for it, as jsmime does.
> The reason for rewriting XPConnect with JS-ctypes is not to implement new
> things with XPCOM, but to re-use existing things.
>

Are you talking about calling existing native code from JS? You can do that
with WebIDL by writing binding entry points for the classes you want.

If you're talking about impersonating arbitrary XPCOM components with
JS-implemented code...good luck doing that with anything other than
XPConnect. ;-)


> Besides, since the mork database can only run on the main thread, would it
> be possible to run libmime in a worker? libmime is really CPU-intensive;
> I've seen it occupy 100% of a CPU. If it could be moved off the main
> thread, that would be a great performance improvement.
>
>>
>>> >
>>> >
>>> > --
>>> > Joshua Cranmer
>>> > Thunderbird and DXR developer
>>> > Source code archæologist
>>> >
>>>
>>>
>>>
>>> --
>>>  此致
>>> 礼
>>> 罗勇刚
>>> Yours
>>> sincerely,
>>> Yonggang Luo
>>>
>>
>>
>
>
> --
>  此致
> 礼
> 罗勇刚
> Yours
> sincerely,
> Yonggang Luo
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Bobby Holley
On Mon, Jan 4, 2016 at 7:57 AM, 罗勇刚(Yonggang Luo) 
wrote:

> On Mon, Jan 4, 2016 at 11:37 PM, Joshua Cranmer  
> wrote:
>
> > On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:
> >
> >> 1. I was not trying to implement new things in XPCOM. Our company
> >> (Kingsoft) maintains a fork of Thunderbird, and at the moment we have
> >> to re-use existing XPCOM components that already exist in the
> >> Thunderbird/Gecko world. Beyond pure HTML things, there is too much we
> >> have to re-use (XPCOM things), and we are facing performance problems:
> >> the mork DB and the MIME parser both work synchronously, so I have to
> >> figure out a way to call these components from a worker directly, so
> >> that they do not cause UI lag on the main thread. That is the whole
> >> reason I was trying to re-implement XPConnect with js-ctypes: so that
> >> I can call the existing components from a worker and free the main
> >> thread.
> >>
> >
> > Mork, by design, can't be used off main-thread. So even if you're trying
> > to subvert it by using JS-ctypes and etc., it's not going to work very
> > well, let alone the problems you have with trying to maintain a
> > pseudo-reimplementation of xpconnect.
> >
> >> 3. There is an advantage to XPCOM: WebIDL seems to be merely for
> >> JavaScript, while XPCOM seems more language-neutral. We could use
> >> XPCOM from Java/Python and other languages, which looks like an
> >> advantage of XPCOM.
> >>
> > XPIDL is effectively a fork of an old version of IDL. Its interfaces are
> > incapable of cleanly representing union types or array types very well,
> > something that WebIDL does far better, as WebIDL is partly a fork of a
> > newer version of IDL. I believe there already exist WebIDL bindings for
> > C++, JS, and Rust, and extending them to Java or Python would not be a
> > challenging task. The only complexity is that the WebIDL bindings do not
> > use a grand-central dispatch mechanism like XPConnect, but that merely
> > means that adding new bindings requires writing a code generator and
> > feeding all the interfaces through it instead of implementing several
> > customized dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have worked
> > for the past several years.
>
> The core issue is that WebIDL is not usable in non-Gecko code yet. I
> didn't see any examples in the Thunderbird source tree, but I see lots of
> XPCOM code, so I have no clue how to use WebIDL in the Thunderbird world.
>

You would certainly need to blaze this trail. But it's certainly possible
(given that you're willing to fork mozilla-central), and I guarantee it
will be more tractable than rewriting XPConnect with JS-CTypes.


>
> >
> >
> > --
> > Joshua Cranmer
> > Thunderbird and DXR developer
> > Source code archæologist
> >
> >
>
>
>
> --
>  此致
> 礼
> 罗勇刚
> Yours
> sincerely,
> Yonggang Luo
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Mike Hoye

On 2016-01-04 12:31 PM, Bobby Holley wrote:
By "this sort of software" do you mean "Firefox"? Because that's what 
95% of our users experiencing this are going to do absent anything 
clever on our end. We clearly need to determine the scale of the 
problem to determine how much time it's worth investing into this. But 
I think we should assume that an affected user is a lost user in this case.
Is consumer-grade home networking gear on our radar here?  Many, many 
home APs will self-generate SHA-1 certificates on their first boot after 
a reset.



- mhoye
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
On Tue, Jan 5, 2016 at 1:33 AM, Bobby Holley  wrote:

> On Mon, Jan 4, 2016 at 7:57 AM, 罗勇刚(Yonggang Luo) 
> wrote:
>
>> On Mon, Jan 4, 2016 at 11:37 PM, Joshua Cranmer  
>> wrote:
>>
>> > On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:
>> >
>> >> 1、I was not trying implement new things in xpcom, our company(Kingsoft)
>> >> are
>> >> maintaining a fork of thunderbird, and at the current time
>> >> We have to re-use existing XPCOM components that already exists in  the
>> >> thunderbid gecko world, beyond pure html
>> >> things, there is too much things we have to re-use(xpcom things), and
>> we
>> >> are facing  performance problems,
>> >> the mork-db and the mime-parse, they are all working in synchronous
>> way,
>> >> so
>> >> I have to figure out a way to calling these components
>> >> in a worker directly, so that they would not cause UI-lag in
>> main-thread.
>> >> That's all the reason why I was trying to re-implement XPConnect with
>> >> js-ctypes. So  that I can calling
>> >> the exist components in the worker. And free the main-thread.
>> >>
>> >
>> > Mork, by design, can't be used off main-thread. So even if you're trying
>> > to subvert it by using JS-ctypes and etc., it's not going to work very
>> > well, let alone the problems you have with trying to maintain a
>> > pseudo-reimplementation of xpconnect.
>> >
>> >> 3、 There is an advantage of XPCOM, WebIDL seems merely for Javascript,
>> but
>> >> XPCOM seems more language-neutral, we could be able to
>> >> use xpcom in Java/Python and other languages, that's looks like a
>> >> advantage
>> >> of XPCOM.
>> >>
>> > XPIDL is effectively a fork of an old version of IDL. Its interfaces are
>> > incapable of cleanly representing union types or array types very well,
>> > something that WebIDL does far better, as WebIDL is partly a fork of a
>> > newer version of IDL. I believe there already exists WebIDL bindings for
>> > C++, JS, and Rust, and extending it to Java or Python would not be a
>> > challenging task. The only complexity is that the WebIDL bindings does
>> not
>> > use a grand-central dispatch mechanism like XPConnect, but that merely
>> > means that adding new bindings requires writing a code generator and
>> > feeding all the interfaces through it instead of implementing several
>> > customized dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have
>> worked
>> > for the past several years.
>>
>> The core issue is WebIDL is not usable in non-gecko code yet, I didn't see
>> any example in thunderbird source tree, but I see lots of XPCOM codes. So
>> I
>> do not have any clue how to using WebIDL in thunderbird world
>>
>
> You would certainly need to blaze this trail. But it's certainly possible
> (given that you're willing to fork mozilla-central), and I guarantee it
> will be more tractable than rewriting XPConnect with JS-CTypes.
>
>
In fact, the only reason to use WebIDL would be to let JavaScript call some
native code, but I have no need to do that: all the new code we are writing
is pure JavaScript. If we wanted to implement something like the IMAP
protocol, we would choose pure JavaScript for it, as jsmime does.
The reason for rewriting XPConnect with JS-ctypes is not to implement new
things with XPCOM, but to re-use existing things.
Besides, since the mork database can only run on the main thread, would it
be possible to run libmime in a worker? libmime is really CPU-intensive;
I've seen it occupy 100% of a CPU. If it could be moved off the main
thread, that would be a great performance improvement.

>
>> >
>> >
>> > --
>> > Joshua Cranmer
>> > Thunderbird and DXR developer
>> > Source code archæologist
>> >
>> >
>>
>>
>>
>> --
>>  此致
>> 礼
>> 罗勇刚
>> Yours
>> sincerely,
>> Yonggang Luo
>>
>
>


-- 
 此致
礼
罗勇刚
Yours
sincerely,
Yonggang Luo
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 09:47 AM, Eric Rescorla wrote:
> I believe that Chrome will be making a similar change at a similar time
> 
> -Ekr

In contrast, it looks like IE & Edge will continue accepting SHA1 certs
on the web for another full year, until 2017. [1][2]

~Daniel

[1] https://blogs.windows.com/msedgedev/2015/11/04/sha-1-deprecation-update/
[2]
https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/973/102/important-change-announcement---deprecation-of-sha-1
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Mike Hoye

On 2016-01-04 12:48 PM, Eric Rescorla wrote:
On Mon, Jan 4, 2016 at 9:47 AM, Mike Hoye wrote:


On 2016-01-04 12:31 PM, Bobby Holley wrote:

By "this sort of software" do you mean "Firefox"? Because
that's what 95% of our users experiencing this are going to do
absent anything clever on our end. We clearly need to
determine the scale of the problem to determine how much time
it's worth investing into this. But I think we should assume
that an affected user is a lost user in this case.

Is consumer-grade home networking gear on our radar here? Many,
many home APs will self-generate SHA-1 certificates on their first
boot after a reset.


The certificates from those devices aren't valid in any case, because
they do not chain to a trust anchor.
I'm really asking about the user experience. If it's the same "add an 
exception and proceed" process, that's not great, but we're no worse off 
and my concerns are unfounded.



- mhoye
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 2:19 AM, Daniel Holbert wrote:

I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already.


I propose that we minimally should collect telemetry around this 
condition. It should be pretty easy to detect: look for cases where we 
reject very young SHA-1 certs that chain back to a CA we don't ship. 
Once we know the scope of the problem, we can make informed decisions 
about how urgent our subsequent actions should be.
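A sketch of that heuristic (hypothetical names, and a hypothetical 30-day threshold for "very young"; real telemetry would hook into the certificate verifier rather than take pre-digested booleans):

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=30)  # assumed cutoff for a "very young" cert

def record_probable_mitm(sig_is_sha1, not_before, root_is_builtin, now):
    """Count rejections that look like a local SSL proxy: a young SHA-1
    cert chaining back to a root Mozilla does not ship."""
    return sig_is_sha1 and not root_is_builtin and (now - not_before) < MAX_AGE

now = datetime(2016, 1, 4)
print(record_probable_mitm(True, datetime(2016, 1, 3), False, now))  # True
print(record_probable_mitm(True, datetime(2016, 1, 3), True, now))   # False (shipped CA)
```

Requiring a non-built-in root is what separates local interception from genuine CA misissuance, which would chain to a shipped root and deserves different handling.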


It would also be potentially useful to know the cert issuer in these 
cases, since that might allow us to make some guesses about whether the 
failures are caused by malware, well-intentioned but kludgy malware 
detectors, or enterprise gateways. Working out how to do that in a way 
that respects privacy and user agency may be tricky, so I'd propose we 
go for the simple count first.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Yonggang Luo
>
>
>> In fact, the only reason to use WebIDL would be to let JavaScript call
>> some native code, but I have no need to do that: all the new code we are
>> writing is pure JavaScript. If we wanted to implement something like the
>> IMAP protocol, we would choose pure JavaScript for it, as jsmime does.
>> The reason for rewriting XPConnect with JS-ctypes is not to implement
>> new things with XPCOM, but to re-use existing things.
>>
>
> Are you talking about calling existing native code from JS? You can do
> that with WebIDL by writing binding entry points for the classes you want.
>
I am talking about calling existing XPCOM code from JS :( If it were plain
native code, js-ctypes might be enough.
That's why I am attempting something you consider weird and unnecessary.
Indeed, I would rather not do it either, but I have no other options yet.
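For contrast, the "plain native code" case really is one call with a ctypes-style FFI. Python's ctypes is used below as a runnable stand-in for js-ctypes: both can call flat C functions by symbol, and neither understands C++ vtables, nsISupports refcounting, or XPCOM threading rules, which is the part XPConnect supplies:

```python
import ctypes

# Load the symbols of the current process (on POSIX this includes libc)
# and call a flat C function directly -- the same shape of call that
# js-ctypes makes from privileged JS.
libc = ctypes.CDLL(None)
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```

Dispatching through an XPCOM interface, by contrast, requires knowing each method's vtable slot and calling convention, which is why re-deriving XPConnect on top of a raw FFI is so hard.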

>
> If you're talking about impersonating arbitrary XPCOM components with
> JS-implemented code...good luck doing that with anything other than
> XPConnect. ;-)
>
I hope that works.

>
>
>> Besides, since the mork database can only run on the main thread, would
>> it be possible to run libmime in a worker? libmime is really
>> CPU-intensive; I've seen it occupy 100% of a CPU. If it could be moved
>> off the main thread, that would be a great performance improvement.
>>
>>>
 >
 >
 > --
 > Joshua Cranmer
 > Thunderbird and DXR developer
 > Source code archæologist
 >
 >



 --
  此致
 礼
 罗勇刚
 Yours
 sincerely,
 Yonggang Luo

>>>
>>>
>>
>>
>> --
>>  此致
>> 礼
>> 罗勇刚
>> Yours
>> sincerely,
>> Yonggang Luo
>>
>
>


-- 
 此致
礼
罗勇刚
Yours
sincerely,
Yonggang Luo
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:21 AM, Adam Roach wrote:
> I propose that we minimally should collect telemetry around this
> condition. It should be pretty easy to detect: look for cases where we
> reject very young SHA-1 certs that chain back to a CA we don't ship.
> Once we know the scope of the problem, we can make informed decisions
> about how urgent our subsequent actions should be.

I had a similar thought, but I think it's too late for such telemetry to
be effective. The vast majority of users who are affected will have
already stopped using Firefox, or will immediately do so, as soon as
they discover that their webmail, bank, google, facebook, etc. don't work.

(We could have used this sort of telemetry before Jan 1 if we'd foreseen
this potential problem.  I don't blame us for not foreseeing this, though.)

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Daniel Holbert
On 01/04/2016 10:18 AM, Eric Rescorla wrote:
> I believe you are confusing two different things.
> 
> 1. Whether the browser supports SHA-1 certificates at all.
> 2. Whether the browser supports SHA-1 certificates signed after Jan 1 2016
> (The CA/BF Baseline Requirements forbid this, so no publicly valid
> certificate
> should fall into this category).
> 
> It's not clear to me how IE/Edge are behaving with respect to #2.

Sorry, I wasn't clear.

What I was saying was
 * The definitive statements I've found from/about MS about SHA1 on the
web *only* mention your point #1 (and do not mention anything about #2).
 * My one data point, from this affected user, indicates that IE still
works just fine with freshly-minted SHA1 certs.

So, in the absence of statements about #2 (and in the presence of proof
otherwise), I see no reason to think Microsoft is taking action on that
point.

~Daniel
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Josh Matthews

On 2016-01-04 1:21 PM, Adam Roach wrote:

On 1/4/16 2:19 AM, Daniel Holbert wrote:

I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already.


I propose that we minimally should collect telemetry around this
condition. It should be pretty easy to detect: look for cases where we
reject very young SHA-1 certs that chain back to a CA we don't ship.
Once we know the scope of the problem, we can make informed decisions
about how urgent our subsequent actions should be.

It would also be potentially useful to know the cert issuer in these
cases, since that might allow us to make some guesses about whether the
failures are caused by malware, well-intentioned but kludgy malware
detectors, or enterprise gateways. Working out how to do that in a way
that respects privacy and user agency may be tricky, so I'd propose we
go for the simple count first.



Wouldn't the SSL cert failures also prevent submitting the telemetry 
payload to Mozilla's servers?


Cheers,
Josh
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform